| entry_id (stringlengths 33-33) | published (stringlengths 14-14) | title (stringlengths 17-188) | authors (sequence) | primary_category (stringlengths 5-18) | categories (sequence) | text (stringlengths 2-629k) |
|---|---|---|---|---|---|---|
http://arxiv.org/abs/2307.03948v1 | 20230708101129 | Reading Between the Lanes: Text VideoQA on the Road | [
"George Tom",
"Minesh Mathew",
"Sergi Garcia",
"Dimosthenis Karatzas",
"C. V. Jawahar"
] | cs.CV | [
"cs.CV"
] |
Center for Visual Information Technology (CVIT), IIIT Hyderabad, India
{george.tom,minesh.mathew}@research.iiit.ac.in, [email protected]
Computer Vision Center (CVC), UAB, Spain
{sergi.garcia,dimos}@cvc.uab.cat
AllRead Machine Learning Technologies
Reading Between the Lanes: Text VideoQA on the Road
George Tom1 (0009-0002-7343-1680), Minesh Mathew1 (0000-0002-0809-2590), Sergi Garcia-Bordils2,3 (0000-0002-4222-8367), Dimosthenis Karatzas2 (0000-0001-8762-4454), C. V. Jawahar1 (0000-0001-6767-7057)
August 12, 2023
Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness.
Scene text recognition in motion is a challenging problem: textual cues typically appear only for a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time.
To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa
§ INTRODUCTION
In this work, we propose a new dataset for Visual Question Answering (VQA) on driving videos, with a focus on questions that require reading text seen on the roads and understanding road signs. Text and road signs provide important information to the driver or a driver assistance system and help to make informed decisions about their route, including how to reach their destination safely and efficiently. Text on roads can also provide directions, such as turn-by-turn directions or the distance to a destination. Road signs can indicate the location of exits, rest stops, and potential hazards, such as road construction or detours. Reading text and understanding road signs is also important for following traffic laws and regulations. Speed limit signs, yield signs, and stop signs provide important information that drivers must follow to ensure their own safety and the safety of others on the road.
VQA is often dubbed the Turing test for image/video understanding. The early datasets for VQA on images and videos <cit.> largely ignored the need for reading and comprehending text in images and videos, and questions mostly focused on the visual aspects of the given image or video. For example, questions focused on the type, attributes and names of objects, things or people. However, text is ubiquitous in outdoor scenes, which is evident from the fact that nearly 50% of the images in the MS-COCO dataset contain text <cit.>.
Realizing the importance of reading text in understanding visual scenes, two datasets—Scene text VQA <cit.> and Text VQA <cit.> were introduced that focus exclusively on VQA involving scene text in natural images.
Two recent works called NewsVideoQA<cit.>, and M4-ViteVQA<cit.> extend text-based VQA works to videos by proposing VQA tasks that exclusively focus on question-answers that require systems to read the text in the videos.
Similar to these works that focus on text VQA on videos, our work proposes a new dataset where all the questions need to be answered by watching driving videos and reading the text in them. However, in contrast to NewsVideoQA, which contains news videos where question-answer pairs are based on video text (born-digital embedded text) appearing on news tickers and headlines, the text in the videos in our dataset is scene text. Text in road or driving videos is subject to blur, poor contrast, challenging lighting conditions and distortions. Text goes by fast while driving and tends to be heavily occluded. Often, multiple frames need to be combined to reconstruct the full text, or a good frame with readable text needs to be retrieved. These difficulties have led researchers to focus on road-text recognition on its own, and there have been works that focus exclusively on the detection, recognition and tracking of text in road videos <cit.>. On the other hand, M4-ViteVQA contains varied types of videos, such as sports videos, outdoor videos and movie clips, of which a subset are driving videos. In contrast, our dataset is exclusively for VQA on driving videos and contains at least three times more questions than the driving subset of M4-ViteVQA. Additionally, questions in our dataset require both reading road text and understanding road signs, while M4-ViteVQA's focus is purely on text-based VQA.
Specifically, our contributions are the following:
* We introduce the first large-scale dataset for road text and road sign VQA, containing 10K+ questions and 3K+ videos.
* We provide a thorough analysis of the dataset and present detailed statistics of videos, questions and answers. We also establish heuristic baselines and upper bounds that help to estimate the difficulty of the problem.
* We evaluate an existing popular VQA model and two SoTA VideoQA models on our dataset and demonstrate that these models fail to perform well on the new dataset since they are not designed to read and reason about text and road signs.
§ RELATED WORK
§.§ VideoQA
In video question answering (VideoQA), the goal is to answer a question in the context of a video. Earlier approaches to VideoQA use LSTMs to encode the questions and videos<cit.>.
Several datasets have been created in recent years to assist research in the field of video question answering (VideoQA). Large datasets such as MSRVTT-QA<cit.> contain synthetically generated questions and answers, where the questions require only an understanding of the visual scenes. MOVIE-QA<cit.> and TVQA<cit.> are based on scenes in movies and TV shows. Castro et al.<cit.> introduced a dataset with videos from the outside world for video understanding through VideoQA and Video Evidence Selection for interpretability. MOVIE-QA<cit.>, TVQA<cit.> and HowToVQA69M<cit.>
provide explicit text in the form of subtitles. Multiple-choice datasets<cit.> consist of a pre-defined set of answer options for each question. Compared to open-ended datasets, they can be considered limiting in the context of real-world applications. Synthetically generated datasets<cit.> contain questions that are generated by processing video descriptions, narrations and template questions. MSRVTT-QA<cit.> exploits video descriptions for QA creation. HowToVQA69M<cit.> uses cross-modal supervision and language models to generate question-answer pairs from narrated videos, whereas ActivityNetQA<cit.> uses template questions to generate the QA pairs. Xu et al. introduced the SUTD-TrafficQA<cit.> dataset and the Eclipse model for testing systems' ability to reason over complex traffic scenarios. The SUTD-TrafficQA<cit.> dataset contains multiple-choice questions based on different traffic events. RoadTextVQA is an open-ended dataset that deals with questions related to the textual information found in road videos or the signs posted along roads. Recent studies<cit.> on pretraining transformers on other vision and language tasks have shown excellent results for the VideoQA task. Lei et al.<cit.> uncovered the bias present in many video question-answering datasets, which require information from only a single frame to answer, and introduced new tasks aimed at training models to answer questions that necessitate the use of temporal information.
§.§ VideoQA involving video text
NewsVideoQA<cit.> and M4-ViteVQA<cit.> are two recently introduced datasets that include videos with embedded born-digital text and scene text, respectively. Both datasets require an understanding of the text in videos to answer the questions.
Embedded text, sometimes called video text in news videos, is
often displayed with good contrast and in an easy-to-read style.
Scene text in the RoadTextVQA dataset can be challenging to read due to factors such as occlusion, blur, and perspective distortion. M4-ViteVQA contains videos from different domains, a few of them being shopping, driving, sports, movies and vlogs. RoadTextVQA is more than three times the size of the driving subset of M4-ViteVQA. Additionally, a subset of questions in RoadTextVQA also requires domain knowledge to answer questions related to road signs. A few recent works<cit.> on vision-and-language transformers have been shown to work well on text-based VQA tasks. Kil et al.<cit.> introduced PreSTU, a pretraining method that improves text recognition and connects the recognized text with the rest of the image. GIT (GenerativeImage2Text)<cit.> is a transformer-based model for vision and language tasks with a simple architecture that does not depend on external OCR or object detectors.
§.§ Scene Text VQA
Our work, which focuses on VQA requiring text comprehension within videos, shares similarities with other studies dealing with text in natural images, commonly known as Scene Text VQA. The ST-VQA<cit.> and TextVQA<cit.> datasets were the first to incorporate questions requiring understanding textual information from natural images. LoRRa<cit.> and M4C<cit.> utilized pointer networks<cit.> that generate answers from a fixed vocabulary and OCR tokens. In addition, M4C used a multimodal transformer<cit.> to integrate different modalities. TAP<cit.> employed a similar architecture to M4C and incorporated a pretraining task based on scene text, improving the model's alignment among the three modalities. Another study, LaTr<cit.>, focused on pretraining on text and layout information from document images and found that incorporating layout information from scanned documents improves the model's understanding of scene text.
§ ROADTEXTVQA DATASET
This section looks at the data collection and annotation procedure, data analysis, and statistics.
§.§ Data Collection
The videos used in the dataset are taken from the RoadText-3K<cit.> dataset and YouTube. The RoadText-3K dataset includes 3,000 ten-second road videos that are well-suited for annotation because they have a considerable quantity of text.
The RoadText-3K dataset includes videos recorded in the USA, Europe, and India and features text in various languages such as English, Spanish, Catalan, Telugu and Hindi. Each video contains an average of 31 text tracks. However, the European subset is excluded from the annotation process for RoadTextVQA as it is dominated by text in Spanish/Catalan, and RoadTextVQA is designed specifically for English road text.
In addition to the videos from RoadText-3K, additional dashcam videos were sourced from the YouTube channel J Utah[ <https://www.youtube.com/@jutah>]. 252 videos from the USA and UK were selected, and clips with a substantial amount of text were further selected by running a text detector over the video frames. We used EasyOCR<cit.>, a free and open-source text detector popular for scene text detection, for this purpose. The RoadText-3K videos have a resolution of 1280x720 at a frame rate of 30 frames per second; to keep the data consistent, the YouTube clips were downsampled to the same resolution and frame rate.
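As an illustration of this filtering step, the sketch below counts EasyOCR detections over sampled frames and keeps clips whose text count exceeds a threshold. The function name, sampling rate and threshold are our own assumptions, not the exact pipeline used to build the dataset.

```python
# Minimal sketch: filter clips by the amount of text EasyOCR detects in sampled frames.
import cv2
import easyocr

reader = easyocr.Reader(["en"], gpu=False)  # English-only reader

def count_text_detections(video_path, sample_every=30):
    """Count EasyOCR detections over frames sampled roughly once per second (30 fps video)."""
    cap = cv2.VideoCapture(video_path)
    detections, frame_idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            detections += len(reader.readtext(frame))  # list of (bbox, text, confidence)
        frame_idx += 1
    cap.release()
    return detections

# Keep only clips with a substantial amount of detected text (illustrative threshold).
# selected_clips = [v for v in candidate_clips if count_text_detections(v) > 50]
```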
Individuals who are proficient in the English language were hired to create the question-answer pairs. To ensure the quality of the applicants, an initial training session was conducted, followed by a filtering mechanism in the form of a comprehensive quiz. The quiz was designed to ensure that the question-answer pairs were created by individuals who had a solid grasp of the English language and a good understanding of the task, thereby enabling us to maintain a high standard of quality in the annotations.
The annotation process involved two stages, and a specifically designed web-based annotation tool was used. In the initial stage, annotators add question-answer-timestamp triads for the videos shown to them.
All the questions have to be based on either some text present in the video or on a road sign. In cases where a question could have multiple answers in a non-ambiguous way, the annotators were given the option to enter several answers. The timestamp is an additional data point that marks the most suitable moment in the video at which the question becomes answerable. The annotators were instructed to limit the number of questions to at most ten per video and to avoid asking any questions related to vehicle license plate numbers. If no questions could be asked about a video, the annotators were given the option to reject it.
In the verification stage, the video and the questions are shown, and the annotators add the answers and the timestamps. We ensured that verification was done by an annotator different from the one who annotated the video in the first stage.
If a question is incorrect or does not follow the annotation guidelines, it is flagged and rejected. If a question has common answers between the annotation stage and the verification stage, that question is considered valid, and all the common answers are considered valid answers to the question.
In the verification stage, additional data regarding the question-answer pairs are also collected. The questions are tagged into two distinct classes. The first classification is based on the type of question: text-based or traffic-sign-based.
The second classification captures whether the answer to a question, i.e., the text that makes up the answer, is present in the video or not.
§.§ Data Statistics and Analysis
The RoadTextVQA dataset contains 3,222 videos and 10,500 question-answer pairs.
Among the 3,222 videos, 1,532 videos are taken from the RoadText-3K dataset and the rest are from YouTube.
The data is randomly split into 2,557 videos and 8,393 questions in the train set, 329 videos and 1,052 questions in the test, and 336 videos and 1,055 questions in the validation set.
The videos for the test and validation sets were randomly chosen from the RoadText-3K split, as it has ground truth annotations for text tracking. Methods that use OCR data can take advantage of the accurate annotations provided by RoadText-3K.
We present statistics related to the questions in RoadTextVQA in <ref> and <ref>. <ref> shows the most frequent questions and their frequencies. “What is written on the road with white block letters?" is the most recurrent, followed by questions regarding the speed limits on the roads.
<ref> provides a comprehensive overview of the question distribution in RoadTextVQA, with the majority of the questions being centred around details of shops located along the road. <ref> depicts the word count in the questions and answers, respectively. The average number of words in the questions in RoadTextVQA is 10.8, while the average number in the answers is 1.45. The average number of words in questions is much higher when compared to other text-based VideoQA datasets, as seen in <ref>. The percentage of unique questions stands at 86.6%, while the percentage of unique answers is 40.7%. <ref> shows the top 30 answers and the number of occurrences. <ref>, in the form of a word cloud, illustrates the most frequently occurring answers and OCR tokens. The most popular answers are “right", “left", “yes", and “no". The most prevalent OCR tokens in the videos are “stop", “only", and “one way".
The distribution of the videos in the dataset based on the geographic location where it was captured is shown in <ref>.
More than two-thirds of the videos in the dataset are captured from roads in the USA.
The majority of questions are grounded on text seen in the video (61.8%), and the rest are based on road signs. Road signs can also contain text, such as speed limit signs or interchange exit signs. 68% of questions have answers that can be found within the text present in the video, while the remaining 32% of questions require an answer that is not a text present in the video.
§ BASELINES
This section presents details of the baselines we evaluate on the proposed RoadTextVQA dataset.
§.§ Heuristic Baselines and Upper Bounds
We evaluate several heuristic baselines and upper bounds on the dataset. These heuristics and upper bounds are similar to those used in other VQA benchmarks, such as TextVQA<cit.> and DocVQA<cit.>. The following heuristic baselines are evaluated:
(i) Random Answer: performance when answers to questions are randomly selected from the train split.
(ii) Random OCR token: performance when a random OCR token from the video is picked as the answer.
(iii) Majority Answer: performance when the most common answer in the train split is considered as the answer for all the questions.
The following upper bounds are evaluated (a sketch of how both the heuristics and the upper bounds can be computed is given after the list):
(i) Vocab UB: the upper bound on predicting the correct answer if it is present in the vocabulary of all the answers from the train split.
(ii) OCR UB: the upper bound on performance if the answer corresponds to an OCR token present in the video.
(iii) Vocab UB + OCR UB: this metric reflects the proportion of questions for which answers can be found in the vocabulary or the OCR transcriptions of the video.
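The sketch below is one way these heuristics and upper bounds could be computed; the data layout and answer normalisation are illustrative assumptions rather than the exact evaluation code.

```python
# Minimal sketch of the heuristic baselines and upper bounds, assuming each sample
# provides its ground-truth answers and the OCR tokens of the corresponding video.
import random
from collections import Counter

def normalize(s):
    return s.strip().lower()

def heuristics_and_upper_bounds(train_answers, samples):
    """train_answers: answer strings from the train split.
    samples: dicts with keys "answers" (list[str]) and "ocr_tokens" (list[str])."""
    vocab = {normalize(a) for a in train_answers}
    majority = Counter(normalize(a) for a in train_answers).most_common(1)[0][0]
    stats = Counter()
    for s in samples:
        answers = {normalize(a) for a in s["answers"]}
        ocr = [normalize(t) for t in s["ocr_tokens"]]
        stats["random_answer"] += normalize(random.choice(train_answers)) in answers
        stats["random_ocr_token"] += bool(ocr) and (random.choice(ocr) in answers)
        stats["majority_answer"] += majority in answers
        stats["vocab_ub"] += bool(answers & vocab)
        stats["ocr_ub"] += bool(answers & set(ocr))
        stats["vocab_plus_ocr_ub"] += bool(answers & (vocab | set(ocr)))
    return {k: 100.0 * v / len(samples) for k, v in stats.items()}
```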
§.§ M4C
The M4C<cit.> model uses a transformer-based architecture to integrate representations of the image, question and OCR tokens. The question is embedded using a pretrained BERT<cit.> model. Faster R-CNN<cit.> visual features are extracted for the objects detected and the OCR tokens in the image.
The representation of an OCR token is formed from the FastText<cit.> vector, PHOC<cit.> vector, bounding box location feature, and Faster R-CNN feature of the token. A multi-head self-attention mechanism in transformers is employed, enabling all entities to interact with each other and model inter- and intra-modal relationships uniformly using the same set of transformer parameters. During answer prediction, the M4C model employs an iterative, auto-regressive decoder that predicts one word at a time. The decoder can use either a fixed vocabulary or the OCR tokens detected in the image to generate the answer.
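For concreteness, a sketch of how such a rich OCR token representation could be assembled is shown below. The feature dimensions (300-d FastText, 604-d PHOC, 2048-d Faster R-CNN, 4-d boxes) and the exact projection and normalisation scheme are illustrative assumptions, not the official M4C implementation.

```python
# Minimal sketch of an M4C-style OCR token embedding from precomputed features.
import torch
import torch.nn as nn

class OCRTokenEmbedding(nn.Module):
    def __init__(self, d_model=768):
        super().__init__()
        # FastText (300) + PHOC (604) + Faster R-CNN appearance features (2048)
        self.feat_proj = nn.Linear(300 + 604 + 2048, d_model)
        self.bbox_proj = nn.Linear(4, d_model)  # [x1, y1, x2, y2] normalised to [0, 1]
        self.feat_norm = nn.LayerNorm(d_model)
        self.bbox_norm = nn.LayerNorm(d_model)

    def forward(self, fasttext, phoc, frcnn, boxes):
        feats = torch.cat([fasttext, phoc, frcnn], dim=-1)
        return self.feat_norm(self.feat_proj(feats)) + self.bbox_norm(self.bbox_proj(boxes))
```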
§.§ SINGULARITY
The architecture of SINGULARITY<cit.> is made up of three major components: a vision encoder using ViT<cit.>, a language encoder utilizing BERT<cit.>, and a multi-modal encoder using a transformer encoder<cit.>. The multi-modal encoder uses cross-attention to collect information from visual representations using text as the key. Each video or image is paired with its corresponding caption during the pretraining phase, and the model is trained to align the vision and text representations using three losses: (i) Vision-Text Contrastive, a contrastive loss that aligns the representations of the vision and language encoders; (ii) Masked Language Modeling<cit.>, in which masked tokens are predicted; and (iii) Vision-Text Matching, in which the multi-modal encoder predicts the matching score of a vision-text pair.
We use the SINGULARITY-temporal model, which is pretrained on 17M vision caption pairs<cit.>.
The SINGULARITY-temporal model contains a two-layer temporal encoder that feeds its outputs into the multi-modal encoder. SINGULARITY-temporal makes use of two new datasets, SSv2-Template Retrieval and SSv2-Label Retrieval, created from the action recognition dataset Something-Something v2 (SSv2)<cit.>. The pretraining is a video retrieval task using text queries. For open-ended QA tasks, an additional multi-modal decoder, initialised from the pretrained multi-modal encoder, takes the multi-modal encoder's output as input and generates answer text with [CLS] as the start token.
§.§ GenerativeImage2Text
GIT (GenerativeImage2Text)<cit.> is a transformer-based model aimed at unifying all vision-language tasks using a simple architecture pretrained on 0.8 billion image-text pairs. GIT consists of an image encoder and a text decoder. The image encoder is a Swin-like<cit.> transformer based on a contrastively pretrained model, which eliminates the need for external object detectors or OCR. For the text decoder, GIT uses a transformer with self-attention and feed-forward layers to generate the text output. The visual features and the text embeddings are concatenated and used as inputs to the decoder. With large-scale pretraining, GIT gradually learns how to read scene text and hence achieves SoTA performance on scene-text-related VQA tasks such as ST-VQA. For video question answering, GIT selects multiple frames from the video and embeds each frame separately with a learnable temporal embedding (initialized as zeros); the frame features are concatenated and used in the same way as an image representation. The question and the correct answer are combined and treated as a special caption, and the language-model loss is computed only on the answer and the [EOS] token.
§ EXPERIMENTS AND RESULTS
This section covers the evaluation metrics, the experimental setup, and the experiment results.
§.§ Experimental Setup
Evaluation metrics. We use two evaluation metrics to evaluate the model's performance: Average Normalized Levenshtein Similarity (ANLS)<cit.> and Accuracy (Acc. (%)). The Accuracy metric calculates the percentage of questions where the predicted answer exactly matches any of the target answers.
ANLS, on the other hand, does not award a zero score for all predictions that do not match the ground truth string exactly.
The score was originally proposed to act softly on cases where the predicted answer differs only slightly from the ground truth.
ANLS measures a similarity (based on the Levenshtein distance) between the prediction and the ground truth and normalizes it as a score in the range [0,1]. If the score is less than 0.5, the final ANLS score for the prediction is set to zero.
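As an illustration, a minimal implementation of ANLS for a single prediction could look as follows; it assumes the python-Levenshtein package for the edit distance and is not the official evaluation script.

```python
# Minimal sketch of ANLS for one prediction against a list of target answers.
import Levenshtein  # pip install python-Levenshtein

def anls(prediction, ground_truths, threshold=0.5):
    best = 0.0
    for gt in ground_truths:
        pred, ref = prediction.strip().lower(), gt.strip().lower()
        if len(pred) == 0 and len(ref) == 0:
            sim = 1.0
        else:
            sim = 1.0 - Levenshtein.distance(pred, ref) / max(len(pred), len(ref))
        best = max(best, sim)
    # Scores below the threshold are considered wrong and set to zero.
    return best if best >= threshold else 0.0
```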
OCR transcriptions. The ground truth annotations were utilized for the videos in the RoadText-3K set, while for the remaining videos, the OCR transcriptions were sourced using the Google Cloud Video Intelligence API. Both RoadText-3K ground truth annotations, and the Google API provide text transcriptions at the line level.
We use the line-level text transcriptions as the OCR tokens for the calculation of the OCR upper bounds and OCR-based heuristics given in <ref>. When a text track gets cut off at the frame border or partially occluded by other objects in a video, the Google Cloud Video Intelligence API treats it as a new track, whereas the RoadText-3K annotations ignore partially occluded tracks. This is why, in <ref>, the number of tracks relative to the number of videos is somewhat inflated for the YouTube clips compared to the RoadText-3K clips.
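The sketch below shows one way line-level OCR transcriptions could be requested from the Google Cloud Video Intelligence API; it assumes a recent version of the google-cloud-videointelligence client and configured credentials, and it only illustrates the kind of call involved.

```python
# Minimal sketch: request text detection for a local video file and collect
# the detected line-level text strings.
from google.cloud import videointelligence

def detect_video_text(path):
    client = videointelligence.VideoIntelligenceServiceClient()
    with open(path, "rb") as f:
        operation = client.annotate_video(
            request={
                "features": [videointelligence.Feature.TEXT_DETECTION],
                "input_content": f.read(),
            }
        )
    result = operation.result(timeout=600)  # blocks until the annotation finishes
    annotation = result.annotation_results[0]
    # Each text annotation corresponds to one tracked line of text.
    return [text.text for text in annotation.text_annotations]
```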
Experimental setup for M4C.
The M4C<cit.> model is trained using the official implementation, and the training parameters and implementation details remain consistent with those used in the original paper. We used a fixed vocabulary of size 3926 generated from the train set.
The training data consists of image question-answer pairs where the image selected for training is the one on which the questions are based, specifically the timestamp frame. After training, the model is evaluated using two approaches. Firstly, it is tested on the timestamp QA pairs of the test set, and secondly, it is evaluated on the video level by sampling ten frames from the respective video for each QA pair and obtaining the model prediction for every frame individually. The final answer is determined by taking the most common answer from the ten individual frame predictions.
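The video-level evaluation can be summarised by the following sketch, which samples ten evenly spaced frames, runs the frame-level model on each, and keeps the most common prediction; predict_fn stands in for an M4C forward pass and is an assumption for illustration.

```python
# Minimal sketch of video-level evaluation by majority vote over frame predictions.
from collections import Counter
import numpy as np

def video_level_answer(frames, question, predict_fn, n_frames=10):
    """predict_fn(frame, question) -> answer string (e.g. a single-image M4C prediction)."""
    idx = np.linspace(0, len(frames) - 1, n_frames).round().astype(int)
    predictions = [predict_fn(frames[i], question) for i in idx]
    return Counter(predictions).most_common(1)[0][0]  # most frequent answer wins
```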
Experimental setup for SINGULARITY.
We fine-tuned the pretrained SINGULARITY-temporal 17M model on four NVIDIA GeForce RTX 2080 Ti GPUs. The fine-tuning process was run for 20 epochs with a batch size of 16, starting with an initial learning rate of 1e-5 that increases linearly over the first half epoch, followed by cosine decay<cit.> to 1e-6. The other training parameters are the same as in the official implementation. The video frames were resized to 224x224, and a single frame with random resize, crop and flip augmentations was utilised during training, whereas 12 frames were used during testing. Additionally, we fine-tuned the SINGULARITY model that had been pretrained on the MSRVTT-QA<cit.> dataset.
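A schedule of this shape (linear warm-up followed by cosine decay from 1e-5 to 1e-6) could be implemented as sketched below with PyTorch's LambdaLR; the helper name and step bookkeeping are our own, not taken from the official training code.

```python
# Minimal sketch: linear warm-up then cosine decay of the learning rate.
import math
import torch

def make_warmup_cosine_scheduler(optimizer, warmup_steps, total_steps,
                                 lr_max=1e-5, lr_min=1e-6):
    def lr_lambda(step):
        if step < warmup_steps:
            return (step + 1) / warmup_steps                     # linear warm-up to lr_max
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        cosine = 0.5 * (1.0 + math.cos(math.pi * progress))      # decays from 1 to 0
        return (lr_min + (lr_max - lr_min) * cosine) / lr_max    # multiplier of the base lr
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Example usage (assuming `model` and `steps_per_epoch` are defined):
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
# scheduler = make_warmup_cosine_scheduler(optimizer, steps_per_epoch // 2,
#                                          20 * steps_per_epoch)
```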
Experimental setup for GIT.
The training process for GIT was carried out using a single Tesla T4 GPU for 20 epochs with a batch size of 2.
We use an Adam<cit.> optimizer with an initial learning rate starting at 1e-5 and gradually decreasing to 1e-6 through the use of cosine decay.
The GIT model was trained using the official VideoQA configuration used for MSRVTT-QA training. We fine-tuned the pretrained GIT-large model on our dataset, using six frames that were evenly spaced as inputs during both training and testing. In addition, we further fine-tuned the GIT model that was pretrained on the MSRVTT-QA<cit.> dataset.
§.§ Results
Heuristic baselines and upper bound results are presented in <ref>. The heuristic baselines yield very low accuracy, which indicates the absence of any bias due to the repetition of answers.
The Random OCR token heuristic gives close to 2% accuracy, which shows that the videos contain enough text that picking a random OCR token rarely yields the correct answer. The OCR upper bound is 36.6%, which is low compared to the percentage of questions whose answers are present in the video. The low OCR UB can be attributed to how text detection and ground-truth annotation are done. The answer to a question may be split across multiple lines within the video, leading to the representation of the answer as separate tokens in the OCR output, because the OCR annotations were carried out at the line level. From the Vocab + OCR UB result, we can see that more than three-quarters of the answers are present in either the vocabulary or the OCR tokens of the video.
The results for M4C are shown in <ref>. The frame-level results, where we evaluate on the timestamp frame, show an accuracy of 38.20%, and the video-level results, where we evaluate on ten frames, give an accuracy of 28.92%. The results show that answering the questions is still a challenging task, even when we reduce the complexity of the problem by providing the most suitable frame for answering the question and ground-truth OCR tokens.
We show the results after fine-tuning on SINGULARITY and GIT in <ref>. The accuracy of the questions requiring answers to be extracted from the video (AP) is comparatively lower, while the accuracy of the questions where the answer is not present in the video is comparatively higher.
Compared to AP, ANP is less complex because it involves a fixed set of answers, whereas AP requires dynamically extracting the answer from OCR tokens; as a result, the ANP subset has better accuracy than AP.
Additionally, fine-tuning the model that has been pretrained on the MSRVTT-QA dataset shows an improvement in accuracy across all categories (TB, RSB, AP, and ANP).
Fine-tuning GIT results in better performance compared to SINGULARITY. GIT also shows a similar trend when fine-tuned from the MSRVTT-QA-pretrained checkpoint. The "answer is present in the video (AP)" subset shows an improvement of 3.9% in accuracy compared with SINGULARITY, whereas the "answer is not present (ANP)" subset has a gain of 6.3%. M4C tested on a single frame shows better results than the VideoQA models. This can be attributed to the fact that we explicitly provide the model with the OCR tokens and the exact frame on which the question is based. M4C tested on ten frames gives results comparable to GIT.
We show some of the qualitative results in <ref>. As the complexity of the scene and the obscurity of the scene text increase, it becomes more and more difficult for the model to predict the correct answer. VideoQA baselines achieve better results on questions that do not require the extraction of answers from the video.
§ CONCLUSIONS
We introduce RoadTextVQA, a new Video Question Answering dataset where the questions are grounded on the text and road signs present in the road videos. Our findings from the baseline models' performance indicate a need for improvement in existing VideoQA approaches for text-aware multimodal question answering.
Future work can involve augmenting the dataset by incorporating videos obtained from diverse global locales. Currently, there are recurrent questions and answers due to repeating elements in the videos.
Including videos from various locations broadens the diversity of the dataset by providing a more comprehensive range of questions and answers and minimizes biases within the dataset. To the best of our knowledge, there are currently no Visual Question Answering models that explicitly incorporate road signs. Models could integrate road signs as an additional input or pretrain on road sign-description pairs to enhance their ability to respond to questions that require domain knowledge.
We believe this work will encourage researchers to develop better models that incorporate scene text and road signs and are resilient to the challenges posed by driving videos, and that it will drive further research in the area of scene text VideoQA and the development of advanced in-vehicle support systems.
§ ACKNOWLEDGEMENTS
This work has been supported by IHub-Data at IIIT-Hyderabad, and grants PDC2021-121512-I00, and PID2020-116298GB-I00 funded by MCIN/AEI/
10.13039/501100011033 and the European Union NextGenerationEU/PRTR.
|
http://arxiv.org/abs/2307.06085v2 | 20230712111513 | Investigating the visible phase-curve variability of 55 Cnc e | [
"E. A. Meier Valdés",
"B. M. Morris",
"B. -O. Demory",
"A. Brandeker",
"D. Kitzmann",
"W. Benz",
"A. Deline",
"H. -G. Florén",
"S. G. Sousa",
"V. Bourrier",
"V. Singh",
"K. Heng",
"A. Strugarek",
"D. J. Bower",
"N. Jäggi",
"L. Carone",
"M. Lendl",
"K. Jones",
"A. V. Oza",
"O. D. S. Demangeon",
"Y. Alibert",
"R. Alonso",
"G. Anglada",
"J. Asquier",
"T. Bárczy",
"D. Barrado Navascues",
"S. C. C. Barros",
"W. Baumjohann",
"M. Beck",
"T. Beck",
"N. Billot",
"X. Bonfils",
"L. Borsato",
"C. Broeg",
"J. Cabrera",
"S. Charnoz",
"A. Collier Cameron",
"Sz. Csizmadia",
"P. E. Cubillos",
"M. B. Davies",
"M. Deleuil",
"L. Delrez",
"D. Ehrenreich",
"A. Erikson",
"A. Fortier",
"L. Fossati",
"M. Fridlund",
"D. Gandolfi",
"M. Gillon",
"M. Güdel",
"M. N. Günther",
"S. Hoyer",
"K. G. Isaak",
"L. L. Kiss",
"J. Laskar",
"A. Lecavelier des Etangs",
"C. Lovis",
"D. Magrin",
"P. F. L. Maxted",
"C. Mordasini",
"V. Nascimbeni",
"G. Olofsson",
"R. Ottensamer",
"I. Pagano",
"E. Pallé",
"G. Peter",
"G. Piotto",
"D. Pollacco",
"D. Queloz",
"R. Ragazzoni",
"N. Rando",
"H. Rauer",
"I. Ribas",
"N. C. Santos",
"M. Sarajlic",
"G. Scandariato",
"D. Ségransan",
"D. Sicilia",
"A. E. Simon",
"A. M. S. Smith",
"M. Steller",
"Gy. M. Szabó",
"N. Thomas",
"S. Udry",
"B. Ulmer",
"V. Van Grootel",
"J. Venturini",
"N. A. Walton",
"T. G. Wilson",
"D. Wolter"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Center for Space and Habitability, University of Bern, Gesellschaftsstrasse 6, 3012 Bern, Switzerland Space Telescope Science Institute, Baltimore, MD 21218, USA Physikalisches Institut, University of Bern, Sidlerstrasse 5, 3012 Bern, Switzerland Department of Astronomy, Stockholm University, AlbaNova University Center, 10691 Stockholm, Sweden Observatoire Astronomique de l'Université de Genève, Chemin Pegasi 51, CH-1290 Versoix, Switzerland Instituto de Astrofisica e Ciencias do Espaco, Universidade do Porto, CAUP, Rua das Estrelas, 4150-762 Porto, Portugal INAF, Osservatorio Astrofisico di Catania, Via S. Sofia 78, 95123 Catania, Italy Ludwig Maximilian University, University Observatory Munich, Scheinerstrasse 1, Munich D-81679, Germany University of Warwick, Department of Physics, Astronomy & Astrophysics Group, Coventry CV4 7AL, United Kingdom University of Bern, ARTORG Center for Biomedical Engineering Research, Murtenstrasse 50, CH-3008, Bern, Switzerland Université Paris-Saclay, Université Paris Cité, CEA, CNRS, AIM, 91191, Gif-sur-Yvette, France Space Research Institute, Austrian Academy of Sciences, Schmiedlstrasse 6, A-8042 Graz, Austria Jet Propulsion Laboratory, California Institute of Technology, Pasadena, USA Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre, 4169-007 Porto, Portugal Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife, Spain Departamento de Astrofisica, Universidad de La Laguna, 38206 La Laguna, Tenerife, Spain Institut de Ciencies de l'Espai (ICE, CSIC), Campus UAB, Can Magrans s/n, 08193 Bellaterra, Spain Institut d'Estudis Espacials de Catalunya (IEEC), 08034 Barcelona, Spain European Space Agency (ESA), European Space Research and Technology Centre (ESTEC), Keplerlaan 1, 2201 AZ Noordwijk, The Netherlands Admatis, 5. Kandó Kálmán Street, 3534 Miskolc, Hungary Depto. 
de Astrofisica, Centro de Astrobiologia (CSIC-INTA), ESAC campus, 28692 Villanueva de la Cañada (Madrid), Spain Université Grenoble Alpes, CNRS, IPAG, 38000 Grenoble, France INAF, Osservatorio Astronomico di Padova, Vicolo dell'Osservatorio 5, 35122 Padova, Italy Institute of Planetary Research, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany Université de Paris, Institut de physique du globe de Paris, CNRS, F-75005 Paris, France Centre for Exoplanet Science, SUPA School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews KY16 9SS, UK INAF, Osservatorio Astrofisico di Torino, Via Osservatorio, 20, I-10025 Pino Torinese To, Italy Centre for Mathematical Sciences, Lund University, Box 118, 221 00 Lund, Sweden Aix Marseille Univ, CNRS, CNES, LAM, 38 rue Frédéric Joliot-Curie, 13388 Marseille, France Astrobiology Research Unit, Université de Liège, Allée du 6 Août 19C, B-4000 Liège, Belgium Space sciences, Technologies and Astrophysics Research (STAR) Institute, Université de Liège, Allée du 6 Août 19C, 4000 Liège, Belgium Centre Vie dans l’Univers, Faculté des sciences, Universit'e de Genève, Quai Ernest-Ansermet 30, CH-1211 Genève 4, Switzerland Leiden Observatory, University of Leiden, PO Box 9513, 2300 RA Leiden, The Netherlands Department of Space, Earth and Environment, Chalmers University of Technology, Onsala Space Observatory, 439 92 Onsala, Sweden Dipartimento di Fisica, Universita degli Studi di Torino, via Pietro Giuria 1, I-10125, Torino, Italy Department of Astrophysics, University of Vienna, Türkenschanzstrasse 17, 1180 Vienna, Austria Konkoly Observatory, Research Centre for Astronomy and Earth Sciences, 1121 Budapest, Konkoly Thege Miklós út 15-17, Hungary ELTE Eötvös Loránd University, Institute of Physics, Pázmány Péter sétány 1/A, 1117 Budapest, Hungary IMCCE, UMR8028 CNRS, Observatoire de Paris, PSL Univ., Sorbonne Univ., 77 av. Denfert-Rochereau, 75014 Paris, France Institut d'astrophysique de Paris, UMR7095 CNRS, Université Pierre & Marie Curie, 98bis blvd. Arago, 75014 Paris, France Astrophysics Group, Keele University, Staffordshire, ST5 5BG, United Kingdom Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstrasse 2, 12489 Berlin, Germany Dipartimento di Fisica e Astronomia "Galileo Galilei", Universita degli Studi di Padova, Vicolo dell'Osservatorio 3, 35122 Padova, Italy Department of Physics, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, United Kingdom ETH Zurich, Department of Physics, Wolfgang-Pauli-Strasse 2, CH-8093 Zurich, Switzerland Cavendish Laboratory, JJ Thomson Avenue, Cambridge CB3 0HE, UK Zentrum für Astronomie und Astrophysik, Technische Universität Berlin, Hardenbergstr. 36, D-10623 Berlin, Germany Institut für Geologische Wissenschaften, Freie Universität Berlin, 12249 Berlin, Germany ELTE Eötvös Loránd University, Gothard Astrophysical Observatory, 9700 Szombathely, Szent Imre h. u. 112, Hungary MTA-ELTE Exoplanet Research Group, 9700 Szombathely, Szent Imre h. u. 112, Hungary Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge, CB3 0HA, United Kingdom
55 Cnc e is an ultra-short period super-Earth transiting a Sun-like star. Previous observations in the optical range detected a time-variable flux modulation that is phased with the planetary orbital period, whose amplitude is too large to be explained by reflected light and thermal emission alone.
The goal of the study is to investigate the origin of the variability and timescale of the phase-curve modulation in 55 Cnc e. To this end, we used the CHaracterising ExOPlanet Satellite (CHEOPS), whose exquisite photometric precision provides an opportunity to characterise minute changes in the phase curve from one orbit to the next.
CHEOPS observed 29 individual visits of 55 Cnc e between March 2020 and February 2022. Based on these observations, we investigated the different processes that could be at the origin of the observed modulation. In particular, we built a toy model to assess whether a circumstellar torus of dust driven by radiation pressure and gravity might match the observed flux variability timescale.
We find that the phase-curve amplitude and peak offset of 55 Cnc e do vary between visits.
The sublimation timescales of selected dust species reveal that silicates expected in an Earth-like mantle would not survive long enough to explain the observed phase-curve modulation. We find that silicon carbide, quartz, and graphite are plausible candidates for the circumstellar torus composition because their sublimation timescales are long.
The extensive CHEOPS observations confirm that the phase-curve amplitude and offset vary in time.
We find that dust could provide the grey opacity source required to match the observations.
However, the data at hand do not provide evidence that circumstellar material with a variable grain mass per unit area causes the observed variability.
Future observations with the James Webb Space Telescope (JWST) promise exciting insights into this iconic super-Earth.
Investigating the visible phase-curve variability of 55 Cnc e
This article uses data from the CHEOPS programme ID CH_PR100006. The raw and detrended photometric time-series data are available in electronic form at the CDS via anonymous ftp to cdsarc.cds.unistra.fr (130.79.128.5) or via <>
E. A. Meier Valdés1 (https://orcid.org/0000-0002-2160-8782),
B. M. Morris2 (https://orcid.org/0000-0003-2528-3409),
B.-O. Demory1,3 (https://orcid.org/0000-0002-9355-5165),
A. Brandeker4 (https://orcid.org/0000-0002-7201-7536),
D. Kitzmann1 (https://orcid.org/0000-0003-4269-3311),
W. Benz3,1 (https://orcid.org/0000-0001-7896-6479),
A. Deline5,
H.-G. Florén4,
S. G. Sousa6 (https://orcid.org/0000-0001-9047-2965),
V. Bourrier5 (https://orcid.org/0000-0002-9148-034X),
V. Singh7 (https://orcid.org/0000-0002-7485-6309),
K. Heng8,9,10,
A. Strugarek11 (https://orcid.org/0000-0002-9630-6463),
D. J. Bower1 (https://orcid.org/0000-0002-0673-4860),
N. Jäggi3 (https://orcid.org/0000-0002-2740-7965),
L. Carone12 (https://orcid.org/0000-0001-9355-3752),
M. Lendl5 (https://orcid.org/0000-0001-9699-1459),
K. Jones1,
A. V. Oza13 (https://orcid.org/0000-0002-1655-0715),
O. D. S. Demangeon6,14 (https://orcid.org/0000-0001-7918-0355),
Y. Alibert3 (https://orcid.org/0000-0002-4644-8818),
R. Alonso15,16 (https://orcid.org/0000-0001-8462-8126),
G. Anglada17,18 (https://orcid.org/0000-0002-3645-5977),
J. Asquier19,
T. Bárczy20 (https://orcid.org/0000-0002-7822-4413),
D. Barrado Navascues21 (https://orcid.org/0000-0002-5971-9242),
S. C. C. Barros6,14 (https://orcid.org/0000-0003-2434-3625),
W. Baumjohann12 (https://orcid.org/0000-0001-6271-0110),
M. Beck5 (https://orcid.org/0000-0003-3926-0275),
T. Beck3,
N. Billot5 (https://orcid.org/0000-0003-3429-3836),
X. Bonfils22 (https://orcid.org/0000-0001-9003-8894),
L. Borsato23 (https://orcid.org/0000-0003-0066-9268),
C. Broeg3,1 (https://orcid.org/0000-0001-5132-2614),
J. Cabrera24,
S. Charnoz25 (https://orcid.org/0000-0002-7442-491X),
A. Collier Cameron26 (https://orcid.org/0000-0002-8863-7828),
Sz. Csizmadia24 (https://orcid.org/0000-0001-6803-9698),
P. E. Cubillos27,12,
M. B. Davies28 (https://orcid.org/0000-0001-6080-1190),
M. Deleuil29 (https://orcid.org/0000-0001-6036-0225),
L. Delrez30,31 (https://orcid.org/0000-0001-6108-4808),
D. Ehrenreich5,32 (https://orcid.org/0000-0001-9704-5405),
A. Erikson24,
A. Fortier3,1 (https://orcid.org/0000-0001-8450-3374),
L. Fossati12 (https://orcid.org/0000-0003-4426-9530),
M. Fridlund33,34 (https://orcid.org/0000-0002-0855-8426),
D. Gandolfi35 (https://orcid.org/0000-0001-8627-9628),
M. Gillon30 (https://orcid.org/0000-0003-1462-7739),
M. Güdel36,
M. N. Günther19 (https://orcid.org/0000-0002-3164-9086),
S. Hoyer29 (https://orcid.org/0000-0003-3477-2466),
K. G. Isaak19 (https://orcid.org/0000-0001-8585-1717),
L. L. Kiss37,38,
J. Laskar39 (https://orcid.org/0000-0003-2634-789X),
A. Lecavelier des Etangs40 (https://orcid.org/0000-0002-5637-5253),
C. Lovis5 (https://orcid.org/0000-0001-7120-5837),
D. Magrin23 (https://orcid.org/0000-0003-0312-313X),
P. F. L. Maxted41 (https://orcid.org/0000-0003-3794-1317),
C. Mordasini3,1,
V. Nascimbeni23 (https://orcid.org/0000-0001-9770-1214),
G. Olofsson4 (https://orcid.org/0000-0003-3747-7120),
R. Ottensamer36,
I. Pagano7 (https://orcid.org/0000-0001-9573-4928),
E. Pallé15 (https://orcid.org/0000-0003-0987-1593),
G. Peter42 (https://orcid.org/0000-0001-6101-2513),
G. Piotto23,43 (https://orcid.org/0000-0002-9937-6387),
D. Pollacco44,
D. Queloz45,46 (https://orcid.org/0000-0002-3012-0316),
R. Ragazzoni23,43 (https://orcid.org/0000-0002-7697-5555),
N. Rando19,
H. Rauer24,47,48 (https://orcid.org/0000-0002-6510-1828),
I. Ribas17,18 (https://orcid.org/0000-0002-6689-0312),
N. C. Santos6,14 (https://orcid.org/0000-0003-4422-2919),
M. Sarajlic3,
G. Scandariato7 (https://orcid.org/0000-0003-2029-0626),
D. Ségransan5 (https://orcid.org/0000-0003-2355-8034),
D. Sicilia7,
A. E. Simon3 (https://orcid.org/0000-0001-9773-2600),
A. M. S. Smith24 (https://orcid.org/0000-0002-2386-4341),
M. Steller12 (https://orcid.org/0000-0003-2459-6155),
Gy. M. Szabó49,50,
N. Thomas3,
S. Udry5 (https://orcid.org/0000-0001-7576-6236),
B. Ulmer42,
V. Van Grootel31 (https://orcid.org/0000-0003-2144-4316),
J. Venturini5,
N. A. Walton51 (https://orcid.org/0000-0003-3983-8778),
T. G. Wilson26 (https://orcid.org/0000-0001-8749-1962),
D. Wolter24
Received: 2 February 2023 / Accepted: 4 July 2023
§ INTRODUCTION
The super-Earth 55 Cnc e is the only transiting planet among the five planets known to orbit its star. The star is one of the brightest stars known to host planets (V=6). Because of its short orbital period (P=0.74 days), 55 Cnc e is catalogued as an ultra-short-period (USP) planet. Among the population of discovered USP planets, 55 Cnc e is one of the most frequently studied close-in exoplanets. However, the vast number of observations across the entire spectrum, from the ultraviolet (UV) <cit.> to the infrared (IR) <cit.>, has not led to a conclusive understanding of this object.
55 Cnc e was discovered via radial velocity (RV) observations at McDonald Observatory with the Hobby-Eberly Telescope (HET) <cit.> with an RV solution pointing to a 2.808-day period. Later, <cit.> argued that the reported period was an alias, and they computed a true orbital period of 0.74 days. <cit.> and <cit.> independently discovered the planet to be transiting its host star and confirmed the previously predicted period with the Microvariability and Oscillations of Stars (MOST) telescope and the Spitzer space telescope, respectively.
The first photometric observations in the optical revealed that the measured flux at different phases of the planetary orbit could not be explained by thermal emission and reflected light alone <cit.>. An extensive observation campaign between 2011 and 2015 with MOST concluded that the phase modulation and phase offset change over time <cit.>. The occultation was not detected in the MOST dataset. Spitzer observed 55 Cnc e multiple times, revealing a significant variability in the occultation depth between 2012 and 2013 <cit.> that was later confirmed by <cit.>. However, a recent analysis using the Transiting Exoplanet Survey Satellite (TESS) <cit.> observations indicated weak evidence of variability in the occultation depth in the optical range across the sectors <cit.>.
The phase curve of an exoplanet measures the combined light of the star and planet throughout an orbit, exposing different sides of the planet to the observer <cit.>. Between transit and occultation (secondary eclipse), the measured flux varies because a different phase of the planet is observed. Here we focus on tidally locked planets that transit their host star. In the absence of atmospheric dynamics, the flux reaches its lowest point when the planet transits and peaks around the secondary eclipse, during which the measured flux corresponds to the star alone. If the phase-curve peak has a phase offset, this could imply atmospheric winds or dynamics.
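As a simple illustration of the quantities discussed throughout this paper (amplitude and peak offset), a generic single-sinusoid phase-curve model can be written as sketched below; this parameterisation is our own illustrative choice, not necessarily the model fitted in this work.

```python
# Minimal sketch of a single-sinusoid phase-curve model with a peak offset.
import numpy as np

def phase_curve_model(time, t0, period, amplitude, offset_rad, f_star=1.0):
    """Relative flux outside transit/occultation.
    t0: mid-transit time; amplitude: full phase-curve amplitude;
    offset_rad: shift of the flux maximum away from the occultation."""
    phase = 2.0 * np.pi * (time - t0) / period   # 0 at mid-transit, pi at occultation
    return f_star + 0.5 * amplitude * (1.0 - np.cos(phase + offset_rad))
```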
Atmospheric dynamics and weather can produce variability in the shape and amplitude of a phase curve, but all other time-varying sources must be ruled out. There is a precedent of observed phase-curve variability of an exoplanet in addition to 55 Cnc e. <cit.> claimed evidence for variability in the atmosphere of the hot Jupiter HAT-P-7 b, but a reassessment by <cit.> concluded that stellar noise might entirely cause the claimed variability. Stars with convective outer envelopes have granules that vary stochastically in time. Supergranulation is a similar dynamical phenomenon that involves horizontal flows that occur on longer timescales, and it displays larger photometric amplitudes <cit.>. In future work, asteroseismic predictions of the variability amplitude at each frequency as a function of the stellar properties may help extrapolate the stellar phenomena of HAT-P-7 (F6V) to 55 Cnc (G8V).
A thermal map of the planet derived with Spitzer observations in the IR revealed a hot spot that is offset 41 degrees east of the substellar point <cit.>. This phase offset can be explained by a narrow region of volcanic activity or by a circulating atmosphere. Moreover, the night-side brightness temperature is about 1400 K, while the hottest region on the day-side is approximately 1300 K hotter. The high temperature gradient indicates inefficient heat redistribution from the day-side to the night-side. The temperature contrast and the offset hot spot are consistent with an optically thick atmosphere in which atmospheric recirculation mostly occurs on the day-side, or a planet without atmosphere with magma flows at the surface. The Spitzer observations were further analysed by <cit.>, who concluded that the phase curve favours a substantial atmosphere on 55 Cnc e. A recent reanalysis of the Spitzer dataset found a phase offset consistent with zero, with a markedly higher day-side temperature of 3770 K and a gradient of 2700 K to the night-side temperature <cit.>.
The phase curve of another USP planet was obtained with Spitzer and Kepler. K2-141 b is a small rocky planet discovered by Kepler, orbiting its host star every 6.7 hours. The observations in the IR and optical are consistent with thermal emission and reflected light. The phase offset of this exoplanet is negligible <cit.>.
Some effort has been expended to identify atmospheric species on 55 Cnc e. So far, there is no evidence of H Ly α absorption <cit.>, He <cit.>, H_2O, TiO <cit.>, CO, CO_2, HCN, NH_3, or C_2H_2 <cit.>. <cit.> found hints of Ca_II H&K and Na-D, but did not claim a detection due to the low significance of the signal and variable Ca_II. A recent survey <cit.> of a single transit reported no detection of absorption by O, Si, Al, Na, Mg, K, H, P, F, Sr, S, C, Cl, V, or Cr, nor by Fe, Ca, Ti, Mn, Ba, Zr and their singly ionized forms. This extensive list of non-detections strengthens the hypothesis of a heavyweight atmosphere, if any is present at all.
The process that causes the puzzling observations remains unknown. Temporal variability in UV transit observations suggests star-planet interactions (SPIs) as the possible origin <cit.>. Because of the short orbital distance, the planet and its host star might be magnetically connected <cit.>. However, more recent work showed that the energy budget seems too low to cause the measured signal <cit.>. Other hypotheses include active volcanism or an inhomogeneous circumstellar dust torus.
In this paper, we present the results of an extensive campaign with CHEOPS to observe the phase curve of 55 Cnc e. First, we present the observations, the detrending of the systematics, and the phase-curve model fit to the data in Sect. <ref>. Then we present the results for the phase-curve amplitude and phase offset in Sect. <ref>. Based on the results, a discussion follows in Sect. <ref>. We conclude in Sect. <ref> and highlight future projects that explore this fascinating system.
§ METHODS
§.§ CHEOPS
The CHaracterising ExOPlanet Satellite (CHEOPS) <cit.> is an on-axis Ritchey-Chrétien telescope with a primary mirror with a diameter of 320 mm. CHEOPS is designed to operate nadir-locked in a Sun-synchronous orbit at an altitude of 700 km above the Earth's surface. The exposures have a distinctive three-pointed point-spread function (PSF) due to the partial obscuration of the primary mirror by the secondary mirror and its three supports and because the telescope is intentionally defocused <cit.>. The photometer operates in the visible and near-IR range (0.33 μm to 1.1 μm) using a back-illuminated charge-coupled device (CCD) detector.
CHEOPS performed 29 photometric visits of 55 Cnc between 23 March 2020 and 26 February 2022, each visit covering at least one orbital period of planet e. The duration of each visit ranged between 25 and 40 hours. Each frame had an exposure time of 44.2 s, obtained by stacking 20 individual readouts of 2.2 s each. The observation log is given in Table <ref>. The first visit was analysed by <cit.>. Before stacking, small images (called imagettes) of 30 pixels in radius were extracted that contain the PSF of the target star. For each individual frame (referred to as a subarray), ten imagettes were downlinked. The observations were reduced with the data reduction pipeline (DRP) <cit.>. Complementary to the DRP, (Brandeker et al., in prep.; see also descriptions in , and ) is a photometry-extraction Python package that uses PSF photometry on the 30-pixel imagettes. The results using the subarrays and imagettes are consistent, and we therefore chose to use the subarray dataset in this work for computational efficiency.
To prepare our data, we first discarded all observations flagged by (caused by cosmic rays, contamination from a satellite passing through the field of view, or passage above the South Atlantic Anomaly (SAA)) <cit.>. We normalised the flux by dividing all measurements by their median value and removed all points more than 3σ above or 6σ below the median, based on the median absolute deviation. The observations taken just before and after Earth occultation often exhibit a significant offset of the star position on the CCD, caused by refraction of the light in the upper Earth atmosphere. Therefore, we masked out all measurements with centroids more than 3.5σ away from the median centroid.
Additionally, we removed measurements with a background level more than 4σ above the median. The background level usually increases before or after Earth occultation. We also determined whether light from the nearby star 53 Cnc might affect the flux measurements by leaking into the aperture. While 53 Cnc falls within the full frame of the CCD of 1024×1024 pixels, the 200-pixel aperture is not affected by leaking flux. The DRP report included in every CHEOPS observation file estimates a mean flux due to nearby stars of approximately 0.001% relative to the flux of the target, which thus contributes negligibly to the light curves. The orbital configuration of CHEOPS means that its field of view (FOV) rotates. Any significant flux contamination would therefore be apparent on the CHEOPS orbit timescale of 100 minutes.
§.§ Detrending basis vectors
The spacecraft introduces several systematic trends into the photometry.
Previous works with CHEOPS datasets have corrected these trends via linear regression <cit.>, Gaussian process <cit.>, or a PSF detrending method <cit.>. The last method consists of determining vectors associated with changes in the PSF shape by conducting a principal component analysis (PCA) on the subarray images and selecting the vectors that contribute most for use in the light-curve model. For this work, we performed a correction via polynomial regression.
Because of the orbital configuration of CHEOPS, the field of view of the telescope rotates once per orbit. It is necessary to detrend the flux of the star against the roll angle to ensure that the rotating field of view does not introduce correlated noise. We implemented the effect of the roll angle as the following Fourier series <cit.>:
\begin{equation}
X_\mathrm{roll\ angle} = \sum_{i=1}^{N} \left[ a_i \cos(i\psi) + b_i \sin(i\psi) \right] ,
\end{equation}
where ψ is the roll angle, and a_i and b_i are the best-fit weights of the roll angle components to the polynomial fit. We limited the series up to fourth order (N=4).
A strong increase or decrease in flux at the beginning of a visit has been observed in many datasets, the so-called "ramp effect". This is presumably caused by the change in pointing orientation from one target to the next and is related to the temperature of the spacecraft recorded by the thermistor readout called ThermFront 2. We added a linear term of this thermistor readout as a basis vector to detrend the flux. For some visits, as detailed in the next paragraph, we deemed it necessary to add a quadratic term of ThermFront 2.
<cit.> performed an exhaustive search through different combinations of basis vectors that are correlated with the flux of 55 Cnc and concluded that the combination of the cosine and sine of the roll angle, a unit vector representing the normalised stellar flux and the ThermFront 2 thermistor readout, are the best set for their study. We also performed a statistical comparison based on the Bayesian information criterion (BIC) <cit.> for different combinations of basis vectors in a first fit with a linear regression. The reason for using the BIC and not another more robust information criterion is computational efficiency because the fit was only preliminary for the selection of the basis vectors. By definition, the BIC gives a larger penalty per parameter for large datasets and thus favours simpler models <cit.>. For our current purpose, it suffices. We considered the same set of basis vectors as <cit.> for all visits. In addition to these vectors, we included the background level, quadratic terms of time and thermistor readout, and harmonics of the cosine and sine of the roll angle of Eq. (<ref>) when the BIC favoured the selection (ΔBIC>10). The set of basis vectors used in each visit is listed in Table <ref> in Appendix <ref>. We constructed an independent design matrix X for each visit with the selected basis vectors. The basis vector coefficients β were obtained via polynomial regression and were stored for a later stage. The uncertainties σ_β on the coefficients were obtained from the covariance matrix.
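For illustration, the basis-vector construction and BIC comparison described above can be sketched as follows in Python. The function and variable names (design_matrix, fit_and_bic, roll, therm, bg) are placeholders rather than the actual pipeline, and the candidate-vector list is simplified.
\begin{verbatim}
import numpy as np

def design_matrix(t, roll, therm, bg, n_harm=4, quad_therm=False):
    """Assemble candidate basis vectors into a design matrix X.
    t: time, roll: roll angle [rad], therm: ThermFront 2 readout,
    bg: background level."""
    cols = [np.ones_like(t)]                       # constant flux level
    for i in range(1, n_harm + 1):                 # roll-angle Fourier series
        cols += [np.cos(i * roll), np.sin(i * roll)]
    cols += [therm, t, bg]
    if quad_therm:                                 # optional quadratic terms
        cols += [therm**2, t**2]
    return np.column_stack(cols)

def fit_and_bic(flux, X):
    """Linear (polynomial) regression of the flux on X and BIC of the fit."""
    beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
    resid = flux - X @ beta
    n, k = X.shape
    sigma2 = np.mean(resid**2)
    loglike = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return beta, k * np.log(n) - 2.0 * loglike

# A term is kept only if it lowers the BIC by more than 10, e.g.
# _, bic0 = fit_and_bic(flux, design_matrix(t, roll, therm, bg))
# _, bic1 = fit_and_bic(flux, design_matrix(t, roll, therm, bg, quad_therm=True))
# keep_quadratic = (bic0 - bic1) > 10
\end{verbatim}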
§.§ Phase-curve model
We modelled the flux variation as a sum of terms,
\begin{equation}
F = \mathrm{Tr} + \mathrm{Occ} + f_P ,
\end{equation}
where Tr is the transit model, Occ is the occultation model, and f_P is the phase-curve model. The transit model was based on <cit.>, implemented in the Python package <cit.>. The occultation function was implemented in as an eclipse model without limb darkening. To perform a statistical model comparison, we used multiple functional forms for f_P. First, we modelled the variation in out-of-transit flux with a simple sinusoidal function with a period matching the orbital period of planet e,
\begin{equation}
f_P = A \cos\!\left(\frac{2\pi}{P}\,t - \phi - \pi\right) ,
\end{equation}
where P is the planetary orbital period, and A and ϕ are the phase-curve amplitude and offset, respectively. We define the orbital phase on [-π , π] , where ϕ = π is at the time of mid-transit and ϕ = 0 is the occultation. A strong sinusoidal modulation of the flux of 55 Cnc phased with the orbital period of planet e was reported by <cit.> and more recently by <cit.>. Further studies also used a sinusoidal function, among other models <cit.>.
We also considered a phase function based on the assumption that the planet is a Lambertian sphere that scatters isotropically or emits thermally, defined as
\begin{equation}
f_P = a\,E \left[ \sin(|\theta|) + (\pi - |\theta|)\cos(|\theta|) \right] ,
\end{equation}
with a the amplitude, E the occultation function, and θ the orbital phase. This phase function has no phase offset by construction.
<cit.> presented a semi-analytical model[For details about the derivation, see Appendix A in <cit.>.] for planets in a circular orbit with asymmetric phase variations. This piecewise-Lambertian sphere is suitable for phase-curve observations in the optical range, constructed with significant reflection or emission between two longitudes. This model adds two parameters corresponding to the longitudes ξ_1 and ξ_2. Local longitudes ranging between these values have a lower reflectivity than other longitudes on the planet. Our last functional form for the phase curve is a flat line outside of transit and occultation, which implies a constant baseline flux. We tried two variations of a constant continuum, one fitting for the occultation depth, and another assuming an occultation depth fixed to zero.
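As an illustration of the functional forms compared in this work, a Python sketch of the sinusoidal and Lambertian phase functions is given below; the piecewise-Lambertian and the transit and occultation models are not reproduced. The argument E stands for the occultation function evaluated at the same times.
\begin{verbatim}
import numpy as np

def sinusoidal_phase(t, P, A, phi):
    """Sinusoid with the orbital period P: A*cos(2*pi*t/P - phi - pi)."""
    return A * np.cos(2.0 * np.pi * t / P - phi - np.pi)

def lambertian_phase(theta, a, E):
    """Lambertian sphere: a*E*[sin|theta| + (pi - |theta|)*cos|theta|],
    with theta the orbital phase in [-pi, pi] and E the occultation
    function (no phase offset by construction)."""
    th = np.abs(theta)
    return a * E * (np.sin(th) + (np.pi - th) * np.cos(th))
\end{verbatim}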
Each CHEOPS visit was analysed individually. Based on previous studies <cit.>, we assumed a circular orbit. We further assumed a quadratic limb-darkening law, where the priors on the coefficients specific to the CHEOPS transmission were retrieved from tables computed by <cit.> using the ATLAS <cit.> and PHOENIX stellar models <cit.>. For 55 Cnc A, we used a PHOENIX stellar model with an effective temperature of 5200 K and surface gravity log(g) = 4.5 <cit.>. In our model, we fitted for the time of mid-transit t_0, the impact parameter b, the quadratic limb-darkening coefficients u_0 and u_1, the planet-to-star radius ratio R_p/R_⋆, the occultation depth δ_occ, the phase-curve parameters, and the best-fit estimators of the basis vectors selected in Sect. <ref> and presented in Table <ref>. To deal with the systematics, we implemented a hierarchical model (also known as a multilevel model) <cit.>, in which the parameters themselves are described by parameters called hyperparameters. In particular, the hyperparameters μ_H and σ_H are distributions instead of fixed numerical values. The priors on μ_H use the information from the polynomial regression performed previously, while σ_H follows an exponential distribution. The use of hierarchical models allows for an efficient estimate of the uncertainties in the basis-vector coefficients, thus avoiding an arbitrary scaling of the covariance-matrix uncertainties from the polynomial regression. In Table <ref> we list detailed information about the priors we used. The transit depth was obtained with two different formulations: as the planet-to-star radius ratio squared, and, for a given stellar limb-darkening law and impact parameter, with the analytic solutions from <cit.>. In our model, the occultation depth parameter was free to explore negative values as well. We set Gaussian priors on the stellar radius and mass based on <cit.>. The model was implemented in a Markov chain Monte Carlo (MCMC) framework using the no-U-turn sampler (NUTS) <cit.>, a variant of Hamiltonian Monte Carlo. We sampled the posterior distributions of the parameters with the probabilistic programming package <cit.>, with 32 000 draws and 4000 burn-in iterations. After each run, we checked that the chains were well mixed and that the Gelman-Rubin statistic was below 1.01 for all parameters <cit.>.
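The hierarchical treatment of the systematics coefficients can be sketched as follows using the PyMC probabilistic programming package (which may differ from the package used in the analysis). This is a simplified, out-of-transit illustration on synthetic data: the transit and occultation components, the limb-darkening priors, and the full set of fitted parameters are omitted, so it is not the model behind the published results.
\begin{verbatim}
import numpy as np
import pymc as pm

# Synthetic single-visit data (placeholders, not CHEOPS measurements)
rng = np.random.default_rng(1)
P = 0.7365                                    # orbital period [d] (approximate)
t = np.linspace(0.0, 1.1, 500)                # ~1.5 orbits
X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])  # toy basis vectors
beta_hat = np.array([1.0, 1e-4, -5e-5])       # from the polynomial regression
sigma_beta_hat = np.array([1e-5, 2e-5, 2e-5])
ferr = 1e-4
flux = X @ beta_hat + 4e-5 * np.cos(2 * np.pi * t / P - np.pi) \
       + rng.normal(0.0, ferr, t.size)

with pm.Model():
    # Hyperparameters: hyper-mean informed by the polynomial regression,
    # hyper-scale drawn from an exponential distribution
    mu_H = pm.Normal("mu_H", mu=beta_hat, sigma=sigma_beta_hat,
                     shape=beta_hat.size)
    sigma_H = pm.Exponential("sigma_H", lam=1.0, shape=beta_hat.size)
    beta = pm.Normal("beta", mu=mu_H, sigma=sigma_H, shape=beta_hat.size)

    # Sinusoidal phase curve (transit and occultation omitted in this sketch)
    A = pm.Uniform("A", lower=0.0, upper=200e-6)
    phi = pm.Uniform("phi", lower=-np.pi, upper=np.pi)
    model_flux = pm.math.dot(X, beta) \
                 + A * pm.math.cos(2 * np.pi * t / P - phi - np.pi)

    pm.Normal("obs", mu=model_flux, sigma=ferr, observed=flux)
    # The analysis used 32 000 draws and 4000 burn-in iterations with NUTS
    idata = pm.sample(draws=2000, tune=1000, target_accept=0.9)
\end{verbatim}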
§ RESULTS
We present the results assuming a sinusoidal shape for the phase curve, based on the fact that this model is preferred in most visits (see Sect. <ref> for details). The detrended flux and samples of the fitted phase-curve model for every CHEOPS visit are shown in a gallery in Fig. <ref>, while Table <ref> presents the best-fit values of the phase-curve parameters with 1σ uncertainties and the residual root mean square (RMS) of each visit. The overplotted phase-curve model and confidence interval in Fig. <ref> exhibit sharp features during transit and spikes caused by stitching across the data gaps. Appendix <ref> presents the residual flux.
§.§ Transit and occultation depth
The transit depths across visits, shown in Fig. <ref>, agree within 3σ with a mean transit depth of 348 ppm, except for visits 8, 18, and 29. Visual inspection of the residuals of these visits reveals trends that remain in the data, especially in visit 18, which exhibits a strong trend out of transit. In general, these visits are noisier than others, as indicated by the residual RMS. The moderate evidence of variability in the transit depth agrees with <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
The occultation depth (Fig. <ref>) varies by more than 3σ in some visits, notably visits 5, 10, 24, and 28. The best-fit occultation depth is negative in many cases, which lacks physical meaning. There seems to be a temporal correlation in the trend between the transit depth and occultation depth, which is especially clear in the first six visits. It is possible that fitting each visit independently for the orbital parameters, such as the impact parameter or mid-transit time, causes this trend. Our results are consistent with those in <cit.>, who used the same CHEOPS dataset. In Appendix <ref> we present plots of the relation between relevant phase-curve parameters.
§.§ Amplitude
The phase-curve amplitude initially observed by <cit.> is seen in most of the CHEOPS visits, but the magnitude changes. The highest modulation was observed during the first visit. For visits 8, 9, 19, 21, 23, 24, 25, 28, and 29, the amplitude is consistent with zero at 2σ significance. The difference between the lowest and highest phase-curve amplitude is 48 ppm. The phase-curve amplitude for each visit is displayed in Fig. <ref>. A weak correlation between the phase-curve amplitude and both the transit and occultation depth is observed (see Figs. <ref> and <ref>), which likely arises from the model construction. As in <cit.>, we do not assume that the origin of the sinusoidal signal in the flux is planetary. As a consequence, the sinusoidal signal can alter the flux level before and after a transit or occultation.
§.§ Phase offset
Our observations reveal a phase offset that changes from visit to visit and varies over the complete phase parameter space. The phase offset for each visit is shown in Fig. <ref>. The small phase-curve amplitude and the large uncertainty in the offset are related, as shown in Fig. <ref>. If a sinusoidal function has a small amplitude, it converges in practice to a straight line, and any point is as good as any other as the peak of the sinusoid. This is the case for visits 8, 23, 24, and 28.
The best-fit median phase offset of visits 5, 6, 12, 13, 17, 19, 21, and 25 shifts the phase peak close to the transit. This has two possible ramifications: either the sinusoidal function, which has no physical interpretation, is a poor fit to the data, or an astrophysical event occurred during these visits that caused an excess flux during transit. If the phase offset originates at the planet and the best-fit values peak during or close to transit, this would imply a maximum of reflected light on the night-side. While analysing the wide uncertainty in visits 19 and 25, we realised that the reason is that the ingress and egress of the transit were not observed by CHEOPS. As a consequence, the mid-transit time parameter has a bimodal distribution. The same phenomenon is observed in visits 9 and 24, although in these cases the phase peak is not close to the transit. In Appendix <ref> we show the joint posterior distributions of a CHEOPS visit that covered the transit ingress and egress and of a visit where these events could not be observed. The sinusoidal model is statistically preferred over physically motivated models (Sect. <ref>) in most CHEOPS visits, which suggests that it is not a poor fit to the data.
§.§ Consecutive CHEOPS visits
Some visits were scheduled to start immediately after a previous observation of 55 Cnc had ended. These continuous visits provide useful information on the phase-curve change and its timescale. The consecutive visits are numbers 3 and 4, 5 and 6, 8 and 9, 10 and 11, 12 and 13, and 24 to 27. We compare consecutive visits by overplotting the posterior distributions of the transit depth, occultation depth, phase-curve amplitude, and offset to properly estimate the significance between each parameter and their joint correlations. Figure <ref> shows the posterior distribution functions of phase-curve parameters and joint correlation plots of visits 3 and 4. The values of the transit depth differ at 1.9σ between visits 3 and 4, and the significance is at 2.9σ between visits 26 and 27. For the rest of the visit pairs, the difference is below 1.6σ. The occultation depth varies over 3σ between visits 3 and 4. The strong variability in the occultation depth between visits 3 and 4 is shown in the top left panel in Fig. <ref>, where the two posterior distributions barely overlap. The occultation depth varies significantly at 4.6σ between visits 10 and 11 and at 3σ between visits 24 and 25. The rest of the visit pairs show a difference below 2σ. The phase-curve amplitude differs at 1.9σ and 2σ between visits 3 and 4 and between visits 10 and 11, respectively. The sequential increase in amplitude from visits 25 to 27 exceeds 2.5σ from one visit to the next. Other pairs are consistent below 1.5σ. Finally, the phase offset exhibits change over 3σ between visits 8 and 9, 25 to 27, and between 26 and 27. Incidentally, the joint correlation plots show no significant correlation between the transit depth and phase-curve amplitude. There is low evidence of a variable transit depth and phase-curve amplitude between consecutive visits, but the occultation depth and phase offset vary significantly over 3σ in some cases. The 2D posterior distribution correlation plot between the phase-curve amplitude and offset (third panel in Fig. <ref> at the bottom from left to right) does not overlap at the 3σ level, revealing an overall phase-curve change due to the joint change in amplitude and offset between visits 3 and 4. The same is true for visits 8 and 9, 25 and 26, and 26 and 27. Because a CHEOPS visit of 55 Cnc e lasts approximately 1.5 orbital periods, we conclude that the joint change in the parameters describing the phase curve occurs on the order of the planetary orbital timescale or approximately on the order of a day.
§.§ Model comparison
The considered models for the phase curve were compared using leave-one-out (LOO) cross-validation <cit.>, a method for estimating the pointwise out-of-sample prediction accuracy of a Bayesian model. The LOO is similar to the widely applicable information criterion (WAIC), but it is more robust when the observations contain weak priors or sensitive outliers, at the cost of being computationally more expensive. In essence, it estimates the relative likelihood of one model being preferred over the other models in a set, using the posterior samples of the MCMC.
The top-ranked model has the lowest LOO value. The larger the difference in the LOO between models, the stronger the preference for the top-ranked model. In practice, the ΔLOO threshold for considering a model significantly better than the others is subjective and debated <cit.>, but a common convention is to consider a model significantly better if the ΔLOO to the second-ranked model is greater than 10. Another relevant quantity in the model comparison is the statistical weight, which can be interpreted as an estimate of the probability that the model will make the best predictions on future data among the considered models. The weights range between 0 and 1, and the sum of the weights for a set of models is equal to 1.
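As an illustration, such a comparison can be carried out with the ArviZ package, which reports the result on the expected log pointwise predictive density scale (where higher values are better) together with the model weights. The InferenceData objects below are assumed to come from fits such as the sketch above, run with pointwise log-likelihoods stored.
\begin{verbatim}
import arviz as az

# idata_sine, idata_lambert, idata_flat: InferenceData objects from the
# per-visit fits, sampled with idata_kwargs={"log_likelihood": True}
comparison = az.compare(
    {"sinusoid": idata_sine, "lambertian": idata_lambert, "flat": idata_flat}
)  # LOO is the default information criterion
print(comparison)   # ranks, elpd differences, standard errors, and weights
\end{verbatim}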
Because we considered five phase-curve models for a total of 29 CHEOPS visits, we present the information of the model comparison summarised in Fig. <ref>. The y-axis represents the CHEOPS visits, where each row depicts the model comparison for a specific visit. Each marker represents a phase-curve model, as indicated in the legend in Fig. <ref>. The x-axis shows the LOO relative to the top-ranked model. Thus, the top-ranked model appears leftmost in the plot with ΔLOO=0. The error bars are the standard error of the difference in the expected log-predictive density between each model and the top-ranked model. Complementary to the figure, Table <ref> reports the statistical weight of the top-ranked model.
The simple sinusoidal modulation of the phase curve is preferred in 18 out of 29 visits, while the flat phase curve is favoured by the LOO in 6 visits. A flat phase curve without occultation is ranked best in 4 visits. The piecewise-Lambertian is preferred only in visit 22. It is worth mentioning that for the cases when a flat phase curve is preferred by the LOO, the rest of the models follow close behind. To elaborate further, in visits 8, 9, 11, 21, 23, and 25, the difference in the LOO between the best and worst model is below 5. However, the flat model takes most of the statistical weight in visits 8, 9, and 23. The best-fit results of the sinusoidal model show that the amplitude for the above-mentioned visits is consistent with zero within 2σ. The small difference between the models reported by the LOO can be explained by the fact that at small amplitudes, the functions converge to a flat line. Visit 4 stands out due to its low amplitude, but it nonetheless slightly favours the sinusoidal model by the LOO. A flat phase curve without occultation is favoured in visit 28, even though the occultation depth is significant. This is even one of the deepest occultations of the dataset. While the piecewise-Lambertian is preferred only in visit 22, there is no strong indication that the observations are described best by a planet with an asymmetric albedo.
§.§ Thermal emission
We estimated the thermal contribution in the CHEOPS bandpass by retrieving a theoretical stellar spectrum from the PHOENIX stellar model <cit.> with an effective temperature of 5200 K and surface gravity log(g) = 4.5 <cit.>, and by assuming a planet temperature of 2697±270 K, the maximum hemisphere-averaged temperature measured by <cit.> with Spitzer observations. When the uncertainty in the brightness temperature is taken into account, the thermal contribution in the CHEOPS bandpass ranges between 3 and 11 ppm.
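The order of magnitude of this estimate can be reproduced with a rough blackbody approximation for both star and planet and a flat bandpass response; this simplification differs from the PHOENIX-based calculation above and is only meant to illustrate the procedure. The planet-to-star radius ratio is taken from the mean transit depth.
\begin{verbatim}
import numpy as np

h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, T):
    """Planck spectral radiance B_lambda(lam, T)."""
    return 2.0 * h * c**2 / lam**5 / np.expm1(h * c / (lam * kB * T))

def trapz(y, x):
    """Simple trapezoidal integration."""
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(x))

lam = np.linspace(0.33e-6, 1.1e-6, 2000)   # CHEOPS range, flat response assumed
T_star, T_planet = 5200.0, 2697.0          # K
rp_rs2 = 348e-6                            # (Rp/Rs)^2 from the mean transit depth

w = lam / (h * c)                          # photon-counting detector weighting
ratio = trapz(planck(lam, T_planet) * w, lam) / trapz(planck(lam, T_star) * w, lam)
print(f"thermal contribution ~ {1e6 * rp_rs2 * ratio:.0f} ppm (blackbody estimate)")
\end{verbatim}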
§ DISCUSSION
The CHEOPS observations of 55 Cnc e present a puzzling case. The phase-curve amplitude and phase offset change from one visit to the next by up to 50 ppm and span a wide range of offset angles. Consecutive visits reveal changes on the timescale of at least the orbital period.
If the mechanism causing this behaviour is some sort of activity, then the change between high and low phase-curve amplitudes can be ascribed to periods of activity and inactivity, depending on the nature of the mechanism <cit.>. A sufficiently strong grey absorber could obscure the day-side enough to produce a flat phase-curve signal and might produce flux variations due to scattering <cit.>.
If the process behind the change in the phase-curve amplitude and offset originates at the planet or is bound to its vicinity, we would expect the occultation depth to be approximately twice the phase-curve amplitude. Most visits do not satisfy 2A ≈ δ_occ. Additionally, the sinusoidal model infers a phase offset close to or during transit for some visits. It is hard to conceive of an event on the night-side of the planet that causes a stronger signal than at any other orbital phase. Thus, the origin of the variable signal is probably not at the planet.
§.§ Power spectrum
We characterised the periodic signals in the residuals with a Lomb-Scargle periodogram (see <cit.> for a review). In Fig. <ref> the blue points show the power of the unprocessed flux measurements, and the darker blue curve shows the binned data. Similarly, the red points show the power of the residuals, while the darker red curve shows their binned data. The residuals consist of the CHEOPS observations after removing the systematics, transit, occultation, and phase-curve model. We identified the frequencies corresponding to the known periodicities, such as the orbital period of planet e and the CHEOPS orbital period. Because a single CHEOPS visit of 27 hours translates into approximately 10 μHz, we did not consider frequencies below this value. The power spectrum searches for periodicities in the time-series observations. Periodic signals can only be measured reliably for periods shorter than the visit duration <cit.>.
In the residuals, no strong signals remain at the orbital period of the planet. There is also an absence of power at the CHEOPS orbital period. However, the Lomb-Scargle periodogram in Fig. <ref> exhibits strong peaks at higher-order harmonics of the CHEOPS orbital period, at frequencies starting around 1000 μHz. This can be indicative of an improper removal of systematic noise induced by the spacecraft. To analyse this in depth, we performed tests on simulated data.
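A sketch of how such a periodogram can be computed with astropy is shown below; t_bjd and resid stand for the concatenated times and residuals of all visits and are assumed to be defined, and the reference periods quoted in the comments are approximate.
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

# t_bjd [d], resid: concatenated residuals after removing systematics,
# transit, occultation, and the phase-curve model (assumed defined)
day_to_uHz = 1e6 / 86400.0
f_min = 10.0 / day_to_uHz                          # ~10 uHz lower bound (see text)
f_max = 0.5 / np.median(np.diff(np.sort(t_bjd)))   # pseudo-Nyquist frequency [1/d]

freq, power = LombScargle(t_bjd, resid).autopower(minimum_frequency=f_min,
                                                  maximum_frequency=f_max)
freq_uHz = freq * day_to_uHz

# Reference frequencies: planet e (~0.74 d) and the CHEOPS orbit (~100 min)
for name, period_d in [("planet e", 0.7365), ("CHEOPS orbit", 100.0 / 1440.0)]:
    print(f"{name}: {day_to_uHz / period_d:.1f} uHz")
\end{verbatim}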
We constructed 29 sets, each one consisting of 2000 points with an observing efficiency of 56% (see Table <ref>), lasting for 1.5 orbital periods. The gaps of 100 minutes in the dataset simulate the CHEOPS occultation caused by Earth. Between two consecutive sets, which represent CHEOPS visits, we added gaps of random durations. The points were randomly drawn from a normal distribution representing white noise on a level comparable to the real CHEOPS observations. In addition, we added correlated noise with a 1D Gaussian filter with a period that matched the stellar rotation period of 38.8 days <cit.>. The resulting power spectra show peaks at a frequency corresponding to the 100-minute duration of the gaps due to the CHEOPS orbital configuration and its higher harmonics. The peaks in our residuals might therefore be explained by the CHEOPS orbital configuration and do not necessarily imply an insufficient systematics detrending. Moreover, the absence of power in the periodogram at the CHEOPS orbital period shows that there are no strong signals at this frequency in the residuals.
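The simulation can be sketched as follows; the noise levels, grid spacing, and gap durations are illustrative choices rather than the exact values used.
\begin{verbatim}
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(42)
P_cheops = 100.0 / 1440.0          # CHEOPS orbital period [d]
cadence = 44.2 / 86400.0           # stacked-exposure cadence [d]

visits, t0 = [], 0.0
for _ in range(29):                # 29 simulated visits of 2000 points each
    t = t0 + np.arange(2000) * cadence
    keep = (t % P_cheops) / P_cheops < 0.56     # ~56% observing efficiency
    visits.append(t[keep])
    t0 = t[-1] + rng.uniform(1.0, 5.0)          # random gap between visits [d]
t_all = np.concatenate(visits)

white = rng.normal(0.0, 1e-4, t_all.size)       # white noise (~100 ppm)
# Correlated noise: white noise on a regular grid, smoothed on the stellar
# rotation timescale (38.8 d), rescaled and interpolated onto the sampling
grid = np.arange(t_all[0], t_all[-1], 0.01)
smooth = gaussian_filter1d(rng.normal(size=grid.size), sigma=38.8 / 0.01)
corr = np.interp(t_all, grid, 1e-4 * smooth / smooth.std())
flux_sim = 1.0 + white + corr
\end{verbatim}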
Another source of time-correlated astrophysical noise that we expect in the power spectrum is stellar granulation of the host 55 Cnc, which occurs at multiple time and length scales <cit.>. A granulation phenomenon called supergranulation has a characteristic timescale similar to the orbital period of 55 Cnc e, and this could give rise to the apparent phase variations that change in amplitude and phase over time. It is difficult to reliably measure the excess power on the supergranulation timescale with these CHEOPS observations because the duration of each visit in this work is about one granulation timescale. Future observations with longer visit durations, or theoretical advances in modeling the long-timescale granulation phenomenon, can address this hypothesis.
§.§ Phase-curve amplitude variability
Motivated by the 47.3-day periodicity in the occultation depth of 55 Cnc e found by <cit.>, we investigated the putative variability timescale of the phase-curve amplitude. Similar to <cit.> and <cit.>, we constructed three different models to reproduce the amplitude as a function of time. The first model consists of a flat line. The second model is a linear function, where the free parameters are the slope and intercept. The last model is a sine function of the form Asin(2π t/P+B)+C, where t is BJD time, and P is the variability period. The caveat of this analysis is that there is no precise timing associated with the phase-curve amplitude, in contrast to the precise mid-eclipse time used in <cit.>. As an approximation, we used the mean BJD time of each CHEOPS visit. The periodogram of the per-visit amplitudes shows a maximum peak at 52.25 days. This peak has no dominant power compared to other peaks, and thus we used the maximum value only as a reference to set a uniform prior between 40 and 70 days. We fitted our models in an MCMC routine. The sine function is favoured by the LOO, carrying all the statistical weight, with ΔLOO=11 to the linear function, which ranked second. The flat line is ranked last, with ΔLOO=14. We find a period of 51.9 ± 1.4 days, A=11.1 ± 2.3 ppm, B=73.85 ± 21.84 degrees, and C=16.2 ± 1.6 ppm. The amplitudes of each visit are phase-folded at the best-fit period and are shown in Fig. <ref>. From the best-fit sinusoidal model, we infer an estimated reference time of the local maximum at 2458934.98 BJD.
The period is similar to the 47.3-day periodicity in the occultation depth <cit.>. At present, it is unclear whether the periodicity in the occultation depth found by <cit.> and that in the phase-curve amplitude are related.
The 51.9-day period is absent from the power spectrum of the CHEOPS residuals (Fig. <ref>). The stellar rotation period of 38.8 days from combined photometry and spectroscopy <cit.> and of 42.7 days from photometry <cit.> appears close to the 51.9-day period. We also note that the orbital period of the non-transiting planet c is 44.4 days. An approximate computation of the reflected light that planet c could contribute yields a maximum value of 2.7 ppm. For this estimate, we assumed no thermal emission and a geometric albedo of unity to obtain an upper limit. Because planet c is not transiting and its inclination is unknown, only RV measurements <cit.> constrain its mass, which is comparable to the mass of Saturn, and thus we used the nominal radius of Saturn for the calculation. The estimated reflected light that could leak into the field of view of CHEOPS is too low to match the 11 ppm amplitude found in the periodic signal of the phase-curve amplitude, and it is thus unlikely to be the sole cause of the signal.
The origin of our 51.9-day signal is unknown.
§.§ Observed flux dips
Visual inspection of the detrended flux and bins in Fig. <ref> reveals decreases in flux, so-called dips, outside of the transit and occultation of 55 Cnc e. The most conspicuous dip is found in visit 17 after the second occultation, approximately at BJD time 2459251.98. Other identified dips occur during visits 26, 27, and 28 at BJD times 2459595.3, 2459595.95, and 2459630.28, respectively. In Table <ref> we summarise the identified dips with the estimated time and depth of each event.
Another interesting case is visit 10, where the residuals in Fig. <ref> show three small dips that occur roughly every 0.5 days at approximately BJD time 2459229.1, 2459229.59, and 2459230. However, inspection of the power spectrum on the complete dataset of the residuals does not reveal a significant periodicity at 0.5 days (the corresponding frequency of 23.1 μHz).
We checked whether another planet in addition to planet e might be transiting during the CHEOPS visits. 55 Cnc A is known to host five planets, of which planet e is the only transiting one. We used the ephemerides of the remaining planets in Table 3 of <cit.> to compute predicted transit times. Only during visit 4 did a predicted possible transit of planet b coincide with the observing time. Phase-folding the observations provided no hint of a transit, and an MCMC search for the planet yielded no detection. The dips are therefore not related to a transit or occultation of planets b, c, d, or f. After carefully inspecting the dips and each frame of the corresponding visits, we note that the dip durations coincide exactly with the duration of a CHEOPS orbit. We therefore suspect that they are systematic.
Based on the analysis of the phase-curve observations, we now discuss possible mechanisms that might cause the variability in the phase-curve amplitude and offset. Previous research suggested that refractory material <cit.> or an inhomogeneous circumstellar dust torus <cit.> might explain the observations, as might volcanic activity <cit.> or star-planet interaction <cit.>.
§.§ Dust in the environment of 55 Cnc e
In the following sections, we build a toy model to study dust in the orbital environment of 55 Cnc e. First, we estimate the required material to obscure the stellar light by an arbitrary fraction. Then we constrain the composition of the material by estimating its characteristic sublimation timescale and comparing this to the variability timescales in the data. Finally, we compute the motion of the dust in the system after it escapes the planet to determine whether the ejected material could form a circumstellar torus.
Because of the uncertainty on the composition and interior of the planet, we consider characteristic species for a rocky planet and lava worlds: silicon monoxide, fayalite (the iron-rich end-member[An end-member is a mineral at the extreme end of a mineral series in terms of purity, often described as solid solutions with varying compositions of some chemical elements.] of olivine), enstatite (an end-member of pyroxene), forsterite (the magnesium-rich end-member of olivine), α-quartz and amorphous quartz, corundum, silicon carbide, and graphite. The silicates pyroxene and olivine are expected in rocky exoplanets because they predominate in Earth's mantle <cit.>. α-quartz refers to quartz with a trigonal structure, whose crystalline unit resembles an oblique cube with corner angles that are equal but not right angles. Table <ref> lists the selected species.
§.§.§ Estimated mass loss
The amplitude change in the phase curve reveals a maximum difference of approximately 50 ppm, as shown in Fig. <ref>. We investigated the amount of material required to produce a 50 ppm change in flux. For the purpose of this computation, instead of considering material causing variations out of transit, we assumed that the material, composed of grey absorbing grains, transited the star. We then determined the amount of material required to obscure the star by 50 ppm.
Following <cit.>, we first computed the total mass required to absorb or scatter a certain fraction f of the starlight. We assumed that the dust is optically thin (otherwise, the surface of the planet would cool down enough so that the material would not be produced in the first place <cit.>). We let the dust cover an area A of the stellar disk π R_⋆^2 and have an optical depth τ_d=m_dκ_d, where the subscript d represents the dust, m_d is the grain mass per unit area, and κ_d is the opacity. Then the fraction of starlight is given by
\begin{equation}
f = \frac{A}{\pi R_\star^2}\, m_d \kappa_d .
\end{equation}
When we consider the grains to be spherical with radii s and internal density ρ_int, the opacity is given as
\begin{equation}
\kappa_d = \frac{3}{4 \rho_\mathrm{int} s} .
\end{equation}
The total mass in dust covering the star is given as a function of the grain mass per unit area m_d and total covered area A as
\begin{equation}
M_d = m_d A .
\end{equation}
Substituting Eq. (<ref>) into Eq. (<ref>), solving for m_dA, and substituting into Eq. (<ref>), we obtain a total mass of
\begin{equation}
M_d = \frac{4\pi}{3} f \rho_\mathrm{int} R_\star^2 s .
\end{equation}
To obtain a numerical estimate, we first took possible grain sizes into account that are opaque in the CHEOPS bandpass and transparent in the Spitzer bandpass <cit.>, resulting in a plausible range of particle radii 0.1 ≤ s ≤ 0.7 μm. The grain density was chosen to be 3 g cm^-3, similar to the density of forsterite and enstatite. The radius of the star was R_⋆=0.94 R_⊙. For a fraction of obscured starlight of f=50 ppm, we estimate a mass between 2.7× 10^9 and 1.88 × 10^11 kg. Finally, the mass-loss rate was estimated by dividing by the planetary orbital period of 0.74 days, yielding a rate Ṁ_d between 1.33× 10^12 and 9.32 × 10^13 kg/yr.
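As a numerical check, the expression for M_d above can be evaluated directly for the upper end of the grain-size range with the same assumptions as in the text:
\begin{verbatim}
import numpy as np

R_star = 0.94 * 6.957e8        # stellar radius [m]
f = 50e-6                      # obscured fraction of starlight
rho_int = 3000.0               # grain density [kg m^-3]
s = 0.7e-6                     # grain radius [m] (upper end of the range)
P_orb_yr = 0.74 / 365.25       # orbital period [yr]

M_d = 4.0 * np.pi / 3.0 * f * rho_int * R_star**2 * s
print(f"M_d  ~ {M_d:.2e} kg")                 # ~1.9e11 kg
print(f"Mdot ~ {M_d / P_orb_yr:.2e} kg/yr")   # ~9e13 kg/yr
\end{verbatim}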
To place this value in perspective, we compared it to the mass loss due to photoevaporation of the planetary atmosphere driven by the combined X-ray and extreme ultra-violet (EUV) flux from the host star, abbreviated as XUV. In this case, the mass-loss rate due to photoevaporation is <cit.>
\begin{equation}
\dot{M}_\mathrm{evap} = \frac{3 F_\mathrm{XUV}}{4 G \rho_p} ,
\end{equation}
where F_XUV is the XUV flux from the host star at the planet, ρ_p is the bulk density of the planet, and G is the gravitational constant. The estimate was made without any assumption on the planetary atmosphere. <cit.> collected X-ray luminosities for 82 stars and inferred EUV luminosities using coronal models. The star 55 Cnc A has available X-ray observations. The XUV flux at 55 Cnc e is 870.96 erg s^-1cm^-2 (see Table 6 of <cit.>). Using appropriate parameters for the bulk density of 55 Cnc e <cit.>, we obtain a mass-loss rate due to photoevaporation of 4.64 × 10^13 kg/yr.
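A short numerical check of this rate follows directly from the expression above; the bulk density is taken here as approximately 6.7 g cm^-3, an assumed value consistent with the cited literature.
\begin{verbatim}
G = 6.674e-11                       # gravitational constant [SI]
F_xuv = 870.96e-7 * 1e4             # 870.96 erg s^-1 cm^-2 -> W m^-2
rho_p = 6.7e3                       # assumed bulk density [kg m^-3]

mdot = 3.0 * F_xuv / (4.0 * G * rho_p)                     # [kg/s]
print(f"photoevaporation ~ {mdot * 3.156e7:.2e} kg/yr")    # ~4.6e13 kg/yr
\end{verbatim}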
Our estimated mass loss due to the amount of material obscuring the star is comparable to the mass-loss rate due to photoevaporation, which suggests that the amount of material required to cause a 50 ppm change is plausible. However, photoevaporation is not the only mechanism leading to atmospheric escape. Tidally driven volcanism, as well as Jeans escape <cit.> and plasma-driven atmospheric sputtering feeding a plasma torus, are mass-loss processes that could occur on a super-Earth <cit.>. The main challenge for the required mass to escape the planet is the high escape velocity of approximately 24 km/s. However, violent plumes such as those observed in Io <cit.> could provide the material to the circumstellar environment. The plume speeds in Io are approximately 0.5 km/s <cit.>, which is lower than the Io escape velocity of 2.6 km/s. Even so, plume material escapes the atmosphere with the aid of additional mechanisms such as sputtering by charged particles <cit.>. A complex model combining different mass-loss processes is required to determine whether the amount of material can reach the escape velocity.
§.§.§ Characteristic sublimation timescales
Due to the close-in orbit of 55 Cnc e and its likely tidally locked configuration, the silicates vaporise at the day-side surface temperature of ∼2700 K <cit.> and form an atmosphere whose equilibrium vapour pressure is given by the Clausius-Clapeyron equation,
\begin{equation}
P_\mathrm{vapour}(T) = \exp\!\left(-\frac{\mu m_\mathrm{H} L_\mathrm{sub}}{k_\mathrm{B} T} + B\right) ,
\end{equation}
where μ is the molecular mass of gas released from dust due to sublimation, m_H is the atomic mass unit <cit.>, L_sub is the latent heat of sublimation, k_B is Boltzmann's constant, and B is a constant composition-dependent sublimation parameter. We express Eq. (<ref>) as
\begin{equation}
P_\mathrm{vapour}(T) = \exp\!\left(-\frac{\mathcal{C}}{T} + B\right) ,
\end{equation}
with
\begin{equation}
\mathcal{C} = \frac{\mu m_\mathrm{H} L_\mathrm{sub}}{k_\mathrm{B}} .
\end{equation}
The sublimation parameters for the selected species are shown in Table <ref>.
The mass-loss rate of dust particles due to sublimation is given by <cit.>
\begin{equation}
\frac{\mathrm{d}m}{\mathrm{d}t} = -S \sqrt{\frac{\mu m_\mathrm{H}}{2\pi k_\mathrm{B} T}}\, P_\mathrm{vapour}(T) ,
\end{equation}
where S is the surface area of the aggregate particle. We assumed that a dust grain is composed of N identical spheres with radius s, so that S=4 π Ns^2.
We calculated the equilibrium temperature T of the dust in local thermodynamic equilibrium through an energy balance between the absorption rate of stellar radiation and the thermal emission and energy loss through sublimation <cit.>,
\begin{equation}
\Omega \int C_\mathrm{abs}(n, x)\, B_\star(\lambda)\, \mathrm{d}\lambda = 4\pi \int C_\mathrm{abs}(n, x)\, B_\lambda(\lambda, T)\, \mathrm{d}\lambda - \frac{\mathrm{d}m}{\mathrm{d}t} L ,
\end{equation}
where B_⋆(λ) is the solar radiance, and B_λ(λ, T) is the Planck function of the dust <cit.>. The solid angle subtended by the star at a distance r is given by
\begin{equation}
\Omega = 2\pi \left[ 1 - \sqrt{1 - \left(\frac{R_\star}{r}\right)^2}\, \right] .
\end{equation}
The absorption cross sections C_abs in Eq. (<ref>) depend on the complex refractive index 𝑛, the size parameter 𝑥=2 π s / λ, and the structure of the particle. These cross sections were computed using the Mie theory. We used the program <cit.> to retrieve the absorption cross sections for each species listed in Table <ref> for dust radii 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, and 0.7 μm and for a wide discrete range of wavelength values available in the collection. This narrow range of particle sizes satisfies the conditions of being opaque in the optical <cit.>, but transparent in the IR <cit.>. The dataset of contains optical properties of 32 condensates (see Table 1 of <cit.> for more details). The equilibrium temperature for each grain size was pretabulated according to Eq. (<ref>).
Finally, after the temperature of the grains reached a state of equilibrium, the characteristic timescales for sublimation were estimated. For this purpose, we considered the grain mass as a function of its density and the volume of the spheres. Taking the first derivative, we obtain
\begin{equation}
\frac{\mathrm{d}m}{\mathrm{d}t} = 4 \rho \pi N s^2 \frac{\mathrm{d}s}{\mathrm{d}t} ,
\end{equation}
and rewriting Eq. (<ref>) results in
\begin{equation}
\frac{\mathrm{d}s}{\mathrm{d}t} = -\sqrt{\frac{\mu m_\mathrm{H}}{2\pi k_\mathrm{B} T}}\, \frac{P_\mathrm{vapour}(T)}{N \rho} .
\end{equation}
The advantage of Eq. (<ref>) is that it relates the grain radii and time explicitly. The right-hand side of the equation only depends on the temperature and properties of the grain. After tabulating all relevant values, we solved the differential equation numerically to estimate the required time for a dust grain of a given size to sublimate. We integrated numerically using a trapezoidal method for an initial grain radius until it reached a threshold value of s=0.001 μm.
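A schematic implementation of this integration is given below. The sublimation parameters in the commented example are generic placeholder values, not the tabulated values used in the analysis, and the equilibrium temperature is assumed to be supplied by a pretabulated lookup function.
\begin{verbatim}
import numpy as np

kB, m_H = 1.380649e-23, 1.66054e-27          # Boltzmann constant, atomic mass unit

def dsdt(T, mu, L_sub, B, rho, N=1):
    """Sublimation rate ds/dt [m/s], with the Clausius-Clapeyron vapour
    pressure inserted into the mass-loss rate given in the text."""
    C = mu * m_H * L_sub / kB
    P_vap = np.exp(-C / T + B)
    return -np.sqrt(mu * m_H / (2.0 * np.pi * kB * T)) * P_vap / (N * rho)

def sublimation_time(s0, T_of_s, s_min=0.001e-6, n=2000, **params):
    """Trapezoidal integration of dt = ds/(ds/dt) from s0 down to s_min,
    with T_of_s a pretabulated equilibrium-temperature lookup."""
    s = np.linspace(s0, s_min, n)
    y = 1.0 / dsdt(T_of_s(s), **params)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(s))      # [s]

# Placeholder example (generic silicate-like parameters, constant 2500 K;
# these are not the tabulated values used in the analysis):
# t_sub = sublimation_time(0.5e-6, lambda s: np.full_like(s, 2500.0),
#                          mu=140.0, L_sub=3.9e6, B=32.0, rho=3300.0)
\end{verbatim}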
For a single sphere (N=1), the characteristic sublimation timescale of the selected species is shown in Table <ref>. Silicates made of pyroxene, olivine, and fayalite survive less than a minute. Silicon monoxide and corundum have an even shorter sublimation timescale, on the order of a second. An α-quartz grain remains in the environment for up to approximately 11 hours before sublimating, while for amorphous quartz the sublimation time is around two hours. Silicon carbide has a survival time between 4 and 7 hours. The characteristic sublimation timescales of graphite span a wide range of values for different grain radii, translating into sublimation times between 4 seconds and 46 hours. Because the phase variation of 55 Cnc e is observed at least on the orbital timescale of around 17.7 hours, most of the species in the considered radius range sublimate long before they would be able to produce the phase-curve variability. Only graphite and α-quartz between 0.3 and 0.7 μm, as well as a 0.7 μm grain of silicon carbide, survive over multiple hours. Ultimately, larger grains composed of graphite and α-quartz have a sublimation time comparable to the orbital period of the planet. For an Earth-like mantle, the short sublimation lifetimes of pyroxene and olivine would require the planet to continuously supply material to the atmosphere to produce variability on the timescale of hours. Such a continuous replenishment of material to the circumstellar environment would result in a comet-like tail shape in the transit, which has not been observed. Because the composition of the outgassed material is connected to the composition of the planetary mantle <cit.>, it seems unlikely that these species originate from the planet and reach the circumstellar environment, or at least that they are abundant enough to cause the observed variability.
§.§.§ Stellar radiation pressure
We considered the scenario of material that is ejected from the planet through, for example, explosive volcanism or high-altitude condensation <cit.>. As soon as the dust leaves the planet, it will be subject to radiation pressure by the star. The ratio of radiation pressure to gravity acting on a dust particle with mass m_d is given by <cit.>[The correct formulation of the solid angle used here to calculate the ratio of radiation pressure to gravity, Ω=π r^2/R_⋆^2, can be found in <cit.>.]
\begin{equation}
\beta = \frac{\pi R_\star^2}{G M_\star m_d c} \int B_\star(\lambda)\, C_\mathrm{pr}(n, x)\, \mathrm{d}\lambda ,
\end{equation}
where M_⋆ is the mass of the host star, c is the speed of light, and C_pr(𝑛, 𝑥) is the radiation pressure cross section, defined as C_pr=C_abs+(1-g_0)C_sca, with C_abs and C_sca the absorption and scattering cross sections, respectively, and g_0 the scattering asymmetry parameter. The g_0 parameter describes how isotropic or anisotropic the scattering is. It is usually tabulated as a function of the wavelength and particle size. For g_0=0, light is scattered equally in all directions[An asymmetry parameter of g_0=0 does not necessarily imply isotropic scattering <cit.>. To be precise, it corresponds to a symmetric phase function <cit.>.]. A positive value of the asymmetry parameter indicates forward scattering, while for a negative value, backscattering prevails. We computed these quantities with . Examination of the output shows that, in the CHEOPS bandpass and for particle sizes between 0.1 and 0.7 μm, forward scattering dominates for all considered species at the shorter wavelengths. However, forward scattering is not dominant for the small grain sizes. For the smallest grains, the asymmetry parameter tends asymptotically to a symmetric phase function at the longer wavelengths in the CHEOPS range.
A ratio of radiation pressure to gravity above 0.5 leads to an unbounded trajectory, while values below this threshold correspond to closed orbits. In the presence of radiation pressure and gravity, we can interpret the two opposing forces as an effective gravitational field that is reduced compared to the one acting on the planet, defined as g_eff=GM_⋆(1-β)/r^2. Material in bounded orbits moves on a Keplerian ellipse with the periastron at the location where it was released. Conservation of energy and angular momentum between the grain and the planet provides information on the eccentricity and semi-major axis of the grain <cit.>,
\begin{equation}
e_d = \frac{\beta}{1-\beta} ; \qquad a_d = a_p \frac{1-\beta}{1-2\beta} ,
\end{equation}
where a_p is the semi-major axis of the planet. With Kepler's third law, we then computed the period of a grain released by the planet and subject to radiation pressure.
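The mapping from β to the grain orbit can be sketched as follows. The stellar mass, stellar radius, and semi-major axis below are approximate literature values adopted only for illustration, and the band integral entering β is assumed to be computed elsewhere from the Mie cross sections.
\begin{verbatim}
import numpy as np

G, c = 6.674e-11, 2.998e8
M_sun, R_sun, AU = 1.989e30, 6.957e8, 1.496e11
M_star, R_star = 0.905 * M_sun, 0.94 * R_sun    # approximate values
a_p = 0.0154 * AU                                # semi-major axis of 55 Cnc e

def beta_ratio(m_d, band_integral):
    """Radiation-pressure-to-gravity ratio for a grain of mass m_d;
    band_integral = int B_star(lambda) * C_pr(lambda) dlambda."""
    return np.pi * R_star**2 / (G * M_star * m_d * c) * band_integral

def grain_orbit(beta):
    """Eccentricity, semi-major axis, and period of a grain released by the
    planet; bound orbits require beta < 0.5 (effective gravity GM*(1-beta))."""
    e_d = beta / (1.0 - beta)
    a_d = a_p * (1.0 - beta) / (1.0 - 2.0 * beta)
    P_d = 2.0 * np.pi * np.sqrt(a_d**3 / (G * M_star * (1.0 - beta)))
    return e_d, a_d, P_d

for b in (0.1, 0.3, 0.45):
    e_d, a_d, P_d = grain_orbit(b)
    print(f"beta={b}: e={e_d:.2f}, a={a_d/AU:.4f} AU, P={P_d/3600.0:.1f} h")
\end{verbatim}
For β approaching zero, the recovered period tends to the planetary orbital period of about 17.7 hours, which serves as a consistency check of the sketch.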
We used Eq. (<ref>) to estimate the ratio of radiation pressure to gravity for our range of grain radii and compositions. In Fig. <ref> the ratio of radiation pressure to gravity for each considered species is shown. Grains with a ratio above 0.5 are blown away, while grains below this ratio orbit the star. The latter case is of particular interest because of the possibility of forming a circumstellar torus of dust. Most silicates within our size range move in a closed orbit, while graphite grains smaller than 0.5 μm experience a radiation pressure that is too strong and are blown away. The same occurs for silicon carbide grains smaller than 0.4 μm.
For the grain radii that lead to a Keplerian orbit, we computed the eccentricity and semi-major axis using Eq. (<ref>) to obtain the trajectory of a grain during its characteristic sublimation timescale after escaping the planet. A grain of a specific species and size with periastron at the location it escaped the planetary Hill sphere radius initially moves at the same speed as the planet <cit.>. It is fair to assume this initial speed value because the ratio of the escape speed to the orbital speed is approximately √((M_pa_p)/(M_⋆R_p))≈ 0.01. A higher ratio of the radiation pressure to gravity leads to more eccentric and longer orbits. Figure <ref> shows the motion of the dust during its characteristic sublimation lifetime. Based on the timescales, most species will stay bounded to the planet or follow closely behind, forming a comet-like tail. So far, there is no evidence of a comet-like transit shape in observations of 55 Cnc e. No asymmetry in the transit shape is observed in the CHEOPS observations, as shown in the phase-folded light curve in Fig. <ref>.
Graphite, α-quartz, and silicon carbide travel at least one-fourth of an entire orbit during their lifetime. While the process of supplying material could be stochastic in nature, if the dust is replenished frequently enough, an inhomogeneous torus around the star could form. It would consist of regions that are more densely packed than others and would, in theory, vary in opacity, producing measurable flux variations. While this could explain the variability in the phase-curve amplitude, it remains to be seen why this is not observed during transit. The material could be an inefficient forward-scatterer but might scatter efficiently at other angles. However, the asymmetry parameter of our considered species indicates that forward scattering mostly dominates in the CHEOPS bandpass for most of the grain sizes. If the ejection of material occurs during transit, the material would float close behind the planet and manifest in the transit shape as a less pronounced flux increase during egress and even after egress of the planet. It is possible that none of the CHEOPS visits coincided with such an event during transit, which might explain why we did not see a comet-like transit shape <cit.> in the data. Moreover, the fact that such a transit due to a comet-like tail is not observed suggests that the replenishment of material to the torus is not an uninterrupted process.
§.§ Star-planet interactions
Magnetic star-planet interactions have already been considered in previous works as a possible origin of the chromospheric hot spots of 55 Cnc. <cit.> modelled the environment of 55 Cnc based on the magnetic field constrained by Zeeman-Doppler imaging carried out in 2017 and found that 55 Cnc e is extremely likely to orbit within the sub-Alfvénic region of the stellar wind. This implies that a direct magnetic connection can be established between planet e and 55 Cnc <cit.>, channelling energy from the planet vicinity to the star along the magnetic topology of the stellar environment <cit.>. <cit.> further considered this possibility and estimated the strongest power that could be associated with such an interaction <cit.> using the data from the wind model of <cit.>. They found that any star-planet magnetic interaction signal could not exceed 10^-10 L_⋆. This optimistic estimate predicts a very low power and thus rules out any detectable magnetic star-planet interaction between 55 Cnc and 55 Cnc e.
In addition, the characteristics of the 55 Cnc system also allow us to rule out such a detection. Any magnetic star-planet interaction signal will be modulated by the orbital period of the planet and the rotation rate of the star that entrains its low corona (for an example for HD 189733, see <cit.>). 55 Cnc rotates in about 40 days <cit.>, while the planet orbits in about 0.74 days. If the star possesses a large-scale magnetic topology similar to the topology it had in April 2017, that is, an inclined dipole <cit.>, the location of the origin of the signal would circulate around the magnetic poles of the star as the planet orbits. Because the star rotates more slowly than the orbital motion, the signal would be visible at all times (one magnetic pole would always face Earth) and should correlate with the orbital period of 55 Cnc e. In addition to this signal, radio emission from the hypothetical magnetosphere of 55 Cnc e would also be expected, again correlated with the orbital period of the planet <cit.>. No such signals have been detected so far. We can therefore safely reject this hypothesis for 55 Cnc e.
Other types of star-planet interactions could nevertheless occur between the two bodies. Tidal interactions have also been proposed as a source of enhanced stellar activity <cit.>. In the case of 55 Cnc, however, the low mass of planet e renders this scenario implausible. The last type of star-planet interaction that could be acting in 55 Cnc is the infall of material escaping from the planetary atmosphere down to the stellar corona, producing a detectable signal <cit.>. In this case, dedicated studies (beyond the scope of the present paper; e.g. <cit.>) are needed to assess both the energetics and the relative phase of such a signal.
§ CONCLUSIONS
CHEOPS observations reveal a phase curve that changes at least on the orbital timescale of the planet. The phase modulation varies by up to 50 ppm. Additionally, we found a 51.9-day period in the time-dependent phase-curve amplitude across the CHEOPS visits. Its origin is unknown. The fact that the peak of the phase curve occurs during or close to transit in some visits rules out the planet as the source of the signal. These results motivated a deeper study of whether dust might be the cause. Our toy model allowed us to explore possible compositions of the dust. The short lifetimes of some compounds such as pyroxene and olivine, whose abundance is expected to dominate at the surface of Earth-like exoplanets, mean that they are unlikely to cause variability on the order of hours. Only a narrow range of dust sizes of graphite, silicon carbide, and quartz satisfies the required timescale. Additionally, only certain particle sizes of these species remain candidates to form a torus around the star, as suggested by past research. An argument against the formation of a circumstellar dust torus is the planetary escape velocity of approximately 24 km/s.
Previous research attributed the puzzling observations of 55 Cnc e to individual phenomena. Instead of searching for a single process, a complex model including dust dynamics, magnetohydrodynamics, and atmospheric radiative transfer should be developed to provide a self-consistent description of the planet. Such a model should realistically treat the motion of the grains and their influence on the observations in order to provide conclusive answers.
Recently, JWST observed 55 Cnc e in the framework of two accepted programs for Cycle 1. The first set of observations aims to study the possibility of a 3:2 spin-orbit resonance as an explanation for the variable occultation depth <cit.>. The second program will characterise the atmosphere of 55 Cnc e via spectral features of H_2O, CO, CO_2 , and SiO <cit.>. These observations promise exciting new insights in the IR range. Moreover, simultaneous observations of CHEOPS and JWST, accompanied by other instruments such as a spectropolarimeter, would provide a unique opportunity for resolving the nature of this fascinating exoplanet.
We are grateful to the anonymous referee for the careful reading and thoughtful suggestions that improved this paper. We also thank the editor, Emmanuel Lellouch, for insightful comments. Both made the submission process an enjoyable experience. We also thank the language editor for the revision of the manuscript.
CHEOPS is an ESA mission in partnership with Switzerland with important contributions to the payload and the ground segment from Austria, Belgium, France, Germany, Hungary, Italy, Portugal, Spain, Sweden, and the United Kingdom. The CHEOPS Consortium would like to gratefully acknowledge the support received by all the agencies, offices, universities, and industries involved. Their flexibility and willingness to explore new approaches were essential to the success of this mission.
EMV acknowledges support from the Centre for Space and Habitability (CSH). This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. EMV acknowledges the financial support of the SNSF. EMV thanks Beatriz Campos Estrada for insightful discussions on the sublimation timescales of dust in disintegrating exoplanets.
B.-O. D. acknowledges support from the Swiss State Secretariat for Education, Research and Innovation (SERI) under contract number MB22.00046.
ABr was supported by the SNSA.
S.G.S. acknowledges support from FCT through FCT contract nr. CEECIND/00826/2018 and POPH/FSE (EC).
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Spice Dune, grant agreement No 947634).
LBo, VNa, IPa, GPi, RRa, GSc, VSi, and TZi acknowledge support from CHEOPS ASI-INAF agreement n. 2019-29-HH.0.
DJB acknowledges financial support from the CSH, University of Bern.
ML acknowledges support of the Swiss National Science Foundation under grant number PCEFP2_194576.
ASt acknowledges support from the PLATO/CNES grant at CEA/IRFU/DAp and the french Programme National de Planétologie (PNP).
This work was supported by FCT - Fundação para a Ciência e a Tecnologia through national funds and by FEDER through COMPETE2020 - Programa Operacional Competitividade e Internacionalizacão by these grants: UID/FIS/04434/2019, UIDB/04434/2020, UIDP/04434/2020, PTDC/FIS-AST/32113/2017 & POCI-01-0145-FEDER- 032113, PTDC/FIS-AST/28953/2017 & POCI-01-0145-FEDER-028953, PTDC/FIS-AST/28987/2017 & POCI-01-0145-FEDER-028987, O.D.S.D. is supported in the form of work contract (DL 57/2016/CP1364/CT0004) funded by national funds through FCT.
YAl acknowledges the support of the Swiss National Fund under grant 200020_172746.
We acknowledge support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grants ESP2016-80435-C2-1-R, ESP2016-80435-C2-2-R, PGC2018-098153-B-C33, PGC2018-098153-B-C31, ESP2017-87676-C5-1-R, MDM-2017-0737 Unidad de Excelencia Maria de Maeztu-Centro de Astrobiología (INTA-CSIC), as well as the support of the Generalitat de Catalunya/CERCA programme. The MOC activities have been supported by the ESA contract No. 4000124370.
S.C.C.B. acknowledges support from FCT through FCT contracts nr. IF/01312/2014/CP1215/CT0004.
XB, SC, DG, MF and JL acknowledge their role as ESA-appointed CHEOPS science team members.
ACC acknowledges support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1.
This project was supported by the CNES.
The Belgian participation to CHEOPS has been supported by the Belgian Federal Science Policy Office (BELSPO) in the framework of the PRODEX Program, and by the University of Liège through an ARC grant for Concerted Research Actions financed by the Wallonia-Brussels Federation.
L.D. is an F.R.S.-FNRS Postdoctoral Researcher.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (project Four Aces; grant agreement No 724427). It has also been carried out in the frame of the National Centre for Competence in Research PlanetS supported by the Swiss National Science Foundation (SNSF). DE acknowledges financial support from the Swiss National Science Foundation for project 200021_200726.
MF and CMP gratefully acknowledge the support of the Swedish National Space Agency (DNR 65/19, 174/18).
DG gratefully acknowledges financial support from the CRT foundation under Grant No. 2018.2323 “Gaseous or rocky? Unveiling the nature of small worlds”.
M.G. is an F.R.S.-FNRS Senior Research Associate.
MNG is the ESA CHEOPS Project Scientist and Mission Representative, and as such also responsible for the Guest Observers (GO) Programme. MNG does not relay proprietary information between the GO and Guaranteed Time Observation (GTO) Programmes, and does not decide on the definition and target selection of the GTO Programme.
SH gratefully acknowledges CNES funding through the grant 837319.
KGI is the ESA CHEOPS Project Scientist and is responsible for the ESA CHEOPS Guest Observers Programme. She does not participate in, or contribute to, the definition of the Guaranteed Time Programme of the CHEOPS mission through which observations described in this paper have been taken, nor to any aspect of target selection for the programme.
This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche.
PM acknowledges support from STFC research grant number ST/M001040/1.
This work was also partially supported by a grant from the Simons Foundation (PI Queloz, grant number 327127).
IRI acknowledges support from the Spanish Ministry of Science and Innovation and the European Regional Development Fund through grant PGC2018-098153-B- C33, as well as the support of the Generalitat de Catalunya/CERCA programme.
GyMSz acknowledges the support of the Hungarian National Research, Development and Innovation Office (NKFIH) grant K-125015, a PRODEX Experiment Agreement No. 4000137122, the Lendület LP2018-7/2021 grant of the Hungarian Academy of Science, and the support of the city of Szombathely.
V.V.G. is an F.R.S-FNRS Research Associate.
NAW acknowledges UKSA grant ST/R004838/1.
ACC and TW acknowledge support from STFC consolidated grant numbers ST/R000824/1 and ST/V000861/1, and UKSA grant number ST/R003203/1.
NCS acknowledges funding by the European Union (ERC, FIERCE, 101052347). Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
This research made use of <cit.> and its
dependencies <cit.>. We acknowledge the use of further software: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.
§ DETRENDING BASIS VECTORS
Table <ref> provides information on the basis vectors we used to correct for systematics in the flux measurements of each CHEOPS visit. The selection is based on the combination of basis vectors that minimises the BIC.
§ RESIDUALS
The residuals correspond to the flux after systematics, transit, occultation model, and the sinusoidal signal of the phase curve are removed. The gallery in Figs. <ref> and <ref> contains the residuals of all CHEOPS visits. Complementary to this, the density distribution of the residuals is shown next to the time series.
§ PHASE-CURVE PARAMETERS
Here we present pairwise relations between relevant phase-curve parameters: transit and occultation depth, and phase-curve amplitude and offset. The transit depth is compared with the occultation depth (Fig. <ref>), phase-curve amplitude (Fig. <ref>), and phase offset (Fig. <ref>). Then the occultation depth is compared with amplitude (Fig. <ref>) and offset (Fig. <ref>). The remaining combination relates the phase-curve amplitude with the offset (Fig. <ref>).
§ SELECTED CORNER PLOTS
Figures <ref> and <ref> show the full joint posterior correlation plots of visits 2 and 9, respectively. b is the impact parameter, t_0 is the mid-transit time relative to the reference time, R_p/R_⋆ is the ratio of the planetary radius to the stellar radius, (R_p/R_⋆)^2 is the transit depth, δ_t is the analytical expression of the transit depth <cit.>, R_⊙ and M_⊙ are the stellar radius and mass, roll refers to the roll angle, δ_occ is the occultation depth, Phi is the phase offset in degrees, and log(s) is the natural logarithm of the flux uncertainty for each measurement.
Visit 9 clarifies the nature of the high uncertainty in the phase offset reported in Table <ref>. Since ingress or egress were not observed by CHEOPS, the MCMC infers a bimodal distribution on the mid-transit time.
§ MODEL COMPARISON
For each visit, we performed a comparison between different models for the phase curve: sinusoid function, Lambertian sphere, piecewise-Lambertian, constant baseline flux, and constant baseline flux without occultation (set to zero). Table <ref> shows the top-ranked model and the difference in the LOO between the first and second ranked model. The last column is the statistical weight of the top-ranked model.
|
http://arxiv.org/abs/2307.04594v1 | 20230710143529 | Parameterized Analysis of the Cops and Robber Problem | [
"Harmender Gahlawat",
"Meirav Zehavi"
] | cs.DM | [
"cs.DM"
] |
Parameterized Analysis of the Cops and Robber Problem
Harmender Gahlawat and Meirav Zehavi
July 2023
=========================================================
Pursuit-evasion games have been intensively studied for several decades due to their numerous applications in artificial intelligence, robot motion planning, database theory, distributed computing, and algorithmic theory. Cops and Robber (CnR) is one of the most well-known pursuit-evasion games played on graphs, where multiple cops pursue a single robber. The aim is to compute the cop number of a graph, k, which is the minimum number of cops that ensures the capture of the robber.
From the viewpoint of parameterized complexity, CnR is W[2]-hard parameterized by k [Fomin et al., TCS, 2010].
Thus, we study structural parameters of the input graph. We begin with the vertex cover number (𝗏𝖼𝗇). First, we establish that k ≤𝗏𝖼𝗇/3+1. Second, we prove that CnR parameterized by 𝗏𝖼𝗇 is 𝖥𝖯𝖳 by designing an exponential kernel. We complement this result by showing that it is unlikely for CnR parameterized by 𝗏𝖼𝗇 to admit a polynomial compression. We extend our exponential kernels to the parameters cluster vertex deletion number and deletion to stars number, and design a linear vertex kernel for neighborhood diversity. Additionally, we extend all of our results to several well-studied variations of CnR.
§ INTRODUCTION
In pursuit-evasion, a set of agents, called pursuers, plan to catch one or multiple evaders. Classically, pursuit-evasion games were played on geometric setups, where pursuers and evaders move on the plane following some rules <cit.>. Parsons <cit.> formulated pursuit-evasion on graphs to model the search for a person trapped in caves, giving rise to the field of graph searching. Since then, pursuit-evasion has been studied extensively, having applications in artificial intelligence <cit.>, robot motion planning <cit.>, constraint satisfaction and database theory <cit.>, distributed computing <cit.> and network decontamination <cit.>, and significant implications in graph theory and algorithms <cit.>.
CnR is one of the most intensively studied pursuit-evasion games on graphs, where a set of cops pursues a single robber. Players move in discrete time steps alternately, starting with the cops. In each move, a player can move to an adjacent vertex, and the cops win by capturing the robber (i.e., if a cop and the robber occupy the same vertex). The goal is to compute the cop number of a graph G, denoted 𝖼(G), which is the minimum number of cops required to win in G. We define the game formally in Section 2. CnR is well studied in the artificial intelligence literature under the name Moving Target Pursuit <cit.>, where we consider sub-optimal but faster strategies from an applicative point of view. The results have found numerous applications in game design, police chasing, path planning, and robot motion planning <cit.>.
Determining the parameterized complexity of games is a well-studied research topic <cit.>.
Most pursuit-evasion games are, in fact, AW[*]-hard <cit.>. In particular, CnR is W[2]-hard parameterized by 𝖼(G) <cit.>. Thus, we consider structural parameterizations, focusing on kernelization, also known as polynomial-time preprocessing with a parametric guarantee. Due to the profound impact of preprocessing, kernelization was termed “the lost continent of polynomial time” <cit.>. We begin with the most studied structural parameter in parameterized complexity: the vertex cover number (𝗏𝖼𝗇) of the input graph. We bound 𝖼(G) in terms of 𝗏𝖼𝗇, as well as achieve both positive and negative results concerning the kernelization complexity of CnR parameterized by 𝗏𝖼𝗇. We generalize our kernelization results to the smaller parameters cluster vertex deletion number (𝖼𝗏𝖽) and deletion to stars number (𝖽𝗍𝗌), as well as to the parameter neighborhood diversity (𝗇𝖽). Furthermore, we extend all our results to several well-studied variants of CnR.
The choice of 𝗏𝖼𝗇 as a parameter to study pursuit-evasion games is natural due to various scenarios where 𝗏𝖼𝗇 is significantly smaller than the graph size. For example, this includes scenarios where we model the existence of one or few (possibly interconnected) central hubs—for illustration, suppose an intruder is hiding in a system of buildings where we have only few corridors but a large number of rooms, or suppose we have few virtual servers with many stations (e.g., of private users) that can communicate only with the servers. Furthermore, 𝗏𝖼𝗇 is one of the most efficiently computable parameters from both approximation <cit.> and parameterized <cit.> points of view, making it suitable from an applicative perspective even when a vertex cover is not given along with the input. Moreover, 𝗏𝖼𝗇 is the best choice for proving negative results—indeed, our negative result on the kernelization complexity of CnR for 𝗏𝖼𝗇 implies the same for many other well-known smaller parameters such as treewidth, treedepth and feedback vertex set <cit.>. One shortcoming of 𝗏𝖼𝗇 as a parameter is that it is very high for some simple (and easy to resolve) dense graphs like complete graphs. However, we generalize our kernel to 𝖼𝗏𝖽, which is small for these dense graphs, and to 𝖽𝗍𝗌. Furthermore, we design a linear kernel for the well-studied parameter 𝗇𝖽.
§.§ Brief Survey
CnR was independently introduced by Quilliot <cit.> and by Nowakowski and Winkler <cit.> with exactly one cop[In fact, a specific instance of CnR on a specific graph was given as a puzzle in Problem 395 of the book Amusements in Mathematics <cit.> already in 1917.]. Aigner and Fromme <cit.> generalized the game to multiple cops and defined the cop number of a graph.
The notion of cop number and some fundamental techniques introduced by Aigner and Fromme <cit.> yielded a plethora of results on this topic. For more details, we refer the reader to the book <cit.>.
The computational complexity of finding the cop number of a graph has been a challenging subject of research. On the positive side, Berarducci and Intrigila <cit.> gave a backtracking algorithm that decides whether G is k-copwin in 𝒪(n^2k+1) time.
On the negative side, Fomin et al. <cit.> proved that determining whether G is k-copwin is NP-hard, and W[2]-hard parameterized by k. Moreover, Mamino <cit.> showed that the game is PSPACE-hard, and later, Kinnersley <cit.> proved that determining the cop number of a graph is, in fact, EXPTIME-complete. Recently, Brandt et al. <cit.> provided fine-grained lower bounds, proving that the time complexity of any algorithm for is Ω(n^k-o(1)) conditioned on Strong Exponential Time Hypothesis (𝖲𝖤𝖳𝖧), and 2^Ω (√(n)) conditioned on Exponential Time Hypothesis (𝖤𝖳𝖧).
Since CnR admits an XP-time algorithm, it is sensible to bound the cop number for various graph classes or by some structural parameters. Nowadays, we know that the cop number is 3 for the class of planar graphs <cit.> and toroidal graphs <cit.>, 9 for unit-disk graphs <cit.>, 13 for string graphs <cit.>, and is bounded for bounded genus graphs <cit.> and minor-free graphs <cit.>. Moreover, it is known that the cop number of a graph G is at most 𝗍𝗐(G)/2+1 <cit.>, where 𝗍𝗐(G) denotes the treewidth of G, and at most 𝖼𝗐(G) <cit.>, where 𝖼𝗐(G) denotes the clique-width of G.
§.§ Our Contribution
We conduct a comprehensive analysis of CnR parameterized by 𝗏𝖼𝗇. We start by bounding the cop number of a graph by its vertex cover number:
For a graph G, 𝖼(G) ≤𝗏𝖼𝗇/3+1.
The proof is based on the application of three reduction rules. Each of our rules controls its own cop that, in particular, guards at least three vertices that belong to the vertex cover. Once our rules are no longer applicable, we exhibit that the remaining unguarded part of the graph is of a special form. In particular, we exploit this special form to prove that, now, the usage of only two additional cops suffices.
We complement Theorem <ref> with an argument (Lemma <ref>) that it might be difficult to improve this bound further using techniques similar to ours.
Second, we prove that CnR parameterized by 𝗏𝖼𝗇 is 𝖥𝖯𝖳 by designing a kernelization algorithm:
CnR parameterized by 𝗏𝖼𝗇 admits a kernel with at most 𝗏𝖼𝗇+ 2^𝗏𝖼𝗇/√(𝗏𝖼𝗇) vertices.
Our kernel is also based on the application of reduction rules. However, these rules are very different than those used for the proof of Theorem 1. While our main rule is quite standard in kernelization (involving the removal of, in particular, false twins), the proof of its correctness is (arguably) not.
Theorem <ref>, along with Theorem <ref> and an XP-algorithm (Proposition <ref>), gives the following immediate corollary:
CnR is 𝖥𝖯𝖳 parameterized by 𝗏𝖼𝗇, and is solvable in (𝗏𝖼𝗇+2^𝗏𝖼𝗇/√(𝗏𝖼𝗇))^𝗏𝖼𝗇/3+2· n^𝒪(1) time.
We complement our kernel by showing that it is unlikely for CnR to admit a polynomial compression, by providing a polynomial parameter transformation from a problem that is known not to admit a polynomial compression. In particular, our reduction makes non-trivial use of a known construction of a special graph having high girth and high minimum degree.
CnR parameterized by 𝗏𝖼𝗇 does not admit a polynomial compression, unless 𝖭𝖯⊆𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒.
Next, we present a linear kernel for CnR parameterized by neighbourhood diversity:
CnR parameterized by 𝗇𝖽 admits a kernel with at most 𝗇𝖽 vertices.
On the positive side, we extend our exponential kernel to two smaller structural parameters, 𝖼𝗏𝖽 and 𝖽𝗍𝗌:
CnR parameterized by 𝖼𝗏𝖽 admits a kernel with at most 2^2^𝖼𝗏𝖽 + √(𝖼𝗏𝖽) vertices. Moreover, CnR parameterized by 𝖽𝗍𝗌 admits a kernel with at most 2^2^𝖽𝗍𝗌 + 𝖽𝗍𝗌^1.5 vertices.
Several variants of CnR have been studied due to their copious applications. We extend our results, parameterized by 𝗏𝖼𝗇, to some of the most well-studied ones. We define these variants (and the notation used) in Section <ref>. We first bound the cop number of these variants by 𝗏𝖼𝗇:
For a graph G: (1) 𝖼_lazy≤𝗏𝖼𝗇/2 +1; (2) 𝖼_attack≤𝗏𝖼𝗇/2 +1; (3) 𝖼_active(G) ≤𝗏𝖼𝗇; (4) 𝖼_surround(G) ≤𝗏𝖼𝗇; (5) 𝖼_s(G) ≤𝗏𝖼𝗇 (for any value of s); (6) for a strongly connected orientation G of G, 𝖼(G) ≤𝗏𝖼𝗇.
We also extend our exponential kernel to these variants:
Lazy CnR and Attacking CnR parameterized by 𝗏𝖼𝗇 admit a kernel with at most 𝗏𝖼𝗇+2^𝗏𝖼𝗇/√(𝗏𝖼𝗇) vertices. Moreover, CnR on strongly connected directed graphs admits a kernel with at most 3^𝗏𝖼𝗇+𝗏𝖼𝗇 vertices.
Then, we present a slightly more general kernelization that works for most variants of the game. In particular, we define a new variant of the game (in Section <ref>), Generalized CnR, which generalizes various well-studied variants of CnR. We have the following result, which proves that Generalized CnR parameterized by 𝗏𝖼𝗇 admits an exponential kernel.
Generalized CnR parameterized by 𝗏𝖼𝗇 admits a kernel with at most 𝗏𝖼𝗇+𝗏𝖼𝗇· 2^𝗏𝖼𝗇 vertices.
Then, we show that the same kernelization algorithm also provides us the following result:
Several of the well-studied variants of CnR that are generalized by Generalized CnR, parameterized by 𝗏𝖼𝗇, admit a kernel with at most 𝗏𝖼𝗇+𝗏𝖼𝗇· 2^𝗏𝖼𝗇 vertices.
Finally, we complement our exponential kernels for these variants by arguing about their incompressibility:
These variants of CnR, as well as CnR on strongly connected directed and oriented graphs, parameterized by 𝗏𝖼𝗇, do not admit a polynomial compression, unless 𝖭𝖯⊆𝖼𝗈𝖭𝖯/𝗉𝗈𝗅𝗒.
§.§ Additional Related Works
For a graph with girth at least 5, the cop number is lower bounded by the minimum degree of the graph <cit.>. As implied by the lower bound for the Zarankiewicz problem <cit.>, an extremal graph with girth 5 has Ω(n^3/2) edges.
In a graph with Ω(n^3/2) edges, if there is a vertex whose degree is smaller than c√(n), for an appropriate constant c, then we can remove it and still get a smaller graph with Ω(n^3/2) edges.
Hence, eventually, every vertex has degree Ω(√(n)).
Therefore, the cop number of such a graph is Ω(√(n)).
Meyniel <cit.> conjectured this to be tight, that is, 𝒪(√(n)) cops are sufficient to capture the robber in any connected graph.
This is probably the deepest conjecture in this field (see <cit.>). Since then, several attempts have been made to bound the cop number of general graphs <cit.>. Although these results establish that 𝖼(G) = o(n), even the question of whether 𝖼(G) = 𝒪(n^1-ϵ), for some ϵ >0, remains open.
Many graph classes have unbounded cop number. The graph classes for which the cop number is Ω(√(n)) are called Meyniel extremal. These include bipartite graphs <cit.>, subcubic graphs <cit.>, and polarity graphs <cit.>. Meyniel's conjecture was also considered for random graphs <cit.>.
Lastly, we remark that variations of vary mainly depending on the capabilities of the cops and the robber. Some of these variations were shown to have correspondence with several width measures of graphs like treewidth <cit.>, pathwidth <cit.>, tree-depth <cit.>, hypertree-width <cit.>, cycle-rank <cit.>, and directed tree-width <cit.>. Moreover, Abraham et al. <cit.> defined the concept of a cop-decomposition, which is based on the cop strategy in the game on minor-free graphs provided by Andreae <cit.>, and showed that it has significant algorithmic applications.
§ PRELIMINARIES
For ℓ∈ℕ, let [ℓ] = {1,…, ℓ}. Whenever we mention a/b, we mean ⌈a/b⌉.
§.§ Graph Theory
For a graph G, we denote its vertex set by V(G) and edge set by E(G). We denote the size of V(G) by n and size of E(G) by m. In this paper, we consider finite, connected[The cop number of a disconnected graph is the sum of the cop numbers of its components; hence, we assume connectedness.], and simple graphs.
Let v be a vertex of a graph G. Then, by N(v) we denote the open neighbourhood of v, that is, N(v)= {u | uv ∈ E(G)}.
By N[v] we denote the close neighbourhood of v, that is, N[v] = N(v) ∪{v}. For X ⊆ V(G), we define N_X(v) = N(v) ∩ X and N_X[v] = N[v] ∩ X. We say that v dominates u if u∈ N[v]. The girth of a graph G is the length of a shortest cycle contained in G.
A u,v-path is a path with endpoints u and v. A path is isometric if it is a shortest path between its endpoints. For u,v∈ V(G), let d(u,v) denote the length of a shortest u,v-path.
Let G be a graph and U⊆ V(G). Then, G[U] denotes the subgraph of G induced by U. A set U ⊆ V(G) is a vertex cover if G[V(G) ∖ U] is an independent set. The minimum cardinality of a vertex cover of G is its vertex cover number (𝗏𝖼𝗇). Moreover, U is a cluster vertex deletion set if G[V(G) ∖ U] is a disjoint union of cliques. The minimum size of a cluster vertex deletion set of a graph is its cluster vertex deletion number (𝖼𝗏𝖽). Additionally, U is a deletion to stars set if G[V(G) ∖ U] is a disjoint union of star graphs. The minimum size of a deletion to stars set of a graph is its deletion to stars number (𝖽𝗍𝗌). Two vertices u,v ∈ V(G) have the same type if and only if N(v)∖{u} = N(u) ∖{v}. A graph G has neighborhood diversity at most w if there exists a partition of V(G) into at most w sets, such that all the vertices in each set have the same type.
§.§ Cops and Robber
CnR is a two-player perfect-information pursuit-evasion game played on a graph.
One player, referred to as the cop player, controls a set of cops, and the other player, referred to as the robber player, controls a single robber.
The game starts with the cop player placing each cop on some vertex of the graph, and multiple cops may simultaneously occupy the same vertex. Then, the robber player places the robber on a vertex.
Afterwards, the cop player and the robber player make alternate moves, starting with the cop player.
In the cop player move, the cop player, for each cop, either moves it to an adjacent vertex (along an edge) or keeps it on the same vertex. In the robber player move, the robber player does the same for the robber. For simplicity, we will say that the cops (resp., robber) move in a cop (resp., robber) move instead of saying that the cop (resp., robber) player moves the cops (resp., robber). Throughout, we denote the robber by ℛ.
A situation where one of the cops, say 𝒞, occupies the same vertex as ℛ is a capture. (We also say that 𝒞 captures ℛ and that ℛ is captured by 𝒞.) The cops win if they have a strategy to capture ℛ, and ℛ wins if it has a strategy to evade capture indefinitely. A graph G is k-copwin if k cops have a winning strategy in G.
The cop number of G, denoted 𝖼(G), is the minimum k such that G is k-copwin. For brevity, G is said to be copwin if it is 1-copwin (i.e., 𝖼(G) = 1).
Accordingly, we have the following decision version of the problem.
Input: A graph G and an integer k ∈ℕ. Question: Is G k-copwin?
We say that some cops guard a subgraph H of G if ℛ cannot enter H without getting captured by one of these cops in the next cop move. We shall use the following result:
Let P be an isometric path in G. Then one cop can guard P after a finite number of rounds/cop moves.
Currently, the best known algorithm to decide whether G is k-copwin is by Petr et al. <cit.>:
CnR is solvable in 𝒪(kn^k+2) time.
If a cop occupies a vertex v, then it attacks N[v]. A vertex u is safe if it is not being attacked by any cop. If ℛ is on a vertex that is not safe, then ℛ is under attack.
§.§ Variations of CnR
Several variations of CnR have been studied in the literature, differing mainly in the rules of movement of the agents, the definition of capture, and the capabilities of the agents. We provide below the definitions of the games considered in this paper. We list below some of the primary properties of the gameplay in which these variations differ:
* Speed of agents: If an agent has speed s, where s∈ℕ, then the agent can move along at most s edges in its turn. We note that a robber with speed s cannot move over a cop, that is, the robber can move along a path of length at most s not containing any cop, in its turn.
* Lazy/active/flexible cops:
Let C be the set of cops and let A∪ F ∪ L be a partition of the set of cops such that A is the set of active cops, F be the set of flexible cops, and L be the set of lazy cops. Then, in each cop move, at most one cop from L can make a move, each cop from A must make a move, and each cop from F can either make a move or stay on the same vertex. Unless mentioned otherwise, all cops are assumed to be flexible.
* Reach of cops:
If a cop 𝒞_i has reach λ_i, then ℛ cannot access a vertex that is at a distance at most λ_i from the vertex occupied by 𝒞_i. Here, think of the cop 𝒞_i as having a gun with range λ_i. Hence, if 𝒞_i can reach a vertex that is at distance at most λ_i from the robber's vertex at the end of a cop move, then 𝒞_i can shoot ℛ, and the cops win. Similarly, on a robber move, even if ℛ has speed s, it can move only along a path of length at most s that does not contain any vertex that is at a distance at most λ_i from 𝒞_i. In CnR, for each cop 𝒞_i, λ_i = 0.
* Visible/invisible robber: If the robber is visible, then the cops know the position of the robber. If the robber is invisible, then the cops do not know the position of the robber. Moreover, we say that cops have d-visibility if cops can see the position of the robber only if it is at most d edges away from at least one of the cops.
Next, we define the variants of for which we will extend our results.
Lazy CnR: This game <cit.> is one of the most well-studied variants of CnR <cit.>. In this variant, the cops are lazy, that is, at most one cop can move during a cops' turn. This restricts the ability of the cops with respect to the classical version. The minimum number of lazy cops that can ensure a capture in a graph G is known as the lazy cop number and is denoted by 𝖼_lazy(G). Clearly, 𝖼(G) ≤𝖼_lazy(G), as 𝖼_lazy(G) cops can capture the robber in the classical version (using the winning strategy of the lazy game). We remark that this game is also studied under the name one-cop-moves game <cit.>.
Attacking CnR:
In this variant <cit.>, the robber is able to strike back against the cops. If, on a robber's turn, there is a cop in its neighborhood, then the robber can attack the cop and eliminate it from the game. However, if more than one cop occupies a vertex and the robber attacks them, then only one of the cops gets eliminated, and the robber gets captured by one of the other cops on that vertex. The cop number for capturing an attacking robber on a graph G is denoted by 𝖼_attack(G), and is referred to as the attacking cop number of G. Clearly, 𝖼(G) ≤𝖼_attack(G) ≤ 2 ·𝖼(G), as, on the one hand, 𝖼_attack(G) cops can capture the robber in the classical version. On the other hand, if we play the attacking version with 2·𝖼(G) cops using the strategy of the classical variant with the only difference that there are always at least two cops on a vertex, then the cops have a winning strategy.
Active CnR:
In the game of Active CnR <cit.>, each cop, as well as the robber, is active, that is, in a cop/robber move, each cop/robber has to move to an adjacent vertex. The active cop number of a graph G, denoted by 𝖼_active(G), is the minimum number of cops that can ensure capture in this game. It is easy to see that 𝖼_active(G) ≤ 2·𝖼(G), as if we keep one extra cop adjacent to each cop in the winning strategy for CnR, then whenever some cop has to skip a move, it can simply do so by switching with the extra cop adjacent to it.
Surrounding CnR:
In the game of Surrounding CnR <cit.>, the definition of capture is different. In this game, a cop and the robber can occupy the same vertex of the graph during the game, but the robber cannot end its turn by remaining at a vertex occupied by some cop. The cops win by surrounding the robber, that is, if the robber occupies a vertex v, then there is a cop at each vertex u∈ N(v). The surrounding cop number for a graph G is denoted as 𝖼_surround(G). It is easy to see that 𝖼_surround(G) ≥δ(G), where δ(G) is the minimum degree of the graph.
CnR with a Fast Robber:
In this game <cit.>, the robber can move faster than the cops. If ℛ has speed s, then it can move along a path with at most s edges not containing any cop. The minimum number of cops that can ensure the capture of a fast robber with speed s in a graph G is denoted by 𝖼_s(G). For s ≥ 2, deciding whether 𝖼_s(G)≤ k is NP-hard as well as W[2]-hard even when the input graph G is restricted to be a split graph <cit.>. The game of CnR with a fast robber is well-studied <cit.>.
CnR on Directed Graphs:
The game of CnR is also well-studied on oriented/directed graphs <cit.>. The game is played on a directed graph G, and the players can only move along the orientation of the arcs.
Finally, we define a variant of CnR that generalizes many well-studied variants of CnR:
Generalized CnR:
Consider the following generalized version of CnR. Here, the input is (G,𝒞_1,…,𝒞_k, ℛ), where each cop 𝒞_i has speed s_i (possibly different for each cop) and ℛ has speed s_R. Moreover, each cop can be either forced to be active (all active cops have to move in each turn), lazy (at most one lazy cop moves in each turn), or flexible (a flexible cop can either move or stay on the same vertex in its move). Moreover, the robber can also be forced to be either lazy or flexible. Furthermore, each cop 𝒞_i can have reach λ_i (possibly different for each cop). This game generalizes several of the well-studied variants of CnR defined above, as well as Cops and Robber From a Distance <cit.>. It also generalizes the game studied in <cit.>.
Finally, we note that we assume the notion of "being active" to be defined only when the agent has speed s=1. But, this notion can be defined in multiple ways if the agent has speed s>1: the player might have to move along at least s'≤ s edges, the player may have to move to a vertex at a distance at least s' ≤ s from the current vertex, the player may or may not be allowed to repeat edges, and so on. We remark that our kernelization result for Generalized CnR can be made to work, with some changes, under any of these notions.
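To make the parameter space of Generalized CnR concrete, the following Python sketch records exactly the quantities listed above; the class and field names are our own illustrative choices and not part of any existing implementation.

from dataclasses import dataclass
from typing import Dict, List, Literal, Set

Activity = Literal["active", "lazy", "flexible"]

@dataclass
class CopSpec:
    speed: int = 1               # s_i: maximum number of edges traversed per move
    activity: Activity = "flexible"
    reach: int = 0               # lambda_i: reach 0 recovers the classical capture rule

@dataclass
class GeneralizedCnRInstance:
    adjacency: Dict[object, Set[object]]   # the graph G, as vertex -> set of neighbours
    cops: List[CopSpec]
    robber_speed: int = 1                  # s_R
    robber_activity: Activity = "flexible" # the robber may also be forced to be lazy

# Example: two flexible unit-speed cops, one with reach 1, against a robber of speed 2.
# game = GeneralizedCnRInstance(adjacency=G, cops=[CopSpec(), CopSpec(reach=1)], robber_speed=2)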
§.§ An XP Algorithm for Variants
For graph searching games, there is a standard technique to get an XP-time algorithm with running time n^𝒪(k) (where n is the size of the input graph and the question is whether k cops have a winning strategy). This technique involves generating a game graph where each vertex represents a possible placement of all the agents on the vertices of G. Since k cops and a single robber can have n^k+1 possible placements on G, the game graph has n^k+1 vertices. The next step is to mark all of the winning states (that is, those where the robber is captured). Afterwards, we keep adding states to the set of winning states in the following manner. On a cop move, from a given state S, if there exists a movement of the cops that changes the game state S to a winning state, we add S to the winning states. On a robber move, for a game state S, if all the possible moves of the robber lead to a winning state, we add S to the winning states. Finally, if there exists a position of the k cops such that, for any position of the robber, the resulting states are winning states, we declare that k cops have a winning strategy in G. It is easy to see that this algorithm can be implemented in n^𝒪(k) time.
Petr, Portier, and Versteegen <cit.> gave an implementation of this algorithm, for CnR, that runs in 𝒪(kn^k+2) time. It is not difficult to see that this algorithm can be made to work for all the variants we discussed by changing the rules to navigate between game states. For Attacking CnR, the only extra consideration is that if ℛ attacks a cop (among the k cops) and does not get captured in the next cop move, then we have a game state, say S', with k-1 cops and one robber, where the placement of these agents is a subset of a placement of the k+1 agents in one of the original game states, and hence we prune S'. Thus, we have the following proposition.
For any variant of CnR considered in this paper, an instance (G,k) can be solved in 𝒪(kn^k+2) time.
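To illustrate the technique behind this proposition, the following Python sketch computes the cop-winning states of classical CnR by the fixed-point procedure described above. The state encoding and function name are our own, no attempt is made to match the optimized 𝒪(kn^k+2) implementation, and the sketch assumes a visible robber with all agents moving at speed 1.

from itertools import product

def is_k_copwin(adj, k):
    # adj: dict mapping each vertex of G to the set of its neighbours.
    V = list(adj)
    closed = {v: set(adj[v]) | {v} for v in V}      # a player may also stay put
    placements = list(product(V, repeat=k))         # all (ordered) cop placements

    winning = set()                                 # states from which the cops win
    for cops in placements:
        for r in V:
            if r in cops:                           # the robber is already caught
                winning.add((cops, r, "cop"))
                winning.add((cops, r, "robber"))

    changed = True
    while changed:                                  # backward induction to a fixed point
        changed = False
        for cops in placements:
            for r in V:
                if r in cops:
                    continue
                # Cops to move: they win if SOME joint move reaches a winning state.
                if (cops, r, "cop") not in winning and any(
                        (nxt, r, "robber") in winning
                        for nxt in product(*(closed[c] for c in cops))):
                    winning.add((cops, r, "cop")); changed = True
                # Robber to move: the cops win if EVERY robber move reaches a winning state.
                if (cops, r, "robber") not in winning and all(
                        (cops, s, "cop") in winning for s in closed[r]):
                    winning.add((cops, r, "robber")); changed = True

    # The cops place themselves first, the robber answers, and then the cops move.
    return any(all((cops, r, "cop") in winning for r in V) for cops in placements)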
§.§ Parameterized complexity
In the framework of parameterized complexity, each problem instance is associated with a non-negative integer, called a parameter. A parameterized problem Π is fixed-parameter tractable (𝖥𝖯𝖳) if there is an algorithm that, given an instance (I,k) of Π, solves it in time f(k)· |I|^𝒪(1) for some computable function f(·). Central to parameterized complexity is the W-hierarchy of complexity classes:
𝖥𝖯𝖳⊆𝖶[1]⊆𝖶[2]⊆…⊆𝖷𝖯.
Two instances I and I' (possibly of different problems) are equivalent when I is a Yes-instance if and only if I' is a Yes-instance. A compression of a parameterized problem Π_1 into a (possibly non-parameterized) problem Π_2 is a polynomial-time algorithm that maps each instance (I,k) of Π_1 to an equivalent instance I' of Π_2 such that size of I' is bounded by g(k) for some computable function g(·). If g(·) is polynomial, then the problem is said to admit a polynomial compression.
A kernelization algorithm is a compression where Π_1 = Π_2. Here, the output instance is called a kernel.
Let Π_1 and Π_2 be two parameterized problems. A polynomial parameter transformation from Π_1 to Π_2 is a polynomial time algorithm that, given an instance (I,k) of Π_1, generates an equivalent instance (I',k') of Π_2
such that k' ≤ p(k), for some polynomial p(·). It is well-known that if Π_1 does not admit a polynomial compression, then Π_2 does not admit a polynomial compression <cit.>. We refer to the books <cit.> for details on parameterized complexity.
§ BOUNDING THE COP NUMBER
In the following lemma, we give a general upper bound for the cop number, which we use to derive bounds for several graph parameters.
Let G be a graph and let U ⊆ V(G) be a set of vertices such that for each connected component H of G[V(G) ∖ U], 𝖼(H) ≤ℓ. Then, 𝖼(G) ≤⌈|U|/2⌉ +ℓ.
We note that this proof uses techniques used to bound 𝖼(G) in terms of tw(G) by Joret et al. <cit.>. Denote U = {u_1,…, u_q}. Consider isometric paths P_1, …, P_⌈q/2⌉ such that the endpoints of P_i are u_2i-1 and u_2i. Note that these isometric paths always exist as we assume that the graph is connected. Here, P_⌈q/2⌉ might be a single vertex path containing only the vertex u_q.
Now, we guard each path P_i using a single cop (due to Proposition <ref>). These ⌈q/2⌉ cops restrict the robber to one connected component H of G[V(G)∖ U]. Since each of these components is ℓ-copwin, q/2 +ℓ cops have a clear winning strategy in G.
We know that the classes of star graphs, complete graphs, chordal graphs, and trees are copwin <cit.>. These bounds, along with Lemma <ref>, imply the following theorem.
Let G be a graph and t =min{𝖼𝗏𝖽, 𝖽𝗍𝗌}. Then, 𝖼(G) ≤t/2+1.
§.§ Bounding Cop Number by 𝗏𝖼𝗇:
Let U be a vertex cover of size t in G and I be the independent set V(G) ∖ U. Lemma <ref> implies that 𝖼(G) ≤⌈t/2⌉ +1. In this section, we improve this bound. First, we provide the following reduction rules.
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| ≥ 3, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N[v] ∩ U| ≥ 3, then place a cop at v and delete N[v].
[RR<ref>]
If there is an isometric path P such that P contains at least three vertices from U, then guard P using one cop and delete V(P) (see Proposition <ref>).
We remark that RR<ref> and RR<ref> can be merged, but we prefer to keep them separate to ease the presentation. Moreover, we note the following.
In the application of reduction rules RR<ref>-RR<ref>, whenever a set of vertices X ⊆ V(G) is deleted by the application of rules RR<ref>-RR<ref>, it implies that each vertex x ∈ X is being guarded by some cop, and hence, is not accessible to ℛ. We do not actually delete the vertices, and this deletion part is just for the sake of analysis. Hence, from the cop player's perspective, the graph remains connected.
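The rules can be applied greedily. The Python sketch below (helper names are ours) merges RR1 and RR2 as remarked above, detects RR3 via BFS shortest paths computed within the not-yet-guarded part of the graph, and returns, for each cop used, the rule applied and the vertices it guards; the detail of whether paths should be isometric in G or in the remaining graph is glossed over here.

from collections import deque

def shortest_path(adj, s, t, alive):
    # BFS shortest s,t-path inside the vertex set `alive`; returns None if there is none.
    prev, queue = {s: None}, deque([s])
    while queue:
        v = queue.popleft()
        if v == t:
            path = []
            while v is not None:
                path.append(v); v = prev[v]
            return path[::-1]
        for w in adj[v]:
            if w in alive and w not in prev:
                prev[w] = v; queue.append(w)
    return None

def station_cops(adj, U):
    U, alive, stationed = set(U), set(adj), []
    while True:
        # RR1/RR2 (merged): a vertex whose closed neighbourhood meets U in >= 3 alive vertices.
        v = next((v for v in alive
                  if len(({v} | set(adj[v])) & U & alive) >= 3), None)
        if v is not None:
            guarded = ({v} | set(adj[v])) & alive
            stationed.append(("RR1/RR2", v, guarded))
            alive -= guarded
            continue
        # RR3: a shortest path within the remaining graph containing >= 3 vertices of U.
        found = False
        live_U = [u for u in U if u in alive]
        for i, a in enumerate(live_U):
            for b in live_U[i + 1:]:
                p = shortest_path(adj, a, b, alive)
                if p and sum(1 for x in p if x in U) >= 3:
                    stationed.append(("RR3", (a, b), set(p)))
                    alive -= set(p)
                    found = True
                    break
            if found:
                break
        if not found:
            return stationed      # each stationed cop accounts for at least 3 vertices of U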
Second, we have the following lemma concerning the structure of the subgraphs accessible to ℛ after an exhaustive application of rules RR<ref>-RR<ref>.
Let H be a connected component of G where rules RR<ref>-RR<ref> cannot be applied anymore. Then, for every two distinct vertices x,y ∈ V(H) ∩ U, either xy ∈ E(G) or there exists a vertex w ∈ I such that xw ∈ E(G) and yw ∈ E(G).
For contradiction, let us assume that there exist two distinct vertices x,y ∈ V(H) ∩ U such that xy ∉ E(G) and there does not exist a vertex w ∈ I such that xw ∈ E(G) and yw ∈ E(G). Since x and y are part of the connected component H, there exists an x,y-path. Let P be an isometric x,y-path.
Let P = x, v_1, … , v_ℓ, y. Since vertices in I form an independent set and ℓ≥ 2, the vertices v_1, …, v_ℓ cannot be all from I. So, there exists at least one v_i, for i ∈ [ℓ], such that v_i ∈ U. Thus, P contains at least three vertices from U, and P is an isometric path. Therefore, we can apply RR<ref>, and hence, we reach a contradiction.
Next, we argue that, after an exhaustive application of rules RR<ref>-RR<ref>, the cop number of each connected component accessible to ℛ is bounded. We have the following lemma.
Once we cannot apply rules RR<ref>-RR<ref> anymore, let the robber be in a connected component H. Then, c(H) ≤ 2.
We present a winning strategy for two cops. If H contains at most two vertices from U, then the cops have a winning strategy by placing a cop on each of these vertices. Hence, we assume there exist at least three vertices in H from U. Let x and y be two distinct vertices of H from U. Then, we place a cop on each of these vertices. Denote the cops by 𝒞_1 and 𝒞_2. We consider the two cases as follows.
Case 1: If ℛ is on a vertex w ∈ I, then due to reduction rule RR<ref>, it can have at most two neighbors in U. Let them be u and v. Now, due to Lemma <ref>, the cops can move to vertices such that one of them, say x', dominates the vertex u and the other, say y', dominates the vertex v. See Figure <ref> for reference. So, the cops move to the vertices x' and y'. This restricts ℛ to stay on its current vertex w in I (else it is captured in the next move of the cops). Now, in the next move of the cops, they move to the vertices u and v. Again, this restricts ℛ to stay on the vertex w (else it is indeed captured). Finally, in the next move of the cops, the cops capture ℛ.
Case 2: If ℛ is on a vertex u ∈ U, then 𝒞_1 can move to a vertex in I, say x', to attack ℛ (due to Lemma <ref>). This forces ℛ to either move to a vertex w ∈ I or to a vertex z ∈ U. Accordingly, we consider two sub-cases.
* If ℛ moves to a vertex w ∈ I, then note that w can have at most two neighbors in U (due to RR1), and one of them is u (being attacked by 𝒞_1). Let the other neighbor of w be v. Now, 𝒞_2 can move to a vertex such that it attacks v (due to Lemma <ref>). This game state is identical to case 1. Hence, the cops can capture the robber in two rounds.
* If ℛ moves to a vertex z ∈ U, then 𝒞_1 moves to u. This forces ℛ to move to a vertex in I, since z can have only one neighbor in U (due to RR<ref>), namely u, which is now occupied by 𝒞_1, with both cops being in U. This game state is again identical to case 1, and thus the cops win in at most two rounds.
This completes our proof.
Finally, we have the following theorem.
For a graph G, 𝖼(G) ≤𝗏𝖼𝗇/3+1.
The correctness of this theorem follows from Lemma <ref> and the fact that, in each of the reduction rules RR<ref>, RR<ref>, and RR<ref>, we remove at least three vertices from U while placing only one cop. If we can apply these rules t/3 times, then ℛ gets restricted to a vertex in I, and thus one additional cop can capture ℛ. Else, we apply these rules at most t/3-1 times and then need two additional cops (by Lemma <ref>); that is, we need at most t/3 +1 cops overall to ensure capture.
We note here that a similar technique will fail if we try to “remove” four vertices in each reduction rule. More precisely, if we have the following reduction rules, then we might not get a graph with a bounded (by a constant independent of the input) cop number.
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| ≥ 4, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N[v] ∩ U| ≥ 4, then place a cop at v and delete N[v].
[RR<ref>]
If there is an isometric path P such that P contains at least four vertices from U, then guard P using one cop and delete V(P) (see Proposition <ref>).
We have the following claim.
For every k ∈ℕ, there exists a graph G with a vertex cover U and independent set I = V(G) ∖ U, such that we cannot apply the rules RR<ref>-RR<ref>, and 𝖼(G)>k.
Bonato and Burgess <cit.> proved that for every k, there exists a diameter-2 graph H such that c(H) ≥ k. Let H be a diameter-2 graph such that c(H) ≥ k.
Joret et al. <cit.> showed that subdividing each edge of a graph an equal number of times does not reduce the cop number. So, we subdivide each edge of H to get the graph G such that 𝖼(G) ≥ k. Now, we can put the original vertices in the vertex cover U, and the newly introduced vertices in the independent set I. We cannot apply any of the rules RR<ref> (because each vertex in I has degree exactly 2), RR<ref> (because U is an independent set), and RR<ref> (since any isometric path in G containing more than three vertices of U will contradict the fact that H is a diameter-2 graph).
Hence, G is a graph that satisfies the conditions of our lemma.
§.§ Bounding the Cop Number for Variants
Here we extend the result of Theorem <ref> to several variations of the game. In particular, we prove the following result.
Let G be a graph with a vertex cover U of size t. Then,
* 𝖼_lazy≤t/2 +1.
* 𝖼_attack≤t/2 +1.
Let I be the independent set V(G) ∖ U. First, we note here that in Attacking CnR, one cop cannot ensure the guarding of an isometric path <cit.>, and in Lazy CnR, multiple cops, say ℓ cops, cannot ensure the guarding of ℓ paths simultaneously. (This is evident from the fact that there exists a planar graph G with 𝖼_lazy(G) ≥ 4 <cit.>.) Therefore, reduction rules RR<ref>-RR<ref> will not directly imply an upper bound on the respective cop numbers here. So, we have the following reduction rules:
[RR<ref>]
If there is a vertex v ∈ I such that |N(v)| > 1, then place a cop at v and delete N[v].
[RR<ref>]
If there is a vertex v ∈ U such that |N_U[v]| > 1, then place a cop at v and delete N[v].
Observe that after an exhaustive application of reduction rules RR<ref> and RR<ref>, we are left with a collection of stars, each of which has its center vertex in U.
In the case of Lazy CnR, we can easily apply rules RR<ref> and RR<ref>, since the cops do not move once placed according to an application of reduction rules RR<ref> and RR<ref>, except for when they move to capture ℛ. Finally, ℛ is restricted to a star, and one extra lazy cop can move and capture ℛ.
In the case of Attacking CnR, all cops start at the same vertex. Whenever the cop player wants to station one of the cops at a vertex v according to rules RR<ref> and RR<ref>, all of the cops that are not stationed yet move together to the vertex v (to avoid getting attacked). Note that once a cop is stationed at a vertex u, the cop never moves and hence can never be attacked (because if ℛ wants to attack a stationed cop at a vertex v, it has to reach a vertex in N(v) in the previous round, and then the cop at v can move and capture ℛ). Once we cannot apply rules RR<ref> and RR<ref> anymore, ℛ is restricted to a star. At this point, if there are at least two unstationed cops, then these two cops can move to capture ℛ. Else, let v be the last vertex where we stationed a cop. Since at this point we have stationed all but one cop (t/2 cops stationed), observe that for each vertex x∈ U, there is a cop in N[x], and therefore, ℛ is restricted to one vertex, say, u, of I. Now, ℛ can only attack a cop if the cop is at a vertex in N(u) (and N(u)⊆ U). Finally, the only unstationed cop, say 𝒞, moves to a vertex in N(u) in a finite number of steps (at this point ℛ cannot attack 𝒞 without getting captured, as 𝒞 is on a vertex in U), and 𝒞 captures ℛ in the next round.
The bound on the cop numbers follow from the fact that in each reduction rule, we remove at least two vertices from U and place only one cop.
We have the following straightforward observation concerning the bounds on the cop number for the remaining variants.
Let t be the 𝗏𝖼𝗇 of a graph G. Then, 𝖼_active(G) ≤ t, 𝖼_surround(G) ≤ t, 𝖼_s(G) ≤ t (for any value of s), and for a strongly connected orientation G of G, 𝖼(G) ≤ t.
We remark that the cop number of an oriented graph G (with underlying graph G) that is not strongly connected can be arbitrarily larger than the 𝗏𝖼𝗇 of G. To see this, consider a vertex cover U of size t in G. Next, we add ℓ vertices v_1, …, v_ℓ such that each vertex v_i, for i ∈ [ℓ], has only outgoing edges to vertices in U. Now, if we do not place a cop on some v_j, for j ∈ [ℓ], then ℛ can start at v_j and the cops can never capture ℛ. Hence, 𝖼(G) ≥ℓ.
The proof of Theorem <ref> directly follows from Lemma <ref> and Observation <ref>.
§ KERNELIZATION ALGORITHMS
In this section, we provide kernelization algorithms for CnR and its variants.
§.§ Exponential Kernel for CnR by 𝗏𝖼𝗇:
Let G be a graph where a vertex cover U of size t is given. If no such vertex cover is given, then we can compute a vertex cover U of size t≤ 2·𝗏𝖼(G) using a polynomial-time approximation algorithm <cit.>. Then, the vertices in V(G) ∖ U form an independent set I of size n-t. Recall that the question is whether G is k-copwin.
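When a vertex cover is not part of the input, the classical maximal-matching 2-approximation is enough for this step; a minimal Python sketch (function name ours):

def approx_vertex_cover(adj):
    # Both endpoints of every edge of a greedily built maximal matching enter the
    # cover; the result is a vertex cover of size at most twice the minimum.
    cover = set()
    for u in adj:
        if u in cover:
            continue
        for v in adj[u]:
            if v not in cover:
                cover.update((u, v))
                break
    return cover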
Our kernelization algorithm is based on the exhaustive application of the following reduction rules.
[RR<ref>]
If k ≥t/3+1, then answer positively.
[RR<ref>]
If k = 1, then apply an 𝒪(n^3) time algorithm (Proposition <ref>) to check whether G is copwin.
[RR<ref>]
If there are two distinct vertices u,v ∈ I such that N(u) ⊆ N(v), then delete u.
The safeness of rule RR<ref> follows from Theorem <ref>. For the safeness of rule RR<ref>, we have the following lemma. We note that Lemma <ref> can also be derived from <cit.>, but we give a self-contained proof for the sake of completeness.
Let u and v be two distinct vertices of G such that N(u) ⊆ N(v). Consider the subgraph H of G induced by V(G)∖{u}. Let k≥ 2. Then, G is k-copwin if and only if H is k-copwin.
First, we show that if G is k-copwin, then H is k-copwin. For the graph H, the k cops borrow the winning strategy that they have for G, with the only difference that whenever a cop has to move to the vertex u in G, it moves to v (in H) instead. Since N(u) ⊆ N(v), the cop can make the next move as it does in the winning cop strategy for G. Note that using this strategy, the cops can capture ℛ if ℛ is restricted to V(H) in G. Therefore, using this strategy, k cops will capture ℛ in H as well.
Second, we show that if H is k-copwin, then G is is k-copwin. Here, for each vertex x ≠ u of G, we define I(x) = x, and for u, we define I(u)= v. Observe that for each x ∈ V(G), I(x) is restricted to H and if xy ∈ E(G), then I(x)I(y) ∈ E(H). Therefore, every valid move of a player from a vertex x to y in G can be translated to a valid move from I(x) to I(y) in H. Now, the cops have the following strategy. If the robber is on a vertex x, the cops consider the image of the robber on the vertex I(x). Since the robber's image is restricted to H, the cops can use the winning strategy for H to capture the image of the robber in G. Once the image is captured, if the robber is not on the vertex u, then the robber is also captured. Otherwise, the robber is on the vertex u, and at least one cop is on v. See Figure <ref> for an illustration. So, one cop, say 𝒞_1, stays on v and this prevents the robber from ever leaving u. Indeed this follows because N(u) ⊆ N(v), and so, if ever leaves u, it will be captured by 𝒞_1 in the next cop move. Finally, since k>1, some other cop, say 𝒞_2, can use a finite number of moves to reach u and capture the robber.
This completes our proof.
Note that the requirement for k≥ 2 in Lemma <ref> is crucial. It might so happen that we can get an H such that c(H)=1, but 𝖼(G)>1. To see this, consider the example of C_4, where any two diagonal (i.e., non-adjacent) vertices satisfy the property in Rule RR9, and if we remove one of them, the cop number reduces from 2 to 1. However, this does not harm our algorithm because if we are given k= 1, then RR<ref> is applied (before RR<ref>).
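For completeness, a hedged Python sketch of an (unoptimized) exhaustive application of RR9 over the independent set, to be invoked only after RR7 and RR8 have been checked, so that k ≥ 2; the function name is our own.

def apply_rr9(adj, U):
    # Repeatedly delete a vertex u of I = V(G) \ U whose neighbourhood is contained
    # in the neighbourhood of another vertex v of I (safe only when k >= 2).
    U = set(U)
    I = {v for v in adj if v not in U}
    while True:
        u = next((u for u in I
                  if any(v != u and set(adj[u]) <= set(adj[v]) for v in I)),
                 None)
        if u is None:
            return U | I          # vertex set of the reduced instance
        I.discard(u)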
Two sets A and B are incomparable if neither A⊆ B nor B⊆ A. We shall use the following proposition that follows from Sperner's Theorem and Stirling's approximation.
Let X be a set of cardinality N. Moreover, let Y be a set of subsets of X such that for each a,b ∈ Y, a and b are incomparable. Then, |Y| ≤2^N/√(N).
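For completeness, here is a short LaTeX sketch of why the proposition holds (the constant is not optimized):

\[
  |Y| \;\le\; \binom{N}{\lfloor N/2 \rfloor} \;\le\; \frac{2^{N}}{\sqrt{N}}.
\]
% The first inequality is Sperner's theorem: the largest antichain of the Boolean
% lattice over X is a middle layer.  The second follows from Stirling's
% approximation, which gives \binom{N}{\lfloor N/2 \rfloor} = (\sqrt{2/\pi}+o(1))\cdot 2^{N}/\sqrt{N}
% with \sqrt{2/\pi} < 1; the finitely many small values of N are checked directly.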
Once we cannot apply RR<ref>-RR<ref> anymore, we claim that the size of the reduced graph G' is bounded by a function of t. Let U' = U ∩ V(G') and I' = I ∩ V(G'). Clearly, |U'| ≤ t. Now, each vertex u ∈ I' is associated with a neighborhood N(u) such that N(u) ⊆ U'. Moreover, for any two vertices u,v ∈ I', the sets N(u) and N(v) are incomparable. Hence, due to Proposition <ref>, |I'| ≤2^t/√(t), and therefore, |V(G')| ≤ t+2^t/√(t), which proves the following theorem.
CnR parameterized by 𝗏𝖼𝗇 admits a kernel with at most 𝗏𝖼𝗇+ 2^𝗏𝖼𝗇/√(𝗏𝖼𝗇) vertices.
Now, we can apply the XP-time algorithm (Proposition <ref>) for on our kernel. Since k ≤t/3, the running time we get is exponential only in t and polynomial in n. Specifically, the running time of the algorithm is t·(t+2^t/√(t))^t/3+2· n^𝒪(1). Moreover, if a vertex cover U of size t = 𝗏𝖼(G) is not given, then we can compute one in time 1.2738^t· n^𝒪(1) <cit.>. Thus, we have the following corollary.
CnR is 𝖥𝖯𝖳 parameterized by 𝗏𝖼𝗇, and is solvable in (𝗏𝖼𝗇+2^𝗏𝖼𝗇/√(𝗏𝖼𝗇))^𝗏𝖼𝗇/3+2· n^𝒪(1) time.
§.§ Exponential Kernel for CnR by 𝖼𝗏𝖽:
To get a kernel for CnR parameterized by 𝖼𝗏𝖽, we employ techniques similar to the ones we used to get a kernel for CnR parameterized by 𝗏𝖼𝗇. Let U be a cluster vertex deletion set of size t. Let S = V(G)∖ U, and let C_1, …, C_ℓ be the disjoint cliques that form the graph G[S]. Since 𝖼(G)≤t/2+1 (Theorem <ref>), we have the following reduction rule.
[RR<ref>]
If k ≥t/2+1, then report Yes-instance.
Next, we have the following lemma.
Let u and v be vertices of some clique C of G[S]. If N_U(u) ⊆ N_U(v), then 𝖼(G) = 𝖼(G∖{u}).
First, we observe that, since u and v are part of the same clique C, N[u] ⊆ N[v]. Then, the proof of this lemma follows from the proof of Lemma <ref>. We remark that this proof also follows from the idea of retracts used in the literature <cit.>. Additionally, we remark that, here, 𝖼(G) need not necessarily be greater than 1. To see this, consider the situation when ℛ is at u and a cop, say 𝒞_1, is at v. Now, ℛ cannot move to a vertex in U since N_U(u) ⊆ N_U(v), and ℛ cannot stay on a vertex in C since v is a part of C. Thus, ℛ gets captured in the next move by 𝒞_1.
Hence, we can apply the following reduction rule, whose safeness was proved by Lemma <ref>.
[RR<ref>]
Let u and v be vertices of some clique C ∈ G[S] such that N[u] ⊆ N[v]. Then, delete u.
Once we cannot apply reduction rule RR<ref> anymore, the size of each clique in G[S] is at most 2^t/√(t) (due to Proposition <ref>).
Similarly to Lemma <ref>, we have the following lemma.
Let C_i and C_j be two cliques in G[S] such that for each vertex u ∈ V(C_i), there exists a vertex v ∈ V(C_j) such that N_U(u) ⊆ N_U(v). Then, k >1 cops have a winning strategy in G if and only if they have a winning strategy in G[V(G) ∖ V(C_i)].
The proof idea here is similar to the proof idea of Lemma <ref>. Let H= G[V(G) ∖ V(C_i)]. Here, we will just prove that if k cops have a winning strategy in H, then k cops have a winning strategy in G. (The proof of the reverse direction is rather easy to see, combining arguments from Lemma <ref> and the arguments we present in the rest of this proof).
Let k≥ 2 cops have a winning strategy in H. Similarly to Lemma <ref>, for each vertex x ∈ V(G) ∖ V(C_i), we define I(x) = x, and for each vertex u∈ V(C_i), we have a vertex v∈ V(C_j) such that N_U(u) ⊆ N_U(v), and we define I(u)=v. (Note that there might be multiple choices for v here. We can choose any such vertex.)
Observe that for each vertex x∈ V(G), I(x) is restricted to H. Moreover, if xy ∈ E(G), then I(x)I(y) ∈ E(H) for the following reasons. If x,y ∈ V(G) ∖ V(C_i), then it is obvious. Else, if x,y ∈ V(C_i), then observe that I(x) and I(y) are part of some clique C_j, and N_U(x) ⊆ N_U(I(x)) and N_U(y) ⊆ N_U(I(y)). Hence, in this case, if xy ∈ E(G), then I(x)I(y) ∈ E(H). Finally, assume without loss of generality that x∈ V(C_i) and y∈ V(G)∖ V(C_i). In this case, xy∈ E(G) only if y∈ U. Since N_U(x) ⊆ N_U(I(x)), I(x)I(y) ∈ E(H). Thus, if xy ∈ E(G), then I(x)I(y) ∈ E(H). Therefore, every valid move of a player from a vertex x to a vertex y in G can be translated to a move from I(x) to I(y) in H.
Now, cops play their winning strategy in H with the following consideration: When the robber is at a vertex x in G, the cops consider the image of the robber at vertex I(x) in G. Since the robber's image is restricted to the vertices of H, the cops can use a winning strategy from H to capture the image of the robber in G. Once the image is captured, if the robber is at a vertex x ∉ V(C_i), then the robber is also captured. Otherwise, the robber is at a vertex x∈ V(C_i), and one of the cops is at vertex I(x) in C_j. Now, observe that the robber cannot immediately move to a vertex in U. Anyhow, the robber can move to some other vertex y ∈ V(C_i), and in this case, the cop at vertex I(x) can move to vertex I(y) ∈ V(C_j). This way, the cop occupying the robber's image can prevent the robber from ever leaving C_i. Since k≥ 2, some other cop can move to capture the robber in C_i (as cliques are copwin). This completes our proof.
Thus, we can apply the following reduction rule, whose safeness was proved by Lemma <ref>.
[RR<ref>]
Let C_i and C_j be two cliques in G[S] such that for each vertex u ∈ V(C_i), there exists a vertex v ∈ V(C_j) such that N_U(u) ⊆ N_U(v). Then, delete V(C_i).
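Checking whether this rule applies to an ordered pair of cliques is a containment test between their families of U-neighbourhoods; a brief Python sketch (names ours), whose exhaustive application scans ordered pairs of the remaining cliques and deletes C_i whenever the test succeeds (again assuming k ≥ 2):

def rr12_applies(adj, U, clique_i, clique_j):
    # True if every vertex of clique_i has some vertex of clique_j whose
    # U-neighbourhood contains its own, so clique_i may be deleted.
    U = set(U)
    nbrs_j = [set(adj[v]) & U for v in clique_j]
    return all(any((set(adj[u]) & U) <= Nv for Nv in nbrs_j) for u in clique_i)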
Finally, we use the following lemma to bound the size of the desired kernel from Theorem <ref>.
After an exhaustive application of RR<ref>-RR<ref>, the size of the reduced graph is at most 2^2^t + √(t).
Once we cannot apply the reduction rules RR<ref> and RR<ref>, due to Proposition <ref>, each clique can have at most 2^t/√(t) vertices. Moreover, the total number of cliques possible is at most 2^2^t/√(t)/√(2^t/√(t)) (due to Proposition <ref>). Thus, the total number of vertices in the reduced graph is at most 2^2^t + √(t).
Since k ≤t/2+1 (by Reduction Rule RR<ref>), applying the XP-algorithm for from Proposition <ref> to the kernel in Theorem <ref> gives us the following corollary.
CnR is 𝖥𝖯𝖳 parameterized by 𝖼𝗏𝖽. Specifically, it is solvable in (𝖼𝗏𝖽+2^2^𝖼𝗏𝖽 + √(𝖼𝗏𝖽))^𝖼𝗏𝖽/2+2· n^𝒪(1) time.
§.§ Exponential Kernel for CnR by 𝖽𝗍𝗌
Using the ideas we presented in Section <ref>, we can also get a kernel for CnR with respect to the deletion to stars number. Let U be a deletion to stars vertex set of size t. Also, let S = V(G) ∖ U, and let X_1, …, X_ℓ be the stars in the graph G[S]. Specifically, we have the following reduction rules along with reduction rule RR<ref>.
[RR<ref>]
Let u and v be two leaf vertices of some star X in G[S] such that N_U(u) ⊆ N_U(v). Then, delete u.
[RR<ref>]
Let X and Y be two stars in G[S] such that V(X) = x, x_1, …, x_p and V(Y) = y, y_1, …, y_q, where x and y are center vertices of X and Y, respectively. If N_U(x)⊆ N_U(y) and for each vertex x_i (for i∈ [p]), there is a vertex y_j (for j∈ [q]) such that N_U(x_i) ⊆ N_U(y_j), then delete X.
The safeness of RR<ref> follows from Theorem <ref>. We have the following lemma, which establishes that reduction rules RR<ref> and RR<ref> are safe.
Assuming k>1, reduction rules RR<ref> and RR<ref> are safe.
To prove that rule RR<ref> is safe, it suffices to observe that for leaf vertices u and v of some star X ∈ S, if N_U(u) ⊆ N_U(v), then N(u) ⊆ N(v) in G. Indeed, the rest of the proof follows directly from the proof of Lemma <ref>.
Next, we give a proof idea for the safeness of rule RR<ref>. Here, we just define the function of the image of the robber, and the rest of the proof is similar to the proofs of Lemmas <ref> and <ref>. For each vertex u ∉ V(X), I(u) = u. For each x_i, I(x_i) = y_j such that N_U(x_i)⊆ N_U(y_j) (there might be multiple choices for y_j and we can choose any one of them), and I(x) = y.
Now, we claim that once we cannot apply rules RR<ref> and RR<ref> anymore, the size of the graph is bounded by a function of t. First, we note that the size of each star is at most 2^t/√(t)+1 (by Proposition <ref>). Let X and Y be two stars in G[S] such that x and y are the center vertices of X and Y, respectively. We say that X and Y have the same neighbourhood type if N_U(x) = N_U(y). Second, it is easy to see that there can be at most 2^t neighbourhood types. Next, we bound the number of stars in each neighbourhood type. Let S_1, …, S_z be the stars having the same neighbourhood type, and let v_i be the center vertex of star S_i. For each star S_i, for i∈ [z], let 𝒮_i = {N(v): v∈ V(S_i)∖{v_i}}. Since we have applied reduction rule RR<ref> exhaustively, we know that for each A∈𝒮_i, A=N(v) for a unique vertex v∈ V(S_i)∖{v_i}. Observe that each S_i is a subset of the power set of U and the power set of U has size 2^t. Moreover, since we have applied reduction rule RR<ref> exhaustively, we know that for any i,j ∈ [z], neither 𝒮_i ⊆𝒮_j nor 𝒮_j ⊆𝒮_i. Hence, due to Proposition <ref>, z ≤2^2^t/√(2^t).
Therefore, the size of the reduced graph can be at most 2^2^t/√(2^t)· 2^t · (2^t/√(t) +1). Thus, we have the desired kernel from Theorem <ref>.
Since k ≤t/2+1 (by reduction rule RR<ref>), applying the XP-algorithm for from Proposition <ref> to the kernel in Theorem <ref> gives us the following corollary.
CnR is 𝖥𝖯𝖳 parameterized by 𝖽𝗍𝗌. Specifically, it is solvable in (𝖽𝗍𝗌+2^2^𝖽𝗍𝗌 + 𝖽𝗍𝗌^1.5)^𝖽𝗍𝗌/2+2· n^𝒪(1) time.
§.§ Exponential Kernels for Different Variants
Here, we extend the result of Theorem <ref> to several variations of the game. We have the following results.
§.§.§ Lazy CnR and Attacking CnR:
First, we prove the following lemma.
Let u and v be two distinct vertices of G such that N(u) ⊆ N(v). Consider the graph H induced by V(G)∖{u}. Then for k>1 and for x∈{lazy,attack}, 𝖼_x(G) ≤ k if and only if 𝖼_x(H) ≤ k.
The proof of the forward direction (c_x(G)≤ k implies c_x(H)≤ k) is easy and follows from arguments similar to those in the proof of Lemma <ref>. We prove the reverse direction (c_x(H)≤ k implies c_x(G) ≤ k) for both variants below. Moreover, similarly to the proof of Lemma <ref>, we define I(u) = v and I(x) = x when x≠ u. Similarly, when ℛ is at a vertex x, we say that the image of ℛ is at vertex I(x). (Note that the image of ℛ is restricted to H.) In both variants, the cops will play in G to capture the image of the robber using the winning strategy for H.
In the lazy variant, the cops begin by capturing the image of the robber in G. If the robber is at a vertex x ≠ u, then the robber is captured. If the robber is at vertex u, then observe that there is a cop at v that has captured the image of the robber. This cop ensures that the robber cannot move, and some other lazy cop can move to capture the robber in a finite number of rounds.
In the attacking variant, the main observation is that if the cops can capture the robber in H, then they can capture the image of the robber in G without getting attacked by the robber. If the robber is at a vertex x≠ u when the image of the robber is captured, then the robber is captured. Otherwise, the robber is at u, and some cop, which we call the guarding cop, is at vertex v. Now another cop can move to a vertex w ∈ N(v) (in a finite number of steps) to capture the robber. If the robber attacks this second cop at this point, then note that the guarding cop can move to capture the robber in the next round. If the robber does not attack, then the second cop moves to capture the robber in the next round.
Lemma <ref> establishes that reduction rule RR<ref> is safe for both and . Before applying reduction rule RR<ref>, we apply the following reduction rules.
[RR<ref>]
If k ≥t/2 +1, then answer positively (Theorem <ref>).
[RR<ref>]
If k=1, then apply the 𝒪(n^3) time algorithm from Proposition <ref>.
The size of the kernel obtained by using these reduction rules depends only on RR9. Therefore, the existence of the desired kernel from Theorem <ref> follows directly.
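As an illustration only, the dominated-vertex rule for these two variants can be applied exhaustively as in the following sketch, assuming k>1, that the two rules above have already been applied, and that the graph is stored as a dict of neighbour sets (all names are ours).

def delete_dominated(adj):
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if any(v != u and adj[u] <= adj[v] for v in adj):
                for w in adj.pop(u):       # delete u; N(u) is contained in N(v)
                    adj[w].discard(u)
                changed = True
                break                      # rescan the reduced graph
    return adj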
Moreover, Theorem <ref>, along with the XP-algorithms from Proposition <ref> for these variants, gives the following immediate corollary.
and are FPT parameterized by 𝗏𝖼𝗇. Specifically, they are solvable in (𝗏𝖼𝗇+ 2^𝗏𝖼𝗇/√(𝗏𝖼𝗇))^𝗏𝖼𝗇/2+2· n^𝒪(1) time.
§.§.§ CnR on Directed Graphs:
Next, we consider the game of on oriented graphs. For a directed graph G and a vertex v∈ V(G), let N^+(v) and N^-(v) denote the set of out-neighbors and in-neighbors of v, respectively. We have the following lemma.
Let u and v be two distinct vertices of a strongly connected directed graph G such that N^+(u) ⊆ N^+(v) and N^-(u) ⊆ N^-(v). Let H be the graph induced by V(G)∖{u}. Then, for k>1, k cops have a winning strategy in H if and only if k cops have a winning strategy in G
First, observe that H is also strongly connected.
Second, let k cops have a winning strategy in G. Then, the cops can use this winning strategy in H, with the only difference that whenever a cop, say, , has to move to u in G, moves to v in H instead ( can do so because N^-(u) ⊆ N^-(v)). Next, whenever has to move to a vertex, say, w, from u, in the strategy in G, then can move to w from v also (since N^+(u) ⊆ N^+(v)). As is restricted to V(H) in G, cops will capture using this strategy in H as well.
Finally, let k cops have a winning strategy in H. We use this strategy to get a winning strategy in G using k cops. First, we define I(x) = x for x≠ u and I(u) = v. Since I(x) is restricted to H, we use the winning strategy in H to capture I(x). At this point if x ≠ u, then is captured. Else, is at u and one of the cops, say, _1, is at v. Since N^+(u) ⊆ N^+(v), cannot move as long as _1 occupies v. Since G is strongly connected, one of the other cops, say, _2, can move to u in a finite number of rounds to capture .
Let G be a graph with a vertex cover U of size t, and let I = V(G)∖ U. Let G be a strongly connected orientation of G. We apply the following reduction rules.
[RR<ref>]
If k≥ t, then answer positively.
[RR<ref>]
If k=1, then apply the 𝒪(n^3) time algorithm from Proposition <ref> to check whether G is copwin.
[RR<ref>]
If u and v are two distinct vertices in I such that N^+(u) ⊆ N^+(v) and N^-(u) ⊆ N^-(v), then delete u.
Safeness of reduction rules RR<ref> and RR<ref> follows from Theorem <ref> and Lemma <ref>, respectively. Now, we argue that once we cannot apply reduction rules RR<ref>-RR<ref>, the size of G is bounded by a function of t. Observe that each vertex u in I has a unique neighbourhood (N^+(u)∪ N^-(u)), and there are three choices for a vertex v ∈ U to appear in the neighbourhood of a vertex u ∈ I, that is, either v ∈ N^+(u), or v ∈ N^-(u), or v ∉ N^+(u)∪ N^-(u). Therefore, the total number of possible vertices in I is at most 3^t. Thus, applying reduction rules RR<ref>-RR<ref>, we get the desired kernel from Theorem <ref>.
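The following sketch (with illustrative names, assuming the orientation is given via out- and in-neighbour sets and the independent set I = V(G) ∖ U is known) shows one way the directed twin rule could be applied exhaustively.

def reduce_directed_twins(out_adj, in_adj, I):
    live = set(I)
    changed = True
    while changed:
        changed = False
        for u in list(live):
            if any(v != u and out_adj[u] <= out_adj[v] and in_adj[u] <= in_adj[v]
                   for v in live):
                for w in out_adj.pop(u):
                    in_adj[w].discard(u)
                for w in in_adj.pop(u):
                    out_adj[w].discard(u)
                live.discard(u)
                changed = True
                break                      # rescan the reduced graph
    # after exhaustive application, the surviving vertices of I have pairwise
    # distinct (N^+, N^-) signatures, so at most 3^|U| of them remain
    return out_adj, in_adj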
Theorem <ref>, along with rule RR21 and Proposition <ref>, gives the following corollary.
CnR on strongly connected directed graphs is FPT parameterized by the vertex cover number t. In particular, it is solvable in (3^t+t)^t+1· n^𝒪(1) time.
§.§ General Kernelization
In this section, we provide a general reduction rule that works for most variants of parameterized by the vertex cover number.
Let U be a vertex cover of size t in G and I be the independent set V(G) ∖ U. For each subset S⊆ U, we define the following equivalence class: 𝒞_S = { v ∈ I N(v) = S}.
Given an instance ((G,k),t), we have the following reduction rule.
[RR<ref>]
If there is an equivalence class 𝒞_S such that |𝒞_S| >k+1, then keep only k+1 arbitrary vertices from 𝒞_S in G, and delete the rest.
First, we present (informal) intuition why reduction rule RR22 is safe. Since the neighbourhood of each vertex in 𝒞_S is the same, all of these vertices are equivalent with respect to the movement rules in any of the variants discussed. We keep k+1 copies of such vertices because, on a robber move, there is at least one vertex that is not occupied by any cop. We refer to such a vertex as a free vertex. Note that there might be multiple free vertices. On a robber player's turn, if plans to move to a vertex in 𝒞_S, it can move to a free vertex. Moreover, if a fast robber wants to use a vertex from 𝒞_S as an intermediate vertex, it can use a free vertex for this purpose as well. We prove safeness for individual variants later in this section.
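A minimal sketch of RR22 is given below, assuming the graph is stored as adjacency sets, a vertex cover U is known, and k is the number of cops; the helper names are ours and not part of the formal description.

from collections import defaultdict

def truncate_classes(adj, U, k):
    classes = defaultdict(list)
    for v in adj:
        if v not in U:
            classes[frozenset(adj[v])].append(v)   # class C_S with S = N(v)
    for members in classes.values():
        for v in members[k + 1:]:                  # keep k+1 arbitrary vertices
            for w in adj.pop(v):
                adj[w].discard(v)
    return adj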
Moreover, we have the following lemma that we will use later.
Let G be a graph with a vertex cover U of size t. After an exhaustive application of reduction rules RR<ref> and RR<ref>, the reduced graph has at most t+ t·2^t vertices.
There can be at most 2^t equivalence classes, and for each equivalence class we keep at most k+1 vertices in I. Due to rule RR19, we can assume k<t, and hence k+1 ≤ t. Thus, the size of I is at most t· 2^t. The size of G is, therefore, at most |U| + |I| ≤ t+ t·2^t.
§.§.§ Generalized CnR
In this section, we establish that RR<ref> is safe for Generalized CnR. We have the following lemma to prove this claim.
Let G be a graph with a vertex cover U of size t. Let 𝒞_S (for S ⊆ U) be an equivalence class such that |𝒞_S| = ℓ >k+1. Moreover, let H be a subgraph formed by deleting an arbitrary vertex v of 𝒞_S from G. Then, (G,_1,…,_k, ) is a Yes-instance if and only if (H,_1,…,_k, ) is a Yes-instance.
Let 𝒞_S = {v_1, …, v_ℓ}. Without loss of generality, let us assume that vertices v_1,… v_ℓ-1 belong to the graph H, and v = v_ℓ. Since there are at most k cops in the game and ℓ >k+1, at least one vertex of v_1,…, v_ℓ-1 is not occupied by any cop. We denote this vertex by x (x is dependent on the position of the cops and may change during the course of the game). Moreover, here we modify the definition of a safe vertex slightly: A vertex y is safe if it is at a distance at least λ_i+1 from _i, for i∈ [k]. Since each vertex in 𝒞_S has the same neighborhood, observe that either each vertex in 𝒞_S not occupied by a cop is a safe vertex or none of the vertices in 𝒞_S is safe. Moreover, for each vertex y∈ V(G)∖{v}, let I(y) = y and I(v) = x. Note that for each vertex u, I(u) is restricted to vertices of V(H), N(u)= N(I(u)), and if u is a safe vertex, then I(u) is also a safe vertex. To ease the presentation, instead of saying that the cops/robber has a winning strategy in (G,_1,…, _k,) (or (G,_1,…, _k,)), we will say that the cops/robber has a winning strategy in G (or H).
First, let has a winning strategy 𝒮 in G. To show that has a winning strategy in H, we will prove a slightly stronger statement that has a winning strategy, say, 𝒮', in G even if is restricted to the vertices of V(H) in G. We get 𝒮' from 𝒮 as follows: If has to use a vertex y in 𝒮 in some move during the game, it uses I(y) instead. We first show that can safely enter the graph. Let y be the vertex enters in the strategy 𝒮. Then, enters at I(y) in 𝒮'. Since y is a safe vertex (as 𝒮 is a winning strategy for ), I(y) is also a safe vertex. Hence can safely enter a vertex. Now, the only thing to argue is that if can move safely from a vertex y to a vertex z in G, then it can safely move from vertex I(y) to I(z) in G. Let moves from y to z using a path P_1= (y=y_1,…,y_r=z), where r∈ [s_R], during some move in 𝒮. Notice that since 𝒮 is a winning strategy, each vertex y_i (i∈ [r]) is a safe vertex, and hence, each vertex I(y_i) is also a safe vertex. Moreover, since N(y_i) = N(I(y_i)), W=(I(y_1), …, I(y_r)) is a walk with at most r vertices between I(y) and I(z). (It might not be a path since vertex x may repeat in this walk.) Since the existence of a walk between two vertices implies the existence of a path between these vertices using vertices from a subset of the walk vertices, we have an I(y),I(z)-path of length at most r using (safe) vertices from {I(y_1),…,I(y_r)}. Hence can safely move from I(y) to I(z). Thus, 𝒮' is a winning strategy for even when is restricted to vertices of V(H) in G.
In the reverse direction, let has a winning strategy in H. Then, we show that has a winning strategy in G as well. Here, whenever a cop _i moves to a vertex y, assumes its image at the vertex I(y). Observe that I(y) is restricted to V(H) in G. Let y_1,…,y_k be the vertices occupied by cops during some instance in the game. Let F be the set of vertices in V(H) that are safe during this turn. Moreover, let F' be the set of the vertices in V(H) that are safe if cops are occupying the vertices I(y_1), …, I(y_k). Then, we have the following claim.
F'⊆ F.
Towards a contradiction, assume that y∈ F' but y∉ F. Then, there exists some i∈ [k] such that d(y, y_i) ≤λ_i but d(y,I(y_i))>λ_i. If y_i ≠ v, then this is not possible since, for y_i≠ v, I(y_i)=y_i. Hence, we can assume that y_i = v and I(y_i) = x. Since N(v) = N(x), for each vertex y in V(G)∖{v} (and y∈ F' ⊆ V(H)), we have d(y,x) ≤ d(y,v), that is, d(y,I(y_i))≤ d(y,y_i), a contradiction.
We note that it might not be true that F⊆ F', as it might so happen that F contains the vertex x, but F' does not.
Due to Claim <ref>, it is sufficient to show that if the robber has a winning strategy in H, where the image of each cop is considered as a cop with the same capabilities, then the robber has a winning strategy in G. To this end, the robber can use its winning strategy from H, since the images of the cops are restricted to V(H). Thus, the robber has a winning strategy in G.
Finally, note that, in both directions of the proof, a move is made in H (respectively, in G) if and only if the corresponding move is made in G (respectively, in H). Hence, if the robber is active/flexible in the original strategy, then it is active/flexible in the designed strategy. This completes the proof.
Observe that 𝗏𝖼𝗇+1 cops always have a winning strategy in G. Therefore, we have the following theorem as a consequence of Lemma <ref> and Lemma <ref>.
*
Theorem <ref> directly implies the existence of the desired kernel for and from Theorem <ref>. The existence of the desired kernel for from Theorem <ref> follows from Lemma <ref> and the following lemma, which proves the safeness of RR<ref> for .
Let G be a graph with a vertex cover U of size t. Let 𝒞_S (for S ⊆ U) be an equivalence class such that |𝒞_S| = ℓ >k+1. For a subgraph H formed by deleting ℓ - k - 1 arbitrary vertices of 𝒞_S from G, 𝖼_surround(H) ≤ k if and only if 𝖼_surround(G) ≤ k.
Let 𝒞_S = {v_1, …, v_ℓ}. Without loss of generality, let us assume that the vertices v_1,…, v_k+1 belong to the graph H and the vertices v_k+2, …, v_ℓ are deleted. We begin by noting that the robber cannot be surrounded at a vertex in S in G (since each vertex in S has at least k+1 neighbours). Therefore, throughout the proof, we work under the implicit assumption that when the robber is surrounded, it is not on a vertex in S.
First, let k cops have a winning strategy in G. Then, to surround the robber in H, the cops use this strategy with the following changes. Whenever a cop has to move to a vertex in {v_k+2, …, v_ℓ}, it moves to the vertex v_1 instead. Since all vertices in 𝒞_S have the same neighbourhood, the next move of this cop can be the same as it was in (the winning strategy of) G. Note that, using this strategy, the cops can surround the robber in G even if the robber is restricted to V(H) in G, and the moves of the cops are also restricted to V(H) in G. Therefore, the cops can surround the robber using this strategy in H as well.
Now, let k cops have a winning strategy in H. We use this strategy to surround the robber in G in the following manner. Since we have only k cops, at any time during the gameplay there is at least one vertex in {v_1, …, v_k+1} that is not occupied by any cop. Let us call this vertex a free vertex (there might be multiple free vertices). Again, we use the concept of the image of the robber.
For each vertex x∈ V(G), if x∈ V(H), then we define I(x) = x; else, if x∈{v_k+2, …, v_ℓ}, then we define I(x) = y, where y is a free vertex at that instance. Whenever the robber moves to a vertex x ∈ V(G), we say that the image of the robber moves to I(x). Moreover, we recall that, in this game, although some cop and the robber can be at the same vertex, the robber cannot end its move at the same vertex as one of the cops. The cops use this capability to force the robber to move from a vertex. Therefore, we also have to argue that whenever the cops force the robber to move, they force the image of the robber to move as well. To this end, observe that the image of the robber and the robber are on different vertices only if the robber is on some vertex x∈{v_k+2, …, v_ℓ} and the image of the robber is on a free vertex, say, y. Notice that if, in the strategy for H, the robber was occupying y and the cop player wants to force the robber to move out of y, then it does so by moving a cop from a vertex w∈ N(y) to y. The cop player adapts this strategy in G by moving this cop from w to x instead of from w to y. This move is possible because N(x)= N(y). Thus, the robber, as well as the image of the robber, is forced to move exactly as it would have been forced to move in the winning strategy of the k cops in H.
Hence, the image of the robber is restricted to V(H) in G and has to follow the rules of movement of the robber.
Thus, the cops will eventually surround the image of the robber in G. At this point, if the robber is on a vertex v ∈{v_k+2, …, v_ℓ}, note that its image is on a vertex u ∈{v_1, …, v_k+1}. Observe that here, if the image of the robber is surrounded, then there is a cop on each vertex in S, and thus, the robber is surrounded as well. If the robber was on a vertex in V(H)∖ S when its image was surrounded, then the image and the robber are at the same vertex, and thus, the robber is surrounded as well.
This finishes the proof of Theorem <ref>. The following corollary is a direct consequence of Theorem <ref>, Theorem <ref>, Theorem <ref>, and Proposition <ref>.
, , , and Generalized CnR are FPT parameterized by 𝗏𝖼𝗇. Specifically, each of these variants is solvable in (𝗏𝖼𝗇· 2^𝗏𝖼𝗇+𝗏𝖼𝗇)^𝗏𝖼𝗇+1· n^𝒪(1) time.
§ POLYNOMIAL KERNELS FOR
In this section, we provide a linear kernel for parameterized by the neighbourhood diversity (𝗇𝖽) of the input graph. One of the key benefits of 𝗇𝖽 as a parameter is that it is computable in polynomial time <cit.>. More specifically, in polynomial time, we can compute a minimum partition of V(G) into classes V_1,…, V_w such that each V_i contains vertices of the same type. Hence, a linear kernel parameterized by 𝗇𝖽 can be very useful from an applicative perspective.
Since for any two vertices u,v∈ V_i, for i∈ [w], N(u) ∖{v} = N(v)∖{u}, we have that either each V_i is an independent set (N(v) = N(u) in this case) or each V_i is a clique (N[v] = N[u] in this case). Now, we use the following reduction rules.
[RR<ref>]
If k≥ w, then answer positively.
We have the following lemma to prove that RR<ref> is safe.
For a graph G, 𝖼(G) ≤𝗇𝖽.
Let S be a set of vertices such that S contains exactly one vertex, say v_i, from each neighbourhood class V_i. Then, (since we assume G to be connected) observe that S is a dominating set of G. Hence, the cops have a trivial winning strategy by placing a cop on each vertex of S (and |S| ≤ w). Therefore, 𝖼(G) ≤𝗇𝖽.
Next, if k=1, then we apply RR<ref> (the XP-algorithm from Proposition <ref>). Hence, we assume that k≥ 2. Next, we have the following reduction rule.
[RR<ref>]
For each neighbourhood class V_i, keep one arbitrary vertex and delete the rest.
We have the following lemma to prove that RR<ref> is safe.
Let V_i= {v_1,…,v_ℓ} be a neighbourhood class of G having at least two vertices (ℓ≥ 2). Consider the subgraph H of G induced by V(G)∖{v_ℓ}. Then, for k>1, G is k-copwin if and only if H is k-copwin.
We have the following two cases depending on whether V_i is an independent set or a clique.
* V_i is an independent set: Note that, in this case, N(v_ℓ) = N(v_1). Therefore, due to Lemma <ref>, we have that G is k-copwin if and only if H is k-copwin.
* V_i is a clique: Note that, in this case, N[v_ℓ] = N[v_1]. The proof of this case (specifically the forward direction) follows from the arguments presented in the proof of Lemma <ref>. For the reverse direction, here, for x≠ v_ℓ, I(x) = x and I(v_ℓ) = v_1. Now, note that every possible move of the robber in G can be mapped to a valid move of the image of the robber in H, just like in the proof of Lemma <ref>. The only difference here is that when the robber is at v_ℓ (and its image is at v_1), the robber can move to v_1 as well (along with the vertices in N(v_1)). Notice that this move can be translated to a move of the image of the robber in H where the image chooses to stay on the same vertex. Hence, the cops first capture the image of the robber in H, and then capture the robber in G.
This completes the proof of this lemma.
Since we keep only one vertex of each type and there are at most w types, we have the following theorem.
*
We have the following corollary as a consequence of Theorem <ref>.
CnR is FPT parameterized by 𝗇𝖽. Specifically, it is solvable in 𝗇𝖽^𝗇𝖽· n^𝒪(1) time.
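For concreteness, the kernelization behind this corollary can be sketched as follows, assuming the graph is given as adjacency sets (the type partition could equally be supplied by a polynomial-time neighbourhood diversity algorithm); the names below are ours.

def same_type(adj, u, v):
    # u and v have the same neighbourhood type iff N(u)\{v} = N(v)\{u}
    return adj[u] - {v} == adj[v] - {u}

def nd_kernel(adj):
    reps = []
    for v in list(adj):
        if any(same_type(adj, v, r) for r in reps):
            for w in adj.pop(v):       # v is a (true or false) twin of a rep
                adj[w].discard(v)
        else:
            reps.append(v)
    return adj                          # at most nd(G) vertices remain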
Moreover, it is not difficult to see that this kernelization can be extended to and using an extension of Lemma <ref>, giving us a kernel with at most 𝗇𝖽 vertices. Moreover, using a reduction rule similar to RR<ref> where we keep k vertices of each type, we can have a kernel with at most k·𝗇𝖽 vertices for and .
We have the following lemma, for which we provide a proof outline.
Let V_i = {v_1,…,v_ℓ} be a neighbourhood class of G containing at least k+2 vertices (ℓ≥ k+2). Consider the subgraph H of G induced by V(G)∖{v_ℓ}. Then, for k>1, s≥ 1, and x∈{active, s}, 𝖼_x(G) ≤ k if and only if 𝖼_x(H) ≤ k.
Similar to the proof of Lemma <ref>, we have the following two cases depending on whether V_i is an independent set or a clique.
* V_i is an independent set: Proof of this case follows from the proof of Lemma <ref>.
* V_i is a clique: Here, for each v_j ∈ V_i, N[v_j] = N[v_ℓ]. For each vertex x ≠ v_ℓ, let I(x) = x and I(v_ℓ) = v_1.
First, let 𝖼_x(G) ≤ k. Then, we use the strategy of the cops from G to capture the robber in H, with the only change that whenever a cop wants to move to a vertex x in G, it moves to I(x) in H instead, with the additional contingency that if the cop wants to move from v_1 to v_ℓ, then it moves to v_2 (so that if the cops are active, this is indeed a valid move in H). Observe that the cops can capture the robber in G using this strategy even when the cops are restricted to the vertices of H. Hence, the cops can capture the robber using this strategy in H.
In the reverse direction, let 𝖼_x(H) ≤ k. Note that if k active cops have a winning strategy against a flexible robber in G, then k active cops have a winning strategy against an active robber in G as well. Hence, for ease of argument, we show that k active cops have a winning strategy in G even if the robber is flexible, which establishes 𝖼_active(G) ≤ k. The cops assume that the image of the robber occupies the vertex I(x) whenever the robber occupies the vertex x. Thus, we have an image of the robber moving in H with the same capabilities as the robber. The cops capture the image of the robber using their winning strategy from H. Notice that once the image of the robber is captured, if the robber is at a vertex x ≠ v_ℓ, then the robber is captured as well. Otherwise, the robber is at v_ℓ and there is some cop, which we call the guarding cop, at v_1. For one of the two variants, the robber will be captured in the next move of the cops (since v_1v_ℓ∈ E(G)). For the other, if this is a cop move (that is, the image of the robber is captured on a robber move), then the guarding cop will capture the robber in the next move. Otherwise, in the previous move of the cops, the guarding cop moved to v_1 while the robber was at v_ℓ; in this case, since N[v_1] = N[v_ℓ], it could have moved to v_ℓ to capture the robber directly.
This completes the proof.
Since 𝖼(G) ≤𝗇𝖽 for all of these variants (as there is a dominating set of size 𝗇𝖽 in G), we have the following result as a consequence of Lemma <ref> (and arguments presented above).
and parameterized by 𝗇𝖽 admit a kernel with at most 𝗇𝖽 vertices. Moreover, and parameterized by 𝗇𝖽 admit a kernel with at most 𝗇𝖽^2 vertices.
Finally, we remark that this technique of kernelization does not work directly for the surrounding variant. For example, consider a complete graph on n vertices, for which 𝗇𝖽 = 1 (all the vertices have the same type) and 𝖼_surround = n; if we remove any vertex from this clique, 𝖼_surround decreases. Moreover, as evident from this example of complete graphs, 𝖼_surround cannot be bounded by any computable function that depends only on 𝗇𝖽.
§ INCOMPRESSIBILITY
§.§ Incompressibility of
In this section, we show that it is unlikely that the problem parameterized by 𝗏𝖼𝗇 admits a polynomial compression. For this purpose, we first define the following problem. In Red-Blue Dominating Set (RBDS), we are given a bipartite graph G with a vertex bipartition V(G) = T ∪ N and a non-negative integer k. A set of vertices N'⊆ N is said to be an RBDS if each vertex in T has a neighbour in N'. The aim is to decide whether there exists an RBDS of size at most k in G. Accordingly, we have the following decision version of the problem.
Input: A bipartite graph G with vertex bipartition V(G) = T ∪ N, and a non-negative integer k.
Question: Does G have an RBDS of size at most k?
Dom, Lokshtanov, and Saurabh <cit.> proved that it is unlikely for parameterized by |T|+k to admit a polynomial compression. More precisely, they proved the following result.
parameterized by |T|+k does not admit a polynomial compression, unless .
We show that parameterized by the 𝗏𝖼𝗇 does not have a polynomial compression by developing a polynomial parameter transformation from parameterized by |T|+k to parameterized by 𝗏𝖼𝗇.
§.§.§ Bipartite Graphs with Large Degree and Girth
For our reduction, we borrow a construction by Fomin et al. <cit.> of bipartite graphs having high girth and high minimum degree, which they used to prove NP-hardness (and W[2]-hardness for the solution size k) of .
For positive integers p,q, and r, we can construct a bipartite graph H(p,q,r) with rqp^2 edges and a bipartition (X,Y), with |X| = |Y| = pq. The set X is partitioned into sets U_1, …, U_p, and the set Y is partitioned into sets W_1, … W_p, with |U_i| = |W_i| = q. By H_i,j we denote the subgraph of H(p,q,r) induced by U_i ∪ W_j, and by 𝖽𝖾𝗀_i,j(z) we denote the degree of vertex z in H_i,j. Fomin et al. <cit.> provided the following construction:
Let q ≥ 2p(r+1) (p(r+1)-1)^6-1/(p(r+1)-1)^2-1. Then, we can construct H(p,q,r) in time 𝒪(r· q · p^2) with the following properties.
* The girth of H(p,q,r) is at least 6.
* For every vertex z ∈ V(H_i,j) and every i,j ∈ [p], we have r-1 ≤𝖽𝖾𝗀_i,j(z) ≤ r+1.
§.§.§ Polynomial Parameter Transformation
Suppose that we are given an instance (G,k) with V(G) = T ∪ N of the problem. First, we construct a graph G' with V(G') = T' ∪ N' from G by introducing two new vertices, x and y, such that T' = T ∪{x} and N' = N ∪{y}, and E(G') = E(G) ∪{xy }. We have the following observation.
G has an RBDS of size at most k if and only if G' has an RBDS of size at most k+1. Moreover, any RBDS of G' contains y.
Now, we present the main construction for our reduction. Denote the vertex set V(T') by {v_1, v_2, …, v_p', x}. Moreover, let p = p'+1, ℓ=k+1, r = ℓ+2, and q = ⌈ 2p(r+1) (p(r+1)-1)^6-1/(p(r+1)-1)^2-1⌉.
We construct H(p,q,r) such that each of U_i and W_i, for 0 < i ≤ p', contains q copies of vertex v_i, and each of U_p and W_p contains q copies of vertex x. Now, we obtain a graph G” by adding one more set of vertices P to H(p,q,r) such that V(P) = V(N'). Moreover, if there is an edge between a vertex u ∈ N' and a vertex v_i ∈ T', then we add an edge between u and every vertex of U_i, and also between u and every vertex of W_i. Similarly, we add an edge between y and every vertex of U_p, and between y and every vertex of W_p. Finally, we make the vertex y adjacent to every vertex of P. See Figure <ref> for reference.
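The following sketch (with our own, hypothetical data layout) illustrates how the parameters of the reduction and the wiring of G” could be computed, assuming the high-girth graph H(p,q,r) is supplied as vertex blocks U_i and W_i (sets) together with its edge set.

import math

# Hypothetical helper computing the quantities used by the reduction,
# following the formulas above (p = |T|+1, ell = k+1, r = ell+2).
def parameters_for_reduction(T_size, k):
    p, ell = T_size + 1, k + 1
    r = ell + 2
    base = p * (r + 1) - 1
    q = math.ceil(2 * p * (r + 1) * (base ** 6 - 1) / (base ** 2 - 1))
    return p, ell, r, q

# Illustrative wiring of G'', assuming T_prime = [v_1, ..., v_p', x],
# N_prime contains y, edges_TN lists the T'-N' edges of G' as (t, n) pairs,
# and U, W map each block index 1..p to the q vertices of U_i, W_i of a
# pre-built H(p, q, r) whose edges are given in edges_H.
def build_G_double_prime(T_prime, N_prime, edges_TN, U, W, edges_H, y):
    p = len(T_prime)                       # block p corresponds to the vertex x
    E = {frozenset(e) for e in edges_H}
    P = set(N_prime)                       # the copy of N' attached to H(p, q, r)
    for (t, n) in edges_TN:                # n in N' adjacent to t in T'
        i = T_prime.index(t) + 1
        for v in U[i] | W[i]:              # n sees every vertex of U_i and W_i
            E.add(frozenset((n, v)))
    for v in U[p] | W[p]:                  # y sees every vertex of U_p and W_p
        E.add(frozenset((y, v)))
    for v in P - {y}:                      # y dominates the whole of P
        E.add(frozenset((y, v)))
    V = P.union(*U.values(), *W.values())
    return V, E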
For correctness, we have the following lemma.
G' has an RBDS of size at most ℓ if and only if G” is ℓ-copwin.
First, we show that if G' has an RBDS of size ℓ, then ℓ cops have a winning strategy in G”. Let S⊆ N' be an RBDS in G' of size at most ℓ. The cops begin by choosing the vertices corresponding to S in P. Observe that the vertex y has to be present in S. Since vertex y dominates each vertex in P, the robber cannot safely enter a vertex in P. Additionally, due to the construction of G”, the vertices of S dominate each vertex in H. Hence, the robber cannot safely enter a vertex in H. Therefore, the robber will be captured in the first move of the cops.
Next, we show that if there is no RBDS of size ℓ in G', then ℓ cops do not have a winning strategy. We prove this by giving a winning strategy for the robber. First, we show that the robber can safely enter the graph. In the beginning, let there be ℓ_1 ≤ℓ cops in P and ℓ_2 ≤ℓ cops in H. Since there is no RBDS of size ℓ in G', for every placement of at most ℓ cops in P, there exists at least one pair of U_i and W_i such that no vertex of U_i and W_i is dominated by the cops from P. Let U_i and W_i be one such pair of sets such that no vertex of U_i and W_i is dominated by the cops from P. Moreover, since each vertex of H can dominate at most p(r+1) vertices in H, ℓ_2 cops can dominate at most ℓ· p(r+1) vertices. Since U_i (and W_i also) contains q vertices, and q> ℓ· p(r+1), the ℓ_2 cops in H cannot dominate all vertices of U_i, and hence the robber can safely enter a vertex of U_i.
Now, whenever the robber is under attack, it does the following. Without loss of generality, let us assume that the robber is in U_i (the case of W_i is symmetric). Since there are at most ℓ cops in P, there is always a W_j such that no vertex of W_j is dominated by cops from P. Since each vertex in U_i has at least r-1 = ℓ+1 neighbours in W_j, the robber can move to at least ℓ+1 vertices of W_j. Since the girth of H is at least 6, no vertex from H can dominate two vertices of W_j that are adjacent to the robber; else, we get a cycle on four vertices. Hence, at most ℓ cops from H can dominate at most ℓ neighbours of the robber in W_j, and the robber has at least ℓ+1 neighbours in W_j. Hence, the robber can move to a safe vertex in W_j. Since the graph H is symmetric, the robber can similarly move safely from a block W_j to a block U_i when it is in W_j. The robber follows this strategy to avoid capture forever.
This completes the proof of our lemma.
Next, we have the following observation to show that there exists a vertex cover U of G” such that |U| = poly(|T|,k).
V(H) ∪{y} is a vertex cover of G”. Therefore, the vertex cover number of G” is at most 2· p · q+1 = 1+ 2p·⌈ 2p(k+3) (p(k+3)-1)^6-1/(p(k+3)-1)^2-1⌉, where p = |T|+1.
This completes the proof of the argument that CnR parameterized by 𝗏𝖼𝗇 is unlikely to admit a polynomial compression. Thus, we have the following theorem as a consequence of Lemma <ref>, Observation <ref> and Proposition <ref>.
*
We prove the incompressibility of the variants (Theorem <ref>) in the Appendix.
§.§ Incompressibility for Variants
In this section, we prove Theorem <ref>. In Theorem <ref>, we proved that it is unlikely for CnR to admit a polynomial compression. For this purpose, we constructed a graph G” in which k cops have a winning strategy if and only if the graph G' has an RBDS of size at most k. If G' has an RBDS of size k, then there is a dominating set of size k in G”; otherwise, there is no winning strategy for k cops in G”. Here, we use the same construction to show that the variants we study (except for ) are unlikely to admit a polynomial compression when parameterized by 𝗏𝖼𝗇. We establish this by proving that G” is k-copwin for these variants if and only if G' has an RBDS of size at most k.
As discussed earlier, for a graph G, 𝖼(G) ≤𝖼_lazy(G), 𝖼(G) ≤𝖼_attacking(G), and 𝖼(G) ≤𝖼_s(G) (for any s≥ 1). Therefore, if G does not have an RBDS of size at most k, then 𝖼(G)>k, and hence, 𝖼_lazy(G) >k, 𝖼_attacking(G) > k, and 𝖼_s(G) > k (for k>0). To see the reverse direction, observe that in each of these three variants, if the cops start by occupying a dominating set, then they win in the next round. Hence, this establishes that it is unlikely for the lazy, attacking, and fast-robber variants parameterized by 𝗏𝖼𝗇 to admit a polynomial compression.
Similarly, for the fully active variant, it also holds that if the cops start by occupying a dominating set, then they win in the next round. Hence, we only have to show that if there is no RBDS of size k in G' (and hence no dominating set of size k in G”), then k cops do not have a winning strategy in G” for this variant. The robber uses the following strategy. When the robber is under attack, it follows the strategy described above. Now, the robber is forced to move (because it is active) even when it is on a safe vertex. Note that the robber always stays in H(p,q,r). Due to symmetry, let us assume it is at some vertex v in some block U_i. In this case, the robber can simply move to a vertex in W_i. Observe here that since the vertices in U_i are safe, the vertices in W_i are also safe.
Thus, we have the following lemma to establish that these variants are unlikely to admit a polynomial compression.
, , , and parameterized by the vertex cover number do not admit a polynomial compression, unless .
This result can also be extended to directed (or oriented) graphs. We have the following lemma.
on strongly connected directed and oriented graphs parameterized by vertex cover number does not admit a polynomial compression, unless .
For the case of directed graphs, we can simply replace each edge in the construction with a loop edge (directed cycle on two vertices).
To prove this result for oriented graphs, we do the following. Here, we change the underlying graph G”. First, instead of having two partitions U and W, we have three partitions U, W, and X (with the same rules). See Figure <ref> for an illustration. Second, we add edges between U and W, W and X, and X and U following the rules of the construction. Moreover, the edge rules for vertices in P are the same (that is, if a vertex has edges with each vertex in some U_i, it has edges with each vertex in W_i and X_i as well). Next, we define orientations. For the vertex y, we orient all the edges as outgoing. For every vertex u∈ P ∖{y}, we mark all the edges as outgoing, except for the edge uy (which is oriented yu). For each edge uw such that u∈ U and w ∈ W, orient it as uw. For every edge wx such that w∈ W and x∈ X, orient it as wx. For each edge xu such that x∈ X and u∈ U, orient it as xu. Finally, add an extra vertex z, and add the arc zy. Moreover, for each vertex v ∈ U_p ∪ W_p ∪ X_p, add an arc vz.
It is straightforward to see that G” is a strongly connected oriented graph. Moreover, if G” has a dominating set of size k, then k cops have a winning strategy by occupying these vertices in G”. Observe that, at this point, the robber can enter only at the vertex z and cannot move as long as there is a cop at y (which there is, due to the construction of G' and G”). Now, since G” is strongly connected, some other cop can move to capture the robber in a finite number of rounds. For the reverse direction, if G' does not have an RBDS of size k (and hence G” does not have a dominating set of size k), then, following the arguments of Lemma <ref>, the robber can enter at a safe vertex in U. Then, whenever the robber is under attack, it can move to a safe vertex in W. Similarly, it can move from W to X and from X to U when under attack. Moreover, note that a cop at the vertex z does not attack any vertex in U∪ W∪ X, since the only out-neighbour of z is y. Hence, the robber has a clear evading strategy.
This completes our proof.
Lemma <ref> and Lemma <ref> directly imply Theorem <ref>.
§ CONCLUSION AND FUTURE DIRECTIONS
In this paper, we conducted a comprehensive analysis of the parameterized complexity of CnR parameterized by 𝗏𝖼𝗇.
First, we showed that the cop number of a graph is upper bounded by 𝗏𝖼𝗇/3+1. Second, we proved that CnR parameterized by 𝗏𝖼𝗇 is FPT by designing an exponential kernel. We complemented this result by proving that it is unlikely for CnR parameterized by 𝗏𝖼𝗇 to admit a polynomial compression. We then extended these results to other variants as well as to other parameters.
To achieve our kernelization results, the rules we used concerned removing (false or true) twins from the graph. These rules are easy to implement and hence can be used to reduce the complexity of the input graph, even when the input graph is far from the considered parameters. For example, for cographs, none of the considered parameters is constant/bounded, but cographs can be reduced to a single vertex with the operation of removing twins, and hence, our reduction rules give an alternate proof that the cop number of cographs is at most two <cit.> for several variants. Moreover, MTP is well-studied with the motivation of designing computer games. Some examples of these variants include: multiple targets and multiple pursuer search <cit.> with applications in controlling non-player characters in video games; from the robber's perspective with faster cops <cit.> where the strategies were tested on Baldur's Gate; modeled with edge weights and different speeds of agents <cit.> with the specific examples of Company of Heroes and Supreme Commander. Moreover, the PACMAN game's movement can be considered as an instance of on a partial grid. One of the key aspects of designing these games is to come up with scenarios that are solvable but look complex and challenging. Our reduction rule can help in this regard. One can begin with an easy-to-resolve instance of , and then keep adding twins to this instance (recursively) to get an instance that looks sufficiently complex but has the same complexity.
Finally, we defined a new variant of , named Generalized CnR, that generalizes many well-studied variants of including , , Cops and Robber From a Distance <cit.>, and also generalizes the games of <cit.>. We showed that RR<ref> provides a kernel for Generalized CnR as well. This gives hope that RR<ref> can be used to get kernels for many practical variants not explicitly studied in this paper.
Still, many questions on the parameterized complexity of remain open. We list some of these questions below.
Does there exist an algorithm for CnR parameterized by 𝗏𝖼𝗇 with running time 2^𝒪(𝗏𝖼𝗇)· n^𝒪(1)?
Does there exist a better bound for the cop number with respect to 𝗏𝖼𝗇? In particular, is 𝖼(G) = o(𝗏𝖼𝗇)?
Does CnR parameterized by 𝗏𝖼𝗇 admit a polynomial α-approximate kernel?
Study CnR with respect to the following parameters: (1) feedback vertex set, (2) treewidth, and (3) treedepth. In particular, is CnR FPT parameterized by treewidth?
|
http://arxiv.org/abs/2307.04541v2 | 20230710130942 | Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis | [
"Mingyuan Liu",
"Lu Xu",
"Jicong Zhang"
] | cs.CV | [
"cs.CV",
"cs.AI"
] |
Mingyuan Liu, Lu Xu, Jicong Zhang.
^1School of Biological Science and Medical Engineering,
Beihang University, Beijing, China
^2Hefei Innovation Research Institute, Beihang University, Hefei, Anhui, China
{liumingyuan95, xulu181221, jicongzhang}@buaa.edu.cn
Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis
Mingyuan Liu1 Lu Xu1 Jicong Zhang1,2,*
August 12, 2023
======================================================================
Fueled by deep learning, computer-aided diagnosis achieves huge advances.
However, out of controlled lab environments, algorithms could face multiple challenges.
Open set recognition (OSR), as an important one, states that categories unseen in training could appear in testing.
In medical fields, it could derive from incompletely collected training datasets and the constantly emerging new or rare diseases.
OSR requires an algorithm to not only correctly classify known classes, but also recognize unknown classes and forward them to experts for further diagnosis.
To tackle OSR, we assume that known classes could densely occupy small parts of the embedding space and the remaining sparse regions could be recognized as unknowns.
Following it, we propose Open Margin Cosine Loss (OMCL) unifying two mechanisms.
The former, called Margin Loss with Adaptive Scale (MLAS), introduces angular margin for reinforcing intra-class compactness and inter-class separability, together with an adaptive scaling factor to strengthen the generalization capacity.
The latter, called Open-Space Suppression (OSS), opens the classifier by recognizing sparse embedding space as unknowns using proposed feature space descriptors.
Besides, since medical OSR is still a nascent field, two publicly available benchmark datasets are proposed for comparison.
Extensive ablation studies and feature visualization demonstrate the effectiveness of each design.
Compared with state-of-the-art methods, OMCL achieves superior performances, measured by ACC, AUROC, and OSCR.
§ INTRODUCTION AND RELATED WORK
Deep learning achieves great success in image-based disease classification.
However, the computer-aided diagnosis is far from being solved when considering various requirements in real-world applications. As an important one, open set recognition (OSR) specifies that diseases unseen in training could appear in testing <cit.>. It is practical in the medical field, caused by the difficulties of collecting a training dataset exhausting all diseases, and by the unpredictably appearing new or rare diseases. As a result, an OSR-informed model should not only accurately recognize known diseases but also detect unknowns and report them. Clinically, these models help construct trustworthy computer-aided systems. By forwarding unseen diseases to experts, not only the misdiagnosis of rare diseases could be avoided, but an early warning of a new disease outbreak could be raised.
There are many fields related to OSR but are essentially different.
In classification with reject options <cit.>, samples with low confidence are rejected to avoid misclassification. However, owing to its closed set nature, unknown classes could still be misclassified confidently <cit.>.
Anomaly detection, novelty detection, and one-class classification <cit.> aim at recognizing unknowns but do not categorize the known classes.
In outlier detection or one-/few-shot learning <cit.>, samples of novel classes appear in training.
In zero-shot learning <cit.>, semantic information about novel classes can be accessed. For example, a zebra, an unknown class, could be identified given the idea that zebras are striped horses, together with abundant samples of horses and stripe patterns.
In contrast, OSR knows nothing about novel classes and should achieve high classification accuracy on the known classes while recognizing unknowns, as illustrated in Fig. <ref> a).
Most OSR research focuses on natural images, while medical OSR is still in its infancy. In medical fields, representative work like T3PO <cit.> introduces an extra task to predict the augmentation applied to the input image, and samples with low probabilities are regarded as unknowns.
CSL <cit.> uses generative adversarial neural networks (GAN) to generate proxy images and unknown anchors.
As for natural images, a line of work tries to simulate unknowns using generated adversarial or counterfactual samples using GAN <cit.>. However, whether unknown patterns could be generated by learning from the known is unclear.
Some works learn descriptive feature representations. They enhance better feature separation between unknowns and knowns or assume the known features following certain distributions so that samples away from distributional centers could be recognized as unknowns <cit.>.
Differently, this work categorizes densely distributed known features and recognizes sparse embedding space as unknowns, regardless of the specific distribution.
This work tackles OSR under the assumption that known features could be assembled compactly in feature embedding space, and remaining sparse regions could be recognized as unknowns.
Inspired by this, the Open Margin Cosine Loss (OMCL) is proposed merging two components, Margin Loss with Adaptive Scale (MLAS) and Open-Space Suppression (OSS).
The former enhances known feature compactness and the latter recognizes sparse feature space as unknown.
Specifically, MLAS introduces the angular margin to the loss function, which reinforces the intra-class compactness and inter-class separability. Besides, a learnable scaling factor is proposed to enhance the generalization capacity.
OSS generates feature space descriptors that scatter across a bounded feature space. By categorizing them as unknowns, it opens a classifier by recognizing sparse feature space as unknowns and suppressing the overconfidence of the known.
An embedding space example is demonstrated in Fig. <ref> b), showing OMCL learns more descriptive features and more distinguishing known-unknown separation.
Considering that medical OSR is still a nascent field, besides OMCL, we also propose two publicly available benchmark datasets: one consists of microscopic images of blood cells, and the other of optical coherence tomography (OCT) images of the eye fundus. OMCL shows good adaptability to different image modalities.
Our contributions are summarized as follows.
Firstly, we propose a novel approach, OMCL for OSR in medical diagnosis. It reinforces intra-class compactness and inter-class separability, and meanwhile recognizes sparse feature space as unknowns.
Secondly, an adaptive scaling factor is proposed to enhance the generalization capacity of OMCL.
Thirdly, two benchmark datasets are proposed for OSR. Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. The superiority over state-of-the-art methods indicates the effectiveness of our method and the adaptability of OMCL on different image modalities.
§ METHOD
In Section <ref>, the open set problem and the formation of cosine Softmax are introduced. The two mechanisms MLAS and OSS are sequentially elaborated in Section <ref> and <ref>, followed by the overall formation of OMCL in Section <ref>.
§.§ Preliminaries
Problem setting:
Both closed set and open set classifiers learn from the training set 𝒟_train={(x_i, y_i)}_i=1^N with N image-label pairs (x_i, y_i), where y_i∈𝒴={1, 2, ..., C} is a class label.
In testing, closed set testing data 𝒟_test shares the same label space 𝒴 with the training data. However, in the open set problem, unseen class y_i=C+1 could appear in testing i.e. y_i∈𝒴_open={1, 2, ..., C, C+1}.
Cosine Loss:
The cosine Softmax is used as the basis of the OMCL. It transfers feature embeddings from the Euclidian space to a hyperspherical one, where feature differences depend merely on their angular separation rather than spatial distance.
Given an image x_i, its vectorized feature embedding z_i, and its label y_i, the derivation progress of the cosine Softmax is
S_cos = e^W_y_i^T z_i/∑_j=1^C e^W_j^T z_i (conventional form)
      = e^∥W_y_i∥∥z_i∥ cos(θ_y_i,i)/∑_j=1^C e^∥W_j∥∥z_i∥ cos(θ_j,i)
      = e^s· cos(θ_y_i,i)/∑_j=1^C e^s· cos(θ_j,i) (cosine form)
where W_j denotes the weights of the last fully-connected layer (the bias is set to 0 for simplicity). ∥W_j∥ and ∥z_i∥ are manually fixed to the constants 1 and s, respectively, by L2 normalization, where s is named the scaling factor. θ_j,i denotes the angle between W_j and z_i. By doing so, the direction of W_j can be regarded as the prototypical direction of class j, as shown in Fig. <ref> a). Samples with large angular differences from their corresponding prototype are punished, and meanwhile the class-wise prototypes are pushed apart in the angular space.
Compared with Softmax, the cosine form has a more explicit geometric interpretation, promotes more stable weight updates, and learns more discriminative embeddings <cit.>.
Moreover, the L2 normalization constrains features to a bounded feature space, which allows us to generate feature space descriptors for opening a classifier (as will be further discussed in Section <ref>).
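A minimal PyTorch-style sketch of the cosine form (variable names are illustrative, not from the released implementation) is given below, assuming z is a batch of feature embeddings of shape N×d and W is the d×C weight matrix of the last fully-connected layer.

import torch
import torch.nn.functional as F

def cosine_logits(z, W, s):
    z_hat = F.normalize(z, dim=1)     # fixes the feature norm (then rescaled by s)
    W_hat = F.normalize(W, dim=0)     # fixes ||W_j|| = 1 column-wise
    cos_theta = z_hat @ W_hat         # entries are cos(theta_{j,i})
    return s * cos_theta              # pass to softmax / cross-entropy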
§.§ Margin Loss with Adaptive Scale (MLAS)
MLAS serves three purposes.
1) By applying angular margin, the intra-class compactness and the inter-class separability are strengthened.
2) The threshold could represent the potential probability of the unknowns, which not only prepares for the open set but also learns more confident probabilities of the knowns.
3) A trainable scaling factor is designed to strengthen the generalization capacity.
MLAS is:
S_MLAS
=e^s·(cos(θ_y_i,i)-m)/e^s·(cos(θ_y_i,i)-m)+e^s· t+∑_j=1, j≠ y_i^Ce^s· cos(θ_j,i)
m, t, and s respectively denote margin, threshold, and learnable scaling factor, with corresponding geometric interpretation demonstrated in Fig. <ref> b).
By using the angular margin, the decision boundary could be more stringent.
Without it, the decision boundary is cos(θ_1,i)>cos(θ_2,i) for the i-th sample of class 1.
It becomes cos(θ_1,i)>cos(θ_2,i)+m when using the margin, which leads to stronger intra-class compactness. Moreover, the angular similarities with other classes are punished in the denominator to increase inter-class separability.
The threshold t could be regarded as an extra dimension that prepares for unknown classes. Given the conventional input of Softmax as [q_i^1, q_i^2, ..., q_i^C]∈ℝ^C, ours could be understood as [q_i^1, q_i^2, ..., q_i^C, t]∈ℝ^C+1. Since t is added, the class-wise output q_i^c before Softmax is forced to have a higher value to avoid misclassification (at least larger than t). It reinforces more stringent learning and hence increases the feature compactness in the hyperspherical space.
A large s makes the distribution more uniform, and a small s makes it collapse to a point mass.
In this work, s is learnable, with a learning rate of 0.1× that of the model. This design theoretically offers stronger generalization capacity across datasets; experimentally, s is observed to converge to different values in different data trials and boosts performance.
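One possible realization, assuming the scale is exposed as a learnable parameter placed in its own optimizer group (the backbone stand-in and the initial value below are only our guesses), is sketched as follows.

import torch

model = torch.nn.Linear(512, 4)                 # stand-in for the classifier head
s = torch.nn.Parameter(torch.tensor(16.0))      # initial value is an assumption
optimizer = torch.optim.Adam(
    [{"params": model.parameters(), "lr": 1e-3},
     {"params": [s], "lr": 1e-4}])               # 0.1x the model's learning rate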
LMCL <cit.> and NMCL <cit.> are the most similar arts to ours. Differently, from the task perspective, these designs are proposed for closed-world problems. From the method perspective, an OSS mechanism is designed to tackle OSR leveraging generate pseudo-unknown features for discriminative learning. Moreover, an adaptive scaling factor is introduced for increasing generalization.
§.§ Open-Space Suppression (OSS)
OSS generates feature space descriptors of bounded feature space. By categorizing them into an extra C+1 class, samples in sparse feature space could be recognized as unknown and the overconfidence of the known is suppressed.
OSS selects points scattered over the entire bounded feature space, named descriptors, to represent pseudo-unknown samples. Different from existing arts that generate pseudo-unknowns by learning from known samples, OSS draws descriptors from the whole feature space, which guarantees that every region is potentially considered when simulating unknowns. By competing with the known features, regions with densely distributed samples are classified as known, and the sparse regions, represented by the descriptors, are recognized as unknown.
In this work, the corresponding descriptor set, with M samples, is 𝒟_desc={(z_i, C+1)}_i=1^M, where z_i ∈𝕌[-s,s]^d subject to ∥z_i∥=s. 𝕌[-s,s] denotes random continuous uniform distribution ranges between -s to s, and d is the dimension of feature embeddings.
s is trainable and the descriptors are dynamically generated with the training.
Fig. <ref> c) demonstrates the geometric interpretation. During training, descriptors are concatenated with the training samples at the input of the last fully-connected layer, to equip the last layer with the discrimination capacity of known and unknown samples. The OSS is
S_OSS=e^s· t/e^s· t+∑_j=1^Ce^s· cos(θ_j,i)
where t and s follow the same definition in MLAS.
Most similar arts like AL <cit.> attempts to reduce misclassification by abandoning ambiguous training images. Differently, we focus on OSR and exploit a novel discriminative loss with feature-level descriptors for OSR.
§.§ Open Margin Cosine Loss (OMCL)
OMCL unifies MLAS and OSS into one formula, which is
L_OMCL=-1/(N+M)∑_i=1^N+M[𝕀_i log(S_cos) + λ𝕀_i log(S_MLAS) +λ(1-𝕀_i) log(S_OSS)]
𝕀_i equals 1 if the i-th sample is training data, and equals 0 if it belongs to the feature space descriptors. λ is a weight factor. Since the output of the channel C+1 is fixed as t, no extra weights W_C+1 are trained in the last fully-connected layer. As a result, OMCL does not increase the number of trainable weights in a neural network. During testing, just as in other works <cit.>, the maximum probability of known classes is taken as the index of unknowns, where a lower known probability indicates a high possibility of unknowns.
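A sketch of the complete objective, under our reading that the third term acts on the descriptor samples (i.e., with the complementary indicator), is given below; all function and variable names are illustrative and not the authors' implementation. It assumes z holds the features of N known training images with labels y in {0, …, C-1} and that the last-layer weights W are accessible.

import torch
import torch.nn.functional as F

def omcl_loss(z, y, W, s, m, t, lam, M):
    N, d = z.shape
    dev = z.device
    # feature-space descriptors: uniform in [-s, s]^d, rescaled to norm s
    # (only their direction matters for the cosine similarities below)
    desc = s * F.normalize(torch.empty(M, d, device=dev).uniform_(-1.0, 1.0), dim=1)

    cos_known = F.normalize(z, dim=1) @ F.normalize(W, dim=0)      # N x C
    cos_desc = F.normalize(desc, dim=1) @ F.normalize(W, dim=0)    # M x C
    thr_known = torch.full((N, 1), float(t), device=dev)
    thr_desc = torch.full((M, 1), float(t), device=dev)

    # S_cos term: plain cosine softmax on the known samples
    loss_cos = F.cross_entropy(s * cos_known, y)

    # S_MLAS term: subtract the margin m from the true-class cosine and
    # append the threshold channel before the softmax
    cos_margin = cos_known.clone()
    cos_margin[torch.arange(N, device=dev), y] -= m
    loss_mlas = F.cross_entropy(s * torch.cat([cos_margin, thr_known], 1), y)

    # S_OSS term: descriptors are pushed towards the (C+1)-th channel
    unknown = torch.full((M,), cos_desc.size(1), dtype=torch.long, device=dev)
    loss_oss = F.cross_entropy(s * torch.cat([cos_desc, thr_desc], 1), unknown)

    return (N * (loss_cos + lam * loss_mlas) + lam * M * loss_oss) / (N + M)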
§ RESULT
§.§ Datasets, Evaluation Metrics, and Implementation Details
Two datasets are adapted as new benchmarks for evaluating the OSR problem. Following protocols in natural images <cit.>, half of the classes are selected as known and reminders as unknowns. Since the grouping affects the results, it is randomly repeated K times, leading to K independent data trials. The average results of K trials are used for evaluation. The specific groupings are listed in the supplementary material, so that future works could follow it for fair comparisons.
BloodMnist contains 8 kinds of individual normal cells with 17,092 images <cit.>. Our setting is based on the closed set split and prepossessing from <cit.>. Classes are selected 5 rounds (K=5). In each trial, images belonging to 4 chosen classes are selected for training and closed-set evaluation. Images belonging to the other 4 classes in testing data are used for open set evaluation.
OCTMnist has 109,309 optical coherence tomography (OCT) images <cit.>, preprocessed following <cit.>. Among the 4 classes, 1 is healthy and the other 3 are retinal diseases. In data trail splitting, the healthy class is always in the known set, which is consistent with real circumstances, and trails equal to 3 (K=3).
Metrics: Following previous arts <cit.>, accuracy (ACC_c) validates closed set classification. Area Under the Receiver Operating Characteristic (AUROC_o), a threshold-independent value, measures the open set performances. Open Set Classification Rate (OSCR_o) <cit.>, considers both open set recognition and closed set accuracy, where a larger OSCR indicates better performance.
Implementation Details:
The classification network is ResNet18 <cit.>, optimized by Adam with an initial learning rate of 1e-3 and a batch size 64. The number of training epochs is 200 and 100 for BloodMnist and OCTMnist respectively because the number of training samples in BloodMnist is smaller. Margin m, threshold t, λ are experimentally set to -0.1, 0.1, and 0.5 respectively. Images are augmented by random crop, random horizontal flip, and normalization.
§.§ Comparison with State-of-the-art Methods
As demonstrated in Table <ref>, the proposed OMCL surpasses state-of-the-art models, including typical discriminative methods such as the baseline <cit.>, GCPL <cit.>, and RPL <cit.>; the latest generative model DIAS <cit.>; and ARPL+CS <cit.>, which hybridizes both. All methods are implemented based on their official code, and their best results after hyperparameter fine-tuning are reported. The results show that OMCL maintains closed-set accuracy while effectively recognizing unknowns.
§.§ Ablation Studies
Effectiveness of MLAS and OSS: Table <ref> demonstrates the respective contributions of MLAS and OSS in OMCL. Each of them enhances the performances and they could work complementarily to further improve performances.
Ablation Study of Adaptive Scaling Factor: Fig. <ref> a) demonstrates the effectiveness of the adaptive scaling factor. Quantitatively, the adaptive design surpasses a fixed one. Moreover, Fig. <ref> b) displays the scaling factor will converge to different values in different training trials. Both results demonstrate the effectiveness and the generalization capacity of the adaptive design.
Ablation Study of Hyperparameters t, m, and λ: Fig. <ref> a), b), and c) respectively show the influence on results when using different hyperparameters. t and m are the threshold and angular margin, presented in equation <ref>, and λ is the trade-off parameter in equation <ref> .
Ablation Study of M: Fig. <ref> d) illustrates the effect of the number of feature space descriptors upon results. The ratio 1:1 is experimentally validated as a proper ratio. Because a randomly generated descriptor could be extremely close to a known feature point, but classified as a novel category, which may disturb the training. If the number of descriptors is far more than that of the training samples (the 5 times shown in Fig. <ref> 4), the performance gets lower.
Feature Visualization: Fig. <ref> b) visualizes the t-SNE results of features z of both known and unknown classes after dimension reduction. For each class, 200 samples are visualized and the perplexity of the t-SNE is set to 30. It shows that OMCL could learn better intra-class compactness and inter-class separability. Moreover, samples of unknown classes tend to be pushed away from known classes, incidcating the effectiveness of our designs.
§ CONCLUSION
In this paper, two publicly available benchmark datasets are proposed for evaluating the OSR problem in medical fields. Besides, a novel method called OMCL is proposed, under the assumption that
known features could be assembled compactly in feature space and the sparse regions could be recognized as unknowns.
The OMCL unifies two mechanisms, MLAS and OSS, into a unified formula. The former reinforces intra-class compactness and inter-class separability of samples in the hyperspherical feature space, and an adaptive scaling factor is proposed to empower the generalization capability.
The latter opens a classifier by categorizing sparse regions as unknown using feature space descriptors.
Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. Compared to recent state-of-the-art methods, the proposed OMCL performs superior, measured by ACC, AUROC, and OSCR.
|
http://arxiv.org/abs/2307.04325v1 | 20230710033939 | Influence of Charge on Anisotropic Class-one Solution in Non-minimally Coupled Gravity | [
"M. Sharif",
"Tayyab Naseer"
] | gr-qc | [
"gr-qc"
] |
Influence of Charge on Anisotropic Class-one Solution in Non-minimally Coupled Gravity
M. Sharif^1 [email protected] and Tayyab Naseer^1,2 [email protected]
^1 Department of Mathematics and Statistics, The University of Lahore,
1-KM Defence Road Lahore, Pakistan.
^2 Department of Mathematics, University of the Punjab,
Quaid-i-Azam Campus, Lahore-54590, Pakistan.
=========================================================================================================================================================================================================================================================================================================
This paper studies charged star models associated with anisotropic
matter distribution in f(ℛ,𝒯,𝒬)
theory, where
𝒬=ℛ_ϕψ𝒯^ϕψ. For this
purpose, we take a linear model of this gravity as
ℛ+ζ𝒬, where ζ represents a coupling
constant. We consider a self-gravitating spherical geometry in the
presence of electromagnetic field and generate solution to the
modified field equations by using the “embedding class-one”
condition and 𝕄𝕀𝕋 bag model equation of state. The
observational data (masses and radii) of four different stellar
models like 4U 1820-30, SAX J 1808.4-3658, SMC X-4 and Her X-I is
employed to analyze the effects of charge on their physical
properties. Finally, the effect of the coupling constant is checked
on the viability, hydrostatic equilibrium condition and stability of
the resulting solution. We conclude that the considered models show
viable and stable behavior for all the considered values of charge
and ζ.
Keywords: f(ℛ,𝒯,ℛ_ϕψ𝒯^ϕψ) gravity; Stability;
Self-gravitating systems; Compact objects.
PACS: 04.50.Kd; 04.40.Dg; 04.40.-b.
§ INTRODUCTION
General Relativity (𝔾ℝ) is viewed as the best gravitational theory to tackle various challenges, yet it is not adequate to properly explain the rapid expansion of our cosmos. As a result, multiple extensions to 𝔾ℝ have been proposed to deal with mystifying problems such as dark matter and the accelerated cosmic expansion. Various cosmologists have pointed out that this expansion is driven by a large amount of an obscure force, named dark energy, which acts as anti-gravity and pushes stars as well as galaxies away from each other. The simplest extension to 𝔾ℝ is obtained by replacing the Ricci scalar ℛ with a generic function of ℛ in the geometric part of the Einstein-Hilbert action, named f(ℛ) theory <cit.>. There is a large body of literature <cit.>-<cit.> exploring the viability and stability of celestial structures in this theory.
Bertolami et al <cit.> introduced the concept of
matter-geometry coupling in f(ℛ) scenario by coupling
the effects of ℛ in the matter Lagrangian to study
self-gravitating objects. Such couplings have prompted many
researchers and hence several modifications of 𝔾ℝ (based
on the idea of coupling) have been suggested. The first
matter-geometry coupling was proposed by Harko et al
<cit.>, named as f(ℛ,𝒯) gravity, in which
𝒯 serves as trace of the energy-momentum tensor
(𝔼𝕄𝕋). The incorporation of 𝒯 in modified
functionals produces non-null divergence of the corresponding
𝔼𝕄𝕋 as opposed to 𝔾ℝ and f(ℛ)
theories. This coupling gravity offers several remarkable
astrophysical results <cit.>-<cit.>.
Haghani et al <cit.> suggested a more complicated theory
whose functional depends on ℛ, 𝒯 and
𝒬, where
𝒬≡ℛ_ϕψ𝒯^ϕψ.
They studied three different models of this theory to analyze their
physical viability. The insertion of
ℛ_ϕψ𝒯^ϕψ makes this theory
more effective than other modified theories such as
f(ℛ,𝕃_m) and f(ℛ,𝒯). The
reason is that it entails a strong non-minimal interaction between
geometry and the matter distribution in a self-gravitating object even
in scenarios where f(ℛ,𝒯) fails. For
instance, even when a compact interior has a trace-free
𝔼𝕄𝕋 (i.e., 𝒯=0), the particles can still experience
such a strong coupling. This theory provides a better understanding of
inflationary era of our cosmos as well as rotation curves of
galactic structures. Sharif and Zubair <cit.> adopted matter
Lagrangian as 𝕃_m=μ, -P to study thermodynamical laws
corresponding to two models ℛ+ζ𝒬 as well
as ℛ(1+ζ𝒬) and determined viability
constraints for them. The same authors <cit.> checked the
validity of energy bounds analogous to the above models and
concluded that only positive values of ζ fulfill weak energy
conditions.
Odintsov and Sáez-Gómez <cit.> demonstrated certain
cosmological solutions and confirmed that
f(ℛ,𝒯,𝒬) gravity supports the
ΛCDM model. Baffou et al <cit.> obtained numerical
solutions of Friedmann equations and perturbation functions with
respect to two peculiar modified models and explored their
stability. Sharif and Waseem <cit.> determined the
solutions and their stability for isotropic as well as anisotropic
configurations and concluded that 𝕃_m=P_r results in more
stable structures for the latter case. Yousaf et al
<cit.>-<cit.> employed the idea of orthogonal splitting of
the curvature tensor in this gravity and calculated some scalars in
the absence and presence of charge which help to understand the
structural evolution of self-gravitating bodies. Recently, we have
obtained physically acceptable solutions in this scenario through
multiple approaches <cit.>-<cit.>. The complexity factor and
two different evolutionary modes have also been discussed for a
self-gravitating object <cit.>.
Numerous investigations have been conducted in the context of
𝔾ℝ and its extended theories to examine how charge
influences the structural changes in celestial objects. Das et
al. <cit.> used the Reissner-Nordström metric as an exterior
geometry and calculated the solution of the equations coupled with
charge at the hypersurface. Sunzu et al <cit.> studied
several strange stars owning charged matter configuration in their
interiors with the help of mass-radius relation. Various authors
<cit.>-<cit.> observed that the presence of charge inside
physical systems usually makes them more stable over a wide range.
The state variables for isotropic or anisotropic quark bodies are
usually represented by energy density and pressure, that can be
interlinked through different constraints, one of them is the
𝕄𝕀𝕋 bag model equation of state
(𝔼o𝕊) <cit.>. It is well-known that
compactness of strange structures like RXJ 185635-3754, PSR 0943+10,
Her X-1, 4U 1820-30, SAX J 1808.4-3658 and 4U 1728-34, etc. can be
efficiently described by 𝕄𝕀𝕋 𝔼o𝕊,
whereas an 𝔼o𝕊 for neutron star fails in this
context <cit.>. In general, a vacuum comprises two states,
namely false and true, whose energy difference can be calculated through
the bag constant (𝔅). This model has extensively been
used by several researchers <cit.>-<cit.> to analyze the
internal composition of various quark bodies. Demorest et al
<cit.> discussed a particular strange star (namely, PSR
J1614-2230) and found that this class of massive objects can only be
supported by the 𝕄𝕀𝕋 bag model. Rahaman et al
<cit.> employed this model along with interpolating technique to
explore the mass and some other physical aspects of compact
structures.
The solution to the field equations in any gravitational theory can
be formulated by virtue of multiple techniques, such as the
consideration of a particular 𝔼o𝕊 or the
solution of metric potentials etc. A useful technique in this regard
is the embedding class-one condition which points out that an
n-dimensional space can always be embedded into a space of one
more dimension, i.e., n+1. Bhar et al <cit.> used an
acceptable metric potential to determine physically viable
anisotropic star models through this condition. Maurya et al
<cit.> employed this condition to calculate the solutions
corresponding to relativistic stars and also analyzed the effects of
anisotropy on these structures. Singh et al <cit.> formed a
non-singular solution for a spherically symmetric spacetime in terms
of a new metric function by using this technique. The decoupled
solutions for self-gravitating anisotropic systems have been
determined through class-one condition <cit.>. The same
condition has also been employed to modified theories. Singh
et al <cit.> used the embedding approach to study the
physical features of different compact stars in the context of
f(ℛ,𝒯) theory. Rahaman et al
<cit.> also discussed celestial structures through an embedding
approach in the same scenario and claimed that this modified theory
better explains such massive bodies. Various authors formulated
multiple acceptable class-one solutions in various backgrounds such
as f(ℛ), f(𝒢), f(ℛ,𝒯) and
f(𝒢,𝒯) theories <cit.>-<cit.>. Sharif
and his collaborators <cit.>-<cit.> extended this work in
f(𝒢) and Brans-Dicke scenarios, and obtained viable as
well as stable solutions.
In this paper, we study charged star models with anisotropic matter
distribution in the framework of
f(ℛ,𝒯,𝒬) theory. The paper has the
following format. The next section is devoted to the basic description
of modified theory and construction of the field equations
corresponding to a model ℛ+ζ𝒬. We assume
𝕄𝕀𝕋 bag model 𝔼o𝕊 and utilize
embedding condition to find radial metric potential from known
temporal component. The boundary conditions are given in section 3.
Section 4 explores the effects of electromagnetic field on several
physical characteristics of compact objects through graphical
analysis. Finally, we summarize all the results in section 5.
§ THE F(ℛ,𝒯,𝒬) GRAVITY
The action for this theory is obtained by inserting
f(ℛ,𝒯,𝒬) in place of ℛ
in the Einstein-Hilbert action (with κ=8π) as <cit.>
𝕀_f(ℛ,𝒯,𝒬)=∫√(-g){f(ℛ,𝒯,𝒬)/16π
+𝕃_m+𝕃_ℰ}d^4x,
where 𝕃_m and 𝕃_ℰ symbolize the
Lagrangian densities of matter configuration and electromagnetic
field, respectively. The corresponding field equations are
𝒢_ϕψ=𝒯_ϕψ^(EFF)=8π{1/f_ℛ-𝕃_mf_𝒬(𝒯_ϕψ+ℰ_ϕψ)
+𝒯_ϕψ^(𝒞)},
where 𝒢_ϕψ is the Einstein tensor,
𝒯_ϕψ^(EFF) can be termed as the
𝔼𝕄𝕋 in extended gravity, 𝒯_ϕψ is the
matter energy-momentum tensor and ℰ_ϕψ is the
electromagnetic tensor. The modified sector of this theory becomes
𝒯_ϕψ^(𝒞) = -1/8π(𝕃_mf_𝒬-f_ℛ)[(f_𝒯+1/2ℛf_𝒬)𝒯_ϕψ
+{ℛ/2(f/ℛ-f_ℛ)-𝕃_mf_𝒯
-1/2∇_σ∇_ω(f_𝒬𝒯^σω)}g_ϕψ
-1/2□(f_𝒬𝒯_ϕψ)-(g_ϕψ□-
∇_ϕ∇_ψ)f_ℛ
- 2f_𝒬ℛ_σ(ϕ𝒯_ψ)^σ
+∇_σ∇_(ϕ[𝒯_ψ)^σf_𝒬]
+2(f_𝒬ℛ^σω+f_𝒯g^σω)∂^2𝕃_m/∂ g^ϕψ∂ g^σω].
Here, f_ℛ, f_𝒯 and f_𝒬 are
the partial derivatives of f with respect to its arguments. Also,
□≡1/√(-g)∂_ϕ(√(-g)g^ϕψ∂_ψ)
and ∇_ω indicate the d'Alembert operator and covariant
derivative, respectively. We take a suitable choice of the matter
Lagrangian as
𝕃_m=-1/4𝒜_ϕψ𝒜^ϕψ
which leads to ∂^2𝕃_m/∂
g^ϕψ∂
g^σω=-1/2𝒜_ϕσ𝒜_ψω
<cit.>. Here,
𝒜_ϕψ=ω_ψ;ϕ-ω_ϕ;ψ
serves as the Maxwell field tensor and
ω_ψ=ω(r)δ^ψ_0 is termed as the four
potential. The violation of the equivalence principle is obvious in
this theory due to the arbitrary coupling between matter and
geometry which results in the disappearance of covariant divergence
of 𝔼𝕄𝕋 (<ref>) (i.e., ∇_ϕ𝒯^ϕψ≠ 0). Consequently, an additional force is
produced in the gravitational structure which causes non-geodesic
motion of test particles. Thus we have
∇^ϕ𝒯_ϕψ =2/2f_𝒯+ℛf_𝒬+16π[∇_ϕ(f_𝒬ℛ^σϕ𝒯_σψ)-𝒢_ϕψ∇^ϕ(f_𝒬𝕃_m)
-1/2∇_ψ𝒯^σω(f_𝒯g_σω+f_𝒬ℛ_σω)
+∇_ψ(𝕃_mf_𝒯)-8π∇^ϕℰ_ϕψ].
In the structural development of celestial bodies, anisotropy is
regarded as a basic ingredient which appears when there is a difference
between the radial and tangential pressures. In our cosmos, many stars
are likely to be interlinked with anisotropic fluid, thus this
factor becomes highly significant in the study of stellar models and
their evolution. The anisotropic 𝔼𝕄𝕋 is
𝒯_ϕψ=(μ+P_) 𝒦_ϕ𝒦_ψ+P_
g_ϕψ+(P_r-P_)𝒲_ϕ𝒲_ψ,
where the energy density, radial as well as tangential pressure,
four-vector and four-velocity are given by
μ, P_r, P_, 𝒲_ϕ and 𝒦_ϕ,
respectively. The trace of the field equations provides
3∇^ω∇_ω
f_ℛ-ℛ(𝒯/2f_𝒬-f_ℛ)-𝒯(8π+f_𝒯)+1/2∇^ω∇_ω(f_𝒬𝒯)
+∇_ϕ∇_ω(f_𝒬𝒯^ϕω)-2f+(ℛf_𝒬+4f_𝒯)𝕃_m
+2ℛ_ϕω𝒯^ϕωf_𝒬
-2g^ψξ∂^2𝕃_m/∂
g^ψξ∂
g^ϕω(f_𝒯g^ϕω+f_𝒬R^ϕω)=0.
For f_𝒬=0, this yields f(ℛ,𝒯)
theory, which can further be reduced to f(ℛ) gravity
when f_𝒯=0. The electromagnetic 𝔼𝕄𝕋 is
defined as
ℰ_ϕψ=1/4π[1/4g_ϕψ𝒜^σω𝒜_σω
-𝒜^ω_ϕ𝒜_ωψ],
and Maxwell equations are
𝒜^ϕψ_;ψ=4π𝒥^ϕ, 𝒜_[ϕψ;σ]=0,
where 𝒥^ϕ=ϖ𝒦^ϕ,
𝒥^ϕ and ϖ are the current and charge
densities, respectively. To examine the interior compact stars, we
take self-gravitating spherical spacetime as
ds^2=-e^ρ dt^2+e^α dr^2+r^2dθ^2+r^2sin^2θ
dφ^2,
where ρ=ρ(r) and α=α(r). The Maxwell equations
ω”+1/2r[4-r(ρ'+α')]ω'=4πϖ
e^ρ/2+α,
lead to
ω'=s/r^2e^ρ+α/2,
where s shows the presence of charge inside the geometry
(<ref>) and '=∂/∂ r. In this context, the
matter Lagrangian turns out to be 𝕃_m=s^2/2r^4.
Also, the four-vector and four-velocity in comoving framework are
𝒲^ϕ=δ^ϕ_1 e^-α/2, 𝒦^ϕ=δ^ϕ_0 e^-ρ/2,
satisfying 𝒦^ϕ𝒦_ϕ=-1 and
𝒲^ϕ𝒦_ϕ=0.
We consider a linear model as <cit.>
f(ℛ,𝒯,ℛ_ϕψ𝒯^ϕψ)=f_1(ℛ)+
f_2(ℛ_ϕψ𝒯^ϕψ)=ℛ+ζℛ_ϕψ𝒯^ϕψ,
where ζ is an arbitrary coupling constant. The nature of the
corresponding solution is found to be oscillatory (representing
alternating collapsing and expanding phases) for the case when
ζ > 0. On the other hand, ζ < 0 yields the cosmic scale
factor having a hyperbolic cosine-type dependence. The stability of
this model has been analyzed for isotropic/anisotropic
configurations through different schemes leading to some acceptable
values of ζ <cit.>. The factor 𝒬 of
this model becomes
𝒬 = e^-α[μ/4(2ρ”+ρ'^2-ρ'α'+4ρ'/r)+P_r/4(ρ'α'-ρ'^2
-2ρ”-4α'/r)
- P_(ρ'/r-α'/r-2e^α/r^2+2/r^2)].
The corresponding field equations (<ref>) take the form as
𝒢_ϕψ = ζ/1-ζ s^2/2r^4[(8π/ζ+1/2ℛ)𝒯_ϕψ
+8π/ζℰ_ϕψ+1/2{𝒬
-∇_σ∇_ω𝒯^σω}g_ϕψ
- 2ℛ_σ(ϕ𝒯_ψ)^σ-1/2𝒯_ϕψ
+∇_σ∇_(ϕ𝒯_ψ)^σ
-ℛ^σω𝒜_ϕσ𝒜_ψω].
The non-conservation of 𝔼𝕄𝕋 (<ref>) becomes
∇^ϕ𝒯_ϕψ =2ζ/ζℛ+16π[∇_ϕ(ℛ^σϕ𝒯_σψ)-1/2ℛ_σω∇_ψ𝒯^σω-1/2𝒯_ϕψ∇^ϕℛ-8π∇^ϕℰ_ϕψ
-𝒢_ϕψ∇^ϕ(𝕃_m)].
Equation (<ref>) leads to three non-zero components as
8πμ =e^-α[α'/r+e^α/r^2-1/r^2
+ζ{μ(3ρ'α'/8-ρ'^2/8
+α'/r+e^α/r^2-3ρ”/4-3ρ'/2r
-1/r^2)-μ'(α'/4-1/r-ρ')
+μ”/2+P_r(ρ'α'/8
-ρ'^2/8-ρ”/4+α'/2r+α”/2
-3α'^2/4)+5α'P'_r/4-P”_r/2
+P_(α'/2r-ρ'/2r+3e^α/r^2
-1/r^2)-P'_/r
+s^2/r^4(α'/2r-e^α/2r^2+1/2r^2+ρ'α'/8
-ρ'^2/8-ρ”/4-e^α/ζ)}],
8π
P_r =e^-α[ρ'/r-e^α/r^2+1/r^2
+ζ{μ(ρ'α'/8+ρ'^2/8
-ρ”/4-ρ'/2r)-ρ'μ'/4
-P_r(5ρ'^2/8-7ρ'α'/8+5ρ”/4-7α'/2r+ρ'/r-α'^2
-e^α/r^2+1/r^2)
+P'_r(ρ'/4+1/r)-P_(α'/2r-ρ'/2r+3e^α/r^2
-1/r^2)+P'_/r
+s^2/r^4(ρ'/2r+e^α/2r^2
-1/2r^2+ρ”/4+ρ'^2/8-ρ'α'/8+e^α/ζ)}],
8π
P_ =e^-α[1/2(ρ”+ρ'^2/2-ρ'α'/2
-α'/r+ρ'/r)
+ζ{μ(ρ'^2/8+ρ'α'/8-ρ”/4-ρ'/2r)
-μ'ρ'/4+P_r(ρ'^2/8+3α'^2/4-ρ'α'/8+ρ”/4-α'/2r
-α”/2)-5α'P'_r/4+P”_r/2
-P_(ρ'^2/4-ρ'α'/4+ρ”/2-α'/r+ρ'/r)
-P'_(α'/4-ρ'/4-3/r)+P”_/2
+s^2/r^4(ρ'α'/8-ρ'^2/8-ρ”/4
+α'/4r-ρ'/4r-e^α/ζ)}].
The explicit expressions for the matter variables are given in
Eqs.(<ref>)-(<ref>). In order to keep the system in
hydrostatic equilibrium, we can obtain the corresponding condition
from Eq.(<ref>) as
dP_r/dr+ρ'/2(μ
+P_r)-2/r(P_-P_r)-2ζ
e^-α/ζℛ+16π[ρ'μ/8(ρ'^2+2ρ”-ρ'α'+4ρ'/r)
-μ'/8(ρ'^2-ρ'α'+2ρ”+4ρ'/r)+P_r(5ρ'^2α'/8
-5ρ'α'^2/8-5α'^2/2r+7ρ”α'/4-ρ”'/2
-ρ'ρ”+ρ'α”/2+2α”/r+ρ'α'/r-α'/r^2
-ρ”/r+ρ'/r^2+2e^α/r^3-2/r^3)+P'_r/8(ρ'α'-2ρ”
-ρ'^2+4α'/r)+P_/r^2(α'-ρ'+2e^α/r
-2/r)-P'_/r(α'/2-ρ'/2
+e^α/r-1/r)
-(ss'/r^4-2s^2/r^5)(ρ'/r-e^α/r^2+1/r^2
+2e^α/ζ)]=0.
This represents the Tolman-Oppenheimer-Volkoff (𝕋𝕆𝕍)
equation in the extended framework which helps in analyzing the
structure and dynamics of self-gravitating celestial objects.
Misner-Sharp <cit.> provided the mass of a sphere as
m(r)=r/2(1-g^ϕψr_,ϕr_,ψ),
which leads to
m(r)=r/2(1-e^-α+s^2/r^2).
The non-linear system (<ref>)-(<ref>) contains six unknowns
ρ, α, μ, P_r, P_ and s, hence some constraints
are required to close the system. We investigate various physical
aspects of different quark bodies through a well-known
𝕄𝕀𝕋 bag model 𝔼o𝕊 which interrelates
the matter variables inside the geometry <cit.>. This constraint
has the form
P_r=1/3(μ-4𝔅).
The constant 𝔅 has been determined corresponding to
different stars <cit.> that are used in the analysis of
physical attributes of all the considered star models. The solution
of the modified field equations (<ref>)-(<ref>) along with
𝔼o𝕊 (<ref>) turns out to be
μ =[8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2-α”/8
-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)]^-1
×[3/4(1+ζ
s^2/2r^4)(α'/r+ρ'/r)+𝔅{8π
e^α-ζ(4α'/r-3ρ'^2/4-3ρ”/2+ρ'α'
α”/2+α'^2/4-ρ'/r+e^α/r^2-1/r^2)}],
P_r =[8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2
-α”/8-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)]^-1
×[1/4(1+ζ
s^2/2r^4)(α'/r+ρ'/r)-𝔅{8π
e^α-ζ(ρ'α'/2
+α'/r-2ρ'/r+e^α/r^2
-ρ”-1/r^2)}],
P_ =[8π
e^α+ζ(1/r^2-2e^α/r^2+ρ'^2/4+ρ”/2-ρ'α'/4+ρ'/r
-α'/r)]^-1[ρ'/2r-α'/2r
+ρ'^2/4-ρ'α'/4+ρ”/2+ζ{8π
e^α+ζ(9ρ”/8-e^α/r^2+1/r^2
-α”/8-5ρ'α'/8-α'^2/16
-7α'/2r+3ρ'^2/16+7ρ'/4r)}^-1{1/8r(1+ζ
s^2/2r^4)(2ρ'α'^2+ρ'^3-ρ”α'-ρ'ρ”
-α'α”-ρ'α”+3ρ'^2α'/2
-3ρ'^2/r+3α'^3/2-α'^2/r-4ρ'α'/r)+2π
e^α𝔅(ρ'α'
-2ρ”+2α”-3α'^2-2ρ'/r+2α'/r)
+ζ𝔅/16(10ρ”α”-5ρ'α'α”+11ρ'ρ”α'
-11ρ”α'^2-ρ'^2α”
-2ρ”ρ'^2-10ρ”^2-7ρ'^2α'^2/2
+ρ'^3α'/2-36ρ'α'^2/r-8ρ'^3/r
+11ρ'α'^3/2+16ρ'^2α'/r
+28ρ”α'/r-8α'α”/r+12α'^3/r+3ρ'^4/2
-8ρ'^2/r^2-8α”e^α/r^2
+8α”/r^2-20α'^2/r^2-24ρ'ρ”/r+52ρ'α'/r^2+10ρ'α”/r
-4e^αρ'α'/r^2+8e^αρ”/r^2-8ρ”/r^2
+12α'^2e^α/r^2-8ρ'/r^3
-8e^αα'/r^3+8α'/r^3+8e^αρ'/r^3)}]
+ζ
s^2/4r^4e^α(ρ'α'/2-ρ'^2/2-ρ”
+α'/r-ρ'/r-4e^α/ζ).
A comprehensive analysis has been done on the study of celestial
bodies configured with quark matter through 𝔼o𝕊
(<ref>) in 𝔾ℝ and other modified theories
<cit.>. We find the solution to the modified charged field
equations by employing this 𝔼o𝕊 and setting
values of the coupling constant as ζ=±5.
Eiesland <cit.> computed the necessary and sufficient condition
for an embedding class-one spacetime as
ℛ_1212ℛ_0303-ℛ_0101ℛ_2323+ℛ_1202ℛ_1303=0,
which leads to
ρ'^2-(ρ'-α')ρ'e^α-2(e^α-1)ρ”=0,
and hence
α(r)=ln(1+C_1ρ'^2e^ρ),
where C_1 is an integration constant. To evaluate α(r), we
consider the temporal metric function as <cit.>
ρ(r)=ln C_3+2C_2r^2.
Here, C_2 and C_3 are positive constants that need to be
determined. Lake <cit.> proposed the criteria to check the
acceptance of ρ(r) as ρ(r)|_r=0=ln
C_3, ρ'(r)|_r=0=0 and ρ”(r)|_r=0>0 everywhere in the
interior configuration (r=0 indicates center of the star). This
confirms the acceptance of the metric potential (<ref>). Using
Eq.(<ref>) in (<ref>), we obtain
α(r)=ln(1+C_2C_4r^2e^2C_2r^2),
where C_4=16C_1C_2C_3. Equations (<ref>)-(<ref>) in
terms of these constants take the form as given in Appendix
B.
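As a rough numerical illustration of this construction, the short Python sketch below evaluates both metric potentials, checks Lake's criteria at the center, and verifies the class-one relation; the numerical values of C_1, C_2 and C_3 are arbitrary placeholders rather than the constants fixed in the next section.
```python
import numpy as np

# Minimal sketch; C_1, C_2, C_3 are arbitrary placeholders here, whereas in
# the paper they are fixed by the boundary conditions of the next section.
C1, C2, C3 = 1.0, 0.003, 0.5
C4 = 16.0 * C1 * C2 * C3

def e_rho(r):          # temporal potential, e^rho = C_3 exp(2 C_2 r^2)
    return C3 * np.exp(2.0 * C2 * r**2)

def e_alpha(r):        # radial potential obtained from the class-one condition
    return 1.0 + C2 * C4 * r**2 * np.exp(2.0 * C2 * r**2)

r = np.linspace(0.0, 10.0, 201)
drho = 4.0 * C2 * r    # rho'(r) for rho = ln C_3 + 2 C_2 r^2

# Lake's criteria: rho'(0) = 0, rho''(0) = 4 C_2 > 0; regularity e^alpha(0) = 1
print("rho'(0) =", drho[0], " rho''(0) =", 4.0 * C2, " e^alpha(0) =", e_alpha(0.0))

# consistency with alpha = ln(1 + C_1 rho'^2 e^rho)
print("class-one relation holds:",
      np.allclose(e_alpha(r), 1.0 + C1 * drho**2 * e_rho(r)))
```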
§ BOUNDARY CONDITIONS
In order to understand the complete structural formation of massive
stars, we impose some conditions on the boundary surface, known as
the junction conditions. In this regard, several conditions have
been discussed in the literature, such as the Darmois, Israel and
Lichnerowicz junction conditions. The first of them requires the
continuity of the first and second fundamental forms between both
the interior and exterior regions at some fixed radius <cit.>.
On the other hand, Lichnerowicz junction conditions yield the
continuity of the metric and all first order partial derivatives of
the metric across Σ <cit.>. However, both of these
conditions are often stated to be equivalent, known as the
Darmois-Lichnerowicz conditions <cit.>. Since we need to
calculate three constants, thus we use these junction conditions to
increase the number of equations.
The choice of the exterior spacetime should be made on the basis
that the properties (such as static/non-static and
uncharged/charged) of the interior and exterior geometries can match
with each other at the hypersurface. Also, for model (<ref>),
the term ℛ_ϕψ𝒯^ϕψ does not
contribute to the current scenario. Therefore, we take the
Reissner-Nordström exterior metric as the most suitable choice
given by
ds^2=-(1-2M̅/r+S̅^2/r^2)dt^2+dr^2/(1-2M̅/r+S̅^2/r^2)
+r^2dθ^2+r^2sin^2θ dφ^2,
where S̅ and M̅ are the charge and mass of the
exterior region, respectively. We suppose that the metric potentials
(g_tt and g_rr components) and the first-order derivative
(g_tt,r) corresponding to the inner and outer geometries are
continuous across the boundary, leading to the following constraints
e^ρ(ℋ) =C_3e^2C_2ℋ^2=1-2M̅/ℋ+S̅^2/ℋ^2,
e^α(ℋ) =1+C_2C_4ℋ^2e^2C_2ℋ^2=(1-2M̅/ℋ
+S̅^2/ℋ^2)^-1,
ρ'(ℋ) =4C_2ℋ=2M̅ℋ-2S̅^2/ℋ(ℋ^2
-2M̅ℋ+S̅^2),
where ℋ denotes the boundary of a compact star.
Equations (<ref>)-(<ref>) are solved simultaneously so that
we obtain
C_1 = ℋ^4(2M̅ℋ-S̅^2)/4(M̅ℋ-S̅^2)^2,
C_2 = M̅ℋ-S̅^2/2ℋ^2(ℋ^2-2M̅ℋ+S̅^2),
C_3 = (ℋ^2-2M̅ℋ+S̅^2/ℋ^2)e^M̅ℋ-S̅^2/2M̅ℋ-ℋ^2-S̅^2,
C_4 = 2(2M̅ℋ-S̅^2)/M̅ℋ-S̅^2e^M̅ℋ-S̅^2/2M̅ℋ-ℋ^2-S̅^2.
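For illustration, these constants can be evaluated directly once a mass, radius and charge are specified; the Python sketch below uses placeholder values of M̅, ℋ and S̅ in geometrized units, not the actual entries of Tables 1-3.
```python
import numpy as np

# Placeholder stellar data in geometrized units; the actual masses, radii and
# charges of the four stars are those listed in Tables 1-3 of the paper.
M, H, S = 2.0, 9.5, 0.2          # mass, boundary radius, charge (assumed values)

num = M * H - S**2
C2 = num / (2.0 * H**2 * (H**2 - 2.0 * M * H + S**2))
C1 = H**4 * (2.0 * M * H - S**2) / (4.0 * num**2)
expo = np.exp(num / (2.0 * M * H - H**2 - S**2))
C3 = (H**2 - 2.0 * M * H + S**2) / H**2 * expo
C4 = 2.0 * (2.0 * M * H - S**2) / num * expo

print("C1 =", C1, " C2 =", C2, " C3 =", C3, " C4 =", C4)
print("Buchdahl limit 2M/H < 8/9 satisfied:", 2.0 * M / H < 8.0 / 9.0)
```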
The second fundamental form yields
P_r^Σ_=0, s^Σ_=S̅,
m^Σ_=M̅.
Equation (<ref>) provides the radial pressure inside a compact
star which must disappear at the hypersurface. This leads to the bag
constant in terms of Eqs.(<ref>)-(<ref>) as
𝔅 =[4ℋ^5(ζ(-4M̅^3ℋ+2M̅^2S̅^2
+10M̅S̅^2ℋ-5S̅^4-3S̅^2ℋ^2)
+8πℋ^4(ℋ(ℋ-2M̅)+S̅^2))]^-1[(ℋ(ℋ
-2M̅)+S̅^2)(-2M̅^2ℋ
+M̅(S̅^2+3ℋ^2)-2S̅^2ℋ)(ζS̅^2+2ℋ^4)].
We can evaluate the constants (C_1, C_2, C_3, C_4) as well
as bag constant through the experimental data (masses and radii) of
four strange stars <cit.> given in Table 1. Tables
2 and 3 present the values of these constants
for S̅=0.2 and 0.7, respectively. It is observed that all
these stars exhibit behavior consistent with Buchdahl's proposed
limit <cit.>, i.e., 2M̅/ℋ<8/9.
The solution to the field equations (<ref>)-(<ref>) is
obtained by applying some constraints. The values of matter
variables such as the energy density (at the core and boundary) and
central radial pressure along with the bag constant with respect to
different choices of the coupling constant (ζ=5, -5)
and charge (S̅=0.2, 0.7) are given in Tables
4-7. We obtain 𝔅 for different
stars as
* For ζ=5 and S̅=0.2: 116.27, 215.48, 235.81 and 113.18
MeV/fm^3.
* For ζ=5 and S̅=0.7: 115.15, 210.95, 226.74 and 109.69
MeV/fm^3.
* For ζ=-5 and S̅=0.2: 116.07, 215.01, 235.56 and 113.15
MeV/fm^3.
* For ζ=-5 and S̅=0.7: 114.94, 210.32, 226.07 and 109.58
MeV/fm^3.
Notice that the predicted range (60-80 MeV/fm^3
<cit.>) of the bag constant for which stars remain stable
does not contain the values computed above for the different cases
in this theory. Nevertheless, several experiments performed at the
CERN-SPS and RHIC revealed that a
density-dependent bag model could provide a much wider range of this
constant.
§ GRAPHICAL INTERPRETATION OF COMPACT STRUCTURES
This sector deals with the graphical analysis of different physical
attributes of anisotropic compact models coupled with
electromagnetic field. With the help of preliminary data presented
in Tables 1-3, the graphical nature of the
developed solution (<ref>)-(<ref>) is analyzed for different
parametric values. We check physical acceptance of the metric
potentials, anisotropic pressure, energy conditions and mass inside
all considered candidates. Since ζ is an arbitrary constant,
the analysis of the physical attributes of compact stars
corresponding to its different values would help us to explore the
effects of this theory. For this, we choose ζ=±5 and check
the stability of the modified gravity model (<ref>) and the
constructed solution. Further, the modified field equations still
involve one unknown, the interior charge; thus one can
either adopt a constraint to determine it or take a known form for it.
In this regard, we take the electric charge s(r) depending on the
radial coordinate as follows <cit.>
s(r)=S̅(r/ℋ)^3=kr^3,
where k is a constant with the dimension of inverse square length.
We find that the metric functions exhibit an increasing and
singularity-free behavior everywhere.
§.§ Study of Matter Variables
A solution can be considered physically acceptable if it exhibits
the maximum values of the state variables (pressure and energy density)
at the core of the celestial object, decreasing towards its boundary.
Figures 1-3 show the graphs of energy density,
radial and tangential pressures, respectively corresponding to each
star for two values of charge and k=0.001. We note that all stars
provide acceptable behavior of these quantities. Figure 1
shows that energy density increases by increasing the coupling
constant and decreasing charge. Figures 2 and
3 demonstrate the decreasing behavior of radial and
tangential pressures inside each star with the increase in charge as
well as ζ. The radial pressure vanishes at the boundary only
for ζ=-5. Tables 4-7 indicate that
structure of each star becomes more dense for ζ=5 and
S̅=0.2. We have checked the regular behavior of the developed
solution (dμ/dr|_r=0 = 0, dP_r/dr|_r=0 =
0, d^2μ/dr^2|_r=0 < 0, d^2P_r/dr^2|_r=0 <
0) and found that it is satisfied. In all plots of this paper, remember that
* Red (thick) line corresponds to ζ=-5 and S̅=0.2.
* Red (dotted) line corresponds to ζ=-5 and S̅=0.7.
* Black (thick) line corresponds to ζ=5 and S̅=0.2.
* Black (dotted) line corresponds to ζ=5 and S̅=0.7.
§.§ Behavior of Anisotropy
The solution (<ref>)-(<ref>) produces the anisotropy
(Δ=P_-P_r). We analyze the influence of charge on
anisotropy to study its role in structural development. The
anisotropy shows inward (decreasing) or outward (increasing)
directed behavior according to whether the radial pressure is greater
or less than the tangential component. Figure 4 depicts
that it disappears at the core and possesses increasing behavior in the interior of all
stars. It is also shown that a larger value of charge reduces the
anisotropy.
§.§ Effective Mass, Compactness and Surface Redshift
The sphere (<ref>) has an effective mass in terms of energy
density as
m(r)=1/2∫_0^ℋr^2μ dr,
where μ is provided in Eq.(<ref>). Equivalently,
Eq.(<ref>) along with (<ref>) yields
m(r)=r/2{r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)
(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)/r^2(S̅^2
-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)
-ℋ^2(ℋ^2-2M̅ℋ+S̅^2)}.
The increasing behavior of the mass towards the boundary with respect to
each candidate is shown in Figure 5, indicating that all
compact objects become more massive for ζ=5 and S̅=0.2.
An increment in charge results in a less massive structure. Some
physical quantities play a significant role in the study of the
evolution of compact objects; one of them is the mass-to-radius
ratio of a star, known as the compactness. This is given as
β(r)=m(r)/r=1/2{r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)
(r^2-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)/r^2(S̅^2
-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2-ℋ^2)/ℋ^2(ℋ^2
-2M̅ℋ+S̅^2)-ℋ^2(ℋ^2-2M̅ℋ+S̅^2)}.
Buchdahl <cit.> used the matching criteria at the hypersurface
and proposed that a feasible solution corresponding to a celestial
body must have its value less than 4/9 everywhere. A
massive object with sufficient gravitational pull undergoes certain
reactions and releases electromagnetic radiation. The surface
redshift quantifies the increase in the wavelength of that radiation,
given as
D(r)=-1+1/√(1-2β(r)),
which then leads to
D(r)=-1+√(r^2(S̅^2-2M̅ℋ)e^(M̅ℋ-S̅^2)(r^2
-ℋ^2)/ℋ^2(ℋ^2-2M̅ℋ+S̅^2)+ℋ^2(2M̅ℋ
-ℋ^2-S̅^2)/ℋ^2(2M̅ℋ-ℋ^2-S̅^2)).
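A minimal Python sketch of the profiles m(r), β(r) and D(r) defined above is given below, again with placeholder values of M̅, ℋ and S̅ standing in for the tabulated data.
```python
import numpy as np

M, H, S = 2.0, 9.5, 0.2          # placeholder mass, radius and charge, as before

def mass(r):
    # closed-form m(r) quoted above
    expo = np.exp((M * H - S**2) * (r**2 - H**2) / (H**2 * (H**2 - 2.0 * M * H + S**2)))
    num = r**2 * (S**2 - 2.0 * M * H) * expo
    return 0.5 * r * num / (num - H**2 * (H**2 - 2.0 * M * H + S**2))

r = np.linspace(1e-3, H, 200)
beta = mass(r) / r                           # compactness beta(r)
D = -1.0 + 1.0 / np.sqrt(1.0 - 2.0 * beta)   # surface redshift D(r)

print("beta(H) =", beta[-1], " D(H) =", D[-1])
print("Buchdahl bound beta < 4/9 everywhere:", np.all(beta < 4.0 / 9.0))
```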
For a feasible star model, Buchdahl calculated its upper limit as
2 for an isotropic interior, whereas it is 5.211 for an anisotropic
configuration <cit.>. Figures 6 and 7 show
graphs of both factors for each star that are consistent with the
required range for all values of ζ and charge (Tables
4-7). Moreover, these quantities increase with
increasing bag constant and decreasing charge.
§.§ Energy Conditions
A geometrical structure may contain normal or exotic matter in its
interior. In astrophysics, some constraints depending on the state
variables, known as energy conditions, are extensively used. The
verification of these conditions confirms the existence of normal
matter in a considered star as well as the viability of the developed
solution. These bounds are given as
* Null: μ+P_+s^2/4π r^4≥ 0, μ+P_r ≥ 0,
* Weak: μ+s^2/8π r^4≥ 0, μ+P_+s^2/4π r^4≥ 0, μ+P_r ≥ 0,
* Strong: μ+2P_+P_r+s^2/4π r^4≥ 0,
* Dominant: μ-P_≥ 0, μ-P_r+s^2/4π r^4≥ 0.
We observe from the graphs of matter variables (Figures
1-3) that they possess positive behavior. Also,
μ>P_r and μ>P_ everywhere in the domain, thus the
fulfilment of all the energy conditions is obvious, contradicting
the results found in <cit.>. However, we do not display their
plots here. Consequently, we can say that our resulting solution and
extended model (<ref>) are physically viable.
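Given radial profiles of the energy density, the two pressures and the charge function, the energy conditions listed above reduce to pointwise inequalities; the Python sketch below is schematic, with toy profiles standing in for the actual solution.
```python
import numpy as np

def energy_conditions(r, mu, pr, pt, s):
    """Pointwise check of the null, weak, strong and dominant conditions
    for a charged anisotropic fluid, as listed above (pt = tangential pressure)."""
    em = s**2 / (4.0 * np.pi * r**4)
    nec = (mu + pt + em >= 0) & (mu + pr >= 0)
    wec = nec & (mu + em / 2.0 >= 0)
    sec = (mu + 2.0 * pt + pr + em >= 0)
    dec = (mu - pt >= 0) & (mu - pr + em >= 0)
    return nec & wec & sec & dec

# toy profiles, for illustration only (decreasing density, P_r vanishing at r = H)
r = np.linspace(0.1, 9.5, 50)
mu = 1e-3 * (1.0 - 0.5 * (r / r[-1])**2)
pr = (mu - mu[-1]) / 3.0
pt = 1.1 * pr
s = 0.2 * (r / r[-1])**3
print("all energy conditions hold:", bool(np.all(energy_conditions(r, mu, pr, pt, s))))
```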
§.§ Tolman-Oppenheimer-Volkoff Equation
The generalized 𝕋𝕆𝕍 equation is already expressed in
Eq.(<ref>). We are required to plot the different forces involved
in this equation to check whether the model is in a stable equilibrium
condition or not <cit.>. To do this, the compact form of the
non-conservation equation in the presence of charge can be written
as
f_g+f_h+f_a=0,
where f_g, f_h and f_a are gravitational, hydrostatic and
anisotropic forces, respectively, defined as
f_g=-ρ'/2(μ+P_r),
f_h=-dP_r/dr+ss'/4π r^4,
f_a=2/r(P_-P_r).
Here, the effective matter variables are given in
Eqs.(<ref>)-(<ref>). Figure 8 exhibits the plots of
this equation, from which it can clearly be noticed that our
considered quark models are in hydrostatic equilibrium.
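Numerically, the balance of these three forces can be verified once the matter and metric profiles are available; the Python sketch below is schematic and assumes such profiles are supplied as arrays.
```python
import numpy as np

def tov_forces(r, mu, pr, pt, s, rho):
    """Gravitational, hydrostatic and anisotropic forces defined above;
    hydrostatic equilibrium requires f_g + f_h + f_a ~ 0 at every radius."""
    f_g = -0.5 * np.gradient(rho, r) * (mu + pr)
    f_h = -np.gradient(pr, r) + s * np.gradient(s, r) / (4.0 * np.pi * r**4)
    f_a = 2.0 * (pt - pr) / r
    return f_g, f_h, f_a
```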
§.§ Stability Analysis
The stability criteria help to understand the composition of
astronomical structures in our universe. Here, we check the stability of
the developed solution through two techniques.
§.§.§ Herrera Cracking Technique
The causality condition <cit.> states that the speed of sound in the
tangential and radial directions must lie between 0 and 1 for a
stable structure, i.e., 0 ≤ v_s^2 < 1 and 0 ≤
v_sr^2 < 1, where
v_s^2=dP_/dμ,
v_sr^2=dP_r/dμ.
Herrera <cit.> suggested a cracking approach according to which
the stable system must meet the condition 0 ≤|
v_s^2-v_sr^2| < 1 everywhere in its interior.
Figure 9 shows that our solution with respect to all
candidates is stable throughout.
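In practice, the two squared sound speeds can be estimated by finite differences of the radial profiles and the cracking condition checked pointwise, as in the schematic Python sketch below.
```python
import numpy as np

def cracking_stable(r, mu, pr, pt):
    """Herrera's criterion: 0 <= |v_st^2 - v_sr^2| < 1 everywhere in the interior."""
    dmu = np.gradient(mu, r)
    v_sr2 = np.gradient(pr, r) / dmu
    v_st2 = np.gradient(pt, r) / dmu
    diff = np.abs(v_st2 - v_sr2)
    return bool(np.all((diff >= 0.0) & (diff < 1.0)))
```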
§.§.§ Adiabatic Index
Another approach to check the stability is the adiabatic index
(Γ). Several researchers <cit.> studied the
stability of self-gravitating structures by utilizing this concept
and concluded that stable models have its value not less than
4/3 everywhere. Here, Γ is defined as
Γ=μ+P_r/P_r(dP_r/dμ)=μ+P_r/P_r(v_sr^2).
To overcome problems such as the occurrence of dynamical
instabilities inside the star, Moustakidis <cit.> recently
proposed a critical value of the adiabatic index depending on
certain parameters as
Γ_Crit=4/3+19/21β(r),
where the condition Γ≥Γ_Crit ensures the stability
of the compact structure. This condition has also been discussed for
decoupled class-one solutions <cit.>. Figures 10
and 11 depict the plots of Γ and Γ_Crit
for different values of charge corresponding to each quark star. We
observe that the criterion of this approach is fulfilled and thus
all the candidates show stable behavior.
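Note that, if the bag constant is treated as fixed, the linear EoS (<ref>) gives dP_r/dμ=1/3 along the radial direction, so Γ reduces to (μ+P_r)/(3P_r); the short Python sketch below compares this with Γ_Crit for toy values of μ, P_r and β(r).
```python
# Minimal sketch: with P_r = (mu - 4B)/3 and constant B, v_sr^2 = dP_r/dmu = 1/3,
# so Gamma = (mu + P_r)/(3 P_r); beta is the compactness of the previous section.
def adiabatic_index(mu, pr):
    return (mu + pr) / (3.0 * pr)

def gamma_crit(beta):
    return 4.0 / 3.0 + 19.0 / 21.0 * beta

# toy numbers for illustration only
mu, pr, beta = 8.0e-4, 1.0e-4, 0.25
print("Gamma =", adiabatic_index(mu, pr), " Gamma_crit =", gamma_crit(beta))
```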
§ FINAL REMARKS
In this paper, we have studied the influence of matter-geometry
coupling through the model ℛ+ζ𝒬 on four
charged anisotropic compact stars for the coupling constant
ζ=±5. We have adopted the matter Lagrangian proposed by
Haghani et al <cit.> which turns out to be
𝕃_m=s^2/2r^4. We have formulated the
corresponding equations of motion and non-conservation equation. We
have used the temporal metric function (<ref>) to determine the
radial metric potential (<ref>) through embedding class-one
condition and then found the solution (<ref>)-(<ref>) of the
modified field equations. The four unknowns (C_1,C_2,C_3,C_4) have
been determined at the hypersurface with the help of observed mass
and radius of each celestial object. We have used the preliminary
information of four compact stars, i.e., SAX J 1808.4-3658, 4U
1820-30, SMC X-4 and Her X-I (Table 1) to calculate
constants for different values of charge (Tables 2 and
3) as well as bag constant with respect to different
choices of ζ. We have found that the solution with respect to
each star is physically acceptable as state variables are maximum
(minimum) at the center (boundary). The mass of strange stars
exhibits increasing behavior for the given values of charge, bag
constant and ζ (Figure 5).
It is found that increasing the coupling constant and
decreasing the charge (i.e., ζ=5 and S̅=0.2) produce
denser interiors in this modified gravity. The compactness and
redshift parameters also provide acceptable behavior (Figures
6 and 7). We have obtained that our developed
solution is viable and stellar models contain normal matter.
Finally, we have checked the hydrostatic equilibrium condition and
the stability of the resulting solution through two criteria. We
conclude that our solution with respect to all the considered models
shows stable behavior for both values of charge as well as the considered
range of ζ (Figure 9). The adiabatic index and its
critical value also confirm their stability (Figures 10
and 11). These results are observed to be consistent with
<cit.>. It is worthwhile to mention here that all our results
reduce to 𝔾ℝ by choosing ζ=0.
§ APPENDIX A
The explicit expressions of the matter
variables are deduced from Eqs.(<ref>)-(<ref>) as
μ =-[4 r^4 ((χ _3 (ζχ _6+8 π
e^α)-ζχ _2 χ _7) (ζ ^2 χ _3
χ _5+ζχ _1 (ζχ _10+8 π
e^α)-8 π e^α
×(ζχ
_10+8 π e^α))+ζ(ζχ _3 χ
_5+χ _7 (ζχ _1-8 π e^α)) (ζχ _3 χ _9+χ _2 (ζχ _10
+8 π
e^α)))]^-1[4 ζ(ζχ _3
χ _9+χ _2 (ζχ _10+8 π e^α))
(χ _7 (r^2 (r
α'+e^α-1)
+ζ s^2 χ _4)+χ
_3 (r^2 (-e^α+r ρ '+1)+ζ s^2 χ
_8))+(χ _3 (ζχ _6+8 π e^α)-ζχ _2 χ _7)
(-ζ r^4 χ
_3 α 'ρ '+2 ζ r^4 χ _3 ρ”+ζ r^4 χ _3
ρ '^2-2 ζ r^3 χ _3 α '+32 π r^3 e^αα
'+2 ζ r^3 χ _3 ρ '
+4 ζ r^2 χ _10(r α '+e^α-1)-32 π r^2 e^α+32 π r^2
e^2 α+4 ζ s^2 χ _4 (ζχ _10+8 π
e^α)
+4 ζ ^2 s^2 χ _3 χ
_11)],
P_r =[4 r^4 (ζ ^3 (-χ
_3) χ _5 χ _6-ζ ^3 χ _3 χ _5 χ _9-8 πζ
^2 χ _3 χ _5 e^α+8 πζ ^2 χ _7 χ _9
e^α+ζ ^2 χ _2 χ _5
×(ζχ _7-ζχ _10-8 π e^α)+8 πζ ^2 χ
_6 χ _10 e^α-ζχ _1 (ζ ^2 χ _7 χ
_9+ζχ _6 (ζχ _10+8 π
e^α)
+8 π e^α(ζχ _10+8
π e^α))+64 π ^2 ζχ _6 e^2 α+64
π ^2 ζχ _10 e^2 α+512 π ^3 e^3
α)]^-1
×[ζχ _5 (4
(-ζχ _7+ζχ _10+8 π e^α)
(r^2 (r α '+e^α-1)+ζ s^2 χ
_4)-ζχ _3 (r^2
×(-2 r^2 ρ”-r^2ρ '^2+r α ' (r ρ '+2)-4 e^α+2 r
ρ '+4)+4 ζ s^2 χ _8-4 ζ s^2 χ
_11))
-ζχ _1 (ζ r^4 χ _7
α 'ρ '-2 ζ r^4 χ _7 ρ”-ζ r^4 χ _7 ρ
'^2+2 ζ r^3 χ _7 α '+32 π r^3 e^αρ '-2
ζ r^3 χ _7 ρ '
+4 ζ r^2 χ _10(-e^α+r ρ '+1)+32 π r^2 e^α-32 π r^2
e^2 α+4 ζ s^2 χ _8 (ζχ _10+8 π
e^α)
-4 ζ ^2 s^2 χ _7 χ
_11)-8 π e^α(-ζ r^4 χ _7 α ' ρ
'+2 ζ r^4 χ _7 ρ”+ζ r^4 χ _7 ρ '^2-2 ζ r^3
χ _7 α '
-32 π r^3 e^αρ '+2 ζ
r^3 χ _7 ρ '-4 ζ r^2 χ _10(-e^α+r ρ
'+1)-4 ζ s^2 χ _8 (ζχ _10+8 π e^α)
-32 π r^2 e^α+32 π r^2 e^2 α+4 ζ ^2 s^2 χ _7 χ _11)],
P_ =[4 r^4 (ζ ^3 (-χ _3) χ _5 χ
_6-ζ ^3 χ _3 χ _5 χ _9-8 πζ ^2 χ _3 χ _5
e^α+8 πζ ^2 χ _7 χ _9 e^α+ζ ^2 χ
_2 χ _5
×(ζχ _7-ζχ _10-8
π e^α)+8 πζ ^2 χ _6 χ _10
e^α-ζχ _1 (ζ ^2 χ _7 χ _9+ζχ _6
(ζχ _10+8 π e^α)
+8 π
e^α(ζχ _10+8 π e^α))+64 π
^2 ζχ _6 e^2 α+64 π ^2 ζχ _10 e^2
α+512 π ^3 e^3 α)]^-1
×[ζχ _5 (ζχ _2 (r^2 (-2 r^2 ρ”+r^2 (-ρ '^2)+r α '(r ρ '+2)-4
e^α+2 r ρ '+4)
+4 ζ s^2 χ _8-4
ζ s^2 χ _11)+4 (ζχ _6+ζχ _9+8 π
e^α) (r^2 (r α '+e^α-1)+ζ
s^2 χ _4))
+(8 π e^α-ζχ
_1) ((ζχ _6+8 π e^α) (r^3
(-α ' (r ρ '+2)+2 r ρ”+r ρ '^2+2 ρ
')
+4 ζ s^2 χ _11)+4 ζχ _9
(r^2 (-e^α+r ρ '+1)+ζ s^2 χ
_8))],
where
χ_1 =3ρ'α'/8-ρ'^2/8
+α'/r+e^α/r^2-3ρ”/4-3ρ'/2r-1/r^2,
χ_2 =ρ'α'/8-ρ'^2/8-ρ”/4+α'/2r+α”/2
-3α'^2/4,
χ_3 =α'/2r-ρ'/2r+3e^α/r^2-1/r^2,
χ_4 =α'/2r-e^α/2r^2+1/2r^2+ρ'α'/8
-ρ'^2/8-ρ”/4-e^α/ζ,
χ_5 =ρ'α'/8+ρ'^2/8-ρ”/4-ρ'/2r,
χ_6 =5ρ'^2/8-7ρ'α'/8+5ρ”/4-7α'/2r+ρ'/r-α'^2
-e^α/r^2+1/r^2,
χ_7 =α'/2r-ρ'/2r+3e^α/r^2-1/r^2,
χ_8 =ρ'/2r+e^α/2r^2
-1/2r^2+ρ”/4+ρ'^2/8-ρ'α'/8+e^α/ζ,
χ_9 =ρ'^2/8+3α'^2/4-ρ'α'/8+ρ”/4-α'/2r
-α”/2,
χ_10 =ρ'^2/4-ρ'α'/4+ρ”/2-α'/r+ρ'/r,
χ_11 =ρ'α'/8-ρ'^2/8-ρ”/4
+α'/4r-ρ'/4r-e^α/ζ.
§ APPENDIX B
Equations (<ref>)-(<ref>) in
terms of constants take the form as
μ =[r^4{16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1C_3
×
e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1C_3e^2C_2r^2
+3r^2)-272C_2^2C_1C_3r^2e^2C_2r^2
+2C_2(76C_1C_3e^2C_2r^2-3r^2)-23)}]^-1
×[4096𝔅C_2^6C_1^2C_3^2r^8e^4C_2r^2(2C_1C_3e^2C_2r^2(8π
r^2-ζ)-ζ r^2)-512C_2^5C_1^2C_3^2
×
r^4e^4C_2r^2(r^4(20ζ𝔅-6)-3ζ
s^2)+128C_2^4C_1^2 C_3^2r^2e^4C_2r^2{3ζ
s^2+96π𝔅r^6
+r^4(6-40ζ𝔅)}-16C_2^3C_1C_3r^2e^2C_2r^2(2r^4(14ζ𝔅-9)-9ζ
s^2)+8C_2^2
×{C_1C_3e^2C_2r^2(3ζ
s^2+96π𝔅r^6+r^4(6-40ζ𝔅))+3ζ𝔅r^6}+C_2{3ζ
s^2
+r^4(20ζ𝔅+6)}+16π𝔅r^4],
P_r =-[r^4{16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1
×
C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1e^2C_2r^2
× C_3+3r^2)-272C_2^2C_1C_3r^2e^2C_2r^2
+2C_2(76C_1C_3e^2C_2r^2-3r^2)-23)}]^-1
×[(16r^2
C_2^2C_1C_3e^2C_2r^2+1)(256C_2^4𝔅C_1C_3r^6e^2C_2r^2(2C_1e^2C_2r^2(8π
r^2-ζ)
× C_3-ζ
r^2)+32C_2^3C_1C_3r^2e^2C_2r^2(r^4(4ζ𝔅-2)-ζ s^2)+8C_2^2C_1C_3e^2C_2r^2
×(64π𝔅r^6-ζ
s^2-2r^4(6ζ𝔅+1))+C_2(r^4(24ζ𝔅-2)-ζ
s^2)+16π𝔅r^4)],
P_ =[r^4(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2(4π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2-ζ
C_2(8C_2C_1
×
C_3e^2C_2r^2-1)(16C_2^2C_1C_3r^2e^2C_2r^2+2C_2r^2+3))(16π(16C_2^2C_1C_3r^2e^2C_2r^2
+1)^3-ζ
C_2(4096C_2^5C_1^2C_3^2r^4e^4C_2r^2(2C_1C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4
× e^4C_2r^2+64C_2^3C_1C_3r^2e^2C_2r^2(44C_1C_3e^2
C_2r^2+3r^2)+2(76C_1C_3e^2C_2r^2-3r^2)
×
C_2-272C_2^2C_1C_3r^2e^2C_2r^2-23))]^-1[-67108864
C_1^5 C_3^5 e^10 C_2 r^2 r^10(2 C_1 C_3
×
e^2 C_2 r^2(8 π r^2-ζ)-r^2 ζ) (ζ𝔅 r^6+2 C_1 C_3 e^2 C_2 r^2 s^2 (8 π r^2-ζ)) C_2^14-C_1^5 C_3^5
× 8388608 e^10
C_2 r^2 r^10ζ(2 (3 ζ𝔅 +80 C_1 C_3
e^2 C_2 r^2π𝔅 -3) r^6-20 C_1 C_3 e^2 C_2 r^2ζ r^4
×𝔅 -s^2 (32 C_1 e^2 C_2
r^2π C_3+3 ζ) r^2+4 C_1 C_3 e^2 C_2 r^2 s^2 ζ) C_2^13-1048576 C_1^4 C_3^4
× e^8 C_2 r^2
r^6 (ζ((2-4 ζ𝔅 ) r^6-2 (1+4 π ) s^2
ζ r^2+s^2 ζ ^2) r^4+2 C_1 C_3 e^2 C_2 r^2(64 π
^2
× s^2 ζ r^4+8 π(2 r^8 (5 ζ𝔅 -1)-17 r^4 s^2 ζ)-ζ(2 (9 ζ𝔅 +5) r^6+s^2 ζ ^2-17ζ
× s^2
r^2)) r^2+8 C_1^2 C_3^2 e^4 C_2 r^2(8 π r^2-ζ) ((6 ζ𝔅 +2) r^6+112 π s^2 r^4-s^2 ζ
r^2
×(21+8 π )+s^2 ζ ^2 ))
C_2^12-262144 C_1^4 C_3^4 e^8 C_2 r^2 r^6 (ζ((94
ζ𝔅 -26) r^6-8
× (4+5 π ) s^2
ζ r^2+5 s^2 ζ ^2) r^2+4 C_1 C_3 e^2 C_2 r^2(128
π ^2 s^2 ζ r^4+ζ(-(44 ζ𝔅
+5)
× r^6-8 s^2 ζ r^2+s^2 ζ ^2)+4 π((68 ζ𝔅 -8) r^8+13 s^2 ζ r^4-6 s^2 ζ ^2
r^2))) C_2^11
-16384 C_1^2 C_3^2 e^4 C_2
r^2 r^4 (-s^2 ζ ^3 r^6+C_1 C_3 e^2 C_2 r^2ζ(22
r^6-2 (11+36 π ) s^2 ζ r^2
+17 s^2 ζ ^2)
r^4+4 C_1^2 C_3^2 e^4 C_2 r^2(640 π ^2 s^2 ζ r^4-8 π(20 r^8+49 s^2 ζ r^4+22 s^2 ζ ^2 r^2)
+ζ(4 (2 ζ𝔅 +7) r^6+47 s^2 ζ r^2+8 s^2
ζ ^2)) r^2+16 C_1^3 C_3^3 e^6 C_2 r^2(128 π ^2
s^2 (42 r^2
-5 ζ) r^4+8 π(20 (2
ζ𝔅 +1) r^8-231 s^2 ζ r^4+27 s^2 ζ ^2
r^2)-ζ(4 (13 ζ𝔅 +9)
×
r^6-154 s^2 ζ r^2+17 s^2 ζ^2))) C_2^10-4096
C_1^2 C_3^2 e^4 C_2 r^2 r^4 (-11 s^2 ζ ^3 r^4+C_1
C_3
× e^2 C_2 r^2ζ(2 (274 ζ𝔅 -51) r^6-40 (5+3 π ) s^2 ζ r^2+63 s^2 ζ
^2) r^2+4 C_1^2 C_3^2 e^4 C_2 r^2
×(2560
π ^2 s^2 ζ r^4+8 π(80 (2 ζ𝔅 -1) r^8+254
s^2 ζ r^4-129 s^2 ζ ^2 r^2)+ζ(-6
× (32 ζ𝔅 -23) r^6-340 s^2 ζ r^2+85 s^2
ζ^2))) C_2^9-256 C_1 C_3 e^2 C_2 r^2 r^2 (-3
s^2 ζ ^3
× r^6-2 C_1 C_3 e^2 C_2 r^2ζ((44 ζ𝔅 -34) r^6+2 (17+20 π) s^2 ζ r^2+69
s^2 ζ ^2) r^4-16
× C_1^2 C_3^2 e^4 C_2
r^2(-1280 π ^2 s^2 ζ r^4-ζ((142-44 ζ𝔅 ) r^6+66 s^2 ζ r^2+35 s^2 ζ ^2)
+16 π(20 (ζ𝔅 +1) r^8+8 s^2 ζ r^4+17 s^2
ζ ^2 r^2)) r^2+32 C_1^3 C_3^3 e^6 C_2 r^2(2560
π ^2 s^2
×(7 r^2-ζ) r^4+8 π(80 (ζ𝔅 +1) r^8-768 s^2 ζ r^4+115 s^2 ζ
^2 r^2)-ζ(2 (44 ζ𝔅
+75)
r^6-514 s^2 ζ r^2+85 s^2 ζ ^2))) C_2^8-64 C_1
C_3 e^2 C_2 r^2 r^2 (-13 s^2 ζ ^3 r^4+4 C_1 C_3
× e^2 C_2 r^2ζ(50 (4 ζ𝔅 -3) r^6+4
(-13+158 π ) s^2 ζ r^2-177 s^2 ζ ^2) r^2+32 C_1^2
C_3^2
× e^4 C_2 r^2(2560 π ^2 s^2 ζ
r^4+ζ((241-122 ζ𝔅 ) r^6-396 s^2 ζ
r^2+155 s^2 ζ ^2)+4 π
×(80 (ζ𝔅 -2) r^8+596 s^2 ζ r^4-319 s^2 ζ ^2
r^2))) C_2^7-8 (3 s^2 ζ ^3 r^6+8 C_1 C_3 e^2
C_2 r^2
×ζ ^2 (-28 𝔅 r^6+48
π s^2 r^2+9 s^2 ζ) r^4+32 C_1^2 C_3^2 e^4 C_2 r^2(1280 π ^2 s^2 ζ r^4+ζ
×((82-174
ζ𝔅 ) r^6+160 s^2 ζ r^2-123 s^2 ζ ^2)-8
π(40 (ζ𝔅 +1) r^8-26 s^2r^4
ζ -33 s^2 ζ ^2 r^2)) r^2+64 C_1^3 C_3^3 e^6 C_2
r^2(2560 π ^2 s^2 (7 r^2-ζ) r^4+32 π(20
r^8-173
× s^2 ζ r^4+25 s^2 ζ ^2
r^2)+ζ(4 (10 ζ𝔅 -29) r^6+400 s^2 ζ
r^2-57 s^2 ζ ^2))) C_2^6-8
×(19 s^2 ζ ^3 r^4+4 C_1 C_3 e^2 C_2 r^2ζ(5 (2
ζ𝔅 -17) r^6-56 s^2 ζ ^2+4s^2 ζ (13+123 π
)
× r^2 ) r^2+16 C_1^2 C_3^2 e^4 C_2 r^2(2560 π^2 s^2 ζ r^4+ζ((235-208 ζ𝔅
) r^6-366 s^2 ζ r^2
+120 s^2 ζ ^2)+4 π(40 (ζ𝔅 -4) r^8+644s^2 ζ r^4-299 s^2 ζ
^2 r^2))) C_2^5-2 (32 C_1^2
× e^4
C_2 r^2(128 π ^2 r^2 (42 r^2-5 ζ) s^2-4
π(40 (ζ𝔅 -1) r^6+324 s^2 ζ r^2-31 s^2
ζ ^2)
+ζ(3 (8 ζ𝔅 -5)
r^4+59 s^2 ζ)) C_3^2+4C_1 e^2 C_2 r^2(1280 π
^2 s^2 ζ r^4-32 π(2 (3 ζ𝔅
+5) r^8-13 s^2 ζ r^4-40 s^2 ζ ^2 r^2)- (4 (26 ζ𝔅 +25) r^6-376 s^2 ζ r^2+321 s^2 ζ
^2)
×ζ) C_3+r^2 ζ(-6 (2 ζ𝔅 +1) r^6+2 (3+28 π ) s^2 ζ r^2+133 s^2 ζ
^2)) C_2^4-2 (ζ
×((20 ζ𝔅 -33) r^6+2 (15+98 π ) s^2 ζ r^2+69 s^2 ζ
^2)+4 C_1 C_3 e^2 C_2 r^2(1280 π ^2 r^2
×ζ s^2+ζ(r^4 (73-114 ζ𝔅 )-120
s^2 ζ)+ (40 (ζ𝔅 -2) r^6+338 s^2 ζ
r^2-97
× s^2 ζ ^2)4 π))
C_2^3-(128 π ^2 r^2 ζ s^2-8 π(4 r^6-7 s^2 ζ
r^2-35 s^2 ζ^2)+32 π e^2 C_2 r^2
× C_1
C_3((4-8 ζ𝔅 ) r^4+224 π s^2 r^2-(31+16 π )
s^2 ζ)+ ((42 ζ𝔅 -38)
r^4+s^2
×73 ζ)ζ) C_2^2-4 π(8 (ζ𝔅 -1) r^4+(35+32 π ) s^2 ζ)
C_2-64 π ^2 s^2].
§ APPENDIX C
The resulting solution
(<ref>)-(<ref>) produces the anisotropy as
Δ =[r^4(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2(16π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3-ζ
C_2(4096
×
r^4C_2^5C_1^2C_3^2e^4C_2r^2(2C_1C_3e^2C_2r^2+r^2)+1024C_2^4C_1^2C_3^2r^4e^4C_2r^2+64C_2^3C_1C_3
× r^2e^2C_2r^2(44C_1C_3e^2C_2r^2+3r^2)-272
C_2^2C_1C_3r^2e^2C_2r^2+2(76C_1C_3e^2C_2r^2
-3r^2)C_2-23))]^-1[(16C_2^2C_1C_3r^2e^2C_2r^2+1)^3
(256C_2^4𝔅C_1C_3r^6e^2C_2r^2(C_1C_3
×2e^2C_2r^2(8π r^2-ζ)-ζ
r^2)+32C_2^3C_1C_3r^2e^2C_2r^2(r^4(4ζ𝔅-2)-ζ
s^2)+8C_2^2
×
C_1C_3e^2C_2r^2(64π𝔅r^6-2r^4(6ζ𝔅+1)-ζ
s^2)+C_2(r^4(24ζ𝔅-2)-ζ
s^2)
+16π
r^4𝔅)-{4π(16C_2^2C_1C_3r^2e^2C_2r^2+1)^2-ζ
C_2(8C_2C_1C_3e^2C_2r^2-1)(C_2^2
×
16C_1C_3r^2e^2C_2r^2+2C_2r^2+3)}^-1{67108864
C_1^5 C_3^5 e^10 C_2 r^2 r^10(2 C_1 C_3 e^2 C_2
r^2
×(8 π r^2-ζ)-r^2 ζ)
(ζ𝔅 r^6+2 C_1 C_3 e^2 C_2 r^2 s^2 (8 π
r^2-ζ)) C_2^14+8388608 C_1^5
×
C_3^5 e^10 C_2 r^2r^10ζ(2 (3 ζ𝔅
+80 C_1 C_3 e^2 C_2 r^2π𝔅 -3) r^6-20 C_1 C_3
e^2 C_2 r^2ζ𝔅 r^4
-s^2 (32 C_1 e^2
C_2 r^2π C_3+3 ζ) r^2+4 C_1 C_3 e^2 C_2 r^2 s^2 ζ) C_2^13+1048576 C_1^4 C_3^4 e^8 C_2 r^2
×
r^6 (ζ(r^6(2-4 ζ𝔅 ) -2 (1+4 π ) s^2
ζ r^2+s^2 ζ ^2) r^4+2 C_1 C_3 e^2 C_2 r^2(64 π
^2 s^2 ζ r^4
+8 π(2 r^8 (5 ζ𝔅 -1)-17 r^4 s^2 ζ)-ζ(2 (9 ζ𝔅 +5) r^6-17 s^2 ζ r^2+s^2 ζ ^2))
r^2
+8 C_1^2 C_3^2 e^4 C_2 r^2(8 π r^2-ζ) ((6 ζ𝔅 +2) r^6+112 π s^2 r^4-(21+8 π )
s^2 ζ r^2+s^2
×ζ
^2))C_2^12+262144 C_1^4 C_3^4 e^8 C_2 r^2 r^6 (ζ
r^2 ((94 ζ𝔅 -26) r^6+5 s^2 ζ ^2-8s^2 ζ
r^2 (5π
+4)) +4 C_1 C_3 e^2 C_2 r^2(128 π
^2 s^2 ζ r^4+ζ(-(44 ζ𝔅 +5) r^6-8 s^2
ζ r^2+s^2 ζ ^2)
+4 π((68 ζ𝔅 -8) r^8+13 s^2 ζ r^4-6 s^2 ζ ^2
r^2))) C_2^11+16384 C_1^2 C_3^2 e^4 C_2 r^2
r^4
×(-s^2 ζ ^3 r^6+C_1 C_3 e^2 C_2 r^2ζ(22 r^6-2 (11+36 π ) s^2 ζ r^2+17 s^2 ζ ^2)
r^4+4 C_1^2 C_3^2
× e^4 C_2 r^2(640 π ^2
s^2 ζ r^4-8 π(20 r^8+49 s^2 ζ r^4+22 s^2 ζ ^2
r^2)+ζ(4 (2 ζ𝔅 +7) r^6
+47
s^2 ζ r^2+8 s^2 ζ ^2)) r^2+16 C_1^3 C_3^3 e^6 C_2
r^2(128 π ^2 s^2 (42 r^2-5 ζ) r^4+8 π(20
(2 ζ
×𝔅 +1) r^8-231 s^2 ζ
r^4+27 s^2 ζ ^2 r^2)-ζ(4 (13 ζ𝔅
+9) r^6-154 s^2 ζ r^2+17
× s^2
ζ^2))) C_2^10+4096 C_1^2 C_3^2 e^4 C_2 r^2 r^4
(-11 s^2 ζ ^3 r^4+C_1 C_3 e^2 C_2 r^2ζ(2r^6 (274
ζ𝔅
-51) -40 (5+3 π ) s^2 ζ r^2+63
s^2 ζ ^2) r^2+4 C_1^2 C_3^2 e^4 C_2 r^2(2560 π ^2
s^2 ζ r^4+8 π(80
× (2 ζ𝔅
-1) r^8+254 s^2 ζ r^4-129 s^2 ζ ^2 r^2)+ζ(-6
(32 ζ𝔅 -23) r^6-340 s^2
×ζ
r^2+85 s^2 ζ^2))) C_2^9+256 C_1 C_3 e^2 C_2 r^2
r^2 (-3 s^2 ζ ^3 r^6-2 C_1 C_3 e^2 C_2 r^2ζ((44
ζ𝔅
-34) r^6+2 (17+20 π) s^2 ζ
r^2+69 s^2 ζ ^2) r^4-16 C_1^2 C_3^2 e^4 C_2 r^2(-1280
π ^2 s^2 ζ r^4
-ζ((142-44 ζ𝔅 ) r^6+66s^2 ζ r^2+35 s^2 ζ ^2)+16 π(20 (ζ𝔅 +1) r^8+8 s^2 ζ r^4
+17
s^2 ζ ^2 r^2)) r^2+32 C_1^3 C_3^3e^6 C_2 r^2(2560
π ^2 s^2 (7 r^2-ζ) r^4+8 π(80 (ζ𝔅 +1) r^8
-768 s^2 ζ r^4+115 s^2 ζ ^2
r^2)-ζ(2 (44 ζ𝔅 +75) r^6-514 s^2 ζ
r^2+85 s^2 ζ ^2))) C_2^8
+64 C_1 C_3
e^2 C_2 r^2 r^2 (-13s^2 ζ ^3 r^4+4 C_1 C_3 e^2 C_2 r^2ζ(50 (4 ζ𝔅 -3) r^6+(158π-13)
× 4s^2 ζ r^2-177 s^2 ζ ^2) r^2+32 C_1^2 C_3^2e^4
C_2 r^2(2560 π ^2 s^2 ζ r^4+ζ((241-122 ζ𝔅 ) r^6
-396 s^2 ζ r^2+155 s^2 ζ
^2)+4 π(80(ζ𝔅 -2) r^8+596 s^2 ζ
r^4-319 s^2 ζ ^2 r^2))) C_2^7
+8 (3
s^2 ζ ^3 r^6+8 C_1 C_3 e^2 C_2 r^2ζ ^2(-28
𝔅 r^6+48 π s^2 r^2+9 s^2 ζ) r^4+32 e^4 C_2
r^2 C_1^2
× C_3^2 (1280 π ^2 s^2 ζ
r^4+ζ((82-174 ζ𝔅 ) r^6+160 s^2 ζ
r^2-123 s^2 ζ ^2)-8 π(40 (ζ
×𝔅 +1) r^8-26 s^2 ζ r^4-33 s^2 ζ ^2
r^2))r^2+64 C_1^3 C_3^3 e^6 C_2 r^2(2560 π ^2 s^2
(7 r^2-ζ)
× r^4+32 π(20 r^8-173
s^2 ζ r^4+25 s^2 ζ ^2r^2)+ζ(4 (10 ζ𝔅 -29) r^6+400 s^2 ζ r^2
-57 s^2 ζ
^2))) C_2^6+8 (19 s^2 ζ ^3 r^4+4 C_1 C_3e^2 C_2
r^2ζ(5 (2 ζ𝔅 -17) r^6+4s^2 ζ r^2
(13
+123 π )-56 s^2 ζ ^2) r^2+16 C_1^2 C_3^2
e^4 C_2 r^2(2560 π ^2 s^2 ζ r^4+ζ((235-208
ζ𝔅 ) r^6
-366 s^2 ζ r^2+120 s^2
ζ ^2)+4 π(40 (ζ𝔅 -4) r^8+644s^2 ζ
r^4-299 s^2 ζ ^2 r^2))) C_2^5
+2 (32
C_1^2 e^4 C_2 r^2(128 π ^2 r^2 (42 r^2-5 ζ)
s^2-4 π(40 (ζ𝔅 -1) r^6+324 s^2 ζ
r^2
-31 s^2 ζ ^2)+ζ(3 (8 ζ𝔅 -5) r^4+59 s^2 ζ)) C_3^2+4C_1 e^2 C_2
r^2(1280 π ^2 s^2 ζ r^4-32 π
×(2
(3 ζ𝔅 +5) r^8-13 s^2 ζ r^4-40 s^2 ζ ^2
r^2)-ζ(4(26 ζ𝔅 +25) r^6-376 s^2 ζ
r^2
+321 s^2 ζ ^2)) C_3+r^2 ζ(-6 (2
ζ𝔅 +1) r^6+2 (3+28 π ) s^2 ζ r^2+133 s^2 ζ
^2)) C_2^4
+2 (ζ((20 ζ𝔅 -33) r^6+2 (15+98 π ) s^2 ζ r^2+69 s^2 ζ
^2)+4 C_1 C_3e^2 C_2 r^2(1280 π ^2
×
r^2 ζ s^2+ζ(r^4 (73-114 ζ𝔅 )-120 s^2
ζ)+4 π(40 (ζ𝔅 -2) r^6+338 s^2 ζ
r^2
-97 s^2 ζ ^2))) C_2^3+(128 π
^2 r^2 ζ s^2-8 π(4 r^6-7 s^2 ζ r^2-35 s^2 ζ
^2)+32 C_1 C_3 e^2 C_2 r^2
×π((4-8
ζ𝔅 ) r^4+224 π s^2 r^2-(31+16 π ) s^2 ζ)+ζ((42 ζ𝔅 -38) r^4+73
s^2
×ζ)) C_2^2+4 π(8 (ζ𝔅-1) r^4+(35+32 π ) s^2 ζ) C_2+64 π ^2
s^2}].
1 Buchdahl H A 1970 Mon. Not. R. Astron. Soc.
150 1
2 Nojiri S and Odintsov S D 2003 Phys. Rev. D
68 123512
2b Song Y S, Hu W and Sawicki I 2007 Phys.
Rev. D 75 044004
2d Sharif M and Yousaf Z 2013 Mon. Not.
R. Astron. Soc. 434 2529
2f Astashenok A V, Capozziello S and
Odintsov S D 2014 Phys. Rev. D 89 103509
10 Bertolami O et al 2007 Phys. Rev. D 75 104016
20 Harko T et al 2011 Phys. Rev. D 84 024020
21 Sharif M and Zubair M 2013 J. Exp. Theor. Phys.
117 248
21a Shabani H and Farhoudi M 2013 Phys. Rev. D
88 044048
21e Sharif M and Siddiqa A 2017 Eur. Phys. J.
Plus 132 529
21f Das A et al 2017 Phys. Rev. D
95 124011
22 Haghani Z et al 2013 Phys. Rev. D 88 044023
22a Sharif M and Zubair M 2013 J. Cosmol. Astropart. Phys. 11 042
22b Sharif M and Zubair M 2013 J. High Energy Phys. 12 079
23 Odintsov S D and Sáez-Gómez D 2013 Phys. Lett. B 725 437
25 Baffou E H, Houndjo M J S and Tosssa J 2016 Astrophys. Space Sci. 361 376
25a Sharif M and Waseem A 2016 Eur. Phys. J. Plus
131 190
25a1 Sharif M and Waseem A 2016 Can. J. Phys. 94 1024
26 Yousaf Z, Bhatti M Z and Naseer T 2020 Eur. Phys. J.
Plus 135 353
26a Yousaf Z, Bhatti M Z and Naseer T 2020 Phys. Dark
Universe 28 100535
26b Yousaf Z, Bhatti M Z and Naseer T 2020 Int. J. Mod. Phys. D 29 2050061
26c Yousaf Z, Bhatti M Z and Naseer T 2020 Ann.
Phys. 420 168267
26d Yousaf Z et al 2020 Phys. Dark Universe 29 100581
26e Yousaf Z et al 2020 Mon. Not.
R. Astron. Soc. 495 4334
27 Sharif M and Naseer T 2021 Chin. J. Phys.
73 179
27a1 Sharif M and Naseer T 2022 Phys. Scr. 97 055004
27a1a Sharif M and Naseer T 2022 Pramana 96
119
27a2 Sharif M and Naseer T 2022 Int. J. Mod. Phys. D
31 2240017
27a3 Naseer T and Sharif M 2022 Universe 8 62
27aa Sharif M and Naseer T 2022 Chin. J. Phys.
77 2655
27aaa Sharif M and Naseer T 2022 Eur. Phys. J. Plus
137 947
27a Das B et al 2011 Int. J. Mod. Phys. D 20 1675
27b Sunzu J M, Maharaj S D and Ray S 2014 Astrophys. Space Sci. 352 719
27e Gupta Y K and Maurya S K 2011 Astrophys.
Space Sci. 332 155
27h Sharif M and Sadiq S 2016 Eur. Phys. J. C 76 568
27i Sharif M and Majid A 2021 Phys. Dark Universe 32 100803
33a Bordbar G H and Peivand A R 2011 Res. Astron. Astrophys. 11 851
33b Haensel P, Zdunik J L and Schaefer R 1986 Astron.
Astrophys. 160 121
34 Cheng K S, Dai Z G and Lu T 1998 Int. J.
Mod. Phys. D 7 139
34a Mak M K and Harko T 2002 Chin. J.
Astron. Astrophys. 2 248
34b Demorest P B et al (2010) Nature 467 1081
35 Rahaman F et al 2014 Eur. Phys. J. C 74 3126
36 Bhar P et al 2016 Eur. Phys. J. A 52 312
37 Maurya S K et al 2016 Eur. Phys. J. C
76 266
37a1 Maurya S K et al 2016 Eur. Phys. J. C 76 693
37b Singh K N, Bhar P and Pant N 2016 Astrophys. Space Sci. 361
339
37c Tello-Ortiz F, Maurya S K and Gomez-Leyton Y 2020 Eur. Phys. J. C 80 324
37d Dayanandan B, Smitha T T and Maurya S K 2021 Phys. Scr. 96 125041
37da Singh K N et al 2020 Chinese Phys. C
44 105106
37db Rahaman M et al 2020 Eur. Phys. J. Plus
80 272
dd Deb D et al 2019 Mon. Not. R.
Astron. Soc. 485 5652
ee Maurya S K et al 2019 Phys. Rev. D
100 044014
ff Mustafa G et al 2020 Chin. J. Phys. 67 576
gg Maurya S K et al 2020 Eur. Phys. J. Plus 135 824
gga Mustafa G et al 2021 Phys. Dark
Universe 31 100747
ggb Maurya S K, Tello-Ortiz F and Ray S 2021 Phys. Dark
Universe 31 100753
ggc Mustafa G et al 2021 Eur. Phys. J. Plus 136
166
ggd Maurya S K, Singh K N and Nag R 2021 Chin. J. Phys.
74 313
gge Adnan M et al 2022 Int. J. Mod. Phys. D
19 2250073
ggf Sarkar S, Sarkar N and Rahaman F 2022 Chin. J. Phys. 77
2028
38 Sharif M and Waseem A 2018 Eur. Phys. J. C 78 868
38c Sharif M and Majid A 2020 Eur. Phys. J. Plus
135 558
38a2 Sharif M and Saba S 2020 Chin. J. Phys. 64 374
41b Misner C W and Sharp D H 1964 Phys. Rev. 136 B571
41f Kalam M et al 2013 Int. J. Theor. Phys.
52 3319
41f1 Arbañil J D V and Malheiro M 2016 J.
Cosmol. Astropart. Phys. 11 012
41fa Biswas S et al 2019 Ann. Phys. 409 167905
41fb Sharif M and Ramzan A 2020 Phys. Dark Universe
30 100737
41i Eiesland J 1925 Trans. Am. Math. Soc. 27 213
41j Lake K 2003 Phys. Rev. D 67 104015
41ja Darmois G 1927 Les
equations de la gravitation einsteinienne
41jb Lichnerowicz A 1955 Théories Relativistes de la Gravitation et de l'Electromagnétisme Masson,
Paris
41jc Lake K 2017 Gen. Relativ. Gravit. 49 134
41k Dey M et al 1998 Phys. Lett. B 438 123
42a Buchdahl H A 1959 Phys. Rev. 116 1027
aaa Farhi E and Jaffe R L 1984 Phys. Rev. D 30 2379
bbb Alcock C, Farhi E and Olinto A 1986 Astrophys.
J. 310 261
hh Gangopadhyay T et al 2013 Mon. Not.
R. Astron. Soc. 431 3216
hha Deb D et al 2019 J. Cosmol. Astropart. Phys.
10 070
hhb de Felice F, Yu Y and Fang J 1995 Mon. Not.
R. Astron. Soc. 277 L17
42b Ivanov B V 2002 Phys. Rev. D 65 104011
42d Abreu H, Hernandez H and Nunez L A 2007 Class. Quantum Gravit. 24 4631
42e Herrera L 1992 Phys. Lett. A 165 206
42f Heintzmann H and Hillebrandt W 1975 Astron.
Astrophys. 38 51
42g Moustakidis C C 2017 Gen. Relativ. Gravit. 49 68
|
http://arxiv.org/abs/2307.04141v1 | 20230709100539 | Gray-body factor and absorption of the Dirac field in ESTGB gravity | [
"Qian Li",
"Chen Ma",
"Yu Zhang",
"Zhi-Wen Lin",
"Peng-Fei Duan"
] | gr-qc | [
"gr-qc"
] |
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
[email protected] Corresponding author Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan 650500, China.
City College, Kunming University of Science and Technology, Kunming, Yunnan 650051, China.
The gray-body factor and the absorption cross section of the 4D ESTGB gravity with a model of nonlinear electrodynamics for the massless Dirac field are studied in this paper. The magnetic charge varies between -2^(5/3)/3 and 0, while the ADM mass is set to 1, which corresponds to a non-extreme black hole. The gray-body factor is obtained using the semi-analytic WKB method after solving the massless Dirac equation. As the absolute value of the magnetic charge increases, the gray-body factor γ(ω) decreases. In addition, the partial absorption cross section and the total absorption cross section are calculated by using the partial wave method. We find that the maximum value of the partial absorption cross section decreases as κ increases, and that the presence of magnetic charge diminishes the total absorption cross section. Finally, we find that the absorption cross section of the Dirac field is more sensitive to electric charge than to magnetic charge by comparing the absorption cross sections of the Reissner-Nordström and ESTGB-NLED black holes.
Gray-body factor and absorption of the Dirac field in ESTGB gravity
Peng-Fei Duan
August 12, 2023
===================================================================
§ INTRODUCTION
In the decades since Einstein's general theory of relativity predicted the existence of black holes, research on their properties has gradually become a frontier topic in astrophysics. An important feature of black holes is that no information can escape from their event horizons. As a result, for decades the existence of black holes could only be inferred through indirect methods. Nevertheless, with the development of technology, the Event Horizon Telescope Collaboration has successfully captured the first image of a black hole at the center of the M87 galaxy <cit.>. This picture directly proves the existence of black holes in the universe. However, many phenomena cannot be explained by general relativity alone. These phenomena include, but are not limited to, the accelerated expansion of the universe <cit.>, the unification of gravity with the laws of quantum physics <cit.>, and the flatness of the rotation curves of spiral galaxies <cit.>. Accordingly, this leaves wide room for modified and alternative theories of gravity, such as the so-called extended scalar-tensor-Gauss-Bonnet (ESTGB) theory <cit.>, which is a theory in four dimensions. Specifically, the scalar field is coupled with the Gauss-Bonnet invariant in ESTGB to avoid the Ostrogradsky instability <cit.>. Black hole solutions have been presented by solving the complicated field equations in ESTGB gravity without a matter field in four dimensions. In addition, numerical solutions have also been given in the context of different matter fields, for instance, a massive scalar <cit.>, the charged case <cit.>, dilatonic <cit.>, multi-scalar <cit.>, and a particular form of nonlinear electrodynamics <cit.>.
A black hole is not an isolated system and will interact with its surrounding environment. These interactions give rise to interesting phenomena, such as radiation, absorption, and scattering. Therefore, we can study how black holes interact with their surroundings to obtain information about these special objects. In addition, experiments probing black holes rely largely on gravitational-wave astronomy, shadow images and X-ray spectroscopy, and all three aspects depend, in a way, on the effect of black holes on their environment. As is well known, accretion plays a non-negligible role in the phenomenology of active galactic nuclei <cit.>. Accretion of fundamental fields, i.e., scalar, electromagnetic and Dirac fields, is usually associated with the study of absorption cross sections. It is therefore necessary to investigate the absorption of waves and particles by black holes. Theorists began to study scattering-related problems in the 1960s. Moreover, the gray-body factor helps us understand the absorption and scattering of particles, and it is also an important ingredient of Hawking radiation. Hawking radiation <cit.>, proposed by Hawking in 1976, depends on the gray-body factor and the black hole temperature and is of crucial importance when studying the black hole information paradox, which may be the most difficult obstacle to a thorough understanding of quantum gravity. The gray-body factor is defined as the probability that an incident particle with frequency ω is absorbed by the black hole; it encodes valuable information about the near-horizon structure and related physics of the black hole, and it measures the deviation from the radiation of an ideal, perfect black body <cit.>. Many authors <cit.> have also studied the gray-body factors of various black holes with different methods.
A plethora of methods have been proposed to calculate the gray-body factor with different accuracies, including a new cancellation between contributions to the wave function for particles of different spin <cit.>, exact numerical methods <cit.>, rigorous bounds for the gray-body factor <cit.>, the WKB method <cit.>, etc. The WKB approximation stands out among these methods because of its versatility and flexibility. Blome and Mashhoon <cit.> proposed the first simple semi-analytic formula to calculate quasinormal frequencies by matching the effective potential with the inverse Pöschl-Teller potential. However, this formula fails to achieve good accuracy for lower multipole numbers. A year later, Schutz and Will <cit.> calculated the quasinormal modes using the WKB approximation based on Mashhoon's formula; the method matches the WKB solutions with a Taylor expansion across the two turning points. Subsequently, Iyer and Will <cit.> introduced the third-order WKB formula, which improved the accuracy to about one percent compared with Schutz and Will. Moreover, Konoplya <cit.> and Matyjasek et al. <cit.> extended the WKB formula to higher orders. The WKB approximation can be used to calculate not only the gray-body factor but also the quasinormal modes. The quasinormal modes <cit.>, characterized by complex frequencies, represent the response of black holes to external perturbations such as massless scalar fields, neutrino fields, gravitational fields, electromagnetic fields, Dirac fields, etc.
In the 1970s, Hawking discovered that the evaporation rate of a black hole is directly proportional to its absorption cross section <cit.>. Subsequently, a wealth of important research on the absorption and scattering of plane waves by black holes was carried out in the 1970s and 1980s. For instance, Sanchez <cit.> showed that the absorption cross section of a Schwarzschild black hole for the massless scalar field oscillates around the geometric-optics limit (27/4)π r_s^2, and Unruh <cit.> studied the absorption of a massive scalar field. Besides, Crispino <cit.> presented the absorption of electromagnetic waves in the Schwarzschild spacetime for arbitrary frequencies, and Jung <cit.> studied the absorption of a massive scalar field in the Reissner-Nordström spacetime. The corresponding result for the absorption of electromagnetic waves was obtained in Ref. <cit.>. In addition, the absorption of a massless scalar by the Kerr spacetime was investigated in Ref. <cit.>, and the absorption of electromagnetic waves was analyzed in Ref. <cit.>. Liao Hao, Songbai Chen et al. <cit.> analyzed the absorption and Hawking radiation of electromagnetic waves with a Weyl correction in a 4D black hole spacetime. Many authors <cit.> have also studied the absorption and scattering cross sections of various black holes. In this paper we will study the gray-body factor and absorption cross section of the black hole in ESTGB gravity with an unusual form of nonlinear electrodynamics in four dimensions.
This paper is organized as follows. The second section outlines the basic information of the four-dimensional (4D) extended scalar-tensor-Gauss-Bonnet theory (ESTGB) coupled with a special form of nonlinear electrodynamics, and the settings of related parameters are also given. In the third section, the massless Dirac equation is reduced to master wave equations and the effective potential is analyzed. Next, the gray-body factor is calculated using the WKB method in the fourth section. The fifth section presents the expression of the absorption cross section of the Dirac field and the corresponding results are also given. Summary and conclusions are presented in the last section.
§ THE BLACK HOLE SOLUTION IN THE EXTENDED SCALAR-TENSOR-GAUSS-BONNET GRAVITY
Without loss of generality, we adopt natural units in this paper, namely c = G = ħ = 1. The 4D extended scalar-tensor-Gauss-Bonnet theory coupled with a particular form of nonlinear electrodynamics (ESTGB-NLED) <cit.> is defined as follows,
S = ∫ d^4x √(-g){1/16π(R - 1/2∂_μϕ∂^μϕ + f(ϕ) R_GB^2 - 2 U(ϕ) ) - 1/4πℒ_matter},
where R is the Ricci scalar, ϕ is the scalar field, f(ϕ) is a coupling function that depends only on ϕ, R_GB^2 is the Gauss-Bonnet term and U(ϕ) is the scalar field potential. The first term is the Einstein-Hilbert Lagrangian density, and the Lagrangian density ℒ_matter represents any matter field. Assuming a static, spherically symmetric metric, we can write
ds^2=-f(r)dt^2+f^-1(r)dr^2+r^2dΩ^2,
with
f(r)=1-2m/r-q^3/r^3,
where m represents the ADM mass and q denotes the magnetic charge. Supposing that both the effective energy-momentum tensor and the corresponding NLED energy-momentum tensor satisfy the weak energy condition, one obtains m>0 and q<0. The metric is similar to that of the Reissner-Nordström black hole and can describe two horizons, one horizon, or none; the Schwarzschild black hole is recovered when q is set to 0. Without loss of generality, we consider the non-extreme case <cit.>, for which 0 > q/m > -2^(5/3)/3. Setting the ADM mass m=1, the magnetic charge then satisfies -2^(5/3)/3<q<0.
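For concreteness, the horizon structure implied by this metric function can be checked numerically. The short Python sketch below is not part of the original analysis; the value q=-0.8 and the search window are illustrative assumptions. It brackets the sign changes of f(r) and refines the roots.

```python
# Sketch: locate the horizons of f(r) = 1 - 2m/r - q^3/r^3 (assumes m = 1).
# The chosen q and the search window are illustrative, not the paper's values.
import numpy as np
from scipy.optimize import brentq

def f(r, m=1.0, q=-0.8):
    return 1.0 - 2.0 * m / r - q**3 / r**3

def horizons(m=1.0, q=-0.8, r_min=1e-2, r_max=20.0, n=4000):
    """Return the real roots of f(r) = 0 found by bracketing sign changes."""
    r = np.linspace(r_min, r_max, n)
    v = f(r, m, q)
    roots = []
    for i in range(n - 1):
        if v[i] * v[i + 1] < 0.0:                 # sign change brackets a root
            roots.append(brentq(f, r[i], r[i + 1], args=(m, q)))
    return roots

print(horizons(q=-0.8))   # two roots -> inner and outer (event) horizons
```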
For later comparisons, it is beneficial to explicitly mention the Reissner-Nordström black hole solution. The metric of Reissner-Nordström black hole is given by
ds^2=-(1-2M/r+Q^2/r^2)dt^2+(1-2M/r+Q^2/r^2)^-1dr^2+r^2dΩ^2,
where Q is the electric charge of the black hole and M is the mass of black hole.
§ MASTER WAVE EQUATION IN DIRAC FIELD
In this section, we reduce the massless Dirac equation in the black hole spacetime to a series of Schrödinger-like radial equations, and analyze the properties of the effective potential. The massless Dirac equation in black hole spacetime has the following form according to Ref. <cit.>,
γ^α e_α^μ( ∂_μ + Γ_μ) Ψ= 0,
where the γ^α denote the Dirac matrices,
γ^0=( [ -i 0; 0 i ]),
γ^i=( [ 0 -iσ^i; iσ^i 0 ]),
with σ^i denoting the Pauli matrices for i ∈{1,2,3}.
Moreover, e_α^μ is the inverse of the tetrad e_μ^α, of which the particular form is defined by the metric g_μν,
g_μν=η_ab e_μ^a e^b_ν,
where η_ab is the Minkowski metric, and η_ab=diag(-1,1,1,1). Additionally, Γ_μ is the spin connection defined by
Γ_μ=1/8[γ^a,γ^b] e_a^ν e_bν;μ,
with e_bν;μ=∂_μ e_bν-Γ_μν^α e_bα being the covariant derivative of e_bν, where Γ_μν^α is the Christoffel symbol.
In the spacetime of a static and spherical black hole, e_μ^a can be expressed as
e_μ^a=diag(√(f),1/√(f),r,rsinθ).
Hence, the components of Γ_μ can be obtained by substituting equation (<ref>) into equation (<ref>) as follows,
Γ_0=1/4 f^'γ^1γ^0,Γ_1=0,Γ_2=1/2√(f)γ^1γ^2,Γ_3=1/2(sinθ√(f)γ^1γ^3+cosθγ^2γ^3).
Further substituting the above expressions into the Dirac equation (<ref>), the Dirac equation becomes
γ^0/√(f)∂Ψ/∂t+√(f)γ^1(∂/∂r+1/r+1/4f df/dr)Ψ+γ^2/r(∂/∂θ+1/2cotθ)Ψ+γ^3/rsinθ∂Ψ/∂φ= 0.
Furthermore, we can transform equation (<ref>) into the following equation
γ^0/√(f)∂Φ/∂t+√(f)γ^1(∂/∂r+1/r)Φ+γ^2/r(∂/∂θ+1/2cotθ)Φ+γ^3/rsinθ∂Φ/∂ϕ= 0,
by defining a tortoise coordinate change as
r_⋆=∫dr/f,
introducing the ansatz as
Φ=( [ i G^±(r)/rϕ_jm^±(θ,φ); F^±(r)/rϕ_jm^∓(θ,φ) ]) e^-iω t,
and defining spinor angular harmonics as
ϕ_jm^+=( [ √(l+1/2+m/2l+1)Y_l^m-1/2; √(l+1/2-m/2l+1)Y_l^m+1/2 ]), (j=l+1/2),
ϕ_jm^-=( [ √(l+1/2-m/2l+1)Y_l^m-1/2; -√(l+1/2+m/2l+1)Y_l^m+1/2 ]), (j=l-1/2).
As a result, equation (<ref>) can be rewritten as
( [ 0 -ω; ω 0 ]) ( [ F^±; G^± ])-∂/∂r_⋆( [ F^±; G^± ])+√(f)( [ κ_±/r 0; 0 -κ_±/r ]) ( [ F^±; G^± ])= 0,
where the (+) and (-) cases of the functions F^± and G^± correspond to j=l+1/2 and j=l-1/2, respectively, and lead to the decoupled equations <cit.>
d^2F/dr_⋆^2+(ω^2-V_1)F= 0,
d^2G/dr_⋆^2+(ω^2-V_2)G= 0,
with
V_1 = √(f)|κ|/r^2(|κ|√(f)+r/2df/dr-f),(for κ=j+1/2, and j=l+1/2),
V_2 = √(f)|κ|/r^2(|κ|√(f)-r/2df/dr+f),(for κ=-(j+1/2), and j=l-1/2).
It is worth noting that the potentials V_1 and V_2 are super-symmetric partners <cit.>, and they are derived from the same super-potential. It is well established that the potentials V_1 and V_2 related in this way have the same spectra. Therefore, we only need to consider the effective potential V_1 in calculating the gray-body factor and the absorption cross section for the massless Dirac field by the WKB approximation. As a result, the equation (<ref>) can be written as
d^2ψ/dr_⋆^2+(ω^2-V_eff)ψ=0.
Note that Eq. (<ref>) is a Schrödinger-like equation with an effective potential V_eff. The effective potential is depicted in Fig. <ref> for several values of κ with fixed q=-0.8. We observe from Fig. <ref> that the height of the potential barrier becomes larger as κ is increased, and that the location of the peak moves to the right. In addition, we compare in Fig. <ref>(a) the effective potential V_eff for κ = 5 in four scenarios, q=-0.4, q=-0.8, q=-1.0 and q=-2^5/3/3. As the absolute value of the magnetic charge is increased, the height of the potential barrier increases; beyond the peak the potential decreases and converges to almost the same value as r increases. Because this metric is similar to the Reissner-Nordström spacetime, we also compare how the effective potentials of the two black holes change when κ is set to 5. It is evident in Fig. <ref>(b) that the maximum of the effective potential of the Reissner-Nordström spacetime increases faster with the electric charge than that of the ESTGB-NLED spacetime does with the magnetic charge; in this sense the effective potential is more sensitive to the electric charge. Finally, the effective potential has the form of a single-peaked positive-definite barrier: it tends to zero as r→∞ and vanishes at the event horizon r_+.
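To make the shape of this barrier concrete, the following Python sketch evaluates V_1 from the expression above for the metric f(r)=1-2m/r-q^3/r^3; the values m=1, q=-0.8 and κ=5 are illustrative choices, not the only ones used in the figures.

```python
# Sketch: evaluate the Dirac effective potential V_1 for the ESTGB-NLED metric.
# Illustrative parameters: m = 1, q = -0.8, kappa = 5.
import numpy as np

def f(r, m=1.0, q=-0.8):
    return 1.0 - 2.0 * m / r - q**3 / r**3

def df_dr(r, m=1.0, q=-0.8):
    return 2.0 * m / r**2 + 3.0 * q**3 / r**4

def V1(r, kappa=5, m=1.0, q=-0.8):
    """V_1 = sqrt(f) |kappa| / r^2 * (|kappa| sqrt(f) + (r/2) f' - f)."""
    fr, k = f(r, m, q), abs(kappa)
    return np.sqrt(fr) * k / r**2 * (k * np.sqrt(fr) + 0.5 * r * df_dr(r, m, q) - fr)

r = np.linspace(1.9, 20.0, 2000)        # region outside the event horizon for q = -0.8
V = V1(r, kappa=5)
print(r[np.argmax(V)], V.max())         # location and height of the potential barrier
```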
§ GRAY-BODY FACTOR
In this section, we discuss the gray-body factor for the Dirac field in 4D ESTGB gravity with nonlinear electrodynamics, i.e., we calculate the reflection and transmission probabilities. The discussion is based on the sixth-order WKB method for different values of the magnetic charge.
Hawking <cit.> predicted that a black hole emits particles at a temperature proportional to its surface gravity, radiating almost like a black body. Black holes are therefore thermal systems with an associated temperature and entropy, and they produce radiation once quantum effects are taken into account together with the laws of thermodynamics <cit.>. Hawking proposed the following expression for the evaporation rate of a black hole in a mode with frequency ω at the event horizon,
Γ(ω)=1/e^βω± 1d^3 k/(2π)^3,
where β is the inverse of the black hole temperature, 1/T_BH, and the plus and minus signs correspond to the emission of fermions and bosons, respectively. However, the emission rate measured by an observer located far away is affected by the geometry outside the event horizon, which acts as a potential barrier for the radiation emitted by the black hole. The strong gravitational potential near the event horizon scatters part of the radiation back into the black hole, while another part tunnels through the potential, propagates to infinity, and is measured by the remote observer. The radiation that reaches the remote observer through the potential barrier therefore no longer has a black-body spectrum. Hence we can rewrite the emission rate recorded by the remote observer at frequency ω as
Γ(ω)=γ(ω)/e^βω± 1d^3 k/(2π)^3,
where γ(ω) is the frequency-dependent gray-body factor.
The gray-body factor is defined as,
γ(ω)=|𝒯_ω l|^2.
The solution of the second-order differential equation (<ref>), with the postulation of purely outgoing waves at infinity and purely incoming waves at the event horizon, has the following boundary conditions:
ψ(r_⋆)∼ℐ_ω l e^-iω r_⋆ +ℛ_ω l e^iω r_⋆, r_⋆ →+∞,
ψ(r_⋆)∼𝒯_ω l e^-iω r_⋆, r_⋆→-∞,
where ℛ_ω l and 𝒯_ω l are the reflection and transmission coefficients, respectively. Due to the conservation of flux,
ℛ_ω l and 𝒯_ω l satisfy the following relationship
|ℛ_ω l|^2+|𝒯_ω l|^2=|ℐ_ω l|^2.
The phase shift δ_l can be expressed by
e^-2 iδ_l=(-1)^l+1ℛ_ω l/ ℐ_ω l.
Now we discuss the use of the WKB method to determine the gray-body factor <cit.>. The gray-body factor depends on the relation between ω and V_m, where V_m is the peak value of the effective potential V(r). There are three cases to consider, ω^2≫ V_m, ω^2≈ V_m and ω^2≪ V_m. When ω^2≫ V_m, the wave frequency lies well above the height of the potential barrier, so the wave is essentially not reflected back to the black hole: the transmission probability |𝒯_ω l|^2 is close to one, the reflection probability |ℛ_ω l|^2 is close to zero, and almost all of the radiation passes the potential and escapes to infinity. When ω^2≪ V_m, the transmission probability is close to zero and the reflection probability is close to one, so almost all of the radiation is reflected back to the black hole by the potential. When ω^2≈ V_m, we compute the gray-body factor in this limit, since the WKB approximation attains its highest accuracy there.
Under the WKB approximation, when the incident amplitude ℐ_ω l is set to one, the reflection and transmission coefficients can be expressed as
ℛ_ω l=(1+e^-2π i α)^-1/2,
𝒯_ω l=√(1-|(1+e^-2π i α)^-1/2|^2).
where α is defined by
α=i(ω^2-V_0)/√(-2V_0^(2))-Λ_2-Λ_3-Λ_4-Λ_5-Λ_6.
where V_0 is the peak value of the effective potential V(r), attained at r=r_0. Then,
Λ_2=1/√(-2V^(2)_0)[1/8(V^(4)_0/V^(2)_0)(b^2+1/4)-1/288(V^(3)_0/V^(2)_0)^2(7+60b^2)]
Λ_3=n+1/2/-2V^(2)_0[5/6912(V^(3)_0/V^(2)_0)^4(77+188b^2)-1/384((V^(3)_0)^2V^(4)_0/(V^(2)_0)^3)(51+100b^2)
+1/2304(V^(4)_0/V^(2)_0)^2(67+68b^2)-1/288(V^(6)_0/V^(2)_0)(5+4b^2)+1/288(V^(3)_0V^(5)_0/(V^(2)_0)^2)(19+28b^2)].
In Eqs. (<ref>) and (<ref>), the superscripts (2,3,4,5,6) denote derivatives with respect to the tortoise coordinate r_⋆ and b=n+1/2. Since the expressions for Λ_4, Λ_5 and Λ_6 found in <cit.> are rather cumbersome, we do not reproduce them here.
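To illustrate how these WKB expressions translate into numbers, the Python sketch below evaluates |ℛ_ω l|^2 and |𝒯_ω l|^2 keeping only the leading (first) term of α, i.e. dropping Λ_2–Λ_6, so it is a much cruder estimate than the sixth-order scheme used in this work. Derivatives with respect to the tortoise coordinate are taken via d/dr_⋆ = f d/dr, and the parameters m=1, q=-0.8 and κ=5 are illustrative assumptions.

```python
# Sketch: leading-order WKB reflection/transmission for the barrier V_1.
# Only the first term of alpha is kept (Lambda_2..Lambda_6 omitted), so this is
# much cruder than the sixth-order formula used in the text.
import numpy as np

m, q, kappa = 1.0, -0.8, 5
f  = lambda r: 1.0 - 2.0 * m / r - q**3 / r**3
fp = lambda r: 2.0 * m / r**2 + 3.0 * q**3 / r**4
V1 = lambda r: np.sqrt(f(r)) * kappa / r**2 * (kappa * np.sqrt(f(r))
                                               + 0.5 * r * fp(r) - f(r))

r = np.linspace(1.9, 40.0, 8000)
dr = r[1] - r[0]
V, fr = V1(r), f(r)
i0 = int(np.argmax(V))                     # peak of the potential barrier
# d^2V/dr*^2 at the peak, using d/dr* = f d/dr and finite differences
V0pp = fr[i0] * np.gradient(fr * np.gradient(V, dr), dr)[i0]

def reflection_transmission(omega):
    """Return (|R|^2, |T|^2) in the leading-order WKB approximation."""
    arg = np.clip(2.0 * np.pi * (omega**2 - V[i0]) / np.sqrt(-2.0 * V0pp), -700, 700)
    R2 = 1.0 / (1.0 + np.exp(arg))
    return R2, 1.0 - R2

print(reflection_transmission(np.sqrt(V[i0])))   # ~ (0.5, 0.5) when omega^2 = V_m
```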
In order to understand the behaviour of the transmission and reflection coefficients, we plot them as functions of the frequency ω for different values of the magnetic charge. The results for the transmission probability are presented in Fig. <ref>(a) for several values of the magnetic charge. For all values of q, the transmission coefficient starts from 0 and approaches 1 as ω is increased. One can also see that the transmission coefficient diminishes as the absolute value of the magnetic charge q increases; in other words, the magnetic charge hinders the wave from crossing the potential barrier, presumably because it raises the peak of the effective potential. Fig. <ref>(b) compares the transmission coefficients of the Reissner-Nordström and ESTGB-NLED black holes: the transmission coefficient of the ESTGB-NLED black hole varies less with the magnetic charge than that of the Reissner-Nordström black hole does with the electric charge. We also show the reflection coefficient for κ=5 in Fig. <ref>(c) and compare it with the Reissner-Nordström black hole in Fig. <ref>(d); conversely, the reflection coefficient starts from 1 and decreases to 0 as ω increases. Furthermore, Fig. <ref> presents the transmission and reflection coefficients for q=-0.8 with κ varying from 1 to 4. As κ increases, the reflection coefficient becomes larger and the transmission coefficient becomes smaller, because the peak value of the effective potential increases with κ.
§ ABSORPTION CROSS SECTION
In this section, we calculate the absorption cross section for the Dirac field, defined as the ratio of the number of particles absorbed by the black hole to the incident particle flux. Benone et al. <cit.> employed the partial-wave method to write the total absorption cross section as
σ_abs=∑_l=0^∞σ_abs^l,
and the partial absorption cross section is given by
σ_abs^l=π/ω^2(2l+1)(1-|e^-2 iδ_l|^2),
Substituting the phase shift e^-2 iδ_l of each partial wave l into Eq. (<ref>), we obtain the equivalent expression
σ_abs=π/ω^2∑_l=0^∞(2l+1)(1-|ℛ_ω l|^2).
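The partial-wave sum can be organized as in the Python sketch below, which uses the same leading-order WKB transmission for each multipole and truncates the sum at a finite l_max. Assigning κ = l + 1 (the j = l + 1/2 branch) to each l, as well as the choices m=1, q=-0.8 and l_max, are assumptions made for illustration only.

```python
# Sketch: total absorption cross section via the partial-wave sum
# sigma_abs = (pi/omega^2) * sum_l (2l+1) (1 - |R_{omega l}|^2), with a
# leading-order WKB transmission per multipole (kappa = l + 1).  Illustrative only.
import numpy as np

m, q = 1.0, -0.8
r = np.linspace(1.9, 40.0, 8000)
dr = r[1] - r[0]
fr = 1.0 - 2.0 * m / r - q**3 / r**3
fpr = 2.0 * m / r**2 + 3.0 * q**3 / r**4

def greybody(omega, kappa):
    """Leading-order WKB |T|^2 for the potential V_1 with multipole kappa."""
    V = np.sqrt(fr) * kappa / r**2 * (kappa * np.sqrt(fr) + 0.5 * r * fpr - fr)
    i0 = int(np.argmax(V))
    V0pp = fr[i0] * np.gradient(fr * np.gradient(V, dr), dr)[i0]   # d^2V/dr*^2
    arg = np.clip(2.0 * np.pi * (omega**2 - V[i0]) / np.sqrt(-2.0 * V0pp), -700, 700)
    return 1.0 - 1.0 / (1.0 + np.exp(arg))

def sigma_abs(omega, l_max=10):
    s = sum((2 * l + 1) * greybody(omega, l + 1) for l in range(l_max + 1))
    return np.pi / omega**2 * s

for w in (0.25, 0.5, 1.0):
    print(w, sigma_abs(w))
```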
The resulting partial absorption cross sections for the Dirac field are shown in Fig. <ref> and Fig. <ref>. One can see from Fig. <ref> that the peak value of the partial absorption cross section decreases and the location of the maximum moves to the right as κ is increased. From the curves in Fig. <ref>(a) for different magnetic charges q, the partial absorption cross section tends to almost the same value at low and high frequencies, and the peak moves slightly to the right as the absolute value of q is increased. In Fig. <ref>(b), we compare the partial absorption cross sections of the Reissner-Nordström and ESTGB-NLED black holes. They also approach almost the same values in the low- and high-frequency regions, because in both cases the effective potential is a single-peaked positive barrier. Since the effective potential is more sensitive to the electric charge, which hinders the passage of the wave more strongly, the variation of the partial absorption cross section of the ESTGB-NLED black hole at intermediate frequencies is less pronounced than that of the Reissner-Nordström black hole when the magnetic (electric) charge is varied.
As shown in Fig. <ref>(a), we plot the total absorption cross section, obtained from κ=1 to κ=10, for the magnetic charges q=-0.4, q=-0.8, q=-1.0 and q=-2^5/3/3. The total absorption cross section increases and then tends to a stable value as the frequency ω increases. Moreover, as we increase the absolute value of the magnetic charge, the total absorption cross section gradually diminishes; that is, the magnetic charge weakens the absorption of the Dirac field, in agreement with the results presented in Ref. <cit.>. Additionally, we plot the total absorption cross section of the Reissner-Nordström black hole in Fig. <ref>(b) for comparison.
Compared with the effective potential of the Reissner-Nordström black hole, the effective potential of the ESTGB-NLED black hole changes more slowly with its charge, so the magnetic charge has a smaller effect on the total absorption cross section of the Dirac field than the electric charge. Consequently, the variation of the total absorption cross section of the ESTGB-NLED black hole with the magnetic charge is not as pronounced as that of the Reissner-Nordström black hole with the electric charge.
§ CONCLUSIONS
In the preceding sections, we have studied the gray-body factor and the absorption cross section for the massless Dirac field of the black hole that solves 4D ESTGB gravity coupled to nonlinear electrodynamics. Since the solution is characterized by the ADM mass m and the magnetic charge q, the black hole can have different horizon structures depending on the choice of these parameters. For generality we have focused on the non-extreme case, with m=1 and -2^(5/3)/3 < q < 0, which is similar to the Reissner-Nordström spacetime. Specifically, we have plotted the effective potentials in Fig. 1 and Fig. 2 for the two cases. We have found that the effective potential of the Dirac field is more sensitive to the electric charge, since the variation of the effective potential of the Reissner-Nordström spacetime is more pronounced than that of the ESTGB-NLED spacetime. Besides, we have carried out numerical calculations of the gray-body factors using the sixth-order WKB approximation. The changes of the transmission and reflection coefficients with the magnetic charge are shown in Fig. 3. We have observed that, compared with the Schwarzschild spacetime, the magnetic charge increases the reflection coefficient and decreases the transmission coefficient; in other words, the magnetic charge impedes the wave from passing the potential barrier towards the black hole. Moreover, as κ increases, the reflection coefficient becomes larger and the transmission coefficient becomes smaller. Fig. <ref>(a) shows that the total absorption cross section of the Dirac field decreases as the absolute value of the magnetic charge is increased, but increases with frequency before saturating. Finally, in Fig. <ref>(b), we have compared the total absorption cross sections of the Reissner-Nordström and ESTGB-NLED black holes, finding that the absorption cross section of the Dirac field is more sensitive to the electric charge than to the magnetic charge for these two types of black holes.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGMENTS
This work was supported partly by the National Natural Science Foundation of China (Grants No. 12065012, No. 12065013), Yunnan High-level Talent Training Support Plan Young & Elite Talents Project (YNWR-QNBJ-2018-360) and the Fund for Reserve Talents of Young and Middle-aged Academic and Technical Leaders of Yunnan Province (Grant No. 2018HB006).
1Akiyama, K. et al.:
Astrophys. J. Lett. 875(2019)L1.
Riess1998
Riess, A.G. et al.:
Astron. J. 116(1998)1009-1038.
Fradkin1985
Fradkin, E.S. and Tseytlin, A.A.:
Nucl. Phys. B 261(1985)1-27.
Capozziello2007
Capozziello, F. et al.:
Mon. Not. Roy. Astron. Soc. 375(2007)1423-1440.
Doneva2018
Doneva, D.D. et al.:
Phys. Rev. D 98(2018)104056.
Woodard02210
Motohashi, H. and Suyama, T.:
JHEP 09(2020)032.
Doneva2019
Doneva, D.D. et al.:
Phys. Rev. D 99(2019)104045.
Kanti1996
Kanti, P. et al.:
Phys. Rev. D 54(1996)5049-5058.
scalariz_Doneva
Doneva, D.D. et al.:
Phys. Rev. D 102(2020)064042.
Canate2020
Cañate, P. and Perez Bergliaffa, S.E.:
Phys. Rev. D 102(2020)104038.
Macedo2014
Macedo, C.F.B. and Crispino, L.C.B.:
Phys. Rev. D 90(2014)064001.
Hawking1976
Hawking, S.W.:
Phys. Rev. D 14(1976)2460-2473.
Kanti2002
Kanti, P. and March-Russell, J.:
Phys. Rev. D 66(2002)024023.
Songbai2010
Chen, S. and Jing, J.:
Phys. Lett. B 691(2010)254-260.
Ama-Tul-Mughani2021
Ama-Tul-Mughani, Q. et al.:
Astropart. Phys. 132(2021)102623.
Sharif2020
Sharif, M. and Ama-Tul-Mughani, Q.:
Phys. Dark Univ. 27(2020)100436.
Qanitah Ama-Tul-Mughani2020
Sharif, M. and Ama-Tul-Mughani, Q.:
PTEP 2020(2020)033E01.
Konoplya2021
Konoplya, R.A.:
Phys. Lett. B 823(2021)136734.
Cvetic1998
Cvetic, M. and Larsen, F.:
Phys. Rev. D 57(1998)6297-6310.
Zhang2018
Zhang, C.Y., Li, P.C. and Chen, B.:
Phys. Rev. D 97(2018)044013.
Miao2017
Miao, Y.G. and Xu, Z.M.:
Phys. Lett. B 772(2017)542-546.
Konoplya2011
Konoplya, R.A. and Zhidenko, A.:
Phys. Lett. B 686(2010)199-206.
Blome1984
Blome, H.J., and Mashhoon, B.:
Phys. Lett. A 100(1984)231-234.
Schutz1985
Schutz, B.F. and Will, C.M.:
Astrophys. J. Lett. 291(1985)L33-L36.
Iyer1987
Iyer, S. and Will, C.M.:
Phys. Rev. D 35(1987)3621.
Konoplya2003
Konoplya, R.A.:
Phys. Rev. D 68(2003)024018.
Matyjasek2017
Matyjasek, J. and Opala, M.:
Phys. Rev. D 96(2017)024011.
Wongjun2020
Wongjun, P. et al.:
Phys. Rev. D 101(2020)124033.
Cai2020
Cai, X.C. and Miao, Y.G.:
Phys. Rev. D 101(2020)104023.
Hu2020
Hu, Y. et al.:
EPL 128(2019)50006.
Liang2018
Liang, J.:
Commun. Theor. Phys. 70(2018)695.
Saleh2018
Saleh, M.,Thomas, B.B. and Kofane, T.C.:
Eur. Phys. J. C 78(2018)325.
Liang20018
Liang, J.:
Chin. Phys. Lett. 35(2018)050401.
Aragon2021
Aragón, A. et al.:
Phys. Rev. D 103(2021)064006.
Li2017
Li, J., Lin, K. and Wen, H.:
Adv. High Energy Phys. 2017(2017)5234214.
Wang2021
Wang, M. et al.:
Eur. Phys. J. C 81(2021)469.
Jawad2020
Jawad, A. et al.:
Mod. Phys. Lett. A 35(2020)2050298.
Sharif2021
Sharif, M. and Khan, A.:
[arXiv:2109.06010 [gr-qc]].
Guo2013
Guo, G.:
Eur. Phys. J. C 73(2013)2573.
Futterman
Futterman, J.A.H. et al.:
1988 (Cambridge: Cambridge University Press) p. 254.
Sanchez1978
Sanchez, N.G.:
Phys. Rev. D 18(1978)1030.
Unruh1976
Unruh, W.G.:
Phys. Rev. D 14(1976)3251-3259.
Crispino2007
Crispino, L.C.B. et al.:
Phys. Rev. D 75(2007)104012.
Jung2004
Jung, E. et al.:
Phys. Lett. B 602(2004)105-111.
Crispino2008
Crispino, L.C.B. and Oliveira, E.S.:
Phys. Rev. D 78(2008)024011.
Songbai Chen2014
Liao, H. et al.:
Phys. Lett. B 728(2014)457-461.
Macedo2013
Macedo, C.F.B. et al.:
Phys. Rev. D 88(2013)064033.
Leite2017
Leite, L.C.S. et al.:
Phys. Lett. B 774(2017)130-134.
Huang2019
Huang, H. et al.:
Gen. Rel. Grav. 51(2019)22.
Huang2015
Huang, H. et al.:
Gen. Rel. Grav. 47(2015)8.
2018
Leite, L.C.S. et al.:
Phys. Rev. D 98(2018)024046.
Anacleto2020
Anacleto, M.A. et al.:
Phys. Lett. B 803(2020)135334.
Magalhaes2020
Magalhães, R.B. et al.:
Eur. Phys. J. C 80(2020)386.
Junior2020
Lima, H.C.D. et al.:
Phys. Lett. B 811(2020)135921.
Benone2018
Benone, C.L. et al.:
Int. J. Mod. Phys. D 27(2018)1843012.
Brill1957
Brill, D.R. and Wheeler, J.A.:
Rev. Mod. Phys. 29(1957)465-479.
Cho2005
Cho, H.T. and Lin, Y.C.:
Class. Quant. Grav. 22(2005)775-790.
Cooper1995
Cooper, F. et al.:
Phys. Rept. 251(1995) 267-385.
Hawking1975
Hawking, S.W.:
Commun. Math. Phys. 43(1975)199-220.
Hawking19761
Hawking, S.W.:
Phys. Rev. D 13(1976)191-197.
Benone104053
Benone, C.L. et al.:
Phys. Rev. D 89(2014)104053.
|
http://arxiv.org/abs/2307.04264v1 | 20230709210516 | Impact of interaction forces in first order many-agent systems for swarm manufacturing | [
"Ferdinando Auricchio",
"Massimo Carraturo",
"Giuseppe Toscani",
"Mattia Zanella"
] | math.AP | [
"math.AP",
"nlin.AO"
] |
|
http://arxiv.org/abs/2307.09555v1 | 20230714151704 | Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction | [
"Anagh Malik",
"Parsa Mirdehghan",
"Sotiris Nousias",
"Kiriakos N. Kutulakos",
"David B. Lindell"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Transient Neural Radiance Fields for Lidar View Synthesis and 3D Reconstruction
Anagh Malik, Parsa Mirdehghan, Sotiris Nousias, Kiriakos N. Kutulakos, David B. Lindell
July 14, 2023
========================================================================================
https://anaghmalik.com/TransientNeRF/
Neural radiance fields (NeRFs) have become a ubiquitous tool for modeling scene appearance and geometry from multiview imagery. Recent work has also begun to explore how to use additional supervision from lidar or depth sensor measurements in the NeRF framework. However, previous lidar-supervised NeRFs focus on rendering conventional camera imagery and use lidar-derived point cloud data as auxiliary supervision; thus, they fail to incorporate the underlying image formation model of the lidar. Here, we propose a novel method for rendering transient NeRFs that take as input the raw, time-resolved photon count histograms measured by a single-photon lidar system, and we seek to render such histograms from novel views. Different from conventional NeRFs, the approach relies on a time-resolved version of the volume rendering equation to render the lidar measurements and capture transient light transport phenomena at picosecond timescales. We evaluate our method on a first-of-its-kind dataset of simulated and captured transient multiview scans from a prototype single-photon lidar. Overall, our work brings NeRFs to a new dimension of imaging at transient timescales, newly enabling rendering of transient imagery from novel views. Additionally, we show that our approach recovers improved geometry and conventional appearance compared to point cloud-based supervision when training on few input viewpoints. Transient NeRFs may be especially useful for applications which seek to simulate raw lidar measurements for downstream tasks in autonomous driving, robotics, and remote sensing.
§ INTRODUCTION
The ability to sense and reconstruct 3D appearance and geometry is critical to applications in vision, graphics, and beyond.
Lidar sensors <cit.> are of particular interest for this task due to their high sensitivity to arriving photons and their extremely high temporal resolution; as such, they are being deployed in systems for 3D imaging in smart phone cameras <cit.>, autonomous driving, and remote sensing <cit.>.
Recent work has also begun to explore how additional supervision from lidar <cit.> or depth sensor measurements <cit.> can be incorporated into the NeRF framework to improve novel view synthesis and 3D reconstruction.
Existing NeRF-based methods that use lidar <cit.> are limited to rendering conventional RGB images, and use lidar point clouds (i.e., pre-processed lidar measurements) as auxiliary supervision rather than rendering the raw data that lidar systems actually collect.
Specifically, lidars capture transient images—time-resolved picosecond- or nanosecond-scale measurements of a pulse of light travelling to a scene point and back.
We consider the problem of how to synthesize such transients from novel viewpoints.
In particular, we seek a method that takes as input and renders transients in the form of time-resolved photon count histograms captured by a single-photon lidar system[Single-photon lidars are closely related to conventional lidar systems based on avalanche photo diodes <cit.>, but they have improved sensitivity and timing resolution (discussed in Section <ref>); other types of coherent lidar systems <cit.> are outside the scope considered here.] <cit.>.
Lidar view synthesis may be useful for applications which seek to simulate raw lidar measurements for downstream tasks, including autonomous driving, robotics, remote sensing, and virtual reality.
The acquisition and reconstruction of transient measurements has been studied across various different sensing modalities, including holography <cit.>, photonic mixer devices <cit.> streak cameras <cit.>, and single-photon detectors (SPADs) <cit.>.
In the context of SPADs and single-photon lidar, a transient is measured by repeatedly illuminating a point with pulses of light and accumulating the individual photon arrival times into a time-resolved histogram.
After capturing such histograms for each point in a scene, one can exploit their rich spatio-temporal structure for scene reconstruction <cit.>, to uncover statistical properties of captured photons <cit.>, and to reveal the temporal profile of the laser pulse used to probe the scene (knowledge of which can significantly improve depth resolution <cit.>).
These properties motivate transients as a representation and their synthesis from novel views.
While existing methods have explored multiview lidar reconstruction <cit.>, they exclusively use point cloud data, and do not tackle lidar view synthesis.
Recently, a number of NeRF-based methods for 3D scene modeling have also been proposed to incorporate point cloud data (e.g., from lidar or structure from motion) <cit.> or information from time-of-flight sensors <cit.>.
Again, these methods focus on synthesizing conventional RGB images or depth maps, while our approach synthesizes transient images.
Another class of methods combines NeRFs with single-photon lidar data for non-line-of-sight imaging <cit.>; however, they focus on a very different inverse problem and scene parameterization <cit.>, and do not aim to perform novel view synthesis of lidar data as we do.
Our approach, illustrated in Fig. <ref>, extends neural radiance fields to be compatible with a statistical model of time-resolved measurements captured by a single-photon lidar system.
The method takes as input multiview scans from a single-photon lidar and, after training, enables rendering lidar measurements from novel views.
Moreover, accurate depth maps or intensity images can also be rendered from the learned representation.
In summary, we make the following contributions.
* We develop a novel time-resolved volumetric image formation model for single-photon lidar and introduce transient neural radiance fields for lidar view synthesis and 3D reconstruction.
* We assemble a first-of-its-kind dataset of simulated and captured transient multiview scans, constructed using a prototype multiview single-photon lidar system.
* We use the dataset to demonstrate new capabilities in transient view synthesis and state-of-the-art results on 3D reconstruction and appearance modeling from few (2–5) single-photon lidar scans of a scene.
§ RELATED WORK
Our work ties together threads from multiple areas of previous research, including methods for imaging with single-photon sensors, and NeRF-based pipelines that leverage 3D information to improve reconstruction quality.
Our implementation also builds on recent frameworks that improve the computational efficiency of NeRF training <cit.>.
Active single-photon imaging.
Single-photon sensors output precise timestamps corresponding to the arrival times of individual detected photons.
The most common type of single-photon sensor is the single-photon avalanche diode (SPAD). SPADs are based on the widely-available CMOS technology <cit.> (which we consider in this work), but other technologies such as superconducting nanowire single-photon detectors <cit.> and silicon photomultipliers <cit.>, offer different tradeoffs in terms of sensitivity, temporal resolution, and cost.
In active imaging scenarios, pulsed light sources are paired with single-photon sensors to estimate depth or reflectance of a scene by applying computational algorithms to the captured photon timestamps <cit.>.
The extreme temporal resolution of these sensors also enables direct capture of interactions of light with a scene at picosecond timescales <cit.>, and by modeling and inverting the time-resolved scattering of light, single-photon sensors can be used to see around corners <cit.> or through scattering media <cit.>.
The extreme sensitivity of single-photon sensors has made them an attractive technology for autonomous navigation <cit.>, and accurate depth acquisition from mobile phones <cit.>.
Our approach differs significantly from all the previous work in that we investigate, for the first time, the problem of lidar view synthesis and multi-view 3D reconstruction in the single-photon lidar regime.
We introduce the framework of transient NeRFs for this task and jointly optimize a representation of scene geometry and appearance that is consistent with captured photon timestamps across all input views.
3D-informed neural radiance fields.
A number of recent techniques for multiview reconstruction using NeRFs leverage additional geometric information (sparse point clouds from lidar <cit.> or structure from motion <cit.>) to improve the reconstruction quality or reduce the number of required input viewpoints.
Similar benefits can be obtained by combining volume rendering with monocular depth estimators <cit.>, or using data from time-of-flight sensors <cit.>.
Other methods investigate the problem of view synthesis from few input images but leverage appearance priors instead of explicit depth supervision <cit.>.
In contrast to the proposed approach, all of these methods focus on reconstructing images or depth maps rather than transient histograms.
§.§ Transient Imaging
* Streak Cameras - initial work <cit.>, femto-photography <cit.>
* Holographic Methods - initially introduced by Abramson in 1978 <cit.>, light-in-flight holography using femtosecond pulsed laser <cit.>, microscopy <cit.>
* SPAD-based methods - initial work <cit.>, at interactive rates <cit.>
* PMDs
* Optical interferometry
§.§ Direct Imaging
§.§.§ Lidar based
Real-time 3D scene reconstruction with a single-photon lidar has been demonstrated using point-cloud denoising <cit.>, and hand shake during a snapshot acquisition has been leveraged to improve depth images <cit.>.
§.§.§ Sensor fusion
There exists a wide range of methods for RGB and RGB-D based 3D reconstruction. Most RGB-D methods build on <cit.>, where multiple depth measurements are fused using a signed distance function (SDF) stored in a uniform 3D grid. An example of such work is KinectFusion <cit.>, which combines this representation with real-time tracking to reconstruct objects and small scenes in real time. An example of a method relying on RGB images alone is Single View MPI <cit.>, which learns to generate multiplane images given one or more images with known viewpoints.
More recently, coordinate-based multi-layer perceptrons (MLPs) have become a popular representation of 3D scenes <cit.>. The MLP takes as input a 3D location in the model space and outputs, for example, occupancy, density, or color. This simple idea has been used in many applications, such as SLAM <cit.>, <cit.> and novel-view synthesis <cit.>, <cit.>.
§.§ Indirect Imaging
§.§.§ Active NLOS imaging
NeTF <cit.> extends the idea behind NeRF to transient field modeling. Using a spherical parametrization of 3D points and integrating over wavefronts rather than along rays, it is able to perform non-line-of-sight (NLOS) imaging.
§ TRANSIENT NEURAL RADIANCE FIELDS
We describe a mathematical model for transient measurements captured using single-photon lidar and propose a time-resolved volume rendering formulation compatible with neural radiance fields.
§.§ Image Formation Model
Consider that a laser pulse illuminates a point in a scene that is imaged onto a sensor at position 𝐩∈ℝ^2 (see Fig. <ref>).
Assume light from the laser pulse propagates to a surface and back to 𝐩 along the same path described by a ray 𝐫(t), where t indicates propagation time.
The forward path along the ray is given as 𝐫(t) = 𝐱(𝐩) + tc ω(𝐩), where 𝐱(𝐩)∈ℝ^3 is the ray origin, ω(𝐩)∈𝕊^2 is the ray direction which maps to 𝐩, and c is the speed of light.
Now, let f(t) denote the temporal impulse response of the lidar (including the temporal profile of the laser pulse and the sensor jitter), and let α(𝐩) incorporate reflectance and radiometric falloff factors <cit.> of the illuminated point at distance z(𝐩) from 𝐱(𝐩).
Then, assuming single-bounce light transport, the photon arrival rate incident on the sensor from the laser pulse is given as
λ[i, j, n] = ∫_𝒫_i, j∫_𝒯_nα(𝐩) f (t - 2z(𝐩)/c) dt d𝐩,
where 𝒯_n and 𝒫_i,j indicate the temporal and spatial discretization intervals for the time bin n and pixel i, j, respectively.
The term 2z/c gives the time delay for light to propagate to a point at distance z and back.
Now, we can describe the measured transient, or the number of photon detections captured by a SPAD <cit.>, as
τ[i, j, n] ∼Poisson(N η λ[i, j, n] + B),
B = N(η A[i, j] + d),
where N indicates the number of laser pulses per pixel, η∈ (0, 1) is the detection efficiency of the sensor, and B is the total number of background (non-laser pulse) detections. Background detections in turn depend on A, the average ambient photon rate at pixel [i, j], and d, number of false detections produced by the sensor per laser pulse period, also known as the dark count rate.
When the number of detected photons is far fewer than the number of laser pulses, SPAD measurements can be modeled according to a Poisson process <cit.> where the arrival rate function varies across space and time.
This model is appropriate for our measurements, which have relatively low flux (<5% detection probability per emitted laser pulse) <cit.>.
The resulting measurements τ[i, j, n] represent a noisy histogram of photon counts collected at pixel [i, j] at time bin n.
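To make the statistical model concrete, the following Python sketch simulates a single photon-count histogram according to the equations above, with a Gaussian stand-in for the impulse response f(t) and the background lumped into one per-bin constant. The numerical values (N, η, albedo, background, pulse width) are illustrative assumptions, not the calibrated parameters of our prototype.

```python
# Sketch of the single-photon lidar model: tau ~ Poisson(N * eta * lambda + B),
# with a Gaussian stand-in for the impulse response f(t).  Parameter values are
# illustrative, not the calibrated ones of the prototype.
import numpy as np

c = 3e8                                    # speed of light [m/s]
n_bins, bin_width = 1500, 8e-12            # 8 ps bins
t = (np.arange(n_bins) + 0.5) * bin_width  # bin centers (round-trip time)

def simulate_transient(depth_m, albedo, N=200_000, eta=0.3,
                       bg_per_bin=1e-3, pulse_fwhm=70e-12, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sigma = pulse_fwhm / 2.355
    # arrival rate per pulse: albedo * f(t - 2z/c), integrated over each bin
    rate = albedo * np.exp(-0.5 * ((t - 2.0 * depth_m / c) / sigma) ** 2)
    rate *= bin_width / (sigma * np.sqrt(2.0 * np.pi))
    return rng.poisson(N * eta * rate + bg_per_bin)   # bg lumps N*(eta*A + d)

tau = simulate_transient(depth_m=1.5, albedo=0.05)
print(int(tau.sum()), int(tau.argmax()))   # total photons and peak time bin
```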
§.§ Time-Resolved Volume Rendering
Using the measurements τ, we wish to optimize a representation of the appearance and geometry of the scene.
To this end, we propose a time-resolved version of the volume rendering equation used in NeRF <cit.>.
Specifically, we model clean (i.e., without Poisson noise) time-resolved histograms τ[i, j, n] as (writing 𝐫(t) as 𝐫 for brevity)
τ[i, j, n] = ∫_𝒫_i, j∫_𝒯_n (tc)^-2 T(t)^2σ(𝐫)𝐜(𝐫, ω) dt d𝐩,
where T(t) = exp( -∫_t_0^tσ(𝐫) ds).
We denote by 𝐜 the radiance of light scattered at a point 𝐫(t) in the direction ω, and σ represents the volume density or the differential probability of ray termination at 𝐫(t).
Finally, T(t) is the transmittance from a distance t_0 along the ray to t, and this term is squared to reflect the two-way propagation of light <cit.>. We additionally explicitly account for the inverse-square falloff of intensity, through the term (tc)^-2.
The definite integrals are evaluated over the extent of time bin n, 𝒯_n = [t_n-1, t_n], and over 𝐩 within the area of pixel [i, j] as in Equation <ref>.
Note that in practice, we calculate Equation <ref> using the discretization scheme of Max <cit.> used by Mildenhall et al. <cit.>.
Finally, to account for the temporal spread of the laser pulse and sensor jitter, we convolve the estimated transient with the calibrated impulse response of the lidar system f to obtain
τ_f = f ∗τ.
Without this step, the volumetric model of Equation <ref> does not match the raw data from the lidar system and tends to produce thick clouds of density around surfaces to compensate for this mismatch.
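One possible discretization of this time-resolved rendering for a single ray is sketched below in PyTorch: per-sample weights follow the quadrature of Max with the transmittance squared and the (tc)^{-2} falloff, samples are deposited into histogram bins by round-trip travel time, and the result is convolved with the impulse response. The binning convention, tensor shapes, and helper names are our assumptions and not the exact implementation.

```python
# Sketch: discretized time-resolved volume rendering for one ray, followed by
# convolution with the calibrated impulse response f (tau_f = f * tau).
import torch
import torch.nn.functional as F

def render_transient(sigma, radiance, t, n_bins, bin_width, impulse, c=3e8):
    """sigma [1/m], radiance, t [s]: (num_samples,) tensors along one ray."""
    delta = torch.diff(t, append=t[-1:] + (t[-1] - t[-2]))        # segment lengths [s]
    alpha = 1.0 - torch.exp(-sigma * (c * delta))                 # per-sample opacity
    T = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]),
                                 1.0 - alpha[:-1]]), dim=0)       # transmittance
    w = (T ** 2) * alpha * radiance / (c * t).clamp(min=1e-9) ** 2
    bins = (2.0 * t / bin_width).long().clamp(0, n_bins - 1)      # round-trip time bins
    tau = torch.zeros(n_bins, dtype=w.dtype).index_add_(0, bins, w)
    # conv1d is a cross-correlation, so flip the kernel for a true convolution
    tau_f = F.conv1d(tau.view(1, 1, -1), impulse.flip([0]).view(1, 1, -1),
                     padding=impulse.numel() // 2)
    return tau_f.view(-1)[:n_bins]

# toy usage with random inputs
t = torch.linspace(1e-9, 6e-9, 128)
sigma, radiance = torch.rand(128) * 5.0, torch.rand(128)
impulse = torch.exp(-0.5 * ((torch.arange(31) - 15.0) / 3.0) ** 2)
print(render_transient(sigma, radiance, t, 1500, 8e-12, impulse).shape)
```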
§.§ Reconstruction
To reconstruct transient NeRFs, we use lidar measurements τ^(k)[i, j, n] of a scene captured from different viewpoints 0 ≤ k ≤ K-1.
We parameterize transient NeRF using a neural network ℱ consisting of a hash grid of features and a multi-layer perceptron decoder <cit.>. The network takes as input a coordinate and viewing direction, and outputs radiance and density, ℱ(𝐫(t), ω) = 𝐜, σ.
We use these outputs to render transients (see Fig. <ref>). The model is optimized to minimize the difference between the rendered transient and measured photon count histograms.
We also introduce a modified loss function to account for the high dynamic range of lidar measurements, and we propose a space carving regularization penalty to help mitigate convergence to local minima in the optimization.
HDR-Informed loss function.
Measurements from a single photon sensor can have a dynamic range that spans multiple orders of magnitude, with each pixel recording from zero to thousands of photons.
We find that applying two exponential functions to the radiance preactivations (1) enforces non-negativity and (2) improves the dynamic range of the network output.
Thus, we have 𝐜 = exp(exp(ĉ))-1, where the network preactivations are given by ĉ.
Following Müller et al. <cit.>, the network also predicts density in log space.
After time-resolved volume rendering using Equation <ref>, we apply a loss function in log space to prevent the brightest regions from dominating the loss <cit.>.
The loss function is given as:
ℒ_τ = ∑_k, i, j, n‖ln(τ^(k)[i, j, n]+1) - ln(τ_f^(k)[i, j, n]+1)‖_1,
where the sum is over all images, pixels, and time bins.
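A minimal PyTorch rendition of the activation and loss described above is shown below; tensor shapes are arbitrary and the choice of a sum reduction is ours.

```python
# Sketch of the double-exponential radiance activation and the log-space L1 loss.
import torch

def radiance_activation(c_hat):
    # c = exp(exp(c_hat)) - 1: non-negative and with extended dynamic range
    return torch.exp(torch.exp(c_hat)) - 1.0

def hdr_transient_loss(tau_meas, tau_pred):
    # L1 between log(1 + .) of measured and rendered histograms
    return torch.abs(torch.log1p(tau_meas) - torch.log1p(tau_pred)).sum()
```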
Space carving regularization.
We find that using the above loss function alone results in spurious patches of density in front of dark surfaces in a scene.
Here, the network can predict bright values on the surface itself, but darkens the corresponding values of τ_f by placing additional spurious density values along the ray.
Since the network can predict the radiance of the density to be zero at these points, the predicted transients τ_f can be entirely consistent with the measured transients τ, but with incorrect geometry.
To address this, we introduce a space carving regularization
ℒ_sc = ∑_k,i,j,n: τ^(k)[i, j, n] < B ∫_𝒫_i, j∫_𝒯_n T(t)σ(𝐫) dt d𝐩.
This function penalizes any density along a ray at locations where the corresponding measured transient values are less than the expected background level B.
This effectively forces space to be empty (i.e., zero density) at regions where the measurements do not indicate the presence of a surface.
The complete loss function used for training is then given as
ℒ = ℒ_τ + λ_scℒ_sc,
where λ_sc controls the strength of the space carving regularization.
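One way to implement this penalty for a single ray is sketched below; the round-trip binning of samples and the hard comparison against the background level B reflect our reading of the equation above, and the argument names and shapes are assumptions.

```python
# Sketch of the space-carving penalty: density contributions T * sigma * delta
# along a ray are penalized wherever the measured histogram bin at the sample's
# round-trip time lies below the background level B.
import torch

def space_carving_loss(sigma, T, delta, t, tau_meas, bin_width, B):
    """sigma, T, delta, t: (num_samples,) for one ray; tau_meas: (n_bins,)."""
    bins = (2.0 * t / bin_width).long().clamp(0, tau_meas.numel() - 1)
    empty = (tau_meas[bins] < B).float()      # 1 where no return is measured
    return (empty * T * sigma * delta).sum()

def total_loss(loss_tau, loss_sc, lambda_sc=1e-3):
    return loss_tau + lambda_sc * loss_sc     # L = L_tau + lambda_sc * L_sc
```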
§.§ Implementation Details
Our implementation is based on the NerfAcc <cit.> version of Instant-NGP <cit.>, which we extend to incorporate our time-resolved volume rendering equation.
In particular, we extend the framework to output time-resolved transient measurements, to account for the pixel footprint, and to estimate depth.
Pixel footprint.
We use a truncated Gaussian distribution to model the spatial footprint of the laser spot and SPAD sensor projected onto the scene.
We sample rays in the range of 4 standard deviations of the pixel center, weighting their contribution to the rendered transient by the corresponding Gaussian probability density function value. We set the standard deviation of the Gaussian to 0.15 pixels for the simulated dataset and 0.10 pixels for the captured dataset.
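The footprint sampling can be realized, for example, with SciPy's truncated normal distribution as in the sketch below; the number of rays per pixel and the separable per-axis weighting are our own choices rather than details of the implementation.

```python
# Sketch: sample ray offsets within +/- 4 sigma of the pixel center from a
# truncated Gaussian and weight them by its pdf (sigma in pixel units).
import numpy as np
from scipy.stats import truncnorm

def sample_footprint(n_rays, sigma_px=0.15, trunc=4.0, seed=0):
    dist = truncnorm(-trunc, trunc, loc=0.0, scale=sigma_px)
    offsets = dist.rvs(size=(n_rays, 2), random_state=seed)   # (dx, dy) in pixels
    weights = dist.pdf(offsets).prod(axis=1)                  # separable 2D weight
    return offsets, weights / weights.sum()

offsets, weights = sample_footprint(16)
print(offsets.shape, weights.sum())
```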
Depth.
To estimate depth we find the distance along each ray that results in the maximum probability of ray termination: argmax_t T(t)σ(t).
Note that when integrating over the pixel footprint at occlusion boundaries, multiple local extrema can occur, and so taking the highest peak results in a single depth estimate without floating pixel artifacts.
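In code this reduces to a single argmax over the per-sample termination probabilities, e.g. as in the sketch below, where the distance along the ray follows from 𝐫(t) = 𝐱 + tc ω (names are assumptions).

```python
# Sketch: depth as the ray distance maximizing the termination probability.
import torch

def ray_depth(T, sigma, t, c=3e8):
    """T, sigma, t: (num_samples,) along one ray; returns distance c*t [m]."""
    idx = torch.argmax(T * sigma)     # single peak avoids floating-pixel artifacts
    return c * t[idx]
```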
Network optimization.
We optimize the network using the Adam optimizer <cit.>, a learning rate of 1×10^-3 and a multi-step learning rate decay of γ=0.33 applied at 100K, 150K, and 180K iterations.
We set the batch size to 512 pixels and optimize the simulated results until they appear to converge, or for 250K iterations for the simulated results and 150K iterations for the captured results. For the weighting of the space carving loss, we use λ_sc=10^-3 for the simulated dataset and increase this to λ_sc=10^-2 for captured data, which benefits from additional regularization.
We train the network on a single NVIDIA A40 GPU.
§ MULTIVIEW LIDAR DATASET
We introduce a first-of-its-kind dataset consisting of simulated and captured multiview data from a single-photon lidar.
A description of the full set of simulated and captured scenes is included in the supplemental, and all dataset and simulation code will be made publicly available.
Simulated dataset.
[Figure: Hardware prototype. A pulsed laser shares a path with a single-pixel SPAD, and the illumination and imaging path are controlled by scanning mirrors.]
We create the simulated dataset using a time-resolved version of Mitsuba 2 <cit.> which we modify for efficient rendering of lidar measurements.
The dataset consists of one scene from Vicini et al. <cit.> and four scenes made available by artists on Blendswap (<https://blendswap.com/>), which we ported from Blender to our Mitsuba 2 renderer.
The training views are set consistent with the capture setup of our hardware prototype (described below) such that the camera viewpoint is rotated around the scene at a fixed distance and elevation angle, resulting in 8 synthetic lidar scans used for training.
We evaluate on rendered measurements from six viewpoints sampled from the NeRF Blender test set <cit.>.
The renders are used to simulate SPAD measurements by applying the noise model described in Equation <ref> and setting the mean number of photon counts to 2850 per occupied pixel and the background counts to 0.001 per bin, which we set to approximate our experimentally captured data.
Hardware prototype.
To create the captured dataset, we built a hardware prototype (Fig. <ref>) consisting of a pulsed laser operating at 532 nm that emits 35 ps pulses of light at a repetition rate of 10 MHz.
The output power of the laser is lowered to <1 mW to keep the flux low enough (roughly 150,000 counts per second on average) to prevent pileup, which is a non-linear effect that distorts the SPAD measurements <cit.>.
The laser shares an optical path with a single-pixel SPAD through a beamsplitter, and a set of 2D scanning mirrors is used to raster scan the scene at a resolution of 512×512 scanpoints.
A time-correlated single-photon counter is used to record the photon timestamps with a total system resolution of approximately 70 ps.
Captured dataset.
We capture multiview lidar scans of six scenes by placing objects on a rotation stage in front of the scanning single-photon lidar and capturing 20 different views in increments of 18 degrees of rotation.
For each lidar scan we accumulate photons during a 20 minute exposure time to minimize noise in the transient measurements.
We bin the photon counts into histograms with 1500 bins and bin widths of 8 ps (all raw timestamp data will also be made available with the dataset).
We set aside 10 views sampled in 36 degree increments for testing and we use 8 of the remaining views for training.
Prior to input into the network for training, we normalize the measurement values by the maximum photon count observed across all views.
Calibration.
We calibrate the camera intrinsics of the system using a raxel model <cit.> with corners detected from two scans of checkerboard translated in a direction parallel to the surface normal.
This model calibrates the direction of each ray individually, which is necessary because the 2D scanning mirrors deviate from the standard perspective projection model <cit.>.
Extrinsics are calibrated by placing a checkerboard on a rotation stage and solving for the axis and center of rotation that best align the 3D positions of the checkerboard corners, where the 3D points are found using the calibrated ray model along with the time of flight from the lidar (see supplemental).
Overall, accurate calibration is an important and non-trivial task because multiview lidar scans provide two distinct geometric constraints (i.e. stereo disparity and time of flight) that must be consistent for scene reconstruction.
§ RESULTS
We evaluate our method on the simulated and captured datasets and use transient neural radiance fields to render intensity, depth, and time-resolved lidar measurements from novel views.
Baselines.
The intensity and depth rendered from our method are compared to four other baseline methods that combine point cloud-based losses with neural radiance fields.
For fairer comparison and to speed up training and inference times, we implement the baselines by incorporating their loss terms into the recently introduced frameworks of NerfAcc <cit.> and Instant-NGP <cit.> adopted by our method.
We train the following baselines using intensity images (i.e., the photon count histograms integrated over time) along with point clouds obtained from the photon count histograms using a log-matched filter, which is the constrained maximum likelihood depth estimate <cit.>.
* Instant-NGP <cit.> is used to illustrate performance without additional depth supervision.
* Depth-Supervised NeRF (DS-NeRF) <cit.> incorporates an additional loss term to ensure that the expected ray termination distance in volume rendering aligns with the point cloud points.
* Urban NeRF <cit.> incorporates the ray-termination loss of DS-NeRF while also adding space carving losses to penalize density along rays before and after the intersection with a point cloud point.
* Urban NeRF with masking (Urban NeRF-M) modifies Urban NeRF to incorporate an oracle object mask and extends the space carving loss to unmasked regions, providing stronger geometry regularization (additional details in supplement).
Prior to input into the network, we normalize the images and apply a gamma correction, which improves network fitting to the high dynamic range data. Finally, after training with each method, we estimate an associated depth map using the expected ray termination depth at each pixel, which is the same metric used in the loss functions of the aforementioned baselines.
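For reference, a log-matched filter of the kind used to extract the baseline point clouds (and the captured-data reference depths) can be sketched as follows; the pulse normalization, the small floor inside the logarithm, and ignoring the alignment offset of the 'same'-mode correlation are simplifications on our part.

```python
# Sketch of a log-matched filter depth estimate from a photon-count histogram:
# cross-correlate the counts with the log of the normalized pulse and convert
# the best shift to distance.  Simplified; correlation alignment offset ignored.
import numpy as np

def log_matched_filter_depth(tau, pulse, bin_width=8e-12, c=3e8):
    """tau: (n_bins,) photon counts; pulse: (k,) system impulse response."""
    log_pulse = np.log(np.maximum(pulse / pulse.sum(), 1e-12))
    score = np.correlate(tau, log_pulse, mode="same")   # approx. log-likelihood
    n_peak = int(np.argmax(score))
    return 0.5 * c * n_peak * bin_width                 # one-way distance [m]
```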
§.§ Simulated Results
The method is compared to the baselines in simulation across five scenes: chair, ficus, lego, hot dog, and statue.
In Fig. <ref>, we show RGB images rendered from novel views using the baselines and our proposed method after training on two, three, and five views.
More extensive sets of results on all scenes are included in the supplemental.
We find that views rendered from transient neural radiance fields have fewer artifacts and spurious patches of density, as the explicit supervision from the photon count histograms avoids the ill-posedness of the conventional multiview reconstruction problem.
[Figure: Comparison of depth maps recovered from simulated measurements trained on 5 views of the lego scene.]
Additional quantitative results are included in Table <ref>, averaged across all simulated scenes.
For the evaluation of rendered RGB images, we normalize and gamma-correct the output of the proposed method and the ground truth in the same fashion as the baseline methods.
Transient NeRF recovers novel views with significantly higher peak signal-to-noise ratio and better performance on the learned perceptual image patch similarity (LPIPS) metric <cit.> compared to baselines. Transient measurements provide explicit supervision of the unoccupied spaces in the scene, leading to fewer floating artifacts, and to cleaner novel views.
The depth maps inferred from Transient NeRF are also significantly more accurate than baselines (see Fig. <ref>).
One key advantage here is that we avoid
supervision on point clouds obtained by potentially noisy (and thus view-inconsistent) per-pixel estimates of depth.
By training on the raw photon count histograms, the scene's geometry is allowed to converge to the shape that best explains all histograms across all views, resulting in much higher geometric accuracy.
§.§ Captured Results
In Fig. <ref> we show rendered novel views of intensity images from our method and baselines trained on captured data with two, three, and five views.
Results are shown on the cinema, food, and baskets scenes (additional results in the supplemental).
The proposed approach results in fewer artifacts and the rendered intensity images are more faithful to reference intensity images captured from the novel viewpoint.
Quantitative comparisons of our method to baselines on captured data are shown in Table <ref>; note that we do not have access to ground truth depth for captured data and instead use depth from a log-matched filter on the ground truth transient.
We find that the method outperforms the baselines in terms of PSNR and LPIPS of intensity images rendered from novel views.
While performance on captured data does not improve as much as observed on simulated data with increasing numbers of viewpoints, we attribute this to small imperfections (≈1 mm) in the alignment of the lidar scans after estimating the camera extrinsics.
Since DS-NeRF trains explicitly on depth without additional regularization, it is especially sensitive to camera perturbations and can be outperformed in some cases by Instant NGP which has no additional geometry constraints.
Our approach appears somewhat less sensitive to these issues, perhaps because geometry regularization is done implicitly through a photometric loss on the lidar measurements.
We notice some degradation in depth accuracy relative to simulation, likely due to imperfections in the estimated extrinsics.
Sub-mm registration of the lidar measurements would likely improve results, but achieving such precise registration is non-trivial and beyond the scope of our current work.
Finally, in Fig. <ref> we compare captured measurements to rendered transients and depth rendered for the boots scene trained on 2 viewpoints.
We recover the time-resolved light propagation from a novel view, shown in x–y slices of the rendered transients over time.
The depth map recovered from the novel view appears qualitatively similar to the ground truth (estimated from captured measurements using a log-matched filter <cit.>).
We show additional 3D reconstruction results in the supplemental.
[Figure: Comparison between reference and novel views of lidar measurements, intensity slices, and depth. The x–y intensity slices are visualized for times indicated by the red dashed lines.]
§ DISCUSSION
Our work brings NeRF to a new dimension of imaging at transient timescales, offering new opportunities for view synthesis and 3D reconstruction from multiview lidar.
While our work is limited to modeling the direct reflection of laser light to perform lidar view synthesis, our dataset captures much richer light transport effects, including multiple bounces of light and surface reflectance properties that could open avenues for future work.
In particular, the method and dataset may help enable techniques for intra-scene non-line-of-sight imaging <cit.> (i.e., recovering geometry around occluders within a scene), and recovery of the bidirectional reflectance distribution function via probing with lidar measurements <cit.>.
Our method is also limited in that we do not explore more view synthesis in more general single-photon imaging setups, such as when the lidar and SPAD are not coaxial; we hope to explore these configurations in future work.
The proposed framework and the ability to render transient measurements from novel views may be especially relevant for realistic simulation for autonomous vehicle navigation, multiview remote sensing, and view synthesis of more general transient phenomena.
Kiriakos N. Kutulakos acknowledges the support of the Natural Sciences and Engineering Council of Canada (NSERC) under the RGPIN and RTI programs.
David B. Lindell acknowledges the support of the NSERC RGPIN program.
The authors also acknowledge Gordon Wetzstein and the Stanford Computational Imaging Lab for loaning the single-photon lidar equipment.
|
http://arxiv.org/abs/2307.04614v1 | 20230710145753 | (Empirical) Gramian-based dimension reduction for stochastic differential equations driven by fractional Brownian motion | [
"Nahid Jamshidi",
"Martin Redmann"
] | math.NA | [
"math.NA",
"cs.NA"
] |
|
http://arxiv.org/abs/2307.04301v1 | 20230710013651 | NN-EVP: A physics informed neural network-based elasto-viscoplastic framework for predictions of grain size-aware flow response under large deformations | [
"Adnan Eghtesad",
"Jan Niklas Fuhg",
"Nikolaos Bouklas"
] | cs.CE | [
"cs.CE"
] |
Adnan Eghtesad^1 (corresponding author, [email protected]), Jan Niklas Fuhg^1, Nikolaos Bouklas^1,2
^1 Sibley School of Mechanical and Aerospace Engineering, Cornell University, NY 14853, USA
^2 Center for Applied Mathematics, Cornell University, NY 14853, USA
We propose a physics informed, neural network-based elasto-viscoplasticity (NN-EVP) constitutive modeling framework for predicting the flow response in metals as a function of underlying grain size. The developed NN-EVP algorithm is based on input convex neural networks as a means to strictly enforce thermodynamic consistency, while allowing high expressivity towards model discovery from limited data. It utilizes state-of-the-art machine learning tools within PyTorch's high-performance library providing a flexible tool for data-driven, automated constitutive modeling.
To test the performance of the framework, we generate synthetic stress-strain curves using a power law-based model with phenomenological hardening at small strains and test the trained model for strain amplitudes beyond the training data. Next, experimentally measured flow responses obtained from uniaxial deformations are used to train the framework under large plastic deformations. Ultimately, the Hall-Petch relationship corresponding to grain size strengthening is discovered by training flow response as a function of grain size, also leading to efficient extrapolation. The present work demonstrates a successful integration of neural networks into elasto-viscoplastic constitutive laws, providing a robust automated framework for constitutive model discovery that can efficiently generalize, while also providing insights into predictions of flow response and grain size-property relationships in metals and metallic alloys under large plastic deformations.
Keywords: Viscoplasticity, Flow response, Grain size, Hall-Petch, Machine learning, Neural networks
NN-EVP: A physics informed neural network-based elasto-viscoplastic framework for predictions of grain size-aware flow response under large deformations
August 12, 2023
=========================================================================================================================================================
§ INTRODUCTION
Understanding the flow response and viscoplastic behavior of metals incorporating strain rate sensitivity effects, is essential for the predictions of the mechanical behavior, improving the material performance, and designing novel alloys, critical towards revolutionizing the objectives of the aerospace and automotive manufacturing industries. Owing to their polycrystalline nature, the anisotropic flow response of metals under large deformations heavily depends on the average grain size that forms the underlying microstructure. Grain size strengthening, referred to as the Hall-Petch effect <cit.>, is associated with the grain boundary resistance to the crystallographic deformation mechanisms and dislocation motions <cit.>. Authors in <cit.> implemented a crystal plasticity model that accounts for grain size effects and slip system interactions on the deformation of austenitic stainless steels. The crystal plasticity finite element method (CPFEM) is used to study the grain size and morphology effects on yield strength <cit.>. In addition to crystal plasticity modeling, atomistic simulations are well utilized to study the Hall-Petch relationships in advanced alloys <cit.>.
High-performance computing (HPC) has improved the efficiency of numerical techniques over the last few decades. However, despite acceleration of HPC simulations, predicting multi-scale material deformations is still a time-consuming task and limited by parallel scalability and hardware performance.
To address this issue, recent research has focused on the integration of genetic algorithms (GA), machine learning (ML), and deep learning (DL) to facilitate automated constitutive modeling which has the potential to expedite multi-scale simulations <cit.>. In particular, they have been used to model hyperelasticity <cit.>, viscoelasticity <cit.> and multiphysics problems <cit.>.
Lately, neural networks (NN) and convolutional neural networks (CNN) have also been utilized for a variety of applications in modeling viscoplastic deformations in macro and micro scales <cit.>. A recurrent neural network (RNN) model was proposed as a computationally-efficient surrogate for crystal plasticity simulations <cit.>. A new NN-based crystal plasticity algorithm was presented for FCC materials and its application to non-monotonic strain paths <cit.> followed by a CNN-based CPFEM model to predict the localized viscoplastic deformation in aluminum alloys <cit.>. Some recent work has implemented NN and CNN to predict yield surfaces from microstructural images and crystal plasticity simulations <cit.>. A machine learning-enabled crystal plasticity model with dislocation density hardening was developed to identify stress and strain localizations under large viscoplastic deformations <cit.>. An input convex NN (ICNN) framework was presented for the prediction of texture-dependent macroscopic yield functions from crystal plasticity simulations <cit.>. On the continuum level, NN algorithms have been implemented in finite element (FE) solvers to replace classical history-dependent constitutive models to obtain nonlinear structural responses <cit.>.
While the literature addresses a wide range of machine and deep learning applications related to elastoplasticity and viscoplasticity in the context of big data, only a few rely on the more realistic case of limited-data availability of macroscopic observations from measured tensile tests. Furthermore, studies that focus on viscoplasticity in the low- and limited-data regimes, only utilize neural networks for parameter estimation of established phenomenological constitutive models rather than model discovery. The latter can focus on establishing automated frameworks that remove the need for a particular phenomenological parametrization. To address this shortcoming, and allow robust performance in low- and limited-data regimes, physics-based ML algorithms have been proposed aiming to directly enforce physical laws, thermodynamic principles and also established domain knowledge <cit.>.
Motivated by the earlier work of <cit.>, recent work in <cit.>, enables modular ML-based elastoplasticity in the context of limited data using thermodynamics-aware and mechanistically informed neural networks offering a hybrid framework with each component of the model being selective as classical phenomenological or a data-driven model depending on the data availability, allowing for trustworthy prediction and generalization.
The present work builds upon the work in <cit.> and proposes a novel NN-based framework for modeling the elasto-viscoplastic (NN-EVP) response of metals as a function of grain size where strain rate sensitivity effects are taken into consideration. The proposed algorithm and underlying constitutive model consider the coupled elasto-viscoplastic response in contrast to the studies where elastic and plastic regimes are treated separately. In addition, for consistency with the thermodynamic laws and mechanistic assumptions, neural network architectures are designed to enforce monotonicity and convexity requirements.
The framework allows for the discovery of laws describing the rate-sensitive viscoplastic flow response of metals and alloys, enabling a predictive infrastructure for the flow response as a function of strain amplitude and grain size. The present paper is structured as follows. Section 2 starts with a brief review of the thermodynamics-based modular formulation, based on dual potentials, as well as particular functional forms, resulting from mechanistic assumptions, that describe the rate-sensitive viscoplastic flow response in metals. Afterward, the key aspects pertaining to the novel implementations of the NN-EVP framework are discussed. Section 3 describes the applications of the NN-EVP tool in modeling large viscoplastic deformations with isotropic hardening for both synthetic and experimental data as a function of grain size. A summary of the findings and key contributions concludes the work.
§ METHODS
§.§ Elasto-viscoplasticity constitutive formulation based on dual potential
Under the assumption of small strains, elasto-viscoplasticity can be modeled by decomposing the strain into its elastic and viscoplastic parts:
ϵ=ϵ^e+ϵ^vp,
where ϵ, ϵ^e, and ϵ^vp are the total strain, elastic strain, and viscoplastic strain, respectively.
In a similar fashion, the specific free energy Ψ can be decomposed into elastic and viscoplastic contributions, denoted by Ψ^e and Ψ^vp respectively, as follows:
Ψ = Ψ^e( ϵ^e) + Ψ^vp(r, α,T ),
where r and α are the thermodynamic variables related to isotropic and kinematic hardening, and T is the temperature. Following the Coleman-Noll procedure <cit.> the Cauchy stress σ and thermodynamic forces R and X can then be defined as:
σ=∂Ψ/∂ϵ^e=-∂Ψ/∂ϵ^vp,
R=∂Ψ/∂r,
𝐗=∂Ψ/∂α.
Under isothermal conditions, the intrinsic potential is written as:
Ψ_I=Ψ_I^e(ϵ^e)+Ψ_I^vp(α,r).
The generalization of the concept of equipotential surfaces in stress space, representing similar dissipation (i.e., strain rate) levels leads to the definition of a dual potential in the form of <cit.>:
φ^*=φ^*(σ,R,𝐗;r,α).
The dual dissipation potential φ^* satisfies the following conditions:
* φ^* is convex w.r.t to all its variables,
* φ^* is always positive: φ^*>0,
* φ^* includes the origin: φ^*(0,0,0,α,r)=0.
Following the normality law of generalized standard materials the plastic strain rate ϵ̇^̇v̇ṗ and the hardening variables r and α can then be obtained as <cit.>:
ϵ̇^vp=∂φ^*/∂σ,
ṙ=-∂φ^*/∂R,
α̇=-∂φ^*/∂𝐗,
which leads to the second law of thermodynamics as follows:
Ψ_I=σ:∂φ^*/∂σ+𝐗:∂φ^*/∂𝐗+R ∂φ^*/∂R≥φ^* ≥ 0.
§.§.§ Decomposition of dual potential
Under consideration of the recovery effects caused by dislocation annihilation we can formulate the dual potential φ^* to also be decomposed into viscoplastic hardening and recovery potentials as follows <cit.>:
φ^*=Ω_vp + Ω_r,
where
Ω_vp=Ω_vp(σ,R,𝐗;r,α),
Ω_r=Ω_r(R,𝐗;r,α).
In the absence of recovery effects, the dual potential can be considered to be equal to the viscoplastic hardening potential Ω_vp <cit.>:
φ^*=Ω_vp=φ^*(σ,R,𝐗;r,α).
§.§.§ Particular functional forms and power-law
Assuming linear elastic behavior for the elastic part of the free energy function we can write
Ψ_I^e(ϵ^e) = 1/2ϵ^e : ℂ:ϵ^e
where ℂ is the fourth-order elastic stiffness tensor. This yields the Cauchy stress through Hooke's law as follows
σ=ℂ:ϵ^e.
For isotropic materials, the stiffness tensor is uniquely defined by the Young's modulus E and Poisson's ratio ν
ℂ_ijkl = Eν/[(1+ν)(1-2ν)] δ_ijδ_kl + E/[2(1+ν)] (δ_ikδ_jl + δ_ilδ_jk)
where δ_ij denotes the Kronecker delta.
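For illustration, the isotropic stiffness tensor above can be assembled numerically as in the following Python sketch; the helper name, the use of numpy einsum, and the elastic constants are our own illustrative choices rather than values from the paper.

import numpy as np

def isotropic_stiffness(E, nu):
    # C_ijkl = lambda*delta_ij*delta_kl + mu*(delta_ik*delta_jl + delta_il*delta_jk)
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d) + np.einsum('il,jk->ijkl', d, d)))

# Placeholder elastic constants (MPa) and a small uniaxial elastic strain
C = isotropic_stiffness(E=130e3, nu=0.34)
eps_e = np.zeros((3, 3))
eps_e[0, 0] = 1e-3
sigma = np.einsum('ijkl,kl->ij', C, eps_e)   # Hooke's law, sigma = C : eps_e
print(np.round(sigma, 2))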
Under the assumption of isotropic and pressure-independent material response, we can reformulate the dual potential as a function of only the second and third stress invariants
φ^*=φ^*(J_2(σ-𝐗),J_3(σ-𝐗),R,𝐗;r,α),
where
J_2(σ-𝐗)=√(3/2)‖σ^'-𝐗^'‖,
J_3(σ-𝐗)=(1/3)tr(σ-𝐗),
and
σ^'=σ-(1/3)tr(σ),
𝐗^'=𝐗-(1/3)tr(𝐗).
Here, the apostrophe □^' indicates the deviatoric component of the tensor and tr( ) implies the trace of the argument.
After assuming that, due to the incompressible behavior of metals, only the deviatoric components of the stress contribute to the plastic deformation, the dual potential reduces to <cit.>:
φ^*=φ^*(J_2(σ-𝐗),R,𝐗;r,α),
The fundamental flow rule in viscoplasticity theory involving the thermodynamic force R(r) is commonly written in the form of a power-law and is desirable because it provides uniqueness of solution for the stress value that accommodates an imposed strain rate <cit.>:
φ^*=[ϵ̇_0/(n+1)] [J_2(σ-𝐗)/R(r)]^(n+1).
Incorporating Eq. (<ref>) and Eq. (<ref>) into Eq. (<ref>) we get that
ϵ̇^vp=(3/2) ϵ̇_0 [J_2(σ-𝐗)/R(r)]^n (σ^'-𝐗^')/‖σ^'-𝐗^'‖.
The power law embeds the introduced strain rate sensitivity and provides uniqueness of solution for the threshold of equivalent von Mises stress accommodating an imposed strain rate. In Eqs. <ref> and <ref>, n is the rate sensitivity parameter and ϵ̇_0 is a reference strain rate usually chosen as the norm of the applied strain rate tensor ‖ϵ̇^app‖. Following Eq. <ref>, the variable r and its rate ṙ, which represent the effective accumulated viscoplastic strain and the effective viscoplastic strain rate, can be written as follows:
r=ϵ_eff^vp=∫_0^t√(2/3)‖ϵ̇^vp(τ)‖ dτ,
ṙ=ϵ̇_eff^vp=√(2/3)‖ϵ̇^vp‖.
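The flow rule and effective strain rate above translate directly into a few lines of numpy; the following sketch is illustrative only, with placeholder stress, back-stress, and hardening values, and is not the paper's implementation.

import numpy as np

def dev(A):
    # deviatoric part of a second-order tensor
    return A - np.trace(A) / 3.0 * np.eye(3)

def viscoplastic_strain_rate(sigma, X, R, eps0_dot, n):
    # power-law flow rule: (3/2)*eps0_dot*(J2/R)^n * (sigma'-X')/||sigma'-X'||
    s_eff = dev(sigma) - dev(X)
    norm = np.linalg.norm(s_eff)
    J2 = np.sqrt(1.5) * norm
    return 1.5 * eps0_dot * (J2 / R) ** n * s_eff / norm

# Placeholder values: 60 MPa uniaxial stress, no back stress, R = 50 MPa, n = 20
sigma = np.diag([60.0, 0.0, 0.0])
eps_vp_dot = viscoplastic_strain_rate(sigma, np.zeros((3, 3)), R=50.0, eps0_dot=1e-3, n=20)
r_dot = np.sqrt(2.0 / 3.0) * np.linalg.norm(eps_vp_dot)   # effective viscoplastic strain rate
print(np.round(eps_vp_dot, 6), r_dot)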
§.§.§ Perfect viscoplasticity
In case of perfect viscoplasticity and no hardening effects, the yield surface does not evolve as a function of accumulated viscoplastic strain, and the term R(r) reduces to an initial yield value σ^Y. The dual potential and viscoplastic strain rate can then be written as:
φ^*=[ϵ̇_0/(n+1)] (σ_eq/σ^Y)^(n+1),
ϵ̇^vp=(3/2) ϵ̇_0 (σ_eq/σ^Y)^n σ^'/σ_eq,
where σ_eq=√(3/2)‖σ^'‖.
Depending on the material properties, the rate sensitivity parameter varies in the range of 10 ≤ n ≤ 400. Figure <ref> shows the effect of rate sensitivity on perfect viscoplastic flow response of Cu under applied quasi-static strain rate of 0.001s^-1 generated by the power law equation mentioned above. Note that while higher values of n represent the rate sensitivity of the material more accurately, the computational cost and number of iterations required to numerically solve Eq. (<ref>) increases.
§.§.§ Isotropic hardening
In the case of isotropic hardening, the yield function R(r) starts with an initial value σ^Y (R(r=0)=σ^Y) and evolves as a function of accumulated viscoplastic strain r. The dual potential and viscoplastic strain rate then yield
φ^*=[ϵ̇_0/(n+1)] (σ_eq/R(r))^(n+1),
ϵ̇^vp=(3/2) ϵ̇_0 (σ_eq/R(r))^n σ^'/σ_eq.
Figure <ref> shows the viscoplastic flow response in Cu generated by the power law and as a function of applied strain rate and rate sensitivity parameter n=20, demonstrating significant anisotropy in the flow response as a function of imposed strain rate.
While the particular functional forms and the power law model described above enable the constitutive formulation for the viscoplastic flow response in metals, predicting the stress-strain behavior using these models often requires manual calibration of a set of hardening parameters with experimental data obtained from uniaxial deformation tests. While the automation of the calibration procedures has been enabled by existing ML and GA models, training these models can still be a time-consuming task, especially when discrepancies between the experimental response and the assumed power-law model are present. Therefore, we propose a data-driven elasto-viscoplasticity framework based on neural networks, NN-EVP, that replaces the power law with a generic neural-network-based algorithm capable of predicting the flow response in metals and alloys at large deformations in the context of limited data availability.
§.§ NN-EVP
In order to employ a physics-informed data-driven model for the elasto-viscoplastic constitutive formulation described earlier, the dual potential φ^* is represented by the response of a neural network 𝒩𝒩_ϕ^* whose (scalar) input is chosen based on which hardening phenomena we aim to capture. This is discussed in more detail in the subsequent sections.
To remain consistent with thermodynamics laws, the following conditions are (implicitly) enforced on the dual potential neural network 𝒩𝒩_ϕ^*:
* 𝒩𝒩_ϕ^* is convex and monotonically increasing
* 𝒩𝒩_ϕ^* is always positive: 𝒩𝒩_ϕ^* > 0
* 𝒩𝒩_ϕ^* includes the origin: 𝒩𝒩_ϕ^*(0)=0
If 𝒩𝒩_ϕ^*: ℝ→ℝ is a feedforward neural network with L hidden layers, input x_0 and output x_L, the neural network can be written as follows:
x_0 ∈ℝ_≥ 0,
x_1 = ℱ_1( x_0W_1^T + b_1) ∈ℝ^n^1,
x_l = ℱ_l( x_l-1W_l^T + b_l) ∈ℝ^n^l, l=1, …, L-1
x_L = x_L-1W_L^T + b_L, ∈ℝ,
where the weights W_l∈ℝ^n^l× n^l-1 and biases b_l∈ℝ^n^l define the set of trainable parameters and the activation functions are denoted by ℱ_l.
The neural network 𝒩𝒩_ϕ^* is positive, monotonically increasing, and input convex when the following conditions are met <cit.>:
* x_0≥ 0 , W_l≥ 0 , b_l≥ 0,
* ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L,
* ℱ_l': ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L,
* ℱ_l”: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L.
In order to satisfy the conditions above, we choose a parameterized and adaptive/scalable Softplus activation function <cit.> as follows:
ℱ_l^𝒩𝒩_ϕ^*(x) = (1/β) log(1+e^x) - ℱ_l^𝒩𝒩_ϕ^*(0),
where β>0 is a training parameter in addition to the weights and biases of the neural network, providing a more generic form of the activation function and thus more flexibility in training. Note that β is trainable for each hidden layer of 𝒩𝒩_ϕ^*. The subtraction of ℱ_l^𝒩𝒩_ϕ^*(0) in the equation above ensures that the condition 𝒩𝒩_ϕ^*(0)=0 is satisfied. Once the dual potential 𝒩𝒩_ϕ^* is established, it can be used to replace the particular power-law form introduced earlier in Eqs. (<ref>) and (<ref>). Accordingly, the viscoplastic strain rate ϵ̇^vp can be obtained by taking the derivative of the output of 𝒩𝒩_ϕ^* with respect to the Cauchy stress:
ϵ̇^vp= ∂𝒩𝒩_ϕ^*/∂σ.
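A minimal PyTorch sketch of one possible realization of 𝒩𝒩_ϕ^* is given below. The softplus re-parametrization of the weights, the zero-bias choice used to enforce 𝒩𝒩_ϕ^*(0)=0, and all layer sizes are our assumptions for illustration; the flow rule is then evaluated with automatic differentiation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftedAdaptiveSoftplus(nn.Module):
    # (1/beta) * [log(1 + e^x) - log(2)] with a trainable beta > 0, so that F(0) = 0
    def __init__(self):
        super().__init__()
        self.log_beta = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        beta = torch.exp(self.log_beta)
        return (F.softplus(x) - F.softplus(torch.zeros_like(x))) / beta

class DualPotentialNet(nn.Module):
    # NN_phi*: positive, monotonically increasing and convex in its scalar input
    def __init__(self, width=20, hidden=2):
        super().__init__()
        sizes = [1] + [width] * hidden + [1]
        self.raw_w = nn.ParameterList(
            [nn.Parameter(0.1 * torch.randn(o, i)) for i, o in zip(sizes[:-1], sizes[1:])])
        self.acts = nn.ModuleList([ShiftedAdaptiveSoftplus() for _ in range(hidden)])
    def forward(self, x):
        for k, raw in enumerate(self.raw_w):
            x = F.linear(x, F.softplus(raw))           # softplus keeps W >= 0; zero bias gives NN(0) = 0
            if k < len(self.acts):
                x = self.acts[k](x)
        return x

phi = DualPotentialNet()
sigma_eq = torch.tensor([[0.3]], requires_grad=True)   # e.g. the ratio sigma_eq / R (placeholder value)
potential = phi(sigma_eq).sum()
eps_vp_dot, = torch.autograd.grad(potential, sigma_eq) # flow rule from automatic differentiation
print(potential.item(), eps_vp_dot.item())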
§.§.§ NN-EVP for perfect viscoplasticity
In the case of perfect viscoplasticity, the yield function σ^Y is constant and does not evolve. We can therefore use σ_eq as the input to the neural network, 𝒩𝒩_ϕ^* =𝒩𝒩_ϕ^* (σ_eq). Figure <ref> shows a schematic of the particular neural network architecture used for modeling perfect viscoplasticity in this work. While different configurations are possible, in particular with regard to the number of layers and neurons, the amount of training data is already low (fewer than 100 data points, depending on the resolution of the stress-strain curve) compared to the number of trainable parameters; we therefore show results for a network with 2 layers and 20 neurons in the following. We remark that slight changes to this network architecture, e.g., 2 layers with 30 neurons, have not been shown to yield different results.
§.§.§ NN-EVP with isotropic hardening
Once hardening effects are introduced into the constitutive model, the yield function R(r) (which will not be imposed, but discovered) evolves during the deformation as a function of accumulated plastic strain r. As a result, information pertaining to the evolution of the yield function is also required as an additional input to the dual potential neural network 𝒩𝒩_ϕ^*. Thus, we could define the dual potential neural network as 𝒩𝒩_ϕ^* =𝒩𝒩_ϕ^* (σ_eq,R(r)). To this end, an additional neural network 𝒩𝒩_R=𝒩𝒩_R(r) is required to track the evolution of strain hardening. Notice that the output of the hardening neural network 𝒩𝒩_R(r) could be used as an input to the dual potential neural network 𝒩𝒩_ϕ^* (σ_eq,R(r)). Here, instead of taking σ_eq and R(r) as two independent inputs to 𝒩𝒩_ϕ^*, we choose to take the ratio σ_eq/R(r) as a single input. This mechanistic assumption informs the neural network with the physical constraint that σ_eq needs to be scaled proportionally with the evolution of R(r), imposing the strain hardening effects, as the ratio σ_eq/R(r) controls the elasto-viscoplastic transition as well as the rate of the evolution of the viscoplastic strain rate. Figure <ref> shows a schematic of NN-EVP with isotropic hardening. To remain consistent with the thermodynamics laws, the following conditions are enforced on the hardening neural network 𝒩𝒩_R:
* 𝒩𝒩_R is monotonically increasing,
* 𝒩𝒩_R is always positive: 𝒩𝒩_R > 0,
* 𝒩𝒩_R does not include the origin: 𝒩𝒩_R(r=0) ≠ 0.
The neural network 𝒩𝒩_R is positive and monotonically increasing when the following conditions are met:
* x_0≥ 0 , W_l≥ 0, b_l≥ 0,
* ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L,
* ℱ_l': ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L.
Notice that in contrast to the dual potential network 𝒩𝒩_ϕ^*, the hardening network 𝒩𝒩_R does not contain the origin and does not require convexity. This is due to the fact that the yield function R(r) has a nonzero value equal to the initial yield at zero accumulated viscoplastic strain r. We also remark that since 𝒩𝒩_R is monotonically increasing, the reciprocal form 1/𝒩𝒩_R, used as input to 𝒩𝒩_ϕ^*, is monotonically decreasing <cit.>.
In order to satisfy the conditions above, we choose combinations of forms of ReLU, adaptive logistic or adaptive tanh activation functions as follows:
ℱ_l^𝒩𝒩_R(x) = α_1 max(x,0) + α_2/(1+e^-β x),
ℱ_l^𝒩𝒩_R(x) = α_1 max(x,0) + α_2 (e^β x-e^-β x)/(e^β x+e^-β x),
ℱ_l^𝒩𝒩_R(x) = α_1/(1+e^-β x) + α_2 (e^β x-e^-β x)/(e^β x+e^-β x),
where α_1 and α_2 denote the weights assigned to the activation functions and applied to the hidden layers of each neural network. The reason behind these mixed activation functions is to facilitate learning the hardening response for larger strain amplitudes. Since the logistic activation function saturates early at lower strain levels, the addition of either a ReLU or tanh response compensates for the stress-strain curvature at later stages of hardening. The effect of variation in α_1 and α_2 as well as the particular selection of the activation functions on the flow response is discussed later in detail.
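As an illustration, one way to implement the mixed adaptive activation of Eq. (<ref>) in PyTorch is sketched below; the trainable-β parametrization and the specific α_1, α_2 values are assumptions rather than the authors' code.

import torch
import torch.nn as nn

class MixedHardeningActivation(nn.Module):
    # alpha1 * ReLU(x) + alpha2 * tanh(beta * x), with beta > 0 trainable per layer
    def __init__(self, alpha1=0.2, alpha2=0.8):
        super().__init__()
        self.alpha1, self.alpha2 = alpha1, alpha2
        self.log_beta = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        beta = torch.exp(self.log_beta)
        return self.alpha1 * torch.relu(x) + self.alpha2 * torch.tanh(beta * x)

act = MixedHardeningActivation(alpha1=0.2, alpha2=0.8)   # a 20% ReLU / 80% adaptive-tanh mix
r = torch.linspace(0.0, 3.0, 5).unsqueeze(1)             # nonnegative inputs keep the output nonnegative
print(act(r).squeeze())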
§.§.§ Hall-Petch effects and grain size-aware NN-EVP
Hall-Petch effects and variation in grain size are concomitant with strong anisotropy in viscoplastic flow responses. In particular, the initial yield function of the material alters drastically depending on the average grain size of the underlying microstructure. Particular functional forms for Hall-Petch relationship vary in the literature. However, the most common form <cit.> can be written as follows:
R_0,HP=Hμ√(b)/√(d_grain)
where b is the Burgers vector, μ is the shear modulus, d_grain is the average grain size and H stands for the Hall-Petch coefficient usually obtained from calibrations with experimental observations. Notice that Eq. (<ref>) provides a nonlinear relationship between the grain size and initial yield function in non-logarithmic space, however, it can be shown that the Hall-Petch stress does not always scale as the inverse square root of grain size <cit.> and thus Eq. <ref> is simply a specific function of many possible forms. In order to incorporate the Hall-Petch effect into the NN-EVP framework, we introduce a Hall-Petch neural network 𝒩𝒩_HP=𝒩𝒩_HP(d_grain) in addition to the dual potential 𝒩𝒩_ϕ^* and the hardening neural network 𝒩𝒩_R. This allows us to learn and discover the Hall-Petch relationship. To remain consistent with thermodynamics laws, the following conditions are enforced on the Hall-Petch neural network 𝒩𝒩_HP:
* 𝒩𝒩_HP is monotonically decreasing,
* 𝒩𝒩_HP is always positive: 𝒩𝒩_HP > 0,
* 𝒩𝒩_HP(d_grain→ 0) →∞.
The neural network 𝒩𝒩_HP is positive when the following conditions are met:
* x_0≥ 0 , W_l≥ 0, b_l≥ 0,
* ℱ_l: ℝ_≥ 0→ℝ_≥ 0 , l=1, …, L.
Here, we first choose a standard tanh activation function as follows and then take the reciprocal of the network output to satisfy the conditions above:
ℱ_l^𝒩𝒩_HP(x) = e^x-e^-x/e^x+e^-x,
𝒩𝒩_HP←1/𝒩𝒩_HP.
Notice that since the Hall-Petch term only affects the initial yield function and does not evolve during the deformation, an adaptive activation function here is not necessary. Figure <ref> shows a schematic of the grain size-aware NN-EVP architecture including the Hall-Petch effects. As illustrated in the figure, the responses of the Hall-Petch network 𝒩𝒩_HP and the hardening network 𝒩𝒩_R are combined into σ_eq/[𝒩𝒩_R(r)+𝒩𝒩_HP(d_grain)], which is then used as the input to the dual potential network 𝒩𝒩_ϕ^*.
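A possible PyTorch sketch of 𝒩𝒩_HP is shown below; the reciprocal-tanh construction follows the conditions above, while the layer width, the softplus weight re-parametrization, and the small positive offset added for numerical safety are our own choices.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HallPetchNet(nn.Module):
    # NN_HP(d_grain): positive and monotonically decreasing via a reciprocal tanh output
    def __init__(self, width=20):
        super().__init__()
        self.raw_w1 = nn.Parameter(0.1 * torch.randn(width, 1))
        self.raw_w2 = nn.Parameter(0.1 * torch.randn(1, width))
    def forward(self, d_grain):
        h = torch.tanh(F.linear(d_grain, F.softplus(self.raw_w1)))  # increasing in d_grain, W >= 0
        y = F.linear(h, F.softplus(self.raw_w2)) + 1e-6             # small offset as a numerical safeguard
        return 1.0 / y                                              # decreasing, grows without bound as d_grain -> 0

nn_hp = HallPetchNet()
d = torch.tensor([[2.1], [3.4], [7.1], [15.0]])   # grain sizes in microns, as used in the experiments
print(nn_hp(d).squeeze())                          # Hall-Petch contribution added to NN_R(r) in the input
                                                   # sigma_eq / (NN_R(r) + NN_HP(d_grain)) to NN_phi*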
§ RESULTS AND DISCUSSION
§.§ Implementation highlights
The NN-EVP framework is implemented in PyTorch <cit.>. PyTorch takes advantage of automatic differentiation <cit.>, facilitating automatic computation of local gradients of the output of a neural network with respect to its inputs. Automatic differentiation also allows us to determine the Jacobian required for solving the update step of implicit time-stepping using the Newton-Raphson (NR) method <cit.>. Details of the implementation are provided in the form of a pseudo-code in algorithm <ref>. Due to its adaptive learning decay algorithm and robustness, the AdamW optimizer <cit.> is utilized to optimize the parameters of the three neural networks used in this study. In addition, a cosine annealing scheduler <cit.> is utilized to further enhance the learning rate adaptivity and therefore improve the robustness of the NN-EVP framework. The cosine annealing scheduler is set with a starting learning rate of 1e^-2 and saturates to a learning rate of 1e^-3.
The established framework is then trained to model perfect viscoplasticity as well as an isotropic hardening response under uniaxial tensile loading. We consider two different scenarios. First, the flow response is synthetically generated using the power law and phenomenological Johnson-Cook hardening <cit.> where in addition to fitting the existing data, the flow response is predicted to larger strain amplitudes via extrapolations enabled by recovering the trained neural networks. Next, experimentally measured data pertaining to large deformations as a function of grain size are used to train the NN-EVP framework presented herein. In all cases, a constant 11 component of the strain rate ϵ̇^app=1e^-3 s^-1 in X direction is applied. In order to satisfy the uniaxial loading conditions, the 22 and 33 components of the stress tensor are enforced to be zero (see algorithm <ref>). The NR solver tolerance is set to 𝒯𝒪ℒ_𝒩ℛ=1e^-6 for all simulations which is reached within 2-4 iterations depending on the deformation history. Finally, the mean squared error function is used to calculate the loss value in each training epoch.
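The optimizer and scheduler choices described above can be wired together as in the following schematic PyTorch loop; the stand-in model and synthetic target curve are placeholders for the actual NN-EVP forward pass with implicit Newton-Raphson time stepping.

import torch
import torch.nn as nn

# Stand-ins for the NN-EVP forward pass and a measured flow curve (placeholders only);
# in the actual framework the prediction comes from the implicit Newton-Raphson update
# driven by NN_phi* and NN_R, as outlined in the pseudo-code.
model = nn.Sequential(nn.Linear(1, 20), nn.Softplus(), nn.Linear(20, 1))
strain = torch.linspace(0.0, 0.1, 50).unsqueeze(1)
stress_meas = 200.0 * strain ** 0.3

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=500, eta_min=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(strain), stress_meas)
    loss.backward()
    optimizer.step()
    scheduler.step()          # learning rate decays from 1e-2 toward 1e-3 over training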
§.§ Rediscovering the power law for perfect viscoplasticity
In order to predict perfect viscoplasticity in Cu, the power law equation with a constant yield function σ^Y and rate sensitivities of n=10, 20, and 100 is used to generate the synthetic stress-strain data. Since strain hardening is not involved, only the dual potential neural network 𝒩𝒩_ϕ^* (σ_eq=σ) is utilized. The left plot of Fig. <ref>(a) shows the evolution of normalized loss over 200 training epochs for a rate sensitivity of n=10. The training error saturates to a value of 1e^-5 after around 100 iterations. Note that the fluctuations observed in the loss evolution are innate to the optimizer algorithm regardless of the learning rate value. The right plot of Fig. <ref>(a) illustrates the trained model output using the ground truth provided by the power law. The transparent lines represent the training history of the model during the optimization process. Figures <ref>(b) and <ref>(c) highlight the loss evolution and training response for power laws with rate sensitivities of n=20 and n=100, respectively. As the rate sensitivity increases, a sharper transition from elastic to viscoplastic deformation is observed, making it more challenging for the neural network to fit the data in the transition regime. An increased number of training epochs from 200 to 500 is an indication of such behavior. Note that while the training loss is still reducing slowly after around 500 epochs, to maintain consistency and computational efficiency, we hereafter set the maximum number of epochs N_Epochs=500 for all the simulations.
§.§ Training via phenomenological hardening
Synthetic stress-strain data for isotropic hardening can be generated using either physics-based or phenomenological hardening models. Physics-based hardening laws are more suited for mesoscale or micromechanical simulations because they consider the evolution of dislocation substructures and dislocation densities <cit.> and often require an evolved implementation framework which is computationally intensive. On the other hand, phenomenological hardening models such as Voce <cit.>, Peirce, Asaro, and Needleman (PAN) <cit.> or Johnson-Cook <cit.> are simpler in implementation and thus computationally more efficient for modeling macroscopic behavior where the details pertaining to the underlying microstructure is not considered. In the present work, we use the Johnson-Cook isotropic hardening model with the functional form written as follows <cit.>:
R(r)=[A+Br^n][1+C log(ṙ/ṙ^*)][1-(T^*)^m],
T^*=(T-T_0)/(T_m-T_0),
where A, B, C, n and m are the hardening parameters associated with the Johnson-Cook model. Note that since we are considering deformations at room temperature T_0, the term T^* that includes the effects of melting temperature T_m vanishes. The values corresponding to the Johnson-Cook model parameters for Cu are listed in Table <ref> <cit.>. Here both the dual potential network 𝒩𝒩_ϕ^* (σ, R(r)) and hardening neural network 𝒩𝒩_R(r) are utilized for training. The dual potential network uses the SoftPlus activation function while the hardening neural network 𝒩𝒩_R(r) uses the adaptive logistic activation function obtained by setting α_1=0 and α_2=1 in Eq. <ref>. Figure <ref> shows the loss evolution and trained model. Young's modulus and Poisson's ratio of 130 GPa and 0.34 are used for the elastic response. The loss evolution experiences an abrupt drop to a normalized value of around 5e^-4 and saturates after around 100 epochs, indicating a fast convergence of the proposed framework.
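For reference, the room-temperature Johnson-Cook hardening used to generate the synthetic data can be evaluated as in the sketch below; the parameter values shown are illustrative placeholders, not the calibrated entries of Table <ref>.

import numpy as np

def johnson_cook(r, r_dot, A, B, n, C, r_dot_ref=1e-3):
    # room-temperature Johnson-Cook yield function; the thermal softening bracket is dropped
    return (A + B * r ** n) * (1.0 + C * np.log(r_dot / r_dot_ref))

# Illustrative placeholder parameters (not the calibrated values of the table)
A, B, n, C = 90.0, 292.0, 0.31, 0.025
r = np.linspace(1e-6, 0.02, 100)                      # accumulated viscoplastic strain
R = johnson_cook(r, r_dot=1e-3, A=A, B=B, n=n, C=C)   # quasi-static rate of 1e-3 1/s
print(np.round(R[[0, -1]], 1))                        # initial yield and flow stress at 2% strain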
§.§.§ Extrapolation of flow response to larger strain amplitudes
One of the powerful advantages (and challenges) of algorithms that are developed via neural network is the ability to extrapolate beyond the data available to the networks during the training process. Once a model is trained for a specific set of data, the recovered trained model is able to predict the behavior for any given input within the trained framework and beyond the data observed by the model. Here, we restrict the model during training for predictions up to 0.5% total strain and recover the model and its trained parameters to examine the ability of the model to extrapolate the flow response up to 2% total strain. This benchmark provides us with a train-test configuration consisting of 25% training and 75% testing data. Figure <ref> shows the extrapolations of stress-strain response for different ratios of the mixed activation functions ReLU, adaptive logistic, and adaptive tanh with variable combination weights α_1 and α_2 in Eqs. (<ref>), (<ref>) and (<ref>).
The logistic activation function saturates rapidly at low strain amplitudes, hindering the model from properly capturing the hardening response at higher strain amplitudes. To address this issue, tanh, and ReLU activation functions are combined with logistic activation functions using different weights to predict the flow response beyond the training domain. Note that the addition of the ReLU activation function should be done with caution to avoid excessive hardening behavior shown in Fig. <ref>(a). However, it is worthwhile to mention that sudden increase in strain hardening is well observed in case of deformation twinning in compression tests <cit.> and thus, a mixed activation function with large ReLU weights seems to be a promising approach for capturing the rapid hardening behavior upon twinning formations. Among various configurations shown in Fig. <ref>(a)-(e), a mixed activation function with 80% adaptive tanh and 20% ReLU best predicts the hardening behavior up to 2% strain.
§.§ Training large plastic deformations via experimental data
So far, the capability of the NN-EVP framework in training and extrapolating the flow response at small deformations has been demonstrated. In this section, we elaborate on training the flow response on experimentally measured data at large plastic deformations. One of the challenges involved in training with experimental data is the limitation of stress-strain data points as well as their frequency and form of occurrence through the deformation history. Access to the experimental data reported in the literature usually involves the extraction of data from stress-strain images using digitizing software. Typically, data extraction is based on manual user input via the coordinates representing the image. Creating matched lists of model outputs and measured data at corresponding strain levels, as required for a one-to-one comparison, is therefore challenging. Generating data with equal spacing is also not an option, since adaptive time stepping is required for a computationally efficient framework with fewer time increments that represent the same curve with large deformations up to 10% strain.
Thus, to address this issue and to enable access to the experimental data points at arbitrary strain amplitudes with arbitrary and adaptive time stepping, a nonlinear interpolation preprocessing step via the Scipy <cit.> library is used prior to training. Figure <ref> shows an example of a continuous Scipy interpolation for experimentally obtained data in Ni with uneven spacing and arbitrary strain amplitudes. Once the Scipy interpolation is applied, the stress values corresponding to the desired strain levels are readily evaluated using the obtained interpolator. Figures <ref>(a) and <ref>(b) present the trained models using measured data for small and large deformations up to 1% and 10% strain in Ni. The model shows promising performance in capturing the elasto-viscoplastic transition in small strains as well as prediction of hardening curvature under large deformations. Note that since most of the experimental data existing in the literature ignore the elastic response, an additional step based on the current value of viscoplastic strain is imposed in algorithm <ref> to ignore the elastic response when computing the loss between the training output and measured response.
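A minimal example of this interpolation preprocessing is sketched below; the "experimental" points and the adaptive strain grid are synthetic placeholders.

import numpy as np
from scipy.interpolate import interp1d

# Unevenly spaced "experimental" stress-strain points (placeholder values, stress in MPa)
strain_exp = np.array([0.0, 0.002, 0.01, 0.03, 0.06, 0.10])
stress_exp = np.array([0.0, 150.0, 210.0, 260.0, 300.0, 330.0])

interp = interp1d(strain_exp, stress_exp, kind='cubic')   # continuous interpolant of the measured curve

# Adaptive strain grid mimicking dt_{k+1} = 1.15 * dt_k time stepping at a constant strain rate
dt = 1e-4 * np.cumprod(np.full(30, 1.15))
strain_query = np.minimum(np.cumsum(dt), strain_exp[-1])
stress_target = interp(strain_query)                      # training targets evaluated at arbitrary strains
print(np.round(stress_target[:5], 1))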
§.§ Grain size-aware flow response and Hall-Petch discovery
After validating the ability of the NN-EVP model in training experimental data to large strains with arbitrary data frequency, we take a step further to train the flow response at large deformations as a function of grain size and aim to discover the Hall-Petch relationship without considering a particular functional form. To this end, the measured flow response of Cu as a function of grain size and for average grain sizes of 2.1 μ m, 3.4 μ m, 7.1 μ m and 15 μ m, as shown in Fig. <ref>, is utilized for training the model including the grain size effects. Here, in addition to the dual potential network 𝒩𝒩_ϕ^* (σ, R(r)) and hardening network 𝒩𝒩_R(r), the Hall-Petch neural network 𝒩𝒩_HP(d_grain) is also used in the model. In order to increase the training speed, adaptive time stepping was implemented with Δ t | _t+Δ t=1.15 Δ t | _t , resulting in 50 increments per stress-strain curve. Since the time stepping is adaptive, the training losses corresponding to each data point are scaled proportionally with the time increment to that data point to balance the loss evaluation. This is required due to the uneven data spacing resulting from irregular time steps where the density of training data is much smaller within the large deformation regime above 2% strain.
First, each curve is trained individually and independent of one another to test the ability of the model in training the flow response with varying stress levels. Results corresponding to single-curve fits are shown in Fig. <ref>. Since one curve is trained at a time, it is easier for the model to learn the flow response and thus the convergence is relatively fast. Next, a quaternary configuration (with all four stress-strain responses) including 200 data points is utilized for training. Training on binary and ternary configurations with two and three curves consisting of 100 and 150 data points are also provided in the supplementary data shown in Figs. <ref> and <ref>. Note that since multiple curves with contrasting stress levels and hardening curvatures are trained in parallel, it is more challenging for the model to converge to an optimal solution as evident in the behavior of loss evolution. Also, due to more data points the training process is computationally more demanding.
Ultimately, we aim to discover the Hall-Petch relationship which describes the dependence of the initial yield on the grain size. To this end, after training the model for grain sizes of 2.1 μ m, 3.4 μ m, 7.1 μ m and 15 μ m, the Hall-Petch neural network 𝒩𝒩_HP(d_grain) is recovered to extrapolate the initial flow response for grain sizes below 2.1 μ m and beyond 7.1 μ m with average grain sizes and their corresponding Hall-Petch stress values listed in Table <ref>. The discovered Hall-Petch behavior is shown in Fig. <ref>. These results indicate the ability of the model to capture the expected linear Hall-Petch relationship in log-log space, discovering it without any underlying assumptions on its functional form. Note that regardless of the particular material used herein, the proposed NN-EVP model should be able to predict the Hall-Petch relationship for a wide range of materials with arbitrary grain size and stress-strain response. Extrapolation of the Hall-Petch relationship is a complicated topic that we aim to study in the future.
§ SUMMARY AND CONCLUSIONS
The flow response in metals and metallic alloys differs drastically depending on the average grain size that forms the underlying microstructure. Modeling the viscoplastic flow response of metals is usually concomitant with the assumption of particular constitutive models in the form of a power law and phenomenological hardening formulations. These hardening laws often include parameters that involve manual calibrations with experimental data obtained from uniaxial tests. Machine learning models that can automate such calibrations generally rely on large amounts of data and often require a large number of iterations in order for them to be sufficiently accurate and predictive. For instance, the fitting process via genetic algorithms involves a large number of calls to the black-box model, on the order of 10^4, in order to obtain the flow response <cit.>. To remove the need for these big datasets and the large number of iterations in computationally expensive training, we propose a data-driven elasto-viscoplasticity framework based on neural networks called NN-EVP that leverages PyTorch's high-performance ML library. The proposed approach is tested and trained using both synthetic and experimentally measured uniaxial data in the context of limited data availability.
The developed NN-EVP framework was adopted to train elasto-viscoplastic flow response of metals at both small and large deformations and as a function of grain size. First, rate sensitivity-dependent perfect viscoplasticity was discovered using the power law with no hardening effects. Next, synthetically generated deformation responses via a Johnson-Cook phenomenological hardening law were used to train and test the approach on its ability to extrapolate the flow response at strain amplitudes beyond the observed training data. Next, the framework was applied to train large deformations obtained from experimental tests with limited stress-strain data availability. Finally, simultaneous training of multiple grain size-dependent stress-strain curves enabled us to obtain a grain size-aware flow behavior and discover the Hall-Petch relationship pertaining to the grain size strengthening effects.
The proposed NN-EVP model presented herein takes a further step in the prediction of flow responses in metals and improves the computational efficiency of structure-property-relationship simulations of metallic materials. The findings of the present work provide insights into the versatility and flexibility of data-driven constitutive modeling which can be adapted for a wide range of materials exhibiting complex large deformation behavior. This motivates future work in incorporating the current model into finite elements for efficient full-field modeling of large deformations under arbitrary loading and boundary conditions.
§ DATA AVAILABILITY
Data and Python script supporting the findings of this study are available from the corresponding author upon request.
§ ACKNOWLEDGMENTS
JF, AE and NB gratefully acknowledge support by the Air Force Office of Scientific Research under award number FA9550-22-1-0075.
§ SUPPLEMENTARY DATA
|
http://arxiv.org/abs/2307.03944v1 | 20230708095104 | Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States | [
"Jie Qian",
"Jie Li",
"Shi-Yao Zhu",
"J. Q. You",
"Yi-Pu Wang"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall",
"physics.optics"
] |
Interdisciplinary Center of Quantum Information, Zhejiang Province Key Laboratory of Quantum Technology and Device, Department of Physics, Zhejiang University, Hangzhou 310027, China
Hefei National Laboratory, Hefei 230088, China
Corresponding authors: [email protected]; [email protected]
Light-matter interaction is crucial to both understanding fundamental phenomena and developing versatile applications. Strong coupling, robustness, and controllability are the three most important aspects in realizing light-matter interactions. Topological and non-Hermitian photonics have provided frameworks for robustness and extensive control freedom, respectively. How to engineer the properties of edge states, such as the photonic density of states and scattering parameters, via non-Hermitian engineering while ensuring topological protection has not been fully studied. Here we construct a parity-time-symmetric dimerized photonic lattice and generate complex-valued edge states via spontaneous PT-symmetry breaking. The enhanced strong coupling between the topological photonic edge mode and the magnon mode in a ferromagnetic spin ensemble is demonstrated. Our research reveals the subtle non-Hermitian topological edge states and provides strategies for realizing and engineering topological light-matter interactions.
Enhanced Strong Coupling between Spin Ensemble and non-Hermitian Topological Edge States
Yi-Pu Wang
August 12, 2023
========================================================================================
Introduction.—Topology has evolved as a powerful governing principle for predicting and harnessing the robust propagation of currents in various systems, including condensed matter system <cit.>, acoustics <cit.>, mechanics <cit.> and photonics <cit.>. In topological photonics, a topological invariant ensures robust localization or propagation of electromagnetic waves <cit.>. On the other hand, non-Hermitian photonics <cit.> has also flourished in recent years, not only due to the ubiquitous non-Hermiticity in nature <cit.>, but also because the non-Hermiticity provides additional degrees of freedom to manipulate the wave behaviors. In pursuit of the simultaneous robustness and greater control flexibility, as well as the interest in fundamental research, non-Hermitian topological physics <cit.> has received considerable attention and substantial development. Scientists investigate new paradigms <cit.> and explore potential applications in this interdisciplinary territory <cit.>.
A coupled system can have two forms of non-Hermiticity. One kind is generated when there is asymmetric interaction between the sites, which leads to the non-Hermitian skin effect <cit.>. The other type, which is caused by on-site loss, can lead to intriguing phenomena associated with the parity-time (PT) symmetry. The PT-symmetric systems have received special attention, because they were proved to have real spectra <cit.>. A sequence of studies have studied the topologically protected bound (defect) states in PT-symmetric topological systems <cit.>, where the defect states are real in the PT-symmetry unbroken phase. Moreover, a number of studies have investigated whether topological edge states exist in the PT-symmetric systems <cit.>, concluding that since the edge state is not an eigenstate of the PT operator, an imaginary eigenvalue is obtained along with the spontaneous PT-symmetry breaking. In this case, a non-Hermitian edge state is obtained. We find that these imaginary edge states in the PT-symmetric system are actually topologically protected by the particle-hole symmetry <cit.>. In the one-dimensional (1D) non-Hermitian PT-symmetric Su-Schrieffer-Heeger (SSH) model <cit.>, the chiral symmetry of the system is broken, losing its topological ℤ invariant, but the particle-hole symmetry of the system is preserved and the system owns a topological ℤ_2 invariant. In the presence of perturbations that do not violate the particle-hole symmetry, the real parts of the eigenvalues of the edge modes remain 0, reflecting the topologically protected characteristics. Under this situation, the topological photonic mode with robust properties can be further manipulated by non-Hermiticity, which is highly desirable for investigating light-matter interactions <cit.>.
To investigate the interaction between topological photonic modes and matters <cit.>, we employ the photon-magnon coupling system <cit.>, which has benefits including the flexible tunability and experimental demonstration at room temperature. In this Letter, we use a set of lossy microwave resonators to build 1D non-Hermitian SSH photonic lattices. By coupling a ferromagnetic spin ensemble (FSE) to Hermitian and non-Hermitian SSH chains and monitoring the strength of the coupling between the photonic modes and the magnon mode in the FSE, we verify the topological edge states and bulk states. Non-Hermiticity introduced by the on-site alternating losses breaks the passive PT-symmetry of zero-energy modes and results in two complex-valued edge states, which localize exponentially at the opposite ends of the chain [Fig. <ref>(b)]. Further, the photonic density of state (PDOS) at boundaries is larger than that in the Hermitian case [Fig. <ref>(a)], which strengthens the coupling between the topological photonic mode and the magnon mode. Our experiment demonstrates the potential of manipulating the interaction between topological photonic states and matter by exploiting non-Hermiticity.
System and model.—The SSH chain consists of six unit cells [Figs. <ref>(a) and <ref>(b)], in which each unit contains two split-ring-resonators (SRRs) fabricated on the F4B substrate [Fig. <ref>(a)]. In the experiment, the SRR exhibits a resonance at ω_0/2π=5.62 GHz with an intrinsic loss of γ_0/2π=24.42 MHz, and the topological property is unaltered by the uniform losses along the chain <cit.>. Therefore, SRRs with the same loss can be used to build the Hermitian SSH model. Two neighboring SRRs are separated by staggered spacings to realize the intracell and intercell coupling rates, v and w. Edge states appear in the finite chain when the bulk winding number of the Hermitian Hamiltonian is 𝒲_h=1 <cit.>. The effective Hermitian SSH chain is designed in the topological non-trivial phase (v/2π=216.5 MHz, w/2π=341 MHz) and the Hamiltonian is written as <cit.>:
ℋ_h/ħ=∑_s=1^2N(ω_0-iγ_0)â_s^†â_s+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where â_s^† (â_s) is the photon creation (annihilation) operator of the s-th SRR. The uniform losses of the units only yield all eigenvalues of the chain to have the same imaginary component iγ_0. The eigenvalues of the coupled SRRs are plotted in the complex plane, as shown in Fig. <ref>(c). A pair of zero-energy modes (Re(ω_m=6,7)-ω_0=0, green dots) appear in the band gap (gray area), which are the edge modes. The measured transmission spectrum of the chain is shown in Fig. <ref>(d), where the peaks correspond to the resonances of the eigenmodes. By simulating the field distribution at the edge mode frequency of ω_0/2π=5.62 GHz, we find that the electromagnetic field tends to localize at both edges of the chain, as predicted by wave function distribution <cit.>. In the low-frequency region, the measured spectrum [Fig. <ref>(d), solid line] displays an amplitude deviation from that in the high-frequency region. This is due to the residual dissipative coupling between SRRs <cit.>.
Then, on-site non-Hermiticity is added to the SSH chain. As depicted in Fig. <ref>(a), resistors R_A=0.1 Ω and R_B=2.7 Ω are integrated into odd and even sites of the chain, respectively, which induce alternated losses of γ_A/2π=36 MHz and γ_B/2π=73 MHz. The Hamiltonian becomes <cit.>:
ℋ_nh/ħ= ∑_s∈ X(ω_0-iγ_A)â_s^†â_s+∑_s∈ Y(ω_0-iγ_B)â_s^†â_s
+∑_s=1^2N-2(vâ_sâ_s+1^†+wâ_s+1â_s+2^†),
where X={1, 3, 5, ..., 2N-1}, Y={2, 4, 6, ..., 2N}, and N=6. The integrated resistors shift ω_0/2π to 5.48 GHz, and the hopping rates shift to v/2π=208.5 MHz, and w/2π=335.5 MHz. The alternated losses make the system a passive PT-symmetric one. The spontaneous PT-symmetry breaking occurs in zero-energy modes, resulting in a splitting of the imaginary parts of zero-energy modes, as shown in Fig. <ref>(e). One with a low loss Im(ω_m=6)/2π=40.42 MHz (Edge_1, blue dot) localizes at the left boundary of the chain, and the other with a high loss Im(ω_m=7)/2π=68.58 MHz (Edge_2, red dot) localizes at the right, as schematically shown in Fig. <ref>(b). The bulk Hamiltonian still preserves the PT-symmetry when δγ/2<|w-v|, and δγ=γ_B-γ_A. In this regime, the topological property is still determined by the generalized integer winding number 𝒲_nh <cit.>. 𝒲_nh=1 guarantees the existence of two non-Hermitian topological edge modes.
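The finite-chain spectra discussed above can be reproduced qualitatively with a simple tight-binding calculation; the following numpy sketch uses the parameter values quoted in the text, while the matrix construction itself is our assumption of a standard implementation.

import numpy as np

def ssh_chain(N=6, w0=5.48, v=0.2085, w=0.3355, gamma_A=0.036, gamma_B=0.073):
    # finite SSH chain with alternating on-site losses; all frequencies in GHz
    n_sites = 2 * N
    H = np.zeros((n_sites, n_sites), dtype=complex)
    for s in range(n_sites):
        H[s, s] = w0 - 1j * (gamma_A if s % 2 == 0 else gamma_B)
    for s in range(n_sites - 1):
        H[s, s + 1] = H[s + 1, s] = v if s % 2 == 0 else w    # staggered intra/intercell hopping
    return H

evals, evecs = np.linalg.eig(ssh_chain())
order = np.argsort(np.abs(evals.real - 5.48))                 # the two midgap (edge) modes come first
print("edge modes (GHz):", np.round(evals[order[:2]], 4))     # split imaginary parts after PT breaking
print("edge weight on site 1:", np.round(np.abs(evecs[0, order[:2]])**2, 3))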
Experiment results.—To investigate the edge modes engineered by the non-Hermiticity, we measure the PDOS and linewidths of the edge and bulk modes in both Hermitian and non-Hermitian cases. Notably, conventional detection of the PDOS relies on the near-field radiation <cit.>, but in the non-Hermitian situation, the local gain and loss will diminish its reliability. Using the spin ensemble as a probe, we can directly detect the PDOS. In addition, it allows us to study the strong coherent interaction between the topological photonic modes and magnons.
In the experiment, the spin ensemble employed to couple with the chain is a 1-mm diameter yttrium iron garnet (YIG) sphere. The magnon mode in the sphere interacts with the local photonic modes, with a coupling strength g proportional to ηχ√(nSħω_r/2V) <cit.>, where η≤1 describes the spatial overlap and polarization matching between the photonic mode and the magnon mode, χ is the gyromagnetic ratio, n is the total number of spins, S=5/2 is the spin number of the ground state Fe^3+ ion in YIG, ω_r is the resonance frequency, and V is the photonic mode volume. Consequently, the square of the coupling strength g^2 directly reflects the PDOS at the coupling location. Firstly, we move the YIG sphere to each site (labeled as s, s=1,2,3,...,12) of the Hermitian chain, and obtain the PDOS distribution of the m-th eigenmode by analyzing the transmission spectra. The bias magnetic field is perpendicular to the device plane, and mappings of transmission spectra are measured versus electromagnet current and probe frequency. Figures <ref>(b) and <ref>(e), for instance, show the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The coupling strength between m-th eigenmode of the chain and the magnon mode at the s-th site is defined as g_m,s, which can be obtained by fitting the level repulsion with:
ω_m,s^±=(1/2)[ω̃_n+ω̃_m±√((ω̃_n-ω̃_m)^2+4g_m,s^2)],
where ω̃_n=ω_n-iγ_n and ω̃_m=ω_m-i(γ_m+κ_m) are the complex eigenvalues of the uncoupled magnon mode and the m-th eigenmode of the chain, respectively. γ_n is the total loss rate of the magnon mode, γ_m is the intrinsic loss rate of the m-th eigenmode, and κ_m is the extrinsic loss rate of the m-th eigenmode to the input/output ports <cit.>. Coupling strengths between the magnon mode and edge modes (m=6,7) at site-1 and site-12 are obtained by fitting the level repulsion depicted in Figs. <ref>(b) and <ref>(e), which are g_edge,1/2π=g_edge,12/2π=80 MHz. Similarly, coupling strengths between the magnon mode and bulk mode (m=8) at site-1 and site-12 are obtained as g_bulk,1/2π=g_bulk,12/2π=37 MHz. g_m,s^2 as a function of the site index s are illustrated in Figs. <ref>(c) and <ref>(d), denoted by blue (m=8) and red dots (m=6,7), respectively. The observed g_m,s^2 are in good agreement with the intensity distributions for the wave function |φ_m,s|^2 (gray bar diagram).
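As an illustration of this fitting procedure, the sketch below extracts g from synthetic anticrossing data using Eq. (<ref>) with the loss terms dropped; the magnon dispersion, noise level, and coupling value are placeholders.

import numpy as np
from scipy.optimize import curve_fit

def branches(w_n, w_m, g):
    # real parts of the two hybridized branches of the level-repulsion formula
    root = np.sqrt((w_n - w_m) ** 2 + 4.0 * g ** 2)
    return np.concatenate([0.5 * (w_n + w_m + root), 0.5 * (w_n + w_m - root)])

w_m = 5.48                                          # edge-mode frequency (GHz)
w_n = np.linspace(5.0, 6.0, 41)                     # magnon frequency swept by the bias field (placeholder)
rng = np.random.default_rng(0)
data = branches(w_n, w_m, 0.112) + rng.normal(0.0, 2e-3, 2 * len(w_n))   # synthetic measurement

g_fit, _ = curve_fit(lambda w, g: branches(w, w_m, g), w_n, data, p0=[0.05])
print(f"fitted coupling strength: {g_fit[0] * 1e3:.1f} MHz")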
Then, we couple the spin ensemble to the non-Hermitian SSH chain, as shown in Fig. <ref>(a). Figures <ref>(b) and <ref>(e) display the mappings when the YIG sphere is placed at site-1 and site-12, respectively. The mappings show a similar amount of level repulsion but reflect very different linewidths of the edge modes. Using Eq. (<ref>), the loss of the edge mode at site-1 is fitted to be γ_edge,1/2π=41.1 MHz, which is contributed by the addition of the two edge modes (m=6,7). The relation is γ_edge,s=[Im(ω_m=6)·|φ_6,s|^2+Im(ω_m=7)·|φ_7,s|^2]/(|φ_6,s|^2+|φ_7,s|^2), and the wave functions of the edge modes |φ_m,s|^2 are displayed as the bar diagram in Fig. <ref>(d). Similarly, we get γ_edge,12/2π=67.9 MHz. More interestingly, the coupling strengths between the magnon mode and edge modes at site-1 and site-12 are observed to be g_edge,1/2π=g_edge,12/2π=112 MHz, which is larger than that in the Hermitian case (80 MHz). We plot g_m,s^2 versus site index s for m=8 and m=6, 7 in Figs. <ref>(c) and <ref>(d), respectively. It can be found that the bulk mode remains extended, similar to the Hermitian bulk mode. But, as shown in Fig. <ref>(d), the low-loss edge state (Edge_1) accumulates at the left boundary, while the high-loss edge state (Edge_2) accumulates at the right edge. The introduction of on-site loss does contribute to the increase of PDOS at the boundaries. The mechanism can be interpreted as follows: when the PT-symmetry of the edge states is broken, the energy flow between adjacent resonators is partly blocked <cit.>. The low-loss (high-loss) edge state becomes more localized at the low-loss (high-loss) site, which, as shown in Figs. <ref>(b) and <ref>(a), corresponds to the left (right) boundary of the chain.
It is also intriguing to detect the properties of the non-Hermitian topological edge states from spectroscopic measurements. In the PT-symmetry unbroken phase, the two topological edge states cannot be distinguished via spectroscopic measurement, as shown in Fig. <ref>(a). The absorptivity spectrum A_1, measured when loading microwave power to port 1, is totally coincident with A_2, measured when loading port 2. In the PT-symmetry broken phase, the two topological edge states can be distinguished in the spectra, as shown in Fig. <ref>(b). The spectrum A_1 exhibits the low-loss state with a relatively narrow bandwidth, while the spectrum A_2 reveals the high-loss state.
Finally, we discuss some additional characteristics of the exceptional points (EPs) in the non-Hermitian chain. The dimensionless eigenvalues are defined as β_real+iβ_imag, where β_real=[Re(ω)-ω_0]/(v+w), β_imag=[|Im(ω)|-γ̅]/(v+w), and γ̅=(γ_A+γ_B)/2. In a finite SSH chain, when increasing the non-Hermitian parameter δγ/2(v+w), a series of exceptional points are gradually reached [Figs. <ref>(c) and <ref>(d)]. It can be found that the EP of the edge modes is distinctly separated from the EPs of the bulk modes. The edge modes experience spontaneous PT-symmetry breaking (SPTB) at EP_1, where δγ/2(v+w) is only about 0.02. With the increase of the chain length, the non-Hermiticity needed for SPTB in the edge modes decreases exponentially. In the case of N≫1, any finite δγ will lead to SPTB of the edge modes <cit.>. However, SPTB of the bulk modes requires δγ/2≥|w-v|, i.e., δγ/2(v+w)≥|w-v|/(v+w), which is much larger than 0.02. Additional analysis is provided in the supplementary materials.
Conclusion.—We have implemented the PT-symmetric non-Hermitian topological SSH model with microwave resonators and achieved the control of topological edge states using the on-site non-Hermiticity. Through spontaneous PT-symmetry breaking, we obtain the non-Hermitian edge modes, where the photonic mode densities are enhanced at both ends of the chain. We realize the strong coupling between the edge modes and the magnon mode in both Hermitian and non-Hermitian cases. We experimentally verify that the coupling strength between the non-Hermitian edge states and the spin ensemble is stronger than that in the Hermitian situation. Our research illustrates non-Hermiticity engineered topological edge states and paves a way for studying strong coherent interaction between topological photonic modes and matter.
This work is supported by the National Key Research and Development Program of China (No. 2022YFA1405200), National Natural Science Foundation of China (No. 92265202, No. 11934010, No. U1801661, and No. 12174329), and the Fundamental Research Funds for the Central Universities (No. 2021 FZZX001-02).
99
Burkov-16
A. A. Burkov, Topological semimetals, Nature Materials 15, 1145 (2016).
Hasan-10
M. Z. Hasan and C. L. Kane, Colloquium: Topological insulators, Rev. Mod. Phys. 82, 3045 (2010).
Zhaoju-15
Z. Yang, F. Gao, X. Shi, X. Lin, Z. Gao, Y. Chong, and B. Zhang, Topological Acoustics, Phys. Rev. Lett. 114, 114301 (2015).
Ma-19
G. Ma, M. Xiao and C. T. Chan, Topological phases in acoustic and mechanical systems, Nat. Rev. Phys. 1, 281 (2019).
Yihao-22
H. Xue, Y. Yang, B. Zhang, Topological acoustics, Nature Reviews Materials 7, 974 (2022).
Huber-16
S. D. Huber, Topological mechanics, Nat. Phys. 12, 621 (2016).
Haldane-08
F. D. M. Haldane and S. Raghu, Possible realization of directional optical waveguides in photonic crystals with broken time-reversal symmetry, Phys. Rev. Lett. 100, 013904 (2008).
Wang-09
Z. Wang, Y. Chong, J. D. Joannopoulos, and M. Soljačić, Observation of unidirectional backscattering-immune topological electromagnetic states, Nature 461, 772 (2009).
Lu-14
L. Lu, J. D. Joannopoulos, and M. Soljačić, Topological photonics, Nat. Photon. 8, 821 (2014).
Ozawa-19
T. Ozawa et al., Topological photonics, Rev. Mod. Phys. 91, 015006 (2019).
Blanco-Redondo-18
A. Blanco-Redondo, B. Bell, D. Oren, B. J. Eggleton and M. Segev, Topological protection of biphoton states, Science 362, 568 (2018).
Yang-18
B. Yang et al., Ideal Weyl points and helicoid surface states in artificial photonic crystal structures, Science 359, 1013 (2018).
Klembt-18
S. Klembt et al., Exciton-polariton topological insulator, Nature, 562, 552 (2018).
Feng-17
L. Feng, R. EI-Ganainy, and L. Ge, Non-Hermitian photonics based on parity–time symmetry, Nat. Photon. 11, 752 (2017).
EI-Ganainy-18
R. EI-Ganainy et al., Non-Hermitian physics and PT symmetry, Nat. Phys. 14, 11 (2018).
Longhi-18
S. Longhi, Parity-time symmetry meets photonics: A new twist in non-Hermitian optics, Europhysics Letters 120, 64001 (2018).
Bender-07
C. M. Bender, Making sense of non-Hermitian Hamiltonians, Reports on Progress in Physics 70, 947 (2007).
Ashida-20
Y. Ashida, Z. P. Gong, and M. Ueda, Non-Hermitian physics, Adv. Phys. 69, 249 (2020).
Coulais-21
C. Coulais, R. Fleury, and J. Van Wezel, Topology and broken Hermiticity, Nat. Phys. 17, 9 (2021).
Bergholtz-21
E. J. Bergholtz, J. C. Budich, and F. K. Kunst, Exceptional topology of non-Hermitian systems, Rev. Mod. Phys. 93, 015005 (2021).
Yao-18
S. Yao and Z. Wang, Edge States and Topological Invariants of Non-Hermitian Systems, Phys. Rev. Lett. 121, 086803 (2018).
Yokomizo-19
K. Yokomizo and S. Murakami, Non-Bloch band theory of non-Hermitian systems, Phys. Rev. Lett. 123, 066404 (2019).
CHL-20
C. H. Lee, L. Li, R. Thomale, and J. Gong, Unraveling non-Hermitian pumping: Emergent spectral singularities and anomalous responses, Phys. Rev. B 102, 085151 (2020).
Helbig-20
T. Helbig et al., Generalized bulk–boundary correspondence in non-Hermitian topolectrical circuits. Nat. Phys. 16, 747 (2020).
Xue-20
L. Xiao, T. Deng, K. Wang, G. Zhu, Z. Wang, W. Yi, and P. Xue, Non-Hermitian bulk–boundary correspondence in quantum dynamics, Nat. Phys. 16, 761 (2020).
Zhao-19
H. Zhao et al., Non-Hermitian topological light steering, Science 365, 1163 (2019).
St-Jean-17
P. St-Jean et al., Lasing in topological edge states of a one-dimensional lattice, Nat. Photon. 11, 651 (2017).
Parto-18
M. Parto et al., Edge-Mode Lasing in 1D Topological Active Arrays, Phys. Rev. Lett. 120, 113901 (2018).
Hu-21
B. Hu et al., Non-Hermitian topological whispering gallery, Nature 597, 655 (2021).
Alvarez-18
V. M. Martinez Alvarez, J. E. Barrios Vargas, and L. E. F. Foa Torres, Non-Hermitian robust edge states in one dimension: Anomalous localization and eigenspace condensation at exceptional points, Phys. Rev. B 97, 121401(R) (2018).
Okuma-20
N. Okuma, K. Kawabata, K. Shiozaki, and M. Sato, Topological Origin of Non-Hermitian Skin Effects, Phys. Rev. Lett. 124, 086801 (2020).
Bender-98
C. M. Bender and S. Boettcher, Real Spectra in Non-Hermitian Hamiltonians Having PT Symmetry, Phys. Rev. Lett. 80, 5243 (1998).
Schomerus-13
H. Schomerus, Topologically protected midgap states in complex photonic lattices, Opt. Lett. 38, 1912 (2013)
Malzard-15
S. Malzard, C. Poli, and H. Schomerus, Topologically Protected Defect States in Open Photonic Systems with Non-Hermitian Charge-Conjugation and Parity-Time Symmetry, Phys. Rev. Lett. 115, 200402 (2015).
Weimann-17
S. Weimann et al., Topologically protected bound states in photonic parity-time-symmetric crystals, Nat. Mater. 16, 433-438 (2017).
Stegmaier-21
A. Stegmaier et al., Topological Defect Engineering and PT Symmetry in Non-Hermitian Electrical Circuits, Phys. Rev. Lett. 126, 215302 (2021).
Esaki-11
K. Esaki, M. Sato, K. Hasebe, and M. Kohmoto, Edge states and topological phases in non-Hermitian systems, Phys. Rev. B 84, 205128 (2011).
Hu-11
Y. C. Hu and T. L. Hughes, Absence of topological insulator phases in non-Hermitian PT-symmetric Hamiltonians, Phys. Rev. B 84, 153101 (2011).
Xue-17
L. Xiao, X. Zhan, Z. H. Bian, K. K. Wang, X. Zhang, X. P. Wang, J. Li, K. Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders, P. Xue, Observation of topological edge states in parity–time-symmetric quantum walks, Nature Physics 13, 1117 (2017).
Cheng-22
D. Cheng et al., Truncation-dependent PT phase transition for the edge states of a two-dimensional non-Hermitian system, Phys. Rev. B 105, L201105 (2022).
SM
See Supplementary Materials at ... for device details, Hamiltonian and topological invariant analysis, additional transmission mappings, and the experimental measurement details, which includes Refs. <cit.>.
Su-79
W. P. Su, J. R. Schrieffer and A. J. Heeger, Solitons in Polyacetylene, Phys. Rev. Lett. 42, 1698 (1979).
Gutzler-21
R. Gutzler, M. Garg, C. R. Ast, K. Kuhnke, and K. Kern, Light–matter interaction at atomic scales, Nat. Rev. Phys. 3, 441 (2021).
Ruggenthaler-18
M. Ruggenthaler, N. Tancogne-Dejean, J. Flick, H. Appel, and A. Rubio, From a quantum-electrodynamical light–matter description to novel spectroscopies, Nat. Rev. Chem. 2, 0118 (2018).
Kockum-19
A. F. Kockum, A. Miranowicz, S. De Liberato, S. Savasta, and F. Nori, Ultrastrong coupling between light and matter, Nat. Rev. Phys. 1, 19 (2019).
Kim-21
E. Kim et al., Quantum Electrodynamics in a Topological Waveguide, Phys. Rev. X 11, 011015 (2021).
Huebl-PRL-2013
H. Huebl, C. W. Zollitsch, J. Lotze, F. Hocke, M. Greifenstein, A. Marx, R. Gross, and S. T. B. Goennenwein, High Cooperativity in Coupled Microwave Resonator Ferrimagnetic Insulator Hybrids, Phys. Rev. Lett. 111, 127003 (2013).
Tabuchi-PRL-2013
Y. Tabuchi, S. Ishino, T. Ishikawa, R. Yamazaki, K. Usami, and Y. Nakamura, Hybridizing Ferromagnetic Magnons and Microwave Photons in the Quantum Limit, Phys. Rev. Lett. 113, 083603 (2014).
Zhang-PRL-2014
X. Zhang, C.-L. Zou, L. Jiang, and H. X. Tang, Strongly Coupled Magnons and Cavity Microwave Photons, Phys. Rev. Lett. 113, 156401 (2014).
Tobar-PRApp-2014
M. Goryachev, W. G. Farr, D. L. Creedon, Y. Fan, M. Kostylev, and M. E. Tobar, High-Cooperativity Cavity QED with Magnons at Microwave Frequencies, Phys. Rev. Applied 2, 054002 (2014).
You-npj-2015
D. Zhang, X.-M. Wang, T.-F. Li, X.-Q. Luo, W. Wu, F. Nori, J. Q. You, Cavity quantum electrodynamics with ferromagnetic magnons in a small yttrium-iron-garnet sphere, npj Quantum Information 1, 15014 (2015).
Wang-2019
Y.-P. Wang, J. W. Rao, Y. Yang, P.-C. Xu, Y. S. Gui, B. M. Yao, J. Q. You, and C.-M. Hu, Nonreciprocity and Unidirectional Invisibility in Cavity Magnonics, Phys. Rev. Lett. 123, 127202 (2019).
Wang-2020
Y.-P. Wang and C.-M. Hu, Dissipative couplings in cavity magnonics, Journal of Applied Physics 127, 130901 (2020).
Rameshti-22
B. Z. Rameshti, S. V. Kusminskiy, J. A. Haigh, K. Usami, D. Lachance-Quirion, Y. Nakamura, C. Hu, H. X. Tang, G. E. W. Bauer and Y. M. Blanter, Cavity Magnonics, Physics Reports 979, 1-60 (2022).
Yuan-22
H. Y. Yuan, Y. Cao, A. Kamra, P. Yan, and R. A. Duine, Quantum magnonics: when magnon spintronics meets quantum information science, Physics Reports 965, 1 (2022).
Bellec-13
M. Bellec, U. Kuhl, G. Montambaux, and F. Mortessagne, Tight-binding couplings in microwave artificial graphene, Phys. Rev. B 88, 115437 (2013).
Peng-14
B. Peng, Ş. K. Özdemir, F. Lei, F. Monifi, M. Gianfreda, G. Long, S. Fan, F. Nori, C. M. Bender and L. Yang, Parity-time-symmetric whispering-gallery microcavities, Nat. Phys. 10, 394 (2014).
[email protected]
Theoretische Physik II, Physikalisches Institut, Universität Bayreuth, D-95447 Bayreuth, Germany
We present a hybrid scheme based on classical density functional theory and machine learning for determining the equilibrium structure and thermodynamics of inhomogeneous fluids.
The exact functional map from the density profile to the one-body direct correlation function is represented locally by a deep neural network.
We substantiate the general framework for the hard sphere fluid and use grand canonical Monte Carlo simulation data of systems in randomized external environments during training and as reference.
Functional calculus is implemented on the basis of the neural network to access higher-order correlation functions via automatic differentiation and the free energy via functional line integration.
Thermal Noether sum rules are validated explicitly.
We demonstrate the use of the neural functional in the self-consistent calculation of density profiles.
The results outperform those from state-of-the-art fundamental measure density functional theory.
The low cost of solving an associated Euler-Lagrange equation allows to bridge the gap from the system size of the original training data to macroscopic predictions upon maintaining near-simulation microscopic precision.
These results establish the machine learning of functionals as an effective tool in the multiscale description of soft matter.
Neural functional theory for inhomogeneous fluids: Fundamentals and applications
Florian Sammüller, Sophie Hermann, Daniel de las Heras, and Matthias Schmidt
August 12, 2023
§ INTRODUCTION
The problem with density functional theory (DFT) is that you do not know the density functional.
Although this quip by the late and great Yasha Rosenfeld <cit.> was certainly meant in jest to a certain degree, it does epitomize a structural assessment of classical DFT <cit.>.
As a general formulation of many-body statistical physics, the framework comprises a beautiful and far reaching skeleton of mathematical formalism centered around a formally exact variational minimization principle <cit.>.
In practice however, the theory needs to be fleshed out by approximations of all means conceivable in our efforts to get to grips with the coupled many-body problem that is under consideration.
Specifically, it is the excess (over ideal gas) intrinsic Helmholtz free energy F_exc[ρ], expressed as a functional of the position-resolved density profile ρ(r⃗), which needs to be approximated.
Decades of significant theoretical efforts have provided us with a single exact functional, that for nonoverlapping hard rods in one spatial dimension, as obtained by another hero in the field, Jerry Percus <cit.>.
Nevertheless, useful DFT approximations range from the local density approximation for large scale features which are decoupled from microscopic length scales, to square-gradient functionals with their roots in the 19th century, to the arguably most important modern development, that of the fundamental measure theory (FMT) as kicked off by Rosenfeld in 1989 <cit.> and much refined ever since <cit.>.
FMT is a geometry-based framework for the description of hard sphere systems and it has deep roots in the Percus-Yevick <cit.> and scaled-particle theories <cit.>, which Rosenfeld was able to unify and generalize based on his unique theoretical insights <cit.>.
The realm of soft matter <cit.> stretches far beyond the hard sphere fluid.
FMT remains relevant though in the description of a reference system as used e.g. in studies of hydrophobicity, where the behaviour of realistic water models <cit.> is traced back to the simpler Lennard-Jones fluid, which in turn is approximated via the hard sphere FMT functional plus a mean-field contribution for describing interparticle attraction <cit.>.
Further topical uses of FMT include the analysis of the three-dimensional electrolyte structure near a solid surface <cit.> and the problem of the decay length of correlations in electrolytes <cit.>.
There is a current surge in the use of machine learning techniques in soft matter, e.g. for its characterization <cit.>, engineering of self-assembly <cit.>, structure detection <cit.>, and for learning many-body potentials <cit.>.
Within classical DFT, machine learning was used to address ordering of confined liquid crystals <cit.>, and free energy functionals were obtained for one-dimensional systems from convolutional <cit.> and equation-learning <cit.> networks as well as within a Bayesian inference approach <cit.>.
<cit.> used machine learning to improve the standard mean-field approximation of the excess Helmholtz free-energy functional for the Lennard-Jones fluid.
In nonequilibrium, <cit.> have reported a method to machine-learn the functional relationship of the local internal force for a steady uniaxial compressional flow of a Lennard-Jones fluid at constant temperature.
As prescribed by power functional theory <cit.>, the functional dependence in nonequilibrium not only incorporates the density profile but also the one-body current.
In this work, we return to the problem of describing and predicting the structure and thermodynamics of inhomogeneous equilibrium fluids.
We show that a neural network can be trained to accurately represent the functional dependence of the one-body direct correlation function with respect to the density profile.
While the presented methods are applicable to virtually arbitrary fluids with short-ranged interparticle interactions, we focus here on the well-studied hard-sphere fluid in order to exemplify our framework and to challenge the available highly accurate analytic approaches from liquid integral equation theory and FMT.
Reference data for training and testing the model is provided by grand canonical Monte Carlo (GCMC) simulations that cover a broad range of randomized inhomogeneous environments in planar geometry.
We implement functional calculus on the basis of the trained neural functional to infer related physical quantities and demonstrate their consistency with known literature results both in bulk and in inhomogeneous systems.
In particular, we highlight the accessibility of the fluid pair structure, the determination of free energies and equations of state as well as the validation of thermal Noether sum rules <cit.>.
These results corroborate that the neural functional exceeds its role as a mere interpolation device and instead possesses significant representational power as a genuine density functional for the prediction of nontrivially related physical properties.
We apply the trained neural network in the DFT Euler-Lagrange equation, which enables the self-consistent calculation of density profiles and which hence constitutes a neural-network-based DFT or short neural DFT.
This method alleviates conventional DFT from the burden of having to find suitable analytic approximations while still surpassing even the most profound existing treatments of the considered hard sphere fluid via FMT functionals <cit.> in accuracy.
We further demonstrate the fitness of the method for the straightforward application to multiscale problems.
Neural DFT therefore provides a way to transfer near-simulation microscopic precision to macroscopic length scales, which serves as a technique to predict properties of inhomogeneous systems which far exceed typical box sizes of the original training data.
This work is structured as follows.
The relevant physical background of liquid state theory is provided in Sec. <ref>.
Details of the simulations as well as of the neural network are given in Secs. <ref> and <ref>.
The training procedure and results for the achieved metrics that measure its convergence are presented in Sec. <ref>.
We proceed by testing physical properties of the trained model and use automatic differentiation of the neural network in Sec. <ref> to access pair correlations, which are then compared to bulk results from both the Percus-Yevick theory and from simulations.
The consistency of the neural direct correlation functional to satisfy thermal Noether sum rules is validated in Sec. <ref>, and different ways to obtain the bulk equation of state as well as free energies in inhomogeneous systems are given in Sec. <ref>.
In Sec. <ref>, we show the application of the neural functional to the self-consistent calculation of density profiles via the DFT Euler-Lagrange equation and describe the technical details and conceptual advantages of this neural DFT over analytic approaches.
In Sec. <ref>, the results are compared to those from FMT, and in Sec. <ref>, the relevance of the method for making macroscopic predictions is illustrated for cases of randomized external potential and for sedimentation between hard walls on length scales that far exceed the training simulation box sizes.
We conclude in Sec. <ref> and give an outlook to possible improvements and extensions of the method as well as to its application for different fluid types, in more general geometries and in nonequilibrium.
§ MACHINE LEARNING INTRINSIC CORRELATIONS
§.§ Physical background
We start with the standard relation for the one-body direct correlation function c_1(r⃗) of liquid state theory <cit.>,
c_1(r⃗) = lnρ(r⃗) + β V_ext(r⃗) - βμ,
where r⃗ denotes the spatial position and β = 1 / (k_B T) with the Boltzmann constant k_B and absolute temperature T.
The three terms on the right hand side of Eq. (<ref>) represent respectively the ideal gas contribution, the external potential V_ext(r⃗), and the influence of the particle bath at chemical potential μ.
The logarithm in Eq. (<ref>) is understood as ln[Λ^3 ρ(r⃗)] with the thermal wavelength Λ, which can be set to the particle size σ without any loss of information in the present classical context.
For a prescribed external potential V_ext(r⃗), knowledge of the corresponding equilibrium density profile ρ(r⃗) allows one to compute c_1(r⃗) explicitly via Eq. (<ref>).
This relationship can be viewed as a locally resolved chemical potential balance: the contributions from the ideal gas, k_B T lnρ(r⃗), from the external potential, V_ext(r⃗), and from interparticle interactions, - k_B T c_1(r⃗), add up at each position to μ, which is necessarily uniform throughout an equilibrium system.
However, the notation in Eq. (<ref>) is oblivious to a central result shown by <cit.> in 1979, thereby kicking off a modern theory for the description of inhomogeneous fluids.
For given type of internal interactions, the spatial variation of the function c_1(r⃗) is already uniquely determined by the spatial form of the density profile ρ(r⃗) alone, without the need to invoke the external potential explicitly.
From this vantage point of classical DFT, the dependence of c_1(r⃗) on ρ(r⃗) is not merely pointwise but rather with respect to the values of the entire density profile, which determine c_1(r⃗) at each given position r⃗.
Formally, this relationship is exact <cit.> and it constitutes a functional dependence c_1(r⃗; [ρ]), which is indicated by brackets here and in the following and which is in general nonlinear and nonlocal.
As we will demonstrate, the existence of such a universal functional mapping makes the problem of investigating inhomogeneous fluids particularly amenable to supervised machine learning techniques.
In most formulations of classical DFT, one exploits the fact that the intrinsic excess free energy functional F_exc[ρ] acts as a functional generator such that the one-body direct correlation function is obtained via functional differentiation with respect to the density profile,
c_1(r⃗; [ρ]) = - δβ F_exc[ρ]/δρ(r⃗).
A compact description of standard formulae for the calculation of functional derivatives can be found in Ref. Schmidt2022.
In order to make progress in concrete applications, one typically needs to rely on using an approximate form of F_exc[ρ] for the specific model under consideration, as determined by its interparticle interactions.
DFT is a powerful framework, as using c_1(r⃗; [ρ]) obtained from Eq. (<ref>) with a suitable expression for F_exc[ρ] turns Eq. (<ref>) into an implicit equation for the equilibrium density profile ρ(r⃗).
In the presence of a known form of (r⃗), one can typically solve Eq. (<ref>) very efficiently, allowing ease of parameter sweeps, e.g. for exhaustive phase diagram explorations.
On the downside, F_exc[ρ] and thus also c_1(r⃗; [ρ]) remain approximate, and the development of analytic tools has certainly slowed down over several years if not decades.
Here we proceed differently and bypass the excess free energy functional F_exc[ρ] at first.
Instead, we use a deep neural network to learn and to represent the functional relationship ρ(r⃗) → c_1(r⃗) directly, which has significant advantages both for the generation of suitable training data as well as for the applicability of the model in the determination of fluid equilibria.
This investigation is based on GCMC simulations that serve to provide training, validation and test data.
Discriminating between these three roles of use is standard practice in machine learning and we give further details below.
§.§ Simulation method
Generating the simulation data is straightforward and we use the following strategy, adopted to planar situations where the position-dependence is on a single position variable x while the system remains translationally invariant in the y- and z-direction.
This geometry is highly relevant to identify the physics in planar capillary and adsorption situations and facilitates ease of accurate sampling.
We employ randomized simulation conditions by generating external potentials of the form
V_ext(x) = ∑_n=1^4 A_n sin(2 π n x/L + ϕ_n) + ∑_n V_n^lin(x),
where A_n and ϕ_n are randomly selected Fourier coefficients and phases, respectively, and L is the simulation box length in x-direction.
We choose L = 20 σ, although there is no specific compliance requirement for the neural network (see below), and the lateral box lengths are set to 10σ to minimize finite-size effects.
Periodic boundary conditions apply in all spatial directions.
The sinusoidal terms in V_ext(x) are complemented by up to five piecewise linear functions V^lin(x) = V_1 + (V_2 - V_1) (x - x_1) / (x_2 - x_1) for x_1 < x < x_2 and 0 otherwise, for which the parameters 0 < x_1 < x_2 < L, V_1 and V_2 are again chosen randomly.
Additionally, we explicitly impose planar hard walls in a subset of the simulations by setting V_ext(x) = ∞ for x < x_w/2 and x > L - x_w/2, i.e. near the borders of the simulation domain; the width x_w of the wall is chosen randomly in the interval 1 ≤ x_w / σ≤ 3.
To cover a broad range from dilute to dense systems, the chemical potential is chosen randomly within the range -5 ≤βμ≤ 10 for each respective GCMC simulation run.
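A minimal Python sketch of this randomization protocol is given below; the amplitude ranges of the Fourier modes and linear ramps as well as the fraction of runs with hard walls are illustrative assumptions rather than the exact values used here.

```python
import numpy as np

rng = np.random.default_rng()
L, dx = 20.0, 0.01                        # box length and grid spacing (units of sigma)
x = np.arange(0.0, L, dx)

def random_external_potential():
    """Draw a randomized planar potential beta*V_ext(x) of the form given above."""
    V = np.zeros_like(x)
    for n in range(1, 5):                 # four sinusoidal modes
        A, phi = rng.uniform(-2.0, 2.0), rng.uniform(0.0, 2.0 * np.pi)
        V += A * np.sin(2.0 * np.pi * n * x / L + phi)
    for _ in range(rng.integers(0, 6)):   # up to five piecewise linear ramps
        x1, x2 = np.sort(rng.uniform(0.0, L, size=2))
        V1, V2 = rng.uniform(-2.0, 2.0, size=2)
        mask = (x > x1) & (x < x2)
        V[mask] += V1 + (V2 - V1) * (x[mask] - x1) / (x2 - x1)
    if rng.random() < 0.5:                # subset of runs with bounding hard walls
        xw = rng.uniform(1.0, 3.0)
        V[(x < xw / 2) | (x > L - xw / 2)] = np.inf
    return V

beta_Vext = random_external_potential()
beta_mu = rng.uniform(-5.0, 10.0)         # randomized chemical potential
```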
The observed mean densities range from 0.006 σ^-3 to 0.803 σ^-3, yet smaller and much larger local densities occur due to the inhomogeneous nature of the systems.
In total, 750 such GCMC runs are used, where for given form of V_ext(x) the planar one-body profiles ρ(x) and c_1(x) are obtained.
The former is acquired from straightforward histogram filling and the latter from evaluating Eq. (<ref>) on the basis of the sampled histogram for ρ(x) as well as the known form of V_ext(x) and value of μ for the specific run under consideration.
As Eq. (<ref>) is undefined for vanishing density, we have excluded regions where ρ(x) = 0 such as within the hard walls.
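In code, this evaluation reduces to only a few lines; the sketch below assumes that the sampled density histogram and the corresponding βV_ext(x) and βμ of the run are available, and it masks bins of vanishing density.

```python
import numpy as np

def c1_from_simulation(rho_hist, beta_Vext, beta_mu):
    """Pointwise c_1(x) = ln rho(x) + beta V_ext(x) - beta mu.

    Bins with rho(x) = 0 (e.g. inside hard walls) are returned as NaN and are
    excluded from the training data."""
    rho = np.asarray(rho_hist, dtype=float)
    c1 = np.full_like(rho, np.nan)
    ok = rho > 0.0
    c1[ok] = np.log(rho[ok]) + beta_Vext[ok] - beta_mu
    return c1
```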
By modern standards of computational resources, the workload for the generation of the simulation data is only moderate at a total CPU time of ∼ 10^4 hours.
§.§ Neural network
We use a deep neural network <cit.> to represent the functional map from the density profile to the local value of the one-body direct correlation function at a given point.
That is, instead of the entire function, we construct the network to output only the scalar value c_1(x) for a certain position x when supplied with the surrounding inhomogeneous density.
The relevant section of the density profile comprises the values of ρ(x) in a specified window around a considered location x, as described below.
Despite the locality of the method, access to the entire (discretized) one-body direct correlation profile is immediate via evaluation of the neural network at pertinent positions x across the domain of interest.
Multiple local evaluations of the network remain performant on highly parallel hardware such as GPUs when passing the input accordingly in batches.
A schematic picture of the network architecture is given in Fig. <ref> and is explained in the following.
The functional dependence on the density profile is realized by providing discretized values of ρ(x) on an equidistant grid with resolution Δ x = 0.01 σ.
As c_1(x; [ρ]) depends only on the density profile in the immediate surroundings of a fixed location x, we restrict the input to density values within a sufficiently large window |x - x'| ≤ x_c.
We choose the cutoff x_c = 2.56 σ based on simulation data for the bulk direct correlation function <cit.> and on the evaluation of training metrics for different window sizes x_c.
Increasing the value of x_c further led to no improvement in the performance of the trained neural network.
This behavior is expected from theoretical considerations, as the one-body direct correlation function vanishes quickly for short-ranged pair potentials <cit.>.
We recall that in FMT, x_c = σ by construction.
Note that the choice of c_1(x; [ρ]) as our target functional is not coincidental, but that its quick spatial decay rather is a pivotal characteristic central to the success of our method.
To contrast this, assume that one attempts to model the functional mapping μ_loc(x) = μ - (x) →ρ(x), thereby naively imitating the simulation procedure.
This task poses major challenges due to the long-range nature of density correlations induced by an external potential, which is circumvented in our case by the choice of a more manageable target functional.
The input layer involves 513 nodes and is followed by three fully-connected hidden layers with 512 units each.
The output layer consists of a single node for the scalar value of c_1(x) at the specified location x.
Crucially, continuously differentiable activation functions such as the exponential linear unit or the softplus function are used throughout the network for the realization of nonlinearities.
This leads to substantial improvements particularly when evaluating two-body quantities via automatic differentiation (see Secs. <ref> and <ref>) as compared to the standard rectified linear unit (ReLU).
We attribute this superior performance to the fact that activation functions which are not continuously differentiable and which vanish in certain domain ranges (such as ReLU) reinforce sparsity of the activation output within the hidden layers <cit.>.
While this property is desired in many machine learning tasks (e.g. for classification), it hinders the accurate representation of the functional relation c_1(x; [ρ]) in our case.
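A sketch of such a network in Keras is shown below; the layer widths and the window size follow the description above, while the choice of the softplus activation in every hidden layer is one of the smooth options mentioned in the text.

```python
from tensorflow import keras

def build_c1_network(window_bins=513, hidden_units=512):
    """Fully-connected network mapping a density window to the scalar c_1(x)."""
    inputs = keras.Input(shape=(window_bins,), name="density_window")
    h = inputs
    for _ in range(3):
        # smooth activations are essential for clean automatic differentiation
        h = keras.layers.Dense(hidden_units, activation="softplus")(h)
    outputs = keras.layers.Dense(1, name="c1_value")(h)
    return keras.Model(inputs, outputs)

model = build_c1_network()
```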
The resulting neural functional for the one-body direct correlation function is denoted in the following by c_1^⋆(x; [ρ]) and related quantities which follow from its inference are marked accordingly by a superscript star.
§.§ Training procedure and metrics
The machine learning routines are implemented in Keras/Tensorflow <cit.> and we use the standard Adam <cit.> optimizer for the adjustment of the network parameters in order to fit c_1^⋆(x; [ρ]) against the simulation reference c_1(x).
The problem at hand is a regression task.
Hence, the mean squared error is chosen as a suitable loss function and the mean average error serves as a validation metric.
Since the model shall infer the pointwise value c_1(x) from a density section around a specified location x, see Fig. <ref>, the simulation data cannot be passed as is to the neural network.
Instead, windowed views of the density profile have to be generated prior to the training loop, which correspond to the target value c_1(x) at the center x of the respective window.
A periodic continuation of all simulation profiles is valid due to periodic boundary conditions.
Additionally, we use data augmentation to benefit from the inherent mirror symmetry (i.e. x → -x) of the problem and thus effectively double the number of training data sets.
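The construction of these windowed input-output pairs, including the periodic continuation and the mirror augmentation, can be sketched as follows; the helper name and the padding of 256 bins on either side (matching Δx = 0.01 σ and x_c = 2.56 σ) are stated here for illustration.

```python
import numpy as np

def make_training_pairs(rho, c1, pad_bins=256):
    """Build (density window, c_1 value) pairs from one simulation profile."""
    rho_pad = np.concatenate((rho[-pad_bins:], rho, rho[:pad_bins]))  # periodic continuation
    windows, targets = [], []
    for i in range(len(rho)):
        if np.isnan(c1[i]):               # skip bins where c_1 is undefined (hard walls)
            continue
        windows.append(rho_pad[i:i + 2 * pad_bins + 1])               # 513 bins around x_i
        targets.append(c1[i])
    X, y = np.array(windows), np.array(targets)
    # mirror symmetry x -> -x doubles the training set
    return np.concatenate((X, X[:, ::-1])), np.concatenate((y, y))
```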
As is customary, we separate the independent simulation results prior to performing the machine learning routines: 150 are kept aside as a test set, 150 serve as validation data to monitor training progress and 450 are used for the actual training of the neural network.
Modeling the functional relationship of c_1(x; [ρ]) locally, i.e. inferring pointwise values individually instead of outputting the entire profile at once, has numerous conceptual and practical advantages.
Regarding the feasibility of the neural network in concrete applications, one is free to choose an arbitrary box length L when gathering training data and more importantly to readjust the value of L when using the trained neural network for making predictions (cf. Sec. <ref>).
From a physical point of view, providing only local density information has the merit of already capturing the correlated short-range behavior of c_1(x; [ρ]).
If the neural network were to output the entire one-body direct correlation profile from a given density profile ρ(x) at once, this inherent locality would have to be learned instead, hence leading to a much more elaborate training process.
Lastly, the fine-grained nature of the training data turns out to be highly beneficial from a machine learning perspective.
Note that one can generate 9 · 10^5 input-output pairs from 450 training simulations in the present context (with the values being doubled after data augmentation).
The increased cardinality of the training set enables better generalization of the model and also prevents overfitting, e.g. to the statistical noise of the sampled profiles.
We train the model for 100 epochs in batches of size 256 and decrease the learning rate exponentially by ∼ 5% per epoch from an initial value of 0.001.
This results in a best mean average error of 0.0022 over the validation set, which is of the same order as the estimated average noise of the simulation data for c_1(x).
Therefore, we deem our neural network to possess full representational power of the local functional relationship c_1(x; [ρ]) within the conditions of the provided simulation data.
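The corresponding training setup reduces to standard Keras calls; X_train, y_train, X_val and y_val are assumed to have been assembled from the windowed pairs of the training and validation simulations, and the decay of roughly 5% per epoch is realized via an exponential learning rate schedule.

```python
from tensorflow import keras

steps_per_epoch = len(X_train) // 256
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=steps_per_epoch, decay_rate=0.95)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
              loss="mse",          # mean squared error as training loss
              metrics=["mae"])     # mean average error as validation metric

model.fit(X_train, y_train, validation_data=(X_val, y_val),
          batch_size=256, epochs=100)
```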
§ EXAMINING THE NEURAL CORRELATION FUNCTIONAL
§.§ Two-body bulk correlations
Besides monitoring standard metrics such as the mean average error over a test set, arguably deeper physical insights into the rigorous structure of the statistical mechanics at hand serves for assessing the quality of the neural functional c_1^⋆(x; [ρ]).
We first ascertain that the model gives an accurate representation of the physics of bulk fluids.
Despite the apparent simplicity of this case, this is a highly nontrivial test as the training data solely covered (strongly) inhomogeneous situations.
For this, we investigate the pair structure and aim at implementing the two-body direct correlation functional, which is formally defined as the functional derivative <cit.>
c_2(r⃗, r⃗'; [ρ]) = δ c_1(r⃗; [ρ])/δρ(r⃗').
On the basis of the neural network, we can make use of the powerful automatic differentiation techniques.
This allows to create an immediate analog of Eq. (<ref>) via c_2^⋆(x, x'; [ρ]) = δ c_1^⋆(x; [ρ]) / δρ(x'), where the functional derivative δ / δρ(x') is evaluated by reverse mode automatic differentiation with respect to the input values of the discretized density profile.
In common machine learning frameworks, this requires only high-level code (e.g. in Keras/Tensorflow <cit.>).
The numerical evaluation of c_2^⋆(x, x'; [ρ]) is performant as reverse mode automatic differentiation generates executable code that is suitable for building derivatives with respect to multiple input variables simultaneously.
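In practice, a single row of the two-body function follows from one gradient evaluation of the network output with respect to its input window, as sketched below; dividing by the bin width converts the discrete gradient into the functional derivative.

```python
import numpy as np
import tensorflow as tf

def c2_row(model, rho_window, dx=0.01):
    """c_2*(x, x'; [rho]) for fixed x via reverse-mode autodifferentiation."""
    rho_in = tf.convert_to_tensor(rho_window[None, :], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(rho_in)
        c1_val = model(rho_in)            # scalar c_1*(x; [rho])
    grad = tape.gradient(c1_val, rho_in)[0].numpy()
    return grad / dx                      # functional derivative = gradient / bin width
```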
We obtain the bulk direct correlation function in planar geometry as the special case c̅_2^b(x, ρ_b) = c_2(0, x; [ρ_b]), where we have introduced the bulk density ρ_b(x) = ρ_b = const.
(In the notation, the parametric dependence on ρ_b is dropped in the following.)
Note that c̅_2^b(x) is distinct from the more common radial representation c_2^b(r), as our geometry implies an integration over the lateral directions y and z, i.e.
c̅_2^b(x) = ∫ dy dz c_2^b(r = √(x^2 + y^2 + z^2))
= 2 π∫_x^∞ dr r c_2^b(r),
where the last equality follows from using radial coordinates and substitution.
We commence by performing a Fourier transform of the planar real space representation c̅_2^b(x) and utilize radial symmetry in Fourier space.
This acts as a deconvolution of Eq. (<ref>) and directly yields the radial Fourier (Hankel) transform of c_2^b(r),
c̃_2^b(k) = (4 π/k) ∫_0^∞ dr r sin(kr) c_2^b(r).
The inverse transform is identical to Eq. (<ref>) up to a factor of (2 π)^-3 upon interchanging r and k.
To go further, the bulk Ornstein-Zernike equation <cit.>
c̃_2^b(k) = h̃(k)/[1 + ρ_b h̃(k)]
is used to obtain the total correlation function h̃(k) from c̃_2^b(k) in Fourier space after rearrangement.
Recall that the radial distribution function follows directly via g(r) = h(r) + 1; here h(r) is the real space representation of h̃(k).
The static structure factor S(k) is then given as
S(k) = 1 + ρ_b h̃(k).
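These few steps translate directly into code; the sketch below assumes that the planar profile c̅_2^b(x) has been assembled on a symmetric grid covering its full support (e.g. from the autodifferentiation sketch above evaluated at constant density) and exploits that it is even in x.

```python
import numpy as np

def bulk_structure(x, cbar2, rho_b, k):
    """Planar c2_bar(x) -> radial c2(k), total correlation h(k) and S(k)."""
    # the 1D Fourier transform of the planar profile equals the radial (Hankel)
    # transform of c_2^b(r); for an even profile a cosine transform suffices
    c2k = np.trapz(np.cos(np.outer(k, x)) * cbar2, x, axis=1)
    hk = c2k / (1.0 - rho_b * c2k)        # bulk Ornstein-Zernike relation
    Sk = 1.0 + rho_b * hk                 # static structure factor
    return c2k, hk, Sk
```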
In Fig. <ref>, results of c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) are shown for different bulk densities ρ_b σ^3 = 0.4, 0.7, 0.9.
From our neural functional, we obtain c̅_2^b ⋆(x) = δ c_1^⋆(0; [ρ]) / δρ(x) |_ρ = ρ_b, i.e. the autodifferentiated network is evaluated at spatially constant density ρ_b.
The total correlation function and the static structure factor follow from Eqs. (<ref>) and (<ref>) after having computed c̃_2^b ⋆(k) via a numerical Fourier transform of c̅_2^b ⋆(x).
For comparison, we also depict reference data obtained analytically from the Percus-Yevick theory <cit.> and reproduced from simulation results of <cit.>.
Good agreement is found between simulation and the autodifferentiated neural network, while the Percus-Yevick result shows noticeable deviations in c̅_2^b(x).
The latter overestimates the depth of the core region x < σ and this discrepancy increases for larger bulk densities.
The neural functional yields a clear improvement over the Percus-Yevick theory and shows only marginal differences to the simulation results of Ref. Groot1987 for both the planar real space and the radial Fourier space representation of the two-body direct correlation function.
In h̃(k) and S(k), the severity of the discrepancies of simulation and machine-learning data to the Percus-Yevick results decreases, but a difference is still noticeable in particular for large bulk densities.
A slight mismatch to the simulation reference is observed in the magnitude and phase of the oscillations of the Percus-Yevick static structure factor S_PY(k), and this correction is reproduced very well by the neural functional.
Note that although one arrives at radial representations of the quantities c̃_2^b(k), h̃(k) and S(k) in Fourier space, performing the radial backtransform to real space numerically according to the inverse of Eq. (<ref>) is generally a “notoriously difficult task” <cit.> and is not considered here.
This successful test reveals that, while being trained solely with one-body profiles, the neural functional c_1^⋆(x; [ρ]) contains full two-body information equivalent in bulk to the radial distribution function g(r).
The pair correlations can be accessed via automatic differentiation at low computational cost and they are consistent with known bulk results.
We recall that this is a mere byproduct of the neural network and that no such two-body information has been explicitly incorporated in the training.
More so, Fig. <ref> demonstrates that the bulk quantities c̅_2^b(x), c̃_2^b(k), h̃(k) and S(k) as obtained from c_1^⋆(x; [ρ]) substantially outperform the Percus-Yevick theory and almost attain simulation quality.
In Appendix <ref>, we illustrate that higher-order correlations such as the three-body direct correlation functional c_3^⋆(x, x', x”; [ρ]) follow analogously via nested automatic differentiation.
On this level, differences to FMT results are even more prominent than the deviations to the two-body Percus-Yevick results.
As we will show in Sec. <ref>, the accuracy of predictions from the neural network also holds in inhomogeneous situations, where FMT serves again as an analogous and arguably even more challenging theoretical baseline than the Percus-Yevick bulk theory.
Before doing so, we lay out additional consistency tests and quality assessments that are applicable in inhomogeneous systems.
§.§ Noether sum rules
In order to further elucidate whether c_1^⋆(x; [ρ]) quantitatively reproduces fundamental properties of equilibrium many-body systems, we make use of exact sum rules that follow from thermal Noether invariance <cit.>:
∇ c_1(r⃗) = ∫ dr⃗' c_2(r⃗, r⃗') ∇' ρ(r⃗'),
∫ dr⃗ ρ(r⃗) ∫ dr⃗' ρ(r⃗') ∇ c_2(r⃗, r⃗') = 0.
Both Eqs. (<ref>) and (<ref>) apply in any equilibrated inhomogeneous system regardless of the type of internal interactions.
While the interparticle interaction potential does not appear explicitly in Eqs. (<ref>) and (<ref>), it nevertheless determines the functionals c_1(r⃗; [ρ]) and c_2(r⃗, r⃗'; [ρ]).
Recall that the spatial gradient of the one-body direct correlation function can be identified with the internal equilibrium force profile, f⃗_int(r⃗) = k_B T ∇ c_1(r⃗) <cit.>.
We verify that the neural functional complies with the above sum rules (<ref>) and (<ref>) as follows.
Analogous to Sec. <ref>, we use autodifferentiation to evaluate Eq. (<ref>), but this time retain the full inhomogeneous structure of c_2^⋆(x, x'; [ρ]).
The left hand side of Eq. (<ref>) is obtained straightforwardly from simple evaluation of the neural functional and numerical spatial differentiation.
As input for ρ(x), we use the simulated density profiles of the test set.
Care is required when evaluating the spatial gradients ∇ρ(x), ∇ c_1^⋆(x; [ρ]) and ∇ c_2^⋆(x, x'; [ρ]) due to the amplification of undesired noise, which we reduce by applying a low-pass filter after having taken the numerical derivatives.
The volume integrals reduce in planar geometry to ∫ dr⃗ = A ∫ dx, where A is the lateral system area.
In Fig. <ref>, three typical profiles for the left and right hand side of Eq. (<ref>) are shown.
In all three systems both sides of the equation coincide up to numerical noise due to the required spatial derivatives.
Additionally, we define errors via scalar deviations from equality in Eqs. (<ref>) and (<ref>) respectively as
e_1 = ‖∇ c_1(x) - A ∫ dx' c_2(x, x') ∇' ρ(x') ‖_∞,
e_2 = A^2 ∫ dx ρ(x) ∫ dx' ρ(x') ∇ c_2(x, x'),
where ‖·‖_∞ denotes the maximum norm.
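A discretized evaluation of these error measures may look as follows; the matrix c2 is assumed to have been assembled row-wise from the autodifferentiation sketch above (with entries beyond the window set to zero), the gradients are smoothed as described, and the lateral area factor must be chosen consistently with the normalization of c2.

```python
import numpy as np

def noether_errors(x, rho, c1, c2, area):
    """Scalar deviations e_1 and e_2 from the thermal Noether sum rules."""
    dx = x[1] - x[0]
    grad_rho = np.gradient(rho, dx)
    grad_c1 = np.gradient(c1, dx)
    grad_c2 = np.gradient(c2, dx, axis=0)         # gradient w.r.t. the first argument
    rhs = area * dx * (c2 @ grad_rho)             # right hand side of the first sum rule
    e1 = np.max(np.abs(grad_c1 - rhs))
    e2 = area**2 * dx**2 * (rho @ grad_c2 @ rho)  # double integral of the second sum rule
    return e1, e2
```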
Panels (a) and (b) of Fig. <ref> depict results for e_1 and e_2 as a function of the mean density ρ̅ = ∫ dr⃗ ρ(r⃗) / V for all 150 density profiles of the test set, where V denotes the volume of the system.
The small magnitudes of the observed error values indicate that the neural network satisfies the Noether identities (<ref>) and (<ref>) to very high accuracy.
Outliers can be attributed mostly to the moderate numerical noise of the spatial gradients, see panel (III) in Fig. <ref>, and are no hindrance in practical applications of the neural functional.
This confirmation demonstrates that our method transcends the neural network from a mere interpolation device of the simulation training data to a credible standalone theoretical object.
The fact that one is able to carry out consistent and performant functional calculus indeed renders c_1^⋆(x; [ρ]) a neural-network-based density functional.
Besides functional differentiation, we show next that functional line integration acts as the inverse operation and provides access to the corresponding free energy.
Appendix <ref> gives further insight into the symmetry properties of c_2^⋆(x, x'; [ρ]), which serve as a prerequisite for the existence of a generating excess free energy functional ^⋆[ρ]; we recall Eq. (<ref>).
§.§ Equation of state and free energy
Although the machine learning procedure operates on the level of the one-body direct correlation function, the excess free energy F_exc[ρ] is accessible by functional line integration <cit.>:
β F_exc[ρ] = - ∫_0^1 dα ∫ dr⃗ ρ(r⃗) c_1(r⃗; [ρ_α]).
Here, ρ_α(r⃗) = αρ(r⃗) is a sequence of density profiles that are linearly parametrized by α in the range 0 ≤α≤ 1.
The limits are ρ_0(r⃗) = 0 such that F_exc[0] = 0, and ρ_1(r⃗) = ρ(r⃗), which is the target density profile that appears as the functional argument on the left hand side of Eq. (<ref>).
Other parametrizations of ρ_α(r⃗) are conceivable but change the concrete form of Eq. (<ref>).
On the basis of c_1^⋆(x; [ρ]), we implement Eq. (<ref>) via β F_exc^⋆[ρ] = - A ∫_0^1 dα ∫ dx ρ(x) c_1^⋆(x; [ρ_α]) and evaluate the integrals numerically; as before A denotes the lateral system area.
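A sketch of this line integration is given below; the helper c1_profile evaluates the network over an entire (periodically continued) profile by batched prediction and is reused in later sketches, and the number of α integration points is an illustrative choice.

```python
import numpy as np

def c1_profile(model, rho, pad_bins=256):
    """Evaluate c_1*(x; [rho]) on the whole periodic grid by batched prediction."""
    rho_pad = np.concatenate((rho[-pad_bins:], rho, rho[:pad_bins]))
    windows = np.stack([rho_pad[i:i + 2 * pad_bins + 1] for i in range(len(rho))])
    return model.predict(windows, verbose=0)[:, 0]

def excess_free_energy(model, rho, dx, area, n_alpha=32):
    """beta F_exc[rho] from functional line integration along rho_alpha = alpha*rho."""
    alphas = np.linspace(0.0, 1.0, n_alpha)
    integrand = [area * dx * np.sum(rho * c1_profile(model, a * rho))
                 for a in alphas]
    return -np.trapz(integrand, alphas)
```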
We first return to bulk systems and illustrate in the following three different routes towards obtaining the bulk equation of state from the neural network.
For this, we introduce the excess free energy density as ψ_b(ρ_b) = F_exc[ρ_b] / V, where V is the system volume.
From the neural functional, the excess free energy density ψ_b^⋆(ρ_b) can be acquired via F_exc^⋆[ρ_b] from functional line integration along a path of bulk densities according to Eq. (<ref>).
Alternatively and equivalently, one can simply evaluate the neural direct correlation functional at bulk density ρ_b and due to translational symmetry at arbitrary location (e.g. x = 0) such that c_1^b ⋆ = c_1^⋆(0; [ρ_b]).
Simplifying Eq. (<ref>) in bulk reveals that
ψ_b^⋆'(ρ_b) = - k_B T c_1^b ⋆,
where the prime denotes the derivative with respect to the bulk density argument.
The excess free energy density ψ_b^⋆(ρ_b) follows from ordinary numerical integration across bulk densities up to the target value ρ_b.
The numerical accuracy to which both routes coincide serves as a further valuable consistency test.
Additionally, one obtains the bulk pressure P(ρ_b) from the excess free energy density via
P(ρ_b) = ( ψ_b^'(ρ_b) + k_B T ) ρ_b - ψ_b(ρ_b).
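The second of these routes, evaluating c_1^b ⋆ on constant density windows and integrating βψ_b^⋆' = -c_1^b ⋆ over the density before applying the pressure relation above, is sketched below; the density grid and the maximal bulk density are illustrative choices.

```python
import numpy as np

def bulk_c1(model, rho_b, window_bins=513):
    """Scalar c_1^b* of the homogeneous fluid at bulk density rho_b."""
    window = np.full((1, window_bins), rho_b, dtype=np.float32)
    return float(model(window).numpy()[0, 0])

def bulk_equation_of_state(model, rho_b_max=0.9, n_rho=200):
    """beta P(rho_b) from beta psi_b'(rho_b) = -c_1^b* and the pressure relation."""
    rho_grid = np.linspace(0.0, rho_b_max, n_rho)
    c1b = np.array([bulk_c1(model, r) for r in rho_grid])
    # cumulative trapezoidal integration of -c_1^b* yields beta psi_b(rho_b)
    beta_psi = np.concatenate(([0.0], np.cumsum(
        -0.5 * (c1b[1:] + c1b[:-1]) * np.diff(rho_grid))))
    beta_P = (1.0 - c1b) * rho_grid - beta_psi
    return rho_grid, beta_P
```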
The pressure is equally accessible from a further route which incorporates previous results for the bulk pair structure via their low-wavelength limits according to <cit.>
β ∂ P/∂ρ_b|_T = β/(ρ_b χ_T) = 1 - ρ_b c̃_2^b(0) = 1/[1 + ρ_b h̃(0)] = 1/S(0),
where one can identify the isothermal compressibility χ_T = ρ_b^-1 (∂ρ_b / ∂ P)_T.
From Eq. (<ref>), P(ρ_b) is obtained by evaluation of either of the bulk correlation functions (see Sec. <ref>) in Fourier space at k = 0 for different bulk densities and by subsequent numerical integration towards the target value of ρ_b.
We compare the results in Fig. <ref>, where the equation of state P^⋆(ρ_b) of the neural network was acquired from functional line integration across bulk systems, cf. Eq. (<ref>), from evaluation of one-body bulk correlation values c_1^b ⋆, cf. Eq. (<ref>), and from the low-wavelength limit of two-body bulk correlations, cf. Eq. (<ref>).
One finds that the results of all three routes are consistent with each other and that they match very well the highly accurate Carnahan-Starling equation of state <cit.>.
A slight deviation can be noticed when evaluating P^⋆(ρ_b) via Eq. (<ref>), which constitutes the most indirect route detouring to two-body correlations.
This may reflect the small discrepancy of the neural functional to simulation results (cf. Fig. <ref>) and the sensitivity of the low-wavelength limit of the static structure factor to remaining finite size effects <cit.>.
As already observed for the bulk pair structure in Sec. <ref>, the neural network also clearly outperforms the Percus-Yevick theory for the bulk fluid equation of state.
We recall again that neither bulk information nor data for free energies or pressures was given explicitly in the training of the neural network.
Additionally, we demonstrate in Appendix <ref> that the neural functional is fit for the application of dimensional crossover <cit.> in order to obtain the bulk equation of state for the two-dimensional hard disk fluid within a reasonable range of packing fractions.
For a concise comparison of free energies in inhomogeneous situations, additional reference data has to be acquired from simulations.
In our grand canonical setting, thermodynamic integration <cit.> with respect to the chemical potential can be used to measure the grand potential according to
Ω[ρ] = - ∫_-∞^μ dμ' ⟨ N ⟩.
Here, the integration starts from an empty system with Ω[0] = 0 and traverses the chemical potential up to the target value μ.
One needs to measure the mean number of particles ⟨ N ⟩ in a sufficient number of simulations with intermediate chemical potentials -∞ < μ' ≤μ to evaluate Eq. (<ref>) numerically.
The excess free energy then follows directly from
F_exc[ρ] = Ω[ρ] - k_B T ∫ dr⃗ ρ(r⃗) (lnρ(r⃗) - 1)
- ∫ dr⃗ ρ(r⃗) (V_ext(r⃗) - μ).
Thermodynamic integration according to Eq. (<ref>) has been performed for 22 systems of the test set to yield reference values F_exc^sim for the excess free energy via Eq. (<ref>).
The systems were selected to cover a broad range of excess free energy values, and FMT results for F_exc were used as a further theoretical estimate for this selection.
In Tab. <ref> and Fig. <ref>, we show errors of F_exc with respect to the quasi-exact simulation values when calculating the excess free energy via Rosenfeld and White Bear MkII FMT as well as from functional line integration according to Eq. (<ref>) of the neural functional.
For both FMT methods, a DFT minimization (cf. Sec. <ref>) is performed to yield a self-consistent density profile ρ(x), which serves as input to the respective analytic FMT expression for F_exc[ρ].
Hence we compare consistently equilibrium states (according to the respective theory) corresponding to the same form of the external potential.
The comparison reveals that the neural functional significantly outperforms Rosenfeld FMT and still yields slightly more accurate values for the excess free energy than the very reliable White Bear theory.
Regarding the above described bulk results for the free energy, this behavior is both consistent and expected, as the Rosenfeld and White Bear MkII functionals can be associated with the Percus-Yevick compressibility and Carnahan-Starling bulk equations of state respectively.
Still, the test in inhomogeneous systems is a more rigorous one than in bulk, as the full nonlocal functional representation is invoked when providing c_1^⋆(x; [ρ]) with an inhomogeneous density profile as input.
Given that the functional line integration of c_1^⋆(x; [ρ]) via Eq. (<ref>) is practically immediate, one can deem F_exc^⋆[ρ] itself a corresponding neural functional for the excess free energy that enables a full description of the thermodynamics of inhomogeneous fluids to high accuracy.
As we present below, this quantitative precision is preserved when applying the neural functional in a predictive manner in the self-consistent calculation of density profiles.
§ PREDICTING INHOMOGENEOUS FLUIDS VIA NEURAL DFT
§.§ Going beyond analytic approximations
In the previous section, the trained model has been put to test by deriving related quantities such as c_2^⋆(x, x'; [ρ]) from autodifferentiation and F_exc^⋆[ρ] from functional line integration in order to assess its performance against analytic and numerical reference results.
We now turn to the application of the neural functional c_1^⋆(x; [ρ]) in the context of the self-consistent determination of density profiles according to the DFT Euler-Lagrange equation.
This is achieved by rearranging Eq. (<ref>) to the standard form <cit.>
ρ(r⃗) = exp(-β (V_ext(r⃗) - μ) + c_1(r⃗; [ρ])).
A fixed-point (Picard) iteration with mixing parameter α can be used to determine the density profile from Eq. (<ref>) according to
ρ(r⃗) ← (1 - α) ρ(r⃗)
+ αexp(-β (V_ext(r⃗) - μ) + c_1(r⃗; [ρ])).
The degree of convergence is determined from the remaining difference of right and left hand side of Eq. (<ref>).
With the trained neural functional at hand, one can evaluate the one-body direct correlation function in Eq. (<ref>) via the surrogate c_1^⋆(x; [ρ]) in each iteration step.
In the following, the use of c_1^⋆(x; [ρ]) in this context will be referred to as neural DFT.
We note two minor technical points concerning the use of the neural functional in the Picard iteration.
It was observed that a conservative choice of α is necessary during the first few iterations to ensure numerical stability.
After this burn-in, the mixing parameter can be set to usual values (e.g. α = 0.05).
Furthermore, the convergence criterion has to be relaxed as compared to typical choices in analytic DFT methods due to the remaining intrinsic uncertainty of c_1^⋆(x; [ρ]).
The mean average error after training, cf. Sec. <ref>, provides an estimate for the expected relative uncertainty of the density profile according to Eq. (<ref>).
Depending on the specific problem, the error might not decrease any further than that during the iteration (<ref>).
Neither of these points caused any practical hindrance in applications.
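A compact sketch of this neural DFT iteration is given below; it reuses the profile-wide evaluator c1_profile from above, and the initial guess, the burn-in mixing parameter and the convergence threshold are illustrative choices in the spirit of the remarks above.

```python
import numpy as np

def neural_dft(model, beta_Vext, beta_mu, alpha=0.05, alpha_burn=0.005,
               burn_in=20, tol=1e-5, max_iter=10_000):
    """Self-consistent density profile via Picard iteration of the Euler-Lagrange equation."""
    rho = np.where(np.isfinite(beta_Vext), 0.5, 0.0)   # simple initial guess
    for it in range(max_iter):
        rho_new = np.exp(-(beta_Vext - beta_mu) + c1_profile(model, rho))
        if np.max(np.abs(rho_new - rho)) < tol:        # remaining self-consistency error
            break
        a = alpha_burn if it < burn_in else alpha      # conservative mixing at first
        rho = (1.0 - a) * rho + a * rho_new
    return rho
```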
The treatment of Eq. (<ref>) in neural DFT is conceptually not different than in standard DFT methods.
However, the model c_1^⋆(x; [ρ]) relieves the theory from being restricted by the available approximations for the one-body direct correlation function as generated from analytic expressions of the excess free energy functional F_exc[ρ] via Eq. (<ref>).
We emphasize that, unlike in previous work <cit.>, no analytic ansatz had to be provided and that our method is generic for the determination of a suitable functional from a given model Hamiltonian, thus indeed constituting a “machine learning black box” <cit.> regarding the training procedure.
However, in contrast to a closed black box, the inner workings of the resulting neural correlation functional can be inspected very thoroughly via the neural functional calculus laid out above.
Also note that, while the model works at the level of the one-body direct correlation function, the free energy is readily available from functional line integration, cf. Sec. <ref>.
Lastly, we point out that c_1^⋆(x; [ρ]) captures the entirety of the intrinsic correlations and that further improvements are conceivable by only learning differences to an analytic reference functional.
To demonstrate the capabilities of our method, we refrain from this route and show that the trained neural functional alone already exceeds the accuracy of FMT.
§.§ Comparison to FMT
In the following, we benchmark the self-consistent inhomogeneous density profiles obtained via neural DFT against FMT results.
For this comparison, the Rosenfeld <cit.> and White Bear MkII <cit.> FMT functionals are considered and the simulated density profiles are taken as quasi-exact reference data.
The FMT functionals are the most profound analytic description of the hard sphere fluid with the White Bear MkII theory being the state-of-the-art treatment of short-ranged intermolecular repulsion in classical DFT.
Nevertheless, measurable and systematic deficiencies still remain, e.g. in highly correlated systems <cit.>.
We point the reader to Ref. Roth2010 for a thorough account of FMT and to Ref. Sammueller2023 for a very recent quantitative assessment.
Note that the tensorial weights of <cit.> to describe hard sphere freezing are not included in our investigation.
The comparison is set up as follows.
For each hard sphere system of the test set (see Sec. <ref>), we determine the density profile ρ(x) from the Rosenfeld and White Bear MkII FMT functionals as well as from c_1^⋆(x; [ρ]) via the Picard iteration (<ref>) of the Euler-Lagrange Eq. (<ref>).
For this, only the known form of the external potential V_ext(x) and the value μ of the chemical potential are prescribed.
As reference density profiles are available from GCMC simulations, we can evaluate the error Δρ(x) of each of the DFT results relative to the simulation data for ρ(x).
From here, different scalar metrics for the quantitative agreement of self-consistent DFT profiles and simulation results are considered.
In Fig. <ref>, both global and local error measures for the deviation of FMT as well as neural DFT to simulation data are depicted.
For the assessment of the global error, we show the L_2-norm ‖Δρ‖_2 of the discrepancy to the reference profile, which is normalized by the mean density ρ̅ of each system respectively.
As the test data covers very dilute to very dense systems, this relative global error measure is plotted as a function of ρ̅ to discern the behavior with respect to varying global average density.
Similarly, we define an upper estimate for the relative local error by evaluating the maximum norm ‖Δρ‖_∞ of the density deviation divided by the maximum value ‖ρ‖_∞ of the GCMC density profile.
This quantity is resolved against the maximum ‖ρ‖_∞ of the respective inhomogeneous density, thus enabling the detection of local discrepancies, e.g. in the vicinity of maxima and discontinuities of the density profile.
One recognizes that neural DFT yields substantially better results than the FMT functionals with regard to both error measures.
Compared to the Rosenfeld results, both the global and the local error is decreased by approximately an order of magnitude.
Surprisingly, even the White Bear MkII functional is not able to match the accuracy of the neural DFT, which is noticeable especially for large values of ρ̅ and of ‖ρ‖_∞.
§.§ Simulation beyond the box
A particular advantage of the local nature of the neural functional c_1^⋆(x; [ρ]) is its applicability to systems of virtually arbitrary size.
As explained in Sec. <ref>, it is sufficient to provide the density profile within a rather narrow window as input to the neural network to infer the value of the one-body direct correlation function at the center of the density section.
The model c_1^⋆(x; [ρ]) can therefore be used directly in the Euler-Lagrange Eq. (<ref>) for the prediction of planar systems of arbitrary length.
Due to the low computational demands of solving this equation self-consistently, this method is suitable even in multiscale problems where macroscopic length scales compete with and are influenced by microscopic correlations and packing features.
Although one could argue that analytic DFT methods already account for such tasks, importantly the neural functional c_1^⋆(x; [ρ]) acts as a drop-in replica of the (almost) simulation-like description of the intrinsic correlations.
Therefore, neural DFT facilitates to fuse simulation data with common DFT methods, thus providing a means to “simulate beyond the box”.
Simulation beyond the box is demonstrated in Fig. <ref>, where a system with a length of 1000 σ is considered; the numerical grid size remains unchanged at 0.01 σ.
Our setup implies that for colloids of, say, size σ = 1 μm, we retain a spatial resolution of 10 nm across the entirety of a system of macroscopic size 1 mm.
As a demonstration, similar to the strategy in Sec. <ref>, a sequence of spatially connected randomized potentials is generated, and the chemical potential is set here to μ = 0.
Using c_1^⋆(x; [ρ]), we obtain the corresponding density profile straightforwardly from the simple iteration scheme (<ref>).
The computational cost for the determination of ρ(x) with neural DFT is negligible as compared to an analogous many-body simulation, which is hardly feasible on such length scales.
A second example, which is arguably more relevant from a physical point of view <cit.>, is given in Fig. <ref>, where we show the sedimentation behavior of the hard sphere fluid as obtained with neural DFT.
For this, a local chemical potential μ_loc(z) = μ - V_ext(z) that decreases linearly with respect to the height z is imposed in a system which is bounded from the bottom (z = 0) and the top (z = 1000 σ) by hard walls.
The spatial variation of μ_loc(z) is chosen small enough to enable thermal diffusion across the whole sedimentation column and to yield locally an almost bulk-like behavior except near the upper and lower hard walls.
The method reproduces both the highly correlated nature of ρ(z) in the vicinity of the walls as well as its intermediate behavior within the sedimentation column, which follows closely the bulk equation of state (see Sec. <ref>), as one would expect within a local density approximation <cit.>.
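Setting up such a multiscale calculation only requires assembling the external potential on the larger grid and reusing the iteration sketched above; the slope of the gravitational contribution and the effective wall width below are assumed for illustration.

```python
import numpy as np

H, dz = 1000.0, 0.01                           # column height and grid (units of sigma)
z = np.arange(0.0, H, dz)
beta_mu = 0.0
beta_Vext = 2.0 * z / H                        # weak linear decrease of mu_loc(z) (assumed slope)
beta_Vext[(z < 0.5) | (z > H - 0.5)] = np.inf  # bounding hard walls at bottom and top

rho_column = neural_dft(model, beta_Vext, beta_mu)
```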
§ CONCLUSION AND OUTLOOK
In this work, we have outlined and validated a machine learning procedure for representing the local functional map from the density profile to the one-body direct correlation function via a neural network.
The resulting neural functional was shown to be applicable as a powerful surrogate in the description of inhomogeneous equilibrium fluids.
This was demonstrated for the hard sphere fluid, where we have used GCMC simulations in randomized inhomogeneous planar environments for the generation of training, validation and test data.
Density and one-body direct correlation profiles followed respectively from direct sampling and from evaluation of Eq. (<ref>).
DFT elevates the role of the one-body direct correlation function c_1(x) to that of an intrinsic functional c_1(x; [ρ]) depending on the density profile ρ(x) but being independent of the external potential.
We exploited this fact in the construction of our neural network, which takes as input a local section of the discretized density profile around a fixed location x and outputs the value of the one-body direct correlation functional c_1(x; [ρ]) at that specific location.
Establishing a pointwise inference of c_1(x; [ρ]) instead of trying to represent the global functional mapping of the entire one-body profiles comes with various advantages, such as independence of the box size, the correct description of the short-range behavior of c_1(x; [ρ]), and a very significant improvement of training statistics.
The nonlinear and nonlocal functional relationship was realized by fully-connected hidden layers with smooth activation functions and a standard supervised training routine was used.
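As an illustration, such a pointwise architecture can be sketched as follows; the window size, layer widths, activation choice and optimizer settings below are placeholder values of ours and not the exact hyperparameters used for the results above.

```python
import torch
import torch.nn as nn

class NeuralC1(nn.Module):
    """Pointwise map from a discretized density window around x to c1(x; [rho])."""
    def __init__(self, window_bins=401, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_bins, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 1),
        )

    def forward(self, rho_window):
        return self.net(rho_window).squeeze(-1)

model = NeuralC1()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
# training loop (not shown): minimize loss_fn(model(rho_windows), c1_targets)
# over batches of simulated (density window, sampled c1) pairs
```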
The achieved mean average error over the test set was of the same order of magnitude as the noise floor of the simulations, thus being indicative of full representational power of the neural correlation functional within the considered simulation data.
Whether the quality of the model can be improved further by performing more extensive sampling to reduce the statistical noise of the simulation profiles remains to be investigated in the future.
Additionally, active and reinforcement machine learning techniques could be useful for interleaving the training and simulation process, thereby guiding the generation of reference data in order to explore the space of inhomogeneous systems more efficiently and exhaustively.
The neural functional was put to test by verifying numerous physical relations in bulk and in inhomogeneous systems.
In particular, it was shown that the two-body direct correlation functional c_2(x, x'; [ρ]) as well as higher-order correlations are accessible from the model via automatic differentiation.
In bulk, the pair structure as described by the neural network significantly outperforms the Percus-Yevick theory and is even able to compete with simulation results <cit.>, although no bulk data was used during training.
In inhomogeneous situations, the conformance of the neural functional to the thermal Noether sum rules (<ref>) and (<ref>) as well as to spatial symmetry requirements holds to high accuracy.
The excess free energy [ρ] is readily and efficiently available via functional line integration of the model according to Eq. (<ref>) and the results agree with those obtained from simulations.
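Conceptually, the functional line integration amounts to a double quadrature, β F_exc[ρ] = -∫_0^1 dα ∫ dx ρ(x) c_1(x; [αρ]), up to the vanishing zero-density reference. A schematic numerical version, with c1_of_rho standing in for the neural evaluation of the full c_1 profile, might look as follows.

```python
import numpy as np

def excess_free_energy(c1_of_rho, rho, dx, n_alpha=51):
    """beta F_exc[rho] = -int_0^1 da int dx rho(x) c1(x; [a rho]),
    evaluated with trapezoidal quadratures (per unit area in planar geometry)."""
    alphas = np.linspace(0.0, 1.0, n_alpha)
    inner = [np.trapz(rho * c1_of_rho(a * rho), dx=dx) for a in alphas]
    return -np.trapz(inner, alphas)
```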
The bulk equation of state can be acquired consistently from various routes and its accuracy is comparable to the Carnahan-Starling result.
Dimensional crossover is feasible for the calculation of the bulk equation of state for the two-dimensional hard disk system.
Arguably the most important consequence of the neural functional framework is the applicability of c_1^⋆(x; [ρ]) in the self-consistent calculation of density profiles by solving the Euler-Lagrange Eq. (<ref>) of classical DFT.
As the one-body direct correlation function is faithfully represented by the neural network, one is exempted from having to find analytic approximations for c_1(x; [ρ]) or for its generating functional [ρ].
Although FMT provides such approximations for the hard sphere fluid with high precision, we could demonstrate that our neural functional outperforms both the Rosenfeld <cit.> as well as the White Bear MkII <cit.> functional.
For this, Eq. (<ref>) was solved self-consistently for all 150 randomized local chemical potentials of the test set to obtain ρ(x), where c_1(x; [ρ]) was given either analytically by FMT or evaluated via c_1^⋆(x; [ρ]).
The comparison of the results to the simulated density profiles reveals that neural DFT yields global and local errors that are up to an order of magnitude lower than those of FMT.
Furthermore, due to the flexibility that comes with the local functional mapping, the neural network could be used as a means to “simulate beyond the box”.
That is, while the training was based solely on simulation data from systems of manageable size, the resulting model c_1^⋆(x; [ρ]) is directly applicable for predictions on much larger length scales.
We demonstrated this by imposing a spatial sequence of randomized external potentials on a length of 1000 σ.
While the explicit numerical simulation of such a system is comparatively cumbersome, neural DFT offers a way to achieve close to simulation-like accuracy at low computational effort.
Furthermore, we have considered a sedimentation column with a height of 1000 σ that is bounded by hard walls.
Neural DFT is capable of both microscopically resolving the adsorption at the walls and efficiently capturing the long-range density decay with increasing height.
The presented fusion of machine learning and DFT can therefore be another useful technique to make headway in the multiscale description of soft matter <cit.>.
On the opposite side of the length scale spectrum, it might also be worthwhile to consider quantum mechanical approaches, either in the context of ab initio simulation methods for the generation of training data or for cross-fertilization of machine learning ideas, in particular regarding topical applications in quantum DFT <cit.>.
While much insight could be gained by considering the well-studied hard sphere fluid, the application of our machine learning procedure is arguably even more useful for particle models that lack satisfactory analytic density functional approximations.
Although mean-field descriptions account surprisingly well for soft and attractive contributions <cit.>, e.g. in the Lennard-Jones fluid, analytic efforts to go beyond this approximation are sparse <cit.>.
In the future, the application of neural DFT to such thermal systems may prove to be useful either via isothermal training or by providing the temperature as a further input quantity.
We expect the general method to hold up even for complex particle models, e.g. containing many-body contributions <cit.> or orientational degrees of freedom as treated within molecular DFT <cit.>, provided that sufficiently accurate training data of sufficient quantity can be generated.
A proper treatment of the arising phase transitions and interfacial phenomena might be subtle both in simulation as well as from a machine learning perspective.
Even though we saw no need for a more sophisticated training procedure in our investigations, it could be useful to consider physics-informed machine learning <cit.> as a technique for enforcing exact physical relations of the underlying problem directly during training.
Sum rules in bulk or in inhomogeneous systems, e.g. the thermal Noether identities (<ref>) and (<ref>), might be suitable candidates for this task.
Analogous to the evaluation of derivatives in physics-informed neural networks, we have shown the necessary quantities to be accessible by automatic differentiation of the neural functional.
When considering nonequilibrium systems, power functional theory (PFT) <cit.> establishes an exact functional many-body framework which is analogous to that of DFT in equilibrium.
A central ramification of PFT is the existence of a functional map from the time-dependent one-body density ρ(r⃗, t) and current J⃗(r⃗, t) to the internal force profile f⃗_int(r⃗, t; [ρ, J⃗]), which is in general nonlocal in space and causal in time t.
Recent work by <cit.> demonstrated that machine learning this kinematic internal force functional yields highly promising results and overcomes the analytic and conceptual limitations of dynamical density functional theory.
In this regard, our method can be put into a more general context as it may be viewed as a mere special case for equilibrium systems where J⃗(r⃗, t) = 0.
The topical problem of accurately describing nonequilibrium many-body physics is certainly a natural contender for the application and extension of our neural functional framework, with many practical questions arising, e.g. concerning the generation of training data or the choice of neural network architecture.
Lastly, the possibility of extending the machine learning procedure from planar symmetry to more general two-dimensional geometries or even to the full three-dimensional problem is worth contemplating.
Especially for the latter, the amount of required training data seems restrictive at first if one considers randomized simulations in the fully inhomogeneous geometry.
However, results obtained in the planar case could be leveraged since they already capture the crux of the internal interactions, as was shown in this work.
Therefore, it may be possible to supplement the planar data with only a few select higher-dimensional simulations to incorporate the remaining nontrivial effects due to the more general geometry.
As data-efficiency will be vital in this case, one might benefit from more extensive data augmentation, and the use of equivariant neural networks <cit.> could provide a way of casting certain symmetries directly into the model architecture.
We thank T. Zimmermann, T. Eckert and N. C. X. Stuhlmüller for useful comments.
This work is supported by the German Research Foundation (DFG) via Project No. 436306241.
§ HIGHER-ORDER CORRELATIONS
Analogous to Sec. <ref>, we demonstrate that higher-order correlations can be obtained from the neural correlation functional by nested automatic differentiation.
This is due to the fact that the hierarchy of direct correlation functions c_n(r⃗, r⃗', …, r⃗^(n-1); [ρ]), n ≥ 2, is accessible from successive functional derivatives of the one-body direct correlation functional <cit.>,
c_n(r⃗, r⃗', …, r⃗^(n-1); [ρ]) = δ^n-1 c_1(r⃗; [ρ])/δρ(r⃗') …δρ(r⃗^(n-1)).
As illustrated in the main text, translational symmetry can be applied in bulk fluids such that the resulting bulk correlation function c_n^b(r⃗, …, r⃗^(n-2)) = c_n(0, r⃗, …, r⃗^(n-2); [ρ_b]) only incorporates the n - 1 remaining position coordinates.
We specialize again to the planar geometry of our neural functional and show in Fig. <ref> the three-body bulk correlation function c̅_3^b ⋆(x, x') for a bulk density of ρ_b = 0.7 σ^-3.
While the computation of c̅_2^b ⋆(x) is practically immediate via a single reverse mode autodifferentiation pass, going to the three-body correlation function comes at the price of having to evaluate the Hessian of c_1^⋆(x; [ρ]), for which different strategies exist <cit.>.
In principle, one can proceed by nesting autodifferentiation layers to obtain further members of the hierarchy (<ref>), albeit being restricted by the practicability of the actual evaluation and the efficacy of the result.
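As a rough illustration of how such nested derivatives can be obtained in practice, the following sketch (reusing the placeholder model and grid spacing from the architecture sketch above) evaluates two- and three-body bulk slices by automatic differentiation; the per-bin division by the grid spacing is the simple discretization of the functional derivative assumed here.

```python
import torch
from torch.autograd.functional import jacobian, hessian

dx = 0.01                                   # grid spacing in units of sigma (assumed)
rho_window = torch.full((401,), 0.7)        # bulk density rho_b = 0.7 sigma^-3

def c1_at_centre(window):
    return model(window)                    # scalar c1(x; [rho]) from the sketch above

# two-body bulk slice: delta c1 / delta rho(x'), one value per window bin
c2_bulk_slice = jacobian(c1_at_centre, rho_window) / dx

# three-body bulk slice: second functional derivative, a Hessian over the window bins
c3_bulk_slice = hessian(c1_at_centre, rho_window) / dx**2
```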
Note that the computational effort at the three-body level is by no means restrictive and that growing numerical demands are expected when considering higher-order correlations.
The computation and analysis of c̅_3^b(x, x') might be especially useful for more complex fluid models, e.g. containing internal three-body interactions <cit.>.
We compare c̅_3^b ⋆(x, x') to analytic approximations based on FMT.
For both the Rosenfeld and the White Bear MkII functional, the three-body bulk direct correlation function is analytic in Fourier space.
We point the reader to Ref. Rosenfeld1989 for an expression of the original Rosenfeld result in terms of vectorial weight functions and to Refs. Kierlik1990,Phan1993 for an equivalent representation via scalar weights.
As the weight functions remain unchanged, the White Bear MkII result follows immediately from the modification of the excess free energy density as laid out in Ref. HansenGoos2006.
A cumulant expansion of the bulk result of the three-body direct correlation function in Fourier space can be transformed to real space analytically, which in planar geometry gives
c̅_3^b(x, x') = - (b R^4/a) exp((-x^2 + x x' - x'^2)/(a R^2)),
where the width parameter a and the prefactor b are determined by
a = (ν/κ) (3/5) (53 - 25 η + 8 η^2)/(30 + 2 η + 5 η^2 - η^3),
b = κ (8 π)/(3 √(3)) (30 + 2 η + 5 η^2 - η^3)/(1 - η)^5,
with the packing fraction η = πρ_b / 6.
The correction factors ν and κ are set to unity in the Rosenfeld FMT and attain the forms
ν = 53 - 35 η + η^2 + 5 η^3/53 - 25 η + 8 η^2,
κ = 30 - 6 η/30 + 2 η + 5 η^2 - η^3,
in the White Bear MkII case.
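For convenience, the cumulant approximation can be evaluated directly from the expressions above, e.g. as in the following sketch, where the hard-sphere radius is assumed to be R = σ/2 and the packing fraction is computed as η = πρ_bσ^3/6.

```python
import numpy as np

def c3b_cumulant(x, xp, rho_b, sigma=1.0, white_bear=True):
    """Gaussian (cumulant) approximation of the three-body bulk direct correlation
    function in planar geometry; R = sigma/2 is assumed for the hard-sphere radius."""
    eta = np.pi * rho_b * sigma**3 / 6.0
    if white_bear:
        nu = (53 - 35 * eta + eta**2 + 5 * eta**3) / (53 - 25 * eta + 8 * eta**2)
        kappa = (30 - 6 * eta) / (30 + 2 * eta + 5 * eta**2 - eta**3)
    else:                                   # Rosenfeld: correction factors set to unity
        nu = kappa = 1.0
    a = (nu / kappa) * 0.6 * (53 - 25 * eta + 8 * eta**2) / (30 + 2 * eta + 5 * eta**2 - eta**3)
    b = kappa * 8 * np.pi / (3 * np.sqrt(3)) * (30 + 2 * eta + 5 * eta**2 - eta**3) / (1 - eta)**5
    R = sigma / 2.0
    return -(b * R**4 / a) * np.exp(-(x**2 - x * xp + xp**2) / (a * R**2))
```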
The comparison reveals that the form of the neural three-body bulk correlation function c̅_3^b ⋆(x, x') is plausible and that it captures genuine features which go beyond both FMT descriptions.
The Rosenfeld FMT yields a large discrepancy in the core region x, x' ≈ 0, which is significantly underestimated as compared to the results from the neural functional and from the White Bear theory.
We recall that, as in Sec. <ref>, the tensorial weights of <cit.> have not been used in the FMT functionals and that their inclusion might be particularly relevant on the level of higher-order correlations.
In this vein, investigating members of the direct correlation hierarchy (<ref>) with the neural correlation functional could be a valuable aid for testing and refining analytic FMT functionals.
§ SPATIAL SYMMETRY OF THE NEURAL TWO-BODY DIRECT CORRELATION FUNCTIONAL
A further consistency test of c_2^⋆(x, x'; [ρ]) arises due to its expected symmetry with respect to an interchange of the planar position coordinates x and x'.
Recall that the excess free energy functional [ρ] generates the two-body direct correlation function according to
c_2(r⃗, r⃗'; [ρ]) = - δ^2 β[ρ]/δρ(r⃗) δρ(r⃗'),
see Eqs. (<ref>) and (<ref>).
One can directly recognize from the symmetry of the second functional derivative in Eq. (<ref>) that c_2(r⃗, r⃗'; [ρ]) = c_2(r⃗', r⃗; [ρ]) must hold.
On the basis of the neural direct correlation functional in planar geometry, assessing the validity of the identity
c_2^⋆(x, x'; [ρ]) = c_2^⋆(x', x; [ρ])
is a highly nontrivial test.
This is due to the fact that c_2^⋆(x, x'; [ρ]) evaluated at certain positions x and x' follows from automatic differentiation of c_1^⋆(x; [ρ]), where the input density window is centered around the location x, see Sec. <ref>.
On the other hand, when formally evaluating c_2^⋆(x', x; [ρ]), where the arguments x and x' are now reversed, the density window is centered around x', hence constituting a generally very different and a priori unrelated input profile.
One can expect Eq. (<ref>) to be recovered only if the physical implications of Eq. (<ref>) are captured correctly by the neural functional.
Note that Eq. (<ref>) is a necessary condition for the existence of a unique neural excess free energy functional ^⋆[ρ], which can practically be obtained via functional line integration, see Sec. <ref>.
We exemplify in Fig. <ref> that the neural two-body direct correlation functional c_2^⋆(x, x'; [ρ]) obtained via autodifferentiation of c_1^⋆(x; [ρ]) indeed satisfies the symmetry requirement (<ref>) to very high accuracy.
§ NEURAL EQUATION OF STATE FOR HARD DISKS VIA DIMENSIONAL CROSSOVER
Although the neural functional c_1^⋆(x; [ρ]) was acquired explicitly for the three-dimensional hard sphere fluid, one can use dimensional crossover techniques to obtain bulk results for the two-dimensional hard disk system.
This is facilitated by investigating the behavior of the hard sphere fluid under narrow confinement, which constitutes a quasi-two-dimensional scenario.
With this method, one obtains the equation of state for the hard disk fluid from c_1^⋆(x; [ρ]), as we demonstrate in the following.
We proceed similar to Sec. <ref> and utilize Eq. (<ref>) to express the pressure P(ρ_b) via the excess free energy density ψ_b(ρ_b), which we aim to compute for a range of bulk densities ρ_b.
Whereas c_1^⋆(x; [ρ]) was evaluated for the three-dimensional bulk fluid at spatially constant density, cf. Eq. (<ref>), here a suitable density profile ρ_2D(x) is constructed as input to the neural direct correlation functional in order to emulate narrow planar confinement.
For this, we choose
ρ_2D(x) = ρ_b/x_w Θ(x_w/2 - |x|)
with the Heaviside function Θ(·); note that Eq. (<ref>) is a Dirac series and yields the Dirac distribution for x_w → 0.
The neural direct correlation functional is then evaluated at the center of this assumed slit, and the values c_1^⋆(0; [ρ_2D]) are used analogous to Sec. <ref> for the determination of P_2D^⋆(ρ_b).
The equation of state for the associated two-dimensional hard disk system follows formally for x_w → 0.
As this limit is not directly accessible in practice, we assess the obtained values for finite but small slit widths 0.3 ≤ x_w / σ≤ 1 and extrapolate to x_w = 0 via a quadratic fit.
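A schematic version of this crossover bookkeeping is given below; the slab construction assumes a slit of width x_w centred at the origin, and the grid and fit settings are illustrative.

```python
import numpy as np

def slit_density(rho_b, x_w, dx=0.01, half_width=2.0):
    """Quasi-2D input profile: a uniform slab of width x_w centred at x = 0,
    with rho_2D(x) = rho_b / x_w inside the slit and zero outside."""
    x = np.arange(-half_width, half_width + dx, dx)
    return x, np.where(np.abs(x) <= x_w / 2.0, rho_b / x_w, 0.0)

def extrapolate_to_zero_width(x_w_values, p2d_values):
    """Quadratic fit of the quasi-2D pressure in the slit width, extrapolated to x_w -> 0."""
    return np.polyval(np.polyfit(x_w_values, p2d_values, deg=2), 0.0)
```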
The resulting equation of state P_2D^⋆(ρ_b) for the two-dimensional hard disk fluid as obtained from this dimensional crossover on the basis of the neural network is shown in Fig. <ref>.
We additionally display analytic equations of state from scaled particle theory <cit.> and by <cit.> which serve as reference.
One recognizes that reasonable results can be achieved for low and medium densities, but that deviations to analytic results become noticeable for ρ_b > 0.7 σ^-2.
Nevertheless, it is both surprising and reassuring that the neural functional is capable of predicting correlations in narrow confinement, as no such situations were explicitly included in the training data.
Recall that hard walls were imposed only at the borders of the simulation box of length L = 20 σ and that the inhomogeneous external potential within the simulation domain consisted solely of Fourier modes and of piecewise linear functions, see Eq. (<ref>).
Presumably, improvements over the results presented in Fig. <ref> could be obtained especially for large densities by including situations of very narrow confinement explicitly in the training data.
From this standpoint, the successful achievement of a viable two-dimensional equation of state serves as a demonstration that c_1^⋆(x; [ρ]) indeed captures the intricate functional relationship of the underlying physical problem instead of acting as a mere interpolation tool with respect to the encountered training data.
| http://arxiv.org/abs/2307.05688v1 | 20230711180101 | Dependence of peculiar velocity on the host properties of the gravitational wave sources and its impact on the measurement of Hubble constant | ["Harshank Nimonkar", "Suvodip Mukherjee"] | astro-ph.CO | ["astro-ph.CO", "gr-qc"] |
Indian Institute of Technology Kharagpur
=================================================================================================================
Accurate measurement of the low-redshift expansion rate, the Hubble constant, from standard sirens such as gravitational wave (GW) sources with electromagnetic counterparts relies on a robust peculiar velocity correction to the redshift of the host galaxy. We show in this work that the peculiar velocity of the host galaxies exhibits a correlation with the properties of the host galaxy, primarily its stellar mass, and that this correlation also evolves with redshift. As galaxies of higher stellar mass tend to form in halos of higher mass, which are located in spatial regions with non-linear fluctuations in the matter density field, the root mean square (RMS) peculiar velocity of the heavier galaxies is higher. As a result, depending on the formation channel of the binary compact objects and whether they are preferentially hosted in galaxies with lower or higher stellar mass, the peculiar velocity contamination of the redshift will be different. This variation in the peculiar velocity of the host galaxies can lead to a significant variation in the posterior distribution of the Hubble constant inferred using sources such as Binary Neutron Stars (BNSs), depending on how likely the underlying BNS population is to be hosted in galaxies with higher stellar mass. We find that for networks of GW detectors such as LIGO-Virgo-KAGRA (LVK), LVK+LIGO-India, and Cosmic Explorer+Einstein Telescope, the precision of the Hubble constant inferred from 10 bright siren events can vary from ∼ 5.4 - 6%, ∼ 4.5 - 5.3% and ∼ 1.1 - 2.7%, respectively. The impact of the correlation of peculiar velocity with the stellar mass on the inference of the Hubble constant is not limited to GW sources but is also applicable to other low-redshift probes of the expansion history such as type-Ia supernovae.
Galaxy: kinematics and dynamics – gravitational waves – neutron star mergers
§ INTRODUCTION:
Gravitational wave (GW) sources such as binary neutron star (BNS), binary black hole (BBH), and neutron star-black hole (NSBH) systems <cit.> are called standard sirens as they offer a unique advantage by providing an independent and accurate measure of their luminosity distance to the source <cit.>. However, due to the nature of GW observations, there is an inherent degeneracy between the luminosity distance and the inclination angle, which is the orientation of the orbital plane with respect to the observer. The degeneracy between these two parameters presents a challenge in accurately determining the astrophysical properties of the sources solely based on GW observations unless the distance-inclination angle degeneracy can be lifted by measuring the two polarizations of GW, using higher order modes for asymmetric mass systems, or improving the inference of inclination angle using observations from electromagnetic (EM) bands <cit.>.
Bright standard sirens, having an observable EM counterpart, allow for an independent measurement of the redshift of the host galaxy, which can be used along with the measured luminosity distance to measure cosmological parameters that impact the expansion history of the Universe, such as the Hubble constant (H_0) <cit.>. Such a measurement was performed for the event GW170817 by combining the measurement of luminosity distance and redshift <cit.>. A further improvement in the measurement of H_0 was possible by including the constraints on the inclination angle from jet measurements <cit.>. This measurement was further revised by including a better measurement of the peculiar velocity <cit.>. Another GW event, GW190521 <cit.>, with a tentative EM association made it possible to obtain a weak measurement of H_0 <cit.>.
Along with bright siren measurement, measurements from dark standard sirens made using statistical-host identification technique <cit.>, mass distribution <cit.> and more recently by using the spatial clustering of the GW sources with galaxies <cit.> using cross-correlation method <cit.> have emerged as an independent probe having the potential to reach the required percent level precision on H_0, thus advancing towards resolving the well established Hubble tension <cit.>.
In an isotropically expanding universe <cit.>, the gravitational instability within large-scale structures leads to deviations from the smooth Hubble flow, introducing irregularities in the motion of the host galaxies of GW sources. Consequently, these galaxies acquire an additional velocity component beyond their recessional velocity (due to the Hubble flow). The additional velocity component of the host galaxy of a GW source, called peculiar velocity, contaminates the redshift measurements from the EM counterpart. The contribution from the peculiar velocity (few hundreds of km s^-1) to the motion of a GW host is significant up to a redshift of z=0.05 (≈15,000 km s^-1). The contamination from individual sources degrades the accuracy and precision in the measurement of H_0 if not accounted for appropriately <cit.>.
As we show in the following sections, the peculiar velocity dispersion depends on properties of the galaxy such as its stellar mass. Galaxies with higher stellar mass tend to have a larger peculiar velocity dispersion than those with lower stellar mass. As a result, depending on whether the GW sources form in galaxies with higher or lower stellar mass, they will have different amounts of contamination. With the current sensitivities of the ground-based GW detectors, we expect confident bright siren detections up to ∼ 200 Mpc <cit.>, which makes it important to understand the role of the host properties and the possible contamination from the peculiar velocity that can impact the estimation of H_0.
The paper is structured as follows, in section <ref>, we discuss the motivation behind this work, followed by a brief review of the peculiar velocity formalism employed in this work in section <ref>. In section <ref>, we discuss our approach towards modelling a population of bright sirens. In section <ref>, we simulate bright siren events in these populations of galaxies considering different ground-based GW detector configurations followed by a Bayesian parameter estimation of the luminosity distance and inclination angle of the simulated GW sources. Finally in section <ref> we summarize the work and discuss future outlook.
§ MOTIVATION
The density perturbations cause instabilities in the local gravitational field between galaxies in groups or clusters which results in the motion of the host galaxies termed as peculiar velocity. It is essential to estimate the peculiar velocity field accurately as it provides crucial information about the distribution of matter in the early universe, which in turn helps understand the formation and evolution of large-scale structures. The study of peculiar velocities also has implications for our understanding of dark matter and dark energy and for interpreting observed cosmological phenomena such as redshift space distortions <cit.>.
The total velocity of the host galaxy of a GW source is the sum of the velocity due to Hubble flow, the peculiar velocity, and the velocity of the observer. Thus, in the expansion frame of reference, the peculiar velocity of the host galaxy v_p is related to the velocity of the source v⃗_s and that of the observer v⃗_o as
v_p=(v⃗_s-v⃗_o).n̂
where v_p is directed along the line of sight denoted by n̂. A positive value of v_p implies that the host galaxy is moving away from the observer. The difference in velocities of the source and the observer leads to a difference in the observed and true (cosmological) redshift as,
(1+z_obs)=(1+z_true)(1+v_p/c).
For a GW source at a cosmological redshift z_true, its luminosity distance is given via the distance-redshift relation as,
d_L = c(1+z_true)/H_0 ∫_0^z_true dz/E(z),
where the term E(z)= √(Ω_m(1+z)^3+ (1-Ω_m)) is the ratio of the Hubble parameter H(z) to the Hubble constant H_0, written in terms of the matter density Ω_m for a flat ΛCDM cosmological model. However, for low redshifts, H(z) is nearly constant. Hence eq. <ref> simplifies greatly to,
d_L=cz_true/H_0,
thus becoming independent of the cosmological model. Incorporating the peculiar velocity (eq. <ref>) into the luminosity distance (eq. <ref>), we get,
d_L=(cz_obs - v_p)/H_0.
Thus, for a bright siren event, with the d_L estimated from the GW data and z_obs estimated from the EM counterpart, the peculiar velocity v_p can lead to a bias in the inferred value of H_0 if not accounted for and corrected appropriately.
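The size of this bias can be illustrated with a short calculation based on the relation above; the numbers below (redshift, peculiar velocity, true H_0) are purely illustrative.

```python
C_KMS = 299_792.458                      # speed of light in km/s

def hubble_constant(d_l_mpc, z_obs, v_p_kms=0.0):
    """Low-redshift estimator H0 = (c*z_obs - v_p) / d_L in km/s/Mpc."""
    return (C_KMS * z_obs - v_p_kms) / d_l_mpc

# a host at z_obs = 0.01 receding with an extra 300 km/s, true H0 = 70 km/s/Mpc
d_l = (C_KMS * 0.01 - 300.0) / 70.0      # ~38.5 Mpc
print(hubble_constant(d_l, 0.01))        # biased high (~78) if v_p is ignored
print(hubble_constant(d_l, 0.01, 300.0)) # recovers 70 once v_p is corrected
```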
The peculiar velocity of a galaxy depends on the environment in which it forms. Galaxies forming in dense environments and in massive halos exhibit larger peculiar velocities (with both linear and non-linear components) <cit.>. Galaxies that form in massive halos in dense environments also have different astrophysical properties, such as stellar mass and star formation rate (see the review literature), and as a result there exists a correlation between galaxy properties such as the stellar mass and the peculiar velocity of the galaxies.
Along with the connection between galaxies and halos, the formation of the GW sources is related to the properties of galaxies and hence also to the properties of the halo. The formation of bright standard sirens and their mergers depend on the formation channel and on properties of the galaxies such as their stellar mass and star formation rate <cit.>. So, depending on the underlying population of the hosts of GW sources, the halo properties of the galaxies, and hence the peculiar velocity contamination, will be different. In this work, we are interested in showing the dependence of the peculiar velocity on astrophysical properties of the host galaxies, such as the stellar mass M_⋆, and how the interplay between the underlying GW source population and the stellar mass can lead to different amounts of contamination from the peculiar velocity to the hosts of the GW sources. A GW source population dependent peculiar velocity contamination will lead to a population dependent impact on the inference of the value of the Hubble constant from the low redshift sources. We show the impact of stellar-mass dependent peculiar velocity contamination on the value of the Hubble constant for the ongoing and upcoming GW surveys and how to mitigate such effects from future observations with networks of GW detectors such as LIGO+Virgo+KAGRA (LVK) <cit.>, LVK+LIGO-India <cit.>, and Cosmic Explorer+Einstein Telescope (CE+ET) <cit.>.
§ MODELLING THE IMPACT OF PECULIAR VELOCITY
Efforts for estimating the cosmic peculiar velocity field have spanned decades of extensive research <cit.>.
In the linear regime, where the density fluctuations are very small, the resulting velocity dispersions are underestimated <cit.>. The peculiar velocity model by <cit.> accounts for the non-linearities in the density field.
Assuming that all dark matter particles lie in spherical virialized halos, the model proposes that the peculiar velocity, v⃗_p, has two components,
v_p = v_halo + v_vir,
where v_vir is the higher-order term which represents the virial motion of a dark matter particle around the center of mass of the parent halo, and v_halo represents the motion of the center of mass of the halo. The linear component arises from the bulk flow of a group or cluster, while the non-linear term results from the non-linear interactions between galaxies within a cluster. Hence the velocity dispersion for the non-linear component depends upon the mass of the parent halo, σ_vir∝ m^1/3. The proportionality constant, obtained from the relations given by <cit.>, sets the dispersion for the non-linear term as
σ_vir = 476 g_σ (Δ_nl E(z)^2)^1/6 (m/(10^15 M_⊙/h))^1/3 km s^-1,
for a galaxy with a halo of mass m in a region with non-linear overdensity contrast Δ_nl approximated by,
Δ_nl = 18π^2 + 60x - 32x^2,
where x=Ω_m(1+z)^3/E(z)^2 - 1. The fitting form for the bulk flow is obtained by extrapolating root-mean-square velocities of the dark matter particles from peaks in the velocity power spectrum from <cit.>.
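A direct transcription of this prescription is sketched below; the normalisation factor g_σ and the cosmological parameters are placeholder inputs, as their adopted values are not restated here.

```python
import numpy as np

def sigma_vir(m_halo, z, omega_m=0.31, g_sigma=1.0):
    """Virial velocity dispersion in km/s for a halo of mass m_halo (in M_sun/h)
    at redshift z; g_sigma is the normalisation factor of the fitting formula."""
    e2 = omega_m * (1 + z)**3 + (1 - omega_m)          # E(z)^2 for flat LCDM
    x = omega_m * (1 + z)**3 / e2 - 1.0
    delta_nl = 18 * np.pi**2 + 60 * x - 32 * x**2      # non-linear overdensity contrast
    return 476.0 * g_sigma * (delta_nl * e2)**(1 / 6) * (m_halo / 1e15)**(1 / 3)

print(sigma_vir(1e13, 0.03))   # a group-sized halo gives a few hundred km/s
```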
Our work relies on the peculiar velocity estimates from the technique developed by <cit.>, which utilizes the BORG (Bayesian Origin Reconstruction from Galaxies) formalism <cit.>. By fitting a dynamical structure formation model to observed galaxies in cosmological surveys, the method infers a physically feasible and probabilistic model of the three-dimensional cosmic matter distribution, providing the linear and partially non-linear components of the velocity field. The algorithm accounts for unknown galaxy bias and incorporates selection and evolutionary effects while providing the velocity field as part of the dynamical model. BORG provides a numerical approximation of the posterior distribution of the parameters on a spatial grid of 256^3 values with a spatial resolution of 2.64 Mpc h^-1 for the initial conditions plus the bias parameters. The dark matter particle's initial and final positions are provided for each sample of the posterior, allowing for the estimation of the velocity field using the Simplex-in-Cell estimator.
The implementation of the BORG framework has been discussed extensively in <cit.>. BORG reconstructs the initial density field and simulates the evolution of the density fluctuations under gravity via linear perturbation theory to estimate the v_halo (linear) component. This component, paired with the non-linear term (eq. <ref>), forms the peculiar velocity field.
In order to explicate the correlation between the peculiar velocity of host galaxies of GW sources and their properties, we consider real galaxies from the GLADE+ galaxy catalog <cit.> (more details in section <ref>). GLADE+ catalog consists of the redshifts of the galaxies corrected for their peculiar motions using the formalism discussed above. By filtering the catalog to include galaxies with z_true≤0.05, we examine the distribution of the peculiar velocity as a function of stellar mass.
Fig. <ref> shows the distribution of the log of peculiar velocity dispersions as a function of the log-stellar mass of all galaxies up to redshift z=0.05 from the GLADE+ galaxy catalog. The total distribution is divided into five sub-distributions based on five redshift ranges ranging from z= 0.01 to z=0.05. As indicated by the color bar, the topmost panel shows the distribution for 0<z≤0.01, the second panel shows the same for 0.01<z≤0.02, and so on. As we proceed from the top panel towards the bottom, more galaxies are detected with higher stellar mass. This is because the galaxies which are further away with less stellar mass are fainter and were not detected in a magnitude-limited survey. The number of galaxies detected is also large as the comoving volume of the Universe at high redshift is large. Regardless of the redshift range, an increase in the stellar mass leads to an increase in the peculiar velocity dispersion.
To show the dependence of the dispersion of the peculiar velocity on the host property of the galaxy, we show a violin plot in Fig. <ref>, which represents the median distribution of the log of the peculiar velocity dispersion as a function of log-stellar mass of galaxies. Here the galaxies in each stellar mass bin are divided into two redshift bins, z between 0 and 0.025 and z from 0.025 up to 0.05. For each stellar mass bin, the orange curves represent the distributions for galaxies in the lower z bin (0<z≤ 0.025) and the purple ones represent those in the higher z bin (0.025<z≤ 0.05). The first three violins do not exhibit a purple posterior which indicates that for those stellar mass bins, there are no galaxies detected in the higher z bin as low mass galaxies are not bright enough to be detected at higher redshifts. This can be validated from the lower panels of Fig. <ref>. Overall, we see an increasing trend in the median distribution of log-peculiar velocity dispersion with an increase in the order of the stellar mass in both the redshift bins, which clearly indicates a population dependent peculiar velocity contamination. In every stellar mass bin, the orange posteriors are skewed towards a lower value of peculiar velocity as compared to the purple posteriors which indicates a redshift-dependent contamination from peculiar velocity.
Previous studies on other datasets have also found a similar trend of an increase in the velocity dispersion with an increase in the stellar mass <cit.>. The velocity trend observed from GLADE+ agrees with these previous findings. The key summary of this section is that the underlying peculiar velocity dispersion depends on the stellar mass and host redshift of the sources. As a result, depending on whether most GW sources are hosted in galaxies with higher (or lower) stellar mass, their peculiar velocity contamination will be different. It is important to note here that this impact can also be important for standard candles. These sources can also exhibit a population-dependent peculiar velocity contamination. We will explore this in a future work.
§ MODELLING GW SOURCE POPULATION FOR BRIGHT SIRENS
Bright standard sirens have an observable EM counterpart and hence are expected to be detected at low redshifts. The contribution of peculiar velocity is significant typically up to z=0.05 <cit.>. Our work focuses on low redshift bright sirens, particularly BNS mergers. To probe the impact of the host properties of BNSs, it is essential to model their population appropriately. The merger rate of compact objects plays a key role in this aspect. The merger rate for compact binaries as a function of some physical property like stellar mass or redshift gives the probability distribution for their host galaxies with respect to that property.
§.§ Merger rate density distribution for BNS hosts
Almost all properties of galaxies are strongly influenced by their stellar mass <cit.>. Massive galaxies typically exhibit old stellar ages, high mass-to-light ratios, and low star formation rates. They also have high stellar surface mass densities and frequently host active galactic nuclei. On the other hand, low-mass galaxies tend to have young stellar populations, low mass-to-light ratios and are actively forming stars at present. They have low stellar surface mass densities and rarely contain active galactic nuclei.
There is also a tight correlation between stellar mass and the metal content in the gas phase of emission-line galaxies. This indicates that the amount of metals in a galaxy is closely connected to its stellar mass.
In the paradigm of standard sirens, the properties of galaxies in which binary compact objects (BCOs) form (called formation galaxy) can significantly vary from those in which BCOs merge (called host galaxy) <cit.>. For any arbitrary compact binary system, formation galaxy, and host galaxy may or may not be the same depending upon numerous factors. For instance, the formation and evolution of a BNS system span 10^6 years (1 Myr) to 10^9 years (1 Gyr) before it finally merges. Throughout this timescale, its formation galaxy can be subjected to significant alterations via chemical evolution or even galaxy-galaxy merger resulting in its host galaxy having properties dissimilar to its formation galaxy.
To model the host properties of GW sources, <cit.> simulated the mergers of compact binaries using binary population models for their progenitors while considering the evolution and possible mergers of their host galaxies across different cosmic timescales. The host properties, viz. the galaxy stellar mass, metallicity and star formation rate (SFR), are assumed to follow either the mass metallicity relation (MZR) <cit.> or the fundamental metallicity relation (FMR) <cit.>. As seen in section <ref>, the peculiar velocity dispersion depends upon the mass of the parent halo of the host galaxy and hence on its stellar mass content (see Fig. <ref>). To model the source population, we adopt their distribution of BNS merger rate density as a function of the host stellar mass for local galaxies assuming the FMR. Following their log-normal merger rate density distribution (page 9, figure 8: BNS α1 FMR), we fit a broken power law (more in section <ref>) using the following equation
f(dm) ∝(dm/m_break)^α_1{1/2(1 + (dm/m_break)^1/Δ) }^(α_2 - α_1) Δ
where dm = dlog(M_⋆/M_⊙), m_break is the pivotal point of change of slope, α_1 and α_2 are the power law indices and Δ is the smoothness parameter. Normalizing the above equation for the probability density function (PDF), we get the probability densities for BNS host galaxies across a log-stellar mass range of [6, 13] which gives the probability that a GW event detected by a detector is emitted by a coalescing BNS located in a galaxy that has log-stellar mass given by the log-mass bin dm. The value of m_break in the work by <cit.> is approximately 10^10.5 M_⊙ which implies that galaxies with a stellar mass of the order of 10^10 M_⊙ are estimated to have the largest number of BNS mergers per Gpc^3 yr^-1 (following the merger rate distribution).
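A schematic implementation of this sampling step is shown below; the values of m_break, α_1, α_2 and Δ used in the call are placeholders and not the fitted parameters of table <ref>.

```python
import numpy as np

def broken_power_law(logm, m_break, alpha_1, alpha_2, delta):
    """Smoothly broken power law in dm = log10(M*/Msun), as in the equation above."""
    x = logm / m_break
    return x**alpha_1 * (0.5 * (1.0 + x**(1.0 / delta)))**((alpha_2 - alpha_1) * delta)

# normalise to a PDF over log10(M*/Msun) in [6, 13] and draw 50 host stellar masses
logm_grid = np.linspace(6.0, 13.0, 1000)
weights = broken_power_law(logm_grid, m_break=10.5, alpha_1=1.0, alpha_2=-3.0, delta=0.3)
pdf = weights / np.trapz(weights, logm_grid)
host_logm = np.random.choice(logm_grid, size=50, p=weights / weights.sum())
```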
§.§ Redshift dependent merger rate model
Having constructed a model describing the distribution of stellar masses across the galaxies hosting BNS systems, the next stage involves modeling the redshifts associated with these galaxies. The number of compact binary coalescence (CBC) events per unit redshift per unit observation time is estimated using <cit.> as
dN_GW/dz dt = R(z)/(1+z) dV_c/dz(θ_ΛCDM);
where dV_c/dz(θ_ΛCDM) is the comoving volume at redshift z for the cosmological parameters denoted by θ_ΛCDM for the flat-ΛCDM model and R(z) is the redshift evolution of the merger rate for different delay time distributions
R(z) = R_0 ∫_z^∞ P(t_d|t_d^min,t_d^max,d) R_SFR(z_m)dt/dz_mdz_m,
where P(t_d|t_d^min,t_d^max,d) = (t_d)^-d is the probability distribution of the delay time (t_d) given the power law index d; R_SFR(z) is the star-formation rate at redshift z <cit.>; and R_0 is the merger rate at redshift z=0. The minimum and maximum time delays are given in terms of the lookback time, and the local merger rate R_0 for BNSs is inferred by <cit.> to lie between 10 Gpc^-3 yr^-1 and 1700 Gpc^-3 yr^-1. Our work assumes a fiducial local merger rate of R_0 = 20 Gpc^-3 yr^-1. Hence eq. (<ref>) gives the number of BNS coalescing events per unit redshift per unit observation time. This gives the distribution of BNS host galaxies per redshift bin dz for a given observation time dt.
Integrating the eq. (<ref>) for a given observation time (including the detector duty cycle) and the redshift range gives the total number of BNS events up to that redshift. Hence, for redshifts up to 0.05, the distribution of BNS host galaxies is shown in Fig. <ref>. In this analysis we consider the BNS to follow the Madau-Dickinson SFR <cit.> with a negligible minimum delay time.
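The redshift distribution can be evaluated schematically as follows, assuming a negligible delay time so that R(z) simply tracks the Madau-Dickinson star-formation rate normalised to R_0 at z = 0; the cosmological parameters below are illustrative.

```python
import numpy as np
from astropy import units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.31)
R0 = 20.0                                            # local BNS merger rate, Gpc^-3 yr^-1

def madau_dickinson(z):
    return (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def dn_dz(z):
    """BNS events per unit redshift per year of observation (negligible delay time)."""
    rate = R0 * madau_dickinson(z) / madau_dickinson(0.0)
    dvc_dz = 4 * np.pi * cosmo.differential_comoving_volume(z).to(u.Gpc**3 / u.sr).value
    return rate / (1 + z) * dvc_dz

z_grid = np.linspace(1e-4, 0.05, 200)
events_per_year = np.trapz([dn_dz(z) for z in z_grid], z_grid)
```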
§.§ GLADE+ galaxy catalog
Using the BNS mass distribution model and the merger rate model, we choose the host galaxies of BNSs from the GLADE+ galaxy catalog <cit.>. In order to do so, we consider the distribution of host galaxies of BNSs across the stellar mass range of 10^6 M_⊙ to 10^12 M_⊙ along with their distribution across redshift up to z = 0.05. The GLADE+ galaxy catalog is constructed with the purpose of optimizing multi-messenger searches by facilitating the process of sky localization for EM follow-up. The catalog contains 22.5 million galaxies and 750,000 quasars. It consists of the redshifts of the galaxies in the CMB frame, their stellar masses, and their peculiar velocity uncertainties estimated from the BORG framework <cit.>. The redshift flag (z_flag) with a value of 1 indicates that the redshift of the galaxy in the CMB frame is corrected for the peculiar velocity bias, i.e., z_CMB = z_true.
Hence, to garner a population of potential BNS host galaxies, we filter the GLADE+ catalog by applying the following conditions:
* z_CMB≤ 0.05,
* M_⋆≠ `null',
* z_flag = 1.
With these restrictions, we get a subset of GLADE+ (henceforth called `filtered GLADE+') containing 572,558 galaxies. The filtered GLADE+ contains galaxies up to z ≤ 0.05, corrected for their peculiar velocity bias and transformed to the CMB frame. It also contains the stellar masses (M_⋆), peculiar velocity dispersions (σ_v_p), and the sky locations (right ascension (RA) and declination (Dec)) of galaxies up to z_CMB = 0.05. Fig. <ref> shows the distribution of galaxies from the filtered GLADE+ catalog as a function of the stellar mass.
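A minimal sketch of this selection, assuming the relevant GLADE+ fields have already been extracted into a table with the illustrative column names used below, is:

```python
import pandas as pd

glade = pd.read_csv("glade_plus_subset.csv")         # hypothetical pre-extracted table
filtered = glade[
    (glade["z_cmb"] <= 0.05)                         # condition (i)
    & glade["stellar_mass"].notna()                  # condition (ii)
    & (glade["z_flag"] == 1)                         # condition (iii)
][["ra", "dec", "z_cmb", "stellar_mass", "sigma_vp"]]
```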
To understand the impact of the host galaxy properties of BNSs on the peculiar velocity and subsequently on the inference of H_0, we consider three different cases of BNS host population models. The host galaxies from the filtered GLADE+ are chosen using these three different population models:
* ⟨ M_⋆⟩ = 10^9 M_⊙,
* ⟨ M_⋆⟩ = 10^10 M_⊙,
* ⟨ M_⋆⟩ = 10^11 M_⊙.
For the three cases, we first sort the filtered GLADE+ catalog with respect to the stellar mass and then combine the normalized densities from Fig. <ref> with the PDF shown in Fig. <ref> obtained from the broken power law fitted with the parameters given in table <ref>.
Using this combined probability distribution on the stellar masses, we sample a population of 50 galaxies from the filtered GLADE+ catalog that follows the BNS merger rate model discussed in section <ref>. We follow a similar procedure for the ⟨ M_⋆⟩ = 10^10 M_⊙ case, with the corresponding broken power law parameters given in table <ref>, to get another population of 50 potential BNS host galaxies.
The filtered GLADE+ contains only 3365 galaxies (∼ 0.6%) having M_⋆≥ 10^11 M_⊙. Hence, for ⟨ M_⋆⟩ = 10^11 M_⊙ case, we use a step function to restrict the population to masses M_⋆≥ 10^11 M_⊙. Note that this population contains only heavier galaxies and the use of the step function does not imply that the mean stellar mass of this population is 10^11 M_⊙. We use the symbol “⟨ M_⋆⟩” as a convention to denote the galaxy populations throughout this paper. The corresponding distribution for three cases is shown in Fig. <ref>.
Thus, we have three populations of potential BNS host galaxies for the three ⟨ M_⋆⟩ cases. These three populations have their masses as discussed above, host redshifts that follow the merger rate model, and peculiar velocity dispersions (σ_v_p) for the individual galaxies. With these velocity dispersions and the peculiar-velocity-corrected host galaxy redshifts, we get the v_p realizations for each galaxy in every population.
§.§ Luminosity distance estimation for BNS sources
To estimate the value of the Hubble constant from these sources, we need an estimation of the luminosity distance for these mock BNS samples for all three different detector network configurations chosen in this analysis. To estimate the luminosity distance, we follow two different procedures: (i) we use the parameter estimation code Bilby for 10 BNS sources, and (ii) we estimate the posterior on the luminosity distance for 50 sources using a Gaussian approximation, with a standard deviation equal to the median uncertainty of the Bilby luminosity distance estimates. As we use the median distance errors estimated from Bilby, the Gaussian approximation accounts for the degeneracy with the inclination angle. However, this approximation cannot capture the non-Gaussian nature of the posterior. On combining multiple GW sources (about 10 sources) for estimating H_0, the posterior on the Hubble constant H_0 will become Gaussian by the central limit theorem. As a result, this approximation will not impact the conclusion significantly (this is discussed in detail later). We do not perform Bilby parameter estimation for all 50 GW sources in order to reduce the computational cost.
Parameter estimation using Bilby: We perform parameter estimation of the GW sources using the <cit.> package to get a realistic posterior on the luminosity distance for the BNSs for three different detector networks, namely:
* LVK: LIGO-Livingston + LIGO-Hanford + Virgo + KAGRA
* LVK+LIGO-India: LVK + LIGO-India
* CE+ET: Cosmic Explorer + Einstein Telescope
To begin with, we first simulate BNS merger events in the chosen galaxies from the three galaxy populations from the GLADE+ catalog by injecting the GW source parameters m_1, m_2, d_L, θ_JN, ψ, ϕ, RA, and Dec. The parameters m_1 and m_2 denote the component masses randomly sampled from a uniform distribution over the range [0.8 M_⊙, 2.5 M_⊙]. The injected d_L is calculated using eq. <ref> where z_true≡ z_CMB from GLADE+ and the injected value of H_0 is 70 km s^-1 Mpc^-1. The sky location of the injected GW source, given by the right ascension (RA) and declination (Dec), is obtained from the GLADE+ catalog. The injected values for the inclination angle θ_JN and the polarisation angle ψ are randomly sampled from uniform distributions over [0, π] while that for the GW phase ϕ is sampled from uniform over [0, 2π]. We do not consider any spin effects like precession or tidal deformation in this analysis.
With these injection parameters, a GW signal waveform is generated using the waveform model. With a sampling rate of 2 kHz, the frequency-domain waveform h(f) is then added to Gaussian colored noise n(f) with the power spectral density S_n(f) corresponding to the design sensitivity of LIGO, Virgo, KAGRA, LIGO-India, Cosmic Explorer, and Einstein Telescope. This generates the GW strain data for the injected source parameters. The data duration for each source for the LVK and LVK+LIGO-India cases is ∼ 15 - 20 minutes, varying with the randomly sampled component masses. The data duration for the CE+ET case is ∼ 25 - 35 minutes.
Since the luminosity distance d_L and the inclination angle θ_JN of the source are the only parameters of interest, we set the priors for all the other parameters as delta functions to expedite the parameter estimation. This approximation is appropriate as the luminosity distance and masses are not degenerate with these detector configurations. Luminosity distance and inclination angle are the maximally degenerate parameters, so we primarily consider these two for the parameter estimation. For d_L, the prior is uniform over [1, 1000] Mpc, and the prior for θ_JN is sinusoidal from 0 to π. With a Gaussian likelihood over the parameter space, we use the sampler in Bilby to estimate d_L and θ_JN. The d_L samples are obtained for 10 sources each from the three ⟨ M_⋆⟩ populations of BNS models for the three GW detector configurations LVK, LVK+LIGO-India, and CE+ET.
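A minimal bilby-style sketch of such an injection-and-recovery setup is given below; the approximant, segment length, start time, sampler settings, and the use of a point-particle source model (consistent with neglecting spin precession and tidal effects) are illustrative choices of ours and not the exact configuration used in this work.

```python
import bilby

# placeholder injection; masses, angles, distance and sky position would come from
# the sampled population and the GLADE+ host galaxy
injection = dict(
    mass_1=1.6, mass_2=1.4, a_1=0.0, a_2=0.0, tilt_1=0.0, tilt_2=0.0,
    phi_12=0.0, phi_jl=0.0, luminosity_distance=120.0, theta_jn=0.6,
    psi=1.2, phase=2.0, ra=3.1, dec=-0.4, geocent_time=1264079000.0,
)

duration, f_sample = 360, 2048            # illustrative segment length and sampling rate
waveform_generator = bilby.gw.WaveformGenerator(
    duration=duration, sampling_frequency=f_sample,
    frequency_domain_source_model=bilby.gw.source.lal_binary_black_hole,
    waveform_arguments=dict(waveform_approximant="IMRPhenomD", reference_frequency=20.0),
)

ifos = bilby.gw.detector.InterferometerList(["H1", "L1", "V1", "K1"])
ifos.set_strain_data_from_power_spectral_densities(
    sampling_frequency=f_sample, duration=duration,
    start_time=injection["geocent_time"] - duration + 2)
ifos.inject_signal(waveform_generator=waveform_generator, parameters=injection)

# delta-function priors for everything except distance and inclination
priors = bilby.core.prior.PriorDict()
for key, value in injection.items():
    priors[key] = bilby.core.prior.DeltaFunction(value, name=key)
priors["luminosity_distance"] = bilby.core.prior.Uniform(1, 1000, name="luminosity_distance", unit="Mpc")
priors["theta_jn"] = bilby.core.prior.Sine(name="theta_jn")

likelihood = bilby.gw.likelihood.GravitationalWaveTransient(
    interferometers=ifos, waveform_generator=waveform_generator)
result = bilby.run_sampler(likelihood=likelihood, priors=priors, sampler="dynesty",
                           nlive=512, injection_parameters=injection,
                           outdir="outdir", label="bns_mock")
```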
Parameter estimation using the Gaussian method: We estimate the median σ_d_L/d_L for LVK, LVK+LIGO-India, and CE+ET from Bilby. We then use this median error to model the Gaussian posterior on the luminosity distance for all three cases of ⟨ M_⋆⟩.
For the LVK, LVK+LIGO-India, and CE+ET configurations, we consider the value of σ_d_L/d_L to be 19%, 15.5% and 1.6%, respectively. We generate d_L samples for the entirety of the host populations from a Gaussian distribution with the mean obtained from eq. <ref> and the percentage standard deviation obtained from the median values for every detector configuration. We then execute the v_p correction formalism for all host galaxies across the three detector configurations.
§ IMPACT OF PECULIAR VELOCITY ON HUBBLE CONSTANT ESTIMATION
As discussed in section <ref>, the peculiar motion of a galaxy contaminates the redshift estimate which affects the distance-redshift relation resulting in a biased inference of H_0. Hence, <cit.> developed a Bayesian formalism to incorporate and correct for the peculiar velocity contamination for inferring the Hubble constant from bright standard sirens. The equation for the framework is given by,
P(H_0|{D_GW},{ẑ})∝∏_n=1^N_ GW∫dd_L^ndv_p^nℒ(d_L^n|H_0,v_p^n,z_n,u_n,D_GW^n)
P(z_n|u_n) P(v_p^n|M,u_n) Π (H_0),
where P(H_0|{D_GW},{ẑ}) is the posterior PDF of H_0 given the GW data for N_GW sources and their observed redshifts estimated from the EM counterparts; ℒ is the likelihood on the luminosity distance d_L, which is assumed to be Gaussian; P(z|û) is the posterior of the redshift estimate at the source; P(v_p|M,û) is the posterior of the peculiar velocity of a host galaxy that has a halo of mass M and is located at sky position û (RA, Dec); and Π(H_0) is the prior on the value of H_0.
Having three populations of galaxies (discussed in section <ref>) from GLADE+ and their v_p realizations, we use the d_L posteriors obtained using (i) luminosity distance posterior from Bilby and (ii) Gaussian luminosity distance posterior to obtain posterior distribution on Hubble constant H_0 from the mock samples.
§.§ H_0 correction for GW sources with PE from Bilby
To implement the v_p correction (eq. <ref>), we use a flat prior on H_0 over the range [20, 150] km s^-1 Mpc^-1, and with the d_L samples obtained using Bilby, we use the ensemble sampler from the <cit.> package to implement the Metropolis-Hastings algorithm. The code samples from the above-defined prior on H_0 and evaluates the likelihood by calculating the corresponding model value of d_L using the v_p realizations. After marginalizing over the v_p uncertainties, we get the corresponding H_0 samples and obtain the posterior probability distribution function of H_0 for individual sources using kernel smoothing with a scale of ∼ 0.013 km s^-1 Mpc^-1. We then normalize the 10 individual posteriors and combine them to obtain a combined H_0 posterior for each ⟨ M_⋆⟩ population and each detector configuration.
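Schematically, this sampling step can be set up as follows; the Gaussian stand-in for the d_L likelihood, the placeholder event values, and the sampler settings are illustrative simplifications of the actual analysis.

```python
import numpy as np
import emcee

C_KMS = 299_792.458

def log_posterior(theta, z_obs, sigma_vp, dl_mean, dl_std):
    """Flat prior on H0 in [20, 150]; Gaussian prior of width sigma_vp on v_p;
    the d_L posterior from the PE step is summarised here by a Gaussian."""
    h0, v_p = theta
    if not (20.0 < h0 < 150.0):
        return -np.inf
    dl_model = (C_KMS * z_obs - v_p) / h0
    return -0.5 * (v_p / sigma_vp)**2 - 0.5 * ((dl_model - dl_mean) / dl_std)**2

# placeholder values for a single event: z_obs and sigma_vp from GLADE+,
# dl_mean/dl_std summarising the bilby d_L samples
args = (0.02, 300.0, 85.0, 13.0)
nwalkers, ndim = 32, 2
start = np.column_stack([np.random.uniform(20, 150, nwalkers),
                         np.random.normal(0.0, 100.0, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=args)
sampler.run_mcmc(start, 3000, progress=True)
h0_samples = sampler.get_chain(discard=500, flat=True)[:, 0]   # marginalised over v_p
```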
In Fig. <ref> we present the combined posteriors for 10 sources from the three ⟨ M_⋆⟩ cases for the LVK, LVK+LIGO-India, and CE+ET detector configurations, respectively. For the LVK configuration, the joint posteriors for the three populations have a similar spread. With the addition of the LIGO-India detector, as the uncertainty on the estimation of the luminosity distance decreases, we start to see the impact of the stellar mass of the host galaxy on the precision of H_0. This effect becomes more evident for the CE+ET configuration as it has the most precise d_L estimation of the three.
To elucidate the role of the distance uncertainty, we plot the precision on H_0 as a function of the log of the expectation value of the host stellar mass for the three detector networks, as shown in Fig. <ref>. We see that with the CE+ET detectors, for the ⟨ M_⋆⟩ = 10^9 M_⊙, 10^10 M_⊙ and 10^11 M_⊙ populations, the respective H_0 precisions are ∼ 1.1%, 1.9% and 2.3%. We observe that for CE+ET, as we go from low-mass BNS host galaxies to high-mass BNS host galaxies, the precision on H_0 inferred from a BNS merger in the host drops by a factor of ∼ 2.
Fig. <ref> shows the precision on H_0 as a function of the number of bright siren events cumulatively combined for the three detector configurations, denoted by three different markers. For each marker, the red, yellow, and blue lines represent the ⟨ M_⋆⟩ = 10^9 M_⊙, 10^10 M_⊙, and 10^11 M_⊙ populations, respectively. Here we see the interplay between the distance error and the peculiar velocity bias. With the CE+ET configuration, the contribution from the peculiar velocity uncertainty to the precision on H_0 far exceeds that from the distance uncertainty. Hence we clearly see the impact of the host properties in terms of the separation between the three dashed lines with lozenge (⧫) markers. As the ⟨ M_⋆⟩ = 10^11 M_⊙ population contains extremely heavy galaxies of
the order of 10^11 M_⊙ -10^12 M_⊙, the peculiar velocity contribution from the host galaxies in this population is much greater than that from the other two populations. As a result, the blue dashed line gets flattened at N_GW=3 and the precision on H_0 does not improve even after combining 10 sources.
§.§ H_0 correction for GW sources with PE from Gaussian approximation
Before we perform the analysis on the entire chosen population of BNS host galaxies, we first compare the results from the non-Gaussian d_L posteriors with the Gaussian ones. For the exact same sources, we make a plot similar to Fig. <ref> in Fig. <ref>. Comparing the similarly colored lines from the two figures, we observe that with the Gaussian approximation with the median percentage standard deviation, the precision on H_0 for the LVK and LVK+LIGO-India configurations is slightly poorer than that for the non-Gaussian d_L posteriors. For the CE+ET configuration, the Gaussian case shows a steeper slope from ⟨ M_⋆⟩ = 10^10 M_⊙ to 10^11 M_⊙. This assures that with the median σ_d_L/d_L, we are not underestimating the contribution from the distance uncertainty to the precision on H_0.
Fig. <ref> shows the precision on H_0 as a function of the number of bright siren events cumulatively combined (similar to Fig. <ref> but with 50 sources), with the Gaussian approximation for the luminosity distance with the median σ_d_L/d_L for the three detector configurations. The three detector configurations are denoted by three different markers, similar to Fig. <ref>. For each marker, the red, yellow, and blue lines represent the ⟨ M_⋆⟩ = 10^9 M_⊙, 10^10 M_⊙, and 10^11 M_⊙ populations, respectively. It can be seen that for the CE+ET configuration, for BNS host galaxies with a stellar mass of the order of 10^9 M_⊙, a one percent precision on H_0 can be attained with ≈ 15 bright siren events. If the host stellar mass is of the order of 10^10 M_⊙, we need more than 20 events, and for 10^11 M_⊙, we need at least 50 bright siren events to reach the 1% precision on H_0. Similar to Fig. <ref>, we see that the blue lozenges in Fig. <ref>, representing the CE+ET configuration for the ⟨ M_⋆⟩ = 10^11 M_⊙ population, reach saturation and tend to flatten as we combine more and more posteriors. As the peculiar velocity uncertainty far exceeds the distance error (1.6%), combining more posteriors accounts for the distance error and highlights the contribution from the host galaxy properties.
§ CONCLUSION AND FUTURE OUTLOOK
Binary neutron stars (BNS), being a class of bright standard sirens, are the ideal candidates to resolve the Hubble tension. However, since they are detected at low redshifts, the peculiar motions of their host galaxies contaminate the redshift and contribute significantly to the error budget on the inference of the Hubble constant (H_0). Hence to correct for the peculiar velocity contamination, it is necessary to estimate the peculiar velocity accurately. The peculiar velocity estimates are driven by the properties of the host galaxies of GW sources. Along with appropriate modeling of the source population, to discern the role of their host properties, we need to also analyze the impact of the distance uncertainty of a GW source on the H_0 estimation. Hence we considered three different ground-based detector configurations. These three configurations each lead to a different approximate uncertainty on the distance inferred from them.
With different detectors, we get different accuracies in the estimation of the luminosity distance of the GW sources. First, we consider the LIGO (Hanford and Livingston) - Virgo - KAGRA (LVK) network. We see that with LVK, the median uncertainty on the d_L estimate for 30 sources is 19%. If the d_L uncertainty exceeds ∼ 20%, the impact of the v_p uncertainty is shrouded and σ^2_d_L starts dominating σ^2_H_0. σ^2_d_L is largely dependent on the “loudness" of the GW chirp, which depends on the intrinsic properties of the GW sources.
The signal-to-noise ratio (SNR) of a GW signal recovered by a detector network depends on the number of detectors in the network. Hence, to achieve greater accuracy on d_L, we consider another detector (LIGO-India) along with . We observe that with the addition of the LIGO-India detector, we obtain a much better constraint on the d_L of the GW sources. The median uncertainty on the d_L estimate for the same 30 sources is 15.5%. The improved d_L estimates expose the contribution of the v_p uncertainty, allowing the dependence on the GW host properties to show up as a significant difference in the H_0 precision between the BNS host populations.
To minimize the dominance of the error in the distance measurement, we considered a third configuration involving the Cosmic Explorer and Einstein Telescope () detectors. The configuration has a median distance uncertainty of 1.6% which helps explore the host property dependence of peculiar velocity to its full potential.
The results from this study present a forecast of the impact of the host properties of GW sources, particularly BNSs, on the peculiar velocity estimation and subsequently on the inference of H_0. The results for real galaxies from the GLADE+ catalog show that the stellar mass has a major impact on the peculiar velocity estimation and, as a consequence, significantly affects the inference of H_0. To shed light on the impact of the host properties on peculiar velocity estimation, we need to minimize the dominance of the distance uncertainty. We saw that a BNS merger detected by the configuration and located in a host galaxy with stellar mass 10^9 M_⊙ yields an H_0 estimate that is a factor of 2 more precise than one in a host with stellar mass 10^11 M_⊙. This implies that host galaxies with more stellar mass will have a larger uncertainty on their peculiar velocity. The key takeaway is that measuring the luminosity distance with high accuracy is as important as correcting for the peculiar velocity of the GW source in order to achieve a precision measurement of the Hubble constant.
In conclusion, the variance on H_0 is an interplay between the luminosity distance uncertainties and the peculiar velocity uncertainties. Hence, for a precise estimation of H_0, it is necessary to accurately determine the distances to GW sources while simultaneously estimating their peculiar velocities with precision. Lastly, we saw that by combining more sources, typically of the order of 50, we can reach a 1% precision on H_0. However, this will depend on the host properties of the GW sources due to their peculiar velocity contamination. We saw that with the configuration, BNS mergers from less massive host galaxies will lead to a precision of 1% on H_0 with approximately 15 events, as compared to 40-50 events for high-mass galaxies. In the future, with accurate modeling of the peculiar velocity and accurate mitigation of its contamination, one can reach a 1% measurement with from 50-200 sources distributed up to z=0.05, depending on the underlying population of the host galaxies. From and one can achieve a 2.5-3% measurement of the Hubble constant H_0 from 50 GW sources.
§ ACKNOWLEDGEMENTS
The authors thank Abhishek Sharma for reviewing the manuscript and providing useful comments as a part of the LIGO publication and presentation policy. This work is a part of the ⟨⟩, which is supported by TIFR and the Department of Atomic Energy, Government of India. The authors are thankful to Mr. Parag Shah for maintaining the computer cluster of the ⟨⟩. The authors would like to thank the LIGO-Virgo-KAGRA Scientific Collaboration for providing the noise curves.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gw-openscience.org), a service of LIGO Laboratory, the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. LIGO Laboratory and Advanced LIGO are funded by the United States National Science Foundation (NSF) as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. The construction and operation of KAGRA are funded by Ministry of Education, Culture, Sports, Science and Technology (MEXT), and Japan Society for the Promotion of Science (JSPS), National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea, Academia Sinica (AS) and the Ministry of Science and Technology (MoST) in Taiwan. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation.
We acknowledge the use of the following packages in this work: Astropy <cit.>, Bilby <cit.>, emcee: MCMC Hammer <cit.>, Matplotlib <cit.>, NumPy <cit.>, Pandas <cit.>, SciPy <cit.>, Seaborn <cit.>.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request to the corresponding author.
|
http://arxiv.org/abs/2307.04887v1 | 20230710202020 | Measuring and Mitigating Interference in Reinforcement Learning | [
"Vincent Liu",
"Han Wang",
"Ruo Yu Tao",
"Khurram Javed",
"Adam White",
"Martha White"
] | cs.LG | [
"cs.LG",
"cs.AI"
] |
Measuring and Mitigating Interference in Reinforcement Learning
Vincent Liu, Han Wang, Ruo Yu Tao, Khurram Javed, Adam White, Martha White
===========================================================================
Catastrophic interference is common in many network-based learning systems, and many proposals exist for mitigating it. Before overcoming interference, we must understand it better. In this work, we provide a definition and novel measure of interference for value-based reinforcement learning methods such as Fitted Q-Iteration and DQN. We systematically evaluate our measure of interference, showing that it correlates with instability in control performance across a variety of network architectures. Our new interference measure allows us to ask novel scientific questions about commonly used deep learning architectures and to study learning algorithms which mitigate interference. Lastly, we outline a class of algorithms, which we call online-aware, that are designed to mitigate interference, and show that they reduce interference according to our measure and improve stability and performance in several classic control environments.
§ INTRODUCTION
A successful reinforcement learning (RL) agent must generalize — learn from one part of the state space to behave well in another. Generalization not only makes learning more efficient but is also essential for RL problems with large state spaces. For such problems, an agent does not have the capacity to individually represent every state and must rely on a function approximator — such as a neural network — to generalize its knowledge across many states. While generalization can improve learning by allowing an agent to make accurate predictions in new states, learning predictions of new states can also lead to inaccurate predictions in unseen or even previously seen states. If the agent attempts to generalize across two states that require vastly different behavior, learning in one state can interfere with the knowledge of the other. This phenomenon is commonly called interference[The term interference comes from early work in neural networks <cit.>.] or forgetting in RL <cit.>.
The conventional wisdom is that interference is particularly problematic in RL, even single-task RL, because
(a) when an agent explores, it processes a sequence of observations, which are likely to be temporally correlated;
(b) the agent continually changes its policy, changing the distribution of samples over time; and
(c) most algorithms use bootstrap targets (as in temporal difference learning), making the update targets non-stationary.
All of these issues are related to having data and targets that are not i.i.d. When learning from a stream of temporally correlated data, as in RL, the learner might fit the learned function to recent data and potentially overwrite previous learning—for example, the estimated values.
To better contextualize the impacts of interference on single task RL, consider a tiny two room gridworld problem shown in Figure <ref>.
In the first room, the optimal policy would navigate to the bottom-right as fast as possible, starting from the top-left. In the second room, the optimal policy is the opposite: navigating to the top-left as fast as possible, starting from the bottom-right. The agent is given its position in the room and the room ID number, thus the problem is fully observable. However, the agent has no control over which room it operates in. We can see catastrophic interference if we train a DQN agent in room one for a while and then move the agent to room two. The agent simply overrides its knowledge of the values for room one the longer it trains in room two. Indeed, we see DQN's performance in room one completely collapse. In this case the interference is caused by DQN's neural network. We contrast this to a simple tile coding representation (fixed basis), with a linear Q-learning agent. The tile coding represents these two rooms with completely separate features; as a result, there is no interference and performance in room one remains high even when learning in room two.
It is difficult to verify this conventional wisdom in more complex settings, as there is no established online measure of interference for RL. There has been significant progress quantifying interference in supervised learning <cit.>, with some empirical work even correlating interference and properties of task sequences <cit.>, and investigations into (un)forgettable examples in classification <cit.>.
In RL, recent efforts have focused on generalization and transfer, rather than characterizing or measuring interference. Learning on new environments often results in drops in performance on previously learned environments <cit.>.
DQN-based agents can hit performance plateaus in Atari, presumably due to interference. In fact, if the learning process is segmented in the right way, the interference can be more precisely characterized with TD errors across different game contexts <cit.>. Unfortunately this analysis cannot be done online as learning progresses. Finally, recent work investigated several different possible measures of interference, but did not land on a clear measure <cit.>.
Interference classically refers to an update negatively impacting the agent's previous learning—eroding the agent's knowledge stored in the value function. Therefore it makes sense to first characterize interference in the value function updates, instead of the policy or return.
In most systems, the value estimates and actions change on every time step, conflating many different sources of non-stationarity, stochasticity, and error. If an update to the value function interferes, the result of that update might not manifest in the policy's performance for several time steps, if at all.
We therefore focus on measuring interference for approximate policy iteration
algorithms: those that fix the policy for some number of steps (an iteration) and only update the value estimates.
We specifically conduct experiments on a class of algorithms we call Deep Q-iteration. One instance—with target networks—is almost the same as DQN but additionally keeps the behavior policy fixed within an iteration. The goal is to remove as many confounding factors as possible to make progress on understanding interference in RL. This class of algorithms allows us to investigate an algorithm similar to DQN; investigate the utility of target networks; and define a sensible interference measure by keeping more factors of variation constant within an iteration.
The contributions in this work are as follows.
(1) We define interference at different granularities to capture interference within and across iterations for this class of value-based algorithms.
(2) We justify using differences in squared TD errors across states before and after an update as an effective and computationally efficient approximation of this interference definition.
(3) We empirically verify the utility of our interference metric by showing that it correlates with instability in control performance across architectures and optimization choices.
(4) We leverage this easy-to-compute measure to outline a class of algorithms that mitigate interference. We demonstrate that these online-aware algorithms can improve stability in control by minimizing the interference metric.
We conclude this work by highlighting limitations and important next steps.
§ PROBLEM FORMULATION
In reinforcement learning (RL), an agent interacts with its environment, receiving observations and selecting actions to maximize a reward signal. We assume the environment can be formalized as a Markov decision process (MDP). An MDP is a tuple (, , Pr, R, γ) where is a set of states, is a set of actions, Pr:××→[0,1] is the transition probability, R:××→ is the reward function, and γ∈ [0,1) is a discount factor.
The goal of the agent is to find a policy π:×→[0,1] to maximize the expected discounted sum of rewards.
Value-based methods find this policy using an approximate policy iteration (API) approach, where the agent iteratively estimates the action-values for the current policy and then greedifies.
The action-value function : ×→ for policy π is (s,a) 𝔼[ ∑_k=0^∞γ^k R_t+k+1 | S_t = s, A_t=a ],
where R_t+1 R(S_t,A_t,S_t+1), S_t+1∼(·|S_t,A_t), and A_t∼π(·|S_t).
The Bellman operator for action values ^π:^||×||→^||×|| is defined (^π Q)(s,a) ∑_s'∈(s'| s,a)[R(s,a,s') + γ∑_a' ∈π(a'|s') Q(s',a')]. This operator can be used to obtain Q^π because it is the unique solution of the Bellman equation: ^π Q^π = Q^π.
Temporal difference (TD) learning algorithms are built on this operator, as the sampled TD error δ in expectation equals ^π Q - Q.
We can use neural networks to learn an approximation Q_ to the action-values, with parameters . Under certain conditions, the API procedure—consisting of a policy evaluation step to get Q_, followed by greedifying to get a new policy, and repeating—eventually converges to a nearly optimal value function <cit.>.
We investigate a particular API algorithm that is similar to Deep Q-learning (DQN), that we call Deep Q-iteration. The only difference to DQN is that the behavior policy is held fixed during each evaluation phase.
In this algorithms there is an explicit evaluation phase for a fixed target policy, where the agent has several steps to improve its value estimates. More specifically, on iteration k with current action-values estimates Q_k, the target policy is greedy π_k(s) = _a ∈ Q_k(s,a) and the behavior is ϵ-greedy. For each step in the iteration, a mini-batch update from a replay buffer is performed, using the update equation Δθδ_t ∇__t Q__t(S_t, A_t) for temporal difference (TD) error δ_t. This TD error can either be computed without target networks, δ_t R_t+1 + γ Q__t(S_t+1, π_k(S_t+1)) - Q__t(S_t, A_t), or with a target network,
δ_t = R_t+1+ γ Q_k(S_t+1, π_k(S_t+1)) - Q__t(S_t, A_t). The procedure is in Algorithm <ref>.
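As a concrete illustration (not the authors' code), the PyTorch sketch below performs one mini-batch Deep Q-iteration update. The greedy target policy π_k is taken with respect to the iteration-k snapshot held in `target_net`, and the bootstrap value comes from either that snapshot (with a target network) or the current network (without). The network, optimizer, and replay-buffer batch are assumed objects, and the termination mask is an added convenience.

```python
import torch

def dqi_update(q_net, target_net, optimizer, batch, gamma=0.99, use_target=True):
    s, a, r, s_next, done = batch                  # a: int64 actions; done: 0/1 termination mask
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # pi_k is greedy in the iteration-k snapshot Q_k, held fixed for the whole iteration.
        a_next = target_net(s_next).argmax(dim=1, keepdim=True)
        bootstrap = target_net if use_target else q_net
        q_next = bootstrap(s_next).gather(1, a_next).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
    loss = ((target - q_sa) ** 2).mean()           # mean squared TD error (semi-gradient update)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```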
We exactly recover DQN by setting the behavior policy[Notice that the typical bootstrap target max_a' Q_k(S_t+1, a') in DQN is in fact equivalent to the Deep Q-iteration update with a target network, because max_a' Q_k(S_t+1, a') = Q_k(S_t+1, π_k(S_t+1)). The scalar is the target network refresh frequency. We can also recover Double DQN <cit.>, though it deviates just a bit more from Deep Q-iteration. It similarly uses the Deep Q-iteration update with a target network, but the target policy is greedy in Q__t rather than Q_k. The resulting TD error is instead δ_t R_t+1 + γ Q_k(S_t+1, max_a' Q__t(S_t+1, a')) - Q__t(S_t, A_t).] to be ϵ-greedy in Q__t rather than Q_k. We opt to analyze this slightly modified algorithm, Deep Q-iteration, to avoid confounding factors due to the policy changing at each step.
The definitions of interference developed in the next section, however, directly apply to DQN as well. For the controlled Two Rooms example in the introduction, we used our measure for a DQN agent. However, when moving to more complex, less controlled scenarios, this changing data distribution may impact outcomes in unexpected ways. Therefore, to control for this factor, we focus in this work on Deep Q-iteration algorithms where the data-gathering policy also remains fixed during each iteration.
The central question in this work is how generalization in Q_ impacts behavior of Deep Q-iteration. Intuitively, updates to Q_ in some states may interfere with the accuracy of the values in other states. We formalize this notion in the next section, and in the following sections empirically connect the level of interference to performance.
§ DEFINING INTERFERENCE FOR VALUE ESTIMATION ALGORITHMS
In this section, we define the interference measure that is used to compare Deep Q-iteration algorithms in the coming sections. Deep Q-iteration alternates between policy evaluation and policy improvement, where one cycle of policy evaluation and improvement is called an iteration. To explain this measure, we first need to define interference during the evaluation phase of an iteration. We then discuss interference at four different levels of granularity, the coarsest of which we use for our experiments. We start at the lowest level to build intuition for the final definition of interference.
Within each iteration—in each evaluation phase—we can ask: did the agent's knowledge about its value estimates improve or degrade? The evaluation phase is more similar to a standard prediction problem, where the goal is simply to improve the estimates of the action-values towards a clear target. In the case of Deep Q-iteration with target networks, it attempts to minimize the distance to the target function 𝔼[R + γmax_a' Q_k (S', a') | S = s, A = a]. More generally, Deep Q-iteration, with or without target networks, attempts to reduce the squared expected TD-error: [δ(θ) | S = s, A= a]^2. Without target networks, the expected TD error is the Bellman error: [δ(θ) | S = s, A= a] = ^π Q_θ (s,a) - Q_θ(s,a), where ^π Q (s,a) = _π[R + γ Q(S', A') | S = s, A= a]. A natural criterion for whether the value estimates improved is to check whether the expected TD error decreased after an update.
Arguably, the actual goal for policy evaluation within an iteration is to get closer to the true Q^π(s,a). Reducing the expected TD error is a surrogate for this goal.
We could instead consider interference by looking at if an update made our estimate closer or further from Q^π(s,a). But, we opt to use expected TD errors, because we are evaluating if the agent improved its estimates under its own objective—did its update interfere with its own goal rather than an objective truth. Further, we have the additional benefit that theory shows clear connections between value error to Q^π(s,a) and Bellman error. Bellman error provides an upper bound on the value error <cit.>, and using Bellman errors is sufficient to obtain performance bounds for API <cit.>.
Accuracy Change. At the most fine-grained, we can ask if an update, going from _t to _t+1, resulted in interference for a specific point (s,a). The change in accuracy at s,a after an update is
Accuracy Change((s,a), _t, _t+1) [δ(_t+1) | S = s, A= a]^2 - [δ(_t) | S = s, A= a]^2
where if this number is negative it reflects that accuracy improved. This change resulted in interference if it is positive, and zero interference if it is negative.
Update Interference. At a less fine-grained level, we can ask if the update generally improved our accuracy—our knowledge in our value estimates—across points.
Update Interference(_t, _t+1) max( 𝔼_(S,A) ∼ d[Accuracy Change((S,A), _t, _t+1) ] , 0 )
where (s,a) are sampled according to distribution d, such as from a buffer of collected experience.
Both Accuracy Change and Update Interference are about one step.
At an even higher level, we can ask how much interference we have across multiple steps, both within an iteration and across multiple iterations.
Iteration Interference. reflects if there was significant interference in updating during the evaluation phase (an iteration).
We define Iteration Interference for iteration k using expectation over Updated Interference in the iteration
Iteration Interference(k) [X] for X = Update Interference(_T_k, _T_k+1)
where T_k is a uniformly sampled time step in the iteration k.
Interference Across Iterations reflects if an agent has many iterations with significant interference. Here, it becomes more sensible to consider upper percentiles rather than averages. Even a few iterations with significant interference could destabilize learning; an average over the steps might wash out those few significant steps. We therefore take expectations over only the top α percentage of values. In finance, this is typically called the expected tail loss or conditional value at risk. Previous work in RL <cit.> has used conditional value at risk to measure the long-term risk of RL algorithms.
For iteration index K, which is a random variable,
Interference Across Iterations[X|X ≥Percentile_0.9(X)] for X = Iteration Interference(K).
Iteration index K is uniformly distributed and Percentile_0.9(X) is the 0.9-percentile of the distribution of X. Other percentiles could be considered, where smaller percentiles average over more values and a percentile of 0.5 gives the median.
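The expected-tail statistic is simple to compute from the per-iteration values; a minimal sketch (ours), assuming one Iteration Interference value has been recorded per iteration:

```python
import numpy as np

def expected_tail(values, percentile=0.9):
    values = np.asarray(values, dtype=float)
    threshold = np.quantile(values, percentile)
    return values[values >= threshold].mean()      # mean of the top (1 - percentile) fraction

iteration_interference = np.random.rand(400)       # dummy data: one value per iteration
print(expected_tail(iteration_interference))       # Interference Across Iterations
```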
These definitions are quite generic, assuming only that the algorithm attempts to reduce the expected TD error (Bellman error) to estimate the action-values.
Calculating Update Interference, however, requires computing an expectation over TD errors, which in many cases is intractable to calculate. To solve this issue, we need an approximation to Update Interference, which we describe in the next section.
For clarity, we provide the pseudocode to measure interference for DQI in Algorithm <ref>.
§ APPROXIMATING UPDATE INTERFERENCE
The difficulty in computing the Update Interference is that it relies on computing the expected TD error. With a simulator, these can in fact be estimated. For small experiments, therefore, the exact Accuracy Change could be computed. For larger and more complex environments, the cost to estimate Accuracy Change is most likely prohibitive, and approximations are needed. In this section, we motivate the use of squared TD errors as a reasonable approximation.
The key issue is that, even though we can get an unbiased sample of the TD errors, the square of these TD errors does not correspond to the squared expected TD error (Bellman error). Instead, there is a residual term, that reflects the variance of the targets <cit.>
[δ()^2 | S = s, A = a] = [δ() | S = s, A= a]^2 + [R + γ Q_(S', A') | S = s, A= a]
where the expectation is over (R, S', A'), for the current (s,a), where A' is sampled from the current policy we are evaluating.
When we consider the difference in TD errors, after an update, for (s,a), we get
[δ(_t+1)^2 | S = s, A = a] - [δ(_t)^2 | S = s, A = a]
= [δ(_t+1) | S = s, A= a]^2 - [δ(_t) | S = s, A= a]^2
+ [R + γ Q__t+1(S', A') | S = s, A= a]
- [R + γ Q__t(S', A') | S = s, A= a].
For a given (s,a), we would not expect the variance of the target to change significantly. When subtracting the squared TD errors, therefore, we expect these residual variance terms to nearly cancel. When further averaged across (s,a), it is even more likely for this term to be negligible.
There are actually two cases where the squared TD error is an unbiased estimate of the squared expected TD error. First, if the environment is deterministic, then this variance is already zero and there is no approximation. Second, when we use target networks, the bootstrap target is actually R + γ Q_k(S', A') for both. The difference in squared TD errors measures how much closer Q__t+1(s, a) is to the target after the update. Namely, δ(_t+1) = R + γ Q_k(S', A') - Q__t+1(s, a). Consequently
[δ(_t+1)^2 | S = s, A = a] - [δ(_t)^2 | S = s, A = a]
= [δ(_t+1) | S = s, A= a]^2 - [δ(_t) | S = s, A= a]^2
+ [R + γ Q_k(S', A') | S = s, A= a]
- [R + γ Q_k(S', A') | S = s, A= a]
= [δ(_t+1) | S = s, A= a]^2 - [δ(_t) | S = s, A= a]^2.
It is straightforward to obtain a sample average approximation of [δ(_t+1)^2 | S = s, A= a]
- [δ(_t)^2 | S = s, A= a]. We sample B transitions (s_i, a_i, r_i, s'_i) from our buffer, to get samples of δ^2(_t+1, S, A,R,S') - δ^2(_t, S, A,R,S'). This provides
the following approximation for Update Interference:
Update Interference(_t, _t+1) ≈max(1/B∑_i=1^B δ^2(_t+1, s_i, a_i, r_i, s'_i) - δ^2(_t, s_i, a_i, r_i, s'_i), 0).
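A hedged sketch of this approximation: given the parameters before and after an update, the iteration's fixed network Q_k, and a held-out buffer of transitions, the measure is the clipped mean change in squared TD error. Only forward passes are needed; module and variable names are illustrative.

```python
import torch

def update_interference(q_before, q_after, q_k, buffer, gamma=0.99):
    s, a, r, s_next, done = buffer                  # B held-out transitions
    with torch.no_grad():
        a_next = q_k(s_next).argmax(dim=1, keepdim=True)   # greedy pi_k
        target = r + gamma * (1.0 - done) * q_k(s_next).gather(1, a_next).squeeze(1)

        def sq_td(net):                             # squared TD error per transition
            q_sa = net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            return (target - q_sa) ** 2

        diff = (sq_td(q_after) - sq_td(q_before)).mean()
    return torch.clamp(diff, min=0.0).item()
```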
The use of TD errors for interference is related to previous interference measures based on gradient alignment. To see why, notice if we perform an update using one transition (s_t,a_t,r_t,s_t'),
then the interference of that update to (s,a,r,s') is δ^2(_t+1, s, a,r,s') - δ^2(_t, s, a,r,s').
Using a Taylor series expansion, we get the following first-order approximation assuming a small stepsize α:
δ^2(_t+1, s, a,r,s') - δ^2(_t, s, a,r,s')
≈∇_δ^2(_t; s,a,r,s')^⊤ (_t+1 - _t)
= -α∇_δ^2(_t; s,a,r,s')^⊤∇_δ^2(_t; s_t,a_t,r_t,s_t')
.
This approximation corresponds to negative gradient alignment, which has been used to learn neural networks that are more robust to interference <cit.>. The idea is to encourage gradient alignment to be positive, since having this dot product greater than zero indicates transfer between two samples. Other work used gradient cosine similarity, to measure the level of transferability between tasks <cit.>, and to measure the level of interference between objectives <cit.>.
A somewhat similar measure was used to measure generalization in reinforcement learning <cit.>, using the dot product of the gradients of Q functions ∇_ Q__t(s_t,a_t)^⊤∇_ Q__t(s,a).
This measure neglects gradient direction, and so measures both positive generalization as well as interference.
Gradient alignment has a few disadvantages, as compared to using differences in the squared TD errors. First, as described above, it is actually a first order approximation of the difference, introducing further approximation. Second, it is actually more costly to measure, since it requires computing gradients and taking dot products. Computing Update Interference on a buffer of data only requires one forward pass over each transition. Gradient alignment, on the other hand, needs one forward pass and one backward pass for each transition. Finally, in our experiments we will see that optimizing for gradient alignment is not as effective for mitigating interference, as compared to the algorithms that reduced Update Interference.
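For comparison, the gradient dot product above can be computed with automatic differentiation. A minimal sketch (ours), assuming the two scalar squared TD errors have already been computed from the same network; note the extra backward pass per transition that the TD-error measure avoids.

```python
import torch

def gradient_alignment(q_net, sq_td_update, sq_td_eval):
    """sq_td_update / sq_td_eval: scalar squared TD errors on two single transitions."""
    params = [p for p in q_net.parameters() if p.requires_grad]
    g_update = torch.autograd.grad(sq_td_update, params, retain_graph=True)
    g_eval = torch.autograd.grad(sq_td_eval, params, retain_graph=True)
    # Positive alignment indicates transfer; the first-order interference of the update
    # on the evaluated transition is minus the stepsize times this value.
    return sum((gu * ge).sum() for gu, ge in zip(g_update, g_eval)).item()
```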
§ MEASURING INTERFERENCE & PERFORMANCE DEGRADATION
Given a measure for interference, we can now ask if interference correlates with degradation in performance, and study what factors affect both interference and this degradation. We define performance degradation at each iteration as the difference between the best performance achieved before this iteration, and the performance after the policy improvement step.
Similar definitions have been used to measure catastrophic forgetting in the multi-task supervised learning community <cit.>.
Let _(s,a) ∼ d_0[Q^π_k+1(s,a)] be the agent performance after the policy improvement step at iteration k where d_0 is the start-state distribution, where a random action is taken in the first step. We estimate this value using 50 rollouts. Performance Degradation due to iteration k is defined as
Iteration Degradation(k) max_i=1,…,k_(s,a) ∼ d_0[Q^π_i(s,a)] - _(s,a) ∼ d_0[Q^π_k+1(s,a)].
As before, we take the expected tail over all iterations.
If a few iterations involve degradation, even if most do not, we should still consider degradation to be high. We therefore define Degradation across iterations as
Degradation[X|X ≥Percentile_0.9(X)] for X = Iteration Degradation(K).
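A small sketch (ours) of these degradation statistics, assuming an array holding the estimated start-state performance of the greedy policy after each iteration:

```python
import numpy as np

def iteration_degradation(performance):
    """performance[k]: estimated start-state value of the policy after iteration k."""
    performance = np.asarray(performance, dtype=float)
    best_so_far = np.maximum.accumulate(performance)
    # Drop from the best earlier iteration; zero for the first iteration by convention.
    return np.concatenate(([0.0], best_so_far[:-1] - performance[1:]))

perf = np.random.randn(400).cumsum()                # dummy performance curve
degr = iteration_degradation(perf)
print(degr[degr >= np.quantile(degr, 0.9)].mean())  # Degradation across iterations
```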
It might seem like Degradation could be an alternative measure of Interference. A central thesis in this paper, however, is that Interference is about estimated quantities, like values, that represent the agent's knowledge of the world. The policy itself—and so the performance—may not immediately change even with inaccuracies introduced into the value estimates. Further, in some cases the agent may even choose to forgo reward, to explore and learn more; the performance may temporarily be worse, even in the absence of interference in the agent's value estimates.
We empirically show that Interference Across Iterations is correlated with Degradation, by measuring these two quantities for a variety of agents with different buffer sizes and number of hidden nodes.
We perform the experiment in two classic environments: Cartpole and Acrobot. In Cartpole, the agent tries to keep a pole balanced, with a positive reward per step. We chose Cartpole because RL agents have been shown to exhibit catastrophic forgetting in this environment <cit.>. In Acrobot, the agent has to swing up from a resting position to reach the goal, receiving negative reward per step until termination. We chose Acrobot because it exhibits different learning dynamics than Cartpole: instead of starting from a good location, it has to explore to reach the goal.
We ran several agents to induce a variety of different learning behaviors. We generated many agents by varying buffer size ∈{1000, 5000, 10000}, number of steps in one iteration M ∈{100, 200, 400}, hidden layer size ∈{64, 128, 256, 512} with two hidden layers.
Each algorithm performed 400 iterations. Interference Across Iterations and Degradation are computed over the last 200 iterations.
A buffer for measuring Interference is obtained using reservoir sampling from a larger batch of data, to provide a reasonably diverse set of transitions. Each hyperparameter combination is run 10 times, resulting in 360 evaluated agents for Deep Q-iteration without target networks and 360 with target networks.
We show the correlation plot between Interference and Degradation in Figure <ref>. For DQI with target networks, there is a strong correlation between our measure of interference and performance degradation. For DQI without target networks, we actually found that the agents were generally unstable, with many suffering from maximal degradation. Measuring interference for algorithms that are not learning well is not particularly informative, because there is not necessarily any knowledge to interfere with.
We note a few clear outcomes.
(1) Neural networks with a larger hidden layer size tend to have higher interference and degradation.
(2) DQI with target networks has lower magnitude Interference and less degradation than DQI without target networks on both environments. Target networks are used in most deep RL algorithms to improve training stability, and the result demonstrates that using target network can indeed reduce interference and improve stability. This result is unsurprising, though never explicitly verified to the best of our knowledge. It also serves as a sanity check on the approach, and supports the use of this measure for investigation in the role of other algorithm properties that might impact interference.
[Figure: Correlation plot of interference and degradation for fine-tuning agents. Each point represents the interference and degradation for 50 iterations. A darker color indicates the point is later during the fine-tuning phase.]
In the previous experiment, we measure interference and degradation for online agents. We can further separate learning and interference by considering fine-tuning offline learned agents.
We use Implicit Q-learning <cit.> to learn an offline agent from offline data, and fine-tune the agent with online interaction.
During the fine-tuning phase, we measure interference and degradation of the fine-tuning agent every 50 iterations.
We show the correlation plot in Figure <ref>.
Similarly, we can see a correlation between our measure of interference and performance degradation.
When our interference measure is high, the performance is expected to degrade.
In this experiment, several outliers exist with much higher interference than others, so we zoomed in on the x-axis to better indicate the pattern for most of the data.
In the appendix, we show the plot with all points in Figure <ref>.
§ MITIGATING INTERFERENCE VIA ONLINE-AWARE META LEARNING
With the interference measures developed and a better understanding of some of the factors that effect interference, we now consider how to mitigate interference.
In this section, we outline and empirically investigate a class of algorithms, which we call online-aware algorithms, that are designed to mitigate interference.
§.§ Online-aware Algorithms
We first discuss an objective to learn a neural network that explicitly mitigates interference. We then outline a class of algorithms that optimize this objective.
Let be the network parameters and U^_B() be an inner update operator that updates using the set of transitions in B, times. For example, U^_B() could consist of sampling mini-batches B_i from B for each of the i = 1, …, n DQI updates.
The goal of online-aware learning is to update the network parameters to minimize interference for multiple steps in the future: find a direction g_t at time step t to minimize the -step ahead Update Interference
_B[∑_i = 1^|B|δ_i(U^_B(_t- g_t))^2 -δ_i(_t)^2]
Formally, we can describe the online-aware objective as
J() = _B[L_B(U^_B())]
where L_B() = 1/|B|∑_i = 1^|B|δ_i()^2.
We refer to the class of algorithms which optimizes the online-aware objective as online-aware algorithms, and provide the pseudocode in Algorithm <ref> in Appendix <ref>. Note that this objective not only minimizes interference but also maximizes transfer (promotes positive rather than negative generalization).
The objective can be optimized by meta-learning algorithms including MAML <cit.>, which is a second-order method by computing gradients through the inner update gradients to update the meta-parameters, or a first-order method such as Reptile <cit.>. Reptile is more computationally efficient since it does not involve computing higher order terms, and only needs to perform the n inner updates to then perform the simple meta update.
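A first-order (Reptile-style) sketch of one online-aware meta update, in the spirit of the pseudocode in the appendix but with illustrative names and step sizes: run the n inner DQI updates on a copy of the network, then move the meta-parameters toward the inner-updated ones.

```python
import copy
import torch

def online_aware_step(q_net, inner_loss_fn, batches, alpha_meta=0.1, alpha_inner=1e-3):
    """inner_loss_fn(net, batch) should return the mean squared TD error on the batch."""
    inner_net = copy.deepcopy(q_net)
    inner_opt = torch.optim.SGD(inner_net.parameters(), lr=alpha_inner)
    for mini_batch in batches:                      # the n inner updates U^n_B(theta)
        loss = inner_loss_fn(inner_net, mini_batch)
        inner_opt.zero_grad()
        loss.backward()
        inner_opt.step()
    with torch.no_grad():                           # Reptile meta step: theta += alpha (theta' - theta)
        for p, p_inner in zip(q_net.parameters(), inner_net.parameters()):
            p.add_(alpha_meta * (p_inner - p))
```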
Algorithm <ref> is not a new algorithm, but rather is representative of a general class of algorithms which explicitly mitigates interference. It incorporates several existing meta learning algorithms. The choice of meta-parameters, inner update operator and meta update rules results in many variants of this Online Aware algorithm.
Two most related approaches, OML <cit.> and MER <cit.>, can be viewed as instances of such an algorithm.
OML was proposed as an offline supervised learning algorithm, but the strategy can be seen as instance of online-aware learning where the inner update operator updates only the last few layers at each step on a correlated sequence of data, whereas the initial layers are treated as meta-parameters and updated using the second-order method proposed in MAML.
MER, on the other hand, uses the first-order method proposed in Reptile to update the entire network as the meta-parameters. During the inner loop, MER updates the entire network with stochastic samples. MER introduces within-batch and across-batch meta updates; this difference to the Online-aware framework is largely only about smoothing updates to the meta-parameters. In fact, if the stepsize for the across-batch meta update is set to one, then the approach corresponds to our algorithm with multiple meta updates per step. For a stepsize less than one, the across-batch meta update averages past meta-parameters. MER also uses other deep RL techniques such as prioritizing current samples and reservoir sampling.
§.§ Experimental Setup
We aim to empirically answer the question: do these online-aware algorithms mitigate interference and performance degradation? We focus on an instance of the online-aware algorithm where the meta updates are performed with the first-order Reptile method. This instance can be viewed as a variant of MER using one big batch <cit.>. We also tried the online-aware algorithm using MAML, as well as meta-learning only a subset of the network parameters similar to <cit.>, but we found that the online-aware algorithm using Reptile consistently outperforms the MAML and OML variants across the environments we tested.
To answer the question we compare baseline algorithms to online-aware (OA) algorithms where the baseline algorithm is DQI with or without target nets. OA treats the entire network as the meta-parameter and uses the first-order Reptile method, shown in Algorithm <ref>. The inner update operator uses randomly sampled mini-batches to compute the update in Algorithm <ref>.
To fairly compare algorithms, we restrict all algorithms to perform only one update to the network parameters per step, and all algorithms use similar amounts of data to compute the update. We also include two other baselines: Large, which is DQI with 10 to 40 times larger batch sizes so that the agent sees more samples per step,
and GA, which directly maximizes the gradient alignment from Equation (<ref>) within DQI.[In fact, both MAML and Reptile approximately maximize the inner product between gradients of different mini-batches <cit.>.]
§.§ Experiments for DQI without Target Networks
We first consider DQI without target networks, which we found in the previous section suffered from more interference than DQI with target networks. We should expect an online-aware update to have the biggest impact in this setting. Figure <ref> summarizes the results on Acrobot and Cartpole.
Due to the space limit, we present the results with target networks in Appendix <ref>.
We can see that OA significantly mitigates interference and performance degradation, and improves control performance.
Large (light-blue) and GA (green) do not mitigate interference nearly as well. In fact, Large generally performs quite poorly and in two cases actually increases interference. Our results indicate that the online-aware algorithms are capable of mitigating interference, whereas, simply processing more data or directly maximizing gradient alignment are not sufficient to mitigate interference.
Further insight can be found by investigating data from individual runs. The previous results aggregate performance and return over runs, which can remove much of the interesting structure in the data. Looking closer, Figure <ref>(a) shows the return per run (left) and iteration interference per run (right) in Acrobot, revealing that vanilla DQI without target nets (in blue) experienced considerable problems learning and maintaining stable performance. OA (in red) in comparison was substantially more stable and reached higher performance.
Overall OA also exhibits far less interference.
§ CONCLUSION
In this paper, we proposed a definition of interference for value-based methods that fix the target policy for each iteration. We justified the use of squared TD errors to approximate this interference and showed this interference measure is correlated with control performance. In this empirical study across agents, we found that target networks can significantly reduce interference, and that bigger hidden layers resulted in higher interference in our environments. Lastly, we discuss a framework for online-aware learning for Deep Q-iteration, where a neural network is explicitly trained to mitigate interference. We concluded with experiments on classical reinforcement learning environments that showed the efficacy of online-aware algorithms in improving stability and lowering our measure of interference. This was particularly the case for Deep Q-iteration without target networks, where interference was the highest. These online aware algorithms also exhibit lower performance degradation across most of the tested environments.
There are several limitations in this work.
We did not carefully control for other factors that could impact performance, like exploration or the distribution of data in the replay buffer. DQI without target networks performed poorly in Acrobot under many hyperparameter settings, making it difficult to measure interference. Later, by including online-aware learning, the performance significantly improved, suggesting interference was indeed the culprit. But, it was difficult to perfectly identify, at least using only our measure. The correlation plots themselves indicate there are other factors, beyond interference, driving performance degradation. In fact, there has been increased interest in understanding when and why deep RL agents fail: why deep value-based RL agents rapidly change the greedy policy <cit.>; how agents eventually lose capacity with non-stationary targets <cit.>; why agents benefit from random resetting <cit.>. Interference may be playing a role in these failures, and these issues may be surfacing in our own experiments.
Another important limitation is that we only examined Deep Q-iteration algorithms, which fixed the behavior during each iteration. Allowing this behavior to update on each step, to be ϵ-greedy with respect to the current action-values, would give us DQN. An important next step is to analyze this algorithm, and other extensions of Deep Q-iteration.
Finally, these results highlight several promising avenues for improving stability in RL. One surprising outcome was the instability, within a run, of a standard method like Deep Q-iteration. The learning curve was quite standard, and without examining individual runs, this instability would not be obvious. This motivates re-examining many reinforcement learning algorithms based on alternative measures, like degradation and other measures of stability. It also highlights that there are exciting opportunities to significantly improve reinforcement learning algorithms by leveraging online-aware learning.
§ EXPERIMENTAL DETAILS
§.§ Measuring Interference
We provide the pseudocode in Algorithm <ref>.
§.§ Online-Aware Algorithms
We provide the pseudocode in Algorithm <ref>.
§.§ Experiment setup
We experiment with two environments: Cartpole and Acrobot from the OpenAI gym (<https://gym.openai.com/>). We set the maximum steps per episode to 500, and use a discounting factor γ=0.99 in all environments.
For the fine-tuning experiment, we additionally include two more environments: Lunar Lander and Mountain Car.
We collect offline datasets containing transitions from 4% near-optimal policy and 96% random policy.
Then we train an offline agent for 70000 updates and fine tune the agent for 400 iterations.
During fine-tuning phase, the number of updates in each iteration was 200.
We set up a checkpoint every 50 iterations and measure interference and degradation across the 50 iterations. We include 10 runs in each environment.
When computing performance degradation, we use 50 Monte Carlo rollouts to estimate the performance of the policy at each iteration, that is, _(s, a) ∼ d_0[Q^π_k(s,a)]. For evaluating the TD error difference, we use a reservoir buffer of size 1000, which approximates uniform sampling from all the past transitions.
§.§ Network architecture and hyperparameters
For all experiments, we use a three-layer neural network with ReLU activation <cit.> and He initialization <cit.> to initialize the neural network. For the Adam and RMSprop optimizers, we use the default values for the hyper-parameters except the step size.
For the online experiments in Section <ref>, we generate a set of hyper-parameter by choosing each parameter in the set:
* Batch size = 64
* Step size α = 0.0003
* Number of iterations = 400
* Optimizer = Adam
* Buffer size ∈{1000, 5000, 10000}
* Hidden size ∈{64, 128, 256, 512}
* Number of steps in an iteration M ∈{100, 200, 400}
For the fine-tuning experiment in Section <ref>, the IQL agent used a two-layer network with 64 hidden nodes on each layer to learn the policy from offline datasets.
In acrobot, the offline policy was learned with τ=0.9, β=10, α=0.005, the batch size was 100, and the learning rate was 3×10^-5 (notations are consistent with <cit.>).
In lunar lander, the number of timesteps for learning, the batch size, and α remained the same.
Other parameters were changed to τ=0.7, β=3, and the learning rate was 0.001.
The settings in mountain car were as same as in acrobot, except that the learning rate was 0.001.
During the fine-tuning step, τ and β remained the same as in the offline learning stage in the corresponding environment, while the batch size, learning rate, and the number of iterations were changed to be consistent with other experiments. The buffer size was fixed to 5000 with a first-in-first-out rule.
For the experiments in Section <ref> and <ref>, all algorithms use buffer size of 10000, 100 steps in an iteration, and 400 iterations. DQI without target nets uses hidden size of 128, and DQI with target nets uses hidden size of 512. The best parameters are chosen based on average performance of the policies over the last 200 iterations.
Baseline.
We sweep the hyperparameters for DQI in the range:
* Batch size =64
* Optimizer ∈{Adam, RMSprop}
* Step size α∈{0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}
DQI with large batch size.
For the baseline Large, we find the best batch size in the range:
* Batch size ∈{640, 1280, 2560}
Online-aware DQI.
In our experiment, we sweep over the hyperparameters in the set:
* Inner update optimizer = SGD
* α∈{1.0, 0.3, 0.1, 0.03}
* α_inner∈{0.01, 0.001, 0.0001, 0.00001}
* Number of inner updates K ∈{5,10, 20}
DQI maximizing gradient alignment (GA).
When updating the parameters, we draw two mini-batch samples B_1 and B_2 and add a regularization term in the loss function:
-λ[1/|B_1|∑_i∈ B_1∇_δ_i^2()]^⊤[1/|B_2|∑_j∈ B_2∇_δ_j^2()],
normalized by the number of parameters in the network.
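A sketch (ours) of how this regularizer can be computed with automatic differentiation; create_graph=True keeps the term differentiable in the parameters so that it can be added to the DQI loss before backpropagation.

```python
import torch

def ga_regularizer(q_net, td_loss_b1, td_loss_b2, lam=1.0):
    """td_loss_b1 / td_loss_b2: mean squared TD errors on mini-batches B_1 and B_2."""
    params = [p for p in q_net.parameters() if p.requires_grad]
    g1 = torch.autograd.grad(td_loss_b1, params, create_graph=True)
    g2 = torch.autograd.grad(td_loss_b2, params, create_graph=True)
    dot = sum((u * v).sum() for u, v in zip(g1, g2))
    n_params = sum(p.numel() for p in params)
    return -lam * dot / n_params    # added to the TD loss; minimizing it encourages alignment
```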
In our experiment, we sweep over the hyperparameters in the set:
* Optimizer ∈{Adam, RMSprop}
* Step size α∈{0.003, 0.001, 0.0006, 0.0003, 0.0001, 0.00001}
* λ∈{10.0, 1.0, 0.1, 0.01, 0.001}
§.§ Experiments for DQI with Target Networks
In this section, we investigate the utility of OA for DQI with target networks. Again, it is unlikely to be particularly useful to add OA for settings where the interference is low.
In the previous section, in Figure <ref>, we found that DQI with target networks had higher interference with a larger hidden layer size (512). We therefore test the benefits of OA for this larger network size in this experiment. Figure <ref> summarizes the results on Acrobot and Cartpole.
We can see that the addition of OA to DQI with target networks helps notably in Cartpole, and only slightly in Acrobot. This is in stark contrast to the last section, where there was a large gain in Acrobot when adding OA. This outcome makes sense, as when adding OA to DQI without target networks, the agent went from failure to learning reasonably well. In this case of DQI with target networks, the agent was already learning reasonably. Nonetheless, the addition of the OA objective does still provide improvement. In Cartpole, the improvement is more substantial. Again, looking at the previous correlation plots in Figure <ref>, we can see that a hidden layer size of 512 resulted in more interference in Cartpole, and more Performance Degradation; there was more room for OA to be beneficial in Cartpole. When looking at the individual runs in Figure <ref>, we can see that DQI has some drops in performance, whereas the OA variant is much more stable.
A few other outcomes are notable. The larger network (10x the size) was actually better than OA in Acrobot but did worse than the base algorithm (DQI with hidden layer sizes of 512) in Cartpole. The most consistent performance was with OA. Further, except for the larger network, there was a clear correspondence between interference and performance: OA reduced interference most and performed the best, GA was next in terms of both and then finally the baseline with no additions.
§.§ Experiments for Fine-Tuning
We show the plots for all points in the fine-tuning experiment in Figure <ref>.
|
http://arxiv.org/abs/2307.03868v2 | 20230708003704 | Automated Stability Analysis of Piecewise Affine Dynamics Using Vertices | [
"Pouya Samanipour",
"Hasan A. Poonawala"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
Automated Stability Analysis of Piecewise Affine Dynamics Using Vertices
Pouya Samanipour, Hasan A. Poonawala
=====================================================================================================================================================================================
This paper presents an automated algorithm for analyzing the stability of piecewise affine (PWA) dynamical systems, motivated by their broad range of applications.
We parametrize the Lyapunov function as a PWA function, with polytopic regions defined by the PWA dynamics. Using this parametrization, stability conditions can be expressed as linear constraints restricted to polytopes, so that the search for a Lyapunov function involves solving a linear program. However, a valid Lyapunov function might not be found given these polytopic regions.
A natural response is to increase the size of the parametrization of the Lyapunov function by dividing regions and solving the new linear program.
This paper proposes two new methods to divide each polytope into smaller ones.
The first approach divides a polytope based on the sign of the derivative of the candidate Lyapunov function, while the second divides it based on the change in the vector field of the dynamical system.
In addition, we propose using Delaunay triangulation to achieve automated division of regions and preserve the continuity of the Lyapunov function.
Examples involving learned models and explicit MPC controllers demonstrate that the proposed method of dividing regions leads to valid Lyapunov functions with fewer regions than existing methods, reducing the computational time taken for stability analysis.
§ INTRODUCTION
Piecewise affine (PWA) dynamical systems have gained popularity in robotics <cit.> and the automotive industry <cit.> due to their wide applications. PWA concepts are utilized in advanced controllers, including gain-scheduled flight control systems <cit.> and Takagi-Sugeno fuzzy systems <cit.>. Affine systems with control saturation can be expressed using PWA dynamics, enabling effective synthesis of controllers through explicit model predictive control (MPC) <cit.>. However, obtaining a Lyapunov function for stability guarantees with explicit MPC can be challenging. Alternatively, there is an increasing trend in using supervised machine learning methods for learning dynamics and controllers <cit.>. Neural networks (NN) with rectified linear unit (ReLU) activation functions have been employed to convert closed-loop dynamics into PWA dynamics <cit.>. The stability of these methods, however, is not guaranteed, emphasizing the need to develop an automated approach to finding Lyapunov functions for learned models, including ReLU networks and explicit MPC.
Sampling-based methods <cit.> are prevalent for learning Lyapunov functions. The Lyapunov function is learned from finite samples, and this function must meet the stability conditions at all states; therefore, verification is a critical component of the analysis.
Verification can be performed in an inexact manner using relaxed convex problems <cit.> or in an exact manner using Satisfiability Modulo theories (SMT) and Mixed-Integer Programs (MIP)<cit.>. The exact verifier certifies the Lyapunov function or generates counterexamples violating the stability conditions. Counterexamples can be incorporated into training samples for iterative learning. However, the computational complexity of the verifier remains a challenge.
An alternative to the learning approach is to parameterize the Lyapunov function and solve it as an optimization problem <cit.>. The Sum of Squares (SOS) method is employed to find the Lyapunov function for nonlinear dynamics <cit.>, but it can be computationally complex. A piecewise quadratic (PWQ) parameterization of the candidate Lyapunov function is proposed in<cit.>. However, these methods must deal with the conservatism of the S-procedure, and the results are limited to two-dimensional examples.
Instead of relying on the PWQ Lyapunov function, <cit.> utilized the PWA function to parameterize the Lyapunov function. An algorithm has been developed for finding a PWA Lyapunov function using partition refinement in <cit.>. A method for calculating the PWA Lyapunov function for conewise dynamics was proposed by <cit.>. The PWA dynamics and controller have been parameterized as a ReLU network in <cit.>. The Lyapunov function and the controller are found by parameterizing the Lyapunov conditions as quantifier-free constraints for a bilinear quadratic optimization problem <cit.>. Although the Lyapunov conditions for the PWA Lyapunov function can be expressed without conservatism, the PWQ Lyapunov function receives more attention in the literature.
The refinement process in the context of Lyapunov stability analysis presents several challenges, such as preserving the continuity of the candidate Lyapunov function and dividing complex polytopes effectively.
We propose the following contributions to address the challenges in the refinement and continuity of the Lyapunov function.
Contributions
The paper introduces two novel methods for dividing cells during the search for valid Lyapunov functions. The first method utilizes the derivative of the Lyapunov function as a criterion to divide a cell, while the second method analyzes the vector field of the dynamics to do so. By examining the behavior of the Lyapunov function derivative or vector field, these methods determine suitable locations for proposing new vertices that will define new cells, since we use the vertex representation for polytopes. Furthermore, the paper proposes using Delaunay triangulation to automate the refinement process for cells. The proposed refinement methods offer the advantage of finding valid Lyapunov functions with fewer refinements compared to existing techniques. The efficacy of the search procedure is demonstrated through non-trivial examples, where valid Lyapunov functions are successfully identified within reasonable computation times. Additionally, the paper evaluates the effectiveness of the proposed approach in determining the region of attraction (ROA) by comparing the results with other methods. The comparison showcases the capability of the proposed approach to identify the ROA using the Lyapunov functions. The contributions of this paper improve the refinement process addressing challenges in the parameterization of the Lyapunov function.
§ PRELIMINARIES
In this paper, we examine the stability analysis problem for dynamical systems described by piecewise affine functions as follows:
ẋ = (x).
where x ∈ℝ^n is the state variable, and the term (x) denotes a piecewise affine (PWA) function. We focus on continuous PWA functions with polytopic cells.
The rest of this section formally describes functions.
*Notation
An index for each element in the set S constitutes the set (S).
The convex hull, the interior, the boundary, and the closure of the set S are denoted by S, S, ∂ S, and S respectively.
The transpose of matrix A is A^T. ⟨·,·⟩ denotes the inner product, ∠(·,·) is the angle between two vectors, and |·|_2 is the standard L_2 norm.
It should be noted that the symbol ≽ is the element-wise version of ≥.
§.§ Partitions And Refinements
In this paper, we define a partition 𝒳 as a collection of subsets {X_i}_i ∈ I, where each X_i is a closed subset of ℝ^n and int(X_i) ∩ int(X_j) = ∅ for all i, j ∈ I with i ≠ j. The domain of the partition, Dom(𝒳), is the union of all the cells in 𝒳.
Given two partitions 𝒴 = {Y_i}_i ∈ I and 𝒵 = {Z_j}_j ∈ J of a set S = Dom(𝒴) = Dom(𝒵), we say that 𝒵 is a refinement of 𝒴 if Z_j ∩ Y_i ≠∅ implies that Z_j ⊆ Y_i. We denote the set of all refinements of 𝒴 as Ref(𝒴) <cit.>.
§.§ Piecewise Affine Functions
We explicitly parameterize a piecewise affine function (x) by a partition = {_i }_i ∈ and a collection of matrices 𝐀_ = {_i }_i ∈ and vectors 𝐚_ = {_i }_i ∈ such that
(x) = _i + _i, if ∈_i, where
_i = {x ∈^n _i + _i ≽ 0 }.
Note that a generic function may not be continuous unless we appropriately constrain the parameters _i, _i, _i, and _i<cit.>.
It is assumed that any PWA function in this paper with this explicit form meets such constraints and is always continuous. Additionally, we consider the origin to be the equilibrium, and we denote by I_0 and I_1 the index sets of cells containing and not containing the origin, respectively. Also, we assume that all the cells are bounded; therefore, we can use the vertex representation for all cells. A vertex is a facet of dimension 0 of a cell <cit.>. Each cell of a partition can be represented using its vertices:
X_i = conv(𝒱(X_i)),
where 𝒱(X_i) represents the set of vertices of the cell X_i.
§ MAIN ALGORITHM
This section presents an overview of the stability analysis algorithm, which aims to construct an optimization problem to discover the Lyapunov function. The algorithm consists of two main components: the formulation of an optimization problem to find a valid Lyapunov function and a refinement process to enhance the flexibility of the Lyapunov function. A detailed description of these components is provided in the subsequent section. For better comprehension, a pseudo-code representation of the algorithm is presented in Algorithm <ref>. The termination condition of the algorithm is determined by two criteria: either a valid Lyapunov function is found, or the optimization process exceeds the predefined timeout threshold of 3600 seconds. It should be emphasized that in the case of unstable systems, the algorithm needs to be manually terminated.
§ OPTIMIZATION BASED SEARCH FOR LYAPUNOV FUNCTION
In this section, we first describe the general idea of stability analysis and the Lyapunov function. Next, we parameterize the Lyapunov function as a PWA function. Then we present the stability conditions for PWA dynamics with a PWA candidate Lyapunov function. In <ref>, we convert the stability analysis problem into a linear optimization problem. We construct the optimization problem to be always feasible; however, only a specific solution is accepted as a valid Lyapunov function. Furthermore, we propose new refinement approaches in <ref> to increase the capacity of the candidate Lyapunov functions, facilitating the search for valid Lyapunov functions.
§.§ Lyapunov function
The Lyapunov stability theory is well known for its application to the analysis of nonlinear dynamical systems <cit.>.
Assume that V:D→ R is a continuously differentiable function, and x=0 is the equilibrium point of equation (<ref>). In this case, equation (<ref>) will be asymptotically stable if and only if V is strictly positive definite and strictly decreasing ∀ x ∈ D-{0}.
§.§ PWA Lyapunov function
In this paper, we investigate the use of PWA Lyapunov functions on a bounded partition that is aligned with the structure of the dynamics (<ref>). This assumption can be used to further reduce computation costs by taking advantage of the convexity property. Specifically, if all cells in the partition are bounded, an affine function is positive on a particular cell X_i if and only if it is positive on all vertices of X_i <cit.>. This observation allows for simplified analysis and computation of the Lyapunov function.
Consider a candidate Lyapunov function such that:
V(x) = p_i^T x + q_i for x ∈ X_i, i ∈ I_1,
V(x) = p_i^T x for x ∈ X_i, i ∈ I_0.
In the equation above, the function V(x) is continuous, and it is differentiable in the interior of each cell. It is therefore possible to calculate the derivative of the candidate Lyapunov function along the dynamics ẋ = f(x) in the interior of each cell X_i∖{0}:
ℒ_f V = ⟨∇ V , f(x) ⟩ ,
where ∇ V is the gradient of V(x), and ℒ_f denotes the Lie derivative along f.
When V(x) is differentiable at x, let the local affine Lyapunov function be V(x) = p^T x + q, and the dynamics be ẋ = A x + a. The derivative of the Lyapunov function along the trajectories can be calculated as follows:
V̇=p^T(Ax+a).
Let {X_i}_i∈ I be a partition of a bounded subset of ℝ^n into convex polytopes with vertices v_k.
* The Lyapunov function (<ref>) will be positive definite iff:
p_i^T v_k + q_i > 0 for i ∈ I_1, v_k ∈𝒱(X_i)
p_i^T v_k > 0 for i ∈ I_0, v_k ∈𝒱(X_i).
* V̇, (<ref>), will be a negative definite function iff:
p_i^T (A_i v_k + a_i) < 0 for i ∈ I, v_k ∈𝒱(X_i).
These results can be derived directly as a result of parameterizing the candidate Lyapunov function in the affine form.
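Because both conditions involve only the finitely many vertices of each cell, checking a candidate (p_i, q_i) on a single cell reduces to a handful of inner products. The following minimal sketch (Python/numpy, with a placeholder cell and stable local dynamics that are not taken from the examples later in this paper) illustrates the check; it is not the optimization-based search described below.

import numpy as np

def cell_conditions_hold(verts, A, a, p, q, contains_origin=False, eps=1e-9):
    """Check the two vertex conditions above on one bounded cell:
    V > 0 and Vdot < 0 at every nonzero vertex."""
    for v in verts:
        if np.allclose(v, 0.0):            # skip the origin itself
            continue
        V = p @ v + (0.0 if contains_origin else q)
        Vdot = p @ (A @ v + a)             # derivative along x_dot = A x + a
        if not (V > eps and Vdot < -eps):
            return False
    return True

# Placeholder cell (a square away from the origin) and stable local dynamics.
verts = np.array([[1.0, 0.0], [2.0, 0.0], [2.0, 1.0], [1.0, 1.0]])
A, a = np.array([[-1.0, 0.0], [0.0, -1.0]]), np.zeros(2)
print(cell_conditions_hold(verts, A, a, p=np.array([1.0, 1.0]), q=0.0))   # True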
The last step is to force the Lyapunov function to be continuous. To achieve this goal, the candidate Lyapunov function (<ref>) must meet the following requirements.
V_i(v_k) = V_j(v_k), i ≠ j ∈ I, v_k ∈𝒱(X_i) ∩𝒱(X_j).
A Lyapunov function (<ref>) on the partition is considered valid if there exist p_i and q_i satisfying (<ref>)-(<ref>) for every v_k ≠ 0.
In this formulation, (<ref>) guarantees that the Lyapunov function will be positive definite. Additionally, (<ref>) guarantees continuity. In this case, the Lyapunov function is Lipschitz continuous, but it is not differentiable at the cell boundaries. As a result of the Lipschitz continuity of the Lyapunov function, we are able to use the Clarke generalized gradient and Clarke generalized derivative <cit.>. The Clarke generalized gradient ∂ V(x) of the Lyapunov function (<ref>) can be described as:
∂ V(x)=conv({p_i:i∈ I(p),x∈ X_i})
The Clarke generalized derivative along F for the differential inclusion ẋ∈ F(x) is provided by <cit.>
V̇_F={p^Tf:p∈∂ V(x),f∈ F(x)}.
For points x ≠ 0, if F is a singleton, then (<ref>) guarantees that V̇_F < 0 for all p ∈∂ V(x). As shown by <cit.>, the maximum of (<ref>) upper-bounds the decrease of the Lyapunov function along solutions of the dynamical system. Therefore, we may conclude that the Lyapunov function is decreasing along all trajectories of the dynamical system. For more detail, please see <cit.>.
The origin is assumed always to be defined as a vertex of the cells in I_0. The assumption that the origin is always defined as a vertex of a cell ensures that we can always find a positive-definite Lyapunov function.
Another assumption is that if a vertex v_k ∈𝒱(X_i) and v_k ∈ X_i ∩ X_j, then v_k ∈𝒱(X_j). This assumption is required to preserve the continuity of the Lyapunov function using (<ref>). Details can be found in <ref>.
§.§ Optimization problem
The constraints (<ref>)-(<ref>) on variables p_i and q_i from (<ref>) may be infeasible due to conditions (<ref>) associated with the decrease of the Lyapunov function along solutions.
Slack variables are added to these constraints to ensure feasibility. Consequently, we can formulate the search process for the Lyapunov function as follows:
min_ p_i, q_i,τ_i ∑_i=1^Nτ_i
Subject to:
p_i^T (A_i v_k + a_i) - τ_i < -ϵ_1 ∀ i ∈ I_1, v_k ∈𝒱(X_i)
p_i^T A_i v_k - τ_i < -ϵ_1 ∀ i ∈ I_0, v_k ∈𝒱(X_i)
p_i^T v_k + q_i > ϵ_2 ∀ i ∈ I_1, v_k ∈𝒱(X_i)
p_i^T v_k > ϵ_2 ∀ i ∈ I_0, v_k ∈𝒱(X_i)
V_i(v_k) = V_j(v_k) ∀ v_k ∈𝒱(X_i) ∩𝒱(X_j), i ≠ j
τ_i ≥ 0 ∀ i ∈ I
where τ_i is the slack variable associated to cell X_i, and ϵ_1,ϵ_2>0. By design, we can state the following result.
The optimization problem in (<ref>) is always feasible.
This result follows directly from the construction of the optimization problem: the slack variables τ_i can always be chosen large enough to satisfy the constraints.
The solution to this optimization problem yields a valid Lyapunov function if and only if all the slack variables are zero. If the cost function is non-zero, the Lyapunov function is non-decreasing at some vertices. In fact, no Lyapunov function associated with the current partition exists. It may be possible to refine the partition, meaning to divide regions within it, in order to increase the capacity of the Lyapunov function and then repeat the search using this higher-capacity function. In the following section, the refinement process is described in detail.
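To make the construction above concrete, the sketch below assembles the vertex constraints for a single cell that does not contain the origin and minimizes its slack variable with an off-the-shelf LP solver. It is a deliberately minimal, single-cell illustration: the continuity constraints that couple neighbouring cells are omitted, and the cell, dynamics, and tolerances are placeholder values rather than the settings used in the experiments.

import numpy as np
from scipy.optimize import linprog

# One bounded cell X_i (not containing the origin) and its local dynamics x_dot = A x + a.
verts = np.array([[1.0, 0.0], [2.0, 0.0], [2.0, 1.0], [1.0, 1.0]])
A = np.array([[-1.0, 0.0], [0.0, -1.0]])
a = np.zeros(2)
eps1, eps2 = 1e-4, 1e-4

# Decision variables z = [p_1, p_2, q, tau]; the objective is the slack tau.
c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub, b_ub = [], []
for v in verts:
    f = A @ v + a
    A_ub.append([f[0], f[1], 0.0, -1.0]); b_ub.append(-eps1)     # decrease: p.(Av+a) - tau <= -eps1
    A_ub.append([-v[0], -v[1], -1.0, 0.0]); b_ub.append(-eps2)   # positivity: -(p.v + q) <= -eps2

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 3 + [(0, None)], method="highs")
p, q, tau = res.x[:2], res.x[2], res.x[3]
print("p =", p, " q =", q, " slack =", tau)    # zero slack -> the cell is certified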
§.§ Refinement
A refinement of the current partition is intended to enhance the flexibility of the Lyapunov function search process. To achieve flexibility, a cell X_i with a nonzero slack variable will be divided into smaller sub-cells.
In order to keep things simple, we assume that the refinement of X_i will result in the creation of two new subcells, X_i_1 and X_i_2. For each subcell, we can parameterize the Lyapunov function as V_i_1=p_i_1^Tx+q_i_1 and V_i_2=p_i_2^Tx+q_i_2.
As a result, the candidate Lyapunov function for cell X_i is replaced by a higher-capacity, more flexible function. However, the new Lyapunov function also has more parameters, p_i and q_i, as well as more constraints. Increasing the number of parameters and constraints might increase the computational complexity of solving (<ref>), since the computational complexity of linear optimization with n parameters and accuracy parameter ϵ is O(n^3.5 log(n/ϵ)) <cit.>.
Therefore, refinement must be implemented with an intelligent approach; otherwise, the complexity of the computation may increase unnecessarily.
We utilize a vertex representation for the refinement process to represent cells within a partition, which are convex polytopes. The process of refinement for cells involves adding a new vertex on the cell's boundary and then forming new sub-cells. For this section, we will begin by defining a few concepts and definitions that will be useful for describing these two steps.
The first concept is the simplex region, a bounded region with the smallest possible number of vertices in ℝ^d. A polytope's faces of dimension one are called edges <cit.>. For cell X_i, we collect the edges in the set ℰ(X_i), and each edge is represented by a pair of vertices (v_j, v_k), where v_j, v_k ∈𝒱(X_i). The edges of convex polytopes can be obtained by using MILP as described in <cit.>.
It is worth emphasizing that the edges containing the origin, where v_j = 0 or v_k = 0, are not included in the set of edges ℰ(X_i). By making this assumption, we ensure that refinement will not be applied to edges containing the origin. Therefore, if i ∈ I_0, the subcells of X_i will always contain the origin after refinement.
For the cell X_i, with dynamics ẋ = A_i x + a_i and the candidate Lyapunov function V_i = p_i^T x + q_i, we define the following sets and functions.
* We can find the vector field and the derivative of the Lyapunov function at a vertex, v_j, using the following functions:
𝐟(X_i, v_j) = A_i v_j + a_i,
V̇(X_i, v_j) = p_i^T 𝐟(X_i, v_j),
where 𝐟(X_i, v_j) ∈ℝ^n is the vector field of the local dynamics at the vertex v_j, and V̇(X_i, v_j) is the derivative of the Lyapunov function at that vertex in X_i.
* The vertices of the longest edge of a cell can be obtained using the following function:
L_max(i) = argmax_(v_j,v_k) ∈ℰ(X_i) | v_j - v_k |_2
* The following function, s(i), can be used to capture changes in the sign of the derivative of a candidate Lyapunov function over a cell:
s(i) = 1 if sgn(V̇(X_i, v_j)) ≥ 0 ∀ v_j ∈𝒱(X_i),
-1 if sgn(V̇(X_i, v_j)) ≤ 0 ∀ v_j ∈𝒱(X_i),
0 otherwise,
where the sgn(x) is the standard sign function. This function generates zero whenever the sign of the derivative of the candidate Lyapunov function in the cell X_i changes. Otherwise, this function generates either 1 or -1 depending on the sign of the derivative of the candidate Lyapunov function within the cell X_i.
* With s(i) being 0, the following set provides the vertex pairs of the edges along which the sign of the derivative of the Lyapunov function changes:
c_V(i) = {(v_j, v_k) ∈ℰ(X_i) : s(i) = 0, V̇(X_i, v_j) V̇(X_i, v_k) < 0}.
If s(i) = 0, there may be multiple edges along which the sign of the derivative of the Lyapunov function changes.
* The following function determines the vertices of the edge with the largest variation in the derivative of the candidate Lyapunov function:
ΔV̇_max(i) = argmax_(v_j,v_k) ∈ℰ(X_i) |V̇(X_i, v_j) - V̇(X_i, v_k)|.
* We aim to determine the edge along which the vector field exhibits the greatest variation in direction. Therefore, we use the following function to find the edge with the smallest cosine between the vector fields at its vertices:
cos_min(i) = argmin_(v_j,v_k) ∈ℰ(X_i) ⟨𝐟(X_i, v_j), 𝐟(X_i, v_k)⟩ / (|𝐟(X_i, v_j)|_2 |𝐟(X_i, v_k)|_2).
Now, we can delve into the refinement process.
§.§.§ Finding new vertices
The first step in the refinement process is to introduce new vertices on the boundary of the cells in I_s = {i ∈ I : τ_i > 0}. The new vertex for the cell X_i can be obtained using the following equation:
v_new_i=α v_j+β v_k,
which is a linear combination of two vertices of an edge. Based on the splitting approach which will be introduced in this section, α,β, v_j, and v_k in (<ref>) could be different.
Here we consider three different approaches for finding the new vertices.
Naive refinement
The first algorithm is inspired by <cit.>. The original algorithm was described for simplex regions and restricted to 2-D problems. In order to make the comparison possible, we generalize the method to all types of regions. The refinement of the cell X_i based on <cit.> is described in Algorithm <ref>.
The naive algorithm adds a new vertex exclusively to the longest edge, denoted as L_max, of cell X_i that has the largest slack variable, as determined by (<ref>). This method creates sub-cells with the largest possible volume without considering the candidate Lyapunov function or local dynamics. Consequently, it may lead to unsatisfactory results.
Selecting vertices randomly could increase computational complexity without necessarily improving the refinement process. Thus, it is crucial to choose new vertices intelligently.
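For reference, a minimal sketch of this naive rule is given below; splitting the longest edge at its midpoint (α = β = 1/2) is one natural choice for the split point and is an assumption of the sketch rather than a prescription of the cited algorithm.

import numpy as np

def longest_edge_midpoint(verts, edges):
    """Naive refinement: return the midpoint of the longest edge of the cell."""
    v_j, v_k = max(edges, key=lambda e: np.linalg.norm(verts[e[0]] - verts[e[1]]))
    return 0.5 * verts[v_j] + 0.5 * verts[v_k]

# Placeholder cell: a rectangle with its four boundary edges (vertex indices).
verts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(longest_edge_midpoint(verts, edges))     # [1. 0.]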
Lyapunov-based refinement
To address the challenge with the naive refinement, a new approach is proposed that leverages the candidate Lyapunov function to make more informed decisions when selecting new vertices. The basic principle behind this method is, for every cell X_i with i ∈ I_s, to find a set of points P(i) = {v_new_i∈∂ X_i : V̇(X_i, v_new_i) = 0}.
Using these points, v_new_i∈ P(i), the cell X_i can be divided into two sub-cells, X_i_1 and X_i_2, where s(i_2) = -s(i_1). In the case of s(i) = 0, we know that P(i) ≠∅.
Therefore, we can find these points using the following convex problem in cell X_i.
max_α,β 0
s.t. αV̇(X_i, v_j) + βV̇(X_i, v_k) = 0,
α + β = 1,
0 ≤α, β≤ 1,
(v_j, v_k) ∈ℰ(X_i).
An explanation of how to find i, v_j and v_k in (<ref>) and other details about finding a new vertex using Lyapunov-based refinement can be found in Algorithm <ref>.
If s(i) = 1, then P(i) = ∅, so we choose the new vertex on the edge obtained from ΔV̇_max(i).
In contrast to the previous method that focused only on the cell with the largest slack variable, the Lyapunov-based refinement is now applied to all cells with nonzero slack variables, denoted as i ∈ I_s. As a result of this broader approach, each relevant cell will be refined based on its individual candidate Lyapunov function.
However, it is important to note that the coefficient vector p_i used in the refinement process may change significantly in the next iteration. Therefore, this approach may not be suitable in all cases, as the optimization process in the subsequent steps can alter the candidate Lyapunov function.
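Since V̇ is affine along an edge, the convex problem above admits a closed-form solution once an edge with a sign change has been selected. The sketch below, with placeholder dynamics and an arbitrary candidate p_i, illustrates this; edge selection and the fallback to ΔV̇_max when no sign change exists follow Algorithm <ref>.

import numpy as np

def lyapunov_split_point(v_j, v_k, A, a, p):
    """Point on the edge (v_j, v_k) where the derivative of V_i = p.x + q_i
    vanishes, assuming the derivative changes sign between the two vertices."""
    vdot_j = p @ (A @ v_j + a)
    vdot_k = p @ (A @ v_k + a)
    if vdot_j * vdot_k >= 0:
        return None                           # no sign change on this edge
    alpha = vdot_k / (vdot_k - vdot_j)        # solves alpha*vdot_j + (1-alpha)*vdot_k = 0
    return alpha * v_j + (1.0 - alpha) * v_k

# Placeholder data: the derivative changes sign along the chosen edge.
A, a = np.array([[0.0, 1.0], [-1.0, -0.2]]), np.zeros(2)
p = np.array([1.0, 1.0])
print(lyapunov_split_point(np.array([1.0, 0.0]), np.array([0.0, 1.0]), A, a, p))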
Vector field refinement
To address the problem with the Lyapunov-based refinement, the search for new vertices should be conducted using a method that is not influenced by the optimization process in the subsequent steps.
The proposed method leverages the vector field information of the dynamics, which remains unchanged during the optimization process.
The underlying heuristic behind this method is that the direction or magnitude of the vector fields along an edge may undergo significant changes within a cell X_i where i ∈ I_s. Consequently, a higher-capacity PWA function may be required to represent the Lyapunov function within X_i accurately.
As illustrated in Fig. <ref>, the vector field direction in a cell can exhibit substantial variations, such as a flip from v_1 to v_2. In such cases, a simple function may struggle to approximate the level set accurately. To mitigate this, the method adds a new vertex, v_new_i, between (v_1, v_2) such that ∠(𝐟(X_i, v_1), 𝐟(X_i, v_new_i)) = ∠(𝐟(X_i, v_2), 𝐟(X_i, v_new_i)).
Consequently, after each refinement process, the greatest angle between the vector fields of an edge in cell X_i is divided in half.
The process of finding a new vertex using the vector field refinement is outlined in Algorithm <ref>, which provides a detailed description of the method.
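One simple way to realize this rule numerically is scalar root finding on the angle difference along the edge, exploiting the fact that the vector field is affine along it. The sketch below illustrates this idea with placeholder dynamics; it is not necessarily identical to Algorithm <ref>.

import numpy as np
from scipy.optimize import brentq

def vector_field_split_point(v_j, v_k, A, a):
    """Point v(t) = v_j + t (v_k - v_j) whose vector field makes equal angles
    with the vector fields at the two end vertices."""
    F_j, F_k = A @ v_j + a, A @ v_k + a

    def angle(u, w):
        c = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
        return np.arccos(np.clip(c, -1.0, 1.0))

    def gap(t):                                   # changes sign between t = 0 and t = 1
        F_t = A @ (v_j + t * (v_k - v_j)) + a     # the field is affine along the edge
        return angle(F_j, F_t) - angle(F_k, F_t)

    t_star = brentq(gap, 1e-6, 1.0 - 1e-6)
    return v_j + t_star * (v_k - v_j)

# Placeholder dynamics whose field direction rotates noticeably along the edge.
A, a = np.array([[0.0, 1.0], [-4.0, -0.5]]), np.zeros(2)
print(vector_field_split_point(np.array([1.0, 0.0]), np.array([0.0, 1.0]), A, a))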
Before moving on to the next step, storing the new vertices created by these algorithms in the following buffer is necessary.
B = {v_new_i∈ℝ^n : i ∈ I_s}.
Now we can proceed to the next step, which is forming sub-cells.
§.§.§ Forming sub-cells
In order to form sub-cells, Johansson <cit.> proposed remedies for 2-D systems; however, this method is limited to simplex cells. It was suggested in <cit.> that triangulation methods be used for non-simplex regions, but no specific method or implementation is presented. It has also been proposed in <cit.> to apply Delaunay triangulation to all cells; however, the results have been limited to 2-D examples.
We apply Delaunay triangulation to overcome the challenges associated with forming sub-cells for non-simplex cells and cells in higher dimensions (n>2), which would be challenging to accomplish manually.
The Delaunay triangulation of a set of points in ℝ^d is defined to be the triangulation such that the circumcircle of every triangle in the triangulation contains no point from the set in its interior. Such a unique triangulation exists for every point set in ℝ^d, and it is the dual of the Voronoi diagram.
Moreover, the Delaunay triangulation maximizes the minimum angle in each triangle <cit.>. DT(𝒱(X_i)) denotes the Delaunay triangulation of the vertices of the cell X_i. The process of implementing Delaunay triangulation for a single cell is illustrated in Fig. <ref>.
Delaunay triangulation will also handle the continuity of the Lyapunov function if the partition is composed of multiple cells.
To illustrate how continuity is preserved, let us consider v_new_i, the new vertex obtained using (<ref>) for the cell X_i. If v_new_i∈ X_i ∩ X_j, then v_new_i must also be considered as a new vertex for the cell X_j, and DT(𝒱(X_i) ∪{v_new_i}) and DT(𝒱(X_j) ∪{v_new_i}) should be computed. Consequently, even after refinement, continuity is guaranteed by (<ref>). In general, in order to implement Delaunay triangulation within the current partition, we follow the steps below.
* First, we must obtain the following set containing cells that required refinement.
I_split = {i ∈ I : X_i ∩ B ≠∅}.
* Then, we need to find the vertices located on the boundary of the cell X_i where i∈ I_split using the following set.
𝒱_new(i) = {v_new_j : v_new_j∈ X_i ∩ B}, i ∈ I_split.
* Then, we can form the new sub-cells using DT(𝒱(X_i) ∪𝒱_new(i)) for i ∈ I_split (see the sketch below).
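In practice, the triangulation step can be delegated to a standard computational-geometry library. The following minimal sketch uses scipy's Delaunay triangulation on the vertices of one non-simplex cell augmented with a proposed boundary vertex; the coordinates are placeholders for illustration only.

import numpy as np
from scipy.spatial import Delaunay

# Vertices of one non-simplex cell plus a new vertex proposed on its boundary.
cell_verts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 1.0], [0.0, 1.0]])
v_new = np.array([[1.0, 0.0]])                 # lies on an edge of the cell

pts = np.vstack([cell_verts, v_new])           # DT of the augmented vertex set
tri = Delaunay(pts)

# Each row of tri.simplices indexes the vertices of one simplex sub-cell.
for simplex in tri.simplices:
    print(pts[simplex])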
The refinement process based on the naive approach using Delaunay triangulation is shown in Fig. <ref>. As can be seen, sub-cells are created only in the simplex cells. The Lyapunov-based and vector-field methods perform differently, as shown in Fig. <ref> and Fig. <ref>, respectively.
§ RESULTS
The paper presents seven examples to demonstrate the search performance for a Lyapunov function using the algorithm described in Algorithm <ref>. The computations are implemented using the Mosek optimization package <cit.> and Python 3.9 on a computer with a 2.1 GHz processor and 8 GB RAM.
During the computations, a tolerance of 10^-8 is used to determine if a number is nonzero. In all the examples, the values of ϵ_1 and ϵ_2 are set to 10^-4.
These examples aim to showcase the effectiveness and efficiency of the proposed algorithm in finding valid Lyapunov functions within reasonable computation times.
[4-D Example <cit.>]
For this example, we will use the 4-D MPC example presented in <cit.> as follows:
x_t+1= [ 0.4346 -0.2313 -0.6404 0.3405; -0.6731 0.1045 -0.0613 0.3400; -0.0568 0.7065 -0.086 0.0159; 0.3511 0.1404 0.2980 1.0416 ]x_t+
[ 0.4346,-0.6731,-0.0568,0.3511 ]u_t.
It includes the same details as <cit.>, such as a state constraint of ‖ x ‖_∞≤ 4, an input constraint of ‖ u ‖_∞≤ 1, a prediction horizon of T=10, a stage cost of Q=10I and R=1.
Explicit MPC produces a dynamic with 193 cells. To ensure that the origin is a vertex, we refined the cell with the origin on its interior first. Our next step is to convert the discrete-time dynamics into continuous-time dynamics with a sampling time t_s=0.01. Finally, We searched for the continuous Lyapunov function using Algorithm <ref> with all refinement techniques. The Algorithm <ref> timed out after 2000 seconds using the naive refinement after 31 iterations.
Algorithm <ref> found the Lyapunov function in 1200 seconds using the Lyapunov-based refinement with 5874 cells. With the vector field refinement, Algorithm <ref> found the solution in 280 seconds by generating 3086 cells.
In comparison with <cit.>, the Lyapunov function using vector-field refinement requires a shorter computational time.
[4-D controllable canonical dynamic]
Following is a simple 4-D example with stable canonical controllable dynamics with condition number 10 to illustrate the meaningful difference between the refinement methods.
ẋ= [ 0 1 0 0; 0 0 1 0; 0 0 0 1; -24 -50 -35 -10 ]x,
where ‖ x ‖_∞≤ 5 and the initial partition includes 16 simplex cells around the origin with the dynamic (<ref>). The search Algorithm <ref> found the valid Lyapunov function after 43 seconds with 1054 cells created as a result of vector field refinement, whereas Lyapunov-based refinement required 106 seconds with 2743 cells, and naive search required 1546 seconds with 6943 cells.
[Path Following Wheeled Vehicle<cit.>]
The following kinematic model is used to analyze the stability of a path following wheeled vehicle in <cit.>:
ḋ_e=ν sin(θ_e),
θ̇_e = ω - νκ(s) cos(θ_e)/(1 - d_eκ(s)).
In equation (<ref>), we have the state variables θ_e, which represents the angle error, and d_e, which represents the distance error. The control input is denoted as ω.
In this study, we used a single-hidden-layer ReLU network with 50 neurons, as described in <cit.>, to identify the dynamics (<ref>) together with the NN controller <cit.> in the region ‖ x ‖_∞≤ 0.8. Moreover, we used the vertex-based method along with vector field refinement to obtain the Lyapunov function. As can be seen in Fig. <ref>,
a comparison was made between the ROA obtained by the proposed method and the ROA obtained using the NN Lyapunov function <cit.>.
[Multi-agent consensus]
The Hegselmann-Krause model is a widely studied model in the literature, which involves N autonomous agents with state variables ξ_i. Each agent's dynamics are given by the equation:
ξ̇_̇i̇ = ∑_j=1^Nϕ(ξ_i,ξ_j)(ξ_j-ξ_i)
where i ranges from 1 to N, and ϕ:[0,1]^2→{0,1} represents a weight function as defined in the reference <cit.>.
The stability analysis results for this model are presented in Fig. <ref>. We observed that a valid Lyapunov function can be obtained without requiring any refinement. Therefore, the choice of different splitting approaches does not have any impact on this particular example. The details are provided in Table <ref>.
[2-D example from <cit.>,<cit.>,<cit.>]
This system has been presented in four different regions as follows:
Z_1={x∈ℝ^2: -x_1+x_2≥ 0, x_1+x_2≥ 0}
Z_2={x∈ℝ^2: -x_1+x_2≥ 0, -x_1-x_2≥ 0}
Z_3={x∈ℝ^2: x_1-x_2≥ 0, -x_1-x_2≥ 0}
Z_4={x∈ℝ^2: x_1-x_2≥ 0, x_1+x_2≥ 0}
and the dynamics are as follows:
Ω_p: ẋ = [ -0.1 1; -5 -0.1 ] x if x ∈ Z_1 or x ∈ Z_3,
[ -0.1 5; -1 -0.1 ] x if x ∈ Z_2 or x ∈ Z_4.
The level sets and the vector fields are shown in Fig.<ref>. The Lyapunov function was obtained by refining the cells. In this example, all three refinement methods perform similarly in finding the Lyapunov function. The details about this example are presented in Table<ref>. The refinement process creates 128 cells in the partition.
[Explicit model-predictive controller<cit.>]
In this study, the stability of the following discrete time dynamic is investigated using explicit MPC, similar to <cit.>.
x_t+1 = [ 1 1; 0 1 ]x_t+[ 1; 0.5 ]u_t
As in <cit.>, the MPC problem has the same specification such as stage cost, actuator, and state constraints.
We use the MPT3 toolbox <cit.> in Matlab to obtain an explicit controller. A sampling time of t_s=0.01s was used to obtain the continuous form of the dynamic (<ref>) with the explicit MPC controller.
The dynamics generated by the explicit MPC have a cell in which the origin is not one of the vertices. As a result, we refine this cell with the origin as a new vertex, DT(𝒱(X_i) ∪{0}), and then start Algorithm <ref>.
Fig. <ref> depicts the level sets of the Lyapunov function.
The Lyapunov function was found by all three refinement algorithms within one second. The Lyapunov-based refinement and the vector-field refinement, however, produce a greater number of cells than the naive refinement.
[Inverted Pendulum<cit.>]
It is common in the literature to use an inverted pendulum as an example with the following state-space model:
[ ẋ_1; ẋ_2 ] = [ x_2; (g/l) sin(x_1) - (c/(m l^2)) x_2 ] + [ 0; 1/(m l^2) ] u
where m = 0.15 kg, l = 0.5 m, c = 0.1 N s/rad, and g = 9.81 m/s^2 <cit.>.
First, we used a single-hidden layer ReLU neural network consisting of 20 neurons in the region ‖ x ‖_∞≤ 4 to identify the uncontrolled dynamics. Subsequently, we designed a ReLU neural network controller as described in <cit.>.
By incorporating the ReLU NN controller into the system, we were able to achieve stability.
We searched for the Lyapunov function using Algorithm <ref> with the Vector-field refinement.
The results are compared with the linear-quadratic regulator (LQR) <cit.> and the NN Lyapunov function <cit.> in Fig. <ref>. The Lyapunov function obtained using the proposed approach has a larger ROA. It is important to note that the valid region for <cit.> and <cit.> is ‖ x ‖_2≤ 4.
Moreover, the computational time for each example is presented in Table <ref>. Having run each simulation ten times, the computational time is the average elapsed time. Table <ref> provides the number of cells created by each refinement technique. The vector field refinement performs better in terms of computation time and number of cells than the Lyapunov-based and naive approaches, specifically in the 4-D examples.
§ DISCUSSION
We have shown the effectiveness of our automated approach for stability verification through various examples. Our proposed refinement methods outperform existing techniques, and although our method does not specifically aim to maximize the region of attraction, our results are comparable to other methods. However, there are challenges to consider when applying this algorithm to a wider range of problems.
*Limitations
The computational complexity and performance of the proposed algorithm depend on the increase in the number of cells and optimization parameters during the refinement process. In a space ℝ^n, the number of cells should satisfy m ≥ 2^n. The simplest case, where the origin is surrounded by 2^n simplex cells, results in an optimization problem with 2^n × (n+1) parameters and 2^(n+1)× n inequality constraints. The number of constraints increases with the presence of non-simplex cells. The computational complexity of the optimization process significantly depends on the dimensionality n, leading to longer computation times as cells are further divided. In some cases, the algorithm may encounter challenges and longer computation times due to increased complexity.
To compare the results of different examples in terms of computational time and the number of cells, we introduce the following concepts:
T_opt_i = ∑_j=1^i t_opt_j/∑_j=1^N t_opt_j
N_r_i = n_r_i/∑_j=1^N n_r_j.
T_opt represents the normalized cumulative optimization time, N_r indicates the normalized number of regions, t_opt represents the time spent finding the solution with MOSEK, n_r represents the number of regions, and N represents the total number of iterations needed to solve the optimization problem. The subscripts i and j indicate the optimization iteration.
The relationship between the normalized accumulative optimization time (T_opt) and the normalized number of cells (N_r) is investigated in three different examples in Fig.<ref>. The graphs demonstrate almost linear behavior for Example <ref> and Example <ref>, while Example <ref> exhibits an almost exponential trend. This indicates that increasing the number of cells could present a significant challenge for our proposed technique. Additionally, the refinement process may result in cells with nearly coplanar vertices, which can introduce numerical difficulties. It is essential to consider these complexities and challenges when applying our algorithm to various systems.
§ CONCLUSION
This paper presents a computational framework for obtaining valid Lyapunov functions. The framework addresses the challenges of formulating the Lyapunov conditions as a linear optimization problem, which does not always guarantee a valid Lyapunov function. To overcome this limitation, two novel refinement methods are proposed, enhancing the flexibility of the candidate Lyapunov function. We used the Delaunay triangulation to automate the refinement process. We demonstrated that the proposed approach is effective based on experiments and comparisons with alternative approaches. The experiments successfully solve a 4-D example in a short time, highlighting the practicality and efficiency of the framework. The proposed framework offers a more effective method for generating valid Lyapunov functions, offering flexibility through refinement methods and automating the process.
|
http://arxiv.org/abs/2307.04099v1 | 20230709052131 | GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty | [
"Tao Wu",
"Tie Luo",
"Donald C. Wunsch"
] | cs.LG | [
"cs.LG",
"cs.CR",
"cs.CV"
] |
GNP Attack: Transferable Adversarial Examples via Gradient Norm Penalty
Tao Wu, Tie Luo, Donald C. Wunsch
=======================================================================
Adversarial examples (AE) with good transferability enable practical black-box attacks on diverse target models, where insider knowledge about the target models is not required. Previous methods often generate AE with no or very limited transferability; that is, they easily overfit to the particular architecture and feature representation of the source, white-box model and the generated AE barely work for target, black-box models. In this paper, we propose a novel approach to enhance AE transferability using Gradient Norm Penalty (GNP). It drives the loss function optimization procedure to converge to a flat region of local optima in the loss landscape. By attacking 11 state-of-the-art (SOTA) deep learning models and 6 advanced defense methods, we empirically show that GNP is very effective in generating AE with high transferability. We also demonstrate that it is very flexible in that it can be easily integrated with other gradient based methods for stronger transfer-based attacks.
Adversarial machine learning, Transferability, Deep neural networks, Input gradient regularization
§ INTRODUCTION
Deep Neural Networks (DNNs) are the workhorse of a broad variety of computer vision tasks but are vulnerable to adversarial examples (AE), which are data samples (typically images) that are perturbed by human-imperceptible noises yet result in odd misclassifications.
This lack of adversarial robustness curtails and often even prevents deep learning models from being deployed in security or safety critical domains such as healthcare, neuroscience, finance, and self-driving cars, to name a few.
Adversarial examples are commonly studied under two settings, white-box and black-box attacks. In the white-box setting, adversaries have full knowledge of victim models, including model structures, parameters and weights, and loss functions used to train
the models. Therefore, they can directly obtain the gradients of the victim models and seek adversarial examples by misleading the loss function toward incorrect predictions. White-box attacks are important for evaluating and developing robust models and serve as the backend method for many black-box attacks, but is limited in use due to its requirement of having to know the internal details of target models. In the black-box setting, adversaries do not need specific knowledge about victim models other than their external properties (type of input and output). Two types of approaches, query-based and transfer-based, are commonly studied for black-box attacks. The query-based approach attempts to estimate the gradients of a victim model by querying it with a large number of input samples and inspecting the outputs. Due to the large number of queries, it can be easily detected and defended. The transfer-based approach uses surrogate models to generate transferable AE which can attack a range of models instead of a single victim model. Hence it is a more attractive approach to black-box attacks.
This paper takes the second approach and focuses on designing a new and effective method to improve the transferability of AE. Several directions for boosting adversarial transferability have appeared. Dong et al. <cit.> proposed momentum based methods. Attention-guided transfer attack (ATA) <cit.> uses attention maps to identify common features for attacking. Diverse Input Method (DIM) <cit.> calculates the average gradients of augmented images. <cit.> generates transferable AE using an ensemble of multiple models.
Despite the efforts of previous works, there still exists a large gap of attack success rate between the transfer-based setting and the ideal white-box setting.
In this paper, we propose a novel method to boost adversarial transferability from an optimization perspective. Inspired by the concept of “flat minima” in the optimization theory <cit.>
which improves the generalization of DNNs, we seek to generate AE that lie in flat regions where the input gradient norm is small, so as to “generalize” to other victim models that AE are not generated on. In a nutshell, this work makes the following contributions:
* We propose a transfer-based black-box attack from a new perspective that seeks AE in a flat region of loss landscape by penalizing the input gradient norm.
* We show that our method, input gradient norm penalty (GNP), can significantly boost the adversarial transferability for a wide range of deep networks.
* We demonstrate that GNP can be easily integrated with existing transfer-based attacks to produce even better performance, indicating a highly desirable flexibility.
§ METHOD
Given a classification model f(x): x ∈𝒳→ y ∈𝒴 that outputs a label y as the prediction for an input x, we aim to craft an adversarial example x^* which is visually indistinguishable from x but will be misclassified by the classifier, i.e., f(x^*) ≠ y. The generation of AE can be formulated as the following optimization problem:
max_x^*ℓ(x^*, y), s.t. ‖ x^* - x ‖_p ≤ϵ,
where the loss function ℓ(·, ·) is often the cross-entropy loss, and the ℓ_p-norm measures the discrepancy between x and x^*. In this work, we use p=∞, which is commonly adopted in the literature. Optimizing Eq. (<ref>) requires the gradient of the loss function, but this is not feasible in the black-box setting. Therefore, we aim to create transferable AE on a source model that can also attack many other target models.
We develop a new method to boost adversarial transferability from a perspective inspired by “flat optima” in optimization theory. See Fig. <ref>. If an AE is located at a sharp local maximum, it will be sensitive to the difference of decision boundaries between the source model and target models. In contrast, if it is located at a flat maximum region, it is much more likely to result in a similar high loss on other models (which is desired).
Thus, we seek to generate AE in flat regions. To this end, we introduce a gradient norm penalty (GNP) term into the loss function, which penalizes the gradient norm of the loss function with respect to input. The reason is that flat regions are characterized by small gradient norms, hence penalizing the gradient norm will encourage the optimizer to find an AE that lies in a flat region. We thus enhance the adversarial transferability since a minor shift of decision boundary will not significantly change the loss value (prior work has shown that different networks often share similar decision boundaries).
§.§ Baseline Attacks
GNP is a very flexible method in that it can be easily incorporated into any existing gradient based method to boost its strength. We consider the following existing, gradient based attacks to demonstrate the effect of GNP.
Later in sec:experiments, we will also show how GNP works effectively on state-of-the-art transfer-based attacks as well.
Fast Gradient Sign Method (FGSM). FGSM <cit.> is the first gradient-based attack, which crafts an AE x^adv by attempting to maximize the loss function ℓ(x^adv, y; θ) with a one-step update:
x^adv=x+ϵ·sign(∇_x ℓ(x, y; θ)),
where ∇_x ℓ(x, y; θ) is the gradient of the loss function with respect to x, and sign(·) denotes the sign function.
Iterative Fast Gradient Sign Method (I-FGSM). I-FGSM extends FGSM to an iterative version:
x_t+1^adv = x_t^adv + α·sign(∇_x_t^advℓ(x_t^adv, y; θ)),
x_0^adv = x,
where α=ϵ / T is a small step size and T is the number of iterations.
Momentum Iterative Fast Gradient Sign Method (MI-FGSM). MI-FGSM <cit.> integrates a momentum term into I-FGSM and improves transferability by a large margin:
g_t+1 = μ· g_t + ∇_x_t^advℓ(x_t^adv, y; θ)/‖∇_x_t^advℓ(x_t^adv, y; θ)‖_1,
x_t+1^adv = x_t^adv + α·sign(g_t+1),
where g_0 = 0 and μ is a decay factor.
§.§ GNP Attack
As explained in sec:method, we aim to guide the loss function optimization process to move into a flat local optimal region.
To this end, we introduce GNP to penalize large gradient norm, as
L(x, y) = ℓ(x, y) - λ‖∇_x ℓ(x, y)‖_2
where ℓ(·) is the original loss function of the source model, and the regularization term is our GNP, which encourages small gradient norm when finding local maxima.
For gradient based attacks (e.g., FGSM, I-FGSM, MI-FGSM, etc.), we need to calculate the gradient
of the new loss (<ref>). To simplify notation, we omit y in the loss function since we are calculating gradient with respect to x. Using the chain rule, we have
∇_x L(x) = ∇_xℓ(x) - λ∇_x^2 ℓ(x) ∇_xℓ(x)/‖∇_xℓ(x)‖
This equation involves the calculation of the Hessian matrix H = ∇_x^2 ℓ(x). This is often infeasible because of the curse of dimensionality (such a Hessian matrix in DNNs tends to be too large due to the often large input dimension). Therefore, we take the first-order Taylor expansion together with the finite difference method (FDM) to approximate the following gradient:
∇_xℓ(x + rΔx) ≈∇_xℓ(x) + H rΔx
where Δx = ∇_x ℓ(x)/‖∇_xℓ(x)‖, and r is the step length that controls the neighborhood size. Thus we obtain the regularization term of (<ref>) as:
H∇_xℓ(x)/‖∇_xℓ(x)‖≈ [∇_xℓ(x + r∇_xℓ(x)/‖∇_xℓ(x)‖) - ∇_xℓ(x)]/r
Inserting (<ref>) back into (<ref>), we obtain the gradient of the regularized loss function as:
∇_x L(x) = (1+β) ∇_xℓ(x) - β∇_xℓ(x + r∇_xℓ(x)/‖∇_xℓ(x)‖)
where β=λ/r is the regularization coefficient. We summarize the algorithm of how GNP is integrated into I-FGSM in Algorithm <ref>, but I-FGSM can be replaced by any gradient based attack.
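As an illustration of how this update can be implemented, the snippet below evaluates the regularized gradient derived above with two backward passes and plugs it into a plain I-FGSM loop. The stand-in linear model, the random data, and the per-sample normalization of the batch gradient are simplifying assumptions of this sketch, not details of the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def gnp_gradient(model, x, y, r=0.01, beta=0.8):
    """(1 + beta) * g(x) - beta * g(x + r * g(x) / ||g(x)||),
    where g is the gradient of the cross-entropy loss w.r.t. the input."""
    x1 = x.clone().detach().requires_grad_(True)
    g1 = torch.autograd.grad(F.cross_entropy(model(x1), y), x1)[0]

    # Finite-difference point: a step of length r along the normalized gradient.
    g1_norm = g1.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *([1] * (x.dim() - 1)))
    x2 = (x + r * g1 / g1_norm).detach().requires_grad_(True)
    g2 = torch.autograd.grad(F.cross_entropy(model(x2), y), x2)[0]
    return (1.0 + beta) * g1 - beta * g2

# Toy usage: a few I-FGSM + GNP steps on a stand-in model and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
y = torch.randint(0, 10, (4,))
alpha, eps = 2 / 255, 8 / 255
x_adv = x.clone()
for _ in range(3):
    g = gnp_gradient(model, x_adv, y)
    x_adv = torch.clamp(x + torch.clamp(x_adv + alpha * g.sign() - x, -eps, eps), 0.0, 1.0)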
§ EXPERIMENTS
§.§ Experiment Setup
Dataset and models. We randomly sample 5,000 test images that can be correctly classified by all the models, from the ImageNet <cit.> validation set. We consider 11 SOTA DNN-based image classifiers: ResNet50 <cit.>, VGG-19 <cit.>, ResNet-152 <cit.>, Inc v3 <cit.>, DenseNet <cit.>, MobileNet v2 <cit.>, SENet <cit.>, ResNeXt <cit.>, WRN <cit.>, PNASNet <cit.>, and MNASNet <cit.>. Following the work in <cit.>, we choose ResNet50 as the source model and the remaining 10 models as target models.
Implementation Details. In experiments, the pixel values of all images are scaled to [0, 1]. The adversarial perturbation is restricted by 3 scales ϵ=4/255,8/255,16/255. The step length is set as r=0.01 and regularization coefficient β=0.8, we run 100 iterations for all attacks and evaluate model misclassification as attack success rate.
§.§ Experimental Results
§.§.§ Integration with baseline attacks
We first evaluate the performance of GNP by integrating it with baseline attacks including I-FGSM and MI-FGSM. The results are shown in tab:1. We use a pre-trained ResNet50 as the source model and evaluate the attack success rate (ASR) of the generated AE on a variety of target models under different scales of perturbation ϵ. GNP achieves significant and consistent improvement in all the cases. For instance, taking the average ASR of all the 10 target models under perturbation ϵ = 8/255, GNP outperforms I-FGSM and MI-FGSM by 26.51% and 13.67%, respectively. In addition, the improvement in the attack success rate on an individual model can be as large as 33.06%.
§.§.§ Integration with existing transfer-based attacks
Here we also evaluate the effectiveness of GNP when incorporated into other transfer-based attacks such as DIM <cit.> and TIM <cit.>. The results are given in tab:2 and show that DIM+GNP and TIM+GNP are clear winners over DIM and TIM alone, respectively. Specifically, DIM+GNP achieves an average success rate of 91.95% under ϵ = 16/255 for the 10 target models, and TIM+GNP outperforms TIM by a large margin of 16.28% under ϵ = 8/255. We note that we only present the integration of GNP with two typical methods here, but our method also applies to other, more powerful gradient-based attack methods.
§.§.§ Attacking “secured” models
For a more thorough evaluation, we also investigate how GNP will perform when attacking DNN models that have been adversarially trained (and hence are much harder to attack). We choose three such advanced defense methods to attack, namely, JPEG <cit.>, R&P <cit.> and NRP <cit.>. In addition, we choose another three ensemble adversarially trained (AT) models, which are even harder than regular AT models, and attack them: Inc-v3_ens3, Inc-v3_ens4 and IncRes-v2_ens1 <cit.>. We craft AE on the ResNet50 surrogate model with ϵ=16/255, and use DIM+TIM as the “backbone” to apply GNP. The results are presented in tab:3, where we can see that GNP again boosts ASR significantly against the six “secured” models, achieving consistent performance improvements of 11.46–14.37%.
§.§ Ablation Study
We conduct an ablation study on the hyper-parameters of the proposed GNP attack, i.e., the step length r and the regularization coefficient β. Since r represents the radius of the neighborhood that is flat around the current AE, a larger r is preferred; on the other hand, setting it too large will increase the approximation error of the Taylor expansion and thus mislead the AE update direction. The coefficient β balances the goals of fooling the surrogate model and finding flat optima. fig:ablation reports the results of our ablation study, where ASR is averaged over 10 target models (excluding the source ResNet50) attacked by I-FGSM + GNP with ϵ=8/255. We observe that adding the GNP regularization term clearly improves performance (as compared to β=0) and the performance gain is rather consistent for β in a wide range of 0.6–1.6. The step length r does not affect the performance gain much either, and r=0.01 appears to be the most stable. Thus, the ablation study reveals that GNP is not hyper-parameter sensitive and works well in a variety of conditions.
§ CONCLUSION
In this paper, we have proposed a new method for improving the transferability of AE from an optimization perspective, by seeking AE located at flat optima. We achieve this by introducing an input gradient norm penalty (GNP) which guides the AE search toward flat regions of the loss function. This GNP method is very flexible as it can be used with any gradient-based AE generation method. We conduct a comprehensive experimental study and demonstrate that our method can boost the transferability of AE significantly.
This paper focuses on untargeted attacks, but GNP can be rather easily applied to targeted attacks as well, by making a small change to the loss function. We plan to have a thorough investigation in future work.
|
http://arxiv.org/abs/2307.04363v1 | 20230710062314 | Diffusion dynamics of star-shaped macromolecules in dilute solutions | [
"Prabeen Kumar Pattnayak",
"Aloke Kumar",
"Gaurav Tomar"
] | cond-mat.soft | [
"cond-mat.soft"
] |
Polymer chains dissolved in a solvent take random conformations due to large internal degrees of freedom and are characterized geometrically by their average shape and size. The diffusive dynamics of such large macromolecules play an indispensable role in a plethora of engineering applications. The influence of the size of the polymer chain on its diffusion is well studied, whereas the same cannot be said for the shape of the polymer chain. In the present work, the influence of shape on the center-of-mass diffusion of the star-shaped chains in solution is investigated using Multi-particle Collision Dynamics. Star-shaped chains of varying degrees of functionality are modeled in a good solvent at infinite dilution. The radius of gyration(R_g) of the star-shaped chains follows a functionality-independent scaling law with the chain length(N), R_g ∼ N^ν, where ν∼ 0.627. The shape of the polymer chains is calibrated by relative shape anisotropy. Highly anisotropic star-shaped polymer chains are found to have a faster rate of diffusion along the translational direction due to a slower rate of rotational diffusion when the radius of gyration of the polymer chains is maintained constant.
§ INTRODUCTION
Polymeric fluids are a unique class of complex fluids that show a plethora of fascinating non-Newtonian behaviours<cit.>, which can be understood with the transport and rheological properties of the fluid. One key challenge of the complex fluid community is understanding how the macroscopic properties of the polymeric fluids arise from microscopic interactions of the macromolecules<cit.>. The polymer chain dissolved in a solvent can take a multitude of conformations due to its large internal degrees of freedom. The average shape and size are used to characterize a polymer chain geometrically: the radius of gyration is widely used to describe the size and the eigenvalues of the gyration tensor for the shape calibration<cit.>. Advances in controlled polymerization have led to synthesizing complex polymer structures like star-shaped, comb-shaped, H-shaped, ring, and many more.<cit.>. The diffusive dynamics of such complex polymer chains are of fundamental interest in the biophysics community and are ubiquitous in numerous engineering applications. Star-shaped polymers are used for the controlled delivery of drugs in biomedical applications<cit.> and as viscosity index modifiers in oil industries<cit.>. Polyethylene glycol stars are used for protein delivery<cit.>. Understanding macromolecular diffusion in a biological cell is important for its various functions, such as the movement of plasmids and transport of amino acids <cit.> <cit.>. Hence, the influence of the shape and size of the polymer chain on its diffusive dynamics in solution is essential.
Most studies on the diffusion of complex polymer chains have been done by keeping the length of the polymer chain constant. Using fluorescence microscopy, Robertson et al.<cit.> have shown a lower radius of gyration and a higher diffusion coefficient for circular DNA molecules than linear ones for the same chain length. Using Brownian dynamics simulations with hydrodynamic interaction, Kanaeda and Deguchi<cit.> have reported higher diffusion coefficients for the ring polymers than for the linear polymers of the same chain length. Hegde et al.<cit.> have reported similar findings for the ring polymers in comparison to the linear chains for the same chain length by using three different simulation techniques. Singh et al.<cit.> have shown, using Multi-particle Collision Dynamics(MPCD), that the center-of-mass diffusion coefficient of the star polymer chains decreases with an increase in their radius of gyration. Hence, when it comes to size, it is clear that the larger the polymer chain, the lower its center-of-mass diffusion coefficient. However, it is difficult to comment on the influence of the shape of the polymer chain on its diffusion from the same polymer chain length study as both shape and size are distinct for different polymer chain architectures<cit.>. Studies of the diffusion of complex polymer chains at fixed size are scarce. Hegde et al.<cit.> have reported a higher diffusion coefficient for the linear chains than the ring chains for the same radius of gyration using Molecular Dynamics, MPCD, and the Lattice Boltzmann method and also noted that size could not be the only factor that influences the diffusion of the chain. Therefore, the effect of the shape parameter on the center-of-mass diffusion of the polymer chains still remains an open question.
In this work, the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in solution is studied in the limit of infinite dilution using a mesoscopic coarse-grained simulation method, namely MPCD. For simulating the Brownian motion of the complex polymer chains in a solution, MPCD is frequently used as it incorporates both thermal fluctuation and long-range hydrodynamic interactions<cit.>. At first, the shape and size of star-shaped polymer chains with different functionality are analyzed using the gyration tensor and compared with linear polymer chains at the same chain length. Subsequently, the translational diffusion of six different types of chains (one linear and five star-shaped chains) with the same radius of gyration is studied using the center-of-mass mean square displacement, followed by their rotational diffusion using the reorientation correlation function. Finally, the diffusion study is correlated to the shape characterization study in order to find the effect of the shape parameter on the center-of-mass diffusion of the star-shaped polymer chains in a solution.
§ NUMERICAL FORMULATION
The coarse-grained bead-spring model represents the polymer chains dissolved in the solution. To replicate good solvent conditions, the excluded volume interactions between the monomer beads are modeled using the repulsive part of the 12-6 Lennard-Jones (LJ) potential, also known as Weeks-Chandler-Andersen potential<cit.> (U_WCA), defined as:
U_WCA(r) = 4ε[ ( σ_p/r)^12 - ( σ_p/r)^6 ] + ε for r ≤ 2^1/6σ_p,
U_WCA(r) = 0 otherwise,
where σ_p is the diameter of a monomer bead, r is the distance between two beads and ε = k_BT is the strength of interaction, k_B is the Boltzmann’s constant and T is temperature. The neighboring monomers of the polymer chain are connected with springs, the potential of which is given by Finitely Extensible Nonlinear Elastic (FENE)<cit.>, defined as:
U_FENE(r) = -1/2 k r_0^2 ln[ 1 - (r/r_0)^2 ] for r ≤ r_0,
U_FENE(r) = ∞ otherwise,
where k is the spring constant, and r_0 is the maximum length of the extension. The values of k and r_0 are 30 k_BT/σ_p^2 and 1.5 σ_p, respectively, as recommended by Kremer and Grest<cit.>. The bead-spring model, FENE potential, and Kremer-Grest parameters are widely used in the coarse-grained modeling of polymer chains<cit.>. The star-shaped polymer chains of varying degrees of functionality (number of arms) have been modeled by connecting different linear arms at their ends instead of connecting them to a single central monomer, ensuring equal flexibility of the arms for all the functionalities.
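In reduced units (k_BT = σ_p = 1), the two interactions above can be transcribed directly; the short sketch below does so with the Kremer-Grest parameters and is meant only as an illustration of the potentials, not as the LAMMPS input used for the simulations.

import numpy as np

def u_wca(r, eps=1.0, sigma_p=1.0):
    """Purely repulsive WCA pair potential for monomer excluded volume."""
    r = np.asarray(r, dtype=float)
    u = 4.0 * eps * ((sigma_p / r) ** 12 - (sigma_p / r) ** 6) + eps
    return np.where(r <= 2.0 ** (1.0 / 6.0) * sigma_p, u, 0.0)

def u_fene(r, k=30.0, r0=1.5):
    """FENE bond potential between neighbouring monomers (diverges at r0)."""
    r = np.asarray(r, dtype=float)
    u = np.full_like(r, np.inf)
    inside = r < r0
    u[inside] = -0.5 * k * r0 ** 2 * np.log(1.0 - (r[inside] / r0) ** 2)
    return u

r = np.linspace(0.8, 1.4, 7)
print(u_wca(r))
print(u_wca(r) + u_fene(r))    # total bonded interaction of the Kremer-Grest model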
The solvent is modeled explicitly as an ensemble of non-interacting point particles of finite mass (m) using a mesoscopic coarse-grained simulation technique, MPCD<cit.>. MPCD consists of alternating streaming and collision steps. In the streaming step, the MPCD particles with velocity v_i undergo ballistic motion and their positions (r_i) are updated as:
r_i( t + δ t) = r_i(t) + δ t v_i(t)
In the collision step, the simulation box is divided into cubic cells of equal size(a), and all the particles within a cell undergo stochastic collision. The collision of the MPCD particles is modeled using a momentum-conserving version of the Andersen Thermostat, also known as MPCD-AT<cit.>, in which the particle velocities (v_i) are updated as:
v_i(t + δ t) = v_cm(t) + v_i^ran - Δv_cm^ran
where v_cm is the center-of-mass velocity of the collision cell, v_i^ran is a random velocity selected from a Maxwell-Boltzmann distribution, and Δv_cm^ran is the change in center-of-mass velocity of the collision cell due to the addition of v_i^ran. During the streaming interval of MPCD, the positions, and velocities of the monomer beads evolve by the velocity-Verlet algorithm<cit.> with a time step δ t_MD. During the collision step, the monomers are considered MPCD particles and undergo stochastic collisions. The three components of v_i^ran are selected from a Gaussian distribution with variance k_BT/m for the solvent particles and k_BT/M for the monomer beads, where M is the mass of a monomer. This way of considering the monomers just like other MPCD particles in the collision step for modeling solvent-monomer interaction is often used in recent studies<cit.><cit.><cit.><cit.><cit.> due to its advantage of avoiding spurious depletion forces<cit.> which could lead to breakage of FENE bonds. Galilean invariance is ensured by randomly shifting the cells before each collision step by a vector with the three components randomly chosen from [-a/2,a/2]<cit.>. All the simulations have been performed using the MPCD-AT routines in LAMMPS<cit.>(Chen et al.<cit.> <cit.>).
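A compact numpy sketch of one streaming plus MPCD-AT collision step, including the random grid shift, is given below; the box size, particle numbers, and the omission of embedded monomers are simplifying assumptions for illustration and do not reproduce the production LAMMPS setup.

import numpy as np

rng = np.random.default_rng(0)

def mpcd_at_step(pos, vel, mass, box, a=1.0, dt=0.09, kT=1.0):
    """One MPCD step: ballistic streaming, then an Andersen-thermostat collision."""
    pos = (pos + dt * vel) % box                       # streaming + periodic wrap
    shift = rng.uniform(-a / 2, a / 2, size=3)         # random shift (Galilean invariance)
    cells = np.floor((pos + shift) / a).astype(int)
    n_c = int(box / a)
    keys = np.ravel_multi_index(cells.T % n_c, (n_c,) * 3)

    v_ran = rng.normal(0.0, np.sqrt(kT / mass)[:, None], size=vel.shape)
    for c in np.unique(keys):
        idx = np.where(keys == c)[0]
        m = mass[idx][:, None]
        v_cm = (m * vel[idx]).sum(0) / m.sum()         # cell centre-of-mass velocity
        dv_cm = (m * v_ran[idx]).sum(0) / m.sum()      # COM drift of the random draws
        vel[idx] = v_cm + v_ran[idx] - dv_cm           # momentum-conserving MPCD-AT rule
    return pos, vel

# Placeholder fluid: about 5 particles per cell in a small periodic box.
L = 8.0
n = int(5 * L ** 3)
pos = rng.uniform(0, L, size=(n, 3))
vel = rng.normal(0.0, 1.0, size=(n, 3))
mass = np.ones(n)
pos, vel = mpcd_at_step(pos, vel, mass, box=L)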
The size of the collision cells is taken to be the same as the size of the monomer beads, a = σ_p. The average density of the MPCD solvent equals 5m/σ_p^3. The mass of a monomer(M) is taken as 5m to achieve neutral buoyancy. The MD time step (δ t_MD) equals 0.002τ. The MPCD collision time step (δ t) is 0.09τ, where τ is the intrinsic unit of time, equal to √(mσ_p^2/k_BT). The resulting viscosity and Schmidt number(Sc) of the MPCD fluid are 4 k_BT/σ_p^3 and 12, respectively. The size of the cubic simulation box is increased with polymer chain length to avoid finite size effects, following previous studies<cit.><cit.>. The box size (L) equals 32σ_p for the set of chain lengths { 24σ_p, 36 σ_p, 48σ_p }, 48 σ_p for the set of chain lengths { 60σ_p, 84 σ_p }, and 64σ_p for the set of chain lengths { 108σ_p, 192 σ_p }. The equilibration simulation run is performed for 2×10^6 MD time steps. The results are time averaged over 5×10^8 MD time steps and ensemble-averaged over five system replicas, each with a unique set of random velocities at the start of the simulation and during the stochastic collision, both taken from the Maxwell-Boltzmann distribution. The measured parameters will be expressed in reduced units using the energy scale k_BT, length scale σ_p, and mass scale m. Periodic boundary conditions are implemented in all directions. The snapshots of the simulations are shown in Figure <ref>.
To validate the MPCD routines, the Brownian motion of 250 colloidal particles of the same size as the monomer beads are modeled in the MPCD solvent. The variation of their average mean square displacement (MSD) with lag time (Δ t) is plotted in Figure <ref>(a). Typically, power law describes the variation of MSD with lag time: MSD ∝Δ t^b. The dynamics of the solutes can be diffusive (b=1), sub-diffusive (b<1), or super-diffusive (b>1). From Figure <ref>(a), we note that we obtain b = 1 as expected for the colloids. Hence, the dynamics of the colloids are captured well by the simulation. Further, the radius of gyration(R_g) vs. chain length (N) is plotted for the linear chain in Figure <ref>(b). A power law behavior can be observed with a scaling exponent of 0.623 for the linear chain. The value of the power law exponent reported by Chen et al.<cit.> using similar simulation parameters is 0.61. Linear chains have been studied widely, and their corresponding scaling exponent values for good solvent conditions are summarized in Table <ref>. The exponent value calculated in the present work agrees well with earlier studies. In addition, the diffusion coefficient(D) is calculated from the center-of-mass mean square displacement(MSD) vs. lag time plot for the linear chains of different lengths. The variation of D with N is shown in Figure <ref>(b). It also follows a power law D ∼ N^-ν_d, where ν_d = 0.622. The equality of ν with ν_d confirms the Zimm theory for the diffusion of a polymer chain with intra-chain hydrodynamic interactions, which predicts D ∼1/R_g.
The difference between a good solvent and a poor solvent can be demonstrated by introducing attraction using the standard 12-6 LJ potential with a cutoff of 2.5σ_p for the pairwise interaction between monomer beads instead of WCA potential as explained by Peng et al.<cit.>. In a good solvent, the polymer chain forms a coil, whereas it becomes a globule in the case of a poor solvent. This coil-to-globule transition is shown in Figure <ref> and can be observed by measuring the radius of gyration, which is approximately 5σ_p in a good solvent for the chain length of 56σ_p, contrary to 2σ_p for the poor solvent case. This reduction in the size of the polymer chain is also visible in the values of the resulting diffusion coefficients, which are 0.0061σ_p^2/τ and 0.0032σ_p^2/τ for poor and good solvents, respectively.
§ RESULTS AND DISCUSSION
§.§ Shape and size of star-shaped chains
The gyration tensor (S) of a polymer chain is defined as the dyadic product of the position vector (P_i) of a monomer bead in the center-of-mass reference frame with its transpose, averaged over all the monomers of the chain<cit.>:
S = 1/N∑_i=1^N P_i P_i^T, P_i = [ x_1^i - x_1^cm; x_2^i - x_2^cm; x_3^i - x_3^cm ]
where (x_1^cm,x_2^cm,x_3^cm) represents the centre-of-mass of the polymer chain consisting of N identical monomers (x_1^i,x_2^i,x_3^i) and is calculated as follows:
x_1^cm = 1/N∑_i=1^Nx_1^i,
x_2^cm = 1/N∑_i=1^Nx_2^i,
x_3^cm = 1/N∑_i=1^Nx_3^i
The elements of S can be written using indicial notation as:
S_pq = 1/N∑_i=1^N (x_p^i - x_p^cm)(x_q^i - x_q^cm)
The polymer chain's shape and size can be easily measured by the eigenvalues of S, i.e., λ_1, λ_2, and λ_3. The radius of gyration(R_g) represents the size of the polymer chain, the square of which is equal to the trace of the gyration tensor<cit.>.
R_g^2 = Tr(S) = λ_1 + λ_2 + λ_3
We evaluate R_g for four different types of chain: linear, 3-armed star, 4-armed star, and 6-armed star using seven different chain lengths, and the results are summarized in Figure <ref>(a). The radius of gyration follows a power law with polymer chain length, R_g ∼ N^ν. The power law's exponent(ν) represents the quality of the solvent. The value of ν calculated in the present simulations are 0.623, 0.627, 0.631, and 0.626 for the linear, 3-armed star, 4-armed star, and 6-armed star chains, respectively. The scaling law is found to be independent of the functionality of the star-shaped chains in which the average value of ν∼ 0.627, indicating similar scaling behavior of the linear and star-shaped chains under good solvent conditions<cit.><cit.><cit.>. Table <ref> summarizes the values of ν for linear chains reported by experiments, simulation, and theory. The calculated average value of ν is in good agreement with the existing results in the literature. For the same chain length, linear chains are bigger than the star-shaped chains, and among the star-shaped chains, size decreases with an increase in functionality, i.e. number of arms, as expected. Compared to the linear chain, this reduction in the size of the branched chains for the same polymer chain length is measured using the geometrical shrinking factor(g_s) defined as the ratio of the mean squared radius of gyration of the branched chain to that of the linear chain, g_s = ⟨ R_g,b^2 ⟩/⟨ R_g,l^2 ⟩<cit.>. The values of g_s of star chains with different functionality and chain lengths are shown in Figure <ref>(b). We note that g_s doesn't vary much over the chain length for a particular chain type. The values of g_s for star chain with f=5 reported by Khabaz and Khare<cit.> is approximately 0.5, which falls between 0.41(6-armed star) and 0.58(4-armed star) calculated in the present work and suggests linear variation in g_s with f.
The shape of the polymer chains can be characterized using the asphericity (b) and the relative shape anisotropy (κ^2), defined as<cit.><cit.>
b = λ_1 - (λ_2 + λ_3)/2, λ_1 ≥λ_2 ≥λ_3,
κ^2 = 1 - 3(λ_1 λ_2 + λ_2 λ_3 + λ_3 λ_1)/(λ_1 + λ_2 + λ_3)^2
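In practice, S, R_g^2, b/R_g^2 and κ^2 are obtained directly from the eigenvalues of the gyration tensor of each sampled configuration. A minimal sketch is given below; the synthetic random-walk configuration only stands in for the simulation data:

```python
import numpy as np

def shape_parameters(coords):
    """Gyration tensor, R_g^2, normalized asphericity b/R_g^2 and kappa^2.

    coords : (N, 3) array of monomer positions of one chain configuration.
    """
    centered = coords - coords.mean(axis=0)          # positions relative to the center of mass
    S = centered.T @ centered / len(coords)          # gyration tensor S_pq
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]       # lambda_1 >= lambda_2 >= lambda_3
    rg2 = lam.sum()                                  # R_g^2 = Tr(S)
    b = lam[0] - 0.5 * (lam[1] + lam[2])             # asphericity
    kappa2 = 1.0 - 3.0 * (lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0]) / rg2**2
    return S, rg2, b / rg2, kappa2

# Illustrative call on a random-walk-like configuration
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(size=(48, 3)), axis=0)
_, rg2, b_over_rg2, kappa2 = shape_parameters(chain)
print(rg2, b_over_rg2, kappa2)
```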
The variation of the two shape parameters with the polymer chain length is given in figure <ref>. The asphericity values are normalized by R_g^2 to make these independent of size. For the individual architectures, the values of both shape parameters do not vary much over the chain lengths. The normalized asphericity can take any value between 0 and 1. It will be 0 for a spherical shape or any shape of the platonic solids and 1 for a rod-like structure. As expected, the linear chains have higher asphericity values than the star-shaped chains. Its value is ∼ 0.64 for the linear chain in the present work, which is close to 0.625(calculated using eigenvalues), reported by Koyama<cit.> and 0.66, reported by Theodorou and Suter<cit.>. Among the star-shaped chains, the normalized asphericity decreases with an increase in functionality. Khabaz and Khare<cit.> have reported b/R_g^2 ∼ 0.362 for the 5-armed star chain, which falls between 0.29(6-armed star) and 0.4(4-armed star) in the present work. The other shape parameter is the relative shape anisotropy(κ^2), which also varies between 0 and 1. Its value is 1 for a rigid rod and 0 for a sphere and all platonic solids. It is also higher for linear chains than the star-shaped ones, as expected. In the present work, the value of κ^2 for the linear chain is ∼ 0.45, which is nearly the same as 0.44 reported by Khabaz and Khare<cit.>. As anticipated, for star-shaped chains, κ^2 decreases with increasing functionality. In the present work, κ^2 ∼ 0.3(3-armed star), 0.21(4-armed star), and 0.12(6-armed star) which are slightly lower than 0.3454(3-armed star), 0.2463(4-armed star), and 0.1512(6-armed star), respectively, reported by Zifferer<cit.>. The value of κ^2 reported by Khabaz and Khare<cit.> for a 5-armed star chain is ∼ 0.16, which falls between 0.12(6-armed star) and 0.21(4-armed star) calculated in the present work. To summarize, linear chains are less spherical and more anisotropic than the star-shaped chains. The higher the functionality among the star-shaped chains, the more spherical and less anisotropic the chain is. The variation of the shape parameters with functionality is plotted in Figure <ref>(a) and correlated with the diffusion coefficients in a later section.
§.§ Translational diffusion of star-shaped chains
The diffusion rate of a polymer chain in solution can be measured from the variation of the center-of-mass mean square displacement (MSD) with lag time. One linear and five star-shaped chains are modeled to investigate the influence of the shape of the polymer chain on its diffusion. The effect of chain size is eliminated by selecting the chain length (N) such that the resulting radius of gyration is approximately 5σ_p for all six types. The simulation box size is 48σ_p for all six cases. The MSD (Δ r^2) vs. lag time (Δ t) plot is shown in Figure <ref>. At short times (less than 400τ), the MSD increases faster than linearly with time due to the inertia of the chain. At longer times, the MSD reaches the linear diffusive regime, from which the diffusion coefficients (D) are calculated using the relation Δ r^2 = 6D Δ t; they are summarized in the second-to-last column of Table <ref>.
The linear chain can be considered a star chain with f=2 and has the highest value of diffusion coefficient. Among the star chains, the diffusion coefficient value decreases with increased functionality. Using the values of the diffusion coefficients, the hydrodynamic radius of the polymer chains can be calculated as<cit.>,
R_H = k_BT/6 πη D
where D is the translational diffusion coefficient of the polymer chain, and η is the solvent viscosity. The ratio of the radius of gyration and hydrodynamic radius, ρ = R_g/R_H, is a size-independent quantity and represents the effect of the architecture of the polymer chain on its diffusion. The variation of ρ with the functionality of the star polymers is plotted in Figure <ref>. We note that ρ decreases with an increase in the functionality of the star chain, as reported by Huber et al. <cit.> and Singh et al. <cit.>. Since all six types of chains are of the same size, this difference in the diffusion coefficient values can only be attributed to their shape parameters. In Figure <ref>, we have shown that the linear chain is more anisotropic and less spherical than the star-shaped chains, and among the star chains, κ^2 and b/R_g^2 decrease with increased functionality. Hence, the higher a star chain's relative shape anisotropy and normalized asphericity, the faster it diffuses along the translational direction. We investigate this further by computing the rotational diffusion of the polymer chains.
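A possible post-processing of the MSD data into D, R_H and ρ is sketched below. The MSD values are placeholders and the viscosity is the reduced-unit value quoted earlier in the text, so the numbers are only illustrative:

```python
import numpy as np

kBT, eta = 1.0, 4.0                        # reduced units; eta = MPCD solvent viscosity quoted in the text

def diffusion_coefficient(lag_time, msd):
    """Translational D from the linear (diffusive) regime: MSD = 6 D * dt."""
    slope, _ = np.polyfit(lag_time, msd, 1)
    return slope / 6.0

# Placeholder MSD data in the long-time regime (dt > 400 tau)
dt  = np.linspace(500.0, 5000.0, 20)
msd = 6 * 0.0035 * dt                      # illustrative data with D ~ 0.0035 sigma_p^2/tau

D  = diffusion_coefficient(dt, msd)
Rg = 5.0                                   # common radius of gyration of the six chain types
RH = kBT / (6 * np.pi * eta * D)           # hydrodynamic radius from the Stokes-Einstein form
rho = Rg / RH                              # size-independent ratio rho = R_g / R_H
print(f"D = {D:.4f}, R_H = {RH:.2f}, rho = {rho:.2f}")
```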
§.§ Rotational diffusion of star-shaped chains
The polymer chain reorients itself continuously in the solution while diffusing along the translational direction. Since the gyration tensor is symmetric, it has real eigenvalues and orthogonal eigenvectors, so the polymer chain can be approximated by an ellipsoid<cit.>. The reorientation of the polymer chain is then equivalent to the rotation of this imaginary ellipsoid, as illustrated by the schematic representation in Figure <ref>. Any vector rigidly attached to the polymer chain can be used to measure the rate of reorientation. In this work, the eigenvector (e_1) corresponding to the largest eigenvalue (λ_1) of the gyration tensor is selected for measuring the rate of reorientation of the corresponding polymer chain. The relevant reorientational correlation function of the polymer chain can be defined as
C(t) = ⟨ P_2(e_1(0).e_1(t)) ⟩
where P_2(x)=(3x^2 - 1)/2, is the second-order Legendre polynomial, and the angle bracket represents the time and ensemble average over five system replicas. For any isotropically reorienting polymer chain, following Wong et al.<cit.>, the reorientational correlation function can be approximated as,
C(t) = e^-6D_Rt
where D_R is the rotational diffusion coefficient of the polymer chain.
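Given the time series of the unit eigenvector e_1, the correlation function and the exponential fit for D_R can be evaluated as in the following sketch (the synthetic data at the end only checks that a known decay rate is recovered; it is not simulation output):

```python
import numpy as np

def reorientational_correlation(e1, max_lag):
    """C(t) = < P2(e1(0) . e1(t)) > from a trajectory of unit eigenvectors e1, shape (n_frames, 3)."""
    corr = np.empty(max_lag)
    for lag in range(max_lag):
        dots = np.einsum("ij,ij->i", e1[:len(e1) - lag], e1[lag:])
        corr[lag] = np.mean(0.5 * (3.0 * dots**2 - 1.0))      # second-order Legendre polynomial
    return corr

def rotational_diffusion_coefficient(times, corr):
    """Least-squares fit of C(t) = exp(-6 D_R t) via log C(t) = -6 D_R t."""
    slope, _ = np.polyfit(times, np.log(corr), 1)
    return -slope / 6.0

# Check on synthetic data decaying with a known D_R
t = np.arange(1, 200, dtype=float)
DR_true = 1.0e-3
print(rotational_diffusion_coefficient(t, np.exp(-6 * DR_true * t)))   # ~1e-3
```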
The variation of the reorientational correlation function with time is plotted in Figure <ref>. We note that C(t) decays faster for the star-shaped chains than the linear chain. For the star-shaped chains, the higher the functionality, the faster the decay of C(t). The rotational diffusion coefficients are calculated from the exponential fit using the least square method and are summarized in the last column of Table <ref>. The corresponding coefficient of determination(R^2) is more than 0.99 for all the cases. The faster the decay of the reorientational correlation function, the higher the value of D_R. The linear chain has the lowest value of D_R, and among the star chains, D_R increases with increased functionality. As discussed earlier for translational diffusion, this difference in the values of the rotational diffusion coefficient is because of the shape parameters, as all the six types of polymer chains considered here are of the same size. In terms of shape parameters, star polymer chains with lower values of relative shape anisotropy and normalized asphericity have a higher rate of rotational diffusion. The lower values of κ^2 and b/R_g^2 represent higher symmetry of monomer distribution with respect to the coordinate axes. It is intuitive that a star-shaped chain reorients faster when the distribution of its monomers is symmetrical with respect to the coordinate axes. It is to be noted that the variation in the rotational diffusion coefficient with functionality and shape parameters is opposite to that of the translational diffusion coefficient.
§.§ Correlation of diffusion and shape parameters of star-shaped chains
The variation of the shape parameters and the two types of diffusion coefficients with the functionality of the chains are plotted in Figure <ref>(a) and Figure <ref>(b), respectively, where the linear chain is considered a star chain with f = 2. Out of the two shape parameters, relative shape anisotropy(κ^2) can be expressed in terms of the invariants of the gyration tensor and is the overall measure of shape anisotropy<cit.>. By making two-to-one correspondence between the two types of diffusion coefficients in Figure <ref>(b) and the relative shape anisotropy in Figure <ref>(a), it can be stated that, for star-shaped chains with higher κ^2 values, the value of D is higher, and the value of D_R is lower. The origin of a polymer chain's translational and rotational diffusive motion is the collision with the surrounding solvent particles. The radius of gyration can be interpreted as the radius of the imaginary sphere surrounding the polymer chain in the solution. Maintaining the same R_g for all six types of chains leads to the same size of the corresponding imaginary sphere. Therefore, all six types of chains interact with an approximately equal number of solvent particles on average. Highly spherical and isotropic star-shaped polymer chains utilize more energy in rotational diffusion, which results in less energy for diffusing along the translational direction. The opposite is the case for highly anisotropic star-shaped chains. Hence, the higher the relative shape anisotropy value of a star-shaped chain, the slower the rotational diffusion rate and the faster the rate of translational diffusion, as shown in Figure <ref>(b).
The variation of the translational diffusion coefficient and the rotational diffusion coefficient with relative shape anisotropy of the star-shaped chains having the same value of R_g is shown in Figure <ref>. The higher values of κ^2 lead to a lower value of the rotational diffusion coefficient and a higher value of the translational diffusion coefficient. From these results, we conclude that a star polymer chain with a higher value of relative shape anisotropy will have a slower rate of rotational diffusion and diffuse faster in the translational direction. Even though this is demonstrated using star-shaped chains, the argument can be extended to other polymer configurations as well. Hegde et al. have reported that the linear chains have higher translational diffusion coefficients than the ring chains when both have the same radius of gyration<cit.>. From the definition of relative shape anisotropy(equation <ref>), it is intuitive that the linear polymer chain will have a higher value of κ^2 than the ring polymer chain. Hence, our argument also holds for the ring vs. linear case. Nevertheless, to verify this argument for generic polymer configurations, the study of other polymer chain architectures is essential.
§ CONCLUSION
In this work, the Brownian diffusion of the linear and star-shaped polymer chains of different functionalities is simulated using MPCD. It is shown that the radius of gyration of the star-shaped polymer chains follows a functionality-independent scaling law with chain length, in which the scaling exponent ν∼ 0.627. The linear chain is shown to be more anisotropic than the star-shaped chains, and for star-shaped chains, the value of relative shape anisotropy decreases with an increase in functionality. For the same radius of gyration, the linear chain diffuses at a faster rate along the translational direction and has a slower rate of rotational diffusion than the star-shaped chains. Among star-shaped chains with the same radius of gyration, higher functionality leads to a higher value of rotational diffusion coefficient and a slower rate of diffusion along the translational direction. In terms of the shape parameter, we conclude that the star-shaped chains with higher values of relative shape anisotropy have a slower rate of rotational diffusion and therefore diffuse at a faster rate along the translational direction. Hence, shape anisotropy leads to faster center-of-mass diffusion of star-shaped polymer chains in a solution.
G.T. acknowledges partial support from the Department of Science and Technology National Supercomputing Mission HPC system in the Supercomputing Education and Research Center-Indian Institute of Science. A.K. acknowledges partial support from SERB CRG/2022/005381. P.K.P. acknowledges partial support from the Ministry of Education, Government of India.
|
http://arxiv.org/abs/2307.05978v1 | 20230712074924 | Reduced basis method for non-symmetric eigenvalue problems: application to the multigroup neutron diffusion equations | [
"Yonah Conjungo Taumhas",
"Geneviève Dusson",
"Virginie Ehrlacher",
"Tony Lelièvre",
"François Madiot"
] | math.NA | [
"math.NA",
"cs.NA"
] |
Reduced basis method for non-symmetric eigenvalue problems: application to the multigroup neutron diffusion equations
Yonah Conjungo Taumhas, Geneviève Dusson, Virginie Ehrlacher, Tony Lelièvre, François Madiot
August 12, 2023
================================================
In this article, we propose a reduced basis method for parametrized non-symmetric eigenvalue problems arising in the loading pattern optimization of a nuclear core in neutronics. To this end, we derive a posteriori error estimates for the eigenvalue and left and right eigenvectors.
The practical computation of these estimators requires the estimation of a constant called prefactor, which we can express as the spectral norm of some operator. We provide some elements of theoretical analysis which illustrate the link between the expression of the prefactor we obtain here and its well-known expression in the case of symmetric eigenvalue problems, either using the notion of numerical range of the operator, or via a perturbative analysis. Lastly, we propose a practical method in order to estimate this prefactor which yields interesting numerical results on actual test cases.
We provide detailed numerical simulations on two-dimensional examples including a multigroup neutron diffusion equation.
§ INTRODUCTION
In this work, we are interested in developing a numerical method to efficiently compute the solutions of a parametrized non self-adjoint eigenvalue problem for a large number of parameter values.
An example of application where this type of problem occurs and which motivates the present work is the resolution of criticality problems in neutronics. The method we propose relies on a reduced basis technique <cit.>.
Model order reduction methods such as reduced basis techniques <cit.> are useful to accelerate the computation of approximate solutions to parameterized problems.
In the context of neutronics, parametrized problems naturally occur when optimizing the loading pattern of a nuclear core <cit.>. Mathematically, this amounts to optimizing an objective function which involves the solution to a generalized non-symmetric eigenvalue problem, parameterized by the fuel assemblies distribution.
The objective of this work is thus to propose a reduced basis technique in this context. It can be seen as a generalization of <cit.>, where reduced basis methods for symmetric eigenvalue problems have been developed.
A main difficulty is to obtain reliable a posteriori estimators in order to build the reduced basis and certify the results obtained with the reduced problem. We refer to <cit.> for a posteriori error estimators for non self-adjoint eigenvalue problems in a classical finite element context. Let us also mention the recent work <cit.> and references therein for efficient a posteriori estimators for non-symmetric problems. We refer to <cit.> for some other applications of model order reduction techniques applied to neutronics.
As mentioned above, the main bottleneck here is to propose efficient a posteriori error estimators for a reduced basis approximation of non self-adjoint eigenvalue problems. More precisely, we consider a situation where one is interested in computing the eigenvalue of smallest modulus of a parameterized eigenvalue problem, which is assumed to be simple.
The a posteriori error estimators read as products of norms of the residuals of the direct and associated adjoint eigenvalue problems times a multiplicative constant, which we call hereafter the prefactor.
Computing an accurate and optimal value of this prefactor is not an easy task, compared to the case of symmetric eigenvalue problems where it can be expressed by means of the spectral gap of the considered operator.
Three main contributions are proposed in this work.
First, we derive an expression of the prefactor in the case of non-symmetric eigenvalue problems as the spectral norm of a composition of some well-chosen operators.
Second, we provide some elements of theoretical analysis to illustrate the close link between the obtained expression of the prefactor and its well-known expression in the case of symmetric eigenvalue problems.
This link is highlighted in two different ways: first, we give an expression of the prefactor using the distance between the approximate eigenvalue and the numerical range of the non-symmetric operator and observe that the numerical range plays a similar role as the spectrum of the operator in the self-adjoint case;
second, we use perturbative arguments to give a second-order development of this prefactor when the operator is a small perturbation of a symmetric operator.
As our third contribution, we propose a practical heuristic method to estimate the prefactor in the present reduced basis context and demonstrate the efficiency of the approach on test cases stemming from neutronics applications.
The outline of this article is as follows. In Section <ref>, we describe the prototypical reference problem of interest as well as the model order reduction method we use, which relies on a greedy algorithm to build the reduced basis. The greedy procedure requires a posteriori estimators which are presented in Section <ref>. These estimators are basically built as the products of residual norms with prefactors, whose computations are discussed in Section <ref>.
Finally, we provide numerical results on two different examples in Section <ref>: a toy example of a two-dimensional two-group problem, and a two-dimensional simple model of a minicore.
§ REDUCED BASIS METHOD FOR NON-SYMMETRIC GENERALIZED EIGENVALUE PROBLEMS
The objective of this section is to introduce the mathematical framework and the model order reduction method we consider. In Section <ref>, we describe the reference high-fidelity generalized eigenvalue problem of interest. The reduced-order model is then presented in Section <ref>, and the greedy algorithm used to build the reduced basis is finally explained in Section <ref>.
§.§ Reference high-fidelity problem
Let us present the parametrized generalized eigenvalue problem for which we wish to build a reduced-order problem. Let 𝒩∈ℕ^* be a positive integer which is assumed to be large in our context. In all the following, ℝ^𝒩 is endowed with the Euclidean scalar product[It is easy to generalize the results presented below to any Hilbertian norm.] denoted by ⟨·,·⟩ and associated norm ‖·‖. For all values of the vector of parameters μ belonging to the set of parameter values 𝒫⊂ℝ^p for some p≥ 1, we consider two matrices A_μ and B_μ in ℝ^𝒩×𝒩 and the following generalized eigenvalue problem:
Find (u_μ, λ_μ) ∈ℝ^𝒩×ℂ such that λ_μ is an eigenvalue with minimal modulus:
A_μ u_μ = λ_μ B_μ u_μ,
‖u_μ‖ = 1.
As is classical in the context of reduced basis methods, we refer to problem (<ref>) as the high-fidelity (HF) problem. We make the following additional assumption which is satisfied in the problems we are eventually interested in for neutronics applications.
For any parameter μ ∈𝒫, A_μ is invertible and there exists a unique positive eigenvalue λ_μ which realizes the smallest modulus solution to (<ref>). Moreover, the eigenvalue λ_μ is simple.
A consequence of Assumption <ref> is that u_μ is uniquely defined (up to a sign), λ_μ is real, and that there is a spectral gap between λ_μ and the other eigenvalues solutions to problem (<ref>), a property that will also play a role in the a posteriori analysis below.
The associated adjoint problem then reads: Find (u_μ^*,λ_μ) ∈ℝ^𝒩×ℝ_+^* such that
A_μ^T u_μ^* = λ_μ B_μ^T u_μ^*, ‖u_μ^*‖ = 1.
Let us mention here that, for any A∈ℝ^𝒩×𝒩, the adjoint matrix A^T∈ℝ^𝒩×𝒩 is defined relatively to the scalar product ⟨·, ·⟩ as follows:
∀ u,v ∈ℝ^𝒩, ⟨ v, Au ⟩ = ⟨ A^T v, u ⟩.
Similarly, for any column vector u∈ℝ^𝒩, we denote by u^T the unique line vector such that
∀ v ∈ℝ^𝒩, u^T v = ⟨ u, v ⟩.
From Assumption <ref>, the eigenvectors u_μ and u^*_μ can be chosen real. In practice, the solutions to (<ref>) and (<ref>) are approximated by an inverse power method, which will be properly described in Algorithm <ref>.
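For completeness, a minimal sketch of such a power iteration on M = A_μ^{-1}B_μ is given below. It uses dense linear algebra and has none of the convergence safeguards or acceleration of the Algorithm referred to above, so it only illustrates the idea; the adjoint problem is solved by calling the same routine on (A_μ^T, B_μ^T):

```python
import numpy as np

def generalized_power_method(A, B, tol=1e-10, maxit=10_000, seed=0):
    """Power iteration on M = A^{-1} B for the pencil (A, B).

    Returns (u, k, lam): k is the dominant eigenvalue of A^{-1} B (the
    multiplication factor, positive under Assumption 1), lam = 1/k is the
    eigenvalue of smallest modulus of A u = lam B u, and u the unit eigenvector.
    """
    rng = np.random.default_rng(seed)
    u = rng.random(A.shape[0])
    u /= np.linalg.norm(u)
    k = 0.0
    for _ in range(maxit):
        w = np.linalg.solve(A, B @ u)       # w = M u, through one linear solve with A
        k_new = np.linalg.norm(w)           # converges to the dominant (positive) k
        w /= k_new
        if abs(k_new - k) < tol * max(abs(k_new), 1.0):
            return w, k_new, 1.0 / k_new
        u, k = w, k_new
    return u, k, 1.0 / k

# The adjoint eigenvector u* is obtained with generalized_power_method(A.T, B.T).
```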
We also define for all μ∈𝒫 the so-called effective multiplication factor
k_μ := 1/λ_μ.
There holds
k_μ = ⟨ u_μ^*, B_μ u_μ⟩/⟨ u_μ^*, A_μ u_μ⟩.
On the one hand, Assumption <ref> holds for instance if A_μ is invertible and the matrix A_μ^-1B_μ coming from problem (<ref>) satisfies the assumptions of the Perron–Frobenius theorem <cit.>. Note that under the assumption that A_μ is invertible, λ_μ is solution to (<ref>) if and only if k_μ is an eigenvalue associated with the matrix A_μ^-1B_μ.
On the other hand, in the context of neutronics applications mentioned earlier and detailed in Section <ref>, (<ref>) is obtained as an appropriate discretization of a continuous problem where the associated resolvent operator satisfies the assumptions of the Krein–Rutman theorem and thus admits a simple real greatest eigenvalue in modulus denoted k^ ex_μ. Since 1/k^ ex_μ is solution to the continuous problem, the
smallest eigenvalue of (<ref>) in modulus is also expected to be simple and positive for fine enough discretization, i.e. large enough .
We now assume in all the rest of the article that Assumption <ref> is satisfied.
Under Assumption <ref>, ⟨ u_μ^*,A_μ u_μ⟩≠ 0.
We postpone the proof of this lemma after Lemma <ref>.
Without Assumption <ref>, it is possible that ⟨ u_μ^*,A_μ u_μ⟩ = 0.
Indeed, a simple example is to take A_μ=[ 1 -1; 0 1 ] and B_μ=[ 1 0; 0 1 ]. Let u_μ=(1,0) and u_μ^*=(0,1).
Equations (<ref>) and (<ref>) are satisfied with λ_μ = 1 while ⟨ u_μ^*,A_μ u_μ⟩ = 0.
We are interested in situations where one has to solve the reference high-fidelity problem (<ref>) quickly and for many values of μ. The idea is to build a reduced basis using some solutions of (<ref>) (so-called snapshots) computed offline, and to use a Galerkin method to project the problem (<ref>) onto this reduced basis, see Section <ref>. This requires a posteriori estimators to wisely select the parameters used to build the reduced basis, as well as to certify the numerical results obtained online on the reduced basis: this is discussed in Section <ref>.
§.§ Reduced-order model
The aim of this section is to present the reduced-order model obtained from a given reduced basis to get an approximation of (<ref>).
Let us consider a reduced linear subspace V_N ⊂ℝ^𝒩 of dimension N much smaller than 𝒩, built in such a way that any solution of problem (<ref>) can be accurately approximated by an element of V_N (the construction of such a subspace will be discussed in the next section). A reduced-order model for problem (<ref>) can then be obtained from the reduced space V_N as follows. Let (ξ_i)_1 ⩽ i ⩽ N be an orthonormal basis of V_N.
The reduced matrices A_μ, N∈^N× N , B_μ, N∈^N× N are defined as follows: for all 1≤ i,j ≤ N,
A_μ, N^ij := ⟨ξ_i, A_μξ_j ⟩,
B_μ, N^ij := ⟨ξ_i, B_μξ_j ⟩.
The reduced-order model then consists in solving: Find (c_μ,N,λ_μ,N) ∈ℝ^N ×ℂ such that λ_μ,N is an eigenvalue with smallest modulus:
A_μ,N c_μ,N = λ_μ,N B_μ,N c_μ,N, u_μ,N = ∑_i=1^N c_μ,N^i ξ_i,
and ‖u_μ,N‖ = 1,
where for all 1≤ i ≤ N, c_μ,N^i is the i^th component of the vector c_μ,N.
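The assembly and solution of this reduced problem can be sketched as follows. The paper applies the same power method as for the high-fidelity problem to the reduced matrices; for the small dense reduced system one may equivalently call a generalized eigensolver, which is what this illustrative snippet does:

```python
import numpy as np
import scipy.linalg

def solve_reduced_problem(A, B, Xi):
    """Galerkin reduced eigenproblem on the space spanned by the orthonormal columns of Xi.

    Returns (u_N, k_N) with u_N of unit norm and k_N = 1/lambda_N the reduced
    multiplication factor.
    """
    A_N = Xi.T @ A @ Xi                        # reduced matrices of size N x N
    B_N = Xi.T @ B @ Xi
    k_vals, C = scipy.linalg.eig(B_N, A_N)     # B_N c = k A_N c, i.e. eigenvalues of A_N^{-1} B_N
    i = np.argmax(np.abs(k_vals))              # dominant k  <=>  lambda = 1/k of smallest modulus
    c = np.real(C[:, i])
    u_N = Xi @ (c / np.linalg.norm(c))         # lift back to the high-fidelity space; unit norm preserved
    return u_N, float(np.real(k_vals[i]))
```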
Similarly as in Section <ref> (see Assumption <ref>), we make the following assumption.
For any parameter μ ∈𝒫, the matrix A_μ,N is invertible and there exists a unique positive eigenvalue λ_μ,N which realizes the smallest modulus solution to (<ref>). Moreover, the eigenvalue λ_μ,N is simple.
Under this assumption, c_μ,N and u_μ,N are uniquely defined up to a sign and λ_μ,N is real. Endowing the space ^N with the canonical Euclidean scalar product ⟨·, ·⟩_ℓ^2, we can consider the solution to the associated reduced adjoint problem:
Find (c_μ,N^*,λ_μ,N) ∈ℝ^N ×ℝ_+^* such that the eigenvalue λ_μ,N is the smallest in modulus and
A_μ,N^t c_μ,N^* = λ_μ,N B_μ,N^t c_μ,N^*,
u_μ,N^* = ∑_i=1^N c_μ,N^*,i ξ_i, and ‖u_μ,N^*‖ = 1.
where for all 1≤ i ≤ N, c_μ,N^*,i is the i^th component of the vector c_μ,N^* and A_μ,N^t and B_μ,N^t are respectively the transpose of the matrix A_μ,N and B_μ,N.
Moreover, under this assumption, we have ⟨ c_μ,N^*,A_μ,N c_μ,N⟩_ℓ^2 = ⟨ u_μ,N^*,A_μ u_μ,N⟩≠ 0 (see Lemma <ref>),
and we define
k_μ,N = ⟨ c_μ,N^*, B_μ,N c_μ,N⟩_ℓ^2/⟨ c_μ,N^*, A_μ,N c_μ,N⟩_ℓ^2
= ⟨ u_μ,N^*, B_μ u_μ,N⟩/⟨ u_μ,N^*, A_μ u_μ,N⟩.
In practice, we use the inverse power method to solve (<ref>) and (<ref>). If both algorithms converge, we refer to the outputs c_μ,N and c^*_μ,N as the right and left eigenvectors of the reduced problem. If one of the power methods does not converge, the reduced basis is enriched using the high-fidelity left and right eigenvectors for the considered parameter value, see the construction of the reduced space in the next section. Note that the power methods applied to (<ref>) and (<ref>) (resp. to (<ref>) and (<ref>)) are guaranteed to converge if Assumption <ref> (resp. Assumption <ref>) is satisfied.
In the numerical examples presented in Section <ref>, we observe that the two power methods (for the direct and adjoint reduced eigenvalue problems) indeed converge and that ⟨ c_μ,N^*,A_μ,N c_μ,N⟩_ℓ^2≠ 0 as soon as the reduced space has a sufficiently large dimension (typically N ≥ 4 is sufficient in the numerical results presented in Section <ref>).
§.§ Choice of the reduced space: greedy algorithm
In practice, the reduced space used in the reduced-order model described in the previous section is built following the standard procedure of the reduced basis technique <cit.>. We first
initialize the reduced space V_0 as a very low-dimensional space spanned by the first modes obtained from a Proper Orthogonal Decomposition of a family of vectors composed of a few snapshots of the direct and adjoint problems.
A sequence of parameter values (μ_n)_n≥ 1 is then selected from a greedy procedure described below, from which nested reduced spaces (V_n)_n≥ 1 are built
as follows:
∀ n≥ 1, V_n = Span{u_μ_1,…,u_μ_n,u^*_μ_1,…,u^*_μ_n}.
In the following, we denote by N_n := dim V_n and by u_μ, N_n, u_μ, N_n^*, λ_μ,N_n and k_μ, N_n the solutions of the reduced eigenvalue problems described in the previous section for V_N = V_n.
The choice made in (<ref>) to enrich the sequence of reduced spaces with both the eigenvector of the direct and of the adjoint eigenvalue problem stems from the a priori error analysis of Galerkin approximations of generalized eigenvalue problems (see <cit.>). Indeed, in an asymptotic regime, the error between the approximate and exact eigenvalue scales like
|λ_μ - λ_μ,N| ≤ (C_μ/γ_μ,N) ε_N ε_N^*,
where C_μ is a positive constant which only depends on the parameter μ, where
ε_N := inf_{v_N ∈ V_N} ‖u_μ - v_N‖,
ε_N^* := inf_{v_N ∈ V_N} ‖u_μ^* - v_N‖,
and where γ_μ,N is the inf-sup constant of the reduced eigenvalue problem. Hence, it appears natural when it comes to the design of a greedy procedure in the present reduced basis context to enrich the Galerkin approximation space with snapshots of both the direct and adjoint eigenvalue problems, in order to at least get the reference solution for the reduced problem when considering the parameter μ of the snapshots.
In the greedy procedure, the parameters (μ_n)_n≥ 1 need to be selected according to some criterion. In practice, it is common to choose a finite subset 𝒫_train⊂𝒫 of parameter values, called hereafter the training set, and to select the snapshots maximizing some surrogate Δ_N_n for the error between the solutions of the reference model and of the reduced model, as described in Algorithm <ref>.
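The structure of this greedy procedure is summarized by the following skeleton. The helpers hf_solve and error_estimator are placeholders for the high-fidelity eigensolver and for the a posteriori estimator introduced in the next sections; they are not part of the paper's notation:

```python
import numpy as np

def greedy_reduced_basis(P_train, hf_solve, error_estimator, n_max, tol):
    """Skeleton of the greedy loop (helper names are placeholders).

    hf_solve(mu)               -> (u_mu, u_star_mu), high-fidelity direct/adjoint eigenvectors
    error_estimator(mu, basis) -> value of Delta_{N_n}(mu) for the current reduced space
    """
    basis = []                                        # orthonormal vectors spanning V_n
    for _ in range(n_max):
        deltas = {mu: error_estimator(mu, basis) for mu in P_train}
        mu_star = max(deltas, key=deltas.get)         # worst parameter in the training set
        if deltas[mu_star] < tol:
            break
        for v in hf_solve(mu_star):                   # enrich with direct AND adjoint snapshots
            w = v - sum(np.dot(b, v) * b for b in basis)   # Gram-Schmidt against the current basis
            if np.linalg.norm(w) > 1e-12:
                basis.append(w / np.linalg.norm(w))
    return np.column_stack(basis)
```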
In an ideal greedy procedure, we would
take the exact error as the error surrogate
Δ_N_n.
In that case, two possible choices for the definition of Δ_N_n would be:
a) either the error on the eigenvalue: Δ_N_n(μ):= e^k_N_n(μ)
b) or the error on the eigenvectors: Δ_N_n(μ):= e^u_N_n(μ) + e^u^*_N_n(μ)
with
e^u_N_n(μ) := ‖u_μ - u_μ, N_n‖, e^u^*_N_n(μ) := ‖u^*_μ - u^*_μ, N_n‖, and e^k_N_n(μ) := | k_μ - k_μ, N_n|.
However, these quantities are of course not available in general, so one has to resort to a posteriori error estimate for an efficient greedy algorithm. Therefore, the aim of the following section is to detail different strategies to define an a posteriori error estimator Δ_N(μ) in order to obtain an estimation of the errors on the eigenvalues and the eigenvectors for any reduced space without having to compute the solutions of the exact eigenvalue problem.
§ A POSTERIORI ERROR ESTIMATION
The goal of this section is to build a posteriori error bounds on the error between the solutions of the exact eigenvalue problems (<ref>) and (<ref>), and the solutions of the reduced eigenvalue problems (<ref>) and (<ref>).
This section is organized as follows. Sections <ref> and <ref> are respectively dedicated to error estimates on the eigenvectors and on the eigenvalue. In Section <ref>, we draw connections between the estimators we introduce in our context of non-symmetric eigenvalue problems, and the classical ones used for symmetric eigenvalue problems. Finally, we introduce in Section <ref> the pratical a posteriori error estimators that will be used for numerical experiments in Section <ref>.
To simplify notation, the subscript μ is omitted in this section, as only one parameter value μ∈ is considered. Therefore, the quantities u,u^*,λ,k are the solutions of the high fidelity problem, while u_N,u_N^*,λ_N,k_N are the solutions of the reduced problem. We are therefore aiming at deriving bounds for the quantities
e^k_N:=|k-k_N|, e^u_N:=‖ u-u_N‖, and e^u^*_N:=‖ u^*-u_N^*‖.
In order to estimate these errors, we first define the following residual vector quantities
R_N = (B-k_N A)u_N,
R_N^* = (B^T-k_N A^T)u_N^*.
We moreover define the vector
ũ^* = A^T u^*/‖A^T u^*‖,
and the matrix
M = A^-1 B,
which is well-defined since A is invertible from Assumption <ref>. Note that it then holds that
M u = k u, M^T ũ^* = k ũ^*.
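These quantities are directly computable from the reduced solution; a short sketch of their evaluation (dense NumPy, for illustration only) is:

```python
import numpy as np

def residuals(A, B, u_N, u_star_N, k_N):
    """Direct and adjoint residual vectors R_N and R_N^* for given approximations."""
    R_N      = (B - k_N * A) @ u_N
    R_N_star = (B - k_N * A).T @ u_star_N       # equals (B^T - k_N A^T) u_N^*
    return R_N, R_N_star

def tilde_u_star(A, u_star):
    """Rescaled adjoint direction tilde{u}^* = A^T u^* / ||A^T u^*||."""
    w = A.T @ u_star
    return w / np.linalg.norm(w)
```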
§.§ Error estimates on the eigenvectors
Let P ∈ℝ^𝒩×𝒩 and P^*∈ℝ^𝒩×𝒩 be the matrices associated with the spectral projection operators onto Span{ũ^*}^⊥ and Span{u}^⊥ respectively. More precisely, P and P^* are defined by
P = I - u (ũ^*)^T/⟨ u, ũ^*⟩,
P^* = I - ũ^* u^T/⟨ u, ũ^*⟩,
where I denotes the identity matrix of ^×.
Before presenting the a posteriori error estimates,
we first begin by collecting a few useful auxiliary lemmas.
The spectral projector onto the eigenspace of M associated with the simple eigenvalue k is I-P, where P is defined by (<ref>).
Let us introduce the spectral projector P_int∈ℝ^𝒩×𝒩 of M associated with the eigenvalue k, which is defined by:
∀ v ∈ℝ^𝒩, P_int v = 1/(2iπ)∫_𝒞_k (z-M)^-1 v dz,
where 𝒞_k is a closed contour in the complex plane such that k is the only eigenvalue of M contained inside the loop. Let us show that P_int = u (ũ^*)^T/⟨ u, ũ^*⟩.
As the eigenvalue k is simple, it holds that Ran P_ int = Span{u}, and P_ int^T ũ^*=ũ^* by noting that P_ int^T is the spectral projector associated with M^T and the eigenvalue k. Let us show that Ker P_ int = ( Span{ũ^*})^⊥. Indeed, for all v∈^,
P_int v = 0 ⟹⟨ũ^*,v ⟩ = ⟨ P_int^T ũ^*,v ⟩ = ⟨ũ^*, P_int v ⟩ = 0.
As Ran P_int + Ker P_int = ℝ^𝒩, we have Span{u} + [ Span{ũ^*}]^⊥ = ℝ^𝒩. The identity
P_ int = I - P
is then an immediate consequence of this decomposition.
There holds
(i) P^2=P;
(ii) Ker P= Span{u}, Ran P=[ Span{ũ^*}]^⊥ and these two spaces are stable by P and M;
(iii) MP = PM.
(i) Let v∈ℝ^𝒩. Noting that P u = 0, there holds
P^2 v = P(v - (⟨ v,ũ^*⟩/⟨ u, ũ^*⟩) u) = P v - (⟨ v,ũ^*⟩/⟨ u, ũ^*⟩) P u = P v,
hence P^2 = P.
(ii) The proof of the fact that Ker P= Span{u} and Ran P=[ Span{ũ^*}]^⊥ is immediate from the proof of the previous lemma. The fact that Ker P is stable by P and M is also obvious. Now, let v∈ Span{ũ^*}^⊥, i.e. such that ⟨ũ^*, v⟩ = 0. Then
⟨ũ^*, P v⟩ =
⟨ũ^*, v⟩ - ⟨ũ^*, v⟩⟨ũ^*, u⟩/⟨ũ^*, u⟩ = 0,
and
⟨ũ^*, M v⟩ =
⟨ M^T ũ^*, v⟩ =
k ⟨ũ^*, v⟩ = 0.
Therefore Pv ∈ Span{ũ^*}^⊥ and Mv ∈ Span{ũ^*}^⊥.
(iii) It is obvious that for all v∈ Ker P, PMv = MPv = 0. Besides, for all v∈ Ran P, it holds that Pv = v, and Mv ∈ Ran P from (ii), so that PMv = Mv = MPv. Since ℝ^𝒩 = Ker P ⊕ Ran P, we deduce that PM = MP, hence the desired result.
It is easy to check that the following Lemma holds on P^*, using similar arguments as in the proof of Lemma <ref>.
There holds
(i) (P^*)^2=P^*;
(ii) Ker P^* = Span{ũ^*}, Ran P^* = [ Span{u}]^⊥ and these two spaces are stable by P^* and M^T;
(iii) M^T P^* = P^* M^T.
We have now gathered enough results to prove Lemma <ref>.
Let us argue by contradiction. If ⟨ u_μ^*,A_μ u_μ⟩ = ⟨ũ_μ^*, u_μ⟩ = 0, then Span{u_μ}⊂ (Span{ũ_μ^*})^⊥.
Yet, we have Span{u_μ} + (Span{ũ_μ^*})^⊥ = ℝ^𝒩 using Lemma <ref>-(ii).
This yields a contradiction and concludes the proof.
Let us introduce some notation. By Lemma <ref>, the operator PMP-k_NI leaves Ran P = [ Span{ũ^*}]^⊥ invariant.
Besides, provided that k_N ∉σ(PMP|_[ Span{ũ^*}]^⊥), the operator (PMP-k_NI)|_[ Span{ũ^*}]^⊥ is invertible,
seen as an operator from [ Span{ũ^*}]^⊥ onto [ Span{ũ^*}]^⊥.
We can thus define the Moore–Penrose inverse of this operator, denoted by (PMP-k_NI)^+ and defined (by linearity) as follows:
∀ v ∈ [ Span{ũ^*}]^⊥, (PMP-k_NI)^+v = (PMP-k_NI)|_[ Span{ũ^*}]^⊥^-1v,
∀ v ∈ Span{u}, (PMP-k_NI)^+v=0.
We define in a similar way the operator (P^* M^T P^*-k_NI)^+.
Let u_N, u_N^* ∈ℝ^𝒩∖{0} and let k_N∈ℝ be such that k_N∉σ((PMP)|_[ Span{ũ^*}]^⊥) and k_N∉σ((P^*M^TP^*)|_[ Span{u}]^⊥).
Then, the following estimates hold:
inf_{v∈ Span{u}}‖u_N - v‖≤ C_N^u ‖R_N‖,
inf_{v^*∈ Span{u^*}}‖u_N^* - v^*‖≤ C_N^u^* ‖R_N^*‖,
with
C_N^u := ‖P (P M P - k_N I)^+ P A^-1‖,
C_N^u^* := ‖A^-T P^* (P^* M^T P^* - k_N I)^+ P^*‖.
Here and in the following, with a slight abuse of notation, we denote by ‖·‖ the operator norm associated with the vector norm ‖·‖ on ℝ^𝒩.
Note that u_N, u_N^* and k_N do not have to be respectively related to u, u^* and k for the above estimates to hold. However, in practice, u_N will be an approximation of u, u_N^* will be an approximation of u^* and k_N will be an approximation of k, so that the norms of the residuals R_N and R_N^* will be small.
The results obtained in Proposition <ref> match those of <cit.> for k=0, noting the slightly different definition of the residual to take into account the generalized eigenvalue problem.
First, there holds
inf_{v∈ Span{u}}‖u_N - v‖≤‖P u_N‖.
Second, let us show that
P (P M P - k_N I)^+ (P M P - k_N I) P = P.
Indeed, for v ∈ Span{u},
P (P M P - k_N I)^+ (P M P - k_N I) P v = 0 and Pv = 0. Besides, for v ∈ [ Span{ũ^*}]^⊥, P v = v, and
(P M P - k_N I) P v ∈ [ Span{ũ^*}]^⊥ from Lemma <ref> (ii). As a consequence,
since k_N∉σ((PMP)|_[ Span{ũ^*}]^⊥), the operator (P M P - k_N I) is invertible on [ Span{ũ^*}]^⊥.
Hence for v∈ [ Span{ũ^*}]^⊥,
P (P M P - k_N I)^+ (P M P - k_N I) P v =
P (P M P - k_N I)|_[ Span{ũ^*}]^⊥^-1 (P M P - k_N I)|_[ Span{ũ^*}]^⊥ P v = Pv.
We conclude by noting that ℝ^𝒩 = Span{u} + [ Span{ũ^*}]^⊥.
Then using Lemma <ref> (iii), we have
P u_N =
P (P M P - k_N I)^+ (P M P - k_N I) P u_N
= P (P M P - k_N I)^+ P (M - k_N I) u_N.
Using (<ref>), we obtain
P u_N = P (P M P - k_N I)^+ P A^-1 R_N.
Thus,
‖P u_N‖⩽‖P (P M P - k_N I)^+ P A^-1‖ ‖R_N‖.
To show the second bound, we first note that
P^*(A^T u_N^*) = A^T u_N^* - (⟨ u, A^T u_N^* ⟩/⟨ u, ũ^* ⟩) ũ^*,
so that using (<ref>) and (<ref>)
inf_{v^*∈ Span{u^*}}‖u_N^* - v^*‖ = inf_{v^*∈ Span{u^*}}‖A^-T(A^T u_N^* - A^T v^*)‖
= inf_{ṽ^* ∈ Span{ũ^*}}‖A^-T(A^T u_N^* - ṽ^*)‖
⩽‖A^-T P^* A^T u_N^*‖.
Now, using Lemma <ref> (iii) and similar arguments as above, there holds
P^*(A^Tu_N^*) = P^*
(P^* M^T P^*-k_NI)^+
(P^* M^T P^*-k_NI)
P^* A^Tu_N^*
= P^* (P^* M^T P^*-k_NI)^+ P^*(M^T-k_NI)A^Tu_N^*
= P^* (P^* M^T P^*-k_NI)^+ P^*(B-k_NA)^Tu_N^*.
Hence
P^*(A^Tu_N^*) = P^* (P^* M^T P^*-k_NI)^+ P^*R_N^*.
Then,
‖A^-T P^* A^T u_N^*‖⩽‖A^-T P^* (P^* M^T P^* - k_N I)^+ P^*‖ ‖R_N^*‖,
which proves (<ref>).
§.§ Error estimate on the eigenvalue
We now provide an estimate for the eigenvalue.
Let u_N, u_N^*∈^.
Under Assumption <ref> and the assumptions that
k_N := ⟨ u_N^*, B u_N ⟩/⟨ u_N^*, A u_N ⟩ is such that k_N∉σ((PMP)|_[ Span{ũ^*}]^⊥) and k_N∉σ((P^*M^TP^*)|_[ Span{u}]^⊥),
there holds
|k_N-k| ≤
C^k_N η_N^k,
with
η_N^k := ‖R_N‖ ‖R_N^*‖/|⟨ u_N^*, A u_N⟩|,
and
C^k_N := ‖[P^* (P^* M^T P^* - k_N I)^+ P^*]^T (M - kI) P (P M P - k_N I)^+ P A^-1‖.
Note that in this result, the vectors u_N and u_N^* may not be solutions of a reduced eigenvalue problem of the form (<ref>) or (<ref>). The only requirement of Proposition <ref> is that k_N has to be related to u_N and u_N^* by the formula stated in the proposition.
For any α, β∈ℝ,
⟨ A^T(u_N^*-α u^*), ( -k I)(u_N-β u)⟩ = ⟨ A^T u_N^*, u_N ⟩
-β⟨ A^Tu_N^*, u ⟩
-α⟨ A^Tu^*, u_N ⟩
+αβ⟨ A^Tu^*, u ⟩
- k ⟨ A^T u_N^*,u_N ⟩
+β k ⟨ A^Tu_N^*, u ⟩
+ α k ⟨ A^T u^*, u_N ⟩
-αβ k ⟨ A^Tu^*, u ⟩.
Noting that Mu = k u, M^T A^T u^* = k A^T u^* and recalling that M=A^-1B, we obtain
⟨ A^T(u_N^*-α u^*), ( -kI)(u_N-β u)⟩ = ⟨ A^T u_N^*, M u_N ⟩ - k ⟨ u_N^*,Au_N ⟩
= (k_N-k)⟨ u_N^*,Au_N ⟩.
According to Lemma <ref>, we can set
α = (1/‖A^T u^*‖) ⟨ A^T u_N^*, u⟩/⟨ũ^*, u⟩, β = ⟨ u_N, ũ^*⟩/⟨ u, ũ^*⟩,
so that we find
k_N - k = (1/⟨ u_N^*, A u_N⟩) ⟨ P^*(A^T u_N^*), (M - k I) P u_N⟩.
Using (<ref>) and (<ref>) finishes the proof.
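The constants C_N^u, C_N^u^* and C^k_N of Proposition <ref> and Proposition <ref> can be evaluated numerically when the exact eigen-data (u, ũ^*, k) are available, e.g. on a few high-fidelity solutions for validation purposes. The sketch below uses dense linear algebra together with the identity P(PMP-k_N I)^+P = P (PMP - k_N P)^+ P (and its adjoint analogue), and is of course not usable online:

```python
import numpy as np

def prefactors(A, B, u, u_tilde_star, k, k_N):
    """Prefactors C_N^u, C_N^{u*}, C_N^k as spectral norms (dense, offline/validation only)."""
    n = A.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(A, B)                            # M = A^{-1} B
    Ainv = np.linalg.inv(A)
    s = u_tilde_star @ u
    P  = I - np.outer(u, u_tilde_star) / s               # projector onto [span{u_tilde*}]^perp
    Ps = I - np.outer(u_tilde_star, u) / s               # adjoint projector onto [span{u}]^perp
    # Sandwiched pseudo-inverses P (P M P - k_N I)^+ P and its adjoint counterpart
    G  = P  @ np.linalg.pinv(P @ M @ P - k_N * P)    @ P
    Gs = Ps @ np.linalg.pinv(Ps @ M.T @ Ps - k_N * Ps) @ Ps
    C_u     = np.linalg.norm(G @ Ainv, 2)
    C_ustar = np.linalg.norm(Ainv.T @ Gs, 2)
    C_k     = np.linalg.norm(Gs.T @ (M - k * I) @ G @ Ainv, 2)
    return C_u, C_ustar, C_k
```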
§ PRACTICAL ESTIMATES OF EFFICIENCIES AND PREFACTORS
As in the previous section, to simplify notation, the subscript μ is again omitted in this section.
In view of Proposition <ref> and Proposition <ref>, it is natural to estimate the actual errors e_N^k=|k-k_N|, e_N^u=‖u_μ,N-u_μ‖ and e_N^u^*=‖u^*_μ,N-u^*_μ‖ by, respectively,
Δ_N^k := C_N^k η_N^k, Δ_N^u := C_N^u ‖R_N‖, Δ_N^u^* := C_N^u^* ‖R^*_N‖,
where C_N^k, C_N^u and C_N^u^* are some constants which are
good estimates of the efficiencies
e_N^k/η_N^k(μ), e_N^u/‖R_N(μ)‖, and e_N^u^*/‖R^*_N(μ)‖. For example, one could use practical (computable) estimations of the constants C_N^k, C_N^u and C_N^u^* appearing in Proposition <ref> and Proposition <ref>.
For the applications we are interested in, as will be illustrated below in Section <ref>, we observe that the operators are perturbations of symmetric operators. This is why we investigate in Section <ref> the links between the prefactor C_N^k introduced above and well-known results about this constant in the symmetric case. However, this does not yield practical efficient formulas for the prefactors. This is why we propose in Section <ref> a practical heuristic approach to compute some prefactors C_N^k, C_N^u and C_N^u^* in the reduced basis context, that we use in the numerical results to build practical a posteriori error estimators in the greedy algorithm to select the reduced space. This heuristic approach gives very interesting numerical results for neutronics applications as will be illustrated in Section <ref>.
§.§ Some connections with the symmetric case
The goal of this section is to draw links between the prefactor C_N^k defined in Section <ref>, and the prefactor which is traditionnally used for the computation of a posteriori error estimators for symmetric eigenvalue problems. We begin this section by recalling well-known results about a posteriori error estimators for symmetric eigenvalue problems in Section <ref>.
In particular, we recall that the value of this prefactor in the symmetric context is directly linked with the value of the spectral gap of the exact problem.
We then provide two different approaches to relate the value of the constant C_N^k defined by (<ref>) to the value of the prefactor in the symmetric case: (i) we prove in Section <ref> that in the non-symmetric case, the constant C_N^k can be estimated using the distance between the approximate eigenvalue and the numerical range of the non-symmetric operator;
(ii) in Section <ref> we study the perturbative regime where the non-symmetric operator can be seen as a small perturbation of a symmetric operator, and check that the prefactor for the non-symmetric operator is also a small perturbation of the well-known expression of the prefactor in the symmetric case.
§.§.§ Symmetric case
The aim of this section is to recall some well-known results about a posteriori error estimators for symmetric eigenvalue problems, i.e. in the case where A is a positive definite symmetric matrix and B = I, so that M = A^-1. In this case, all the eigenvalues of M are real and positive, k being its largest one. We still assume that k is a simple eigenvalue of M and denote by k_2 the second largest eigenvalue of M so that k > k_2. Note that in the symmetric case, there holds that u= u^* and P = P^* = P^T.
As a consequence, for a given vector u_N and the value k_N = ⟨ u_N, B u_N ⟩/⟨ u_N, A u_N ⟩ > 0, we have (from (<ref>))
C^k_N = ‖[P (P M P - k_N I)^+ P]^T (M - kI) P (P M P - k_N I)^+ P A^-1‖
= ‖P (P M P - k_N I)^+ P (M - kI) P (P M P - k_N I)^+ P A^-1‖,
and we have the following proposition.
Let A be symmetric positive definite and B = I. Let k be the largest eigenvalue of M=A^-1, let us assume that it is simple, and let us denote by k_2 its second largest eigenvalue. Let us also assume that
k > k_N > k_2 > 0.
Then, there holds
C^k_N = k_2 (k-k_2)/(k_N-k_2)^2 = ‖P A^-1‖ ‖M - kI‖/ dist(k_N, σ((PMP)|_ Span{u}^⊥))^2.
Denoting by k=k_1 > k_2 ≥⋯≥ k_𝒩 the eigenvalues of M = A^-1 and by u_1, u_2, …, u_𝒩 corresponding eigenvectors, there holds
A^-1 = M = ∑_i=1^𝒩 k_i u_i u_i^T,
and
P = ∑_i=2^𝒩 u_i u_i^T.
Then, using functional calculus, there holds
P (P M P - k_N I)^+ P (M - kI) P (P M P - k_N I)^+ P A^-1 =
∑_i=2^𝒩 k_i(k-k_i)/(k_N-k_i)^2
u_i u_i^T.
Since the operator norm is associated with the Euclidean vector norm, the operator norm corresponds to the largest eigenvalue of the (symmetric) operator, so that
C_N^k = max_2≤ i ≤𝒩 k_i(k-k_i)/(k_N-k_i)^2 = k_2(k-k_2)/(k_N-k_2)^2.
Since ‖P A^-1‖ = k_2, ‖M - kI‖ = k-k_2 using (<ref>), and dist(k_N, σ((PMP)|_ Span{u}^⊥))^2 = (k_N-k_2)^2, we easily obtain the second equality.
The constant C^k_N is therefore strongly linked to the spectral gap between the first and second eigenvalue of M in this particular symmetric case. However, this notion of spectral gap is not clear in the non-symmetric context and we provide two points of view which enable to draw a comparison between the symmetric and non-symmetric context.
§.§.§ Numerical range
In this section, we prove that in the general non-symmetric case, the value of the prefactor C_N^k can be estimated using the so-called numerical range of the non-symmetric operator.
Let us first recall the notion of numerical range.
Let Q∈ℝ^𝒩×𝒩.
The numerical range of the matrix Q is defined by
Num(Q) = {⟨ v, Q v⟩, ‖v‖=1}.
Let Q∈ℝ^𝒩×𝒩 and let z ∉σ(Q). Then,
‖(Q-z I)^-1‖⩽ 1/dist(z, Num(Q)).
Let w∈ℝ^𝒩 be a nonzero vector. There holds
dist(z, Num(Q)) ⩽ |z - ⟨ w, Q w⟩/‖w‖^2|
⩽ |⟨ w, (Q-z I)w⟩|/‖w‖^2
⩽‖(Q-z I)w‖/‖w‖.
Taking u=(Q-z I)w, so that w = (Q-z I)^-1 u, the above inequality yields ‖(Q-z I)^-1 u‖⩽‖u‖/dist(z, Num(Q)) for every u, and hence
‖(Q-z I)^-1‖⩽ 1/dist(z, Num(Q)).
Under the same assumptions as in Proposition <ref>, there holds
C^k_N ≤‖M - k I‖ ‖P A^-1‖/( dist(k_N, Num((PMP)|_[ Span{ũ^*}]^⊥)) dist(k_N, Num((P^* M^T P^*)|_[ Span{u}]^⊥)))
Note that the bound given by Proposition <ref> is exactly equal to the value of the prefactor in the symmetric case since when M is symmetric and non-negative, Num ((PMP)_|[ Span{ũ^*}]^⊥)= Num (P^* M^T P^*)_|[ Span{ u}]^⊥=[k_2, k_] where k > k_2 ≥…≥ k_ are the ordered eigenvalues of M.
Starting from (<ref>), it holds that
C^k_N ≤‖P^* (P^* M^T P^* - k_N I)^+ P^*‖ ‖M - kI‖ ‖P (P M P - k_N I)^+ P‖ ‖P A^-1‖.
Using Lemma <ref>, we easily obtain the result.
The upper bound on C^k_N stated in Proposition <ref> goes to infinity if k_N (which is supposed to be an approximation of k) gets close to Num ((PMP)_|[ Span{ũ^*}]^⊥) or Num (P^* M^T P^*)_|[ Span{ u}]^⊥, which can be seen as an underlying spectral gap assumption.
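The distance appearing in this bound can be estimated numerically by sampling supporting half-planes of the (convex) numerical range, here taken over complex unit vectors (which contains the set of Definition <ref>); a rough illustrative sketch is:

```python
import numpy as np

def dist_to_numerical_range(Q, z, n_angles=720):
    """Distance from a complex point z to Num(Q) (returns 0 if z lies inside).

    For each angle theta, Num(Q) is contained in the half-plane
    Re(e^{-i theta} w) <= lambda_max(H_theta), where H_theta is the Hermitian
    part of e^{-i theta} Q; the distance to the convex set Num(Q) is the
    largest distance to such a supporting half-plane.
    """
    best = 0.0
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        Qrot = np.exp(-1j * theta) * Q
        lam_max = np.linalg.eigvalsh((Qrot + Qrot.conj().T) / 2.0)[-1]
        best = max(best, (np.exp(-1j * theta) * z).real - lam_max)
    return best
```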
§.§.§ A Perturbative approach
The aim of this section is to propose another connection between the estimation of the prefactor C_N^k in the non-symmetric case with its well-known expression in the symmetric case. In all this section, we assume that
A = A^ε = S + ε T with S^T = S, T^T = -T, ε>0, and B = I.
In other words, the matrix A is a perturbation of a symmetric positive definite matrix S∈^×, since ε>0 is intended to be a small parameter. We still assume here that B = I for the sake of simplicity.
We also assume that the positive definite symmetric matrix S has a simple positive lowest eigenvalue λ_S, and that u_S is an associated eigenvector.
We then denote by λ_S < λ_S,2≤…≤λ_S, all the eigenvalues of S. We also denote by k_S:= 1/λ_S and by k_S,i:= 1/λ_S,i for 2≤ i ≤.
By a perturbative argument, for any ε>0 small enough,
there exists a simple nonzero eigenvalue λ^ε of smallest modulus of A^ε, and we denote by u^ε
an associated direct eigenvector, u^*,ε an associated adjoint eigenvector, ũ^*,ε defined as in (<ref>) and
k^ε:= 1/λ^ε.
For the sake of simplicity of the perturbative analysis, we assume that the approximate value k_N is independent of ε. This for example makes sense if one uses a reduced-order model constructed from the one-dimensional reduced space 𝒱 = Span{u_S}. In that case, u_N = u_N^* = u_S and thus k_N = k_S.
In this section, using obvious notation, we would like to study the convergence of the prefactor
C_N^k,ε:= ‖[P^*,ε(P^*,ε (M^ε)^T P^*,ε - k_N I)^+ P^*,ε]^T (M^ε - k^ε I) P^ε(P^ε M^ε P^ε - k_N I)^+ P^ε (A^ε)^-1‖
to the value
C_N^k, sym = k_S,2 (k_S-k_S,2)/(k_N-k_S,2)^2
as ε goes to 0.
We first perform a first-order expansion of the eigenvectors and eigenvalues in ε (cf. Chapter 2 of <cit.>).
Let us assume (<ref>) and
‖u^ε‖^2 = ‖u_S‖^2 = 1 and ⟨ u^ε, u_S ⟩ > 0.
Then, as ε goes to 0,
λ^ε = λ_S + O(ε^2),
k^ε = k_S + O(ε^2),
u^ε = u_S - ε(S-λ_S I)^-1_| Span{u_S}^⊥T u_S + O(ε^2),
u^*,ε = u_S + ε(S -λ_S I)^-1_| Span{u_S}^⊥T u_S + O(ε^2),
ũ^*,ε = u_S + ε(S -λ_S I)^-1_| Span{u_S}^⊥T u_S
+ O(ε^2).
Using the results of <cit.>, we decompose λ^ε, u^ε, and u^*,ε at first order as
λ^ε =
λ_A,0+ελ_A,1 + O(ε^2),
u^ε = u_A,0+ε u_A,1 + O(ε^2),
u^*,ε = u^*_A,0+ε u^*_A,1 + O(ε^2).
Using this decomposition, the eigenvalue problem writes
S u_A,0 + ε (S u_A,1 + T u_A,0)
= λ_A,0 u_A,0 + ε (λ_A,0 u_A,1 +
λ_A,1 u_A,0 ) + O(ε^2).
At order 0 in ε, we obtain u_A,0 = u_S and λ_A,0 = λ_S. Then, at first order,
S u_A,1 + T u_A,0 = λ_A,0 u_A,1 + λ_A,1 u_A,0.
Using (<ref>), one can write
‖u^ε‖^2 = ‖u_A,0‖^2 + ε (⟨ u_A,0, u_A,1⟩ + ⟨ u_A,1, u_A,0⟩) + O(ε^2),
which implies that
⟨ u_A,0,u_A,1⟩ = 0 .
Using the latter and projecting (<ref>) onto u_A,0 gives
⟨ S u_A,1,u_A,0⟩ + ⟨ T u_A,0,u_A,0⟩ = λ_A,1⟨ u_A,0,u_A,0⟩ = λ_A,1.
As T is skew-symmetric, it holds ⟨ T u_A,0,u_A,0⟩ = 0, so that
⟨ S u_A,1,u_A,0⟩ = ⟨ u_A,1,S u_A,0⟩ = λ_A,0⟨ u_A,1,u_A,0⟩ = 0.
Hence λ_A,1=0.
Then (<ref>) transforms into
(S-λ_S I)u_A,1 = -T u_S.
The latter has a solution since T u_S ∈ Span{u_S}^⊥ and (Ker(S-λ_S I))^⊥ = Ran(S-λ_S I).
Hence
u_A,1 = - (S-λ_S I)^-1_| Span{u_S}^⊥ T u_S.
We can apply the same procedure for the adjoint eigenvector to obtain the result. Finally,
ũ^*,ε := (A^ε)^T u^*,ε/‖(A^ε)^T u^*,ε‖
= (S-ε T)(u_S-ε u_A,1)/‖(S-ε T)(u_S-ε u_A,1)‖ + O(ε^2)
= (λ_S u_S - ε (Su_A,1 + Tu_S))/‖λ_S u_S - ε (Su_A,1 + Tu_S)‖ + O(ε^2)
= (u_S - (ε/λ_S)(Su_A,1 + Tu_S))(1 + (ε/λ_S)⟨ u_S, Su_A,1 + Tu_S⟩) + O(ε^2)
= u_S - ε u_A,1 + O(ε^2),
which concludes the proof.
We now provide first-order expansions of operators which will be needed in the subsequent estimation of the prefactor.
Let us assume (<ref>) and (<ref>).
Then, as ε goes to 0,
P^ε = P_S + ε P_T + O(ε^2) and
P^*,ε = P_S - ε P_T + O(ε^2),
where
P_S = I - u_S u_S^T, and
P_T = u_A,1 u_S^T - u_S u_A,1^T,
u_A,1 being defined in (<ref>).
We have
P^ε = I - u^ε (ũ^*,ε)^T⟨ u^ε, ũ^*,ε⟩,
= I - (u_S + ε u_A,1+ O(ε^2)) (u_S - ε u_A,1+ O(ε^2))^T
= I - u_S u_S^T +ε (u_A,1 u_S^T - u_S u_A,1^T) + O(ε^2).
Similarly,
P^*,ε = I - ũ^*,ε (u^ε)^T⟨ u^ε, ũ^*,ε⟩
= I - (u_S - ε u_A,1+ O(ε^2)) (u_S + ε u_A,1+ O(ε^2))^T
= I - u_S u_S^T +ε (- u_A,1 u_S^T + u_S u_A,1^T)+ O(ε^2).
This concludes the proof.
We now provide a first order expansion of the operator entering the prefactor in (<ref>), namely
ℳ^ε = [P^ε(P^ε M^ε P^ε-k_NI)^+ P^ε] (M^ε - k^ε I) P^ε(P^ε M^ε P^ε -k_N I)^+ P^ε (A^ε)^-1.
Let us assume (<ref>) and (<ref>).
Then, as ε goes to 0,
ℳ^ε = ℳ_0 + εℳ_1 + O(ε^2),
with
ℳ_0 =Γ_S(S^-1 - k_SI)Γ_S S^-1,
ℳ_1 =-Γ_SS^-1TS^-1Γ_SS^-1 + Γ_S(S^-1 - k_SI)Γ_TS^-1
-Γ_S(S^-1 - k_SI)Γ_S S^-1TS^-1 + Γ_T(S^-1 - k_SI)Γ_S S^-1,
and
Γ_S = P_S(P_SS^-1P_S -k_N I)^+P_S ,
Γ_T = P_S(P_SS^-1P_S -k_N I)^+P_T + P_T(P_SS^-1P_S -k_N I)^+P_S
- P_S(P_SS^-1P_S -k_N I)^+(P_SS^-1TS^-1P_S + P_TS^-1P_S + P_SS^-1P_T)(P_SS^-1P_S -k_N I)^+P_S .
First,
M^ε = (A^ε)^-1= S^-1 - ε S^-1 T S^-1+ O(ε^2).
Therefore, as we have P^ε = P_S + ε P_T +O(ε^2), there holds
P^ε M^ε P^ε = (P_S + ε P_T +O(ε^2))(S^-1 - ε S^-1 T S^-1+ O(ε^2))(P_S + ε P_T + O(ε^2))
= (P_SS^-1 - ε P_SS^-1TS^-1 + ε P_TS^-1+ O(ε^2))(P_S + ε P_T + O(ε^2) )
= P_SS^-1P_S - ε P_SS^-1TS^-1P_S + ε P_TS^-1P_S + ε P_SS^-1P_T + O(ε^2)
= P_SS^-1P_S + ε(P_SS^-1TS^-1P_S + P_TS^-1P_S + P_SS^-1P_T)+ O(ε^2) .
Using a first-order expansion of the pseudo-inverse in ε, there holds
(P^ε M^ε P^ε -k_N I)^+ = [(P_SS^-1P_S -k_N I) + ε(P_SS^-1TS^-1P_S + P_TS^-1P_S + P_SS^-1P_T)+ O(ε^2)]^+
= (P_SS^-1P_S -k_N I)^+
- ε(P_SS^-1P_S -k_N I)^+(P_SS^-1TS^-1P_S + P_TS^-1P_S + P_SS^-1P_T)(P_SS^-1P_S -k_NI)^+ + O(ε^2).
Hence, one can write
P^ε(P^ε^ε P^ε -k_N I)^+P^ε
= P_S(P_SS^-1P_S -k_N I)^+P_S
+ ε[P_S(P_SS^-1P_S -k_N I)^+P_T
- P_S(P_SS^-1P_S -k_N I)^+(P_SS^-1TS^-1P_S
+ P_TS^-1P_S + P_SS^-1P_T)
×(P_SS^-1P_S -k_N I)^+P_S + P_T(P_SS^-1P_S -k_N I)^+P_S]+ O(ε^2).
Defining
Γ^ε := P^ε(P^ε^ε P^ε -k_N I)^+P^ε,
we have just obtained that
Γ^ε = Γ_S + εΓ_T+ O(ε^2).
Using that
ℳ^ε = Γ^ε (M^ε - k^ε I)Γ^ε (A^ε)^-1 = (Γ_S + εΓ_T+ O(ε^2))(S^-1 - k_SI - ε S^-1 T S^-1+ O(ε^2))
×(Γ_S + εΓ_T+ O(ε^2))(S^-1 - ε S^-1 T S^-1+ O(ε^2)),
we easily obtain (<ref>).
We then estimate the prefactor C_N^k in the perturbative case using the previous results.
Let us assume (<ref>) and (<ref>). Let us also assume that
k_S≥ k_N > k_S,2 > 0,
and that k_S,2 is not degenerate.
Then, for ε sufficiently small, there holds
C_N^k, ε = C_N^k, sym + O(ε^2),
where C_N^ k, sym is defined by (<ref>).
Starting from (<ref>), let us first note that ℳ_0 = Γ_S(S^-1 - k_SI)Γ_S S^-1
has the same spectral decomposition as S, that is, eigenvectors u_S,i with corresponding eigenvalues
0 for i = 1, and (k_S,i-k_S) k_S,i/(k_S,i-k_N)^2 for 2 ≤ i ≤𝒩.
From this, we deduce that
‖ℳ_0‖ = max_2≤ i ≤𝒩 |k_S,i-k_S| k_S,i/|k_S,i-k_N|^2
= |k_S,2-k_S| k_S,2/|k_S,2-k_N|^2 = C_N^k, sym.
Note also that the same holds for Γ_S, with eigenvalues
0 for i = 1, and 1/(k_S,i-k_N) for 2 ≤ i ≤𝒩.
Then, noting that k_S,2 is a simple eigenvalue, we can
write down the Taylor expansion of the spectral norm as
C^k, ε_N = ‖ℳ_0 + εℳ_1‖ + O(ε^2) = ‖ℳ_0‖
+ ε u_ℳ,0^T ℳ_1 u_ℳ,0 + O(ε^2),
where u_ℳ,0 the unit eigenvector corresponding to the largest eigenvalue of ℳ_0,
that is u_ℳ,0 = ± u_S,2. For simplicity, let us choose
u_ℳ,0 = u_S,2.
Then
u_ℳ,0^T ℳ_1 u_ℳ,0 = -u_ℳ,0^TΓ_SS^-1TS^-1Γ_SS^-1u_ℳ,0
+ u_ℳ,0^TΓ_S(S^-1 - k_SI)Γ_TS^-1u_ℳ,0
-u_ℳ,0^TΓ_S(S^-1 - k_SI)Γ_S S^-1TS^-1u_ℳ,0
+ u_ℳ,0^TΓ_T(S^-1 - k_SI)Γ_S S^-1u_ℳ,0
=
- k_S,2^3/(k_S,2-k_N)^2
u^T_ℳ,0 Tu_ℳ,0
+k_S,2(k_S,2-k_S)/k_S,2-k_Nu^T_ℳ,0Γ_Tu_ℳ,0
-k_S,2^2(k_S,2-k_S)/(k_S,2-k_N)^2u^T_ℳ,0 Tu_ℳ,0
+k_S,2(k_S,2-k_S)/k_S,2-k_N u_ℳ,0^TΓ_T u_ℳ,0
=0,
where we used that the matrices T and Γ_T are skew-symmetric.
This concludes the proof.
We now illustrate the above bounds on toy numerical examples. Let us introduce the following matrices S,T∈ℝ^4× 4:
S = [ 2000 0 0 0; 0 1500 0 0; 0 0 1000 0; 0 0 0 0.02 ],
T_0 = [ 0 1 1 1; -1 0 1 1; -1 -1 0 1; -1 -1 -1 0 ], T = (‖S‖/‖T_0‖) T_0.
It then holds that k_S = 50 and k_S,2 = 0.001. Let us consider k_N=k_S. A second-order convergence of the difference |C_N^k,ε - C_N^k, sym| as a function of ε is observed on Figure <ref>. This is a strong indication that the estimate of Proposition <ref> is sharp.
In our practical applications of interest, we will indeed observe that the operator is a perturbation of a symmetric operator, but the estimate of the prefactor by the one obtained using the symmetric part is not sufficiently good over a large range of the values of the parameters μ, in particular because the spectral gap (see Assumption (<ref>)) is not uniformly bounded from below (see Section <ref> for a discussion). This is why we will resort to a practical heuristic method to approximate the prefactor, as is now explained in the next section <ref>.
§.§ Practical a posteriori error estimator
The aim of this section is to present the heuristic algorithm we use in order to estimate the prefactors C_N^k, C_N^u and C_N^u^* defined in Proposition <ref> and Proposition <ref> respectively. The algorithm then yields approximations of these constants, denoted by
C_N^k, C_N^u and C_N^u^*, which are used to build a posteriori error estimates for the greedy algorithm presented in Algorithm <ref>.
This heuristic procedure is based on the use of an estimation set of parameter values 𝒫_pref⊂𝒫, containing a finite number of elements, which does not contain any values of the parameters belonging to the training set 𝒫_train. In other words, the estimation set 𝒫_pref is chosen so that 𝒫_train∩𝒫_pref = ∅. High-fidelity solutions of the eigenvalue problems (<ref>) are computed in the offline phase for all μ∈𝒫_pref.
For a given reduced space V_N, let us introduce the efficiency ratios: for all μ∈𝒫,
ℰ^k_N(μ) := |k_μ,N-k_μ|/η_N^k(μ), ℰ^u_N(μ) := ‖u_μ,N-u_μ‖/‖R_N(μ)‖, and ℰ^u^*_N(μ) := ‖u^*_μ,N-u^*_μ‖/‖R^*_N(μ)‖.
The latter quantities are computed in the offline phase for all μ∈_ pref.
By definition, it holds that for all μ∈𝒫,
ℰ^k_N(μ) ≤ C_N^k(μ), ℰ^u_N(μ) ≤ C_N^u(μ) ℰ^u^*_N(μ) ≤ C_N^u^*(μ) .
Our heuristic approach then consists in estimating the constants C_N^k(μ), C_N^u(μ) and C_N^u^*(μ) for all μ∈𝒫 by their maximum values over 𝒫_pref. More precisely, defining
C̅_N^k := max_μ∈𝒫_prefℰ^k_N(μ), C̅_N^u := max_μ∈𝒫_prefℰ^u_N(μ), and C̅_N^u^* := max_μ∈𝒫_prefℰ^u^*_N(μ),
the practical a posteriori error estimates used in the greedy algorithm are then defined by
Δ_N^k(μ) := C̅_N^k η_N^k(μ), Δ_N^u(μ) := C̅_N^u R_N(μ), and Δ_N^u^*(μ) := C̅_N^u^* R^*_N(μ).
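In code, the heuristic amounts to a few lines. The sketch below is only illustrative: solve_hf, solve_rb, eta_k, R and R_star are hypothetical callables standing for the high-fidelity solver, the reduced solver and the residual-based estimators defined above.

```python
import numpy as np

def heuristic_prefactors(P_pref, solve_hf, solve_rb, eta_k, R, R_star):
    """Return (C_k, C_u, C_ustar): maxima of the efficiency ratios over P_pref."""
    Ek, Eu, Eus = [], [], []
    for mu in P_pref:
        k, u, u_star = solve_hf(mu)        # high-fidelity eigenpair and adjoint
        k_N, u_N, u_star_N = solve_rb(mu)  # reduced counterparts
        Ek.append(abs(k_N - k) / eta_k(mu))
        Eu.append(np.linalg.norm(u_N - u) / R(mu))
        Eus.append(np.linalg.norm(u_star_N - u_star) / R_star(mu))
    return max(Ek), max(Eu), max(Eus)

# Online, the practical estimators are then Delta_k(mu) = C_k * eta_k(mu),
# Delta_u(mu) = C_u * R(mu) and Delta_ustar(mu) = C_ustar * R_star(mu).
```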
The efficiency of this practical approach will be illustrated in the next section, where numerical results obtained in neutronics applications are presented.
§ NUMERICAL RESULTS
The aim of this section is to illustrate the behaviour of the proposed reduced basis method on examples arising from neutronics applications. The considered physical model is presented in Section <ref>.
In Section <ref>, we describe the high-fidelity discretization of the problem using a finite element method.
The parametric dependency of the coefficients of the mathematical equations describing the model enables the matrices to be assembled using a so-called affine decomposition, discussed in Section <ref>.
The eigenvalue solver is described in Section <ref>. Finally, numerical tests presented in Section <ref> give an application of the reduced basis method to nuclear core computations.
§.§ The continuous model: two-group neutron diffusion equations
The stationary neutron flux density in a reactor core is determined by solving the transport equation which depends on six variables: position (3-dimensional), velocity direction (2-dimensional), the velocity norm or energy (1-dimensional). It physically states the balance between the emission of neutrons by fission and the absorption, scattering, and leakage of neutrons at the boundary of the spatial domain.
The most common discretization of the energy variable is the multigroup approximation where the energy domain is divided into subintervals called energy groups.
The reactor core is modeled by a bounded, connected and open subset Ω of ℝ^d (where typically d=3, but one- or two-dimensional models are also considered) having a Lipschitz boundary which is piecewise regular. In practice, the neutron flux density is usually modeled by the multigroup neutron diffusion equations <cit.> at the reactor core scale.
Let us now make precise the specific two-group neutron diffusion model we consider in this work. For a given value μ∈𝒫 (the parameter set 𝒫 will be presented below), we consider the two-group neutron diffusion equations where the neutron flux u_μ^ ex:=(ϕ_ 1,μ^ ex,ϕ_2,μ^ ex)∈ H^1_0(Ω)^2 is decomposed into the neutron flux of high energy ϕ_ 1,μ^ ex∈ H^1_0(Ω) and thermal energy ϕ_2, μ^ ex∈ H^1_0(Ω). This model reads as
Find (u^ ex_μ:=(ϕ^ ex_1,μ,ϕ^ ex_2,μ), λ^ ex_μ)∈ H^1_0(Ω)^2 × such that λ^ ex_μ is an eigenvalue with minimal modulus, and
{ -div(D_1,μ∇ϕ_1,μ^ ex) + Σ_11,μϕ_1,μ^ ex + Σ_12,μϕ_2,μ^ ex = λ_μ^ ex F_1,μ(ϕ^ ex_1,μ,ϕ^ ex_2,μ) in Ω ,
-div(D_2,μ∇ϕ_2,μ^ ex) + Σ_21,μϕ^ ex_1,μ + Σ_22,μϕ^ ex_2,μ = λ^ ex_μ F_2,μ(ϕ^ ex_1,μ,ϕ^ ex_2,μ) in Ω ,
ϕ^ ex_i,μ = 0 on ∂Ω , i = 1,2,
supplemented with a normalization condition on u^ ex_μ. Here, for all i,j ∈{1,2}, all μ∈𝒫 and all ϕ_1, ϕ_2 ∈ H^1_0(Ω),
* F_i,μ(ϕ_1,ϕ_2) := χ_i,μ((νΣ_f)_1,μϕ_1 + (νΣ_f)_2,μϕ_2) ;
* χ_i,μ: Ω→ℝ is the neutron total spectrum of group i;
* ν_i,μ: Ω→ℝ is the average number of neutrons emitted per fission of group i;
* Σ_fi,μ: Ω→ℝ is the fission cross section of group i;
* D_i,μ: Ω→ℝ_+ is the diffusion coefficient of group i;
* Σ_ij,μ: Ω→ℝ with Σ_ij,μ = Σ_ti,μ-Σ_s,0,ii,μ if i=j, and Σ_ij,μ = -Σ_s,0,ij,μ otherwise;
* Σ_ti,μ: Ω→ℝ is the total cross section of group i;
* Σ_s,0,ij,μ: Ω→ℝ is the Legendre moment of order 0 of the scattering cross section from group i to group j.
Note that in the equations above, we used the short-hand notation (νΣ_f)_i,μ to refer to the product ν_i,μΣ_fi,μ for i=1,2.
The so-called effective multiplication factor k^ ex_μ:=1/λ^ ex_μ measures the balance between the production and loss of neutrons. If k^ ex_μ=1, the nuclear chain reaction is self-sustaining; if k^ ex_μ>1, the chain reaction is diverging; if k^ ex_μ<1, the chain reaction vanishes.
Let us now describe the considered parametric dependency. We introduce a partition (Ω_m)_m=1^M of the domain Ω with M∈ℕ^* so that for all 1≤ m ≤ M, Ω_m is a domain with Lipschitz, piecewise regular boundaries.
For i=1,2, the coefficients D_i,μ, Σ_ij,μ, χ_i,μ, (νΣ_f)_i,μ are assumed to be piecewise regular on each domain Ω_m for 1≤ m ≤ M.
The parameter set then refers to the set of values that each of these coefficients can possibly take on each subdomain (Ω_m)_1≤ m ≤ M. In other words, the choice of a parameter value μ∈ corresponds to a choice of the values of each of these coefficients on all the subdomains.
In the following, we assume that the set of admissible parameter values is such that all the coefficients of the model belong to L^∞(Ω) and that
there exists α>0 and 0<ε<1 such that for all μ∈𝒫 and all i,j∈{1,2}, i≠ j, almost everywhere in Ω,
α≤ D_i,μ a.e. in Ω,
α≤Σ_ii,μ a.e. in Ω,
|Σ_ij,μ|≤εΣ_ii,μ a.e. in Ω,
0≤ (νΣ_f)_i,μ a.e. in Ω ,
there exists ĩ,j̃∈{1,2} such that χ_ĩ,μ(νΣ_f)_j̃,μ≠ 0 in L^∞(Ω),
so that Problem (<ref>) is well-posed for all μ∈𝒫. The variational formulation associated to Problem (<ref>) reads:
Find (u_μ^ ex:=(ϕ^ ex_1,μ,ϕ^ ex_2,μ), λ^ ex_μ)∈ (H^1_0(Ω)^2×ℝ) such that λ^ ex_μ is an eigenvalue with minimal modulus and,
a_μ((ϕ^ ex_1,μ,ϕ^ ex_2,μ),(ψ_1,ψ_2)) =λ^ ex_μ b_μ((ϕ^ ex_1,μ,ϕ^ ex_2,μ),(ψ_1,ψ_2)) for all (ψ_1,ψ_2)∈ H^1_0(Ω)^2,
where for all (ϕ_1,ϕ_2), (ψ_1, ψ_2)∈ H^1_0(Ω)^2,
a_μ((ϕ_1,ϕ_2),(ψ_1,ψ_2)) := ∫_Ω(D_1,μ∇ϕ_1)·∇ψ_1 + Σ_11,μϕ_1ψ_1 + Σ_12,μϕ_2ψ_1
+ ∫_Ω(D_2,μ∇ϕ_2 )·∇ψ_2 + Σ_21,μϕ_1ψ_2 + Σ_22,μϕ_2ψ_2,
b_μ((ϕ_1,ϕ_2),(ψ_1,ψ_2)) := ∫_Ωχ_1,μ((νΣ_f)_1,μϕ_1 + (νΣ_f)_2,μϕ_2)ψ_1
+ ∫_Ωχ_2,μ((νΣ_f)_1,μϕ_1 + (νΣ_f)_2,μϕ_2)ψ_2,
supplemented with a normalization condition on u_μ^ ex.
The associated adjoint problem then reads,
Find (u_μ^*, ex:= (ϕ_1,μ^*, ex,ϕ_2,μ^*, ex), λ^ ex_μ)∈ (H^1_0(Ω)^2×ℝ) such that λ^ ex_μ is an eigenvalue with minimal modulus and,
a_μ((ψ_1,ψ_2),(ϕ_1,μ^*, ex,ϕ_2,μ^*, ex)) =λ^ ex_μ b_μ((ψ_1,ψ_2),(ϕ_1,μ^*, ex,ϕ_2,μ^*, ex)) for all (ψ_1,ψ_2)∈ H^1_0(Ω)^2,
supplemented with a normalization condition on u_μ^*, ex.
§.§ The high-fidelity discretization
We describe in this section the high-fidelity discretization of the continuous problem introduced in the previous section, that we consider as the reference problem in our reduced basis context.
Let 𝒯_𝒩 be a shape-regular mesh of Ω and V_𝒩 be an associated conformal finite element approximation space of dimension 𝒩. The discrete two-group flux is sought in the product space (V_𝒩)^2, which has dimension 2𝒩. We assume that the mesh is such that the cross sections are regular on each element.
The discrete variational formulation associated to Problem (<ref>) writes
Find (u^𝒩_μ:=(ϕ_1,μ^𝒩,ϕ_2,μ^𝒩), λ_μ^𝒩)∈ ((V_𝒩)^2×ℝ) such that λ_μ^𝒩 is an eigenvalue with minimal modulus and,
a_μ((ϕ_1,μ^𝒩,ϕ_2,μ^𝒩),(ψ_1^𝒩,ψ_2^𝒩)) =λ_μ^𝒩 b_μ((ϕ_1,μ^𝒩,ϕ_2,μ^𝒩),(ψ_1^𝒩,ψ_2^𝒩)), for all (ψ_1^𝒩,ψ_2^𝒩)∈ (V_𝒩)^2,
where (ϕ_1,μ^𝒩,ϕ_2,μ^𝒩) satisfies a normalization condition.
We refer to <cit.> for the a priori error analysis of Problem (<ref>).
Similarly, the discrete variational formulation associated to Problem (<ref>) reads
Find (u^*,𝒩_μ:=(ϕ_1,μ^*,𝒩,ϕ_2,μ^*,𝒩), λ_μ^𝒩)∈ ((V_𝒩)^2×ℝ) such that λ_μ^𝒩 is an eigenvalue with minimal modulus and,
a_μ((ψ_1^𝒩,ψ_2^𝒩),(ϕ_1,μ^*,𝒩,ϕ_2,μ^*,𝒩)) =λ_μ^𝒩 b_μ((ψ_1^𝒩,ψ_2^𝒩),(ϕ_1,μ^*,𝒩,ϕ_2,μ^*,𝒩)), for all (ψ_1^𝒩,ψ_2^𝒩)∈ (V_𝒩)^2,
where (ϕ_1,μ^*,𝒩,ϕ_2,μ^*,𝒩) satisfies a normalization condition.
Let us denote by (θ^1, ⋯, θ^𝒩) a basis
of V_𝒩.
Problem (<ref>) reads as follows in matrix form. For all μ∈𝒫, i=1,2, let u_μ:=(u_μ,k)_1≤ k ≤𝒩∈ℝ^𝒩 be the coordinates of u_μ^𝒩 in the basis (θ^1, ⋯, θ^𝒩) so that
u_μ^𝒩 := ∑_k=1^𝒩 u_μ,kθ^k.
Let us define the matrices A_μ:=( a_μ(θ^k,θ^l))_1≤ k,l ≤𝒩 and B_μ := ( b_μ(θ^k,θ^l))_1≤ k,l ≤𝒩. The pair ( u_μ, λ_μ) ∈ℝ^𝒩×ℝ is then solution to
A_μ u_μ = λ_μ B_μ u_μ,
where u_μ satisfies a normalization condition. This is the high-fidelity eigenvalue problem of the form (<ref>) we consider in the following numerical tests.
Likewise, for Problem (<ref>), the pair ( u^*_μ, λ_μ) ∈ℝ^𝒩×ℝ is solution to
(A_μ)^T u_μ^* = λ_μ (B_μ)^T u_μ^*
together with a normalization condition on u_μ^*. Here, u_μ^* = (u_μ,k^*)_1≤ k ≤𝒩∈ℝ^𝒩 is the vector of coordinates of the function u_μ^*,𝒩 in the basis (θ^1, …, θ^𝒩), i.e.
u_μ^*,𝒩= ∑_k=1^𝒩 u_μ,k^* θ^k.
Problem (<ref>) is the adjoint high-fidelity eigenvalue problem of the form (<ref>) that we consider in the following numerical tests.
§.§ Affine decomposition of the coefficients
In the following numerical tests, the domain Ω is chosen as [0,L]^2 for some L>0. We introduce a partition (Ω_k)_k=1^K of the domain Ω and the parameter functions
entering in the definition of Problem <ref> are assumed to be piecewise constant on each Ω_k for 1≤ k ≤ K.
The parameter μ is thus a K-dimensional vector of either scalars or vectors (containing macro-parameters such as the material, the burn up, the fuel temperature, or the boron concentration for example), which allows to set the values of the coefficients D_1, Σ_11, Σ_12, D_2, Σ_21, Σ_22, χ_1, χ_2, Σ_f1, Σ_f2 on the domain Ω_k, for each 1≤ k ≤ K.
We remark that for all μ∈𝒫, the matrices A_μ and B_μ write
A_μ = ∑_k=1^K∑_p=1^6 f_p(μ_k)A_k,p + M_bc and
B_μ = ∑_k=1^K∑_q=1^4 g_q(μ_k)B_k,q,
where f(μ_k) and g(μ_k) are the
vectors
defined by
f(μ_k) = (D_1(μ_k), Σ_11(μ_k), Σ_12(μ_k), D_2(μ_k), Σ_21(μ_k), Σ_22(μ_k) )
g(μ_k) = ((χ_1νΣ_f1)(μ_k),(χ_1νΣ_f2)(μ_k),(χ_2νΣ_f1)(μ_k),(χ_2νΣ_f2)(μ_k)),
A_k,p and B_k,q (1≤ k≤ K, 1≤ p≤ 6, 1≤ q≤ 4) are parameter-independent × matrices, and M_bc∈ℝ^× is a parameter-independent matrix which stems from the discretization of the boundary condition. As a consequence, all these matrices can be pre-computed in order to efficiently assemble the matrices A_μ and B_μ online, and estimate the residuals R_N(μ) and R^*_N(μ) as we now explain.
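As an illustration, the online assembly can be written in a few lines. The sketch below assumes (as an implementation choice, not taken from the paper) that the parameter-independent matrices are stored offline as lists A_kp[k][p] and B_kq[k][q] of SciPy sparse matrices together with M_bc, and that f and g return the coefficient vectors defined above.

```python
def assemble_online(mu, A_kp, B_kq, M_bc, f, g):
    """Assemble A_mu and B_mu from the affine decomposition (K subdomains)."""
    A_mu = M_bc.copy()
    B_mu = 0.0 * M_bc
    for k, mu_k in enumerate(mu):
        f_k, g_k = f(mu_k), g(mu_k)   # length-6 and length-4 coefficient vectors
        for p in range(6):
            A_mu = A_mu + f_k[p] * A_kp[k][p]
        for q in range(4):
            B_mu = B_mu + g_k[q] * B_kq[k][q]
    return A_mu, B_mu
```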
Thanks to the affine decomposition of the matrices A_μ and B_μ above, the residual norm is easily computable online, as it only requires algebraic operations over vectors of the size of the (small) reduced basis, which is N. Indeed, let (ξ_1,…,ξ_N) be an orthonormal basis of the chosen reduced space for the scalar product ⟨·,·⟩, and let V_N ∈ℝ^𝒩× N be the matrix containing the coordinates of the basis (ξ_1, …, ξ_N) in the canonical basis of ℝ^𝒩. For 1≤ k,l ≤ K, 1≤ p,p'≤ 6, 1≤ q,q'≤ 4, we define, in the offline stage, the reduced matrices of dimension N× N as follows:
D^N_k,l,p,p' = V_N^tA_k,p^t 𝕏^-1A_l,p'V_N
E^N_k,l,p,q = V_N^tA_k,p^t 𝕏^-1B_l,qV_N
F^N_k,l,q,q' = V_N^tB_k,q^t 𝕏^-1B_l,q'V_N
D^N_bc,k,p = V_N^tM_bc^t 𝕏^-1A_k,pV_N
E^N_bc,k,q = V_N^tM_bc^t 𝕏^-1B_k,qV_N
F^N_bc = V_N^tM_bc^t 𝕏^-1M_bcV_N,
where 𝕏 stands for the Gram matrix of size 𝒩×𝒩 for the considered scalar product ⟨·,·⟩, commonly called the mass matrix, and A^t denotes the transpose of the matrix A.
Then, in the online stage, for a given parameter μ, we can assemble the residual norm as
R_N(μ) := ‖(B_μ-k_μ,N A_μ)u_μ,N‖ = √(c_μ,N^tG_μ,Nc_μ,N),
with
G_μ,N = |k_μ,N|^2 (∑_k,l=1^K∑_p,p'=1^6 f_p(μ_k)f_p'(μ_l)D_k,l,p,p'^N + ∑_k=1^K∑_p=1^6 f_p(μ_k) (D^N_bc,k,p + (D^N_bc,k,p)^t) + F^N_bc)
- k_μ,N( ∑_k,l=1^K∑_p=1^6∑_q=1^4 f_p(μ_k)g_q(μ_l)(E_k,l,p,q^N+(E_k,l,p,q^N)^t) + ∑_k=1^K∑_q=1^4 g_q(μ_k)(E^N_bc,k,q+(E^N_bc,k,q)^t))
+ ∑_k,l=1^K∑_q,q'=1^4 g_q(μ_k)g_q'(μ_l)F_k,l,q,q'^N .
A similar construction is readily possible for R^*_N(μ).
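The same offline/online splitting holds for any affine decomposition A_μ = ∑_q θ_q(μ) A_q, B_μ = ∑_r σ_r(μ) B_r. The generic sketch below is a simplified version of the formulas above, not tied to the specific indexing in k, p, q, and passes 𝕏^{-1} explicitly for clarity; an actual implementation would rather reuse a factorization of 𝕏.

```python
import numpy as np

def offline_blocks(A_terms, B_terms, X_inv, V_N):
    """Blocks (M_i V_N)^t X^{-1} (M_j V_N) for all pairs of affine terms."""
    MV = [M @ V_N for M in list(A_terms) + list(B_terms)]
    XMV = [X_inv @ W for W in MV]
    return [[W_i.T @ Z_j for Z_j in XMV] for W_i in MV]

def online_residual(theta, sigma, k_N, c_N, blocks):
    """||(B_mu - k_N A_mu) V_N c_N||_{X^{-1}} from the precomputed blocks."""
    w = np.concatenate([-k_N * np.asarray(theta), np.asarray(sigma)])
    val = 0.0
    for i in range(len(w)):
        for j in range(len(w)):
            val += w[i] * w[j] * (c_N @ blocks[i][j] @ c_N)
    return np.sqrt(max(val, 0.0))
```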
§.§ Eigenvalue solver
The eigenvalue solver, for both high-fidelity and reduced-order models, relies on the inverse power method given in Algorithm <ref>. In practice, this algorithm is run with relative error tolerances set to τ_u=10^-6 and τ_λ=10^-7.
The direct high-fidelity eigenvalue problem (<ref>) is solved by applying Algorithm <ref> with 𝙰 = A_μ and 𝙱 = B_μ. Likewise, the adjoint eigenvalue problem (<ref>) is solved by applying Algorithm <ref> with 𝙰 = A^T_μ and 𝙱 = B^T_μ. The resolutions of the reduced eigenvalue problems are performed similarly.
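Since Algorithm <ref> is not reproduced in this excerpt, the sketch below shows one standard way to implement such an inverse power (outer) iteration for A u = λ B u with the tolerances quoted above. A and B are assumed to be SciPy sparse matrices, and the eigenvalue estimate uses a Rayleigh-type quotient, which may differ from the actual implementation.

```python
import numpy as np
import scipy.sparse.linalg as spla

def inverse_power(A, B, tol_u=1e-6, tol_lam=1e-7, maxit=1000):
    """Eigenpair of A u = lam B u with lam of minimal modulus (fundamental mode)."""
    solve = spla.factorized(A.tocsc())   # reuse the sparse LU factorization of A
    u = np.ones(A.shape[0])
    u /= np.linalg.norm(u)
    lam_old = np.inf
    for _ in range(maxit):
        w = solve(B @ u)                 # one application of A^{-1} B
        u_new = w / np.linalg.norm(w)
        lam = (u_new @ (A @ u_new)) / (u_new @ (B @ u_new))
        if np.linalg.norm(u_new - u) < tol_u and abs(lam - lam_old) < tol_lam * abs(lam):
            return lam, u_new
        u, lam_old = u_new, lam
    return lam, u
```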
§.§ Numerical tests
The aim of this section is to illustrate the numerical behaviour of the reduced basis method and the proposed a posteriori error estimators on two different numerical test cases. Let us introduce the notation
e_N^k(μ) = |k_μ-k_μ,N|, e_N^k,rel(μ) = |k_μ-k_μ,N|/|k_μ|,
e_N^u(μ) = ‖u_μ-u_μ,N‖_ℓ^2,
e_N^u,rel(μ) = ‖u_μ-u_μ,N‖_ℓ^2/‖u_μ‖_ℓ^2, e_N,L^2^u,rel(μ) = ‖u_μ-u_μ,N‖_L^2/‖u_μ‖_L^2,
e_N^u^*(μ) = ‖u^*_μ-u^*_μ,N‖_ℓ^2,
where the ℓ^2 norm is the Euclidean norm, and L^2 refers to the L^2 functional norm applied to the functions of the finite element space built from the vectors in ℝ^𝒩 through (<ref>).
Moreover, we denote by t_ HF and t_ RB the mean computational times for one run (for a given parameter) of the high-fidelity and reduced solvers respectively.
§.§.§ Test case 1: 2D two-group toy example
The reduced basis method is first run on a simple test case where L=60 (we use here reduced units)
modeled with 𝒩= 2 × 841 degrees of freedom along K=4 subdomains. Figure <ref> shows the mesh used for the test case as well as the decomposition of Ω into four subdomains. Here, we set B_μ=I for all μ∈𝒫.
The training and test sets 𝒫_ train and 𝒫_ test are constructed using the following random sampling scheme: in each subdomain Ω_k, for 1≤ k ≤ K, the values of the coefficients are independently distributed according to the following laws:
* Σ_s,0,ij: uniform law on [0,0.15] , 1≤ i,j≤ 2;
* Σ_t1 and Σ_t2: uniform law on [2(Σ_s,0,12+Σ_s,0,21),0.7];
* D_i=1/3Σ_ti, i=1,2;
* χ_iνΣ_fj=δ_ij, 1≤ i,j≤ 2.
The coefficients are chosen so that the coercivity of Problems (<ref>) and (<ref>) are ensured.
The parametric spaces 𝒫_train and 𝒫_test are selected following the random sampling procedure described above so that #𝒫_train = 300, #𝒫_test = 50 and 𝒫_train∩𝒫_test = ∅.
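For reproducibility, one possible way to draw the coefficients according to the sampling laws listed above is sketched below; the seed, data layout and the disjointness check between 𝒫_train and 𝒫_test are implementation choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_subdomain():
    """One random draw of the two-group coefficients on a single subdomain."""
    sig_s = rng.uniform(0.0, 0.15, size=(2, 2))      # Sigma_s,0,ij
    low = 2.0 * (sig_s[0, 1] + sig_s[1, 0])
    sig_t = rng.uniform(low, 0.7, size=2)            # Sigma_t1, Sigma_t2
    D = 1.0 / (3.0 * sig_t)                          # D_i = 1 / (3 Sigma_ti)
    chi_nu_sigf = np.eye(2)                          # chi_i nu Sigma_fj = delta_ij
    return {"Sigma_s": sig_s, "Sigma_t": sig_t, "D": D, "chi_nu_sigf": chi_nu_sigf}

K = 4
P_train = [[draw_subdomain() for _ in range(K)] for _ in range(300)]
P_test = [[draw_subdomain() for _ in range(K)] for _ in range(50)]
```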
In the offline stage, the greedy algorithm is performed using the a posteriori estimator
Δ_N(μ) =η_N^k(μ)
defined in (<ref>) for all μ∈𝒫 (in other words, we choose here C̅_N^k(μ)=1 for all μ, following the notation (<ref>)).
The left part of Figure <ref> depicts the fast convergence of the reduced basis method with respect to the size of the reduced space. The relative errors on the eigenfunctions e_N^u,rel(μ) and e_N,L^2^u,rel(μ) follow the same trend. The relative error e_N^k,rel(μ) between the high-fidelity solution and the reduced basis solution on the multiplication factor k_μ reaches the order of 10^-5
for N=100. Moreover, this error decreases by 4 orders of magnitude from N=10 to N=100. As expected, the error on the eigenvalue decreases twice as fast as the error on the eigenvector.
Moreover, we checked that the value of the a posteriori error estimator η_N^k(μ) stays below 10^-12 for the selected parameters, as expected.
In terms of computational time, the right part of Figure <ref> shows that, in the chosen setting, while the high-fidelity solution is computed in about 5.8s, the reduced solution is computed in at most 0.09s, which is overall 60 to 115 times faster than the high-fidelity solver for a relative error of order 10^-4 to 10^-5 on the eigenvalue.
It is also interesting to look at the behavior of the implemented a posteriori error estimators. The relation between the error |k_N(μ)-k(μ)| and the estimator Δ_N(μ) =η_N^k(μ) we used here can be first analyzed by looking at the prefactor C_N^k(μ), defined in (<ref>). The value of C_N^k(μ) on the test set 𝒫_ test is presented in Figure <ref>. In that particular case, we
fall into the framework developed in
Section <ref>. Indeed, when we compute the perturbation magnitude ε_μ as
ε_μ = ‖(A_μ-A_μ^T)/2‖/‖(A_μ+A_μ^T)/2‖,
we observe that ε_μ varies between 3× 10^-7 and 3× 10^-6 for μ∈𝒫_test.
Therefore, we expect C_N^k, sym(μ) defined in (<ref>) to be a good approximation of C_N^k(μ). Unfortunately, this is not always the case, as we observe on the left plot of Figure <ref>.
Actually, in the cases where the prefactors differ a lot, we observe that condition (<ref>) in Proposition <ref> is not satisfied, which explains why the perturbative expansion may not be sharp.
Figure <ref> compares the behavior of the simple a posteriori error estimators R_N(μ), R_N^*(μ) and η_N^k(μ) defined in (<ref>), with the corresponding errors e_N^u(μ), e_N^u^*(μ), and e_N^k(μ) over the dimension of the reduced space.
The plots of the true errors and the corresponding estimators are parallel for N ≥ 20, which confirms that the computed a posteriori estimators converge at a rate similar to that of the associated true errors.
Actually, the quantity η_N^k(μ) seems to be a reliable and efficient a posteriori estimate of the true error up to roughly a constant multiplicative factor over a large range of parameter values, as Figure <ref> illustrates.
In terms of absolute value, for N=100, the estimator η_N^k(μ) for the multiplication factor is about 10^-2 while the true error is approximately 10^-4: this illustrate the importance of introducing prefactors C̅_N^k(μ), C̅_N^u(μ) and C̅_N^u^*(μ) to estimate the true errors, see (<ref>). This is important in particular in order to stop the greedy procedure when the real error is below a given threshold. This will be discussed below.
§.§.§ Test case 2: 2D two-group "minicore" problem
We now provide a second, more challenging, test case called minicore.
The core is modeled as a square of side length L=107.52 cm. As Figure <ref> shows, it is made up of K=25 assemblies (1 fuel assembly composed of a mix of uranium dioxyde and Gadolinium oxyde denoted UGD12 + 8 fuel assemblies composed of uranium dioxyde labeled UO2 + 16 radial reflector assemblies named REFR),
each one being 21.504 cm long.
It is discretized into 𝒩=2602 degrees of freedom. Here, B_μ≠ I, and the Dirichlet boundary condition in Problem (<ref>) is replaced by a Robin condition, called void boundary condition, which writes
D_i(r,μ) ∇ϕ_i(r,μ)·n + 1/2 ϕ_i(r,μ)=0 on ∂Ω,
1≤ i ≤ 2,
where n is the outward unit normal vector to
∂Ω.
In this test case, the parameter μ
stands for five parameters which determine all the physical parameters entering (<ref>).
More precisely, by recalling the partition (Ω_k)_k=1^K of the domain Ω, the parameter set 𝒫 is the 5K dimensional vector space
𝒫 = {μ = (μ_1,…,μ_K), ∀ 1≤ k≤ K, μ_k ∈ℝ^5 },
such that μ_k contains the following information attached to the subdomain Ω_k:
* the nature of the material in Ω_k;
* the burnup value, in MWd/ton;
* the fuel temperature, in K;
* the boron concentration, in particle per million (ppm);
* the moderator density.
The parametric sets 𝒫_train and 𝒫_test are randomly generated in 𝒫 such that
#𝒫_train = 1000, #𝒫_test = 50, and 𝒫_train∩𝒫_test = ∅ .
Regarding the offline stage, in order to avoid any stability issue, a POD procedure over a reduced space of dimension 10 (generated from 5 direct plus 5 adjoint eigenvectors snapshots) is used to initialize the greedy procedure (see Algorithm <ref>). Then, the greedy procedure is continued using the a posteriori estimator R_N+R^*_N, as the quantity of interest here is the two-group flux (ϕ_1^𝒩,ϕ_2^𝒩) as well as its adjoint (ϕ_1^*,𝒩,ϕ_2^*,𝒩).
The left part of Figure <ref> depicts mean relative errors e_N^k, rel, e_N,L^2^u, rel, and e_N^u, rel as a function of the dimension of the reduced basis.
The relative error on the multiplication factor is of the order of 10^-5 for N = 80.
Typically, as the left part of Figure <ref> shows, for a certain μ_0 ∈𝒫 and for N=100, the maximum error on the associated first-group flux does not exceed 3.2 × 10^-4; as for the second group, the right part of Figure <ref> shows that the flux error is locally gathered in an area of low flux, quite far from the hot spot.
Importantly, the reduced method enables the solution to be computed faster than the high-fidelity approach, which typically takes about 4.56 s for the present test case.
The right part of Figure <ref> illustrates that the time-saving factor is a decreasing function of the dimension of the reduced space N, and exhibits a large computational gain compared to the high-fidelity solver.
It is observed that for a relative error on k_ eff ranging from 10^-4 to 10^-6, the reduced solution can be obtained with a computational time 50 to 300 times smaller than that of the high-fidelity solution.
We now study the certification of the method performed by the a posteriori estimator. Figure <ref> shows that, although the residuals display similar values as those for the real eigenvector errors, for the eigenvalue, the order of magnitude of the a posteriori estimator is roughly 10 times larger than the real error, for N≥ 30. Despite the fairly good parametric variations of the estimate, illustrated by Figure <ref>, the gap between real error and estimator must be corrected in order to implement a relevant stopping criterion in the greedy algorithm. This points out a certain variation of the prefactor C_N^k(μ) over the dimension of the reduced space N.
In order to bring a correction to the model, the practical efficiency of the estimator proposed in Section <ref> is computed.
The right plot of Figure <ref> shows that the efficiency ℰ_N^k defined in (<ref>) levels off for N=100 at the order of magnitude of 10^-1, and does not depend too much on the parameter μ.
Therefore, we propose to apply the procedure outlined in Section <ref> to build a posteriori error estimators of the form (<ref>), with constants C_N^k, C_N^u and C_N^u^* approximated by (<ref>). This requires choosing a set 𝒫_pref, randomly drawn in 𝒫 such that
𝒫_pref⊂𝒫, #𝒫_pref = 10, and 𝒫_pref∩(𝒫_train∪𝒫_test) = ∅ .
As a result of this procedure, Figure <ref> shows that the order of magnitude of the modified estimator corresponds to the one of the real error, showing that the new a posteriori estimator tends to be an optimal stopping indicator.
Finally, we gather in Table <ref> the measured computational times for several quantities of interest and main stages in Python. Overall, the reduced basis method is very useful when the number p of solutions that must be computed is very large, such as in an optimization process. Roughly, if t_offline denotes the computational time of the offline stage, t_HF the high-fidelity solver computational time, and t_RB the reduced solver computational time, the reduced basis method becomes relevant when there holds
t_offline + p × t_RB < p × t_HF,
that is
p > t_offline/(t_HF-t_RB).
For this test case, this corresponds to p > 1743 parameter values.
§ ACKNOWLEDGEMENTS
This project has received funding from the
European Research Council (ERC) under the European Union's Horizon 2020
research and innovation programme (grant agreement EMC2 No 810367).
GD was supported by the French ‘Investissements d’Avenir’ program, project Agence
Nationale de la Recherche (ISITE-BFC) (contract ANR-15-IDEX-0003). GD was also supported by the Ecole des Ponts-ParisTech.
|
http://arxiv.org/abs/2307.07370v1 | 20230714142526 | AIC-AB NET: A Neural Network for Image Captioning with Spatial Attention and Text Attributes | [
"Guoyun Tu",
"Ying Liu",
"Vladimir Vlassov"
] | cs.CV | [
"cs.CV",
"cs.AI",
"cs.LG"
] |
AIC-AB NET: A Neural Network for Image Captioning with Spatial Attention and Text Attributes
Guoyun Tu
Department of Computer Science
KTH Royal Institute of Technology
Stockholm, Sweden
[email protected]
Ying Liu
Norna
Stockholm, Sweden
[email protected]
Vladimir Vlassov
Department of Computer Science
KTH Royal Institute of Technology
Stockholm, Sweden
[email protected]
August 12, 2023
=================================================================================================================================================================================================================================================================================================
Image captioning is a significant field across computer vision and natural language processing. We propose and present AIC-AB Net, a novel Attribute-Information-Combined Attention-Based Network that combines a spatial attention architecture and text attributes in an encoder-decoder.
For caption generation, adaptive spatial attention determines which image region best represents the image and whether to attend to the visual features or the visual sentinel. Text attribute information is synchronously fed into the decoder to help image recognition and reduce uncertainty.
We have tested and evaluated our AIC-AB Net on the MS COCO dataset and a newly proposed Fashion dataset. The Fashion dataset is employed as a benchmark of single-object images. The results show the superior performance of the proposed model compared to the state-of-the-art baseline and ablated models on both the images from MS COCO and our single-object images. Our AIC-AB Net outperforms the baseline adaptive attention network by 0.017 (CIDEr score) on the MS COCO dataset and 0.095 (CIDEr score) on the Fashion dataset.
image captioning, neural networks, spatial attention, text attributes
Regular Research Paper
§ INTRODUCTION
The significant growth in web images has brought plenty of opportunities for computational understanding of images. Automatic image captioning is crucial for many applications, including image searching, categorizing, and indexing, and it has attracted attention from academia and industry. One can split the image captioning task into two parts (1) image recognition to detect and recognize objects in an image and (2) caption generation to summarize the extracted information and put it into text that humans understand.
Significant successes have been achieved in the problem of image captioning using Deep Learning (DL). Many works on image captioning have applied DL methods to images containing multiple objects and rich contextual information. As a result, various image captioning methods have been proposed, such as Visual space-based model <cit.>, multimodal space-based model <cit.>, dense captioning <cit.>, whole scene-based model, encoder-decoder architecture-based model <cit.>, compositional architecture-based model <cit.>, attention-based model <cit.>, semantic concept-based model <cit.>, stylized captions <cit.>. We address the following two problems (1) generating captioning on single-object images; (2) combining semantic text attributes and adaptive attention.
The first blind spot of the previous studies is that they focused on general multi-object images, whereas generating captioning on single-object images is barely studied. Two features distinguish single-object images from general images. First, single-object images contain more small details, thus requiring a higher recognition resolution. Second, the generated descriptions include more adjectives and nouns. In this work, we apply DL models on a fashion dataset of 144,422 images from 24,649 products. This dataset is used as a benchmark of single-object images. Each image has only one fashion item, and its caption describes that item, including its category, color, texture, and other details.
Secondly, previous DL approaches either boost image captioning with semantic concept <cit.> or make use of attention encoder-decoder framework <cit.>, i.e., two frameworks are used separately and cannot inform each other. We propose a novel attribute-image-combined attention-based neural network architecture ([<https://github.com/guoyuntu/Image-Captioning-On-General-Data-And-Fashion-Data>]) based on the adaptive attention network <cit.>. It combines the semantic concept-based architecture with the spatial attention-based architecture.
In AIC-AB Net, the text attributes are fed into each step of the LSTM decoder as an additional input when generating the captions. The attributes are obtained by an auxiliary CNN classifier.
The major contributions of our work are as follows.
* We propose an Attribute-Image-Combined Attention-Based Network (AIC-AB Net) that combines the adaptive attention architecture and text attributes in an encoder-decoder framework and, as a consequence, improves the accuracy of image captioning compared to state-of-the-art alternatives.
* We evaluate AIC-AB Net and several other DL models on a single-object dataset, the Fashion dataset, containing 144,422 images from 24,649 products.
§ RELATED WORK
Prior works, e.g., <cit.>, use DL encoder-decoder methods, attention-based and semantic concept-based DL models for automated image captioning.
Encoder-Decoder for Image Captioning. The existing caption generation methods include template-based image captioning <cit.>, retrieval-based image captioning <cit.>, and language model caption generation <cit.>.
Most methods <cit.> use Deep Learning.
An encoder-decoder,
a popular approach to tackling language tasks, such as machine translation, can be used for image captioning to encode visual information and decode it in a natural language. A
network of this category extracts global image features from the hidden activations of a CNN and feeds them into an LSTM to generate a caption as a sequence of words; one word at each step depends on a context vector, the previous hidden state, and the previously generated words <cit.>.
Attention-based Networks. Following the trends to use the encoder-decoder architecture on image captioning, methods based on attention mechanisms <cit.> have been increasingly popular as they provide computer vision algorithms with the ability to know where to look. Instead of considering the image as a whole scene, an attention-based network dynamically focuses on various parts of the input image while generating captions.
An adaptive attention network is an encoder-decoder-based approach for image captioning. The decoding stage is split into two parts.
First, the Spatial Attention Network outputs a context vector c_t that depends on the feature map V extracted from the encoder and the hidden state h_t of the LSTM decoder. It could be considered as the attention map. The second part is Visual Sentinel, which can fall back on when it chooses not to attend to the image. This visual sentinel s_t is dependent on the input x_t, the hidden state h_t-1, and the memory cell m_t. Then, the new adaptive context vector is modeled as 𝐜_t = ∑_i=1^kα_tiυ_ti. 𝐜_t and h_t
determine the conditional probability for each time step of LSTM.
Semantic concept-based Models. The idea of semantic concept-based models is to extract a set of semantic concept proposals. These concepts, combined with visual features and hidden states, are used to generate the captions in the decoding stage. Karpathy et al. <cit.> proposed a model, in which dependency tree relations
are applied in training to map the sentence segments with the image regions with a fixed window context. Wu et al. <cit.> proposed a network, including high-level semantic concepts explicitly.
It adds an intermediate attribute prediction layer in an encoder-decoder framework to extract from images attributes used to generate semantically rich image captions.
The proposed technique in this paper is partially inspired by <cit.>, where Ting et al. suggested that the high-level attributes are more semantically rich and easily translated into understandable human sentences. Moreover, the best method is to feed attribute representations and visual features as a joint input to LSTM at each time step.
The prior works are limited to using only one architecture, either semantic concept-based or spatial attention-based.
We propose AIC-AB Net, which combines both structures so that they can inform each other and, as a consequence, improves the accuracy of image captioning.
§ AIC-AB NET: ATTRIBUTE-IMAGE-COMBINED ATTENTION-BASED NETWORK
We present our Attribute-Image-Combined Attention-Based Network (AIC-AB Net), a novel encoder-decoder neural framework for image captioning. AIC-AB Net is an end-to-end network that tackles image captioning and, at the same time, generates the attention map of the image. Fig. <ref> shows the network architecture. We extract image features using a pre-trained ResNet-152 <cit.>, which implements residual learning units to alleviate the degradation of deep neural networks. We freeze the first six layers and take the last convolutional layer as visual features. We believe the extracted features retain both object and interaction information from the images.
Formally, let us denote the whole dataset as 𝔇 = {(𝐗_i,𝐲_i)} (i=1,2,...,N), where 𝐗_i denotes the
i-th image and 𝐲_i = (y_1,y_2,...,y_t) denotes its caption label as a sequence of words. In an encoder-decoder framework, the LSTM network plays the role of decoder and each conditional probability is modeled as:
∑_t=1^Llog p(y_t | y_1,y_2,...y_t-1,𝐗) = f(𝐡_t,𝐜_t)
where f is a nonlinear function that outputs the probability of y_t. 𝐜_t is the visual context vector at time step t extracted from image 𝐗. 𝐡_t denotes the hidden state at t. For LSTM, 𝐡_t could be modeled as:
𝐡_t = LSTM(𝐱_t,𝐡_t-1,𝐦_t-1)
where 𝐱_t is the input feature map, 𝐦_t-1 is the memory cell vector at
t-1.
§.§ Text Attribute Extractor
The first step in adding the text attributes into the LSTM decoder is to extract a set of words that are likely to appear in the image’s description. These words are mainly nouns and adjectives. As suggested by <cit.>, we build the vocabulary V using the 1000 most common words of the training captions. Given a vocabulary of attributes, the next step is to detect these words from images. We train the text attribute extractor using a CNN-based model. An image passes through the pre-trained VGG-16 model, and we take the Conv5 layer as the input feature map, which is fed into a 2-layer CNN followed by one fully connected layer. The probability p_i^w of image b_i containing word w is computed by a sigmoid layer:
p_i^w = 1/(1+exp(-(v_w^Tϕ(b_i)+u_w)))
where ϕ(b_i) is the fully connected representation of image b_i, and v_w and u_w are the weight vector and bias associated with word w.
Due to the highly imbalanced ratio of positive labels (5 words per image) to negative ones, the loss function used for training the detector is
𝔏_i^C = -[β_p p(x_i)log(q(x_i)) + β_n(1-p(x_i))log(1-q(x_i))],
where β_p and β_n are class weights assigned to give a higher penalty to errors on the rare positive labels. Because of the very unbalanced labeling strategy, we set
β_p = 100 β_n.
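One possible PyTorch rendering of this weighted binary cross-entropy (reading the minus sign as applying to both terms; the class name below is chosen here for illustration) is:

```python
import torch
import torch.nn as nn

class WeightedAttributeLoss(nn.Module):
    """Binary cross-entropy with asymmetric class weights, beta_p = 100 * beta_n."""
    def __init__(self, beta_n=1.0):
        super().__init__()
        self.beta_n = beta_n
        self.beta_p = 100.0 * beta_n

    def forward(self, q, p):
        # q: predicted attribute probabilities in (0, 1); p: 0/1 labels; both (batch, vocab)
        eps = 1e-7
        q = q.clamp(eps, 1.0 - eps)
        loss = -(self.beta_p * p * torch.log(q) + self.beta_n * (1 - p) * torch.log(1 - q))
        return loss.mean()
```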
§.§ Attribute-combined Model
By injecting the high-level attributes into the adaptive attention framework, we obtain our AIC-AB Net (Fig. <ref>). In our model, the decoder is modified to additionally integrate visual information and high-level attributes. As Fig. <ref> shows, the encoded image features are fed at the start of the LSTM and the text attributes are fed into each time step. Accordingly, given the attribute representation 𝐀, the calculation of the hidden state at each time step is converted from Eq. (<ref>) to:
𝐡_t = LSTM(𝐱_t, 𝐀,𝐡_t-1,𝐦_t-1)
Text attributes are thus added to the original adaptive attention framework. The architecture of AIC-AB Net at one time step is shown in Fig. <ref>. The probability over the vocabulary at time step t can be computed as:
𝐩_t = softmax(𝐖_p(𝐜̂_t+𝐡_t))
where 𝐖_p is weight parameter to be learnt.
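To make the decoding step concrete, the PyTorch sketch below shows one possible reading of a single time step: the LSTM cell receives the concatenated word and attribute embeddings, a visual sentinel is computed, and an adaptive context vector is combined with the hidden state before the softmax. Layer names, gating details and the feature dimension are assumptions following the adaptive-attention literature, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttrAdaptiveStep(nn.Module):
    """One AIC-AB-style decoding step (illustrative sketch)."""
    def __init__(self, word_dim=255, attr_dim=51, feat_dim=2048, hid=512, vocab=10000):
        super().__init__()
        in_dim = word_dim + attr_dim
        self.lstm = nn.LSTMCell(in_dim, hid)
        self.x_gate, self.h_gate = nn.Linear(in_dim, hid), nn.Linear(hid, hid)
        self.v_att, self.h_att, self.s_att = nn.Linear(feat_dim, hid), nn.Linear(hid, hid), nn.Linear(hid, hid)
        self.w_att = nn.Linear(hid, 1)
        self.proj_v = nn.Linear(feat_dim, hid)
        self.out = nn.Linear(hid, vocab)

    def forward(self, w_t, a, V, h, m):
        # w_t: (B, word_dim) word embedding; a: (B, attr_dim) attribute embedding
        # V: (B, k, feat_dim) region features; h, m: previous LSTM hidden and memory states
        x = torch.cat([w_t, a], dim=1)
        h_new, m_new = self.lstm(x, (h, m))
        s = torch.sigmoid(self.x_gate(x) + self.h_gate(h)) * torch.tanh(m_new)   # visual sentinel
        z = self.w_att(torch.tanh(self.v_att(V) + self.h_att(h_new).unsqueeze(1))).squeeze(-1)
        z_s = self.w_att(torch.tanh(self.s_att(s) + self.h_att(h_new)))
        alpha = F.softmax(torch.cat([z, z_s], dim=1), dim=1)                      # (B, k + 1)
        c = (alpha[:, :-1].unsqueeze(-1) * self.proj_v(V)).sum(dim=1)             # spatial context
        beta = alpha[:, -1:]                                                      # sentinel gate
        c_hat = beta * s + (1.0 - beta) * c                                       # adaptive context
        p_t = F.softmax(self.out(c_hat + h_new), dim=1)                           # probability over vocabulary
        return p_t, h_new, m_new
```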
§ EVALUATION SETUP
To evaluate the proposed AIC-AB Net, we have conducted evaluation experiments using multi-object and single-object image datasets and have compared its performance with competing baselines.
§.§ Datasets and Preprocessing
In our evaluation, we have used two image datasets, the MS COCO dataset <cit.> and a fashion image dataset.
MS COCO Dataset <cit.> contains 328K images with a total of 2.5M
object instances. For the image captioning task, five
caption descriptions labeled for each image are used as ground truth.
We use the MS COCO dataset to compare the performance of AIC-AB Net with a state-of-the-art model <cit.>.
Fashion Dataset is our single-object fashion image dataset scraped in open websites from different fashion vendors, including Uniqlo, Toteme-Studio, Bestseller, Drykorn, Jlindeberg, Joseph-fashion, Marc-o-polo, Rodebjer, Tigerofsweden, and Vince. The raw data contains 1,511,916 images from 194,453 fashion products. Each data includes a text label consisting of various amounts of sentences.
Before the image captioning task, we have conducted a data cleaning task to remove images with invalid or unrelated captions.
The cleaned dataset includes 144,422 images from 24,649 products.
Each image is labeled with a single caption sentence.
Several products may map to the same caption, and there are 10,091 unique captions in this dataset.
Preprocessing. We apply the same split for COCO and Fashion datasets: 70% of the data for training, 15% for validation, and 15% for testing.
We resize all the images used in experiments to 224×224 with bilinear interpolation. We also create two variations for the Fashion dataset. The one-vendor condition focuses on the largest product vendor, Bestseller. Further called the Fashion Bestseller dataset, this subset contains 89,756 images from 19,385 products. The amount of unique captions is 8,448. The second condition employs all images in the dataset, which we further call the Fashion 9 vendors dataset. The reason behind it is that we found the same vendor usually describes their product in similar text form and style. For all three datasets, COCO, Fashion Bestseller, and Fashion 9 vendors, we automatically generate five attributes for each image with the following method to train the attribute extractor. First, we build an attribute vocabulary comprising 1000 most common words (nouns and adjectives) from the caption text. Then, we choose five words in the caption which occur in the vocabulary as attributes for each data.
§.§ Hyperparameters; Baseline and Ablated Models
Text Attribute Extractor. The convolution layer of the text attribute extractor has kernel size 5 × 5 and stride 1 × 1. The max pooling layer has kernel size 8 × 8 and stride 0. We train the text attribute extractor for 10 epochs.
AIC-AB Net. In the decoding stage, words in captions are embedded into 255-dimension vectors, and words of attributes are embedded into 51-dimension vectors using the default word embedding function provided by PyTorch. The hidden size is set to 512. The Adam optimizer with learning rate decay is employed to train the model. The parameters are set as: α = 0.8, β = 0.999, learning_rate = 4e-4. The decay of the learning rate is modeled as:
l_r^E+1 = l_r^E × 0.5^(E-20)/50, E > 20,
where l_r^E
is the learning rate in epoch E. We train the network for 50 epochs.
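Read literally as a multiplicative update applied at the end of each epoch E > 20, and with α and β interpreted as Adam's betas, the schedule can be reproduced with PyTorch's MultiplicativeLR; model and train_one_epoch below are placeholders for the captioning network and its training loop.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=4e-4, betas=(0.8, 0.999))
scheduler = torch.optim.lr_scheduler.MultiplicativeLR(
    optimizer,
    lr_lambda=lambda epoch: 0.5 ** ((epoch - 20) / 50) if epoch > 20 else 1.0)

for epoch in range(1, 51):
    train_one_epoch(model, optimizer)   # placeholder for the actual training loop
    scheduler.step()
```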
We compare AIC-AB Net to the following baseline and ablated models.
* Adaptive: the state-of-the-art method, Adaptive <cit.>. Note that our model without the text attributes (i.e., the ablated version) is the same as Adaptive.
* The Vanilla Encoder-Decoder (Vanilla-ED): an ablated AIC-AB Net, where we remove the adaptive attention architecture and attribute information while keeping a CNN-based encoder and LSTM-based decoder.
* The Text Attributes Only (Attr-Only): a second ablated AIC-AB Net, where we feed the attribute information into the LSTM decoder but remove the attention architecture.
§ RESULTS AND DISCUSSION
We evaluate image captioning on the MS COCO dataset, the Fashion Bestseller dataset, and the Fashion 9 vendors dataset. Table <ref> reports the evaluation results for these three datasets, where B-n is BLEU score that uses up to n-grams. In each column, higher is better.
We observe that our AIC-AB Net achieves the best performance compared to all three baseline and ablated versions. The ablation study reveals the complementarity of all constituents of AIC-AB Net.
In terms of CIDEr score, the Vanilla Encoder-Decoder network underperforms by 0.143, 0.319, and 0.148; the Adaptive attention network by 0.017, 0.095, and 0.095; the Attributes-combined model by 0.107, 0.201, and 0.142.
Note that adaptive attention architecture improves the performance better than the attribute information. These results indicate that two components indeed complement each other, and their co-existence crucially benefits the caption generation.
The three experimental conditions establish a comprehensive spectrum. The general image dataset, MS COCO,
is the most complex and contains multi objects in each image, for which the CIDEr score is the lowest across the datasets. We only compare the CIDEr score because it is the only metric that keeps a stable scale when the number of captions varies. The fashion dataset contains one single object per image. The Fashion Bestseller dataset is simpler than the Fashion 9 vendors dataset. Although the effectiveness of our network is still obvious, the performance gap widens as the task gets more complicated.
Although the attributes-combined model obtains a similar CIDEr score on the COCO dataset compared with the adaptive attention model, it observably underperforms on other scores. CIDEr focuses more on semantical correctness, while others reflect on grammaticality correctness <cit.>. These results indicate that attribute information provides significant semantic information. However, to demonstrate these attributes in the generated captions, the model achieves this at the expense of grammatical correctness as a result on MS COCO shows “attr-only” gains a similar CIDEr score as “Adaptive” but its BLEU scores are significantly poorer. Interestingly, it does not happen on the Fashion dataset. We argue that this is because of the small number of captions. The sentence pattern is easier to recognize on the Fashion dataset. However, this effect is not shown in our network. It reveals the attention architecture, especially the sentinel gate, corrects the bias brought by the attributes. The two components indeed complement each other.
On the fashion dataset, we observe that our model achieves better performance on the Fashion Bestseller dataset than the Fashion 9 vendors dataset, with an improvement of 0.892 (CIDEr score). This observation is the opposite of the regular pattern in which the increased data size improves the ML model's performance. The reason is the distinct styles and forms of captions from different vendors. The huge gaps in captions from one vendor to another are caused by the sub-standard labeling of the Fashion 9 vendors dataset.
§.§ Attention Distribution Analysis
To better understand our model, we also visualize the image attention distributions α for the generated caption. Using bilinear interpolation and pyramid expansion, we sample the attention map to the image size (224×224). Fig. <ref> shows the generated captions and the image attention distribution for specific words in the caption. The first five cases are success cases, and the last case shows a failure example. We see that our model learns to
pay attention to the specific region when generating different words in the caption, which corresponds strongly with human intuition. Note that on the failure case, although our model fails to focus on the region of the sleeves when generating “sleeves”, it still successfully recognizes the position of the printed stripe.
Since the COCO dataset provides ground-truth object bounding boxes, it can be used to evaluate the performance of attention map generation. The spatial intersection over union (sIOU) score is used to measure localization accuracy. Given the word w_t and its corresponding attention map α_t, we first segment the regions of the image whose attention values are larger than a pre-set threshold th (after the map is normalized to the scale [0,1]), where we set th = 0.6. Then we take the bounding box covering the largest connected component in this segmentation map as the predicted attention region. We report the sIOU between the predicted bounding box and the ground truth for the top 20 most frequent COCO object categories, as Fig. <ref> shows. The average localization accuracy is 0.415 for “Adaptive” and 0.419 for our AIC-AB Net. This implies that the attribute information benefits the attention map generation in the combined model. We also observe that our AIC-AB Net and its attention-only version have a similar trend. They both perform well on informative visual objects and large objects such as “cat”, “train”, “bed”, and “bus”, while they have poor performance on small objects such as “sink” and “clock”. We argue that it is because our attention map is extracted from a 7 × 7 spatial map, which loses plenty of resolution and detail. This defect is particularly exposed when detecting small objects. This also explains the wrong attention maps on the Fashion dataset, where the majority of words describe details and refer to small regions of the image.
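The localization metric can be reproduced with the short sketch below, which is our own reading of the procedure (min-max normalization, threshold 0.6, largest connected component, then IoU of bounding boxes).

```python
import numpy as np
from scipy import ndimage

def predicted_box(att, th=0.6):
    """Bounding box (rmin, cmin, rmax, cmax) of the largest thresholded component."""
    a = (att - att.min()) / (att.max() - att.min() + 1e-12)
    labels, n = ndimage.label(a > th)
    if n == 0:
        return None
    sizes = ndimage.sum(np.ones_like(a), labels, index=range(1, n + 1))
    rows, cols = np.where(labels == 1 + int(np.argmax(sizes)))
    return rows.min(), cols.min(), rows.max(), cols.max()

def siou(a, b):
    """Intersection over union of two (rmin, cmin, rmax, cmax) boxes."""
    r0, c0 = max(a[0], b[0]), max(a[1], b[1])
    r1, c1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, r1 - r0 + 1) * max(0, c1 - c0 + 1)
    area = lambda t: (t[2] - t[0] + 1) * (t[3] - t[1] + 1)
    return inter / float(area(a) + area(b) - inter)
```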
Since the bounding box ground truth is missing in the Fashion dataset, we apply a statistical analysis to five typical words as a quantitative evaluation: hood, cap, pants, dresses, and sleeves. In common cases, hood and cap appear only in the upper part of an image, pants and dresses only in the lower part, and sleeves only on the left and right sides. We assume these regions are their respective ground truths and apply the same approach as explained above to measure localization accuracy. Fig. <ref> reports the results on the Fashion dataset. We observe that AIC-AB Net performs better on the first four words than on the word sleeves and shows a similar trend to the adaptive attention model.
§ CONCLUSION
This work has been motivated by the task of generating captions for single-object fashion images and inspired by the adaptive attention architecture <cit.> and semantic concepts <cit.>. Toward this end, we present the Attribute-Image-Combined Attention-Based Network (AIC-AB Net). We have evaluated AIC-AB Net on the MS COCO and Fashion datasets. Our experiments indicate that the ability to locate the relevant region of an image when generating different words, combined with the attribute information, is crucial for accurate caption generation.
Further research could explore two directions. First, according to the results of transfer learning, we suggest creating an up-to-standard labeling system for the Fashion dataset, which will benefit the consistency of the data and the robustness of the models trained on it. Second, we argue that segmenting the images into more regions will improve the performance since, in some cases, our model cannot pay attention to the accurate region when generating words.
Image captioning is a challenging and promising task for the Internet industry and computer vision. We believe this work represents a significant step in improving image captioning and breeds useful applications in other domains.
|
http://arxiv.org/abs/2307.04227v1 | 20230709163906 | Relaxed Equilibria for Time-Inconsistent Markov Decision Processes | [
"Erhan Bayraktar",
"Yu-Jui Huang",
"Zhenhua Wang",
"Zhou Zhou"
] | math.OC | [
"math.OC",
"60J10, 60J27, 91A11"
] |
|
http://arxiv.org/abs/2307.04337v1 | 20230710042906 | Detection of temporal fluctuation in superconducting qubits for quantum error mitigation | [
"Yuta Hirasaki",
"Shunsuke Daimon",
"Toshinari Itoko",
"Naoki Kanazawa",
"Eiji Saitoh"
] | quant-ph | [
"quant-ph"
] |
Detection of temporal fluctuation in superconducting qubits for quantum error mitigation
Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan.
Author to whom correspondence should be addressed: [email protected]
Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan.
Quantum Materials and Applications Research Center, National Institutes for Quantum Science and Technology (QST), Tokyo 152-8550, Japan.
IBM Quantum, IBM Research-Tokyo, 19-21 Nihonbashi Hakozaki-cho, Chuo-ku, Tokyo, 103-8510, Japan.
Department of Applied Physics, The University of Tokyo, Tokyo 113-8656, Japan.
Institute for AI and Beyond, The University of Tokyo, Tokyo 113-8656, Japan.
WPI Advanced Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan.
Institute for Materials Research, Tohoku University, Sendai 980-8577, Japan.
We have investigated instability of a superconducting quantum computer by continuously monitoring the qubit output. We found that qubits exhibit a step-like change in the error rates. This change is repeatedly observed, and each step persists for several minutes. By analyzing the correlation between the increased errors and anomalous variance of the output, we demonstrate quantum error mitigation based on post-selection. Numerical analysis on the proposed method was also conducted.
Eiji Saitoh
August 12, 2023
===================
Over the last few decades, there has been a growing trend towards developing quantum computers and advances in quantum engineering technologies are overwhelming <cit.>. Among diverse materials or artificial atoms proposed to serve as quantum bits (qubits), superconducting qubits<cit.> are one of the most promising candidates. A number of studies have been conducted to improve the performance of superconducting qubits and several breakthroughs have been achieved<cit.>. Nevertheless, even the state-of-the-art qubits unpredictably interact with the surrounding environments and suffer from noise during computation, which places a critical limit on their computational abilities<cit.>.
Several attempts have been made to identify microscopic pictures of unexpected interactions and improve the device's performance <cit.>. Recent evidence suggests that superconducting qubits exhibit a temporal change in their coherence times under a continuous measurement <cit.>.
Qubit instability poses a serious threat to quantum computers. A sudden decrease in the qubit lifetime can temporarily degrade the device's performance.
In addition, most of the current quantum error mitigation (QEM) techniques <cit.> are unable to mitigate time dependent noise<cit.>, and a temporal change in decoherence calls for re-learning of a noise model or developing more sophisticated QEM techniques. Therefore, it is imperative to investigate the dynamics of a superconducting qubit system and assess its stability.
In this paper, we report a temporal change in the qubit errors in a superconducting quantum computer. We also developed an anomaly detection method for a temporal change in errors.
All the experiments were performed on one of the IBM Quantum systems. This quantum computer has 27 transmon qubits, and the readout assignment errors are around 1% on average. The energy relaxation times of the qubits are approximately 1.2× 10^2 μs on average, with phase damping times around 1.2× 10^2 μs.
We iterate a same quantum circuit and a subsequent measurement for L times at a sampling rate of several hundred microseconds. As a result, we obtain a binary sequence 𝐗∈{0, 1}^L. To estimate the qubit output fluctuations, we transform a subsequence of 𝐗 with size N into a fluctuation indicator S, which is defined by
S = [1/(m-1)∑_j = 1^m(Y_j - Y)^2] / [Y(1 - Y)/n],
where Y_j = 1/n∑_i = (j-1)n + 1^jnX_i, and Y = 1/m∑_j = 1^mY_j with some integers n and m that satisfy the condition N = nm≪ L. In the experiments below, we obtain a time series of S from the entire sequence 𝐗∈{0, 1}^L using the following procedure. We first take the average of every n data to obtain a time series 𝐘 with the length M = ⌊L/n⌋. We then calculate the time series 𝐒 from 𝐘 by applying a sliding window of size m, and thus the length of 𝐒 is given by l = M- m + 1.
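The procedure above translates directly into a few lines of NumPy; the function below is a sketch of our own and returns the time series 𝐒 for given n and m.

```python
import numpy as np

def fluctuation_indicator(X, n=128, m=128):
    """Time series of S from a binary outcome sequence X (definition above)."""
    X = np.asarray(X, dtype=float)
    M = len(X) // n
    Y = X[:M * n].reshape(M, n).mean(axis=1)      # block averages Y_j
    S = np.full(M - m + 1, np.nan)
    for j in range(M - m + 1):
        w = Y[j:j + m]
        y_bar = w.mean()
        v_samp = w.var(ddof=1)                    # unbiased sample variance
        v_bi = y_bar * (1.0 - y_bar) / n          # expected binomial variance
        if v_bi > 0:
            S[j] = v_samp / v_bi
    return S
```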
The indicator S is introduced based on the following background. From the Born's rule, the measurement outcome X_i in the i-th measurement is a random variable whose distribution is given by the binomial distribution B(1, P_1), where P_1 denotes the probability of measuring the excited state. The average Y_j is also a random variable whose probability distribution is determined by the binomial distribution B(n, P_1). Thus, the expectation value of the sample mean Y = 1/m∑_j = 1^mY_j is equal to P_1, and that of the unbiased sample variance V_samp = 1/m-1∑_j = 1^m(Y_j-Y)^2 is equal to P_1(1 - P_1)/n. Since P_1 is unknown, we estimate the expected variance with V_bi = Y(1-Y)/n, and S is given by the ratio of V_samp and V_bi in Eq. (<ref>). Intuitively, S quantifies the extent to which the sample variance deviates from what is expected under the assumption that {X_i}_i are generated from an identical binomial distribution. S can be used to detect a temporal change in qubit errors and exclude abnormal outcomes in quantum computing as discussed later.
Note that S is a random variable obtained from the random variables X_1, X_2, …, X_N and S takes several values with different probabilities. The probability distribution of S is well described by the chi-squared distribution with (m-1) degrees of freedom divided by (m-1), so that the mean of S is 1 with the variance σ^2 = 2/(m-1); a rigorous derivation is provided in the latter part of this letter. Thus, when we calculate S from an experimental result (for clarity we represent the experimental value as S_exp and use S_theo when we describe a stochastic characteristic of S), S_exp should spread randomly around 1 with the statistical fluctuation σ = √(2/(m-1)). If S_exp significantly deviates from the probabilistic behavior of S_theo, we reject the hypothesis that the binary data X_1, X_2,… ,X_N are generated from an identical binomial distribution B(1, P_1) and the data are classified as anomalous in our QEM method.
First, we performed a one-qubit continuous measurement on the IBM quantum processor. The pulse sequence is depicted in Fig. <ref>(a). The qubit is initialized to the ground state with the reset pulse, excited with the π pulse, and then measured. We repeated this pulse sequence for 1000 seconds with the repeat delay time τ≈ 6× 10^2 μ s to record normal and abnormal behavior in a single set of experimental data. The time series of S_exp defined by Eq. (<ref>) was calculated from the obtained outcomes with the parameters n = m = 128 and L = 1787904.
Figure <ref>(b) illustrates the time series of S_exp. The value of S_exp remains almost constant for the first 230 s. This behavior is consistent with the fact that the expectation value of S_theo is equal to 1 with the standard deviation σ≈ 0.125. Then, however, S_exp abruptly increases to approximately 4 [see the red band in Fig. <ref>(b)], which is 24 standard deviations above the mean and cannot be explained in terms of the statistical error. This increase persists for 110 seconds, and sharp switching behavior is repeatedly observed in the rest of the record, as visualized by the four red bands in Fig. <ref>(b). This phenomenon is observed repeatedly in other experiments on the same device.
Figure <ref>(c) compares the error rates in two time periods. The red bar represents 1 - P_1 in the time period from 430 s to 720 s, while the black bar shows that from 870 s to 1000 s, where P_1 denotes the average of the binary outcomes and should be 1 in the absence of errors. The temporal increase in S_exp appears to be closely related to a temporal increase in errors. This correlation between S_exp and errors suggests that we can reduce errors by classifying obtained outcomes based on the values of S_exp and eliminating the anomalous outcomes.
Based on this, we propose a QEM technique based on post-selection (or we also call it an anomaly detection). We first compute the time series 𝐒_exp from an obtained binary sequence 𝐗. Then, we compare each element of 𝐒_exp against a threshold value S_thre. If an element exceeds the threshold, we label the corresponding subsequence of 𝐗 as anomalous and segregate it from the remaining sequence. The critical value is determined based on the p-value in the detection and here we employ S_thre = 1.5, which corresponds to the p-value of 0.006334%. This method can be easily extended to multi-qubit computations by computing the time series of S_exp for each qubit individually.
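In code, the post-selection is a simple thresholding; the helper below is a sketch with hypothetical variable names, and keeps only the windows for which every monitored qubit satisfies S < S_thre.

```python
import numpy as np

def normal_window_mask(S_per_qubit, S_thre=1.5):
    """Boolean mask over windows: True where all qubits have S below the threshold."""
    S = np.vstack(S_per_qubit)          # shape (n_qubits, n_windows)
    return np.all(S < S_thre, axis=0)

# Example usage (S_q0, S_q1 computed with fluctuation_indicator above):
# mask = normal_window_mask([S_q0, S_q1])
# kept_windows = np.flatnonzero(mask)   # windows whose shots enter the mitigated estimate
```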
We performed a Bell state measurement to demonstrate the proposed QEM as illustrated in Fig. <ref>. We obtain two binary sequences from two qubits and calculated the time series of S_exp from the two sequences individually. For each time window with size N, we calculate S_1 and S_2 from the two binary subsequences by Eq. (<ref>). If either S_1 or S_2 exceeds the threshold value S_thre = 1.5, the corresponding two binary subsequences are labeled as anomalous and labeled as normal otherwise. The time series of *Z_1Z_2 is depicted in Fig. <ref>(a), where *Z_1Z_2 denotes the expectation value of the observable Z_1Z_2, and it is calculated from the two binary sequences with the same window. *Z_1Z_2 should be 1 in the absence of errors. The red colored region represents the time periods labeled as anomalous based on S_exp and the blue represents the normal state. *Z_1Z_2 exhibits a great decrease to around 0.85 in the anomalous time period [the red band in Fig. <ref>(a)], while it shows little fluctuation around 0.97 in the normal time periods.
We obtain two histograms from the normal and anomalous outcomes, as depicted in Fig. <ref>(b). The probabilities of measuring the four states |00⟩, |01⟩, |10⟩, and |11⟩ are visualized by the black bars in Fig. <ref>(b). The top panel shows the probability distribution calculated from the data classified as normal [colored blue in Fig. <ref>(a)], while the bottom panel shows that of the anomalous state (colored red). The distribution of the anomalous state exhibits a prominent peak in the |10⟩ state. We compare the values of 1 - ⟨Z_1Z_2⟩ obtained from the two categorized data sets in Fig. <ref>(c); the error is markedly smaller for the normal data, which shows that our method successfully removes the abnormal data and improves the fidelity in estimating the expectation value of a physical observable.
We then benchmarked the proposed protocol on a quantum volume circuit <cit.> as an example of a sampling task, in which we measure the probability distribution of the final quantum state. The result is shown in Fig. <ref>. The circuit comprises three qubits, which are measured after three layers of operations, as shown in Fig. <ref>(a). Each layer is constructed by sampling a random permutation and then applying a random unitary transformation to the first two qubits.
We compute the time series of S_exp for each of the three qubits and classify the outcomes into anomalous and normal data, as illustrated in Fig. <ref>(b). The blue regions represent the outcomes classified as normal, while the red regions correspond to the anomalous ones. We obtain two probability distributions from the two categorized sets of experimental data and compare them with the ideal distribution (the black bars), as depicted in Fig. <ref>(c). The distribution derived from the normal data is overall closer to the ideal distribution, demonstrating a 5.5% improvement in the Hellinger fidelity <cit.>.
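For reference, a minimal way to compute the Hellinger fidelity between the measured and ideal output distributions is sketched below; we assume the common convention F_H = (∑_i √(p_i q_i))^2, i.e. the squared Bhattacharyya coefficient, and the helper names are illustrative.

import numpy as np

def hellinger_fidelity(p, q):
    # Hellinger fidelity between two discrete probability distributions,
    # taken here as the squared Bhattacharyya coefficient.
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(np.sqrt(p * q)) ** 2)

def counts_to_probs(counts, num_qubits):
    # Normalize a {bitstring: count} histogram into a length-2**num_qubits vector.
    probs = np.zeros(2 ** num_qubits)
    total = sum(counts.values())
    for bits, c in counts.items():
        probs[int(bits, 2)] = c / total
    return probs

# e.g. hellinger_fidelity(counts_to_probs(normal_counts, 3), ideal_probs)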
We note that in our setup the circuit outcomes were recorded for a sufficiently long time to investigate the time variation of S_exp. However, our mitigation technique can be applied at a moderate sampling overhead of tens of thousands of shots, which is readily available with IBM Quantum processors.
Finally, we perform a theoretical analysis of the probability distribution of S_theo introduced in Eq. (<ref>). Note that the i-th measurement outcome X_i is a random variable following the Bernoulli distribution B(1, p_i), where p_i is the probability of measuring the excited state in the i-th measurement. Here we make two fundamental assumptions, namely that p_i is a constant P_1 and that the {X_i}_i independently obey this identical Bernoulli distribution. Under these assumptions, the random variables nY_j = ∑_i = nj + 1^n(j+1) X_i independently obey the binomial distribution B(n, P_1), and the variance of {Y_j}_j is given by P_1(1 - P_1)/n. Since n is sufficiently large (in the experiments n = 128), we can apply the central limit theorem and approximate the probability distribution of {Y_j}_j by a Gaussian. We then express S_theo in Eq. (<ref>) in terms of new random variables {Z_j}_j defined by Z_j = (Y_j - P_1)/√(P_1(1 - P_1)/n), which independently obey the standard normal distribution 𝒩(0, 1), where 𝒩(μ, σ^2) denotes a Gaussian distribution with mean μ and variance σ^2. The expression for S_theo is given by
S_theo = [ (1/(m-1)) ∑_j = 1^m (Z_j - Z)^2 ] / [ ( Z/√(n) + √(P_1/(1 - P_1)) ) ( -Z/√(n) + √((1 - P_1)/P_1) ) ],
where Z = (1/m) ∑_j = 1^m Z_j ∼ 𝒩(0, 1/m). Since Z/√(n) takes values of order 1/√(nm) with high probability, it is negligible compared with √(P_1/(1 - P_1)) and √((1 - P_1)/P_1) whenever 1/√(nm) is much smaller than both of these terms. As a result, Eq. (<ref>) reduces to
S_theo ≈ S̃ ≡ (1/(m-1)) ∑_j = 1^m (Z_j - Z)^2.
The quantity ∑_j = 1^m (Z_j - Z)^2 obeys the chi-squared distribution with (m - 1) degrees of freedom <cit.>, and therefore the statistical characteristics of S̃ can be derived analytically. In particular, the mean of S̃ is μ = 1 and its variance is σ^2 = 2/(m - 1), independent of P_1. This fact suggests that we can use the same threshold for anomaly detection in practical quantum computation, where P_1 (or the measured quantum state) is unknown. The condition √((1 - P_1)/P_1), √(P_1/(1 - P_1)) ≫ 1/√(nm) is satisfied in most of our experiments, since we use n = m = 128 and the inequality 0.01 ≤ P_1 ≤ 0.99 holds due to the 1% readout assignment errors.
We then performed a Monte-Carlo simulation to support the validity of the discussion above, and the result is illustrated in Fig. <ref>. We numerically prepared 100,000 samples of S_theo for each of the chosen P_1 values and compared the distributions of S_theo with those of S̃. The sample means of S_theo for several P_1 values (the blue dots) and the expectation value ⟨S̃⟩ = 1 (the red line) are depicted in Fig. <ref>(a), while Fig. <ref>(b) compares the variances of S_theo and S̃. The results show close agreement between the numerical and theoretical analyses for all P_1 values. The probability density functions generated from the Monte-Carlo simulation are presented as the blue histograms in Fig. <ref>(c) for several P_1 values. The red lines show the theoretically calculated densities, in good agreement with the numerical histograms.
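A minimal version of this Monte-Carlo check can be written as follows, assuming the same variance-ratio form of S as in the sketch above; the reference density of S̃, i.e. a chi-squared variable with (m-1) degrees of freedom divided by (m-1), is obtained by rescaling the chi-squared density. Names are illustrative.

import numpy as np
from scipy import stats

def sample_s(p1, n=128, m=128, n_samples=100_000, seed=0):
    # Draw samples of S under the i.i.d. hypothesis: nY_j ~ B(n, p1).
    rng = np.random.default_rng(seed)
    y = rng.binomial(n, p1, size=(n_samples, m)) / n     # window averages Y_j
    ybar = y.mean(axis=1)
    return y.var(axis=1, ddof=1) / (ybar * (1 - ybar) / n)

m = 128
s = sample_s(p1=0.95)
print(s.mean(), s.var())                                 # close to 1 and 2/(m-1) ≈ 0.016
x = np.linspace(0.6, 1.5, 300)
pdf = (m - 1) * stats.chi2.pdf((m - 1) * x, df=m - 1)    # density of chi2_{m-1}/(m-1)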
In conclusion, we have investigated temporal changes in fluctuations in superconducting qubits by developing a statistic that quantifies qubit stability. The measured temporal changes are closely related to temporal increases in errors, and we have demonstrated QEM by exploiting this correlation. Furthermore, we have conducted an analytical study of the QEM method and performed a numerical simulation to verify the result.
This work was supported by CREST (Nos. JPMJCR20C1, JPMJCR20T2) from JST, Japan; Grant-in-Aid for Scientific Research (S) (No. JP19H05600), Grant-in-Aid for Transformative Research Areas (No. JP22H05114) from JSPS KAKENHI, Japan.
This work is partly supported by the IBM-UTokyo lab.
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
§ AUTHOR DECLARATIONS
§.§ Conflict of Interest
The authors have no conflicts to disclose.
§.§ Author Contributions
Y. Hirasaki: Conceptualization (equal); Formal analysis (lead); Investigation (lead); Methodology (lead); Software(lead); Validation (equal); Writing – original draft (lead).
S. Daimon: Conceptualization (lead); Funding acquisition (equal); Investigation (supporting); Methodology (supporting); Project administration (lead); Software(equal); Supervision (supporting); Validation (equal); Writing – review & editing (supporting).
T. Itoko: Methodology (supporting); Validation (supporting); Writing – review & editing (supporting).
N. Kanazawa: Project administration (supporting); Software(supporting); Supervision (supporting); Writing – review & editing (supporting).
E. Saitoh: Funding acquisition (lead); Project administration (equal); Supervision (lead); Validation (equal); Writing – review & editing (lead).
|
http://arxiv.org/abs/2307.05747v2 | 20230708141455 | Integrating Curricula with Replays: Its Effects on Continual Learning | [
"Ren Jie Tee",
"Mengmi Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Integrating Curricula with Replays: Its Effects on Continual Learning
Ren Jie Tee, Mengmi Zhang
=========================================================================
Humans engage in learning and reviewing processes with curricula when acquiring new skills or knowledge.
This human learning behavior has inspired the integration of curricula with replay methods in continual learning agents. The goal is to emulate the human learning process, thereby improving knowledge retention and facilitating learning transfer.
Existing replay methods in continual learning agents involve the random selection and ordering of data from previous tasks, which has been shown to be effective. However, limited research has explored the integration of different curricula with replay methods to enhance continual learning.
Our study takes initial steps in examining the impact of integrating curricula with replay methods on continual learning in three specific aspects: the interleaved frequency of replayed exemplars with training data, the sequence in which exemplars are replayed, and the strategy for selecting exemplars into the replay buffer. These aspects of curricula design align with cognitive psychology principles and leverage the benefits of interleaved practice during replays, easy-to-hard rehearsal, and an exemplar selection strategy that draws exemplars from a uniform distribution of difficulties.
Based on our results, these three curricula
effectively mitigated catastrophic forgetting and enhanced positive knowledge transfer, demonstrating the potential of curricula in advancing continual learning methodologies. Our code and data are available: <https://github.com/ZhangLab-DeepNeuroCogLab/Integrating-Curricula-with-Replays>
§ INTRODUCTION
Continual learning enables consecutive task acquisition without forgetting previously trained tasks <cit.>. This adaptability is vital for autonomous systems in dynamic environments, such as updating a grocery classification model with new products without retraining it on previous products. However, a significant challenge in continual learning is catastrophic forgetting, where knowledge from recent tasks interferes with earlier ones <cit.>, leading to performance degradation on earlier tasks after training on a task sequence.
To resolve this problem,
there are three primary types of continual learning methods commonly employed in the field:
regularization-based methods introduce regularization terms to mitigate catastrophic forgetting by preserving important parameters during training <cit.>; rehearsal-based methods store and replay a subset of previous data during training to maintain knowledge from previous tasks <cit.> and parameter isolation methods isolate specific parameters for each task to prevent interference between tasks <cit.>.
Rehearsal-based methods have proven highly effective in continual learning. However, existing approaches typically involve randomly selecting and rehearsing data from previous tasks. Limited research explores the incorporation of meaningful curricula into replay methods.
In parallel, in the curriculum learning literature, various approaches have focused on weakly supervised <cit.>, unsupervised <cit.>, and reinforcement learning tasks <cit.>. These studies demonstrate that curricula improve generalization abilities, task performances,
and convergence speed <cit.> during training. However, they primarily address intra-class difficulty and example scheduling within a single task, neglecting the impact of class presentation sequences across multiple tasks. Recent research has explored curricula in continual learning scenarios without data replays <cit.>. In complement to this work, our study investigates the role of curricula specifically during replay in continual learning, while keeping the curricula consistent for the feed-forward training process.
Exploring optimal curricula offers countless possibilities, and in our study, we take initial steps to investigate a limited set of potential curricula. We draw inspiration from two sources to guide the design of these curricula. Firstly, neuroscience research has revealed that neural activity patterns associated with past experiences are replayed in specific orders during rest or sleep, which is believed to contribute to memory consolidation and spatial navigation <cit.>. Secondly, pedagogy studies indicate that repetitive practice and revisiting previous knowledge with increasing difficulty enhance long-term memory integration in students <cit.>.
Specifically, we propose three types of curricula for replays and examine their impact on catastrophic forgetting and positive knowledge transfer: (1) the interleaved frequency of replayed exemplars with training data, (2) the replay sequence of exemplars, and (3) the strategy for selecting exemplars into the replay buffer. The experimental findings align with cognitive psychology principles, highlighting the advantages of frequently interleaving between training data and replayed exemplars, incorporating easy-to-hard rehearsals, and selecting exemplars from a uniform distribution of difficulties for replay. These observations present a promising avenue for advancing continual learning methods. It also provides insights into the underlying mechanisms of replay strategies in mitigating forgetting and facilitating knowledge transfer across tasks.
§ RELATED WORKS
§.§ Replay Methods in Continual Learning
Extensive research has focused on utilizing replay methods to address the issue of catastrophic forgetting. Conventional replay methods, such as iCaRL <cit.> and ER <cit.>, involve explicit training on previously saved data, while several variants, like DGR <cit.> and Pseudo-Recursal <cit.>, instead replay artificially synthesized samples produced by generative models that resemble data from previous tasks.
Although these replay methods have contributed significantly to reducing catastrophic forgetting, they pay little attention to the incorporation of meaningful curricula into replay methods. Most methods randomly interleave the replay samples with the training data, without exploring the optimal mixing strategies <cit.>. In our work, we systematically study the effect of interleaving curricula, which involve mixing training data and replay samples within a pre-defined interleave interval.
§.§ Curriculum Learning
Curriculum learning methods can be broadly categorized into two groups. The first group involves manual curriculum design by humans before training <cit.>, but these methods typically rely on human expertise and struggle to generalize to new domains. The second group consists of models that can autonomously design curricula without human intervention <cit.>. However, the application of these methods to enhance model performance has received limited attention in the continual learning setting.
Here, we highlight two factors to consider when applying curricula on the replay methods in continual learning. Firstly, while curriculum learning has demonstrated efficacy in enhancing generalization and training speed within a single task, the objective of curriculum learning in the context of continual learning is to retain knowledge from previous tasks while acquiring new knowledge from the current task. Secondly, unlike within-task curriculum learning, models in continual learning only have access to data from the current task, making it challenging to create a comprehensive between-task curriculum that encompasses the entire dataset.
Here, we take initial steps in this direction by exploring automated methods to determine the sequence of replay samples and by introducing a sample selection strategy that finds the best replay samples for building a curriculum.
§ EXPERIMENTS
We investigated the effect of three types of replay curricula in the class incremental learning (CIL) setting. We first introduce CIL, and then elaborate on the three replay curricula individually.
Problem Setting. The objective of CIL is to teach a unified classification model Θ to recognize sets of object classes incrementally over time. Specifically, an image dataset D, consisting of N object classes, is split into subsets {D_1,...,D_t,...,D_T} of images and presented over a sequence of T tasks. In each task t, the model only has access to training data in D_t, consisting of samples from distinct classes C_t, and (x_i,t,y_i,t) is the i-th (image, label) pair in D_t. The model Θ can run multiple passes over D_t in task t. The model stops training on D_t after its performance on the validation set saturates, considering the five most recent epochs.
We implemented the naive replay method, where raw images and their corresponding labels are selected from previous tasks and stored in the replay buffer R_t. The data in R_t are interleaved with D_t for rehearsal. Three types of replay curricula are involved in this study: (1) the interleave frequency; (2) the rehearsal sequence of R_t in CIL; and (3) the image selection for R_t.
R_t is kept at a constant size of 1200 over all the tasks. See Appendix for more training details.
As an upper bound, we also include the offline method, where the model Θ is trained on the entire dataset D from {D_1,...,D_T} over multiple epochs without any continual learning.
Datasets. We conducted experiments to investigate the use of these three types of curricula in replay methods on the two image datasets ciFAIR-10 and ciFAIR-100 <cit.>.
ciFAIR-10 dataset contains 10 object classes. The protocol asks the model Θ to incrementally learn 2 object classes in each task. There are a total of 5 tasks. ciFAIR-100 dataset contains 100 object classes. The CIL protocol asks the model Θ to incrementally learn 5 object classes in each task. There are a total of 20 tasks.
Both datasets have a total of 60,000 images, with 50,000 images used for training and 10,000 images used for testing.
The conclusions drawn from the experiments on both datasets are consistent. Without loss of generality, we focus on all the experiments and result analysis in ciFAIR-100 in the main text.
See Appendix for more implementation details and results on ciFAIR-10.
Evaluation Metrics. To assess the continual learning performance of the model Θ, we follow <cit.> and introduce two standard evaluation metrics. We define Forgetfulness (F) as the percentage decrease in classification accuracy on the test instances from C_1 between Θ_t (the model after being trained on D_t) and Θ_1. An ideal Θ_t would maintain the same classification accuracy on C_1 over tasks, i.e., ∀ t, F_t = 0. The higher F is, the more Θ suffers from catastrophic forgetting. To assess the overall classification performance of Θ over tasks, we also report the continual average classification accuracy (Avg. Accu.). Avg. Accu. is computed as the average accuracy over all test instances from C_i, where i ∈ {1, 2, ..., t}. For simplicity, we report the averaged F and Avg. Accu. over all the tasks.
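As a sketch, both metrics can be computed from an accuracy matrix acc, where acc[t][i] is the accuracy of Θ_t on the test set of task i (0-indexed); we assume F is measured as a relative drop in percent, and the names and averaging details are illustrative rather than the exact evaluation code.

import numpy as np

def forgetfulness_and_avg_acc(acc):
    # acc[t][i]: accuracy of the model after task t on the test set of task i.
    acc = np.asarray(acc, dtype=float)
    T = acc.shape[0]
    # F_t: percentage drop on task 1's test set relative to the accuracy of Theta_1.
    f = [100.0 * (acc[0, 0] - acc[t, 0]) / acc[0, 0] for t in range(1, T)]
    # Avg. Accu. at task t: mean accuracy over all tasks seen so far.
    avg = [acc[t, : t + 1].mean() for t in range(T)]
    return float(np.mean(f)), float(np.mean(avg))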
Experimental Controls.
Within each experiment, only one variable of interest changes while the rest of the experiment conditions are fixed as control variables. As the previous study has shown that the sequence of class presentations affects the continual learning performance <cit.>, we use the same class presentation sequence in all three experiments. The same MobileNetV3 (small) network architecture is used as the backbone for the model Θ for all experiments. In every experiment, the total number of training samples and the total number of replay samples exposed to Θ remain the same across all experiment variables. Each experiment is conducted with 4 runs initialized with 4 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 4 runs is reported.
§.§ Interleave Divisions during Rehearsals
The number of interleaving divisions refers to the number of splits of D_t and R_t. It indicates how often the model Θ rehearses on R_t while learning on a subset of D_t. For example, for an interleaving division number of 400, D_t is split into 400 groups, each containing an equal number of (x_i,t, y_i,t) (image, label) pairs selected randomly from D_t without replacement. Correspondingly, R_t is also split into 400 groups using the same splitting criteria. At each training epoch, the model Θ_t at task t is repeatedly trained on one group of D_t followed by one group of R_t, until the entire D_t and R_t have been seen by Θ_t. We titrate the interleave division number over the values 1, 8, 60, 120, and 300.
The training data is interleaved with replay data and then presented to the model in sequence. Different interleave division numbers result in different data presentation sequences; hence, different curricula.
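A minimal sketch of this interleaving schedule is given below; the function and variable names are illustrative and do not correspond to the actual implementation.

import random

def interleave_schedule(train_data, replay_data, divisions, seed=0):
    # Split D_t and R_t into `divisions` nearly equal groups and alternate
    # one training group with one replay group per step.
    rng = random.Random(seed)
    train, replay = list(train_data), list(replay_data)
    rng.shuffle(train)
    rng.shuffle(replay)

    def split(seq, k):
        # k nearly equal, contiguous chunks
        q, r = divmod(len(seq), k)
        chunks, start = [], 0
        for i in range(k):
            end = start + q + (1 if i < r else 0)
            chunks.append(seq[start:end])
            start = end
        return chunks

    schedule = []
    for t_group, r_group in zip(split(train, divisions), split(replay, divisions)):
        schedule.extend(t_group)
        schedule.extend(r_group)
    return schedule   # iterate over this once per epoch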
§.§ Rehearsal Sequence of Replay Samples
We use interleave divisions 1 and 600 for all the experiments in this subsection and vary the rehearsal sequence of data samples in R_t by taking into account two factors: the sample difficulty levels and whether the difficulty increases or decreases along the sequence.
To measure whether a sample is easy or hard to learn, we introduce two difficulty measures: (1) the confidence score difficulty metrics and (2) the distance vector difficulty metrics. The confidence score difficulty metrics assess whether a teacher network with full knowledge of the entire dataset D assigns high or low confidence to a given sample belonging to its ground-truth class. Specifically, each image within R_t is passed through a teacher network, based on a MobileNetV3 (small) architecture and pre-trained on the entire dataset D. The confidence score for the ground-truth class of each sample is then extracted from the teacher network's output, and R_t is sorted according to each sample's confidence score, where a higher confidence score means that the sample is easier for Θ to learn.
However, in the CIL setting, having a teacher network with full access to the whole dataset is impractical, as the data become available only incrementally over tasks. Hence, we also employ the distance vector difficulty metrics, widely used in the literature <cit.>. Intuitively, if a sample is closer to the other samples in the memory buffer, it is easier for Θ to learn and to generalize to other samples as well.
The penultimate layer of a ResNet-50 model <cit.>, pretrained on the ImageNet dataset, is used to extract a feature vector for each sample in R_t. A Euclidean distance matrix is then created by computing the pairwise Euclidean distances between all samples based on their feature vectors. We compute the sum of each row of this matrix and denote the resulting column vector as the distance vector. Each element of the distance vector represents how much a particular sample differs from all other samples in feature space. A smaller value in the distance vector means that the particular replay sample is easier for Θ to learn.
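The distance-vector computation can be sketched as follows; the specific pretrained weights and preprocessing are assumptions rather than the exact setup used here.

import torch
import torchvision

def distance_vector_difficulty(images):
    # images: (N, 3, H, W) tensor of replay samples, already preprocessed.
    backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
    backbone.fc = torch.nn.Identity()      # expose the 2048-d penultimate features
    backbone.eval()
    with torch.no_grad():
        feats = backbone(images)           # (N, 2048) feature vectors
    dists = torch.cdist(feats, feats)      # pairwise Euclidean distance matrix
    return dists.sum(dim=1)                # row sums: smaller value = easier sample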
We introduce a series of rehearsal sequences in the orders of either easy-to-hard samples or hard-to-easy samples, where the difficulty levels of each sample are determined by either the confidence score difficulty metrics or the distance vector difficulty metrics.
As the previous study has shown that the class orders are also essential for continual learning <cit.>, here we also explore the effect of the class orders during replays. When we design the rehearsal sequence based on class difficulties in R_t, we adapt the two sample-level difficulty measures above to compute class-level difficulty measures by taking the average over all samples of the same class in R_t. We then sort all the samples in R_t by their class difficulty metrics, regardless of their individual sample difficulty scores.
Samples in R_t sorted by their class difficulties were then presented to the model Θ in either the easy-to-hard or hard-to-easy order.
§.§ Selection of Samples for Replay Buffer
In common practice, selecting samples for R_t+1 from task t is often done randomly <cit.>. In contrast to previous works, we vary the sample selection criteria for R_t+1 as follows: selecting only the easiest samples from task t, selecting only the hardest samples, or selecting samples that are uniformly distributed across difficulty levels. The difficulty levels are judged based on the confidence scores or the distance vectors defined in the previous subsection. We use interleave division numbers 1 and 600 for all the experiments in this subsection.
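The three selection strategies can be summarized by the following sketch, where difficulty holds one score per candidate sample, oriented so that larger means harder; names are illustrative.

import numpy as np

def select_for_replay(candidates, difficulty, k, strategy="uniform"):
    order = np.argsort(difficulty)          # easiest first
    if strategy == "easiest":
        idx = order[:k]
    elif strategy == "hardest":
        idx = order[-k:]
    elif strategy == "uniform":
        # evenly spaced ranks -> roughly uniform coverage of difficulty levels
        idx = order[np.linspace(0, len(order) - 1, k).round().astype(int)]
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    return [candidates[i] for i in idx]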
§ RESULTS
§.§ Frequent Replays Enhance Performances
We report F and Avg. Accu. as a function of interleave divisions in Table <ref>.
Notably, we observe that the interleave division is an important factor influencing the continual learning performance of the replay method, with larger interleave divisions leading to better performance, as indicated by the decreasing F and increasing Avg. Accu. over all tasks. It is possible that, at large division numbers, the model parameters are updated more frequently for both the current task and all previous tasks, resulting in minimal forgetting. However, we also note that the continual learning performance saturates at an interleave division number of 120. This implies that increasing the interleave division beyond an optimal value brings no extra benefit in continual learning.
§.§ Easy-To-Hard Rehearsal Sequences are Beneficial
We studied the models trained with different rehearsal sequences sorted in easy-to-hard or hard-to-easy curricula based on sample-level or class-level difficulty measures computed from either the confidence scores or distance vectors. We reported the Avg. Accu. results in Figure <ref> and F scores in Appendix and made four key observations. First, aligning with the observations in Table <ref> and the discussion from the previous subsection, large interleave divisions benefit continual learning models with higher average accuracy and less forgetting. Second, rehearsal sequences sorted by instance-level difficulties lead to much better continual learning performances (compare red bars versus blue bars). Third, the confidence score is a better evaluation metric measuring instance-level difficulties, as shown by the bars with and without texture patterns. Finally, the models trained with the easy-to-hard rehearsal sequences outperform the ones with reversed rehearsal sequences (compare light versus dark grey bars). It is possible that easy-to-hard rehearsal sequences make the models converge faster on the previous tasks due to more stable gradient updates; hence, the sequences lead to minimal forgetting and higher classification accuracy. We also compared the continual learning performance for both the offline method and the continual learning method with various curricula and observed that there still exists a large performance gap between these two.
§.§ Replays with Only Hard Data Hurt Performances
Here, we explore the effect of different sample selection strategies for the replay buffer in terms of sample difficulty levels based on distance vectors or confidence scores. From Figure <ref>, our observations indicate that exclusively choosing the most challenging replay samples leads to inferior performance compared to selecting the easiest samples or incorporating samples with a balanced distribution of difficulty levels. Selecting samples with a uniform distribution of difficulty levels yields the best continual learning performance. This outcome may be attributed to the fact that difficult replay samples result in less flat loss landscapes, which in turn make the training process more challenging and slower to converge <cit.>. A curriculum that trains the models to rehearse from the easiest to the hardest samples is best, as it balances the greater precision in data fitting due to the hardest samples against the fast convergence during training due to the easier samples. Similar to the previous subsection, we also note that the confidence score is a better measure of sample difficulty than the distance vector.
§ CONCLUSION
Our study
examines the role of curricula during replays in the class-incremental learning setting in continual learning. We designed and conducted a series of controlled experiments to study the three key questions on replays: how often is the replay, what data should be replayed, and in what sequence to replay.
Across the two common image datasets, our experimental results shed light on the underlying principles of replay methods in continual learning and reveal good curriculum design choices for replay methods.
These curricula designs not only facilitate positive knowledge transfers (which has been explored in existing curriculum learning literature), but also mitigate catastrophic forgetting (a significant problem we need to solve in continual learning). Specifically, we
found that (1) replays should happen frequently; (2) rehearsing only on the most difficult exemplars hurts continual learning performance; and (3) rehearsing on samples in order of increasing difficulty mitigates forgetting more effectively than the reversed order.
There are numerous other possible choices of curricula designs for replay methods, such as a unified difficulty metric considering both confidence scores and distance vectors or the use of a student feedback loop to update the difficulty scores. In the future, we will look into the role of curricula under
stringent continual learning conditions, such as learning with limited training time or noisy data. We will also conduct experiments on other large-scale datasets and apply our replay curriculum to existing replay-based continual learning algorithms.
§ ACKNOWLEDGEMENTS
This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-025), its NRFF award NRF-NRFF15-2023-0001, Mengmi Zhang's Startup Grant from Agency for Science, Technology, and Research (A*STAR), and Early Career Investigatorship from Center for Frontier AI Research (CFAR), A*STAR. The authors declare that they have no competing interests.
§ APPENDIX
§.§ Experimental Details
For experiments on both ciFAIR-10 and ciFAIR-100, PyTorch’s default implementation of cross entropy loss was used for object classification tasks. The SGD algorithm was used as the optimizer. The learning rate was set at a constant of 0.001. Momentum was fixed at 0.9. A batch size of 32 is used.
For ciFAIR-10, we employ a 2-layer 2D-convolutional network with 6 and 16 channels in the successive layers, followed by 3 fully connected layers with 400, 120 and 84 hidden units respectively. ReLU was used as the activation function.
We follow the standard training and testing data splits from the original ciFAIR-10.
In every task, the model is trained for 250 epochs. Each experiment is conducted with 20 runs initialized with 20 random seeds, where the seeds are used to vary the controlled variables. The average performance across all 20 runs is reported.
For ciFAIR-100, PyTorch's implementation of MobileNetV3 (small) was used, including the default layers and activation function. We used a custom training, validation, and test data splits with a ratio of 9:1:2, and a stopping criteria for training depending on the validation loss. The ciFAIR-100 images were upscaled to 72x72 using PyTorch's bicubic interpolation function before training.
§.§ More Results and Analysis
We reported the continual learning performance on ciFAIR-10 dataset of the models trained with the three types of curricula as elaborated in Experiments Section.
See Table <ref> for interleave divisions, Figure <ref> for rehearsal sequences, and Figure <ref> for sample selections.
All the tables and figures on ciFAIR-10 dataset follow the same design conventions as the corresponding tables and figures on ciFAIR-100 dataset in the main text. The conclusions from the results of ciFAIR-10 dataset are consistent with the ones on the ciFAIR-100 dataset.
|
http://arxiv.org/abs/2307.05596v1 | 20230710193032 | Compositional Generalization from First Principles | [
"Thaddäus Wiedemer",
"Prasanna Mayilvahanan",
"Matthias Bethge",
"Wieland Brendel"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
[1]Equal contribution [2]Equal supervision
[0]Code available at <github.com/brendel-group/compositional-ood-generalization>
Compositional Generalization from First Principles
Thaddäus Wiedemer, Prasanna Mayilvahanan, Matthias Bethge, Wieland Brendel
August 12, 2023
====================================================================================================
Leveraging the compositional nature of our world to expedite learning and facilitate generalization is a hallmark of human perception. In machine learning, on the other hand, achieving compositional generalization has proven to be an elusive goal, even for models with explicit compositional priors. To get a better handle on compositional generalization, we here approach it from the bottom up: Inspired by identifiable representation learning, we investigate compositionality as a property of the data-generating process rather than the data itself. This reformulation enables us to derive mild conditions on only the support of the training distribution and the model architecture, which are sufficient for compositional generalization. We further demonstrate how our theoretical framework applies to real-world scenarios and validate our findings empirically. Our results set the stage for a principled theoretical study of compositional generalization.
§ INTRODUCTION
Systematic compositionality <cit.> is the remarkable ability to utilize a finite set of known components to understand and generate a vast array of novel combinations. This ability, referred to by Chomsky <cit.> as the “infinite use of finite means”, is a distinguishing feature of human cognition, enabling us to adapt to diverse situations and learn from varied experiences.
It has been a long-standing idea to leverage the compositional nature of the world for learning. In object-centric learning, models learn to isolate representations of individual objects as building blocks for complex scenes. In disentanglement, models aim to infer factors of variation that capture compositional and interpretable aspects of their inputs, for example hair color, skin color, and gender for facial data. So far, however, there is little evidence that these methods deliver substantially increased learning efficacy or generalization capabilities (<cit.>, <cit.>). Across domains and modalities, machine learning models still largely fail to capture and utilize the compositional nature of the training data (<cit.>).
To exemplify this failure, consider a model trained on a data set with images of two sprites with varying position, size, shape, and color overlaid on a black canvas. Given the latent factors, a simple multi-layer neural network can easily learn to reconstruct images containing compositions of these sprites that were covered by the training set (Figure <ref>, top rows). However, reconstruction fails for novel compositions—even if the individual components have been observed before (Figure <ref>, bottom row). Failure to generalize to unseen data in even this simplistic regression setting demonstrates that compositional generalization does not automatically emerge simply because the data is of a compositional nature.
We therefore take a step back to formally study compositionality and understand what conditions need to be fulfilled for compositional generalization to occur. To this end, we take inspiration from identifiable representation learning and define a broad class of data generating processes that are compositional and for which we can provably show that inference models can generalize to novel compositions that have not been part of the training set. More precisely, our contributions are as follows:
* We specify compositional data-generating processes both in terms of their function class and latent distributions (Sections <ref> and <ref>) such that they cover a wide range of assumptions made by existing compositional methods.
* We prove a set of sufficient conditions under which models trained on the data are able to generalize compositionally (Section <ref>).
* We validate our theory in a range of synthetic experiments and perform several ablation studies that relate our findings to empirical methods (Section <ref>).
§ RELATED WORK
Representation learning
Disentanglement and identifiable representation learning aim to learn succinct representations that both factorize the data space efficiently and are robust towards distributional changes <cit.>. However, the expectation that more compositional representations lead to better out-of-distribution (OOD) generalization has not been met, as demonstrated by <cit.> and <cit.>. Although our work does not directly address generalization issues in identifiable representation learning, our setup is directly inspired by it, and we examine data-generating processes similar to <cit.>.
Empirical Approaches
Many empirical methods use compositional priors and claim improved compositional generalization. The problem has been studied especially closely in language <cit.>, but it remains far from being solved <cit.>. Object-centric learning is another domain in which compositionality plays a major role, and many approaches explicitly model the composition of scenes from object-“slots” <cit.>. The slot approach is also common in vector-symbolic architectures like <cit.> and <cit.>. For most of these works, however, compositional generalization is not a focal point, and their actual generalization capability remains to be studied. There are also some architectures like transformers <cit.>, graph neural networks <cit.>, bilinear models <cit.>, or complex-valued autoencoders <cit.> that have been claimed to exhibit some degree of compositional generalization, but again, principled analysis of their generalization ability is lacking. Our framework can guide the systematic evaluation of these methods. While we use the visual domain as an example throughout this work, our contributions are not tied to any specific data domain or modality.
Theoretical approaches to OOD generalization
The OOD generalization problem for non-linear models where train and test distributions differ in their densities, but not their supports, has been studied extensively, most prominently by <cit.> and <cit.>. We refer the reader to <cit.> for a comprehensive overview. In contrast, compositional generalization requires generalizing to a distribution with different, possibly non-overlapping support. This problem is more challenging and remains unsolved. <cit.> were able to show that models can generalize between distributions with a very specific relation, but it is unclear what realistic distributions fit their constraints. <cit.> also study out-of-support problems theoretically but touch on compositional generalization only as a workaround for general extrapolation. Recently, <cit.> took a first step towards a more applicable theory of compositional generalization to unseen domains, but their results still rely on specific distributions, and they do not consider functions with arbitrary (nonlinear) compositions or multi-variate outputs. In contrast, our framework is independent of the exact distributions used for training and testing, and our assumptions on the compositional nature of the data allow us to prove generalization in a much broader setting.
§ A FRAMEWORK FOR COMPOSITIONAL GENERALIZATION
We use the following notation throughout. [N] denotes the set of natural numbers {1, 2, ..., N}. Id denotes the (vector-valued) identity function. We denote two functions f, g agreeing for all points in set P as f ≡_P g. Finally, we write the total derivative of a vector-valued function f by all its inputs z as ∂ f/∂ z, corresponding to the Jacobian matrix with entries ∂ f_i/∂ z_j.
§.§ Compositionality
Colloquially, the term “compositional data” implies that the data can be broken down into discrete, identifiable components that collectively form the whole. For instance, in natural images, these components might be objects, while in music, they might be individual instruments. As a running illustrative example, we will refer to a simple dataset similar to multi-dSprites <cit.>, as shown in Figure <ref>. Each sample in this dataset is a composition of two basic sprites, each with a random position, shape, size, and color.
Drawing inspiration from identifiable representation learning, we define compositionality mathematically as a property of the data-generating process. In our example, the samples are generated by a simple rendering engine that initially renders each sprite individually on a separate canvas. These canvases are then overlaid to produce a single image featuring two sprites. More specifically, the rendering engine uses the (latent) properties of sprite one, z_1 = (z_1,x, z_1,y, z_1,shape, z_1,size, z_1,color), to produce an image x̃_1 of the first sprite. The same process is repeated with the properties of sprite two, z_2 = (z_2,x, z_2,y, z_2,shape, z_2,size, z_2,color), to create an image x̃_2 of the second sprite. Lastly, the engine combines x̃_1 and x̃_2 to create the final overlaid rendering x of both sprites. Figure <ref> demonstrates this process.
In this scenario, the individual sprite renderers carry out the bulk of the work. In contrast, the composition of the two intermediate sprite images x̃_1, x̃_2 can be formulated as a simple pixel-wise operation (see Appendix <ref> for more details). The rendering processes for each sprite are independent: adjusting the properties of one sprite will not influence the intermediate image of the other, and vice versa.
We posit that this two-step generative procedure—the (intricate) generation of individual components and their (simple) composition into a single output—is a key characteristic of a broad class of compositional problems. If we know the composition function, then understanding the basic elements (for example, the individual sprites) is enough to grasp all possible combinations of sprites in the dataset.
We can thus represent any latent variable model f : 𝒵→𝒳, which maps a latent vector z∈𝒵 to a sample x in the observation space 𝒳, as a two-step generative process.
{ C, φ_1, …, φ_K, 𝒵_1, …, 𝒵_K, 𝒳̃_1, …, 𝒳̃_K} is a compositional representation of function f if
∀z∈𝒵 f( z) = C ( φ_1( z_1), ..., φ_K( z_K) ) and 𝒵 = 𝒵_1×…×𝒵_K,
where z_i denotes the canonical projection of z onto 𝒵_i. We refer to φ_k: 𝒵_k →𝒳̃_k as the component functions, to 𝒳̃_1, …, 𝒳̃_K as the (hidden) component spaces, and to C: 𝒳̃_1 ×…×𝒳̃_K →𝒳 as the composition function.
Note that in its most general form, we do not require the component functions to be identical or to map to the same component space. The compositional representation of a function f is also not unique. For instance, any f possesses a trivial compositional representation given by {f, Id, …, Id} (for the sake of clarity, we will omit the explicit mention of the latent factorization and component spaces henceforth). We will later establish conditions that must be met by at least one compositional representation of f.
Our definition of compositionality naturally aligns with various methods in the fields of identifiability, disentanglement, or object-centric learning. In the decoder of SlotAttention <cit.>, for example, each component function is a spatial broadcast decoder followed by a CNN, and the composition function is implemented as alpha compositing.
<cit.> model the component functions as element-wise multiplication of high-dimensional latent codes, which are then composed through a straightforward sum. A similar approach is chosen by <cit.>, except that interactions between components are modeled using matrix multiplication.
§.§ Compositional Generalization
The model in Figure <ref> was trained supervisedly: it was trained to reconstruct samples x given the ground-truth latent factors (z_1, z_2) for each sprite (see Section <ref> for more details). We denote this model as f̂, indicating that it is meant to replicate the ground-truth generating process f of the data. The model f̂ indeed learned to fit f almost perfectly on the training distribution P, but failed to do so on the test distribution Q.
This failure is surprising because the test samples only contain sprites already encountered during training. The novelty lies solely in the combination of these sprites. We would expect any model that comprehends the compositional nature of the dataset to readily generalize to these test samples.
This compositional aspect of the generalization problem manifests itself in the structure of the training and test distribution. In our running example, the model was trained on samples from a distribution P that contained
all possible sprites in each slot, but only in combination with one base sprite in the other slot (illustrated in Figure <ref>A). More formally, the support of P can be written as
P = { ( z_1 ∈𝒵_1, z_2 ∈𝒵_2) | z_1 = z_1^0 ∨ z_2 = z_2^0 }.
The test distribution Q is a uniform distribution over the full product space 𝒵_1×𝒵_2, i.e. it contains all possible sprite combinations. More generally, we say that a generalization problem is compositional if the test distribution contains only components that have been present in the training distribution, see Figure <ref>. This notion can be formalized as follows based on the support of the marginal distributions:
Given two arbitrary distributions P, Q over latents z = ( z_1, ..., z_K) ∈𝒵 = 𝒵_1 ×⋯×𝒵_K, P has compositional support w.r.t. Q if
P_ z_k = Q_ z_k⊆𝒵_k ∀ k ∈ [K].
Clearly, compositional generalization requires compositional support. If regions of the test latent space exist for which a component is not observed, as in Figure <ref>E, we can examine a model's generalization capability, but the problem is not compositional. Depending on whether the gap in the support is in the middle of a latent's domain or towards either end, the generalization problem becomes an interpolation or extrapolation problem instead, which are not the focus of this work.
§.§ Sufficient conditions for compositional generalization
With the above setup, we can now begin to examine under what conditions compositional generalization can be guaranteed to occur.
To make this question precise, let us assume for the moment that the sprites don't occlude each other but that they are just summed up in pixel space. Then the compositional representation of the generative process is simply {Id, φ_1, φ_2}, i.e.
f(z) = φ_1( z_1) + φ_2( z_2).
The question becomes: Given supervised samples (z_i, x_i) from P, can we learn a new model f̂ that is equivalent to f on Q, i.e. for which f̂≡_Qf? We assume that C is known, so in order to generalize, we must be able to reconstruct the individual component functions φ_i.
For the simple case from equation <ref>, we can fully reconstruct the component functions as follows. First, we note that if P is in an open set, we can locally reconstruct the hidden Jacobian of φ_i from the observable Jacobian of f as
∂f/∂z_k(z) = ∂φ_k/∂z_k(z_k).
Since the training distribution contains all possible component configurations z_i, we can reconstruct the Jacobian of φ_i at every point z_i. We then know everything about φ_i up to a global offset (which can be removed if there exists a known initial point for integration).
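This identity is easy to verify numerically with automatic differentiation. The toy sketch below uses small illustrative networks (not the models used in the experiments) and checks that the slot-wise block of the composed Jacobian equals the component Jacobian when C is the sum.

import torch
from torch.autograd.functional import jacobian

phi1 = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.Tanh(), torch.nn.Linear(32, 10))
phi2 = torch.nn.Sequential(torch.nn.Linear(5, 32), torch.nn.Tanh(), torch.nn.Linear(32, 10))

def f(z):                                   # z = (z_1, z_2) with 5 latents per slot
    return phi1(z[:5]) + phi2(z[5:])        # composition C = Id (component outputs summed)

z = torch.randn(10)
J_f = jacobian(f, z)                        # (10, 10) Jacobian of the composed map
J_phi1 = jacobian(phi1, z[:5])              # (10, 5) Jacobian of the first component
assert torch.allclose(J_f[:, :5], J_phi1, atol=1e-6)   # slot-1 block matches phi_1's Jacobian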
Our goal is to extend this approach to a maximally large set of composition functions C. Our reasoning is straightforward if C is the identity, but what if we have occlusions or other nonlinear interactions between slots? What are general conditions on C and the support of the training distribution P such that we can still reconstruct the individual component functions and thus generalize compositionally?
Let us now consider the sprites example with occlusions, and let us assume that the support of P is basically a thin region around the diagonal; see Figure <ref> (left). In this case, the two sprites are always relatively similar, leading to large overlaps. It is impossible to reconstruct the full Jacobian of the occluded sprite from a single sample. Instead, we need a set of samples for which the background sprite is the same while the foreground sprite is in different positions; see Figure <ref> (right). With sufficient samples of this kind, we can observe all pixels of the background sprite at least once. Then reconstruction of the Jacobian of φ_1 is possible again.
This line of thought brings us to a more general condition on the data-generating process: The composition function C and the support P must be chosen such that the full Jacobian can be reconstructed for each component function for all component latents. We formally define the concept of sufficient support below. Note that whether the support of P is sufficient or not strongly depends on the choice of composition function C.
A distribution P over latents z = ( z_1, ..., z_K) ∈𝒵 has sufficient support w.r.t. a compositional representation of a function f if P is in an open set and, for any latent value z_k^*, there exists a (finite) set of points P'_k( z_k^*) ⊆{ p ∈ P| p_k = z_k^* } for which the sum of the total derivatives of C has full rank. That is,
rank( ∑_ p ∈ P'_k( z_k^*)∂ C/∂φ_k(φ( p)) ) = M,
where M is the dimension of the component space 𝒳̃_k ⊆ℝ^M.
We are now ready to state our main theorem, namely that if f, f̂ share the same composition function and if P has compositional and sufficient support, then the model f̂ generalizes to Q if it matches the ground-truth data-generating process f on P.
Let P, Q be arbitrary distributions over latents z = ( z_1, ..., z_K) ∈𝒵.
Let f, f̂ be functions with compositional representations in the sense of definition <ref> that share { C, 𝒵_1, ..., 𝒵_K }, but use arbitrary {φ_1, ..., φ_K, 𝒳̃_1, ..., 𝒳̃_K }, {φ̂_1, ..., φ̂_K, 𝒳̂_1, ..., 𝒳̂_K }.
Assume the following assumptions hold:
* C, φ_k, φ̂_k are differentiable, C is Lipschitz in φ, and φ is continuous in z.
* P has compositional support w.r.t. Q in the sense of definition <ref>.
* P has sufficient support w.r.t. f in the sense of definition <ref>.
* There exists an initial point p^0∈ P such that φ ( p^0) = φ̂( p^0).
Then f̂ generalizes to Q, i.e., f ≡_P f̂ ⇒ f ≡_Q f̂.
The proof follows roughly the intuition we developed above in that we show that the Jacobians of the component functions can be reconstructed everywhere. Bear in mind that this is simply a construction for the proof: The theorem holds whenever f̂ fits the output of f on the training distribution P, which we can achieve with standard supervised training and without access to the ground-truth Jacobians. It should also be emphasized that since the compositional representation is not unique, the theorem holds if there exists at least one for which the assumptions are fulfilled. Note also that the initial point condition <ref> is needed in the proof, but in all practical experiments (see below), we can generalize compositionally without explicit knowledge of that point. We relegate further details to Appendix <ref>.
§ EXPERIMENTS
We validate our theoretical framework on the multi-sprite data. All models were trained for 2000 epochs on training sets of 100k samples using an NVIDIA RTX 2080 Ti; all test sets contain 10k samples. Table <ref> summarizes the reconstruction quality achieved on the in-domain (ID) test set (P) and the entire latent space (Q) for all experiments.
Motivating experiment
We implement the setup from Figure <ref> to demonstrate that a compositional model does indeed generalize if the conditions from Theorem <ref> are met. We model the component functions as four fully-connected layers followed by four upsampling-convolution stages, mapping the 5d component latent to 64×64 RGB images. For training stability, the composition function is implemented as a soft pixel-wise addition using the sigmoid function σ(·) as
x = σ(x̃_1) ·x̃_1 + σ(-x̃_1) ·x̃_2,
which allows component 1 to occlude component 2. We contrast this to a non-compositional monolithic model, which has the same architecture as a single component function (with adjusted layer sizes to match the overall parameter count of the compositional model). We show that both models have the capacity to fit the data by training on random samples covering the entire latent space (Table <ref>, #1,2). We then train on a distribution with orthogonal support as in equation <ref>, albeit with two planes for the foreground component to satisfy the sufficient support condition (Definition <ref>) as explained in Figure <ref>. Both models can reconstruct ID samples, but only the compositional model generalizes to the entire latent space (Table <ref>, #3,4).
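For reference, this soft pixel-wise addition can be implemented in a few lines (a sketch; tensor shapes and names are illustrative):

import torch

def soft_occlusion_compose(x1, x2):
    # x1, x2: rendered component images of identical shape, e.g. (B, 3, 64, 64).
    gate = torch.sigmoid(x1)                 # sigma(x1), applied pixel-wise
    return gate * x1 + (1.0 - gate) * x2     # sigma(-x1) = 1 - sigma(x1)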
Flexible compositional support Next, we demonstrate the variety of settings that fulfil the compositional support assumption as illustrated in Figure <ref>B and C. To this end, we repeat the experiment on training sets P sampled from (i) a normal distribution with orthogonal support (Table <ref>, #5) and (ii) a uniform distribution over a diagonal support chosen broad enough to satisfy the sufficient support condition (Table <ref>, #6). The model generalizes to the entire latent space in both settings. Since the generalization performance is already close to ceiling, broadening the support of both distributions (Table <ref>, #7,8) does not further increase performance.
Violating Conditions Finally, we look at the effect of violating some conditions.
* Gaps in support (Table <ref>, #9) Gaps in the support of the training set, such that some component configurations are never observed (Figure <ref>E), violate the compositional support condition (Definition <ref>). While the overall reconstruction performance only drops slightly, visualizing the reconstruction error over a 2d-slice of the latent space in Figure <ref> illustrates clearly that generalization fails exactly where the condition is violated.
* Insufficient training variability (Table <ref>, #10) Reducing the width of the diagonal support violates the sufficient support condition (Definition <ref>) as soon as some parts of the background component are always occluded and can not be observed in the output anymore. We can clearly see that reconstruction performance on the entire latent space drops significantly as a result.
* Collapsed Composition Function (Table <ref>, #11) Changing the output of each component function from RGB to RGBa and implementing the composition as alpha compositing yields a model that is still compositional, but for which no support can satisfy the sufficient support condition since the derivative of transparent pixels will always be zero and the Jacobian matrix can therefore never have full rank (more details in Appendix <ref>). However, we observe that the model still generalizes to the entire latent space and achieves even lower reconstruction error than the original model. This emphasizes that what we present are merely sufficient conditions, which might be loosened in future work.
§ DISCUSSION
We presented a first step and a framework to study compositional generalization in a more principled way. Clearly, there remain many open questions and limitations that we leave for future work.
Supervised setting
We only studied a supervised regression setting in which the model has access to the ground-truth latents of each training sample. Ultimately, we are
interested in the unsupervised setting akin to what is typically studied in identifiable representation learning. The unsupervised setting comes with inherent ambiguities that make generalizations guarantees harder to derive. Still, the results in this paper build an important foundation for future studies because sufficient conditions in the supervised setting can be considered necessary conditions in the unsupervised setting.
Jacobian and initial point The proof of Theorem <ref> utilizes the Jacobian of the ground-truth model. We emphasize again that this construction is necessary only for the proof and does not mean that we require access to the data-generating processes' full Jacobian for training.
Similarly, the existence of an initial point p^0 is a technicality of the proof that is not reflected in the experiments. While it is not yet clear whether it is possible to complete the proof without the initial point condition, we believe there is a self-consistency condition that might alleviate the need for this condition. The experiments thus hint at the existence of alternative proof strategies with relaxed assumptions.
Known composition function
We also assume the composition function to be known, which is approximately true in many interesting scenarios, such as object composition in scenes or the composition of instruments in music. In fact, many structured representation learning approaches like SlotAttention <cit.> incorporate structural components that are meant to mimic the compositional nature of the ground-truth-generating process. In other interesting cases like language, however, the composition function is unknown a priori and needs to be learned. This might be possible by observing how the gradients of C change with respect to a fixed slot, at least if certain regularity conditions are fulfilled.
Inductive biases
Some of the conditions we derived can be relaxed in the presence of certain inductive biases. For example, models with an inductive bias towards shift invariance might be able to cope with certain gaps in the training support (e.g., if sprites are not visible in every position). Similarly, assuming all component functions φ to be identical would substantially simplify the problem and allow for much smaller sufficient supports P. The conditions we derived do not assume any inductive bias but are meant to formally guarantee compositional generalization. We expect that our conditions generalize to more realistic conditions as long as the core aspects are fulfilled.
Error bounds Our generalization results hold only if the learned model perfectly matches the ground-truth model on the training distribution. This is similar to identifiable representation learning, where a model must find the global minimum of a certain loss or reconstruction error for the theory to hold. Nonetheless, extending our results towards generalization errors that are bounded by the error on the training distribution is an important avenue for future work.
Broader impact Compositional generalization, once achieved, has the potential to be beneficial in many downstream applications. By substantially increasing sample and training efficiency, it could help to democratize the development and research of large-scale models. Better generalization capabilities could also increase the reliability and robustness of models but may amplify existing biases and inequalities in the data by generalizing them and hinder our ability to interpret and certify a model's decisions.
§ CONCLUSION
Machine learning, despite all recent breakthroughs, still struggles with generalization. Taking advantage of the basic building blocks that compose our visual world and our languages remains unique to human cognition.
We believe that progress towards more generalizable machine learning is hampered by a lack of a formal understanding of how generalization can occur. This paper focuses on compositional generalization and provides a precise mathematical framework to study it. We derive a set of sufficient conditions under which compositional generalization can occur and which cover a wide range of existing approaches. We see this work as a stepping stone towards identifiable representation learning techniques that can provably infer and leverage the compositional structure of the data. It is certainly still a long road toward scalable empirical learning techniques that can fully leverage the compositional nature of our world. However, once achieved, there is an opportunity for drastically more sample-efficient, robust, and human-aligned machine learning models.
§ ACKNOWLEDGMENTS
We would like to thank (in alphabetical order): Jack Brady, Simon Buchholz, Attila Juhos, and Roland Zimmermann for helpful discussions and feedback.
This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A. WB acknowledges financial support via an Emmy Noether Grant funded by the German Research Foundation (DFG) under grant no. BR 6382/1-1 and via the Open Philanthropy Foundation funded by the Good Ventures Foundation. WB is a member of the Machine Learning Cluster of Excellence, EXC number 2064/1 – Project number 390727645. This research utilized compute resources at the Tübingen Machine Learning Cloud, DFG FKZ INST 37/1057-1 FUGG. We thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting TW and PM.
§ AUTHOR CONTRIBUTIONS
The project was led and coordinated by TW. TW and PM jointly developed the theory with insights from WB. TW implemented and conducted the experiments with input from PM and WB. TW led the writing of the manuscript with help from WB, PM, and MB. TW created all figures with comments from PM and WB.
§ PROOF OF THEOREM <REF>
We reiterate the setup and notation introduced in the paper here for ease of reference.
Notation
[N] denotes the set of natural numbers {1, 2, ..., N}.
Id denotes the (vector-valued) identity function.
We write two functions f, g agreeing for all points in set P as f ≡_P g.
Finally, we write the total derivative of a vector-valued function f by all its inputs z as ∂ f/∂ z, the Jacobian matrix with entries ∂ f_i/∂ z_j.
Setup
We are given two arbitrary distributions P, Q over latents z = ( z_1, ..., z_K) ∈𝒵. Each latent z_k describes one of the K components of the final data point x produced by the ground-truth data-generating process f. A model f̂ is trained to fit the data-generating process on samples of P; the aim is to derive conditions on P and f̂ that are sufficient for f̂ to then also fit f on Q.
We assume that f, f̂ are chosen such that we can find at least one compositional representation (Definition <ref>) for either function that shares a common composition function C and factorization of the latent space 𝒵_1 ×⋯×𝒵_K = 𝒵.
For f̂ to generalize to Q, we need to show fitting f on P implies also fitting it on Q, in other words
f ≡_P f̂  ⟹  f ≡_Q f̂.
Since C is the same for both functions, we immediately get
φ ≡_Q φ̂  ⟹  f ≡_Q f̂,
it suffices to show that the component functions generalize.
Note, however, that since C is not generally assumed to be invertible, we do not directly get that agreement of f, f̂ on P also implies agreement of their component functions φ, φ̂ on P.
We require P to have compositional support Q (Definition <ref> and Assumption <ref>).
The consequence of this assumption is that any point q = ( q_1, ..., q_K) ∈ Q can be constructed from components of the K support points p^k = ( p^k_1, ..., p^k_K) ∈ P subject to p^k_k = q_k as
q = ( p^1_1, ..., p^K_K ).
A trivial consequence, then, is that points x̃∈𝒳̃ in component space corresponding to points in Q in latent space can always be mapped back to latents in P
φ( q) = (φ_1( q_1), ..., φ_K( q_K)) = (φ_1( p^(1)_1), ..., φ_K( p^(K)_K ))
because each component function φ_k only depends on the latents z_k of a single component.
This is also the case for the component functions φ̂ of f̂ so that we get
φ ≡_P φ̂  ⟹  φ ≡_Q φ̂.
We now only need to show that φP≡φ̂ follows from f P≡f̂. As noted above, this is not guaranteed to be the case, as C is not generally invertible (in the presence of occlusions). We, therefore, need to consider when a unique reconstruction of the component functions φ (and correspondingly φ̂) is possible, based on only the observations x = f( z) on Q.
As explained in the main paper, we can reason about how a change in the latents z_k of some slot affects the final output, which we can express through the chain rule as
∂ f/∂ z_k(z) = ∂ C/∂φ_k(φ(z)) ∂φ_k/∂ z_k(z_k)   ∀ k ∈ [K],
where the three Jacobians have dimensions N × D, N × M, and M × D, respectively.
Here, N is the dimension of the final output (64 × 64 × 3 for RGB images), M is the dimension of a component's representation x̃_k (also 64 × 64 × 3 for RGB images), and D is the dimension of a component's latent description z_k (5: x-position, y-position, shape, size, hue for sprites).
Note that we can look at the derivative component-wise because each component function φ_k only depends on the latents z_k of its component. However, the combination function still depends on the (hidden) representation of all components, and therefore ∂ C/∂φ_k is a function of all φ and the entire z.
In equation <ref>, the left-hand side (LHS) ∂ f/∂ z_k can be computed from the training, as long as P is an open set. On the right-hand side (RHS), the functional form of ∂ C/∂φ_k is known since C is given, but since φ( z) is still unknown, the exact entries of this Jacobian matrix are unknown. As such, equation <ref> defines a system of partial differential equations (PDEs) for the set of component functions φ with independent variables z.
Before we can attempt to solve this system of PDEs, we simplify it by isolating ∂φ_k/∂ z_k. Since all terms are matrices, this is equivalent to solving a system of linear equations. For N = M, ∂ C/∂φ_k is square, and we can solve by taking its inverse as long as the determinant is not zero. In the general case of N ≥ M, however, we have to resort to the pseudoinverse to write
( ∂φ_k/∂ z_k )^* = ( (∂ C/∂φ_k)^⊤ (∂ C/∂φ_k) )^-1 (∂ C/∂φ_k)^⊤ ∂ f/∂ z_k   ∀ k ∈ [K],
which gives all solutions ∂φ_k/∂ z_k^* if any exist. This system is overdetermined, and a (unique) solution exists if ∂ C/∂φ_k has full (column) rank. In other words, to execute this simplification step on P, we require that for all z ∈ P the M column vectors of the form
( ∂ C_1/∂φ_km(φ( z)), ..., ∂ C_N/∂φ_km(φ( z)) )^⊤ ∀ m ∈ [M]
are linearly independent. Each entry of a column vector describes how all entries C_n of the final output (the pixels of the output image) change with a single entry φ_km of the intermediate representation of component k (a single pixel of the component-wise image). It is easy to see that if even a part of the intermediate representation is not reflected in the final output (in the presence of occlusions, when a single pixel of one component is occluded), the entire corresponding column is zero, and the matrix does not have full rank.
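Before turning to the workaround for this rank deficiency, the least-squares step above can be illustrated with a small numerical sanity check (not part of the proof). All dimensions and Jacobians below are synthetic placeholders; the only point is that a full-column-rank ∂C/∂φ_k makes the recovery of ∂φ_k/∂z_k unique.

    import numpy as np

    # Synthetic dimensions: output size N, component representation size M, latent size D.
    N, M, D = 12, 8, 5
    rng = np.random.default_rng(0)

    dC_dphi = rng.normal(size=(N, M))    # Jacobian of C w.r.t. one component (full column rank a.s.)
    dphi_dz = rng.normal(size=(M, D))    # ground-truth component Jacobian (unknown in practice)
    df_dz = dC_dphi @ dphi_dz            # observable left-hand side of the chain rule

    # Least-squares recovery via the pseudoinverse, as in the equation above.
    recovered = np.linalg.pinv(dC_dphi) @ df_dz
    print(np.allclose(recovered, dphi_dz))  # True whenever dC_dphi has rank M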
To circumvent this issue, we realize that the LHS of equation <ref> only depends on the latents z_k of a single component. Hence, for a given latent z and a slot index k, the correct component function will have the same solution for all points in any (finite) set
P'( z, k) ⊆{ p ∈ P | p_k = z_k }.
We can interpret these points as the intersection of P with a plane in latent space at z_k (all latent combinations in the training set in which one component is fixed in a specific configuration).
We can then define a modified composition function C̃ that takes z and a slot index k as input and produces a “superposition” of images corresponding to the latents in the subset as
C̃( z, k ) = ∑_ p ∈ P'( z, k) C(φ( p) ).
Essentially, we are condensing the information from multiple points in the latent space into a single function.
This enables us to write a modified version of equation <ref> as
∑_{p ∈ P'(z, k)} ∂ f/∂ z_k(p) = ( ∑_{p ∈ P'(z, k)} ∂ C/∂φ_k(φ(p)) ) ∂φ_k/∂ z_k(z_k) = ∂C̃/∂φ_k(z, k) ∂φ_k/∂ z_k(z_k)   ∀ k ∈ [K].
Now we can solve for ∂φ_k/∂ z_k as in equation <ref>, but this time require only that ∂C̃/∂φ_k has full (column) rank for a unique solution to exist,
rank( ∂C̃/∂φ_k(z, k) ) = rank( ∑_{p ∈ P'(z, k)} ∂ C/∂φ_k(φ(p)) ) = M   ∀ z ∈ P, ∀ k ∈ [K].
In general, this condition is easier to fulfill since full rank is not required in any one point but over a set of points. For occlusions, for example, any pixel of one slot can be occluded in some points p ∈ P', as long as it is not occluded in all of them. We can interpret this procedure as “collecting sufficient information” such that an inversion of the generally non-invertible C becomes feasible locally.
The requirement that P has to be an open set, together with the full rank condition on the Jacobian of the composition function condensed over multiple points, C̃, is termed sufficient support in the main paper (Definition <ref> and Assumption <ref>). As explained here, this allows for the reconstruction of ∂φ_k/∂ z_k from the observations,
f ≡_P f̂  ⟹  ∂φ/∂ z ≡_P ∂φ̂/∂ z.
The above step only gives us agreement of the derivative of the component functions, ∂φ_k/∂ z_k, not agreement of the functions themselves. As explained above, the solution to the linear system of equations <ref> constitutes a system of partial differential equations (PDEs) in the set of component functions φ with independent variables z. We can see that this system has the form
∂_i φ( z) = a_i( z, φ( z)),
where i ∈ [L] = [K× D] is an index over the flattened dimensions K and D such that ∂_i φ denotes ∂φ/∂ z_L (which is essentially one column of ∂φ_k/∂ z_k aggregated over all k) and a_i is the combination of corresponding terms from the LHS. If this system allows for more than one solution, we cannot uniquely reconstruct the component functions from their derivatives.
If we have access to some initial point, however, for which we know φ( 0) = φ^0, we can write
φ(z_1, ..., z_L) - φ^0 = ( φ(z_1, ..., z_L) - φ(0, z_2, ..., z_L) )
+ ( φ(0, z_2, ..., z_L) - φ(0, 0, z_3, ..., z_L) )
+ ...
+ ( φ(0, ..., 0, z_L) - φ(0, ..., 0) ).
In each line of this equation, only a single z_i =: t is changing; all other z_1, ..., z_L are fixed. Any solution of <ref>, therefore, also has to solve the L ordinary differential equations (ODEs) of the form
∂_t φ(z_1, ..., z_i-1, t, z_i+1, ..., z_L) = a_i(z_1, ..., z_i-1, t, z_i+1, ..., z_L, φ(z_1, ..., z_i-1, t, z_i+1, ..., z_L) ),
which have a unique solution if a_i is Lipschitz in φ and continuous in z_i, as guaranteed by <ref>. Therefore, <ref> has at most one solution.
This reference point does not have to be in z = 0, as a simple coordinate transform will yield the same result for any point in P.
It is therefore sufficient that there exists some point p^0 ∈ P for which φ ( p^0) = φ̂( p^0) to obtain the same unique solution for φ and φ̂, which is exactly what <ref> states. Overall, this means that agreement of the derivatives of the component functions also implies agreement of the component functions themselves,
∂φ/∂ z ≡_P ∂φ̂/∂ z  ⟹  φ ≡_P φ̂.
Finally, we can conclude the model f̂ fitting the ground-truth generating process f on the training distribution P, through <ref>, <ref>, <ref>, <ref>, implies the model generalizing to Q as well. In other words, equation <ref> holds.
§ DETAILS ABOUT THE COMPOSITIONAL FUNCTIONS
As explained in equation <ref> in section <ref>, the composition function is implemented as a soft pixel-wise addition in most experiments. The use of the sigmoid function σ(·) in the composition
x = σ(x̃_1) ·x̃_1 + σ(-x̃_1) ·x̃_2
was necessary for training stability. With this formulation, sprites can also overlap somewhat transparently, which is not desired and leads to small reconstruction artifacts for some specific samples. Implementing the composition with a step function as
x = step(x̃_1) ·x̃_1 + step(-x̃_1) ·x̃_2
instead would be more faithful to the ground-truth data-generating process, but is hard to train with gradient descent.
Note that both formulations could easily be extended to more than one sprite by simply repeating the composition operation with any additional sprite.
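As a rough PyTorch sketch of the two formulations above (not the authors' code): the exact value range of the intermediate images x̃, in particular how background pixels are encoded so that σ(x̃_1) acts as a soft mask, is an assumption here.

    import torch

    def soft_compose(x1, x2):
        # Soft pixel-wise addition: x = sigma(x1) * x1 + sigma(-x1) * x2.
        # sigma(-x1) = 1 - sigma(x1), so sigma(x1) plays the role of a soft mask
        # selecting the foreground component x1 over x2.
        mask = torch.sigmoid(x1)
        return mask * x1 + (1.0 - mask) * x2

    def hard_compose(x1, x2):
        # Step-function variant: closer to the true generating process, but the
        # zero gradient of the step makes it hard to train with gradient descent.
        mask = (x1 > 0).float()
        return mask * x1 + (1.0 - mask) * x2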
In section <ref>, we also looked at a model that implements the composition through alpha compositing instead (see also Table <ref>, #11). Here, each component's intermediate representation is an RGBa image. The components are then overlaid on an opaque black background using the composition function
x_α = x_1, α + ( 1 - x_1, α) · x_2, α
x_RGB = x_1, α· x_1, RGB + ( 1 - x_1, α) ·x_2, α/x_α· x_2, RGB.
While this yields a compositional function, the sufficient support condition (Definition <ref>) is generally not fulfilled on the sprites data. The reason is that in fully transparent pixels (α = 0), changing the RGB value is not reflected in the output. Conversely, if a pixel is black, changing its alpha value will not affect how it is blended over a black background. As a result, most columns in the Jacobian ∂ C/∂φ_k (see also equation <ref>) will be zero. Since the intermediate representations of each sprite will contain a lot of black or transparent pixels (the entire background), the rank of the Jacobian here will be low. In this case, the workaround from equation <ref> does not help since the low rank is not a result of another component in the foreground but of the specific parameterization of each component itself.
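A sketch of this alpha-compositing variant follows; the grouping of the RGB formula above is our reading of it, and the small epsilon guarding the division is an addition. It also shows concretely why fully transparent pixels (α = 0) produce vanishing Jacobian columns: their RGB values never reach the output.

    import torch

    def alpha_composite(rgb1, a1, rgb2, a2, eps=1e-8):
        # 'Over' compositing of two RGBa components onto an opaque black background.
        # rgb*: (B, 3, H, W) colour channels, a*: (B, 1, H, W) alpha channels in [0, 1].
        a_out = a1 + (1.0 - a1) * a2
        rgb_out = a1 * rgb1 + (1.0 - a1) * a2 / (a_out + eps) * rgb2
        # Where a1 == 0, rgb1 does not influence rgb_out at all, so
        # d(rgb_out)/d(rgb1) is exactly zero there, which is the rank deficiency discussed above.
        return rgb_out, a_out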
As stated in the main paper, the fact that this parameterization still produces good results and generalizes well is an indicator that there might be another proof strategy or workaround that avoids this specific issue.
|
http://arxiv.org/abs/2307.10205v1 | 20230714070148 | Adversarial Training Over Long-Tailed Distribution | [
"Guanlin Li",
"Guowen Xu",
"Tianwei Zhang"
] | cs.LG | [
"cs.LG",
"cs.CR",
"cs.CV"
] |
Adversarial Training Over Long-Tailed Distribution
Guanlin Li
Nanyang Technological University, S-Lab
[email protected]
Guowen Xu
City University of Hong Kong
[email protected]
Tianwei Zhang
Nanyang Technological University
[email protected]
August 12, 2023
============================================================================================================================================================================================================================
In this paper, we study adversarial training on datasets that obey the long-tailed distribution, which is practical but rarely explored in previous works. Compared with conventional adversarial training on balanced datasets, this process falls into the dilemma of generating uneven adversarial examples (AEs) and an unbalanced feature embedding space, causing the resulting model to exhibit low robustness and accuracy on tail data. To combat that, we propose a new adversarial training framework – Re-balancing Adversarial Training (REAT). This framework consists of two components: (1) a new training strategy inspired by the term effective number to guide the model to generate more balanced and informative AEs; (2) a carefully constructed penalty function to force a satisfactory feature space. Evaluation results on different datasets and model structures prove that REAT can effectively enhance the model's robustness and preserve the model's clean accuracy. The code can be found at <https://github.com/GuanlinLee/REAT>.
§ INTRODUCTION
Adversarial training <cit.> has been widely used to improve the robustness of the model against adversarial attacks <cit.>. However, existing efforts mainly focus on training on balanced datasets, while ignoring more realistic datasets obeying long-tailed distributions <cit.>. Informally, training data subject to a long-tailed distribution has the property that the vast majority of the data belong to a minority of total classes (i.e., “head” classes), while the remaining data belong to other classes (“body” and “tail” classes) <cit.>. This distinct nature yields new problems in adversarial training (see Section <ref> for detailed explanations). First, it is difficult to produce uniform and balanced AEs: AEs are always misclassified by the model into the head classes with overwhelming probabilities regardless of the labels of their corresponding clean samples. Second, the excessive dominance of head classes in the feature embedding space further compresses the feature space of tail classes. The mutual entanglement of the above two problems leads to the underfitting of tail classes in both robustness and accuracy, thus leading to unsatisfactory training performance.
To address these challenges, Wu et al. <cit.> proposed RoBal, the first work (and the only work, to our best knowledge) towards adversarial training on datasets with long-tailed distributions. It is essentially a two-stage re-balancing adversarial training method. The first stage lies in the training process, where a new class-aware margin loss function is designed to make the model pay equal attention to data from head classes and tail classes. The second stage focuses on the inference process, where a pre-defined bias is added to the predicted logits vectors, thereby improving the prediction accuracy of samples from the tail classes. Moreover, RoBal constructs a new normalized cosine classification layer, to further improve models' accuracy and robustness.
While RoBal shows impressive results on a variety of datasets, it still has several limitations that need to be addressed. First, the robustness of RoBal mainly benefits from gradient obfuscation (specifically, gradient vanishing) <cit.> in the proposed new scale-invariant classification layer. This can be easily compromised by simply multiplying the logits by a constant, as the constant can increase the absolute value of gradients against gradient vanishing and correct the sign of gradients during AE generation (see Tables <ref> and <ref>). Second, the designed class-aware margin loss ignores samples from body classes and exclusively focuses on head and tail classes, which inevitably reduces the overall model accuracy. Detailed analysis and evaluations of RoBal can be found in Sections <ref> and <ref>.
To advance the practicality of adversarial training on long-tailed datasets, we design a new framework: Re-balancing adversarial training (), which demonstrates higher clean accuracy and robustness compared to RoBal. Our insights come from the revisit of two key components in adversarial training: AE generation and feature embedding. Particularly, for AE generation, we force the generated AEs to be misclassified into each class as uniformly as possible, so that the information of the tail classes is sufficiently learned during the adversarial training to improve the robustness. Our implementation is inspired by the term of effective number <cit.> in long-tailed recognition, which was proposed to increase the marginal benefits from data of tail classes. We generalize the definition of effective number to the AE generation process and propose a new Re-Balanced Loss () function. dynamically adjusts the weights assigned to each class, which significantly improves the effectiveness of the original balanced loss based on effective number <cit.>.
For feature embedding, it is challenging to balance the volume of each class's feature space, especially if the size of each class varies. To address this issue, we propose a Tail-sample-mining-based feature margin regularization () approach. treats the samples from tail classes as hard samples and optimizes feature embedding distributions of tail classes and others. To better fit the unbalanced data distribution, we propose a joint weight to increase the contribution of tail features in the entire feature embedding space.
We conduct comprehensive experiments on CIFAR-10-LT and CIFAR-100-LT datasets to demonstrate the superiority of over existing methods. For instance, achieves 67.33% clean accuracy and 32.08% robust accuracy under AutoAttack, which are 1.25% and 0.94% higher than RoBal.
In summary, our method is not a simple application of previous works. We evaluate and validate that simply integrating existing methods cannot achieve satisfactory results. The superiority of lies in the new modifications we propose over these methods, to adapt to the long-tail adversarial training scenario. In particular,
* we generalize the definition of effective number and extend it from normal training to AE generation. Without the modification, the results are lower (“ENR” in Table <ref>).
* we introduce a new regularization term into long-tailed adversarial training based on the feature space assignment behavior of models to balance the feature embedding (visualizations in Appendix <ref>).
§ BACKGROUND AND MOTIVATION
§.§ Long-tailed Recognition
Data in the wild usually obey a long-tailed distribution <cit.>, where most samples belong to a small part of classes. Models trained on long-tailed datasets usually give higher confidence to the samples from head classes, which harms the generalizability for the samples from the body or tail classes. It is challenging to solve such overconfidence issues under the long-tailed scenarios <cit.>. Several approaches have been proposed to achieve long-tailed recognition. For instance, (1) the re-sampling methods <cit.> generate balanced data distributions by sampling data with different frequencies in the training set. (2) The cost-sensitive learning methods <cit.> modify the training loss with additional weights to balance the gradients from each class. (3) The training phase decoupling methods <cit.> first train a feature extractor on re-sampled balanced data, and then train a classifier on the original dataset. (4) The classifier designing methods <cit.> modify the classification layer with prior knowledge to better fit the unbalanced data. More details about related works are in Appendix <ref>.
§.§ Long-tailed Adversarial Training
Adversarial training is a promising solution to enhance the model's robustness against AEs. Previous works mainly consider adversarial training on balanced datasets. Discussions of these works can be found in Appendix <ref>. When the training data become unbalanced, training a robust model becomes more challenging. As mentioned in Section <ref>, in long-tailed recognition, most data come from the head classes while the data of the tail classes are relatively scarce. This causes two consequences: unbalance in the output probability space and unbalance in the feature embedding space, which are detailed as follows.
First, we need to generate AEs on-the-fly during adversarial training. The unbalanced output probability space caused by long-tailed datasets can lead to unbalanced AEs, which cause the produced model to show unbalanced robustness across different classes. Figure <ref> shows such an example. We adopt PGD-based adversarial training to train a ResNet-18 model and measure the distribution of the model's predictions for the generated AEs during the training process. Figure <ref> shows the case of a balanced training set (CIFAR-10). We observe that the predictions of the AEs are uniformly distributed among all the classes. In contrast, Figure <ref> shows the case of an unbalanced training set (CIFAR-10-LT). Due to the long-tailed distribution, most AEs are labeled as head classes. This indicates that the final model has lower accuracy and robustness for tail classes, making it more vulnerable to adversarial attacks, e.g., AutoAttack <cit.>.
Second, in an unbalanced training set, the head classes can dominate the feature embedding space of the model, which can reduce the area of tail features. As a result, the performance and generalizability of the model for tail classes will be decreased. In contrast, a model trained on balanced data will give an even feature space for each class. Figure <ref> compares the feature maps of AEs in these two scenarios, where we train ResNet-18 models with PGD-based adversarial training on balanced and unbalanced CIFAR-10[For better readability, we only show four classes (two head classes “airplane” (blue) and “automobile” (orange), and two tail classes “ship” (green) and “truck” (red).). The complete feature maps for 10 classes can be found in Appendix <ref>.]. We observe the long-tailed scenario has larger differences between head and tail features compared to the balanced scenario. We show the feature embedding space for various cases to prove our statement in Appendix <ref>.
A straightforward way is to directly adopt existing solutions introduced in Section <ref> (e.g., <cit.>) for adversarial training, which can produce more balanced AE prediction distributions and feature embedding space. However, they can only partially address the overconfidence and underconfidence issues in model prediction, due to the lack of tail samples and AEs predicted as tail classes
(see Section <ref> and Appendix <ref>). RoBal <cit.> is the first methodology specifically for adversarial training with long-tailed datasets. It introduces a new loss function to promote the model to learn features from head classes and tail classes equally. It further replaces the traditional classification layer with a cosine classifier, where both weights and features are normalized and the outputs are multiplied by a temperature factor. In the inference phase, RoBal adjusts the output logits with a prior distribution, which is aligned with the label distribution. However, in our experiments, we find RoBal ignores the features from the body classes, which can harm the clean accuracy and robustness. Furthermore, RoBal can be easily defeated by a simple adaptive attack, which multiplies the output logits with a factor when generating AEs (see Section <ref>). This motivates us to explore a better solution for long-tailed recognition with adversarial training.
§ METHODOLOGY
We introduce REAT, a new framework for adversarial training on unbalanced datasets. REAT includes two innovations to address the two issues discussed in Section <ref>. Specifically, to balance the AE distribution and make the model learn more information from tail samples, we modify the objective function in the AE generation process with weights calculated based on the effective number <cit.>. To balance the feature embedding space, we propose a regularization term to increase the area of features from tail classes.
To better introduce our method, we first give formal definitions of a long-tailed dataset. Consider a dataset containing C classes with N_i samples in each class i. We assume the classes are sorted in descending order based on the number of samples in each class, i.e., N_i ≥ N_i+1. The unbalanced ratio is defined as UR = N_1/N_C <cit.>. Following previous works <cit.>, a long-tailed dataset can be divided into three parts: (1) i is a head class (HC) if 1 ≤ i ≤⌊C/3⌋, where ⌊ x ⌋ is a floor function; (2) i is a tail class (TC) if ⌈2C/3⌉≤ i ≤ C, where ⌈ x ⌉ is a ceiling function; (3) The rest are body classes (BC). Below, we describe the detailed mechanisms of .
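As a small illustration of these definitions (our own helper, not part of the paper's code), the head/body/tail split and the unbalanced ratio can be computed directly from the sorted per-class counts:

    import math

    def split_long_tailed(class_counts):
        # class_counts: per-class sample sizes sorted in descending order.
        # Returns 1-indexed head/body/tail class index lists and the unbalanced ratio UR.
        C = len(class_counts)
        ur = class_counts[0] / class_counts[-1]
        head = list(range(1, math.floor(C / 3) + 1))        # 1 <= i <= floor(C/3)
        tail = list(range(math.ceil(2 * C / 3), C + 1))     # ceil(2C/3) <= i <= C
        body = [i for i in range(1, C + 1) if i not in head and i not in tail]
        return head, body, tail, ur

    # For C = 10 classes this gives head = {1, 2, 3}, body = {4, 5, 6}, tail = {7, ..., 10}.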
§.§ Re-balancing AEs
For adversarial training, it is desirable that the objective function could encourage AEs that are classified into rarely-seen classes while punishing AEs that are classified into abundant classes. To realize this in the long-tailed scenario, we borrow the idea of the effective number from <cit.> and generalize it to adversarial training. The effective number is mainly used to measure the data overlap of each class. For class i containing N_i data, its effective number is defined as E_N_i = (1-β^N_i)/(1-β), where β = (∑_i N_i - 1)/∑_i N_i. Given the effective numbers E_N_i and E_N_j, if E_N_i>E_N_j, the marginal benefit obtained from increasing the number of training samples in class i is less than increasing the same number of training samples in class j <cit.>. This implies that we can adopt the effective number as a guide to balance the distribution of AEs generated during training. We explain the motivation of using the effective number in AE generation as follows.
At a high level, the generation of AEs can be viewed as a data sampling process, i.e., AEs are essentially sampled from the neighbors of their corresponding clean samples. Therefore, we can calculate the effective number between AEs generated in two consecutive epochs, and use it as the basis to assign dynamic weights to each class in the loss function, inducing the model to produce AEs with as little overlap as possible across consecutive epochs. This implicitly generates more AEs that are classified into tail classes and makes the model extract more marginal benefits from samples of tail classes, thus achieving our purpose.
We now describe our technical design. For simplicity, we assume that the predicted label distributions (i.e., labels assigned by the model M for AEs) in two successive training epochs will stay stable and not change too much, which has been proven in <cit.>. Then, in epoch k-1, we count the number of AEs that are classified into each class, denoted as 𝐧 = [n_1, n_2, …, n_C].[For the first training epoch, we directly use the number of clean data N_i in each class as the prior distribution.]. As a result, generating AEs in epoch k can be approximated as sampling new AEs after sampling n_i data for each class i. Therefore, we can compute the effective number of class i as E_n_i = (1-β_i^n_i)/(1-β_i), where β_i = (N_i - 1)/N_i. Note that our β is class-related to assign finer convergence parameters for each class, which is different from the calculation in <cit.>. We will experimentally prove that this adaptive effective number can better improve the robustness of models in Section <ref>.
Based on the property that the effective number of each class is inversely proportional to the marginal benefit of the new samples of this class, we construct a new indicator variable weight w_i for the marginal benefit, which is inversely proportional to E_n_i. This weight can be used to correct the loss in the AE generation process. Specifically, following the class-balanced softmax cross-entropy loss proposed in <cit.>, we compute the weight w_i for class i as follows:
w_i = C / ( E_n_i ∑_{j=1}^{C} 1/E_n_j )
With the weight w_i for each class i, we design a new Re-Balancing Loss () function as below:
= - w_i · log( e^z_i / ∑_j e^z_j )
where log( e^z_i / ∑_j e^z_j ) is the original loss function adopted to generate AEs. Our goal is to maximize this loss to generate AEs for adversarial training.
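A hedged PyTorch sketch of how the adaptive effective numbers and the resulting weighted loss could look is given below; function and variable names are ours, and the actual implementation may differ.

    import torch
    import torch.nn.functional as F

    def rebalancing_weights(prev_ae_counts, clean_counts):
        # prev_ae_counts: n_i, AEs predicted as class i in the previous epoch.
        # clean_counts:   N_i, clean training samples of class i.
        n = prev_ae_counts.float()
        N = clean_counts.float()
        beta = (N - 1.0) / N                          # class-wise beta_i
        eff = (1.0 - beta.pow(n)) / (1.0 - beta)      # adaptive effective number E_{n_i}
        inv = 1.0 / eff.clamp_min(1e-12)
        C = N.numel()
        return C * inv / inv.sum()                    # w_i = C / (E_{n_i} * sum_j 1/E_{n_j})

    def rebalanced_ae_loss(logits, targets, weights):
        # Per-sample weighted cross-entropy; *maximized* w.r.t. the input when crafting AEs.
        log_probs = F.log_softmax(logits, dim=1)
        picked = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
        return -(weights[targets] * picked).mean()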
Analysis.
We analyze why the proposed loss can help generate balanced AEs from unbalanced data samples. First, we show the effective number enjoys the asymptotic properties: (1) when n_i → 0, we have E_n_i → 0 and w_i → C; (2) when n_i → ∞, we have E_n_i → 1/(1 - β_i) and w_i → 0, as there exists an E_n_j → 0 with j ≠ i. Based on the asymptotic properties, if there are many AEs assigned to the label of class i in epoch k-1, then in epoch k, the increased effective number E_n_i results in a smaller w_i. As a consequence, it will induce the AEs generated in this round to have minimal data overlap compared to AEs of the previous round, which implicitly generates more AEs that are classified into other classes. Our experiments in Section <ref> indicate that, combined with long-tailed recognition losses, it can better balance the AE generation process and increase the number of AEs predicted into tail classes by 5×. Figure <ref> compares the distances of AEs between two consecutive training epochs without and with the proposed loss. The models (ResNet-18) are trained on CIFAR-10-LT with the unbalanced ratio UR=50. A smaller distance indicates a larger overlap of the two AEs and less marginal benefit the model can obtain from the process. We observe that it is able to increase the distances of AEs from tail classes, and generate more informative AEs to enhance the model's robustness.
§.§ Tail Feature Alignment
From Figure <ref>, we find the feature space for tail classes is smaller than that for head classes, making the model lean to classify the input into head classes. So, it is important to expand the feature space for tail classes to balance the feature representation. To achieve this goal, we first define a probabilistic feature embedding space as 𝐟^p = [e^f_1/∑_j e^f_j, e^f_2/∑_j e^f_j, …, e^f_K/∑_j e^f_j] = [f_1^p, f_2^p, …, f_K^p], where f_i is the i-th feature before the final classification layer, and K is the feature dimension. The motivation of using a probabilistic feature embedding space is to overcome the scale changes in feature representations caused by the unbalanced data distribution <cit.>. For each class i, we assume the probabilistic feature is sampled from a distribution 𝒟_i^f. As a result, given any two classes i∈ TC and j∈ HC∪ BC, our goal is to maximize the difference between 𝒟_i^f and 𝒟_j^f, thereby rebalancing the distributions of different classes in the feature space and making them more divisible.
We design a Tail-sample-mining-based feature margin regularization () approach to achieve this goal. Algorithm <ref> describes its detailed mechanism. Specifically, let 𝐅^p = [𝐟_1^p, 𝐟_2^p, …, 𝐟_B^p] denote all probabilistic features of a batch containing B samples, and 𝐲=[y_1, y_2, …, y_B] denote the labels of the corresponding feature representations. The class weights Ω=[ω_1, ω_2, …, ω_C] are calculated based on the smoothed inverse class frequency <cit.>, i.e., ω_i = √(∑_j N_j/N_i), implying that tail classes have larger class weights than head classes. The core component of is the computation of the regularization term R, which is updated for each y_i ∈ TC using the following equation:
R = R - (1/B) ∑_{j=1}^{B} (-1)^{1(y_i = y_j)} (ω_i + ω_j) ∑_{k=1}^{K} f^p_{j,k} log( f^p_{j,k} / f^p_{i,k} )
where 1(y_i = y_j) is the indicator function (outputting 1 if y_i = y_j, and 0 otherwise), and the initial value of R is 0. In Equation <ref>, we first compute the feature distribution differences using the Kullback–Leibler divergence (KLD): ∑_{k=1}^{K} f^p_{j,k} log( f^p_{j,k} / f^p_{i,k} ), where f^p_{i,k} is the value of the k-th dimension in the probabilistic feature 𝐟^p_i for the i-th sample.
A larger KLD value means a larger difference between the distributions of the feature embeddings of the i-th and j-th samples. Hence, with the property of R, for each batch, we can maximize the distributional differences between 𝒟_i^f, i∈TC and 𝒟_j^f, j≠ i, j∈ [C], and minimize the distributional gap for samples from the same tail class. To further enhance the influence of the regularization term among tail classes, we assign a joint weight (ω_i+ω_j) to the feature pair (𝐟^p_i,𝐟^p_j). ω_i for tail samples is bigger than that for head samples. To increase the distinction between pairs of tail classes and non-tail classes and pairs of two tail classes, the joint weight will further strengthen the effect of the regularization for pairs of two tail classes, thus improving the performance. Finally, we adopt the average of the distance inside the batch, i.e., R/S.
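A possible PyTorch rendering of this regularization term is sketched below, under our reading of the equation above; the final normalization, written R/S in the text, is taken here to be the number of tail-class anchors in the batch, which is an assumption.

    import torch
    import torch.nn.functional as F

    def tail_feature_regularizer(features, labels, class_weights, tail_classes):
        # features: (B, K) pre-classifier features; labels: (B,) integer class ids;
        # class_weights: (C,) smoothed inverse class frequencies omega_i;
        # tail_classes: python set of tail-class ids.
        fp = F.softmax(features, dim=1)                      # probabilistic features f^p
        log_fp = fp.clamp_min(1e-12).log()
        B = fp.size(0)
        R = fp.new_zeros(())
        n_anchor = 0
        for i in range(B):
            if labels[i].item() not in tail_classes:
                continue
            n_anchor += 1
            # KL(f_j^p || f_i^p) for every sample j in the batch.
            kl = (fp * (log_fp - log_fp[i])).sum(dim=1)
            sign = 1.0 - 2.0 * (labels == labels[i]).float()  # (-1)^{1(y_i = y_j)}
            joint_w = class_weights[labels] + class_weights[labels[i]]
            R = R - (sign * joint_w * kl).mean()              # mean over j realizes the 1/B factor
        return R / max(n_anchor, 1)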
Note that our regularization term is general and can be used with any other long-tailed recognition loss function L_lt in the following form:
L = L_lt +
To summarize, the training pipeline of our framework uses the re-balancing loss to generate adversarial examples, from which it trains the robust model with the loss function L.
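Putting the pieces together, one training step could look roughly as follows. This is a sketch only: the PGD hyperparameters are taken from the experiments section below, model.features is an assumed hook for the pre-classifier features, and the helper functions are the sketches above.

    import torch

    def training_step(model, x, y, ae_weights, class_weights, tail_classes,
                      lt_loss_fn, optimizer, eps=8/255, alpha=2/255, steps=10):
        # --- inner maximization: craft AEs with the re-balanced loss ---
        delta = torch.empty_like(x).uniform_(-eps, eps)
        for _ in range(steps):
            delta.requires_grad_(True)
            loss = rebalanced_ae_loss(model(x + delta), y, ae_weights)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        x_adv = (x + delta).clamp(0.0, 1.0)

        # --- outer minimization: long-tailed loss plus the tail-feature regularizer ---
        logits = model(x_adv)
        feats = model.features(x_adv)                 # assumed feature hook
        loss = lt_loss_fn(logits, y) + tail_feature_regularizer(
            feats, y, class_weights, tail_classes)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()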
§ EXPERIMENTS
Datasets and Models. We evaluate our method on CIFAR-10-LT and CIFAR-100-LT, which are the mainstream datasets for evaluating long-tailed recognition tasks <cit.>. To generate the unbalanced dataset, we follow the approach in <cit.> to set the unbalanced ratio (UR) as {10, 20, 50, 100} for CIFAR-10-LT and {10, 20, 50} for CIFAR-100-LT. We choose the ResNet-18 (ResNet) <cit.> and WideResNet-28-10 (WRN) <cit.> as the target models.
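The long-tailed versions of CIFAR are commonly built with an exponentially decaying per-class profile, following the construction of the cited works; the exact rounding below is our choice.

    def long_tailed_class_sizes(n_max, num_classes, unbalanced_ratio):
        # Class 0 keeps n_max samples, the last class keeps roughly n_max / UR,
        # and the counts decay geometrically in between.
        mu = (1.0 / unbalanced_ratio) ** (1.0 / (num_classes - 1))
        return [round(n_max * mu ** i) for i in range(num_classes)]

    # e.g. long_tailed_class_sizes(5000, 10, 50) decays from 5000 down to about 100.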
Baselines. We consider two baselines. The first one is to simply combine existing adversarial training methods with various long-tailed recognition losses. Our experiments and analysis in Appendix <ref> show that some adversarial training methods cannot converge well with the long-tailed recognition loss, such as TRADES <cit.>, AWP <cit.> and MART <cit.>. So we choose the most effective one: PGD-AT <cit.>. The second baseline is RoBal <cit.>.
Implementation. In our experiments, the number of training epochs is 80. The learning rate is 0.1 at the beginning and decayed in epochs 60 and 75 with a factor of 0.1. The weight decay is 0.0005. We adopt SGD to optimize the model parameters with a batch size of 128. We save the model with the highest robustness on the test set. For adversarial training, we adopt l_∞-norm PGD <cit.>, with a maximum perturbation size ϵ=8/255 for 10 iterations, and step length α=2/255 in each iteration. For each configuration, we report the mean and standard error under three repetitive experiments with different random seeds. Training with is efficient and does not incur significant costs, as demonstrated in Appendix <ref>.
Attacks. We mainly consider the l_∞-norm attacks to evaluate the model's robustness. The results under the l_2-norm attacks can be found in Appendix <ref>. We choose four representative attacks: PGD attack <cit.> with the cross-entropy loss under 20 and 100 steps (PGD-20 and PGD-100), PGD attack with the C&W loss <cit.> under 100 steps (CW-100), and AutoAttack <cit.> (AA).
§.§ Ablation Studies
Impact of Long-tailed Recognition Losses.
Our framework is general and can be combined with different long-tailed recognition losses. We select four state-of-the-art losses and combine each of them with our framework for evaluation: focal loss (FL) <cit.>, effective number loss (EN) <cit.>, label-distribution-aware margin loss (LDAM) <cit.>, and balanced softmax loss (BSL) <cit.>. For comparison, we also choose PGD-AT and replace the original cross-entropy loss with the above long-tailed recognition losses for model parameter optimization.
Table <ref> shows the comparison results with ResNet-18 and CIFAR-10-LT (UR=50). We obtain two observations. (1) The BSL loss can significantly outperform other long-tailed recognition losses for clean accuracy as well as robust accuracy against different attacks. So in the rest of our paper, we mainly adopt this choice for evaluations. (2) achieves better robustness than PGD-AT for whatever loss function is adopted to train the model. Furthermore, the clean accuracy is improved in most cases when is used. Therefore, we conclude that has strong generalization and applicability to different recognition losses.
Impact of AE Generation Re-balancing Losses.
We then compare the effectiveness of our proposed with various rebalancing methods adopted in the AE generation process. We replace the cross-entropy with four state-of-the-art rebalancing strategies: (1) ReWeight (RW) <cit.>; (2) ReWeight Smooth (RWS) <cit.>; (3) Effective Number Reweight (ENR) <cit.>; (4) Balanced Softmax ReWeight (BRW) <cit.>.
Table <ref> shows the comparison results on CIFAR-10-LT (UR=50) with the ResNet-18 model structure. We observe that the considered strategies can indeed increase the clean accuracy and robustness of the final models by re-balancing the generated AEs. Particularly, our outperforms other approaches, giving better robustness under different attacks. Furthermore, our dynamic effective number achieves better results than the original effective number implementation (ENR), in which the model adopts labels of the clean data to balance the AE generation, indicating the effectiveness of the proposed re-balancing method. This is attributed to our adaptive effective number based on the AE re-balancing generation, which allows the samples to equally learn features of both head and tail classes, and makes the model obtain more marginal benefit from the AEs. Furthermore, our (i.e., combining and ) achieves the best results under various attacks, which proves the effectiveness of the feature distribution alignment strategy.
§.§ Evaluation under Various Settings
Varying the Unbalanced Ratio. We first investigate the impact of the unbalanced ratio on training performance. Table <ref> shows the comparison results between PGD-AT and on the CIFAR-10-LT and ResNet-18 models. We have the following observations. (1) For both methods, increasing UR can reduce the model's clean accuracy and robustness. (2) outperforms PGD-AT under different values of UR and attacks, due to the re-balanced AE generation and feature embedding space.
Varying Datasets and Model Architectures. can well generalize to different datasets and models. Table <ref> compares PGD-AT and on CIFAR-100-LT with ResNet-18. Table <ref> compares two approaches on CIFAR-10-LT and CIFAR-100-LT with two model architectures. Similar to the above results, can bring additional performance improvement under various unbalance degrees and attacks for different configurations. More results and analysis can be found in Appendix <ref>.
§.§ Comparisons with RoBal
To the best of our knowledge, RoBal <cit.> is the only work specifically focusing on adversarial training on unbalanced datasets. As analyzed in Section <ref>, there are several limitations in RoBal. Besides, we find that the scale-invariant classification layer in RoBal can cause gradient vanishing when generating AEs with the cross-entropy loss. It is because the normalized weights of the classification layer and the normalized features greatly reduce the scale of the gradients, failing to generate powerful AEs. We propose a simple adaptive attack to break the gradient vanishing and invalidate RoBal: the adversary can multiply the output logits with a factor (10 in all cases) when generating AEs, and then use these AEs to attack RoBal. This can significantly decrease the robustness of the trained model.
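The adaptive attack itself amounts to a one-line change to the loss used inside PGD; the sketch below is ours, with the factor of 10 matching the text above.

    import torch.nn.functional as F

    def adaptive_attack_loss(model, x_adv, y, scale=10.0):
        # Multiplying the logits by a constant restores usable gradients through the
        # scale-invariant (cosine) classifier before the cross-entropy is taken.
        return F.cross_entropy(scale * model(x_adv), y)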
We perform experiments to compare RoBal and our framework from different perspectives, as shown in Tables <ref> and <ref>. We adopt the CIFAR-10-LT and CIFAR-100-LT with ResNet-18 settings, respectively. More results with different configurations can be found in Appendix <ref>. First, for PGD-based attacks, we show that the model robustness partially originates from gradient vanishing, and our adaptive attack can successfully break this defense. The CW attack and AA can easily break the gradient obfuscation in the classification layer, due to the different loss functions in the AE generation process. Second, comparing the results of RoBal and our framework under different values of UR, ours achieves higher clean accuracy and robustness, especially with larger UR. This indicates that ours is a better training strategy for highly unbalanced datasets.
Furthermore, Figure <ref> illustrates the robust accuracy of ResNet-18 on CIFAR-10-LT (UR=50) under different numbers of PGD attack steps. It proves that our outperforms RoBal under all attack budgets. Figure <ref> plots the accuracy of ResNet-18 on CIFAR-10-LT (UR=50) under the PGD-20 attack for each class. It indicates that our can achieve higher clean accuracy and robust accuracy on “body” classes, which will be explained as follows. More results can be found in Appendix <ref>.
Interpretation.
We perform an in-depth analysis of the comparisons between RoBal and . We show the distributions of the predicted labels of AEs during adversarial training for these two approaches in Figure <ref>. We choose the configurations of CIFAR-10-LT (UR=50) and ResNet-18. Results under other configurations can be found in Appendix <ref>. For RoBal, we observe that there are fewer AEs classified into body classes and more AEs classified into tail classes, indicating that RoBal makes the model pay more attention to head and tail classes while overlooking the body classes. In contrast, treats the body and tail classes more equally, and this is one reason to achieve better performance on the “body” classes. Furthermore, we plot the feature embedding space with the t-SNE tool in Appendix <ref> for feature-level comparison.
Discussions. First, from the results, our solution brings more benefit than RoBal does, as shown in Table <ref>; RoBal can sometimes even hurt the robustness and clean accuracy. Second, we prove that the robustness of RoBal partially depends on gradient obfuscation and can be defeated by an adaptive attack. Third, through the results of PGD-AT with the BSL loss, RoBal, and our framework, we find that, compared with improving the robustness, it is easier to enhance the clean accuracy, which is consistent with the conclusion from <cit.>. To explore the reason behind this phenomenon, we analyze the difficulty and challenges of adversarial training on long-tailed datasets in Appendix <ref>. All in all, improving robustness requires more data, and the amount of data in the tail classes is not enough to train a model with high robustness, which is a big challenge in adversarial training. How to further improve it is our future work.
§ CONCLUSION
In this paper, we propose , a new long-tailed adversarial training framework to improve the training performance on unbalanced datasets. We present two novel components, for promoting the model to generate balanced AEs, and a regularization term for forcing the model to assign larger feature spaces for tail classes. With these techniques, helps models achieve state-of-the-art results and outperforms existing solutions on different datasets and model structures. There still exists a robustness gap between the ideal result obtained in the balanced setting and our approach. In the future, we aim to keep reducing this gap with more advanced solutions, e.g., new robust network structures or training loss functions.
§ ACKNOWLEDGEMENT
This work is supported under the RIE2020 Industry Alignment Fund–Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). It is also supported in part by Singapore Ministry of Education (MOE) AcRF Tier 2 MOE-T2EP20121-0006 and AcRF Tier 1 RS02/19.
ieee_fullname
§ RELATED WORKS
§.§ Long-tailed Recognition
Long-tailed learning means training a machine learning model on a dataset that follows a long-tailed distribution. It has been applied to various scenarios including classification tasks <cit.>, object detection tasks <cit.> and segmentation tasks <cit.>. To alleviate the uneven distribution of data in the dataset, i.e., the majority of the data belong to the head classes, while the data belonging to the tail classes are insufficient, many methods have been proposed, which can be roughly divided into four categories: re-sampling, cost-sensitive learning, training phase decoupling and classifier designing.
The re-sampling methods can be divided into four classes, i.e., random under-sampling head classes <cit.>, random over-sampling tail classes <cit.>, class-balanced re-sampling <cit.> and scheme-oriented sampling <cit.>. These methods solve the unbalance problem by using sampling strategies to generate desired balanced distributions.
The cost-sensitive learning methods have two types of applications, i.e., class-level re-weighting <cit.> and class-level re-margining <cit.>. It assigns different weights to each class or adjust the minimal margin between the features and the classifier to balance the learning difficulties, achieving better performance under unbalanced data distributions.
The training phase decoupling is used to improve both the feature extractor and classifier. <cit.> find that training the feature extractor with instance-balanced re-sampling strategy and re-adjusting the classifier can significantly improve the accuracy in long-tailed recognition. <cit.> further observe that a balanced feature space benefits the long-tailed recognition.
The classifier designing aims to address the biases that the weight norms for head classes are larger than them of tail classes <cit.> in the traditional layers under long-tailed datasets. <cit.> propose a normalized classification layer to re-balance the weight norms for all classes. <cit.> also adopt a normalized classifier to defend against adversarial attacks. <cit.> propose a hierarchical classifier mapping the images into a class taxonomic tree structure. <cit.> propose a classifier with causal inference to better stabilize the gradients. Note that modifying classifier can indeed improve the performance of models on unbalanced data. However, we argue that it may introduce gradient obfuscation resulting in adaptive adversarial attacks. For more details, please refer to Section 4.
§.§ Adversarial Training
Adversarial training <cit.> is widely studied to defend against adversarial attacks. Its basic idea is to generate on-the-fly AEs to augment the training set. It can be formulated as the following min-max problem <cit.>:
min_θ max_{x^*} ℓ(x^*, y; θ)
where x^* is the training sample generated from a clean one x to maximize the loss function ℓ(·), y is the ground-truth label, θ is the model parameters. The first phase (maximization optimization) is to generate samples maximizing the loss function. The second stage (minimization optimization) is to optimize the model parameter θ to minimize the loss function under samples generated in phase one.
In previous works, there are three main research topics in adversarial training, i.e., improving the model robustness <cit.>, reducing the gap between clean accuracy and robustness <cit.> and addressing overfitting challenges <cit.>.
In this paper, we focus on adversarial training on datasets with long-tailed distributions. To our best knowledge, <cit.> present the first work dedicated to improving the accuracy as well as robustness to tail class during adversarial training. They design a new loss function and cosine classifier to achieve this. However, we experimentally demonstrate the unsatisfactory security and performance of this work in the Section 4.3, which motivates us to design more secure and satisfactory adversarial training methods tailored to datasets that obey long-tailed distributions.
§ ADVERSARIAL TRAINING ON UNBALANCED DATASET
To explore the effectiveness of adversarial training strategies proposed on balanced datasets, we compare recent adversarial training methods in Table <ref>. The results indicate that improving robustness on a balanced dataset is non-trivial, but these improvements do not carry over to unbalanced datasets. Furthermore, we find that the simplest and most straightforward method, PGD-AT, obtains the best results. On the other hand, methods adopting clean samples to train models, like TRADES and MART, will achieve lower clean accuracy, as the unbalanced data will harm the model's accuracy on the balanced test set.
In Figure <ref>, the t-SNE results prove that each class is assigned an area of a similar size in the feature space when the model is trained on balanced data. However, if the model is trained on unbalanced data, the areas for head classes expand and encroach on areas that should belong to tail classes, causing the area of tail features to shrink, which reflects the unbalanced feature embedding space. As a result, the performance and generalizability for tail classes decrease.
To alleviate the unbalance problem, we replace the cross-entropy loss in TRADES and MART with Balanced Softmax Loss (BSL). However, in our experiments, we find that BSL prevents the model from converging. The reason may be that the gradient directions of BSL and the KL divergence conflict with each other. So, in our paper, we mainly consider enhancing the PGD-AT method to better fit the unbalanced datasets.
§ STUDYING DATA HUNGER AND DATA UNBALANCE
In this part, we further examine the effects of data hunger and data unbalance on model robustness, in order to construct an experimental upper bound on the robustness of long-tailed adversarial training methods. To be specific, data hunger arises from the insufficient data in the body and tail classes, which is one consequence of long-tailed datasets; the other is the data unbalance itself. To exclusively study data hunger in a balanced setting, for a given unbalanced ratio, we sample the same total number of samples as the long-tailed dataset but form them into balanced small (BS) datasets. We then train models on these datasets with PGD-AT to learn an experimental upper bound, which is denoted as “PGD-AT (BS)” in our experiments. When we train models with PGD-AT (BS), the loss function used to optimize the models is the cross-entropy loss. On the other hand, when we train models on unbalanced datasets, the basic loss function used to optimize the models is BSL.
Comparing the results of models trained under balanced datasets and unbalanced datasets in Tables <ref>–<ref>, it is clear that the models trained on unbalanced datasets suffer from a bigger reduction when the number of training samples decreases, which means that the data unbalance harms the model's robustness to a larger degree than the data hunger. Training models on unbalanced data is more challenging than training models on small but balanced data under adversarial scenarios for different model structures and datasets. This is reasonable, because in long-tailed datasets there are fewer data in the tail classes, making the model unable to learn much information for such classes. Furthermore, compared with training a model on CIFAR-10, when training a model on a more complex dataset, such as CIFAR-100, the performance decrease is less than expected, which will be studied in our future work. On the other hand, the experimental results of PGD-AT (BS) can be seen as upper bounds for the models trained on same-size unbalanced datasets.
§ RESULTS UNDER L_2-NORM ATTACKS
In Table <ref>, we show the results of models under l_2-norm attacks. For the PGD attacks, the max perturbation size is ϵ=1.0, and the step length is α=0.2. We consider the 20-step attack, PGD-20, and the 100-step attack, PGD-100. For the C&W attack, we follow its official implementation. The results confirm that our framework can improve the model's robustness under different threat models. On the other hand, the gradient obfuscation is more serious under l_2-norm attacks, so our adaptive attacks achieve better results.
§ COMPARING WITH ROBAL
Varying Datasets.
Similar to our main paper, we illustrate the robust accuracy under the different numbers of PGD attack steps of ResNet-18 on CIFAR-100-LT (UR=10) in Figure <ref> in this section. The results prove that our outperforms RoBal under all attack budgets. In Figure <ref>, we plot the accuracy of ResNet-18 on CIFAR-100-LT (UR=10) under the PGD-20 attack for each class. The results indicate that our can achieve higher clean accuracy and robust accuracy on “body” classes, which is consistent with the conclusion in the main paper.
Varying Model Structures. To show the superiority of our framework on different model structures, we compare the results of RoBal and our framework on ResNet-18 and WideResNet-28-10, respectively. The results in Tables <ref> and <ref> prove that models trained with our framework outperform models trained with RoBal on both clean accuracy and robustness, which means ours is a better training strategy for different model structures.
Interpretation.
We choose the configurations of CIFAR-10-LT (UR=50) and ResNet-18. Then, we plot the feature embedding space with the t-SNE tool for models trained with different strategies in Figure <ref>. We first generate AEs with the PGD-20 attack on the test set and use t-SNE to plot the feature distribution for AEs. ResNet-18 is adopted as the model architecture. Figure <ref> is the feature result for PGD-AT over the balanced dataset CIFAR-10. We observe that samples from different classes are not quite overlapped with each other in the feature space, making them easier to be classified. In contrast, Figures <ref> and <ref> show the results for PGD-AT (BSL loss) and RoBal over the unbalanced dataset CIFAR-10. We observe that there are more samples from different classes entangled together in their feature embeddings, which can harm the model's robustness. Figure <ref> shows the results of our under the same unbalanced setting. We can see the feature space is more similar to the one obtained from the balanced dataset (Figure <ref>). This explains the effectiveness of in enhancing the model robustness and clean accuracy from the feature perspective.
§ AE PREDICTION DISTRIBUTION
In Figure <ref>, the distributions of the model's predictions for the generated AEs at different epochs are illustrated. From the plot, we reach the same conclusion as in our main paper: RoBal makes the model pay more attention to the head and tail classes and ignore the body classes. In contrast, our method helps the model value the body and tail classes equally, which further improves the model's robustness.
In Figure <ref>, we compare the AE distributions on CIFAR-100-LT (UR=10) when training models with RoBal and with our method, respectively. The results show that RoBal causes an unbalanced AE distribution when the number of classes increases: more samples are predicted as tail classes by the model. In contrast, our method keeps the AE distribution balanced and obtains better results.
§ TRAINING COST OVERHEAD OF OUR METHOD
We compare the training time overhead of our method with the PGD-AT method (with the BSL loss) and RoBal on a single V100 GPU card. The results are shown in Table <ref>. When we train a ResNet-18 on CIFAR-10-LT (UR=50), the training time overhead per epoch is about 8 seconds; on CIFAR-10-LT (UR=100), it is about 5 seconds. Hence, our method is efficient on long-tailed datasets and does not add much training time.
|
http://arxiv.org/abs/2307.05544v1 | 20230708211940 | Coupling high-overtone bulk acoustic wave resonators via superconducting qubits | [
"Wayne Crump",
"Alpo Välimaa",
"Mika A. Sillanpää"
] | quant-ph | [
"quant-ph",
"cond-mat.mes-hall"
] |
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
Department of Applied Physics, Aalto University, P.O. Box 15100, FI-00076 AALTO, Finland
In this work, we present a device consisting of two coupled transmon qubits, each of which is coupled to an independent high-overtone bulk acoustic wave resonator (HBAR). Both HBAR resonators support a plethora of acoustic modes, which can couple to their qubit near-resonantly. We first show qubit-qubit interaction in the multimode system, and then demonstrate quantum state transfer, in which an excitation is swapped from an HBAR mode of one qubit to an HBAR mode of the other qubit.
Coupling high-overtone bulk acoustic wave resonators via superconducting qubits
Mika A. Sillanpää
===============================================================================
Hybrid quantum systems seek to combine strengths and offset weaknesses of different quantum technologies in order to improve capability beyond that of any one technology. Superconducting circuits are one of the more mature quantum technologies at this stage and have been integrated with many other systems due to the relative ease in design and fabrication as well as good coherence times <cit.>.
Many different acoustic systems have been integrated with superconducting circuits such as nanomechanical oscillators <cit.>, phononic crystals <cit.>, bulk acoustic wave systems <cit.> and surface acoustic wave systems <cit.>. Acoustic resonators can offer great coherence properties <cit.> as well as smaller mode volumes due to the relation between wave velocity and wavelength, with the difficulty coming in coupling these resonators strongly with electromagnetic systems.
The strong coupling of acoustic modes with superconducting qubits has resulted in many experiments exploring the quantum nature of mechanical oscillations, with experiments demonstrating number splitting <cit.>, the creation of non-classical states in the acoustic mode <cit.>, Landau-Zener-Stückelberg interference <cit.>, and entanglement <cit.>. The ability to prepare acoustic resonators in arbitrary quantum states opens up the possibility of using them in applications such as quantum memories due to their coherence properties and insensitivity to electromagnetic noise.
High-overtone bulk acoustic wave resonators (HBAR) offer access to mechanical modes in the GHz regime, making them attractive for integration with superconducting qubits. The piezoelectric interaction enables coupling in the strong regime and allows their state to be controlled and read out using the qubit. The system has been implemented using 3D <cit.> and 2D <cit.> transmon architectures with part or all of the qubit capacitor directly patterned on the piezo layer of the HBAR. This was later improved in both cases by using a flip-chip design <cit.>, which has led to the current state of the art <cit.>. Experiments on these systems have demonstrated the creation of non-classical multiphonon states <cit.>, dispersive readout for a parity measurement of the mechanical mode <cit.>, and sideband control of the mechanical modes <cit.>.
Work thus far has focused on coupling of a qubit and a single HBAR device supporting a set of acoustic modes. In this work we couple two complete qubit-HBAR systems together via qubit-qubit interaction, and transfer excitations within the system, including between the HBAR modes. This demonstrates the possibility of integrating multiple HBAR devices into quantum circuits enabling the exploration of much larger and complex systems.
The system contains two qubits that are coupled to each other and are each individually coupled to a set of HBAR modes. The qubit-mode couplings can be described by the Jaynes-Cummings model, and the qubit-qubit coupling is capacitive and therefore expected to take the iSWAP form <cit.>. The system as a whole can then be described by the Hamiltonian:
H/ħ = ω_1/2σ_(z,1) + ω_2/2σ_(z,2) + J (σ_(+,1)σ_(-,2) + σ_(-,1)σ_(+,2))
+ ∑_m [ ω_(m,1)( a_(m,1)^† a_(m,1) + 1/2) + g_(m,1)(a_(m,1)^†σ_(-,1) + a_(m,1)σ_(+,1))]
+ ∑_n [ ω_(n,2)( a_(n,2)^† a_(n,2) + 1/2) + g_(n,2)(a_(n,2)^†σ_(-,2) + a_(n,2)σ_(+,2))] ,
where ω_1 and ω_2 are the qubit frequencies, J is the qubit-qubit coupling, ω_(m,1) and ω_(n,2) are the frequencies of the HBAR modes coupled to the respective qubits, and g_(m,1), g_(n,2) are the couplings to the HBAR modes. The σ_(i,j) are the Pauli operators of qubit j, and a_(m,j), a_(m,j)^† are the annihilation and creation operators of the HBAR modes.
In order to theoretically analyze the experiments described below, we determine the time evolution of the system using the Lindblad master equation. We include the qubits' decay and dephasing, as well as mechanical mode decay.
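As a concrete illustration of this modelling step, a minimal QuTiP sketch is given below. It truncates the model to a single HBAR mode per qubit, works in a frame rotating at the qubit-1 operating frequency, and uses the coherence times and couplings quoted later in the text; the frequency assumed for the qubit-2 HBAR mode and the simplified dephasing rates are our own assumptions, not values from the experiment.

import numpy as np
from qutip import tensor, qeye, destroy, basis, mesolve

N = 3                                        # Fock truncation per HBAR mode
# composite ordering: (qubit 1, qubit 2, HBAR mode 1, HBAR mode 2)
sm1 = tensor(destroy(2), qeye(2), qeye(N), qeye(N))
sm2 = tensor(qeye(2), destroy(2), qeye(N), qeye(N))
a1  = tensor(qeye(2), qeye(2), destroy(N), qeye(N))
a2  = tensor(qeye(2), qeye(2), qeye(N), destroy(N))

MHz = 2 * np.pi                              # angular frequency unit, time in microseconds
# detunings from the qubit-1 operating frequency (3.7778 GHz); d_m2 is an assumed placeholder
d_q2, d_m1, d_m2 = -110.5 * MHz, 10.7 * MHz, -99.0 * MHz
g1, g2, J = 0.5 * 2.55 * MHz, 0.5 * 2.85 * MHz, 0.5 * 16.7 * MHz   # g from the quoted 2g values

H = (d_q2 * sm2.dag() * sm2 + d_m1 * a1.dag() * a1 + d_m2 * a2.dag() * a2
     + J  * (sm1.dag() * sm2 + sm1 * sm2.dag())
     + g1 * (a1.dag() * sm1 + a1 * sm1.dag())
     + g2 * (a2.dag() * sm2 + a2 * sm2.dag()))

# dissipation: qubit decay and pure dephasing, mechanical decay (rates in 1/us)
T1q, T2q, T1m = (2.2, 2.41), (4.41, 1.02), (0.380, 0.320)
gphi = [max(1/T2 - 0.5/T1, 0.0) for T1, T2 in zip(T1q, T2q)]       # crude pure-dephasing estimate
c_ops = [np.sqrt(1/T1q[0]) * sm1, np.sqrt(1/T1q[1]) * sm2,
         np.sqrt(2*gphi[0]) * sm1.dag()*sm1, np.sqrt(2*gphi[1]) * sm2.dag()*sm2,
         np.sqrt(1/T1m[0]) * a1, np.sqrt(1/T1m[1]) * a2]

# qubit 1 initially excited (after a pi pulse), everything else in the ground state
psi0 = tensor(basis(2, 1), basis(2, 0), basis(N, 0), basis(N, 0))
tlist = np.linspace(0.0, 1.0, 401)
res = mesolve(H, psi0, tlist, c_ops,
              e_ops=[sm1.dag()*sm1, sm2.dag()*sm2, a1.dag()*a1, a2.dag()*a2])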
Figure <ref> shows an optical image of the device used for the experiments.
The device consists of a superconducting circuit with two qubits, each with its own readout, flux bias control and excitation lines. The qubits have a capacitive coupling to each other, as well as to the HBAR flip chip that covers both. The qubits have a round pad of around 80 μm in diameter on the bottom arm, which defines the capacitive coupling to the HBAR chip. The circuit was patterned using electron-beam lithography and metallised with evaporated aluminium. Double-angle evaporation was used to create the Josephson junctions for the qubits.
The HBAR flip chip consists of a 900 nm AlN piezo layer, a 250 μm sapphire layer and a 60 nm Mo layer in-between to act as a ground plane to enhance the coupling to the mechanical modes <cit.>. The HBAR was placed by hand onto the circuit chip and glued with standard epoxy.
The qubit frequencies can be tuned in the range 3.7-4.5 GHz and have readout resonator frequencies of 6.230 GHz and 6.013 GHz. The operating points of the qubits were chosen to maximise their coherence properties, and hence they operate at or close to their minimum frequencies, as shown in figure <ref>.
The bottom two plots of figure <ref> show two-tone measurements sweeping the qubit frequencies in the neighbourhood of their operating frequencies chosen for later experiments. The operating frequency of qubit 1 was set near its minimum at ω_1,OP/2π = 3.7778 GHz and qubit 2 at its minimum at ω_2,OP/2π = 3.6673 GHz. The many small anticrossings occur when a qubit is sweeping past an HBAR mode, while the larger anticrossing at 3.778 GHz seen in the data for qubit 2 corresponds to the qubit-qubit coupling. The spacing between HBAR modes (free spectral range, FSR) is around 22 MHz which corresponds well with the thickness of the HBAR sapphire layer. The dashed lines show the eigenvalues according to equation <ref>.
At the qubits' respective operating points, they had T_1 values of 2.2 μs and 2.41 μs, as well as T_2 values of 4.41 μs and 1.02 μs. Their respective 2g couplings to their HBAR modes were 2.55 MHz and 2.85 MHz, with the mechanical T_1 values being 380 ns and 320 ns. The system had a qubit-qubit 2g coupling of 16.7 MHz.
Figure <ref> shows a vacuum Rabi oscillation experiment where an excitation is swapping between an initially excited qubit and its coupled mechanical modes. In panels (a,b) qubit 2 is being controlled and measured and we see vacuum Rabi oscillations with the mechanical modes (red arrows) and also with the other qubit (blue arrows), corresponding with the anticrossings seen in figure <ref> bottom right. In figure <ref> (c,d) qubit 1 is controlled and experiences vacuum Rabi oscillations with its coupled mechanical modes following the anticrossings seen in figure <ref> bottom left. Since the flux is tuned in the positive direction, it first sweeps on resonance with the lower mode and then with the upper mode seen in figure <ref> bottom right.
If one looks closely, the vacuum Rabi oscillation fringes can be seen to be asymmetric, especially in figure <ref> (a). The source of this asymmetry is unknown, and it results in deviations from the theory at later simulation times. Some slight asymmetry could be generated for the nearest mode by including the effect of the π pulse specifically in the simulations, but this was not enough to reproduce the long tail of the fringes from the mode nearest the qubit operating point seen in figure <ref> (a), which extends very far, up to where qubit 1 is. It can also be seen in figure <ref> (a) that the vacuum Rabi oscillations with qubit 1 show these extended fringes on the right side as well. This behaviour may be related to the same phenomenon that is seen in the frequency domain, where at the avoided crossing the upper branch has less weight than the lower branch. It is possible that at least some of the asymmetry is caused by pulse distortion <cit.>.
The line cuts in figure <ref> (b) show a double-oscillation feature that occurs when qubit 2 is near the qubit 1 frequency. This is because the excitation undergoes Rabi oscillations with both the other qubit and the nearby acoustic modes at the same time, but on different time scales, hence the multiple-oscillation feature.
It is important to determine whether or not the qubits couple to the same set of acoustic modes. The issue is nontrivial since, on one hand, the qubits are in close proximity to each other and share the same HBAR chip, which would point to delocalized acoustic modes. On the other hand, one could argue that the electric field of either qubit should confine the HBAR mode only below its own electrode. We attempted to carry out finite-element simulations; however, a full 3-dimensional solution was beyond reach. In 2 dimensions and with a limited overtone number, we saw indications of a delocalized acoustic mode, with the study showing that moving the qubit coupling pad changed the strength of coupling to modes of different lateral profile. Experimentally, the issue cannot immediately be resolved in spectroscopy, since the HBAR spectral lines in figure <ref> are equal within measurement uncertainties, which is however expected based on the geometry. A time-domain experiment was done to confirm that the qubits couple to their individual sets of acoustic modes. This was done by swapping an excitation from qubit 1 to its acoustic mode at 3.788 GHz, then tuning qubit 1 away whilst tuning qubit 2 on resonance with this mode. The experiment found no response, so we conclude that the qubits indeed couple to separate modes, with any stray coupling being too weak to observe.
Finally, we demonstrate the swapping of an excitation through the degrees of freedom of the system. Figure <ref> shows the pulse sequence and measured data. The excitation swaps from the 3.7885 GHz HBAR mode coupled to qubit 1 all the way to various HBAR modes coupled to qubit 2. The resulting measurement data is similar to figure <ref> (a), as the last part of the pulse sequence is similar to that experiment; however, here the excitation has travelled from an acoustic mode coupled to the opposite qubit, which is why the initial excited-state population is reduced by decoherence.
Now that we have shown the ability to transfer excitations around the system, we would in principle be able to create an entangled state between arbitrary acoustic modes. However, due to the limited coherence of the system, we were not able to measure this in practice. One needs to measure the entangled modes simultaneously under a series of tomography pulses in order to produce the density matrix of the system (for example see <cit.>). This was not straightforward to do in our system as the acoustic modes are coupled to different qubits, meaning we need to readout the acoustic mode in single-shot to be able to correlate the results. We are limited both by our single-shot readout fidelity <60%, and by not being in the strong dispersive regime which requires acoustic T_1 times of 8 μs at our coupling magnitudes.
A possible simplification is to measure only an entangled state that does not occupy number states higher than |1⟩, so that one can swap the state back to the qubits and measure them. Due to the low readout fidelity, we have to use an ensemble measurement. There is a tomography pulse scheme to measure the two-qubit density matrix using an ensemble measurement <cit.>. This requires an appropriate two-qubit gate as part of the tomography pulse scheme, which in our case would be an iSWAP pulse. The calibration of this iSWAP pulse was problematic, with a fidelity of 55%, which was not sufficient for the two-qubit tomography. We estimate that a gate fidelity higher than about 70% is required to be able to perform the measurement.
In order to improve the fidelity of single- and two-qubit gates in the system, one would like the FSR to be larger than the coupling by a factor of at least 20, so that when the qubit sits between two modes it only interacts with them dispersively. The FSR should also be larger than the inverse pulse widths, so that the pulses do not excite nearby mechanical modes. Longer coherence times for both the qubits and the acoustics are important towards this end. The ideal solution would be the development of a tunable coupler, to be able to selectively couple to modes of interest, which is important for using HBARs in quantum information processing.
In conclusion we have fabricated and measured a sample consisting of two qubits each coupled to an individual set of high overtone bulk acoustic (HBAR) modes as well as to each other. An excitation was swapped from an HBAR mode coupled with one qubit, to an HBAR mode coupled to the other qubit. This demonstrates the possibility to integrate multiple HBAR devices into a superconducting circuit, where complex quantum states could be stored across these devices.
We would like to thank Mikael Kervinen for useful discussion. We acknowledge the facilities and technical support of Otaniemi research infrastructure for Micro and Nanotechnologies (OtaNano) that is part of the European Microkelvin Platform. This work was supported by the Academy of Finland (contracts 307757), by the European Research Council (101019712), and by the Wihuri Foundation. We acknowledge funding from the European Union's Horizon 2020 research and innovation program under the QuantERA II Programme (13352189). The work was performed as part of the Academy of Finland Centre of Excellence program (project 336810).
[1] A. A. Clerk, K. W. Lehnert, P. Bertet, J. R. Petta, and Y. Nakamura, Hybrid quantum systems with circuit quantum electrodynamics, Nature Physics 16, 257–267 (2020).
[2] C. A. Regal, J. D. Teufel, and K. W. Lehnert, Measuring nanomechanical motion with a microwave cavity interferometer, Nature Physics 4, 555–560 (2008).
[3] J. D. Teufel, D. Li, M. S. Allman, K. Cicak, A. J. Sirois, J. D. Whittaker, and R. W. Simmonds, Circuit cavity electromechanics in the strong-coupling regime, Nature 471, 204–208 (2011).
[4] A. D. O'Connell, M. Hofheinz, M. Ansmann, R. C. Bialczak, M. Lenander, E. Lucero, M. Neeley, D. Sank, H. Wang, M. Weides, J. Wenner, J. M. Martinis, and A. N. Cleland, Quantum ground state and single-phonon control of a mechanical resonator, Nature 464, 697–703 (2010).
[5] P. Arrangoiz-Arriola, E. A. Wollack, Z. Wang, M. Pechal, W. Jiang, T. P. McKenna, J. D. Witmer, R. Van Laer, and A. H. Safavi-Naeini, Resolving the energy levels of a nanomechanical oscillator, Nature 571, 537–540 (2019).
[6] Y. Chu, P. Kharel, W. H. Renninger, L. D. Burkhart, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, Quantum acoustics with superconducting qubits, Science 358, 199–202 (2017).
[7] M. Kervinen, I. Rissanen, and M. Sillanpää, Interfacing planar superconducting qubits with high overtone bulk acoustic phonons, Physical Review B 97, 205443 (2018).
[8] M. V. Gustafsson, T. Aref, A. F. Kockum, M. K. Ekström, G. Johansson, and P. Delsing, Propagating phonons coupled to an artificial atom, Science 346, 207–211 (2014).
[9] A. Noguchi, R. Yamazaki, Y. Tabuchi, and Y. Nakamura, Qubit-assisted transduction for a detection of surface acoustic waves near the quantum limit, Physical Review Letters 119, 180505 (2017).
[10] B. A. Moores, L. R. Sletten, J. J. Viennot, and K. W. Lehnert, Cavity quantum acoustic device in the multimode strong coupling regime, Physical Review Letters 120, 227701 (2018).
[11] A. Bienfait, K. J. Satzinger, Y. P. Zhong, H.-S. Chang, M.-H. Chou, C. R. Conner, É. Dumur, J. Grebel, G. A. Peairs, R. G. Povey, and A. N. Cleland, Phonon-mediated quantum state transfer and remote qubit entanglement, Science 364, 368–371 (2019).
[12] V. J. Gokhale, B. P. Downey, D. S. Katzer, N. Nepal, A. C. Lang, R. M. Stroud, and D. J. Meyer, Epitaxial bulk acoustic wave resonators as highly coherent multi-phonon sources for quantum acoustodynamics, Nature Communications 11, 2314 (2020).
[13] Y. Chu, P. Kharel, T. Yoon, L. Frunzio, P. T. Rakich, and R. J. Schoelkopf, Creation and control of multi-phonon Fock states in a bulk acoustic-wave resonator, Nature 563, 666–670 (2018).
[14] M. Kervinen, J. E. Ramírez-Muñoz, A. Välimaa, and M. A. Sillanpää, Landau-Zener-Stückelberg interference in a multimode electromechanical system in the quantum regime, Physical Review Letters 123, 240401 (2019).
[15] E. A. Wollack, A. Y. Cleland, R. G. Gruenke, Z. Wang, P. Arrangoiz-Arriola, and A. H. Safavi-Naeini, Quantum state preparation and tomography of entangled mechanical resonators, Nature 604, 463–467 (2022).
[16] M. Kervinen, A. Välimaa, J. E. Ramírez-Muñoz, and M. A. Sillanpää, Sideband control of a multimode quantum bulk acoustic system, Physical Review Applied 14, 054023 (2020).
[17] U. von Lüpke, Y. Yang, M. Bild, L. Michaud, M. Fadel, and Y. Chu, Parity measurement in the strong dispersive regime of circuit quantum acoustodynamics, Nature Physics 18, 794–799 (2022).
[18] S. Kwon, A. Tomonaga, G. Lakshmi Bhai, S. J. Devitt, and J.-S. Tsai, Gate-based superconducting quantum computing, Journal of Applied Physics 129, 041102 (2021).
[19] M. A. Rol, L. Ciorciaro, F. K. Malinowski, B. M. Tarasinski, R. E. Sagastizabal, C. C. Bultink, Y. Salathe, N. Haandbaek, J. Sedivy, and L. DiCarlo, Time-domain characterization and correction of on-chip distortion of control pulses in a quantum processor, Applied Physics Letters 116, 054001 (2020).
[20] M. Li, G. Xue, X. Tan, Q. Liu, K. Dai, K. Zhang, H. Yu, and Y. Yu, Two-qubit state tomography with ensemble average in coupled superconducting qubits, Applied Physics Letters 110, 132602 (2017).
|
http://arxiv.org/abs/2307.10209v1 | 20230714131001 | On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks | [
"Hafsa Bousbiat",
"Yassine Himeur",
"Abbes Amira",
"Wathiq Mansoor"
] | cs.CR | [
"cs.CR",
"cs.LG"
] |
On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
Hafsa Bousbiat
University of Klagenfurt
Klagenfurt, Austria
[email protected]
Yassine Himeur
University of Dubai
Dubai, UAE
[email protected]
Abbes Amira
University of Sharjah
Sharjah, UAE
[email protected]
Wathiq Mansoor
University of Dubai
Dubai, UAE
[email protected]
August 12, 2023
==========================================================================================================================================================================================================================================================================================================
Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as load disaggregation algorithms, are fundamental approaches for effective energy management. Despite the success of deep models in load disaggregation, they face various challenges, particularly those pertaining to privacy and security. This paper investigates the sensitivity of prominent deep NILM baselines to adversarial attacks, which have proven to be a significant threat in domains such as computer vision and speech recognition.
Adversarial attacks entail the introduction of imperceptible noise into the input data with the aim of misleading the neural network into generating erroneous outputs. We investigate the Fast Gradient Sign Method (FGSM), a well-known adversarial attack, to perturb the input sequences fed into two commonly employed CNN-based NILM baselines: the Sequence-to-Sequence (S2S) and Sequence-to-Point (S2P) models. Our findings provide compelling evidence for the vulnerability of these models, particularly the S2P model which exhibits an average decline of 20% in the F1-score even with small amounts of noise. Such weakness has the potential to generate profound implications for energy management systems in residential and industrial sectors reliant on NILM models.
Non-intrusive Load Monitoring, Load Disaggregation, Adversarial Attacks, Smart Home Energy Management
§ INTRODUCTION
Providing appliance-level feedback has been widely acknowledged as a promising approach for achieving significant energy savings <cit.>. This strategy can lead to energy savings of up to 12% in the residential sector <cit.>. It allows individuals to make informed decisions about their daily interaction with in-home electrical devices. In this regard, Non-intrusive Load Monitoring (NILM) has emerged as a promising technology that utilizes advanced algorithms to infer the power consumption of individual appliances based on aggregate household measurements <cit.>. With minimal hardware requirements, NILM approaches are an attractive solution for energy management and sustainability efforts. More precisely, NILM algorithms rely only on a single metering point reporting the total power consumption of the household to identify operating appliances <cit.>, thus avoiding complex hardware deployment and maintenance.
Exceptional performance enhancements have been observed in NILM scholarship in recent years. In particular, the adoption of deep models for load disaggregation by Kelly et al. <cit.> in 2015 was a turning point <cit.>. Compared to classical statistical models, deep neural networks have shown tremendous potential in enhancing the accuracy of energy consumption estimates <cit.>. A main advantage of these models is the automatic feature extraction that provides an end-to-end solution.
Nonetheless, the robustness of these models against adversarial attacks has become a growing concern in recent years <cit.>.
Adversarial attacks can pose a significant threat to NILM's accuracy and reliability by injecting small amounts of noise into the input data to deceive the neural network and generate erroneous outputs <cit.>. In the case of NILM, adversarial attacks can cause wrong energy consumption estimates <cit.>, privacy breaches, instability in home energy management systems <cit.>, and even safety risks <cit.> when used in critical infrastructures.
This paper aims to address this gap by evaluating the sensitivity of two popular NILM baselines to adversarial attacks.
The remainder of this paper is organized as follows: Section <ref> discusses related work and recent literature providing insights into the state-of-the-art NILM models and adversarial attacks. Section <ref> describes the different steps followed to perform the attack, including the generation of adversarial examples and their evaluation on the target models. Section <ref> describes the experimental setup followed by a presentation the results of the experimental setup in Section <ref>. Finally, Section <ref> concludes the paper by summarizing the main contributions of this research and discussing potential future directions in this area.
§ RELATED WORK
Non-intrusive load monitoring (NILM) has gained significant attention in the past five years, with neural networks being widely acknowledged as state-of-the-art models <cit.>. Popular deep baselines for NILM, such as Sequence-to-Sequence and Sequence-to-Point, are based on convolutional layers <cit.>. However, other contributions based on LSTM <cit.> have also demonstrated the robustness of these layers in identifying appliance usage. Advanced models, such as the BERT model <cit.> and the UNET model <cit.>, have also been suggested in the literature.
Despite the success of these models in estimating the power consumption of household appliances under different conditions <cit.>, they suffer from privacy and security issues <cit.>. They can reveal sensitive information about individuals' daily routines, as illustrated in <cit.>, leading to concerns from the consumer's side.
The security of NILM systems is thus a main concern when deep models are adopted. Yet, it has received little attention. A compact review of adversarial attacks in smart grid scenarios, considering deep neural networks, was provided in <cit.>. The review revealed that, despite the potential for extreme harm, these attacks have received very little attention from the research community. Two types of attacks can be performed: white-box and black-box attacks.
White-box attacks assume full knowledge of the attacked model <cit.> and are gaining interest in related work. They were notably considered in <cit.>, targeting IoT devices with residential smart meters as a case study. The obtained results demonstrated that adversarial samples are often indistinguishable from real samples, leading to high success rates for the performed attack. This finding illustrates that adversarial attacks represent a major threat to energy management systems (EMS) in both residential and industrial setups, notably in sensitive sectors, as they can easily be used to fool demand response programs. However, this attack model assumes complete knowledge of the model, which is hard to obtain in real scenarios.
In contrast, black-box attacks assume no knowledge of the model or the training data <cit.>. The main advantage of adopting this threat model, compared to the previous one, is its practicality. The significant influence of this attack on energy analytics and load disaggregation was explored in <cit.>. The findings revealed that black-box attacks can be performed even in cases where the adversary has limited knowledge of the target system.
§ METHODOLOGY
The threat model aims to fool neural networks into producing inaccurate estimates of power consumption, thereby disrupting the energy management systems of smart buildings.
As the attack involves adding perturbations to the input data, the attacker's primary challenge would be to determine the optimal amount of noise to introduce, maximizing the success rate while remaining undetected by data poisoning detection methods. The current paper's methodological design aims to evaluate this aspect while also assessing whether the attack's effect is similar for two different disaggregation models, given appliances with heterogeneous power consumption magnitudes.
The steps involved in the methodology are depicted in Figure <ref>. Firstly, the models are trained on the original data and evaluated on a testing data set, without attack. Secondly, the gradient of the pre-trained model is used to generate perturbed inputs, which are then fed back into the network to produce the corresponding predictions.
§.§ Models
The Seq2Point and Seq2Seq <cit.> models for NILM have gained significant attention due to their competitive performance in estimating the power consumption of household appliances. These models are based on convolutional layers and have similar structures, differing only in the shape of their output.
The Seq2Point and Seq2Seq models are a solid choice as representative load disaggregation models, given their competitive performance, wide acknowledgement, and adoption in the research community. To implement these models, we leveraged the code available from NILMtk <cit.>, a popular open-source toolkit for NILM research. The training process was conducted for 150 epochs, using an Adam optimizer with a learning rate of 10^-4.
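For concreteness, a minimal PyTorch sketch of a Seq2Point-style network is shown below. The layer sizes follow a commonly used Seq2Point configuration; the exact hyperparameters of the NILMtk implementation we rely on may differ, so the numbers here should be read as assumptions, not as the implementation used in the experiments.

import torch
import torch.nn as nn

class Seq2Point(nn.Module):
    """Maps a window of aggregate power to the appliance power at the window midpoint."""
    def __init__(self, window=99):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 30, kernel_size=10, padding='same'), nn.ReLU(),
            nn.Conv1d(30, 30, kernel_size=8, padding='same'), nn.ReLU(),
            nn.Conv1d(30, 40, kernel_size=6, padding='same'), nn.ReLU(),
            nn.Conv1d(40, 50, kernel_size=5, padding='same'), nn.ReLU(),
            nn.Conv1d(50, 50, kernel_size=5, padding='same'), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(50 * window, 1024), nn.ReLU(), nn.Linear(1024, 1))

    def forward(self, x):              # x: (batch, 1, window) of aggregate power
        return self.head(self.conv(x))

model = Seq2Point()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # trained for 150 epochs in the text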
§.§ Attack
We propose to assess the robustness of the baseline models against a widely recognized white-box adversarial attack, namely the Fast Gradient Sign Method (FGSM). This attack leverages the gradient of the model to generate an adversarial sample of the input. For time series data, this technique involves computing the gradient of the loss function ℒ with respect to the input sequence, in order to create a new sequence that increases the loss. This newly generated sequence is referred to as the adversarial sample x_adv, where ϵ is a multiplier that keeps the added perturbations small.
x_adv = x + ϵ·sign(∇_x ℒ(θ, x, y))
The FGSM has an important feature, namely that the gradient calculation is performed with respect to each input sequence, enabling the attacker to identify how each value of the input sequence contributes to the loss function, with the ultimate goal of misleading the model.
It is worth noting that the FGSM aims to mislead an already pre-trained model, without affecting its parameters. Hence, it represents a powerful tool to assess the model's robustness against adversarial attacks, as it provides insight into the model's vulnerabilities without affecting its training process.
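A minimal PyTorch sketch of the FGSM step above, applied to an input window of aggregate power, is given below; the regression loss and the clamping to non-negative power values are our assumptions and are not details of the experimental implementation.

import torch

def fgsm_attack(model, x, y_true, eps):
    """Generate an adversarial version of the input sequence x for a pre-trained NILM model."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.mse_loss(model(x), y_true)   # assumed regression loss
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # move along the sign of the input gradient
    return x_adv.clamp(min=0.0).detach()     # keep power readings physically plausible (assumption)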
§ EVALUATION
§.§ Data
The present study utilizes data from UKDALE, a dataset recorded in the UK comprising five buildings, with the first building having a recording period longer than three years. The pre-trained models are constructed using data from the first building, where a four-month recording period with a sampling interval of 30 seconds is considered. The study further considers four appliances with varying consumption patterns: the washing machine, the kettle, the microwave, and the fridge. For testing purposes, a period of one month is considered for each appliance.
§.§ Metrics
For a fair evaluation of the models as well as of the effect of the FGSM attack, three metrics are used: the Mean Absolute Error (MAE), the F1-score, and the Normalised Disaggregation Error (NDE).
The MAE for an appliance i is described as follows:
MAE^(i) = 1/N·∑_t=0^N-1 |ŷ_t^(i)-y_t^(i)|
where y_t is the ground truth value of the power consumption, ŷ_t is the predicted power consumption, and N represents the number of samples. In addition to the MAE, the NDE is considered, as recommended in recent literature <cit.>:
NDE^(i) = √(∑_t=0^N-1(ŷ_t^(i)-y_t^(i))^2/∑_t=0^N-1(y_t^(i))^2)
Furthermore, the F1-score is used to assess the performance of the model on state estimation of different appliances, defined as follows:
F1-score = 2 · Precision · Recall/(Precision + Recall)
Where the Precision = TP/(TP+FP), Recall = TP/(TP+FN). A threshold of 10 watts was used to derive states of the fridge and a threshold of 500 watts was used in the case of the remaining appliances.
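The three metrics can be computed directly from the predicted and ground-truth power traces, for instance as in the following NumPy sketch (the 10 W / 500 W on-thresholds follow the text):

import numpy as np

def nilm_metrics(y_true, y_pred, on_threshold):
    mae = np.mean(np.abs(y_pred - y_true))                                  # MAE
    nde = np.sqrt(np.sum((y_pred - y_true) ** 2) / np.sum(y_true ** 2))     # NDE
    s_true, s_pred = y_true > on_threshold, y_pred > on_threshold           # ON/OFF states
    tp = np.sum(s_true & s_pred)
    fp = np.sum(~s_true & s_pred)
    fn = np.sum(s_true & ~s_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall    = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return mae, nde, f1

# e.g. for the fridge: nilm_metrics(y_true, y_pred, on_threshold=10)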
§ RESULTS & DISCUSSION
The results obtained for both baseline models are presented in Table <ref>. We report on four scenarios: the performance of the model without attack and the performance of the model under attack with three different values of ϵ (0.01, 0.10, and 0.25). For each of the considered scenarios, we report on the three metrics for the four appliances considered.
When no attack is performed, both the Seq2Point and Seq2Seq models yield equivalent performance on the three metrics, with the Seq2Point model providing slightly better results for all appliances and all metrics. This is reflected by a minimum F1-score of 67% and a maximum MAE of 12.4 watts. The observation aligns well with existing literature <cit.> suggesting the superiority of the Seq2Point model.
The results show that as the epsilon value increases, the performance of all appliances deteriorates with the Kettle performing the best and the Fridge performing the worst, for both models. At an epsilon value of 0.01, all appliances show a slight decrease in performance. Nonetheless, starting from an epsilon value of 0.10, the MAE of all appliances increases significantly, with the Washing Machine appliance showing the highest increase. Meanwhile, F1-score decreases for all appliances, with the Microwave appliance showing the highest decrease. At an epsilon value of 0.25, the MAE, F1-score, and NDE values continue to deteriorate for all appliances, with the Washing Machine appliance showing the highest MAE and the Fridge appliance showing the highest NDE.
When comparing the effect of the added noise on the Seq2Seq and Seq2Point models, the Seq2Seq model shows more robustness against the attack. On one side, Seq2Point demonstrates higher sensitivity to the added noise, with a decrease in all metrics by a factor of three for frequently used appliances (i.e., the fridge, the microwave, and the kettle) and a decrease by a factor of ten for the washing machine, one of the major loads, at ϵ=0.10. On the other side, Seq2Seq demonstrates moderate sensitivity, with a decrease by a factor of only 1.5 for frequently used appliances and a factor of 5 for the washing machine at the same value of ϵ. It can be seen that even if high values of ϵ lead to a slight deterioration in the MAE, the Seq2Seq model still yields approximately the same F1-score values, unlike the Seq2Point model, where the deterioration is recorded in all metrics.
Figure <ref> illustrates one generated activation of the washing machine for the Seq2Point model. It can be observed that the added noise first shows up in the OFF states through small fluctuations on the order of hundreds of watts, starting from ϵ=0.10. Furthermore, the figure shows that higher values of ϵ lead to an attenuated signal or undetected states (e.g., the second peak in the figure). Figure <ref> illustrates the same activation generated with the Seq2Seq model. It is clear that this model shows more robustness, as it is capable of generating a stable signal even with higher values of added noise, with only a slight deterioration in the estimated values.
The adversarial attack on NILM models illustrated in this study can easily be leveraged in smart homes to mislead the neural networks. This is particularly relevant in the case of home automation functions <cit.>. For example, an attack can be performed on light control systems that are based on NILM and lead to considerable losses if it goes unnoticed.
It can also lead to the incorrect estimation of load flexibility in the system. If a device is identified as "off" although it is actually "on," it will not be considered for automated control, which will affect the ability to shave demand peaks when a large number of flexible loads are attacked. For instance, if the system estimates that 75 kW of demand during peak hours in a day stem from the heating system in winter time and the NILM algorithm misclassifies the state of this appliance, then the peak reduction potential is lost. Moreover, the attack can be leveraged at specific times to mislead occupancy detection systems and easily lead to erroneous predictions that can be used by malicious parties.
§ CONCLUSION
This paper assessed the robustness of the Seq2Seq and Seq2Point models against adversarial attacks, considering the FGSM attack. Even with small values of ϵ, the obtained results reveal a high sensitivity of the Seq2Point model, while demonstrating a moderate sensitivity of the Seq2Seq model. This study confirms findings from related work about the superiority of Seq2Point in normal scenarios, but highlights that Seq2Seq shows more resilience to injected noise. Adversarial attacks can have a destructive effect on EMS systems and different energy services. In future work, we aim to reinforce the robustness of these models to such attacks using different strategies, including augmenting the training data with adversarial samples.
|
http://arxiv.org/abs/2307.05110v1 | 20230711083712 | Gate voltage induced injection and shift currents in AA- and AB-stacked bilayer graphene | [
"Ze Zheng",
"Kainan Chang",
"Jin Luo Cheng"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall",
"physics.optics"
] |
GPL Photonics Laboratory, State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin, 130033 P. R. China
University of Chinese Academy of Sciences, Beijing 100039, China.
[email protected]
GPL Photonics Laboratory, State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin, 130033 P. R. China
University of Chinese Academy of Sciences, Beijing 100039, China.
[email protected]
GPL Photonics Laboratory, State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, Jilin, 130033 P. R. China
University of Chinese Academy of Sciences, Beijing 100039, China.
Generating photogalvanic effects in centrosymmetric materials can provide new opportunities for developing passive photodetectors and energy harvesting devices. In this work, we investigate the photogalvanic effects in centrosymmetric two-dimensional materials, AA- and AB-stacked bilayer graphene, by applying an external gate
voltage to break the symmetry. Using a tight-binding model to describe the electronic states, the injection coefficients for circular photogalvanic effects and shift conductivities for linear photogalvanic effects are calculated for both materials with light wavelengths ranging from THz to visible.
We find that gate voltage induced photogalvanic effects can be very significant for AB-stacked bilayer graphene,
generating a maximal dc current on the order of mA for a 1 μm wide sample illuminated at a light intensity of 0.1 GW/cm^2, which is determined by the optical transitions around the band gap and the van Hove singularity points.
Although such effects in AA-stacked bilayer graphene are about two orders of magnitude smaller than those in AB-stacked bilayer graphene, their spectrum is interestingly limited to a very narrow photon energy window, which is associated with the interlayer coupling strength.
A detailed analysis of the light polarization dependence is also performed. The gate voltage and chemical potential can be used to effectively control the photogalvanic effects.
Gate voltage induced injection and shift currents in AA- and AB-stacked bilayer graphene
Jin Luo Cheng
August 12, 2023
========================================================================================
§ INTRODUCTION
Photogalvanic effects are nonlinear optical responses that
generate direct currents in homogeneous materials, and such a passive
process is considered as a direct and powerful photoelectric
conversion method <cit.>.
The widely discussed photogalvanic effects can be induced by the one-color injection current and shift current, which are second order nonlinear optical processes occurring in noncentrosymmetric materials, or the two-color coherent current injection processes, which are third (for “1+2” process) <cit.> or fifth (for “2+3” process) <cit.> order nonlinear optical processes and are not sensitive to the inversion symmetry of materials.
According to the response to the light polarization, second order photogalvanic effects are also
phenomenologically divided into circularly
polarized
photogalvanic effect and linearly polarized photogalvanic effect, where the
latter is light phase insensitive and can be used for solar energy
harvest without forming p-n junctions to surpass the Shockley-Queisser
limit <cit.>.
One of
the research topics in this field is to find materials with significant
photogalvanic effects at a specific frequency range, and several studies have been conducted on various new materials, including 2D
materials <cit.>, Dirac or Weyl semimetals <cit.>, ferroelectric materials <cit.>, and so on.
As the first two-dimension material, graphene is a potential
candidate for realizing new functionality in optoelectronic devices
due to its superior optical and electronic properties exceeding many
traditional bulk materials. However, because of its centrosymmetric
crystal structure, one-color injection and shift currents vanish in
many few-layer graphene as well as their nanostructures, while
two-color coherent control has been well studied in both theories <cit.>
and experiments <cit.>. It is still meaningful to generate one-color injection and shift currents in centrosymmetric graphene based
structure, in order to utilize its extraordinary physical
properties. The generation of second order response can be realized
by forming an asymmetric interface or edge <cit.>, applying an external
electric field <cit.>, forming surface curvature <cit.>, considering the spatial
variation of the light field <cit.>, and stacking graphene layers into asymmetric structure <cit.>.
Wei et al. <cit.> studied the gate field induced injection and
shift currents in zigzag graphene nanoribbons, and found
that the subband and edge states determine the generated currents
with an effective modulation of their amplitudes by the ribbon width and the
static field strength.
Xiong et al. <cit.> investigated the light polarization dependence
of in-plane shift current in an AB-stacked bilayer
graphene (AB-BG) with applying a gate voltage, and their results
clearly illustrated a sizeable photocurrent at a given light
frequency; however, neither the spectra
of the shift conductivity nor the injection current were presented.
By stacking two layers of monolayer graphene with a relative rotation to form a twisted bilayer graphene, a large shift current can be produced due to a huge density of states when the flat band is formed at magic angles <cit.>. Surprisingly, whether the gate voltage can generate
photogalvanic effect in AA-stacked bilayer graphene (AA-BG) is still not clear.
In this paper, we systematically study the spectra of the
injection coefficients and shift conductivities of AA-BG and AB-BG under applying a gate voltage to break the
inversion symmetry, as well as their dependence on the gate voltage
and chemical potential.
Their electronic states are described
by widely adopted tight-binding model
formed by the carbon 2p_z orbitals
<cit.>,
and the expressions for injection coefficient and shift conductivity
are employed from Ref. [].
Our results confirm the feasibilities of generating photogalvanic effects in AA-BG and AB-BG. Particularly, the response of AA-BG distributes in a very narrow spectral region,
while a maximal current in the order of mA can be generated in AB-BG for a 1 μm wide sample at light intensity of 0.1 GW/cm^2.
This paper is organized as follows.
In Sec. 2 we introduce the
tight-binding models for the AA-BG and AB-BG under applying a gate voltage, and give the expressions for the
injection coefficient and shift conductivity.
In Sec. 3 we present the spectra of injection coefficient and shift
conductivity for AA-BG and AB-BG,
and discuss the effects of the gate voltage and chemical potential.
We conclude in Sec. 4.
§ MODELS
§.§ Hamiltonian
We consider the tight-binding Hamiltonian for the AA-BG and AB-BG, whose crystal structures are
illustrated in Fig. <ref> (a) and (b), respectively.
These two structures have the same primitive lattice vectors a_1=a_0(1/2x̂ + √(3)/2ŷ)
and a_2=a_0(-1/2x̂ + √(3)/2ŷ) with the lattice constant
a_0=2.46 Å.
The atomic positions in the unit cell are taken as
τ_A= 0, τ_B=( a_1+ a_2)/3,
τ_A'=cẑ, and τ_B'=τ_B+cẑ for
AA-BG, and τ_A= 0, τ_B=( a_1+ a_2)/3, τ_A'=τ_B+cẑ, and τ_B'=2τ_B+cẑ for AB-BG, where
c=3.35 Å is the interlayer distance.
The primitive reciprocal lattice vectors are b_1=2π/a_0(x̂ + 1/√(3)ŷ) and b_2=2π/a_0(-x̂+1/√(3)ŷ).
The electronic states are described by a tight-binding model employing carbon 2p_z orbitals.
The unperturbed Hamiltonian <cit.> for
AA-BG is
H^ AA_ k=(
[ -Δ γ_0g_ k γ_1 γ_3g_
k; γ_0g^*_ k -Δ γ_3g^*_ k γ_1; γ_1 γ_3g_ k Δ γ_0g_ k; γ_3g^*_ k γ_1 γ_0g^*_ k Δ; ]) .
Here k is the electron wavevector, and g_
k=1+e^-i k· a_1+e^-i k· a_2. The
hopping parameters are illustrated in Fig. <ref> (a) with
γ_0=2.569 eV, γ_1=0.361 eV, and
γ_3=-0.032 eV.
The on-site energies ±Δ are induced by
a gate voltage.
The Hamiltonian for AB-BG is given from Ref. as
H^ AB_ k=(
[ -Δ-Δ'/2 γ_0^' g_ k γ_4^'
g_
k γ_3^' g^*_ k; γ_0^' g^*_ k -Δ+Δ'/2 γ_1^' γ_4^' g_ k; γ_4^' g^*_ k γ_1^' Δ+Δ'/2 γ_0^' g_ k; γ_3^' g_ k γ_4^' g^*_
k γ_0^' g^*_ k Δ-Δ'/2; ]) ,
where the hopping parameters (see Fig. <ref> (b)) are γ_0^'=-3.16 eV,
γ_1^'=0.381 eV, γ_3^'=-0.38 eV, and γ_4^'=0.14 eV.
The on-site
potential difference Δ^'=0.022 eV is induced by the
asymmetric environment of A, B atoms in the crystal structure.
The eigenstates C_n k and eigenenergies ϵ_n k at the nth band
are obtained by diagonalizing the Hamiltonian through
H_ kC_n k=ϵ_n kC_n k .
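The numerical diagonalization in this equation can be carried out on a k-grid with a few lines of NumPy; the sketch below builds H^AA_k from Eq. (<ref>) in the assumed (A, B, A', B') ordering and returns its eigenvalues and eigenvectors, using the parameter values quoted above and Δ=0.4 eV as an example.

import numpy as np

a0 = 2.46                                  # lattice constant (angstrom)
a1 = a0 * np.array([0.5,  np.sqrt(3) / 2])
a2 = a0 * np.array([-0.5, np.sqrt(3) / 2])
g0, g1, g3, Delta = 2.569, 0.361, -0.032, 0.4    # hopping parameters and gate potential (eV)

def h_aa(k):
    """AA-stacked bilayer Hamiltonian of Eq. (<ref>)."""
    g = 1 + np.exp(-1j * k @ a1) + np.exp(-1j * k @ a2)
    return np.array([[-Delta,          g0 * g,          g1,              g3 * g],
                     [g0 * np.conj(g), -Delta,          g3 * np.conj(g), g1],
                     [g1,              g3 * g,          Delta,           g0 * g],
                     [g3 * np.conj(g), g1,              g0 * np.conj(g), Delta]])

def bands(k):
    eps, C = np.linalg.eigh(h_aa(k))       # eigenenergies (ascending) and eigenvector columns
    return eps, C

# example: band energies at a Dirac point K = (4*pi/(3*a0), 0), where |g_k| = 0
K = np.array([4 * np.pi / (3 * a0), 0.0])
print(bands(K)[0])                         # gives +-sqrt(Delta**2 + g1**2), each twice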
The calculation of the optical responses involves the position operator r_ k and velocity
operator v_ k, which are
r_ k = i∇_ k +
([ τ_A 0 0 0; 0 τ_B 0 0; 0 0 τ_A' 0; 0 0 0 τ_B' ]) , v_ k=1/iħ[ r_ k, H_ k] ,
respectively.
The matrix elements of the position operator give the Berry
connections ξ_nm k by
ξ_nm k=C^†_n k r_ kC_m k ,
and those of the velocity operator are calculated as v_nm k=C^†_n k v_ kC_m k.
Due to the derivative with respect to the wavevector k, a direct calculation of ξ_nm k from Eq. (<ref>) requires that the wavefunction C_n k is a
smooth function of k.
However, this becomes quite difficult in numerical calculation because of
the arbitrary phase of a numerical wavefunction. Practically, the off-diagonal
terms of ξ_nm k can be also calculated from the velocity
operator as
r_nm k=ξ_nm k= v_nm k/(iω_nm k) for n≠ m, and r_nm k=0 for n=m ,
with ħω_nm k=ϵ_n k-ϵ_m k.
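In practice these interband matrix elements can be evaluated directly from the diagonalized Hamiltonian. The sketch below (building on the h_aa and bands helpers from the earlier sketch) obtains ħ v^a_nm from a finite-difference derivative of H_k and then forms r^a_nm = v^a_nm/(iω_nm k); for simplicity it neglects the intracell-position (τ) correction to the velocity operator, which is an assumption of this illustration.

import numpy as np

def position_matrix_elements(k, dk=1e-4, axis=0):
    """Off-diagonal r^a_nm = v^a_nm / (i*omega_nm) in the band basis (a = x for axis=0, y for axis=1)."""
    eps, C = bands(k)
    dkvec = np.zeros(2); dkvec[axis] = dk
    dH = (h_aa(k + dkvec) - h_aa(k - dkvec)) / (2 * dk)   # dH/dk^a, i.e. hbar * v^a in the orbital basis
    hv = C.conj().T @ dH @ C                              # hbar * v^a_nm in the band basis (eV*angstrom)
    w = eps[:, None] - eps[None, :]                       # hbar * omega_nm (eV)
    r = np.zeros_like(hv)
    nz = np.abs(w) > 1e-8
    r[nz] = hv[nz] / (1j * w[nz])                         # interband position matrix elements (angstrom)
    return r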
The diagonal terms ξ^a_nn k usually appear in the generalized derivative of (r^c_ k)_;nmk^a=∂ r_nm k^c/∂ k^a-i(ξ^a_nn k-ξ^a_mm k)r^c_nm k, which is calculated alternatively <cit.> by
(r^c_ k)_;nmk^a= [ -ir^c_nm k V_mn k^a+ħ M^ca_nm k+i[ r^a_ k, v^c_ k]_nm ]/(iω_nm k) ,
with V_mn k^a=v^a_mm
k-v^a_nn k=∂ω_mn k/∂ k^a and
M^ca_nm k=C^†_n k1/iħ[r^a_ k,v^c_ k]C_m k ,
where the Roman letters a,c indicate the Cartesian directions
x,y,z. Note that the electron wavevector has only in-plane
components x,y, the derivative ∂/∂ k^z
thus gives zero and (r_
k^a)_;nmk^z=-i(ξ^z_nn k-ξ^z_mm k)r^a_nm
k.
§.§ Injection and shift currents
We focus on the injection and shift currents induced by
a laser pulse centered at frequency
ω, for which the electric field is E(t)=
E_0(t)e^-iω t+c.c. and E_0(t) is
a slow varying envelop function.
The response static currents can be written as
J_0(t)= J_ inj(t)+ J_ sh(t) .
Here the first term J_ inj(t) is a one-color injection current satisfying
dJ^a_ inj(t)/dt=2iη^abc(ω)E_0^b(t)[E_0^c(t)]^* ,
with the injection coefficient η^abc(ω) given by
η^abc(ω) =2e^3π/ħ^2∫ d k/4π^2∑_nm V_mn k^a f_nm k Im[r^c_mn k r^b_nm k]δ(ω_mn k-ω) .
Here f_nm k=f_n k-f_m k is the population difference with the Fermi-Dirac distribution f_n k=[1+e^(ϵ_n k-μ)/k_BT]^-1 for given chemical potential μ and temperature T.
The second term J_ sh(t) in Eq. (<ref>) is a shift current written as
J^a_ sh(t) =2σ^abc(ω)E_0^b(t)[E_0^c(t)]^* ,
with the shift conductivity σ^abc(ω) given by
σ^abc(ω) =-iπ e^3/ħ^2∫ d k/4π^2∑_nm f_nm k[r^b_mn k(r_ k^c)_;nmk^a+r^c_mn k(r_ k^b)_;nmk^a]δ(ω_mn k-ω) .
Further discussion of photocurrents starts with a
symmetry analysis on the tensors of η^abc(ω) and σ^abc(ω).
The presence of time-reversal symmetry
gives r_nm k= r_mn(- k)=[ r_nm (- k)]^*,
v_nm k=- v_mn(- k)=-[ v_nm(- k)]^*, ϵ_n k=ϵ_n (- k), and (r^b_ k)_;nm
k^a=-(r^b_- k)_;mn k^a=-[(r^b_ k)_;nmk^a]^*. Thus from
Eqs. (<ref>) and (<ref>), we obtain η^abc=[η^abc]^* and
σ^abc=[σ^abc]^*, which are both real numbers.
At finite gate voltage, the crystal point group of AB-BG is
C_3v, whose symmetry is lower than that of AA-BG with crystal
point group C_6v. Thus we can check the symmetry properties of
AB-BG first, and then refine them to AA-BG. Combining the point
group and the time reversal symmetry, the nonzero tensor components
satisfy η^xzx=η^yzy=η^xxz=η^yyz, σ^xzx=σ^yzy=σ^xxz=σ^yyz,
σ^zxx=σ^zyy, σ^zzz, and
σ^yyy=-σ^yxx=-σ^xxy=-σ^xyx. Then the
injection current becomes
dJ_ inj^a(t)/dt=4η^xzx(ω)
Im{E^a_0(t)[E^z_0(t)]^∗}(1-δ_a,z) ,
and the shift current is
J_ sh^x(t) =4σ^xzx(ω)Re{E^z_0(t)[E^x_0(t)]^*}-4σ^yyy(ω)Re{E^x_0(t)[E^y_0(t)]^*} ,
J_
sh^y(t) =4σ^xzx(ω)Re{E^z_0(t)[E^y_0(t)]^*}+2σ^yyy(ω)
[ |E^y_0(t)|^2-|E^x_0(t)|^2] ,
J_ sh^z(t) =2σ^zxx(ω)[|E^x_0(t)|^2+|E^y_0(t)|^2]
+2σ^zzz(ω)|E^z_0(t)|^2 .
For AA-BG, the results are similar except that the σ^yyy
component disappears due to the extra crystal symmetry.
The injection current in AA-BG or AB-BG requires an elliptically
polarized light incident obliquely, and its z-component vanishes
due to the lack of freely moving electrons along this quantum
confined direction. The z-component of shift current in AA-BG or
AB-BG, induced by the charge shift
between the two layers under the light excitation, can be always generated. Such shift current can lead to charge
accumulation between these two layers, which can further induce a
gate voltage in this system, as discussed by Gao et al.
<cit.>. The in-plane components of
the shift current in AA-BG can be generated only for an
elliptically polarized light incident obliquely, while those in
AB-BG have no such limit.
§ RESULTS
§.§ Analytical results for AA-BG
The Hamiltonian for the AA-BG can be analytically diagonalized. The eigenstates are
C_n k =√(1-
α_n N_β_n k)/2√(2)(
[ -ĝ_ k; - β_n; β_nĝ_ k; 1 ]) +α_n√(1+α_n N_β_n k)/2√(2)(
[ ĝ_ k; β_n; β_nĝ_ k; 1 ]) ,
with ĝ_ k=g_ k/|g_ k| and
N_β_n k=γ_3|g_ k|+β_nγ_1/√(Δ^2+(γ_3|g_ k|+β_nγ_1)^2) .
Here n=1,2,3,4 denotes the band index with
α_n=-1,-1,+1,+1 and β_n=-1,+1,-1,+1, respectively. The associated eigenenergies are
ϵ_n k
=β_nγ_0|g_ k|+α_n√(Δ^2+(γ_3|g_ k|+β_nγ_1)^2) .
With the analytic wavefunctions in Eq. (<ref>), Berry
connections ξ_nm k can be calculated directly from Eq. (<ref>), as listed in Appendix <ref>, where the relations between all
components are also presented. There exist selection rules for
r_nm k^z as
r^z_13 k =r^z_31 k=c
N_-1 k/2 ,
r^z_24 k=r^z_42 k=c
N_+1 k/2 .
Therefore, r_nm k^z is nonzero only for the band pair
(n,m)=(1,3) or (2,4). The injection coefficient becomes
η^xzx(ω)= e^3/2πħ^2∫ d
k{f_13 k
V_31 k^xIm[r^x_31 kr^z_13 k]δ(ω_31 k-ω)
.
.+f_24 k V_42 k^xIm[r^x_42 kr^z_24 k]δ(ω_42 k-ω)} .
The intraband Berry connections are obtained as
ξ_nn k =1/2[g_ k^∗(i∇_ k)g_
k+a_0/√(3)ŷ]+1/2cẑ(1+α_n√(1-
N_β_n k^2)) ,
The matrix elements for ξ_nn
k^x/y are independent of the band index n, thus (r_
k^a)_;nmk^b=∂ r_nm k^a/∂ k^b for
b=x,y and (r_
k^a)_;nmk^z=-i(ξ_nn k^z-ξ_mm k^z)r_nm
k^a. The shift conductivities become
σ^xzx(ω) = -ie^3/4πħ^2∫ d k [ f_13 k( r^z_31 k∂ r^x_13 k/∂ k_x + r^x_31 k∂ r^z_13 k/∂ k_x )δ(ω_31 k-ω) + f_24 k( r^z_42 k∂ r^x_24 k/∂ k_x + r^x_42 k∂ r^z_24 k/∂ k_x )δ(ω_42 k-ω) ] ,
σ^zzz(ω) = e^3/2πħ^2∫ d k [ f_13 k|r^z_31 k|^2(ξ_33 k^z-ξ_11 k^z)δ(ω_31 k-ω) + f_24 k|r^z_42 k|^2(ξ_44 k^z-ξ_22 k^z)δ(ω_42 k-ω) ] ,
σ^zxx(ω) = e^3/2πħ^2∫ d k ∑_nm f_nm k|r^x_mn k|^2(ξ_mm k^z-ξ_nn k^z)δ(ω_mn k-ω) .
It can be seen that the coefficients η^xzx, σ^xzx, and
σ^zzz are induced by the transitions only from the band 1 to 3 or
from the band 2 to 4, while σ^zxx has no such
limit.
These coefficients can be further simplified with the analytical expressions of all these quantities, which can be obtained under the linear dispersion
approximation around the Dirac points, as shown in
Appendix <ref>.
Figure <ref> (a) shows the band structure of AA-BG for Δ=0 and 0.4 eV. When a gate voltage is applied, the interlayer coupling shifts the energies of the Dirac cones of each layer, while the electronic states at zero energy remain degenerate. The bands 1 and 3 (or 2 and 4) are approximately parallel to each other, and their energy differences lie in the range
2√(Δ^2+(γ_1+3γ_3)^2) ≤ ħω_42 k ≤ 2√(Δ^2+γ_1^2) ≤ ħω_31 k ≤ 2√(Δ^2+(γ_1-3γ_3)^2)
due to 0≤|g_ k|≤ 3, where the middle value is obtained at the Dirac points and the other two values are obtained at the M points. Figure <ref> (b) gives the joint density of states (JDOS) J_31(ω) and J_42(ω) for the two related pairs of bands, which are defined as
J_nm(ω) = ∫ d k δ(ħω_nm k-ħω) .
These two JDOS are strongly localized in energy, regardless of whether the gate voltage is applied. For Δ=0.4 eV, J_42(ω) is nonzero in the energy range [0.95, 1.08] eV and J_31(ω) is nonzero in the energy range [1.08, 1.21] eV.
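A brute-force way to reproduce such a JDOS is to bin the energy differences over a uniform k-grid. The sketch below does this for AA-BG using the analytic energy differences ħω_31k and ħω_42k; a standard nearest-neighbour honeycomb structure factor is assumed for g_k here, and the parameter values are placeholders (the paper's own definition of g_k and parameters appear earlier in the text).

import numpy as np

def jdos_aa_bg(gamma1, gamma3, Delta, a0=1.0, nk=1200, nbins=400):
    """Histogram estimate of J_31(ω) and J_42(ω) over one reciprocal-lattice cell."""
    b1 = 2*np.pi/a0 * np.array([1.0, -1.0/np.sqrt(3.0)])   # reciprocal vectors (assumed convention)
    b2 = 2*np.pi/a0 * np.array([0.0,  2.0/np.sqrt(3.0)])
    u, v = np.meshgrid(np.linspace(0, 1, nk, endpoint=False),
                       np.linspace(0, 1, nk, endpoint=False))
    kx = u*b1[0] + v*b2[0]
    ky = u*b1[1] + v*b2[1]
    # three nearest-neighbour vectors of the honeycomb lattice (illustrative convention)
    deltas = a0/np.sqrt(3.0) * np.array([[0.0, 1.0],
                                         [ np.sqrt(3.0)/2, -0.5],
                                         [-np.sqrt(3.0)/2, -0.5]])
    g = sum(np.exp(1j*(kx*d[0] + ky*d[1])) for d in deltas)
    gabs = np.abs(g).ravel()
    w31 = 2*np.sqrt(Delta**2 + (gamma3*gabs - gamma1)**2)   # ħω_31k
    w42 = 2*np.sqrt(Delta**2 + (gamma3*gabs + gamma1)**2)   # ħω_42k
    h31, edges = np.histogram(w31, bins=nbins)
    h42, _ = np.histogram(w42, bins=edges)
    return edges, h31, h42   # histograms proportional to J_31(ω), J_42(ω)

edges, J31, J42 = jdos_aa_bg(gamma1=0.36, gamma3=0.03, Delta=0.4)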
§.§ Band structure of AB-BG
The Hamiltonian in Eq. (<ref>) for AB-BG can also be analytically diagonalized, as shown in Appendix <ref>, but the expressions for the eigenenergies are too complicated to provide meaningful physical insight, thus we discuss the band structure based on numerical calculations.
This work focuses on the electronic transitions around the Dirac points; for convenience, the wavevectors are expressed as
k=k̅2π/a_0 (x̂cosθ+ŷsinθ)+ K
with θ=2nπ/3 along the K-M directions, and θ=(2n+1)π/3 along the K-Γ directions.
Figure <ref> (a) gives the
band structure for AB-BG at gate voltages Δ= 0 and 0.4 eV.
At Δ=0, in each Dirac cone, the two middle bands are degenerate at the Dirac point with k̅=0 and at three other k points on the K-M paths with k̅ = -γ_1^'γ_3^'/(√(3)πγ_0^'^2) ∼ 0.003 (see details in Appendix <ref>).
Meanwhile, the energy differences, ħω_31 k and ħω_42 k, have minima at the Dirac points.
For nonzero gate voltage, the degeneracy at these points is lifted.
The eigenenergies at the Dirac points are ±Δ-Δ^'/2,
±√(Δ^2+γ_1^2)+Δ^'/2,
and the middle two bands around the Dirac points have the Mexican hat shape <cit.>.
At Δ=0.4 eV, the energy difference ħω_32 k shows a minimum with increasing k̅ for each θ, as shown in the k-resolved energy difference in the inset, where
the three-fold rotational symmetry can be clearly seen around this Dirac point.
Along the K-M directions, the minima of ħω_32 k appear around k̅=0.027 to give the band gap of E_g=0.28 eV; and along the K-Γ directions, the minima appear around k̅=0.023, which have an energy E_1=0.4 eV higher than the band gap and give a van Hove singularity (VHS).
Similar results can be found for ħω_42 k, and another VHS appears with energy E_2=0.97 eV; however, ħω_31 k shows a minimum at the Dirac points but no VHS appears.
Figure <ref> (b) gives JDOS of J_31(ω), J_32(ω), J_41(ω), and J_42(ω) at Δ=0 and 0.4 eV.
The gate voltage changes these JDOS significantly around the band edge.
J_32(ω) and J_42(ω) have divergences at the VHS points with energies E_1 and E_2, respectively; and J_31(ω) has a peak located at E_3∼0.97 eV around the band edge,
which is induced by the nearly parallel bands (1, 3) around the Dirac points.
The VHS points do not appear at all gate voltages.
Figure <ref> (c) exhibits the Δ dependence of the k̅ value of the minimal energy of ħω_32 k and ħω_42 k, for θ along the K-M and K-Γ directions, respectively.
Along the K-M directions, ħω_32 k has a minimum at nonzero k̅ for all Δ, which gives the band gap E_g of the system; while along the K-Γ directions, the minimum energy E_1 moves to a nonzero k̅ only for Δ≥ 0.023 eV, where a VHS appears as well. Note that the JDOS J_32(ω) shows a maximum at the band edge when there is no VHS, for Δ < 0.023 eV.
However, the minima of ħω_42 k along the K-M and K-Γ directions move away from the Dirac points only for Δ≥0.174 eV, where a VHS appears as well. For Δ<0.174 eV, J_42(ω) also shows a maximum at the band edge between the bands 4 and 2, and this energy is still denoted E_2; the maximum of J_31(ω) also locates at the band edge between bands 3 and 1, and this energy is still denoted E_3.
The gate voltage dependences of these energies E_g, E_1, E_2, and E_3 are shown in Fig. <ref> (d).
§.§ Injection coefficients and shift conductivities at Δ=0.4 eV
In this section we present the numerical results for
injection coefficient η^xzx(ω) and shift
conductivities σ^yyy(ω), σ^xzx(ω),
σ^zxx(ω), and σ^zzz(ω). The parameters
are chosen as T=300 K, μ= 0, Δ=0.4 eV. During the numerical calculation, the Brillouin
zone is divided into a 3000×3000
homogeneous grid.
The δ functions in Eqs. (<ref>) and (<ref>) are approximated by a Gaussian function as
δ(ω)=ħ/√(π)Γe^-(ħω)^2/Γ^2
with the Gaussian broadening Γ=10 meV.
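A minimal sketch of this broadened δ function and of a Brillouin-zone average built from it is given below; the transition energies and matrix-element weights used here are random placeholders, and the grid is much smaller than the 3000×3000 grid of the actual calculation.

import numpy as np

# Gaussian representation of the δ function, written as a function of the energy E = ħω
# (in eV) so that it integrates to unity over E; Γ = 10 meV as in the text.
def delta_gauss(E, Gamma=0.010):
    return np.exp(-(E / Gamma)**2) / (np.sqrt(np.pi) * Gamma)

def bz_average(weights, energies, hw, Gamma=0.010):
    """Approximate (1/N_k) Σ_k w_k δ(ε_k - ħω) with the broadened δ."""
    return np.mean(weights * delta_gauss(energies - hw, Gamma))

N = 300
energies = np.random.uniform(0.9, 1.3, size=N * N)   # placeholder transition energies (eV)
weights = np.ones_like(energies)                      # placeholder matrix-element weights
print(bz_average(weights, energies, hw=1.08))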
Figure <ref> (a) shows the injection coefficient spectra for AA-BG and AB-BG.
For the injection in AA-BG, the spectrum is just a peak located in a very narrow energy range 1.069 eV<ħω<1.087 eV, with an absolute value of about 0.067 A· s^-1· m/V^2. From the analytic results shown in Eq. (<ref>), the spectrum includes two contributions in different photon energy regions: one is from the optical transition between the bands (1, 3) for photon energy ħω>2√(Δ^2+γ_1^2), or 1.078 eV<ħω<1.087 eV, and the other is between the bands (2, 4) for ħω<2√(Δ^2+γ_1^2), or 1.069 eV<ħω<1.078 eV; both magnitudes are nearly proportional to ħω-2√(Δ^2+γ_1^2). These two contributions merge into a single peak only because the δ function is numerically broadened with Γ=10 meV, which is even larger than each energy region.
The injection coefficient η^xzx in AB-BG becomes nonzero for photon energies above the gap, i.e., ħω>0.28 eV, and reaches its maximum amplitude of 25 A· s^-1· m/V^2 at ħω=0.45 eV, which is slightly larger than the first VHS energy E_1 of the JDOS; the energy difference arises from the vanishing electron velocity at this VHS.
Considering the thickness of bilayer graphene to be 2c=6.7 Å, the effective bulk injection coefficient is 3.7×10^10 μ A· s^-1V^-2, which is nearly 50 times larger than that in bulk GaAs <cit.>.
After this peak, the amplitude of the injection coefficient decreases as the photon energy increases, except for a small peak located around the JDOS peak at the higher energy E_2 or E_3. It can be seen that the injection coefficient for AB-BG is about two orders of magnitude larger than that for AA-BG.
To give a direct impression of these values, we estimate how large the injection current can be in AB-BG.
Based on Eq. (<ref>), when the laser is a 45^∘ obliquely incident p-polarized light with photon energy of 0.45 eV, light intensity of I=0.1 GW/cm^2, and pulse duration of τ=1 ps, the generated injection current is 2η^xzxI/2cϵ_0Wτ∼ 9 mA for an electrode with a width W=1 μ m.
Then we turn to the shift conductivities, as
shown in Figs. <ref> (b–d).
Figure <ref> (c) gives the shift conductivity for AA-BG. It can be seen that the component σ^zzz is about one order of magnitude larger than σ^xzx, and at least two orders of magnitude larger than σ^zxx. Both σ^zzz and σ^xzx have nonzero values only in very narrow energy regions, similar to the injection coefficient. These results are consistent with the analytic results shown in Eqs. (<ref>–<ref>). Interestingly, σ^xzx includes the contributions from the band 1 to 3 and from the band 2 to 4, but with opposite signs.
For AB-BG, shown in Figs. <ref> (b) and (d), all nonzero components start from the band edge ħω≥ E_g. Different from the injection coefficients, the shift conductivities at the band edge are nonzero and show prominent peaks. In particular, σ^yyy reaches a large value of about 6×10^-13 A· m/V^2 at the band edge and drops quickly with increasing photon energy.
The effective bulk shift conductivity is 896 μ A/V^2, which is several times larger than that in GeSe (200 μ A/V^2) <cit.>.
Besides, the component σ^zzz is at least one order of magnitude smaller than the other nonzero components, totally different from the case of AA-BG, where it is the largest one.
The spectra of σ^xzx and σ^zxx have similar
amplitude around a few 10^-14 A· m/V^2, which is a few
tens of times smaller than the peak of σ^yyy; they also show
some fine structures around those characteristic energies E_1, E_2,
and E_3.
We repeat the above estimation for the shift current using the same parameters but ħω=0.3 eV, and then obtain the generated shift current of 2σ^yyyI/2cϵ_0W∼ 0.23 mA.
§.§ Effects of Gate voltage
Figure <ref> gives the gate voltage dependence of the
injection coefficients and shift conductivities for AA-BG and AB-BG
at zero chemical potential.
Note that a negative gate voltage leads to coefficients of opposite sign, consistent with the results of Xiong et al. <cit.>, thus only positive gate voltages are shown here.
Figures <ref> (a) and (b) show the spectra of η^xzx and σ^zzz for AA-BG, respectively. As indicated in the previous section, both spectra for different gate voltages are nonzero only in a very narrow photon energy region. With increasing gate voltage, this region moves to higher energy and the values of both spectra increase, as indicated by the ∝Δ dependence in Eqs. (<ref>) and (<ref>).
Figure <ref> (c) gives the injection coefficient η^xzx for AB-BG. At each gate voltage, the injection coefficient shows two peaks located at photon energies slightly larger than E_1 and E_2, as discussed in the previous section. As the gate voltage Δ varies, the peak amplitude reaches a maximum at Δ∼0.2 eV.
The shift conductivities σ^xzx, σ^yyy, and σ^zxx for AB-BG are plotted in Figs. <ref> (d–f). They show some similar characteristics: (1) The spectra are located around the band gap, similar to the case of Δ=0.4 eV, and their amplitudes increase as Δ decreases; σ^xzx and σ^zxx increase much faster than σ^yyy. (2) The shift conductivities exhibit sign changes.
§.§ Effects of Chemical potential
The chemical potential μ dependence of
injection coefficients and shift conductivities
at Δ=0.4 eV are depicted in Fig. <ref> with
the same layout as Fig. <ref>.
For AA-BG, Figs. <ref> (a) and (b) show a very similar asymmetric dependence on the chemical potential: with increasing chemical potential, the values of all coefficients increase and the spectra shift to higher or lower photon energies depending on the sign of the chemical potential. For positive chemical potential, the transitions between bands (1, 3) are suppressed by Pauli blocking, while new transitions between bands (2, 4) appear due to the additional free electrons in band 2. The extra transitions require lower photon energy and red shift the spectra, and they also correspond to a larger JDOS, leading to larger coefficients. Similar results can be obtained for negative chemical potential, but with the band pairs (1, 2) and (3, 4) switched.
In AB-BG, the chemical potential μ has different effects, as shown in Figs. <ref> (c–f). Due to the existence of the band gap, the spectra are hardly changed when the chemical potential lies in the gap.
When μ is above the conduction band edge or below the valence band edge, the main peak of η^xzx around 0.5 eV is gradually reduced due to Pauli blocking, and new transitions between the bands (1, 2) or (3, 4) appear, giving additional injection contributions with opposite signs. Similar results are obtained for the shift conductivities.
§ CONCLUSION
In this paper we have studied the gate-voltage-induced injection current and shift current in AA- and AB-stacked bilayer graphene.
The gate voltage plays a crucial role in breaking the inversion symmetry of bilayer graphene to induce photogalvanic effects, and at the same time it effectively changes the band structure of AB-BG by opening gaps along the K-M directions and inducing additional VHS along the K-Γ directions.
In AA-BG, the injection and shift currents are mainly induced by optical transitions between two pairs of nearly parallel bands; the coefficient spectra are confined to a very narrow photon energy region of about 20 meV.
In AB-BG, the optical transitions can occur between any possible band pairs, and the structure of the spectra is strongly determined by the band gap and the VHS energies.
For both structures, the injection and shift currents can be generated in the presence of an oblique p-polarized light component, while the in-plane shift currents in AB-BG can also be generated by normally incident light. The out-of-plane shift current finally results in a static electric polarization between the layers. The stacking order has significant effects on both currents. The injection coefficient for AA-BG is about two orders of magnitude smaller than that for AB-BG, while the shift conductivities are mostly of the same order of magnitude.
All these coefficients can be effectively modulated by the gate voltage and the chemical potential.
Our results suggest that gate-voltage-controlled bilayer graphene can be used to realize tunable optoelectronic detectors working in the mid-infrared.
This work has been supported by National Natural Science Foundation of China Grant No. 12034003, 12004379, and 62250065. J.L.C. acknowledges the support from Talent Program of CIOMP.
§ BERRY CONNECTIONS OF AA-BG
The general expression for the Berry connection of AA-BG is
ξ_nm k = ( √(1-α_n N_β_n k)√(1-α_m N_β_m k) + α_nα_m√(1+α_n N_β_n k)√(1+α_m N_β_m k) ) × (1+β_nβ_m)/8 [ ĝ_ k^∗(i∇_ kĝ_ k) + ŷ d ]
+ ( α_m√(1-α_n N_β_n k)√(1+α_m N_β_m k) + α_n√(1+α_n N_β_n k)√(1-α_m N_β_m k) ) × (β_nβ_m-1)/8 [ ĝ_ k^∗(i∇_ kĝ_ k) - ŷ d ]
+ iδ_β_nβ_m/2 ( √(1-α_n N_β_n k)∇_ k√(1-α_m N_β_m k) + α_nα_m√(1+α_n N_β_n k)∇_ k√(1+α_m N_β_m k) )
+ ( √(1-α_n N_β_n k) + α_n√(1+α_n N_β_n k) )( √(1-α_m N_β_m k) + α_m√(1+α_m N_β_m k) ) × (1+β_nβ_m)/8 cẑ
with d=√(3)a_0/3. Here we give the x-components between different bands as
r^x_13 k = -r^x_31 k = -i/2 (∂ N_-1 k/∂ k_x)/√(1- N_-1 k^2) = -i/2 (γ_3/|Δ|)(1- N_-1 k^2) ∂ |g_ k|/∂ k_x ,
r^x_24 k = -r^x_42 k = -i/2 (∂ N_+1 k/∂ k_x)/√(1- N_+1 k^2) = -i/2 (γ_3/|Δ|)(1- N_+1 k^2) ∂ |g_ k|/∂ k_x ,
r^x_12 k = r^x_21 k = -r^x_34 k = -r^x_43 k = 1/4[ √(1+ N_-1 k)√(1- N_+1 k) + √(1- N_-1 k)√(1+ N_+1 k) ][ ĝ_ k^∗(i∂ĝ_ k/∂ k_x) ] ,
r^x_32 k = -r^x_23 k = r^x_14 k = -r^x_41 k = 1/4[ √(1+ N_-1 k)√(1- N_+1 k) - √(1- N_-1 k)√(1+ N_+1 k) ][ ĝ_ k^∗(i∂ĝ_ k/∂ k_x) ] .
Combining with other quantities in Eqs. (<ref>) and (<ref>), the injection coefficients and the shift conductivities can be evaluated. For later use, we also need
V_21 k = (2γ_3/ħ) N_-1 k ∂ |g_ k|/∂ k_x ,
V_43 k = (2γ_3/ħ) N_+1 k ∂ |g_ k|/∂ k_x .
§ ANALYTICAL EXPRESSIONS OF η^xzx, σ^xzx, AND σ^zzz IN AA-BG UNDER THE LINEAR DISPERSION APPROXIMATION
Here we give the analytic results for η^xzx in Eq. (<ref>), σ^xzx in Eq. (<ref>), and σ^zzz in Eq. (<ref>) under the linear dispersion approximation around the Dirac points. The σ^zxx term is not discussed owing to its very small magnitude, as shown in Fig. <ref> (c).
The integrands of
η^xzx, σ^xzx, and σ^zzz are functions of
|g_ k|, ∂ |g_ k|/∂ k_x, and
∂ ^2 |g_ k|/∂ k_x^2, where all terms
involving |g_ k| can be simplified by using the properties of
the δ function. The function δ(ħω_nm
k-ħω) is nonzero only for |g_ k|=G_nm with
γ_3 G_31 =γ_1-√((ħω/2)^2-Δ^2) ,
for ħω≥2√(Δ^2+γ_1^2) ,
γ_3 G_42 =√((ħω/2)^2-Δ^2)-γ_1 ,
for ħω≤ 2√(Δ^2+γ_1^2) .
Further, we get
(N_-1 k)|_|g_ k|=G_31 = -(N_+1 k)|_|g_ k|=G_42 = -√(1-(2Δ/ħω)^2) .
* By substituting the expressions of V_nm k^x, r_31 k^x, r_13 k^z, r_42 k^x, and r_24 k^z, η^xzx becomes
η^xzx = e^3/2πħ^2∫ d k (cγ_3^2/2ħ|Δ|) { f_13 k N_-1 k^2(1- N_-1 k^2)(∂ |g_ k|/∂ k_x)^2δ(ω_31 k-ω) + f_24 k N_+1 k^2(1- N_+1 k^2)(∂ |g_ k|/∂ k_x)^2δ(ω_42 k-ω) }
= e^3c|Δ|/πħ^2(ħω)^2 [1-(2Δ/ħω)^2] { f_13 k|_|g_ k|=G_31 F_31(ω) + f_24 k|_|g_ k|=G_42 F_42(ω) } ,
with
F_nm(ω) = ∫ d k (γ_3∂ |g_ k|/∂ k_x)^2 δ(ħω_nm k-ħω) .
* To get the result for σ^xzx, we use
∂ N_-1 k/∂ k_x = (1- N_-1 k^2)^3/2 (γ_3/|Δ|) ∂ |g_ k|/∂ k_x
to get
r_31 k^z∂ r_13 k^x/∂ k_x + r_31 k^x∂ r_13 k^z/∂ k_x = ic/4(1+ N_-1 k^2)(1- N_-1 k^2)^3/2(γ_3/|Δ| ∂ |g_ k|/∂ k_x)^2 - ic/4 N_-1 k(1- N_-1 k^2) γ_3/|Δ| ∂^2 |g_ k|/∂ k_x^2 .
Similar expressions can be obtained for the terms involving r_32 k. Then we get
σ^xzx = e^3c/4πħ(ħω)^2{ [2-(2Δ/ħω)^2] 2|Δ|/ħω [f_13 k|_|g_ k|=G_31 F_31(ω)+f_24 k|_|g_ k|=G_42 F_42(ω)] - |Δ|√(1-(2Δ/ħω)^2) [f_13 k|_|g_ k|=G_31 Q_31(ω)-f_24 k|_|g_ k|=G_42 Q_42(ω)] } ,
with
Q_nm(ω) = ∫ d k γ_3∂^2 |g_ k|/∂ k^2_x δ(ħω_nm k-ħω) .
* The term σ^zzz(ω) becomes
σ^zzz(ω) = e^3/2πħ^2∫ d k { f_13 k (c^2/4) N_-1 k^2 c√(1- N^2_-1 k) δ(ω_31 k-ω) + f_24 k (c^2/4) N_+1 k^2 c√(1- N^2_+1 k) δ(ω_42 k-ω) }
= e^3c^3/4πħ (|Δ|/ħω) [1-(2Δ/ħω)^2] [ f_13 k|_|g_ k|=G_31 J_31(ω) + f_24 k|_|g_ k|=G_42 J_42(ω) ] ,
with
J_nm(ω) = ∫ d k δ(ħω_nm k-ħω) .
When the optical transition occurs just around the Dirac points K, we can approximate |g_ k+ K|=√(3)a_0k/2, then the δ functions can be worked out as
δ(2√(Δ^2+(γ_3|g_ k|-γ_1)^2)-ħω) = δ(k-2G_31/(√(3)a_0))/(√(3)a_0|γ_3|√(1-(2Δ/ħω)^2)) θ(ħω-2√(Δ^2+γ_1^2)) ,
δ(2√(Δ^2+(γ_3|g_ k|+γ_1)^2)-ħω) = δ(k-2G_42/(√(3)a_0))/(√(3)a_0|γ_3|√(1-(2Δ/ħω)^2)) θ(2√(Δ^2+γ_1^2)-ħω) .
Then we get
[ J_31(ω); J_42(ω) ] =8π/3a_0^2γ_3^2√(1-(2Δ/ħω)^2)|γ_1-√((ħω/2)^2-Δ^2)|[ θ(ħω-2√(Δ^2+γ_1^2)); θ(2√(Δ^2+γ_1^2)-ħω) ] ,
[ F_31(ω); F_42(ω) ] =3a_0^2γ_3^2/8[ J_31(ω); J_42(ω) ] ,
[ Q_31(ω); Q_42(ω) ] =-π/√(1-(2Δ/ħω)^2)[ θ(ħω-2√(Δ^2+γ_1^2)); θ(2√(Δ^2+γ_1^2)-ħω) ] ,
where two Dirac points have been counted in the integration.
In such
approximation, the expressions for η^xzx, σ^xzx, and σ^zzz are expressed as
η^xzx(ω)= e^3c|Δ|√(1-(2Δ/ħω)^2)/ħ^2(ħω)^2
|γ_1-√((ħω/2)^2-Δ^2)|( M_31(ω)+ M_42(ω)) ,
σ^xzx(ω)=
e^3c|Δ|(ħ^2ω^2-2Δ^2)/2ħ(ħω)^4√(1-(2Δ/ħω)^2)|√(1-(2Δ/ħω)^2)-2γ_1/ħω|( M_31(ω)+ M_42(ω))
-ce^3|Δ|/4ħ(ħω)^2( M_31(ω)- M_42(ω)) ,
σ^zzz(ω)= e^3c^3|Δ|√(1-(2Δ/ħω)^2)/3ħ(a_0γ_3)^2|√(1-(2Δ/ħω)^2)-2γ_1/ħω|( M_31(ω)+ M_42(ω)) ,
respectively, with
[ M_31(ω); M_42(ω) ]=[ f_13 k|_|g_ k|=G_31θ(ħω-2√(Δ^2+γ_1^2)); f_24 k|_|g_ k|=G_42θ(2√(Δ^2+γ_1^2)-ħω) ] .
Through the Taylor expansion, the above expressions around frequency 2√(Δ^2+γ_1^2) can be approximated as
η^xzx(ω)≈ ce^3|Δ||2√(γ_1^2+Δ^2)-ħω|/8ħ^2(γ_1^2+Δ^2)( M_31(ω)+ M_42(ω)) ,
σ^xzx(ω)≈ ce^3|Δ|(2γ_1^2+Δ^2)|2√(γ_1^2+Δ^2)-ħω|/32ħγ_1^2√(γ_1^2+Δ^2)^3( M_31(ω)+ M_42(ω))
-ce^3|Δ|/16ħ(γ_1^2+Δ^2)( M_31(ω)- M_42(ω)) ,
σ^zzz(ω)≈ ce^3|Δ||2√(γ_1^2+Δ^2)-ħω|/6ħ a_0^2γ_3^2(γ_1^2+Δ^2)( M_31(ω)+ M_42(ω)) .
§ EIGENENERGIES OF AB-BG
The eigenenergies ϵ satisfy the equation
|H^AB_ k - ϵ | = 0 ,
or
ϵ^4 + x_2 ϵ^2 + x_1 ϵ + x_0 = 0 ,
with
x_2 = -γ_1^'^2 - (2γ_0^'^2+γ_3^'^2+2γ_4^'^2)|g_ k|^2 - 2[Δ^2+(Δ^'/2)^2] ,
x_1 = -4γ_0^'γ_4^'(γ_1^' |g_ k|^2+γ_3^'Re[g_ k^3]) + Δ^'(γ_3^'^2 |g_ k|^2 - γ_1^'^2) ,
x_0 = (γ_0^'^2-γ_4^'^2)^2|g_ k|^4 - 2γ_3^'[γ_1^'(γ_0^'^2+γ_4^'^2)-γ_0^'γ_4^'Δ^']Re[g_ k^3] + {γ_3^'^2[γ_1^'^2+Δ^2-(Δ^'/2)^2]-(2γ_0^'^2-γ_3^'^2)[Δ^2-(Δ^'/2)^2]-2γ_0^'γ_1^'γ_4^'Δ^'}|g_ k|^2 + [Δ^2-(Δ^'/2)^2][γ_1^'^2+Δ^2-(Δ^'/2)^2] .
Then the analytic expressions of the eigenenergies are
ϵ_n k = 1/2[ α_n√(-2x_2-β_n2x_1/√(y)-y) + β_n√(y) ] , for n=1, 2, 3, 4 ,
with
y = 1/6[ 4^1/3(y_1+√(y_1^2-4y_2^3))^1/3 + 4^2/3y_2/(y_1+√(y_1^2-4y_2^3))^1/3 - 4x_2 ] ,
y_1 = 2x_2^3+27x_1^2-72x_2x_0 ,
y_2 = x_2^2+12x_0 .
At the Dirac
points with g_ k=0, the four eigenenergies are ±Δ-Δ^'/2,
±√(Δ^2+γ_1^2)+Δ^'/2.
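Rather than using the closed-form radicals, the quartic |H^AB_k - ϵ| = 0 can also be solved numerically from the coefficients x_2, x_1, x_0 given above, which is a convenient consistency check; the sketch below does this with numpy (the parameter values passed in are placeholders, not the values used in the paper).

import numpy as np

def ab_bg_energies(g, g0p, g1p, g3p, g4p, Delta, Dp):
    """Roots of ε⁴ + x₂ε² + x₁ε + x₀ = 0 for AB-BG, with x₂, x₁, x₀ built from
    γ'₀, γ'₁, γ'₃, γ'₄, Δ, Δ' and the complex structure factor g_k."""
    g2 = abs(g)**2
    re_g3 = np.real(g**3)
    x2 = -g1p**2 - (2*g0p**2 + g3p**2 + 2*g4p**2)*g2 - 2*(Delta**2 + (Dp/2)**2)
    x1 = -4*g0p*g4p*(g1p*g2 + g3p*re_g3) + Dp*(g3p**2*g2 - g1p**2)
    x0 = ((g0p**2 - g4p**2)**2*g2**2
          - 2*g3p*(g1p*(g0p**2 + g4p**2) - g0p*g4p*Dp)*re_g3
          + (g3p**2*(g1p**2 + Delta**2 - (Dp/2)**2)
             - (2*g0p**2 - g3p**2)*(Delta**2 - (Dp/2)**2)
             - 2*g0p*g1p*g4p*Dp)*g2
          + (Delta**2 - (Dp/2)**2)*(g1p**2 + Delta**2 - (Dp/2)**2))
    return np.sort(np.roots([1.0, 0.0, x2, x1, x0]).real)   # all roots are real (Hermitian H)

# at g_k = 0 the four roots should reduce to ±Δ - Δ'/2 and ±sqrt(Δ² + γ'₁²) + Δ'/2
print(ab_bg_energies(0.0, g0p=3.1, g1p=0.38, g3p=0.3, g4p=0.04, Delta=0.4, Dp=0.02))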
In general, the electron-hole symmetry of AB-BG is broken due to the nonzero γ_4^' and Δ^'. However, we find that γ_4^' and Δ^' have negligible effects on the optical transition between the bands (2, 3). Setting γ_4^'=0 and Δ^'=0, the eigenvalues become
ϵ_n k = (α_n/√(2)) √(z_1+α_nβ_n√(z_2)) ,
with
z_1 = γ_1^'^2+2Δ^2 + (2γ_0^'^2+γ_3^'^2)|g_ k|^2 ,
z_2 = 4γ_0^'^2[γ_3^'^2|g_ k|^4+2γ_1^'γ_3^'Re[g_ k^3]+(γ_1^'^2+4Δ^2)|g_ k|^2] + (γ_3^'^2|g_ k|^2-γ_1^'^2)^2 .
Obviously, the electronic states become electron-hole symmetric.
Using Eq. (<ref>), we can analytically discuss the band gap E_g and the VHS of J_32. Around the Dirac point K, the approximation g_ k+ K = -r e^iθ can be adopted for k = 2r/(√(3)a_0)(cosθx̂ + sinθŷ).
For zero Δ, the zero energy of ϵ_3 k can be directly found from Eq. (<ref>) at r=0 or r=r_0=-γ_1^'γ_3^'/γ_0^'^2 and θ=(2n+1)π/3. Therefore, there exist in total four degenerate zero-energy points in one Dirac cone at Δ=0; one is at this Dirac point, and the other three lie along the K-M directions.
Furthermore, for small r, ϵ_3 k can be approximated by
ϵ_3 k^2 = Δ^2 + c_2 r^2 + c_3 cos(3θ) r^3 + c_4 r^4 ,
with
c_2 = γ_3^'^2 - 4γ_0^'^2Δ^2/γ_1^'^2 ,
c_3 = -2γ_0^'^2γ_3^'/γ_1^' ,
c_4 = γ_0^'^2/γ_1^'^2[ γ_0^'^2-2γ_3^'^2 + 4Δ^2(2γ_0^'^2-γ_3^'^2)/γ_1^'^2 + 16γ_0^'^2Δ^4/γ_1^'^4 ] .
From Eq. (<ref>), the band structure around the Dirac points has the following features:
* For nonzero Δ, the energy ϵ_3 k at the Dirac point K is an extremum, and it is a local minimum (maximum) for c_2>0 (c_2<0), which corresponds to |Δ|<Δ_c (|Δ| > Δ_c) with Δ_c=|γ_3^'γ_1^'/(2γ_0^')|=0.0229 eV.
* We first look at the case |Δ|>Δ_c (c_2<0). For a fixed
θ, ϵ_3 k around the Dirac point K has one more
local minimum located at r=r_e(cos3θ) with
r_e(cos3θ)= -3c_3cos3θ+ √(9c_3^2cos^23θ-32c_2c_4)/8c_4 .
When r is fixed and θ varies, ϵ_3 k has local maxima at cos3θ=1 and local minima at cos3θ=-1. When both r and θ are considered, there exists a minimum at r=r_e(-1) and θ=(2n+1)π/3 (along the K-Γ directions for integer n), and a VHS point at r=r_e(1) and θ=2nπ/3 (along the K-M directions).
* For the case |Δ|<Δ_c (c_2>0), ϵ_3 k has no VHS point around the Dirac points but
the minimum along K-Γ directions still exists.
* Similar analysis can be applied to study the JDOS J_42= J_31. After ignoring γ_4^' and
Δ^', ϵ_4 k-ϵ_2 k has a local
minimum at the K point, and there is no VHS in J_42. Therefore, γ_4^' and
Δ^' play a key role in forming a VHS in J_42.
|
http://arxiv.org/abs/2307.07592v1 | 20230714193258 | Cosmological constraints from Type I radio-loud quasars | [
"L. Huang",
"Z. Y. Tu",
"N. Chang",
"Z. Y. Chang",
"F. F. Song"
] | astro-ph.CO | [
"astro-ph.CO",
"astro-ph.GA",
"astro-ph.HE",
"gr-qc"
] |
[email protected]
College of Science, Jiujiang University, Jiujiang 332000, People's Republic of China.
Key Laboratory of Functional Microscale Materials in Jiangxi Province,
Jiujiang 332000, People's Republic of China.
College of Science, Jiujiang University, Jiujiang 332000, People's Republic of China.
Key Laboratory of Functional Microscale Materials in Jiangxi Province,
Jiujiang 332000, People's Republic of China.
Xinjiang Astronomical Observatory, Chinese Academy of Sciences,
Urumqi 830011, People's Republic of China.
Key Laboratory of Radio Astronomy, Chinese Academy of Sciences,
Nanjing, 210008, People's Republic of China.
College of Science, Jiujiang University, Jiujiang 332000, People's Republic of China.
Key Laboratory of Functional Microscale Materials in Jiangxi Province,
Jiujiang 332000, People's Republic of China.
College of Science, Jiujiang University, Jiujiang 332000, People's Republic of China.
Key Laboratory of Functional Microscale Materials in Jiangxi Province,
Jiujiang 332000, People's Republic of China.
We obtain a new sample of 1192 Type I quasars with UV-optical, radio, and X-ray coverage by combining <cit.> with other matching data of SDSS-DR16 with FIRST, XMM–Newton, and the Chandra Source Catalog, together with a sample of 407 flat-spectrum radio-loud quasars (FSRLQs) of blazars from the Roma-BZCAT; these samples can be used to investigate the multi-band luminosity correlations and to measure the luminosity distances of the Type I radio-loud quasars (RLQs). We check the correlation between X-ray, UV-optical, and radio luminosity for various groupings of radio-quiet quasars (RQQs) and RLQs by parameterizing the X-ray luminosity as a sole function of the UV-optical or radio luminosity and as a joint function of the UV-optical and radio luminosity, which can also be employed to determine their cosmological distances. According to the Bayesian information criterion (BIC), the data suggest that the X-ray luminosity of RQQs is only indirectly correlated with the radio luminosity, through the connection between the UV-optical and radio luminosity. For RLQs, however, the X-ray luminosity is directly related to the radio luminosity, and the correlations between X-ray, optical/UV, and radio luminosity increase with the ratio of monochromatic luminosities logR. Meanwhile, we compare the results from RLQs with different UV-optical power-law indices Γ _UV; the goodness of fit for RLQs with Γ _UV≤ 1.6 seems to be better. Finally, we apply a combination of Type I RLQs and the SN Ia Pantheon sample to test whether the dark energy density deviates from a constant, and give the statistical results.
Cosmological constraints from Type I radio-loud quasars
F. F. Song
August 12, 2023
=======================================================
§ INTRODUCTION
A large amount of quasar data has been obtained and used to investigate their luminosity correlations. There is a dichotomy in the distribution of the radio luminosity of quasars <cit.>, which depends on the ratio of monochromatic luminosities measured at (rest frame) 5 GHz and 2500 Å <cit.>. RLQs are often defined by log R > 1, and RQQs satisfy log R ≤ 1. A large number of observations suggest that the X-ray luminosity of RQQs is related to the UV-optical luminosity <cit.>, which also indicates that the X-ray emission is created by Compton upscattering of disk photons in a hot "corona". The X-ray properties of RLQs are different from those of RQQs. The X-ray emission of RLQs is not solely due to inverse Compton scattering, but is also powered directly or indirectly by the radio jet <cit.>, which can be verified by parameterization methods.
On the other hand, quasars can also be categorized by whether they have broad emission lines (Type I), only narrow lines (Type II), or no lines except when a variable continuum is in a low phase (blazars) <cit.>. Blazars are generally divided into two classes on the basis of their optical spectra: the first class is represented by the flat-spectrum radio-loud quasars (FSRLQs), and the second class is the BL Lac objects, characterized by featureless spectra with emission/absorption lines of equivalent width lower than 5 Å <cit.>.
To investigate the multi-band luminosity correlations of quasars and measure the luminosity distances of these Type I quasars, we construct a large sample of Type I quasars by combining <cit.> with other matching data of SDSS-DR16 with FIRST, XMM–Newton, and the Chandra Source Catalog, and a sample of 407 FSRLQs of blazars from the Roma-BZCAT. Meanwhile, we compare the X-ray luminosity relations of RLQs with different UV-optical power-law indices Γ _UV and X-ray photon indices Γ _X.
In addition, Worrall et al. have used Type I RLQs to check whether their luminosity correlation depends on redshift <cit.>. Hence, we also consider dividing the RLQs sample into different redshift bins, which allows segmented fitting and an examination of whether the X-ray luminosity relation is redshift-dependent.
In Section <ref> of this paper, we introduce the sources of the data used, including Type I quasars and blazars. In Section <ref>, we adopt three parametric models to analyze the X-ray luminosity correlations of RQQs and RLQs, which include the X-ray luminosity as a sole function of the UV-optical or radio luminosity and as a joint function of the UV-optical and radio luminosity. We compare and analyze the results from the three different models using the Bayesian information criterion (BIC). Furthermore, we subdivide the RLQs sample into various redshift bins, which can be used to test whether there is a redshift evolution of the X-ray luminosity relation. In Section <ref>, we employ the X-ray luminosity relation of Type I RLQs to measure their cosmological luminosity distances. In Section <ref>, we apply a combination of Type I RLQs and the SN Ia Pantheon sample to test the nature of dark energy by reconstructing the dark energy equation of state w(z), which concerns whether or not the density of dark energy evolves with time. In Section <ref>, we summarize this paper.
§ DATA USED
Modern optical instruments and surveys (e.g., the Sloan Digital Sky Survey, SDSS) <cit.>, radio surveys (e.g., Faint Images of the Radio Sky at Twenty-Centimeters, FIRST) <cit.>, and archival X-ray data from XMM–Newton <cit.> and Chandra <cit.> provide large amounts of quasar data, which can be applied to check the multi-band luminosity correlations of quasars. The 16th data release (DR16) of the SDSS presented a quasar catalog including the spectra of 750,414 quasars <cit.>; in addition, a catalog containing 946,432 sources observed at a frequency of 1.4 GHz was released by FIRST <cit.>.
We first matched the SDSS-DR16 quasar catalogue with the latest FIRST survey data using a 2” matching radius; all Type I quasars flagged as broad absorption line (BAL) quasars were removed, and we obtained a matched sample of Type I quasars with UV-optical and radio coverage. Next, we matched this sample to the latest XMM-Newton Source Catalog and the Chandra Source Catalog Release 2.0 to obtain their X-ray fluxes (0.2-12 keV for XMM-Newton and 0.5-7 keV for Chandra) <cit.>, with a matching radius of 5”. Finally, we construct a large sample of Type I quasars with multi-wavelength coverage, and some of these objects are from <cit.>.
For this new sample, the UV-optical power-law index Γ _UV can be obtained from a fit of f_ν∝ν ^ - (Γ _UV - 1) to the u, g, r, i, and z bands, and the r-band apparent magnitude can also be used to calculate the UV-optical flux at (rest-frame) 2500 Å, where ⟨Γ _UV⟩ = 1.6 is adopted for the K-correction <cit.>. In the same way, the observed 1.4 GHz flux is utilized to calculate the radio flux at (rest-frame) 5 GHz by assuming a_r = - 0.5. For the X-ray fluxes of this sample, a Galactic-absorption correction is performed using PIMMS to obtain the unabsorbed flux density at observed-frame 2 keV, where a specified Galactic column density and a power-law index in the X-ray band ⟨Γ _X⟩ = 1.6 are adopted; this can then be used to determine the bandpass-corrected rest-frame 2 keV flux.
On the other hand, blazars can also be used to check multi-band luminosity correlations, especially FSRLQs. Recently, Massaro et al. presented a multifrequency catalogue of blazars, named Roma-BZCAT, which contains coordinates and multifrequency data of 3561 sources <cit.>. We match the Roma-BZCAT with the SDSS-DR16 quasar catalogue and obtain 407 FSRLQs with multi-wavelength coverage. Finally, this blazar sample and the Type I quasar sample can be applied to investigate their luminosity correlations and to measure the luminosity distances of these Type I RLQs.
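A positional cross-match of this kind can be sketched as follows; the use of astropy here is an assumption (the paper does not state which matching tool was used), and the coordinate arrays stand for the RA/Dec columns of the two catalogues.

from astropy.coordinates import SkyCoord
import astropy.units as u

def crossmatch(ra1, dec1, ra2, dec2, radius_arcsec):
    """Match catalogue 1 against catalogue 2 within a given radius (e.g. 2" for FIRST,
    5" for the X-ray source catalogues).  ra/dec are arrays of coordinates in degrees."""
    c1 = SkyCoord(ra=ra1 * u.deg, dec=dec1 * u.deg)
    c2 = SkyCoord(ra=ra2 * u.deg, dec=dec2 * u.deg)
    idx, d2d, _ = c1.match_to_catalog_sky(c2)     # nearest neighbour in catalogue 2
    matched = d2d < radius_arcsec * u.arcsec      # keep pairs within the matching radius
    return idx, matched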
In this paper, we only consider RLQs with log R > 2; RIQs and RQQs satisfy 1 < logR ≤ 2 and logR ≤ 1, respectively. Meanwhile, we employ parametric methods to test their multi-band luminosity correlations.
§ THE RELATION BETWEEN X-RAY, UV-OPTICAL, AND RADIO LUMINOSITIES
§.§ Insights from scatter plots
We first plot the L_X - L_uv and L_X - L_radio planes for Type I quasars and blazars (FSRLQs), as shown in Figs. <ref> and <ref>; the luminosities L_λ(2500 Å) have been obtained from the measured fluxes assuming a ΛCDM cosmology (Ω _m = 0.3, H_0 = 70 km s^-1 Mpc^-1). Meanwhile, we fit a linear relation to the data and obtain the theoretical values of the X-ray luminosity from the best-fitting parameters. The upper left panel of Fig. <ref> illustrates the L_X - L_uv plane for Type I quasars with logR ≤ 1 (RQQs) and 1 < logR ≤ 2 (RIQs), and the dotted line represents the theoretical X-ray luminosity from the linear relation with the best-fitting parameters, which implies that the X-ray luminosity of RQQs and RIQs is related to the UV-optical luminosity and originates from inverse Compton scattering. The lower left panel of Fig. <ref> shows the L_X - L_radio plane for Type I RQQs and RIQs, and the dotted line represents the theoretical values, which also indicates that the X-ray luminosity of RQQs is indirectly correlated with the radio luminosity because of the connection between the UV/optical and radio luminosity seen in Fig. <ref> (the L_uv - L_radio plane). The indirect relation between X-ray and radio luminosity in RQQs will also be discussed in Sec. <ref>.
Likewise, the L_X - L_uv plane of Type I RLQs (log R > 2) is shown in the upper right panel of Fig. <ref>, which suggests that the X-ray luminosity of RLQs is correlated with the UV-optical luminosity. For the L_X - L_radio plane of Type I RLQs, illustrated in the lower right panel of Fig. <ref>, whether the X-ray luminosity of RLQs is indirectly or directly related to the radio luminosity will be discussed in Sec. <ref>.
The L_X - L_uv and L_X - L_radio planes of blazars (FSRLQs) are illustrated in the upper and lower panels of Fig. <ref>, which similarly imply that the X-ray luminosity of RLQs is related to the UV-optical luminosity. On the other hand, we compare the correlation between the X-ray and UV-optical luminosity of Type I quasars with UV-optical power-law index Γ _UV≤ 1.6 and Γ _UV > 1.6, which is shown in Fig. <ref>. We will further discuss it in Sec. <ref>.
§.§ Model constraints from Type I quasars
We apply various parameterization methods to test the multi-band luminosity correlation of quasars, which involve different physical mechanisms. The most common parametric equation is <cit.>
Model I: logL_X = α + γ _uvlogL_uv + γ _radio'logL_radio ,
The above equation is equivalent to the relation L_X∝ L_uv^γ _uvL_radio^γ _radio', which assumes that the X-ray luminosity is related to both the UV-optical and the radio luminosity. Using the formula L = 4πD_L^2F in (<ref>), we get
logF_X = Φ (F_UV,F_radio,D_L) = α + γ _uvlogF_UV + γ _radio'logF_radio + (γ _uv + γ _radio' - 1)log (4πD_L^2) ,
where F_X, F_UV, and F_radio are measured at (rest-frame) 2 keV, 2500 Å, and 5 GHz, respectively, and D_L is the luminosity distance, which can be obtained from the integral form of the D_L-z relation. This equation can be effectively used to test the X-ray luminosity correlations of RLQs and RQQs.
The second model assumes that the X-ray luminosity is only correlated with the UV-optical luminosity, and its parametric form is <cit.>
II: logL_X = α + γ _uvlogL_uv ,
We can also consider another model,
III: logL_X = α + γ _radio'logL_radio ,
Model II and Model III are equivalent to the relations L_X∝ L_uv^γ _uv and L_X∝ L_radio^γ _radio'. These two models assume that the X-ray luminosity is correlated only with the UV-optical or only with the radio luminosity.
In the same way, from equations (<ref>) and (<ref>), we can express the X-ray flux F_X as a function of F_UV, F_radio, and D_L, which can be used to test the X-ray luminosity relations.
We fit the three parametric models by minimizing a modified χ ^2 function (a likelihood) based on MCMC, allowing for an intrinsic dispersion δ <cit.>
- 2ln L = ∑_i = 1^N { [log(F_X)_i - Φ(F_UV,F_radio,D_L)_i]^2/s_i^2 } + ∑_i = 1^N ln (2π s_i^2) ,
where Φ (F_UV,F_radio,D_L) is given by equation (<ref>), and s_i^2 = σ _i^2(logF_X) + γ _uv^2σ _i^2(logF_UV) + δ ^2, where δ is the intrinsic dispersion, which is fitted as a free parameter and is usually much larger than the measurement errors.
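A schematic implementation of this likelihood for Model I is given below; it is our own sketch, not the authors' code, and the variable names (including logDL, the log10 luminosity distance in the same units as the fluxes) are ours.

import numpy as np

def neg2lnL(params, logFx, logFuv, logFradio, sig_logFx, sig_logFuv, logDL):
    """-2 ln L of the equation above for Model I."""
    alpha, g_uv, g_radio, delta = params
    phi = (alpha + g_uv * logFuv + g_radio * logFradio
           + (g_uv + g_radio - 1.0) * (np.log10(4.0 * np.pi) + 2.0 * logDL))
    s2 = sig_logFx**2 + (g_uv * sig_logFuv)**2 + delta**2
    return np.sum((logFx - phi)**2 / s2 + np.log(2.0 * np.pi * s2))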
The Hubble constant H_0 is degenerate with the parameter α when fitting equation (<ref>), so we fix H_0 = 70 km s^-1 Mpc^-1 <cit.>. To better test the X-ray luminosity relations and select the optimal model, we should not fix Ω _m. Therefore, we fit the three models to the Type I quasars without fixing Ω _m and seek the best model.
We adopt the maximum likelihood function (equation (<ref>)) based on MCMC to constrain the three models; the model fitting results for Type I quasars are listed in Table <ref>. Meanwhile, we fit Model I to Type I quasars with Γ _UV≤ 1.6 and Γ _UV > 1.6, as well as to objects with X-ray power-law index Γ _X≤ 1.6 and Γ _X > 1.6, where Γ _X is obtained from a fit of f_ν∝ν ^ - (Γ _X - 1) to their X-ray fluxes (0.2-0.5, 0.5-1, 1-2, 4-4.5, 4.5-12 keV for XMM-Newton and 0.5-1.2, 1.2-2, 2-7 keV for Chandra); the statistical results, which can be used to test whether there are differences, are also shown in Table <ref>.
§.§ Model analysis and comparison
We use BIC to seek an optimal model. The BIC is
BIC = - 2lnL_max + k ln N ,
where L_max is the maximum likelihood, k is the number of free parameters of the model, and N is the number of data points.
For Model II, by comparing the results in Table <ref> for different logR groups of Type I quasars, we find that the correlation between the X-ray and UV-optical luminosity increases with logR. Similarly, for Model I, the statistical results imply that the correlation between the X-ray and radio luminosity becomes stronger as the ratio of monochromatic luminosities logR increases. Meanwhile, among the fits of the different models to RQQs in Table <ref>, Model II has the smallest BIC, which indicates that the X-ray luminosity of RQQs is not directly correlated with their radio luminosity; rather, there is an indirect relation between the X-ray and radio luminosity because of the connection between the UV-optical and radio luminosity shown in Fig. <ref>.
For RLQs, the BIC for Model I is far smaller than those for Models II and III, which implies that the X-ray luminosity of RLQs is not only connected with the optical/UV luminosity but also directly related to the radio luminosity. A possible reason for the luminosity correlations of RLQs is that a fraction of the nuclear X-ray emission is directly or indirectly powered by the radio jet; the specific physical mechanism needs to be further understood.
As for RIQs, by comparing the BIC values in Table <ref> from fitting the different models to RIQs, we find that Model I has the smallest BIC, which might indicate that there is a weak correlation between the X-ray and radio luminosity of RIQs. Furthermore, in terms of the fitted BIC, there is a difference between Type I quasars with Γ _UV≤ 1.6 and Γ _UV > 1.6 for Model I, and the same holds for Γ _X≤ 1.6 and Γ _X > 1.6. The goodness of fit for Γ _UV≤ 1.6 and Γ _X > 1.6 seems to be better.
§.§ Analysis of the relation L_X∝ L_uv^γ _uvL_radio^γ _radio'
We divide the Type I RLQs sample into several redshift bins, which can be used to verify whether there is a redshift dependence of the luminosity relation. The redshift bins are Δ ((1 + z)^ - 1) = 0.05. We apply the parametric model <cit.>
logF_X = α (z) + γ _uv(z)logF_UV + γ _radio^'(z)logF_radio ,
where α (z), γ _uv(z), γ _radio'(z), and the intrinsic dispersion δ (z) are free parameters. We fit equation (<ref>) to the segmented Type I RLQs and check whether the X-ray relation depends on redshift. The fit results for γ _uv(z), γ _radio'(z), and δ (z) at different redshifts are illustrated in Fig. <ref>, which show no obvious evidence for any significant redshift evolution. The average values of the parameters are ⟨γ _uv⟩ = 0.47 ± 0.1 and ⟨γ _radio'⟩ = 0.27 ± 0.056.
§ A MEASURE OF LUMINOSITY DISTANCE FOR TYPE I RLQS
Meanwhile, we measure the luminosity distances of the Type I RLQs. From Model I, equation (<ref>) gives the distance modulus as
DM = 5[logF_X - γ _uvlogF_UV - γ _radio'logF_radio - α ']/[2(γ _uv + γ _radio' - 1)] ,
where α ' = α + (γ _uv + γ _radio' - 1)log (4π ).
The formula for the error is
σ _DM = DM√((σ _f/f)^2 + (σ _γ _uv/γ)^2 + (σ _γ _radio'/γ)^2) ,
where f = logF_X - γ _uvlogF_UV - γ _radio'logF_radio - α ', γ = γ _uv + γ _radio' - 1, and σ _f^2 = σ _i^2(logF_X) + γ _uv^2σ _i^2(logF_UV) + σ _α '^2. From equation (<ref>), the uncertainties of the slopes γ _uv and γ _radio' obviously influence the error of the distance modulus for Type I RLQs.
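The two formulas above translate directly into a per-object computation; the sketch below is a minimal illustration with our own variable names, assuming the fluxes are supplied in log form and the slope uncertainties come from the fit.

import numpy as np

def distance_modulus(logFx, logFuv, logFradio, alpha_p, g_uv, g_radio,
                     sig_f, sig_guv, sig_gradio):
    """Distance modulus DM = 5 f / (2 γ) and its uncertainty, with
    f = logF_X - γ_uv logF_UV - γ'_radio logF_radio - α' and γ = γ_uv + γ'_radio - 1."""
    f = logFx - g_uv * logFuv - g_radio * logFradio - alpha_p
    gam = g_uv + g_radio - 1.0
    dm = 5.0 * f / (2.0 * gam)
    sig_dm = np.abs(dm) * np.sqrt((sig_f / f)**2 + (sig_guv / gam)**2 + (sig_gradio / gam)**2)
    return dm, sig_dm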
Fig. <ref> shows the distance moduli of Type I quasars with Γ _UV≤ 1.6 and Γ _UV > 1.6 from a fit of Model I assuming a ΛCDM cosmology, together with their averages in small redshift bins. Meanwhile, the properties of 710 Type I quasars and their distance moduli are listed in Table <ref>.
§ THE RECONSTRUCTION OF THE DARK ENERGY EQUATION OF STATE w(z)
Although the dark energy model can effectively explain the accelerating expansion of the universe and the cosmic microwave background (CMB) anisotropies <cit.>, the origin and properties of the dark energy density and pressure are still unclear.
The study of dark energy includes two kinds of approaches. One is to try to explain the physical origin of its density and pressure by constraining physical dark energy models <cit.>. Understanding the physical nature of dark energy is important for our universe, for example whether the dark energy is composed of fermion pairs in a vacuum, boson pairs, or a Higgs field; the order of magnitude of the strength of dark energy is far smaller than that required for the elementary particles when they were created in the very early Universe. The other approach is to investigate whether or not the dark energy density evolves with time, which can be checked by reconstructing the dark energy equation of state w(z) <cit.> and is independent of physical models. High-redshift observational data can better address these problems.
The reconstruction methods for the dark energy equation of state can be classified into parametric and non-parametric methods <cit.>. We apply Type I RLQs and SNe Ia to reconstruct w(z) with a parametric method, assuming the X-ray luminosity relation of Equation (<ref>), which can be used to test the properties of dark energy.
The SN Ia Pantheon sample is a combination of data from the Sloan Digital Sky Survey (SDSS), Pan-STARRS1 (PS1), SNLS, and various low-z and Hubble Space Telescope samples. There are 335 SNe Ia provided by SDSS <cit.>, and PS1 presented 279 SNe Ia <cit.>. The rest of the Pantheon sample comes from the CfA1-4, CSP, and Hubble Space Telescope (HST) SN surveys <cit.>. This joint sample of 1048 SNe Ia is called the Pantheon sample.
The integral formula of D_L - z relation in near flat space is given by
[ D_L = 1 + z/H_0∫_0^z dz'[Ω _m(1 + z')^3; 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt 1pt + Ω _R(1 + z')^4 + Ω _DE^(0) e ^∫_0^z'1 + w(z^”)/1 + z^”dz^”]^ - 1/2 ]
where Ω _R is radiation density. Ω _DE^(0) is the present dark energy density and satisfies Ω _DE^(0) = 1 - Ω _m when ignoring Ω _R, w(z) is dark energy equation of state.
We choose the w_0w_aCDM model, whose parametric form is
w(z) = w_0 + w_az/(1 + z) .
The dark energy density can then be written as
Ω _DE(z) = Ω _DE^(0)(1 + z)^3(1 + w_0 + w_a)exp [ - 3w_az/(1 + z)] .
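The flat w_0w_aCDM luminosity distance implied by the two equations above can be evaluated numerically as in the following sketch (our own illustration; the speed of light is included so that D_L comes out in Mpc, and the radiation term is neglected).

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458   # speed of light in km/s

def lum_dist_w0wa(z, Om, w0, wa, H0=70.0):
    """Luminosity distance (Mpc) in a flat w0waCDM model with Ω_DE^(0) = 1 - Ω_m."""
    def Ode(zp):
        return (1.0 - Om) * (1.0 + zp)**(3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * zp / (1.0 + zp))
    def inv_E(zp):
        return 1.0 / np.sqrt(Om * (1.0 + zp)**3 + Ode(zp))
    integral, _ = quad(inv_E, 0.0, z)
    return (1.0 + z) * C_KM_S / H0 * integral

# example: theoretical distance modulus μ_th = 5 log10(D_L / 10 pc) in the ΛCDM limit
dl = lum_dist_w0wa(1.0, Om=0.3, w0=-1.0, wa=0.0)
print(5.0 * np.log10(dl * 1.0e6 / 10.0))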
We fit the w_0w_aCDM model parameters to the Type I RLQs and SNe Ia by minimizing χ _Total^2, where
χ _Total^2 = - 2lnL^RLQs + χ _SN^2 ,
- 2lnL^RLQs is given by equation (<ref>), and χ _SN^2 can be expressed as
χ _SN^2 = Δμ ^T C_μ _ob^ - 1Δμ ,
where Δμ = μ - μ _th and C_μ is the covariance matrix of the distance modulus μ.
Another useful function is χ' _Total^2, which satisfies
χ' _Total^2 = χ _RLQs^2 + χ _SN^2 ,
with χ _RLQs^2 = - 2lnL^RLQs - ∑_i = 1^N ln (2π s_i^2).
We adopt equation (<ref>) to constrain the model parameters; the fit results are listed in Table <ref>. The w_0w_aCDM model has a better goodness of fit than ΛCDM, with Δχ _Total^2 improved by -3.2, implying that the ΛCDM model is in tension with the Type I RLQs at ∼ 1.5σ, consistent with the results from distance measurements using the Baldwin effect of quasars <cit.>. Meanwhile, Fig. <ref> shows the 68% and 95% contours for w_0 and w_a from a fit of the X-ray luminosity relation L_X∝ L_uv^γ _uvL_radio^γ _radio' and the w_0w_aCDM model to a combination of SNe Ia and Type I RLQs.
§ SUMMARY
The investigation of the X-ray luminosity correlations of RQQs and RLQs can improve our understanding of their physical mechanisms. We obtain a new sample of 1192 Type I quasars with UV-optical, radio, and X-ray coverage by combining <cit.> with other matching data of SDSS-DR16 with FIRST, XMM–Newton, and the Chandra Source Catalog, and a sample of 407 flat-spectrum radio-loud quasars (FSRLQs) of blazars from the Roma-BZCAT. Firstly, we apply three parametric methods to test the correlation between the X-ray, UV-optical, and radio luminosity. The statistical results indicate that the X-ray luminosity of RQQs is correlated with their UV-optical luminosity, which also implies that the X-ray luminosity of RQQs is only indirectly correlated with the radio luminosity, through the connection between the UV-optical and radio luminosity.
Meanwhile, the data suggest that the correlation between the X-ray and UV-optical luminosity increases with the ratio of monochromatic luminosities log R; similarly, the correlation between the X-ray and radio luminosity also becomes stronger as log R increases. For RLQs, the results imply that the X-ray luminosity is not only connected with the optical/UV luminosity but also directly related to the radio luminosity. A possible reason for the luminosity correlations of RLQs is that a fraction of the nuclear X-ray emission is directly or indirectly powered by the radio jet. In addition, we compare the results for Type I quasars with Γ _UV≤ 1.6 and Γ _UV > 1.6, as well as Γ _X≤ 1.6 and Γ _X > 1.6, using a fit of the X-ray luminosity relation L_X∝ L_uv^γ _uvL_radio^γ _radio'; the goodness of fit for Γ _UV≤ 1.6 and Γ _X > 1.6 seems to be better.
Secondly, we divide the Type I RLQs sample into discrete redshift bins and combine this with a dedicated model, which can be applied to test whether there is a redshift evolution of the X-ray luminosity relation L_X∝ L_uv^γ _uvL_radio^γ _radio'. The fit results show that the model parameters are consistent with constants, which indicates that there is no obvious redshift dependence of L_X∝ L_uv^γ _uvL_radio^γ _radio'.
Finally, we obtain the luminosity distances of 710 Type I RLQs from a fit of the X-ray luminosity relation L_X∝ L_uv^γ _uvL_radio^γ _radio' assuming a ΛCDM cosmology, and use a joint sample of SNe Ia and Type I RLQs to reconstruct the dark energy equation of state w(z) with a parametric method and test the nature of dark energy. The data suggest that the w_0w_aCDM model is preferred over the cosmological-constant ΛCDM model at ∼ 1.5σ.
In the future, we will cross-correlate the Dark Energy Spectroscopic Instrument (DESI) quasar catalogs with the XMM-Newton and Chandra archives and with radio surveys. We expect to obtain more quasars with multi-wavelength coverage, including high-redshift (z>3) objects, which can be used to investigate their multi-band luminosity correlations. Meanwhile, high-redshift observational data can better test the properties of dark energy, which will determine the future of the universe: whether the universe keeps expanding or shifts from expansion to contraction. It will similarly determine the future of humanity.
|
http://arxiv.org/abs/2307.04449v1 | 20230710095935 | Graph Convolutional Networks for Simulating Multi-phase Flow and Transport in Porous Media | [
"Jiamin Jiang",
"Bo Guo"
] | physics.comp-ph | [
"physics.comp-ph",
"cs.LG"
] |
Corresponding author: [email protected]
Chevron Technical Center
Hydrology and Atmospheric Sciences, The University of Arizona
Numerical simulation of multi-phase fluid dynamics in porous media is critical for many subsurface applications. Data-driven surrogate modeling provides computationally inexpensive alternatives to high-fidelity numerical simulators. While the commonly used convolutional neural networks (CNNs) are powerful in approximating partial differential equation solutions, it remains challenging for CNNs to handle irregular and unstructured simulation meshes. However, subsurface simulation models often involve unstructured meshes with complex mesh geometries, which limits the application of CNNs. To address this challenge, here we construct surrogate models based on Graph Convolutional Networks (GCNs) to approximate the spatial-temporal solutions of multi-phase flow and transport processes. We propose a new GCN architecture suited to the hyperbolic character of the coupled PDE system, to better capture the saturation dynamics. Results of 2D heterogeneous test cases show that our surrogates predict the evolutions of the pressure and saturation states with high accuracy, and the predicted rollouts remain stable for multiple timesteps. Moreover, the GCN-based models generalize well to irregular domain geometries and unstructured meshes that are unseen in the training dataset.
§ INTRODUCTION
Dynamics of multiple fluid phases in porous media are critical for many applications in Earth's subsurface, including oil and gas recovery, groundwater remediation, geological CO_2 sequestration, and subsurface hydrogen storage. Numerical simulations play an increasingly important role in understanding, quantifying, and controlling these multi-phase flow processes. Predicting the evolution of subsurface fluid dynamics requires solving partial differential equations (PDEs) governing the multi-phase flow and transport processes. These PDEs are often highly nonlinear and exhibit an intricate mixture of elliptic and hyperbolic characteristics, posing challenges to numerical methods. Moreover, significant uncertainties are present in the model parameters due to data scarcity in the subsurface. As a result, many model simulation runs (e.g., thousands) are required to quantify the uncertainties propagated from the parameters to the predictions. Therefore, computationally efficient simulation techniques are critical for subsurface applications.
The deep learning revolution (LeCun et al. 2015; Krizhevsky et al. 2017) has dramatically changed scientific fields such as computer vision and natural language processing. More recently, deep learning algorithms have been extended towards constructing data-driven surrogate models to approximate the solutions of PDEs, particularly in the context of fluid dynamics (Guo et al. 2016; Kutz 2017; Long et al. 2018; Bar-Sinai et al. 2019; Santos et al. 2020; Li et al. 2020; Lu et al. 2021; Vinuesa and Brunton 2022). Compared to high-fidelity numerical simulators, a learned simulator can provide much faster predictions, especially for high-dimensional nonlinear systems.
A number of studies have applied image-based approaches and snapshots of simulation data over a spatially discretized input domain for surrogate modeling of subsurface flow and transport problems. Most of these works leverage convolutional neural networks (CNNs) to learn the nonlinear mappings from the input properties (e.g., permeability) to the output states (pressure and saturation) on regular Cartesian meshes (Mo et al. 2019; Tang et al. 2020; Wang and Lin 2020; Wen et al. 2021; Zhang et al. 2021; Jiang et al. 2021; Yan et al. 2022; Maldonado-Cruz and Pyrcz 2022). While CNNs are powerful in approximating PDE solutions, they are restricted to a specific discretization of the physical domain in which they are trained. Due to the inherent limitations of standard convolution operations, it remains challenging for CNNs to handle irregular and unstructured simulation meshes. However, driven by the need to accurately characterize complex geological features and heterogeneity, subsurface simulation models often involve corner-point and unstructured meshes with skewed and degenerate mesh geometries. These complexities limit the application of CNN-based models for subsurface problems. Note that Maucec and Jalali (2022) recently applied the interaction networks (Battaglia et al. 2016) for surrogate modeling of a two-phase incompressible flow problem, but their surrogate leads to large prediction errors of the pressure field.
Graph Neural Networks (GNNs) have successfully been employed to learn the dynamic evolutions of PDEs, under mesh-based simulation frameworks (Pfaff et al. 2020; Belbute-Peres et al. 2020; Iakovlev et al. 2020; Chen et al. 2021; Brandstetter et al. 2022; Pilva and Zareei 2022). In contrast to CNNs, GNNs naturally enable operating on unstructured meshes with complex domain boundaries. A simulation mesh can be viewed as a graph composed of nodes, and a set of edges representing the connectivity between the nodes. The key idea of GNNs is to aggregate and propagate the local information of system states from their neighborhoods into node representations, through multiple message passing layers (Kipf and Welling 2016; Gilmer et al. 2017).
In the present work, we apply Graph Convolutional Networks (GCNs) to learn surrogate models for predicting the spatial-temporal solutions of multi-phase flow and transport in porous media. We separately design two GCN architectures that are suited to the elliptic and hyperbolic characteristics of the coupled PDE system, to better capture the pressure and saturation dynamics. The GCN-based models are trained by supervising on the per-node output states. We evaluate the prediction performance of the trained surrogates using 2D heterogeneous cases. The results show that our surrogates predict the dynamic evolutions with high accuracy, and the predicted rollouts remain stable for multiple timesteps. Moreover, our GCN models generalize well to irregular domain geometries and unstructured meshes that are not present in the training dataset.
§ MATHEMATICAL MODEL AND DISCRETIZATION
§.§ Immiscible multi-phase flow in porous media
We consider compressible and immiscible flow and transport in porous media with n_p number of phases. The mass-conservation equation for phase l (l ∈{ 1,...,n_p }) can be written as
∂/∂ t ( ϕρ_l s_l ) + ∇· (ρ_lv_l ) - ρ_l q_l = 0,
where t is time. ϕ is rock porosity. q_l is the volumetric injection or pumping rate of wells (source or sink term). ρ_l is phase density. s_l is phase saturation, which is constrained by
∑_l s_l = 1,
The Darcy phase velocity, v_l, is expressed as
v_l = -k λ_l ( ∇ p_l - ρ_l g ∇ z ).
where k is rock permeability. p_l is phase pressure. g is gravitational acceleration and z is depth (assuming positive downward). λ_l = k_rl/μ_l is phase mobility, where k_rl and μ_l are relative permeability and fluid viscosity, respectively.
For oil-water flow that only involves two fluid phases, Eq. (<ref>) can be simplified to
∂/∂ t ( ϕρ_o s_o ) + ∇· ( ρ_ov_o ) - ρ_o q_o = 0,
∂/∂ t ( ϕρ_w s_w ) + ∇· ( ρ_wv_w ) - ρ_w q_w = 0,
with the saturation constraint as s_o + s_w - 1 = 0.
§.§ Fully-implicit discretization
To solve the PDE system from Eq. (<ref>), we apply a finite volume method that discretizes the simulation domain into a mesh consisting of n_b cells and the fully-implicit scheme for the time discretization
| Ω_i |/Δ t ( ( ϕ_i ρ_l,i s_l,i )^n+1 - ( ϕ_i ρ_l,i s_l,i )^n ) - ∑_j∈ adj(i) ( ρ_l,ijυ_l,ij )^n+1 - Q_l,i^n+1 = 0,
where i ∈{ 1,...,n_b } is cell index, | Ω_i | is cell volume, (ij) corresponds to the interface between cells i and j. Superscripts represent timesteps, and Δ t is timestep size.
The discrete phase flux based on the two-point flux approximation can be written as
υ_l,ij = T_ijλ_l,ijΔΦ_l,ij,
where ΔΦ_l,ij = Δ p_l,ij - g_l,ij is the phase-potential difference with the discrete weights g_l,ij = ρ_l,ij g Δ z_ij. The phase mobility λ_l,ij is evaluated using the Phase-Potential Upwinding (PPU) scheme (Sammon 1988; Brenier and Jaffré 1991). In PPU, the mobility of each phase is treated separately according to the sign of the phase-potential difference. The upwinding criterion is given as
λ_l,ij = {[ λ_l(s_i), ΔΦ_l,ij≥ 0; λ_l(s_j), otherwise ].
where s_i = { s_l,i}_l ∈{ 1,...,n_p } denotes the saturations of cell i.
The total face transmissibility T_ij combines two half-transmissibilities in a half of the harmonic average
T_ij = T_i T_j/T_i + T_j , T_i = k_i A_ij/d_i.
where A_ij denotes the interface area, k_i is the permeability of cell i, and d_i is the length from the cell centroid to the interface.
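A minimal sketch of the two-point flux approximation and the PPU flux evaluation described above is given below; it is illustrative only, with plain scalar inputs standing in for the per-face quantities of an actual simulator.

def face_transmissibility(k_i, k_j, A_ij, d_i, d_j):
    """TPFA: half-transmissibilities T_i = k_i A_ij / d_i combined in half the harmonic average."""
    T_i = k_i * A_ij / d_i
    T_j = k_j * A_ij / d_j
    return T_i * T_j / (T_i + T_j)

def phase_flux(T_ij, dp_ij, g_ij, mob_i, mob_j):
    """Discrete phase flux v_l,ij = T_ij λ_l,ij ΔΦ_l,ij with phase-potential upwinding:
    the mobility comes from cell i when ΔΦ_l,ij ≥ 0 and from cell j otherwise."""
    dphi = dp_ij - g_ij                       # ΔΦ_l,ij = Δp_l,ij - g_l,ij
    mob = mob_i if dphi >= 0.0 else mob_j     # PPU upwinding criterion
    return T_ij * mob * dphi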
In the finite volume formulation, the discrete source (or sink) term for a mesh cell containing a well (referred to as well cell) is written as
Q_l,i = WI_i ( ρ_lλ_l )_i ( p_l - p^W )_i,
which represents the well flux for phase l in cell i. p_l,i is well-cell pressure, p^W_i is wellbore pressure, and WI_i is well index.
The discretized nonlinear system, written in a residual format, has the following form
ℛ(u^n+1) = 0
where u represents the state variables (pressure and saturation) of mesh cells. The nonlinear system is often solved using the Newton method, which performs multiple iterations until convergence. For each timestep, with the solution u^n, and a chosen timestep size Δ t, the new state u^n+1 is obtained.
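A minimal dense-linear-algebra sketch of this Newton loop is given below; the residual and jacobian callables are assumptions standing in for the simulator's assembly routines, and a production code would use sparse linear solvers.

```python
import numpy as np

def newton_advance(residual, jacobian, u_prev, dt, tol=1e-6, max_iter=20):
    """Advance the fully-implicit system R(u^{n+1}) = 0 by one timestep."""
    u = u_prev.copy()                      # initial guess: previous state
    for _ in range(max_iter):
        r = residual(u, u_prev, dt)
        if np.linalg.norm(r) < tol:        # converged
            break
        du = np.linalg.solve(jacobian(u, u_prev, dt), -r)
        u = u + du                         # Newton update
    return u
```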
§ SURROGATE MODELS
A simulator ℍ maps the current state of mesh cells to the next timestep state. We denote a simulated rollout trajectory as ( u^0, u^1, ..., u^n_t ), which is computed iteratively by u^n+1 = ℍ ( u^n ) for every timestep.
The goal of our surrogate learning task is to replace the computationally expensive high-fidelity simulator with surrogate simulators that predict the next state
u^n+1 ≈ û^n+1 = ℕ ( u^n; Θ ),
where ℕ is a next-step prediction model based on GNNs, whose parameters Θ are optimized against a training objective. û^n+1 indicates the predicted state from the surrogate model. Given the initial state u^0, ℕ ( · ; Θ ) can rapidly produce a rollout trajectory of states ( u^0, û^1, ..., û^n_t ), where n_t is the number of timesteps.
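The autoregressive rollout can be sketched as below, assuming the trained surrogate is a callable that maps per-node states to next-step states (graph connectivity arguments are omitted for brevity).

```python
import torch

@torch.no_grad()
def rollout(model, u0, n_steps):
    """Produce (u^0, û^1, ..., û^{n_t}) by repeatedly applying the surrogate."""
    traj = [u0]
    u = u0
    for _ in range(n_steps):
        u = model(u)          # next-step prediction fed back as input
        traj.append(u)
    return traj
```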
The coupled multi-phase system (<ref>) has an intricate mixture of elliptic and hyperbolic characteristics. It is beneficial to employ specialized GNN architectures suited to the specific characteristics of the coupled system. Therefore, in the present work, we separately design and train two models that compute the solutions of pressure and saturation in a sequential manner as
p^n+1 = ℕ_p ( p^n, s^n; Θ_p ),
s^n+1 = ℕ_s ( p^n+1, s^n; Θ_s ),
where ℕ_p and ℕ_s represent respectively the pressure and saturation models. At each time step, the saturation model takes the input from the pressure model.
The above process can be written using a compact operator as
[ p^n+1, s^n+1 ] = ℕ_s ∘ℕ_p ( p^n, s^n; Θ_p, Θ_s ).
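In code form, one sequential update step could look like the following sketch, where the two trained models are assumed to be callables.

```python
def sequential_step(pressure_model, saturation_model, p_n, s_n):
    # The saturation model consumes the freshly predicted pressure at the
    # new time level, as in the compact operator above.
    p_next = pressure_model(p_n, s_n)
    s_next = saturation_model(p_next, s_n)
    return p_next, s_next
```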
§ GRAPH NEURAL NETWORKS
We leverage the power of GNNs to construct data-driven surrogate simulators to approximate the PDE solutions (Equation (<ref>)). GNNs provide a flexible and efficient way to operate over data that are structured as graphs, naturally fitting mesh-based simulations (Pfaff et al. 2020; Pilva and Zareei 2022).
Let G = (X, E) be the graph representation (Fig. <ref>) of a simulation mesh, with nodes X (blue dots), where x_i denotes the cell centroid, and undirected edges E (red line segments), where e_ij represents the edge connecting the neighboring cells at x_i and x_j. 𝒩(i) is the set of adjacent nodes around node i. We further denote the node and edge features by u_i and e_ij respectively.
§.§ Message Passing Framework
A GNN-based model consists of a stack of neural network layers, each aiming to aggregate local neighborhood information, i.e., features of neighbors, around each node and then passes this aggregated information on to the next layer (Kipf and Welling 2016). The fundamental operation in GNNs is the message-passing procedure, which updates the feature vector of each node based on the features of its neighboring nodes. The message-passing rule is generally formulated as (Gilmer et al. 2017)
u'_i = γ ( u_i, ⊕_j∈𝒩(i) ψ ( u_i, u_j, e_ij ) ),
where ⊕ denotes a differentiable, permutation-invariant function (e.g., summation, mean, or maximum), and γ and ψ are differentiable neural networks such as MultiLayer Perceptrons (MLPs). Each subsequent message-passing layer contains a separate set of network parameters, and operates on the output u'_i of the previous layer.
In our work, we consider weighted graphs and employ the GCN operator, GraphConv, from Morris et al. (2019). At layer m, the new node features are updated as
u^(m+1)_i = σ ( W_1 u^(m)_i + W_2 ∑_j∈𝒩(i) w_ij·u^(m)_j ),
where W_1 and W_2 are parameter matrices, w_ij denotes the edge weight, and σ denotes a non-linear activation function, e.g., ReLU or Tanh.
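A minimal PyTorch re-implementation of this weighted GraphConv update might look as follows; it is a sketch rather than the authors' exact code (libraries such as PyTorch Geometric provide an equivalent operator).

```python
import torch
import torch.nn as nn

class WeightedGraphConv(nn.Module):
    """u'_i = σ( W_1 u_i + W_2 Σ_{j∈N(i)} w_ij u_j ), with edge weights w_ij."""
    def __init__(self, in_dim, out_dim, act=nn.ReLU()):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)    # W_1
        self.lin_neigh = nn.Linear(in_dim, out_dim)   # W_2
        self.act = act

    def forward(self, x, edge_index, edge_weight):
        # edge_index: [2, n_edges] (source, target); edge_weight: [n_edges]
        src, dst = edge_index
        msg = x[src] * edge_weight.unsqueeze(-1)           # w_ij * u_j
        agg = torch.zeros_like(x).index_add_(0, dst, msg)  # sum over N(i)
        return self.act(self.lin_self(x) + self.lin_neigh(agg))
```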
We additionally utilize the edge convolution operator, EdgeConv, from Wang et al. (2019). EdgeConv exploits local geometric structures by constructing a local graph and applying convolution operations on the edges connecting neighboring pairs of nodes. The layer output can be computed by
u^(m+1)_i = max_j∈𝒩(i) Ψ ( u^(m)_i, u^(m)_j - u^(m)_i ),
where Ψ denotes an MLP. As can be seen, the max aggregation operation is used on the edge features associated with all the edges emanating from a node.
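The EdgeConv update can be sketched analogously, continuing the imports above; the scatter-based max aggregation below is illustrative and assumes every node has at least one incoming edge (which holds for mesh graphs).

```python
class EdgeConvLayer(nn.Module):
    """u'_i = max_{j∈N(i)} Ψ( u_i, u_j - u_i ), with Ψ an MLP."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.Tanh())

    def forward(self, x, edge_index):
        src, dst = edge_index
        # Edge features Ψ(u_i, u_j - u_i) for every edge (j -> i), i = dst.
        edge_feat = self.mlp(torch.cat([x[dst], x[src] - x[dst]], dim=-1))
        out = torch.full((x.size(0), edge_feat.size(-1)), float("-inf"),
                         device=x.device)
        idx = dst.unsqueeze(-1).expand_as(edge_feat)
        # Max-aggregate edge features onto their destination nodes.
        return out.scatter_reduce(0, idx, edge_feat, reduce="amax")
```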
§ MODEL ARCHITECTURES
In this section, we present the detailed surrogate models which can predict the next-step dynamic states of the coupled PDE system. Our GCN models have an Encoder-Processor-Decoder structure. Schematic of a general GCN model architecture is plotted in Fig. <ref>. The node features are first encoded into latent vectors of size n_H. The input features u_i^n of mesh node i for each timestep contain the dynamic variables (pressure and saturation), permeability, and pore volume. A one-hot vector indicating node type (reservoir, production, and injection nodes), and well index are also added. We assign the transmissibility T_ij of each cell interface as edge weight. Each feature is scaled individually to [0, 1] using the min-max normalization method. The Decoder extracts one output state (p^n+1 or s^n+1) from the latent node features after the final processing layer. The Encoder and Decoder are two-layer MLPs with ReLU nonlinearities except for the output layer of the Decoder, after which we do not apply any nonlinearity.
The Processor of the pressure model ℕ_p is constructed by stacking 7 identical GraphConv layers with the mean aggregation operation and ReLU nonlinearities, to obtain a sequence of updated latent features. For the ℕ_s model, we propose a combined architecture (3 EdgeConv followed by 5 GraphConv layers with max aggregation), which is found to be quite effective for capturing the hyperbolic (saturation) solution. The Tanh activation function is applied. The sizes of hidden units for ℕ_p and ℕ_s are n_Hp = 32 and n_Hs = 128, respectively.
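Putting the pieces together, an Encoder-Processor-Decoder model for the pressure network could be sketched as follows; layer counts and hidden sizes follow the text, the WeightedGraphConv layer is the sketch above, and all remaining details are illustrative rather than the authors' implementation.

```python
class PressureGCN(nn.Module):
    def __init__(self, in_dim, hidden=32, n_layers=7):
        super().__init__()
        # Two-layer MLP encoder with ReLU.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        # Processor: stacked weighted GraphConv layers.
        self.processor = nn.ModuleList(
            [WeightedGraphConv(hidden, hidden) for _ in range(n_layers)])
        # Two-layer MLP decoder; no nonlinearity after the output layer.
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 1))

    def forward(self, x, edge_index, edge_weight):
        h = self.encoder(x)
        for layer in self.processor:
            h = layer(h, edge_index, edge_weight)
        return self.decoder(h)      # one output state per node (pressure)
```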
§ TRAINING PROCEDURE
We train the GCN models using the dynamic state pairs ( u^n, u^n+1 ) from n_Y simulated rollout trajectories. We employ a mean squared error loss between the predictions û_y^n and their corresponding ground truth values u_y^n (the simulator reference). The L_2 loss function is minimized through
Θ^* = argmin_Θ 1/n_Y 1/n_t ∑_y=1^n_Y ∑_n=1^n_t ‖ û_y^n - u_y^n ‖_2^2
where n_t is the number of timesteps, and u_y^n denotes either pressure or saturation of every mesh node, at time t_n, for training sample y.
Modeling a complex time-dependent PDE system requires the model to mitigate error accumulation over long rollout trajectories (Sanchez-Gonzalez et al. 2020). Because we only train our surrogates on ground-truth one-step data, we corrupt the input saturation states with normal noise N_s ( 0, σ_s = 0.02 ). In this way, the rollouts of multiple timesteps from trained models become robust to their own noisy, previous predictions as input.
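A compact version of one training epoch with this noise injection is sketched below; the sat_idx argument (the position of the saturation channel in the node-feature tensor) and the data-loader format are assumptions, and graph connectivity arguments are again omitted for brevity.

```python
def train_epoch(model, loader, optimizer, sigma_s=0.02, sat_idx=0):
    loss_fn = nn.MSELoss()
    for u_n, u_next in loader:                 # one-step pairs (u^n, u^{n+1})
        noisy = u_n.clone()
        # Corrupt only the saturation channel with N(0, sigma_s) noise.
        noisy[..., sat_idx] += sigma_s * torch.randn_like(noisy[..., sat_idx])
        pred = model(noisy)
        loss = loss_fn(pred, u_next)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```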
§ SURROGATE MODEL EVALUATIONS
We explore the prediction performance of the surrogate models and their generalization capabilities on out-of-training domain shapes and meshes. As an example, we consider 2D reservoir models in the x-y domain containing two wells (one injector and one producer) that operate under constant bottom-hole pressure (BHP). No-flow boundary condition is specified at the reservoir boundaries. The set-up of the base model is summarized in Table <ref>. Quadratic relative permeabilities are used. Capillary pressure is neglected. Total simulation time is 100 days, with 20 timesteps.
There are 160 high-fidelity simulation runs serving as training data samples, with random well locations and rock properties. The realizations of heterogeneous permeability and porosity fields are generated using a Gaussian distribution. The surrogate models are trained on an NVIDIA Tesla V100 GPU using the Adam optimizer (Kingma and Ba 2014) with learning rate 1e-4. The training loss (MSE) curves are plotted in Fig. <ref>.
It takes around 2 and 4 hours to train the pressure and saturation models, respectively. Note that the training times can be reduced by optimizing the hyperparameters of GCNs, and the learning rate schedule of the optimizer. Moreover, the large numbers of training epochs currently used are actually not necessary to reach reasonably low prediction errors. The trained models can predict a rollout trajectory in 0.1 seconds, achieving a significant reduction of computational time compared with the high-fidelity simulator, which requires about 22 seconds for a simulation run.
§.§ Regular Cartesian mesh
We first present the predictions of three representative testing samples on a regular 60 × 60 Cartesian mesh. The rock property fields of the three cases are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>, respectively.
We only compare the solution profiles (pressure and water saturation) at the end of the simulation between the surrogate (prediction) and high-fidelity (ground truth) simulators, because the solutions at the final time of a rollout trajectory should exhibit the largest accumulated errors. The pressure and water saturation profiles of the three cases are shown in Fig. <ref>, Fig. <ref> and Fig. <ref>, respectively. As can be seen, the GCN-based surrogate models accurately capture the evolutions of the pressure and saturation states, with very low mean errors in all the three cases. It is important to note that even though our models were trained on the next-step predictions, the rollouts remain stable for multiple steps.
We can also see that the pressure model is capable of providing physically smooth pressure solutions. The saturation fields are strongly impacted by the well locations and heterogeneous rock properties. The saturation model based on our new GCN architecture incorporating EdgeConv (<ref>) can reproduce both the shapes and heterogeneous details of the discontinuous saturation fronts quite well. Note that relatively large saturation differences are mainly observed near the water fronts.
To demonstrate the improvement due to EdgeConv, here we additionally show the results from a GCN model with only 7 GraphConv layers (without EdgeConv) and the max aggregation operation. The water saturation profiles of the three cases are shown in Fig. <ref>. Although the model (GraphConv) captures the overall shapes of the saturation fields with reasonable accuracy, greater errors are evident near the water fronts. Moreover, some heterogeneous details inside the water plume are smeared out, compared with the model with the combined architecture (EdgeConv+GraphConv). The mean relative errors of the invaded region (saturation bigger than the residual value) from the two saturation models are reported in Table <ref>.
§.§ Irregular Cartesian mesh
We further evaluate the generalization ability of the trained surrogate simulators, using two test samples with irregular mesh geometries. The rock property fields of the two cases on irregular Cartesian mesh are shown in Fig. <ref> and Fig. <ref>, respectively. The solution profiles of the two cases are shown in Fig. <ref> and Fig. <ref>, respectively. We can see that the surrogates predict the state evolutions with high accuracy. There is no significant saturation error from the predictions, except within certain regions near the domain boundaries. The results demonstrate that the Graph Convolutional Networks can generalize well to unseen domain geometries, even though our models were trained using only the data samples on a regular square domain.
§.§ PEBI mesh
Furthermore, we perform testing on a perpendicular bisector (PEBI) mesh with homogeneous permeability of 700 md and porosity of 0.3. Because the neighboring node numbers are different from the previous Cartesian meshes, we add 5 training samples with different rock properties and well locations on the PEBI mesh to improve prediction accuracy, adding up to a total of 165 samples in the new training dataset. A schematic of the PEBI mesh is plotted in Fig. <ref>. The solution profiles are shown in Fig. <ref>. Again, we can observe that the solutions from the surrogates closely match the high-fidelity simulation. Our GCN models generalize well to unstructured meshes, suggesting that the networks learn a general understanding of the physical processes of the multi-phase flow and transport PDE system.
§ SUMMARY
We apply GCNs for surrogate modeling to approximate the spatial-temporal solutions of multi-phase flow and transport in porous media. We propose a new GCN architecture suited to the hyperbolic character of the coupled PDE system, to better capture the saturation dynamics. Our surrogate models provide significant speedups (roughly 220×, i.e., more than two orders of magnitude) compared to the high-fidelity simulator.
The prediction performance of the trained surrogates and their generalization capabilities on out-of-training domain shapes and meshes are evaluated using 2D heterogeneous test cases. The results show that our surrogates accurately predict the evolutions of the pressure and saturation states. Even though the models were trained on the next-step predictions, the rollouts remain stable for multiple timesteps. The saturation model based on the GCN architecture incorporating EdgeConv can reproduce both the shapes and heterogeneous details of the discontinuous saturation fronts with high accuracy. Moreover, we demonstrate that the GCN-based models generalize well to unseen domain geometries and unstructured meshes.
§ ACKNOWLEDGEMENTS
We thank Sidian Chen at The University of Arizona for constructive discussions.
§ REFERENCES
Brenier, Y. and Jaffré, J., 1991. Upstream differencing for multiphase flow in reservoir simulation. SIAM journal on numerical analysis, 28(3), pp.685-696.
Battaglia, P., Pascanu, R., Lai, M. and Jimenez Rezende, D., 2016. Interaction networks for learning about objects, relations and physics. Advances in neural information processing systems, 29.
Bar-Sinai, Y., Hoyer, S., Hickey, J. and Brenner, M.P., 2019. Learning data-driven discretizations for partial differential equations. Proceedings of the National Academy of Sciences, 116(31), pp.15344-15349.
Belbute-Peres, F.D.A., Economon, T. and Kolter, Z., 2020, November. Combining differentiable PDE solvers and graph neural networks for fluid flow prediction. In international conference on machine learning (pp. 2402-2411). PMLR.
Brandstetter, J., Worrall, D. and Welling, M., 2022. Message passing neural PDE solvers. arXiv preprint arXiv:2202.03376.
Chen, J., Hachem, E. and Viquerat, J., 2021. Graph neural networks for laminar flow prediction around random two-dimensional shapes. Physics of Fluids, 33(12), p.123607.
Guo, X., Li, W. and Iorio, F., 2016, August. Convolutional neural networks for steady flow approximation. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 481-490).
Gilmer, J., Schoenholz, S.S., Riley, P.F., Vinyals, O. and Dahl, G.E., 2017, July. Neural message passing for quantum chemistry. In International conference on machine learning (pp. 1263-1272). PMLR.
Iakovlev, V., Heinonen, M. and Lähdesmäki, H., 2020. Learning continuous-time pdes from sparse data with graph neural networks. arXiv preprint arXiv:2006.08956.
Jiang, Z., Tahmasebi, P. and Mao, Z., 2021. Deep residual U-net convolution neural networks with autoregressive strategy for fluid flow predictions in large-scale geosystems. Advances in Water Resources, 150, p.103878.
Kingma, D.P. and Ba, J., 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Kipf, T.N. and Welling, M., 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.
Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2017. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp.84-90.
Kutz, J.N., 2017. Deep learning in fluid dynamics. Journal of Fluid Mechanics, 814, pp.1-4.
LeCun, Y., Bengio, Y. and Hinton, G., 2015. Deep learning. nature, 521(7553), pp.436-444.
Long, Z., Lu, Y., Ma, X. and Dong, B., 2018, July. Pde-net: Learning pdes from data. In International conference on machine learning (pp. 3208-3216). PMLR.
Morris, C., Ritzert, M., Fey, M., Hamilton, W.L., Lenssen, J.E., Rattan, G. and Grohe, M., 2019, July. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 4602-4609).
Mo, S., Zhu, Y., Zabaras, N., Shi, X. and Wu, J., 2019. Deep convolutional encoder‐decoder networks for uncertainty quantification of dynamic multiphase flow in heterogeneous media. Water Resources Research, 55(1), pp.703-728.
Maldonado-Cruz, E. and Pyrcz, M.J., 2022. Fast evaluation of pressure and saturation predictions with a deep learning surrogate flow model. Journal of Petroleum Science and Engineering, 212, p.110244.
Maucec, M. and Jalali, R., 2022. GeoDIN-Geoscience-Based Deep Interaction Networks for Predicting Flow Dynamics in Reservoir Simulation Models. SPE Journal, 27(03), pp.1671-1689.
Peaceman, D.W., 1983. Interpretation of well-block pressures in numerical reservoir simulation with nonsquare grid blocks and anisotropic permeability. Society of Petroleum Engineers Journal, 23(03), pp.531-543.
Pfaff, T., Fortunato, M., Sanchez-Gonzalez, A. and Battaglia, P.W., 2020. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409.
Pilva, P. and Zareei, A., 2022. Learning time-dependent PDE solver using Message Passing Graph Neural Networks. arXiv preprint arXiv:2204.07651.
Sammon, P.H., 1988. An analysis of upstream differencing. SPE reservoir engineering, 3(03), pp.1-053.
Sanchez-Gonzalez, A., Godwin, J., Pfaff, T., Ying, R., Leskovec, J. and Battaglia, P., 2020, November. Learning to simulate complex physics with graph networks. In International conference on machine learning (pp. 8459-8468). PMLR.
Santos, J.E., Xu, D., Jo, H., Landry, C.J., Prodanović, M. and Pyrcz, M.J., 2020. PoreFlow-Net: A 3D convolutional neural network to predict fluid flow through porous media. Advances in Water Resources, 138, p.103539.
Tang, M., Liu, Y. and Durlofsky, L.J., 2020. A deep-learning-based surrogate model for data assimilation in dynamic subsurface flow problems. Journal of Computational Physics, 413, p.109456.
Vinuesa, R. and Brunton, S.L., 2022. Enhancing computational fluid dynamics with machine learning. Nature Computational Science, 2(6), pp.358-366.
Wang, Y., Sun, Y., Liu, Z., Sarma, S.E., Bronstein, M.M. and Solomon, J.M., 2019. Dynamic graph cnn for learning on point clouds. Acm Transactions On Graphics (tog), 38(5), pp.1-12.
Wang, Y. and Lin, G., 2020. Efficient deep learning techniques for multiphase flow simulation in heterogeneous porous media. Journal of Computational Physics, 401, p.108968.
Wen, G., Hay, C. and Benson, S.M., 2021. CCSNet: a deep learning modeling suite for CO2 storage. Advances in Water Resources, 155, p.104009.
Yan, B., Harp, D.R., Chen, B., Hoteit, H. and Pawar, R.J., 2022. A gradient-based deep neural network model for simulating multiphase flow in porous media. Journal of Computational Physics, 463, p.111277.
Zhang, K., Wang, Y., Li, G., Ma, X., Cui, S., Luo, Q., Wang, J., Yang, Y. and Yao, J., 2021. Prediction of field saturations using a fully convolutional network surrogate. SPE Journal, 26(04), pp.1824-1836.
|
http://arxiv.org/abs/2307.05945v1 | 20230712062251 | YOGA: Deep Object Detection in the Wild with Lightweight Feature Learning and Multiscale Attention | [
"Raja Sunkara",
"Tie Luo"
] | cs.CV | [
"cs.CV",
"cs.LG"
] |
[email protected]
[email protected]
Department of Computer Science
Missouri University of Science and Technology, Rolla, MO 65409, USA
We introduce YOGA, a deep learning based yet lightweight object detection model that can operate on low-end edge devices while still achieving competitive accuracy. The YOGA architecture consists of a two-phase feature learning pipeline with a cheap linear transformation, which learns feature maps using only half of the convolution filters required by conventional convolutional neural networks. In addition, it performs multi-scale feature fusion in its neck using an attention mechanism instead of the naive concatenation used by conventional detectors. YOGA is a flexible model that can be easily scaled up or down by several orders of magnitude to fit a broad range of hardware constraints. We evaluate YOGA on the COCO-val and COCO-testdev datasets against over 10 other state-of-the-art object detectors. The results show that YOGA strikes the best trade-off between model size and accuracy (up to 22% increase of AP and 23-34% reduction of parameters and FLOPs), making it an ideal choice for deployment in the wild on low-end edge devices. This is further affirmed by our hardware implementation and evaluation on NVIDIA Jetson Nano.
§ INTRODUCTION
Object detection empowered by deep learning has made booming success in diverse applications such as autonomous driving, medical imaging, remote sensing, and face detection. Research in this area has been thriving and the performance competition is fierce. Well-known detectors include the R-CNN series
<cit.>, YOLO series
<cit.>, SSD <cit.>, RetinaNet <cit.>, EfficientDet <cit.>, YOLO-Anti <cit.>, UDNet <cit.>, etc.
Although the fierce competition has led to better performance in general, it has also resulted in deeper neural network architectures and more complex model designs, implying a need for more training data, more tuning parameters, and longer training and inference time. This would not be suitable for resource-constrained environments such as Internet of Things (IoT) devices at the edge.
Researchers have attempted pruning and quantization methods toward this goal. However, that is an “aftermath” approach and the effect is often limited (for example, we applied PyTorch's pruning utility to three popular object detection models and observed a mere improvement of 0%, 8%, and -15% (i.e., a degradation), respectively). To fundamentally address this problem for edge deployment in the wild, a clean-slate design is much more desired.
In this paper, we propose YOGA, a new object detection model based on a resource-conscious design principle. YOGA cuts down model size by up to 34% (cf. Table <ref>), in terms of number of model parameters and FLOPs, yet notably, achieving competitive accuracy (often even better, by up to 22%; cf. Table <ref>). YOGA consists of (i) a new backbone called CSPGhostNet (cross stage partial GhostNet), (ii) a new neck called AFF-PANet (attention feature fusion-based path aggregation network), and (iii) a YOLO-based head. (The underlined letters account for the coined name, YOGA.)
Our main idea is twofold. First, to slim down the neural network, we use a two-phase feature learning pipeline with a cheap linear transformation called group convolution throughout the network, which can learn the same number of feature maps as in standard CNNs but using only half of the convolution filters.
Second, to achieve high accuracy, we
fuse multi-scale feature maps at the neck using a local attention mechanism along the channel dimension (besides global attention along the space dimension), rather than using the conventional concatenation which is essentially equal-weighted.
Apart from being lightweight and high-performing, YOGA also represents a flexible design in that it can be easily scaled up or down in a wide range by choosing different repetitions of one of its building blocks (CSPGhost; cf. fig:YOGA). This makes it easily fit for a broad range of applications with different resource constraints, from small embedded IoT systems to intermediate edge servers and to powerful clouds.
Besides the performance evaluation commonly seen in the literature, we have also implemented YOGA on real hardware, NVIDIA Jetson Nano 2GB (the lowest-end deep learning device from NVIDIA), and tested its performance to assess its applicability to edge deployment in the wild. The results are promising (for instance, YOGA-n runs at 0.57 sec per 640x640 image, which is close to real-time) and will surely be much more responsive on less-restrictive hardware (e.g., Jetson Nano 4GB, or Jetson TX2).
In summary, the contributions of this paper are:
* We propose YOGA, a new object detection model
that learns richer representation (via attention based multi-scale feature fusion) with a much lighter model (reducing nearly half convolution filters via group convolution).
* We provide a theoretical explanation of how label smoothing facilitates backpropagation during training, by mathematically analyzing how the loss gradient vector is involved in the recursive backpropagation algorithm when label smoothing is used. We also overcome a GhostNet overfitting issue using a hyper-parameter tuning method based on Genetic Algorithm.
* We compare YOGA with a large variety of (over 10) state-of-the-art deep learning object detectors (as YOGA can be easily scaled up or down so we can make fair comparison with models at different levels of scales). The results demonstrate the superiority of YOGA on the joint performance of model size and accuracy.
* We also migrate YOGA to real hardware to assess its usability in the wild. Our experiments show that YOGA is well suited for even the lowest-end deep learning edge devices.
§ RELATED WORK AND PRELIMINARIES
Current state-of-the-art object detection models are convolutional neural network (CNN) based and can be categorized into one-stage and two-stage detectors, or anchor-based or anchor-free detectors. Two-stage detectors first use region proposals to generate coarse object proposals, and then use a dedicated per-region head to classify and refine the proposals. In contrast, one-stage detectors skip the region proposal step and run detection directly over a dense sampling of locations. Anchor-based methods use anchor boxes, which are a predefined collection of boxes that match the widths and heights of training data objects, to improve the loss convergence during training. We provide a classification of some well-known object detection models in Table <ref>. For a detailed review of such methods, the reader is referred to a comprehensive survey <cit.>. For an overview of deep-learning based methods for salient object detection in videos, refer to <cit.>.
Generally, one-stage detectors are faster than two-stage ones and anchor-based models are more accurate than anchor-free ones. Thus, in the YOGA design, we focus on one-stage and anchor-based models, i.e., the first cell of Table <ref>.
A typical one-stage object detection model is depicted in Fig. <ref>. It consists of a CNN-based backbone for visual feature extraction and a detection head for predicting class and bounding box of each contained object. In between, a neck is added to combine features at multiple scales to produce semantically strong features for detecting objects of different sizes.
§ DESIGN OF YOGA
An overview of the YOGA architecture is given in Fig. <ref>.
§.§ Backbone: CSPGhostNet
Our design of backbone, called CSPGhostNet, is motivated by two observations and guided by two corresponding aims. First, we identify that standard CNNs add many redundant features in order to learn better representation of input images. For example, ResNet <cit.> generates numerous similar feature maps. Such designs come at a price of high computational cost and heavyweight models. Therefore, we pose the following question as our first aim: Is it possible to generate the same number of features with similar (necessary) redundancy but using much less computation and less parameters?
Second, we observe that the training and inference time of current deep learning models has a large room to improve. We are therefore motivated to also speed up training and inference processes in our context, object detection.
To achieve the first aim, we adapt GhostNet <cit.> to exploit a low-cost two-phase convolutional pipeline. For the second aim, we integrate half of the feature maps across backbone and neck from the beginning to the end to create a shortcut, by leveraging CSPNet <cit.>.
Specifically, it splits each input feature map into two parts, feeds one part through a group of convolution blocks while letting the other part bypass those blocks, and merges these two branches at the end via concatenation. This shortcut also reduces the repetition of gradient information during backpropagation. In the following, we focus on explaining how we achieve the first aim using GhostNet because CSPNet can be applied without much change to its original architecture (however, we are the first that creates a new module combining GhostNet and CSPNet).
Ghost bottleneck (G-bneck) as in fig:YOGA2(b), which we draw based on the GhostNet paper <cit.> (but does not exist in <cit.>), was designed specially for small CNNs. It is not trivial to use G-bneck to build medium and large CNNs. In fact, this is also the reason why GhostNet, which is built on top of G-bneck, was compared with only small neural nets like MobileNetv2 and MobileNetv3. To overcome this limit, we designed a new CSPGhost module as in fig:YOGA2(a), where G-bneck layer is just part of it (the light blue block). This CSPGhost module allows us to build medium and large CNNs.
CSPGhost (in light green) is located at multiple positions in both our backbone and neck (see fig:YOGA), and its internal structure is shown in fig:YOGA2(a). CSPGhost contains a Ghost bottleneck layer (in light blue) and multiple Conv Blocks (in light yellow). Each Conv Block consists of a 2D Convolution, a BatchNorm, and a SiLU non-linear activation function (fig:YOGA2(c)). The Ghost bottleneck layer is similar to ResNet's basic residual block that integrates several
convolutional layers and short-cut connections. It mainly consists of two GhostConv modules (in orange) with depth-wise convolution in between: the first GhostConv module acts as an expansion layer that increases the number of channels, while the second GhostConv module reduces the number of channels to match the input shortcut path, after which the input of the first GhostConv and the output of the second GhostConv is connected by the shortcut through the depth-wise convolution and Conv block.
The GhostConv block (in orange) stands for Ghost convolution and its structure is given in Fig. <ref>. In standard convolution, C_2 filters each of depth C_1 will be used to transform an input feature map of depth C_1 to an output feature map of depth C_2, as shown in Fig. <ref>a. In contrast, GhostConv uses only C_2/2 standard filters to generate an intermediate feature map, denoted by X_a, of depth C_2/2, and then applies a group convolution with C_2/2 groups to X_a to generate a feature map, X_b, of identical depth C_2/2. Group convolution with C_2/2 groups is a cheap linear transformation which only performs per-channel instead of cross-channel convolution as in the standard convolution. Finally, the two feature maps X_a and X_b are concatenated to obtain the output feature map, which has a depth of C_2.
Mathematically, we can formulate this process as follows:
X_a = w_1⊗x_0 (first half; std conv)
X_b = w_2 ⊗̃ X_a (second half; group conv)
y = X_a ⊕ X_b (output feature maps)
where ⊗ denotes the standard convolution, ⊗̃ denotes the group convolution, and ⊕ denotes concatenation along the channel dimension. Thus, we can see that GhostConv adopts ordinary convolution to generate a few intrinsic feature maps and then utilizes cheap linear operations to augment the features and increase the number of channels.
Because of this, GhostConv can speed up the convolution process as well as reduce the number of parameters by a factor of approximately 2. In general, the improvement factor is s = C_2/D(X_a) where D(X) is the number of channels of a feature map X. Based on the empirical analysis in <cit.>, s=2 results in the best performance. Therefore, we choose half filters to generate intermediate feature maps in GhostConv as shown in fig:ghost.
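A possible PyTorch rendering of this GhostConv block is sketched below; the kernel sizes (e.g., the 5×5 depth-wise cheap operation) are illustrative choices rather than a statement of YOGA's exact configuration.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Half of the output channels come from a standard convolution,
    the other half from a cheap per-channel (group) convolution."""
    def __init__(self, c1, c2, k=1, s=1):
        super().__init__()
        c_ = c2 // 2                                   # intrinsic feature maps
        self.primary = nn.Sequential(
            nn.Conv2d(c1, c_, k, s, k // 2, bias=False),
            nn.BatchNorm2d(c_), nn.SiLU())
        self.cheap = nn.Sequential(                    # one filter per channel
            nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False),
            nn.BatchNorm2d(c_), nn.SiLU())

    def forward(self, x):
        y = self.primary(x)                            # X_a
        return torch.cat([y, self.cheap(y)], dim=1)    # concat(X_a, X_b)
```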
Our redesigned backbone CSPGhostNet enables YOGA to substantially reduce the number of parameters and FLOPs
without sacrificing its detection performance (mAP). Moreover, as a general guideline in deep learning, less parameters also tend to imply a more generalizable neural network.
Finally, we add spatial pyramid pooling (SPP) <cit.> to the tail of our backbone network in order to increase the receptive field.
§.§ Neck: AFF-PANet
We also design a new neck architecture called AFF-PANet that addresses a fundamental problem in object detection: feature fusion. An object detection task inevitably requires fusing low-level and high-level feature maps extracted from the backbone. However, current research all centers around fusing these feature maps using a naive concatenation with no learning involved. As illustrated in fig:object_de, such a neck simply stacks the feature maps Z_3, Z_4 and Z_5 along the channel dimension, and then applies a standard convolution to match the output channels.
The problem with this kind of naive fusion is that concatenation essentially treats each feature map equally, but the features learned by the backbone have multiple scales and larger-scale ones tend to overshadow smaller-scale ones. Therefore, we propose to incorporate learning into the fusion process using an attention mechanism. However, this is non-trivial because typical attention methods such as SENet <cit.> cannot be directly applied to multi-scale features. The underlying reason is that those channel attention mechanisms use an extreme and coarse feature descriptor that implicitly assumes that large objects occupy a large portion of space and averages the feature maps across the spatial dimension, which would wipe out much of the image signal present in small objects. More specifically, such methods compress each feature map into a scalar and thus the average of feature maps along the spatial dimension becomes very small, resulting in poor detection of small objects. In fact, these are global attention mechanisms alone which cannot well handle multi-scale feature fusion.
Our design is the first that introduces AFF <cit.> into the area of object detection in order to add local attention to feature fusion (besides global attention), and the first that incorporates it in PANet <cit.> to shorten the pathway of passing feature information to the head. The paper <cit.> proposed an AFF module in Feature Pyramid Networks (FPN); in our case, we integrate the AFF module into the Path Aggregation Network (PANet) and build a new neck architecture called AFF-PANet, whose details we explain below.
AFF uses a multi-scale channel attention module (MS-CAM) as depicted in fig:aff.
Given an intermediate feature map 𝐙∈ℝ ^C × H × W,
there are two network pathways, one computing a global channel context 𝐠(𝐙) and the other responsible for computing a local channel context 𝐋(𝐙). The two contexts 𝐠(𝐙) and 𝐋(𝐙) are then combined through a broadcasting addition operation, followed by a Sigmoid non-linearity to map values into the range of 0-1, to obtain the attentional weights 𝐌(𝐙) = σ(𝐠(𝐙)⊕𝐋(𝐙)). Therefore, by ushering AFF into object detection, we mainly exploit its local attention pathway, i.e., 𝐋(𝐙).
Mathematically, this process can be expressed as
𝐀𝐅𝐅(𝐗, 𝐘) = 𝐌(𝐗⊎𝐘)⊙𝐗 + (1 - 𝐌(𝐗⊎𝐘)) ⊙𝐘
where 𝐗∈ℝ^C × H × W is a low-level feature map, and 𝐘∈ℝ^C × H × W is a high-level feature map, ⊎ denotes the initial feature integration which we choose to be element-wise summation, 𝐌(𝐗⊎𝐘) is a MS-CAM function that computes fusion weights between 0 and 1, which are finally applied to 𝐗 and 𝐘 to form a weighted average.
Corresponding to fig:YOGA, 𝐗 and 𝐘 are the outputs of blocks 4 and 15 respectively, and 𝐀𝐅𝐅(𝐗, 𝐘)∈ℝ^C × H × W is the fused feature as the output of block 16.
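The fusion rule can be sketched in PyTorch as below; the channel-reduction ratio r and the layer details of MS-CAM are illustrative assumptions following <cit.> rather than YOGA's exact settings.

```python
class MSCAM(nn.Module):
    """Multi-scale channel attention: global (pooled) plus local (per-pixel)
    channel context, combined and squashed to fusion weights in [0, 1]."""
    def __init__(self, channels, r=4):
        super().__init__()
        mid = max(channels // r, 1)
        def context(pooled):
            layers = ([nn.AdaptiveAvgPool2d(1)] if pooled else []) + [
                nn.Conv2d(channels, mid, 1), nn.BatchNorm2d(mid), nn.ReLU(),
                nn.Conv2d(mid, channels, 1), nn.BatchNorm2d(channels)]
            return nn.Sequential(*layers)
        self.global_ctx = context(pooled=True)    # g(Z)
        self.local_ctx = context(pooled=False)    # L(Z)

    def forward(self, z):
        # Broadcasting addition of the two contexts, then Sigmoid.
        return torch.sigmoid(self.global_ctx(z) + self.local_ctx(z))

def aff_fuse(mscam, x, y):
    m = mscam(x + y)                 # M(X ⊎ Y) with element-wise summation
    return m * x + (1.0 - m) * y     # weighted average of the two feature maps
```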
We also incorporate PANet in our neck. The reason is as follows. The feature pyramid network (FPN) <cit.> is comprised of only a top-down pathway to concatenate feature maps (cf. fig:object_de left part of neck), but PANet adds a bottom-up pathway and multiple lateral connections. This addition will help shorten the path of passing feature information from earlier layers to the head through only a few convolutional layers, thereby learning richer representations for multi-scale objects (as shorter paths have a better gradient flow from earlier CNN layers to the neck).
In summary, our AFF-based neck design introduces learning into multi-scale feature fusion by combining feature maps using learnable weights rather than naive concatenation.
§.§ Head: YOLO
The purpose of the head is to perform dense predictions, where each prediction consists of an object confidence score, a probability distribution of the object classes, and bounding box coordinates. A head makes these predictions based on the feature maps (Z^'_3, Z^'_4, and Z^'_5 as shown in fig:object_de), obtained from the neck.
We adopt the YOLO head architecture which consists of a 3 × 3 convolution layer followed by a 1 × 1 convolution layer. The number of filters used in this 1 × 1 convolution layer is N (C + 5), where C is the number of classes and N is the number of anchor boxes (each prediction is made by using an anchor at one of three different scales). The output of the head is post-processed by non-maximum suppression (NMS) to eliminate redundant and low-confidence bounding boxes.
§.§ Label Smoothing
We use a regularization technique called label smoothing <cit.> to improve backpropagation gradients during neural network training. Unlike a one-hot vector, where the entire probability mass is concentrated on a single true class, label smoothing places a weight of 1-(K-1)ϵ on the true class and ϵ on each of the remaining K-1 classes. This section provides an in-depth mathematical explanation of how this method helps backpropagation during model training, as <cit.> proposed it only heuristically.
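For concreteness, the smoothed target vector described above can be built as follows; this sketch uses the text's convention of per-class mass ϵ, and the default value is illustrative.

```python
import torch

def smooth_labels(true_class, num_classes, eps=0.01):
    """Weight 1-(K-1)*eps on the true class and eps on each other class."""
    y = torch.full((num_classes,), eps)
    y[true_class] = 1.0 - (num_classes - 1) * eps
    return y
```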
Given an input sample, let 𝐲 be its true label encoded by label smoothing, and 𝐲_n be its prediction made at a neural network's last (the n-th) layer. Using the cross-entropy loss L(𝐲,𝐲_n) = - ∑_i=1^K y_ilog y_n_i, the gradient of this loss with respect to the predicted output 𝐲_n is given by
∇_𝐲_nL =
- (1-(K-1)ϵ)/y_n_c,   for the true class c,
- ϵ/y_n_i,            for any other class i.
This gradient is then applied to the recursive backpropagation step given below:
∇_𝐳_kL = (∇_𝐲_kL) J_𝐲_k(𝐳_k) , ∇_𝐲_k-1L = (∇_𝐳_kL) 𝐖_k
∇_𝐖_kL = 𝐲_k-1∇_𝐳_kL, ∇_𝐛_kL = ∇_𝐳_kL
where 𝐳_k and 𝐲_k represent the pre-activation and post-activation vectors, respectively, at layer k. We compute Jacobian J_𝐲_n(𝐳_n) using the last layer activation function, and then apply the recursion (<ref>) from k = n (last layer) to 1 (first layer) to compute all the gradients.
The Jacobian matrix J_𝐲_k(𝐳_k) and weight matrix 𝐖_k for the k-th layer are given, respectively, by
J_𝐲_k(𝐳_k) =
[ ∂ y_k_1/∂ z_k_1   ∂ y_k_1/∂ z_k_2   ⋯   ∂ y_k_1/∂ z_k_D
  ∂ y_k_2/∂ z_k_1   ∂ y_k_2/∂ z_k_2   ⋯   ∂ y_k_2/∂ z_k_D
  ⋮                  ⋮                  ⋱   ⋮
  ∂ y_k_M/∂ z_k_1   ∂ y_k_M/∂ z_k_2   ⋯   ∂ y_k_M/∂ z_k_D ],

𝐖_k =
[ w^(k)_11     w^(k)_21     ⋯   w^(k)_D_k-1,1
  w^(k)_12     w^(k)_22     ⋯   w^(k)_D_k-1,2
  ⋮             ⋮             ⋱   ⋮
  w^(k)_1D_k   w^(k)_2D_k   ⋯   w^(k)_D_k-1,D_k ]
where D_k represents the number of neurons in the k-th layer.
For the label-smoothing based loss, all the entries of the gradient ∇_𝐲_nL (Eq. <ref>) are non-zero. As this gradient vector is multiplied with the Jacobian J_y(z) and the weight matrix 𝐖_k in the recursive backpropagation step, using such gradient to update weights during gradient descent would significantly mitigate the gradient vanishing problem and thus help the training and convergence of deep neural networks.
§ PERFORMANCE EVALUATION
For an extensive performance evaluation, we compare YOGA with a large number of state-of-the-art object detection models as our baselines, including YOLOv5, EfficientDet, YOLOX, YOLOv4, PP-YOLO, DETR, Faster-RCNN, SSD512, etc. and the complete list can be seen from Tables <ref> and <ref>. (All our results are fully reproducible, and our code will be open-sourced upon acceptance.)
Model scaling. As mentioned before, YOGA can easily scale up or down to suit different application or hardware needs. For example, we have tested that its Nano version YOGA-n can run near real-time on Jetson Nano and its Large version YOGA-l can run real-time on a V-100 GPU.
Specifically, one can scale YOGA by simply adjusting the number of filters in each convolutional layer (i.e., width scaling) and the number of convolution layers in the backbone (i.e., depth scaling) to obtain different versions of YOGA, such as Nano, Small, Medium, and Large. The width and depth scaling will result in a new width of ⌈ n_w ×width factor⌉_8 and a new depth of ⌈ n_d ×depth factor⌉, respectively, where n_w is the original width and n_d is the original number of repeated blocks (e.g., 9 as in 9 × CSPGhost as in Fig. <ref>), and ⌈·⌉_8 means rounded off to the nearest multiple of 8. The width/depth factors are given in Table <ref>, where YOGA-n/s/m/l correspond to the Nano, Small, Medium, and Large versions of YOGA, respectively.
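The scaling rule can be expressed as two small helper functions, shown below as an illustrative sketch of the rounding conventions described in the text.

```python
import math

def scale_width(n_w, width_factor):
    # ⌈·⌉_8: round to the nearest multiple of 8, keeping at least 8 channels.
    return max(int(round(n_w * width_factor / 8)) * 8, 8)

def scale_depth(n_d, depth_factor):
    # Scale the number of repeated blocks, keeping at least one block.
    return max(math.ceil(n_d * depth_factor), 1)
```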
§.§ Experiment setup
Dataset and metrics. We use the COCO-2017 dataset <cit.> which is divided into train2017 (118,287 images) for training, val2017 (5,000 images; also called minival) for validation, and test2017 (40,670 images) for testing. We use a wide range of state-of-the-art baseline models as listed in Tables <ref> and <ref>. We report the standard metric of average precision (AP) on val2017 under different IoU thresholds [0.5:0.95] and object sizes (small, medium, large). We also report the AP metrics on test-dev2017 (20,288 images) which is a subset of test2017 with accessible labels. However, the labels are not publicly released but one needs to submit all the predicted labels in JSON files to the CodaLab COCO Detection Challenge <cit.> to retrieve the evaluated metrics, which we did.
Training. We train different versions (nano, small, medium, and large) of YOGA on train2017. Unlike most other studies, we train from scratch without using transfer learning. This is because we want to examine the true learning capability of each model without being disguised by the rich feature representation it inherits via transfer learning from ideal (high quality) datasets such as ImageNet. This was carried out on our own models (YOGA-n/s/m/l) and all the existing YOLO-series models (v5, X, v4, and their scaled versions like nano, small, large, etc.). The other baseline models still used transfer learning because of our lack of resource (training from scratch consumes an enormous amount of GPU time). However, note that this simply means that those baselines are placed in a much more advantageous position than our own models as they benefit from high quality datasets.
Hyperparameter Tuning. We use the Genetic Algorithm (GA) to tune hyperparameters for YOGA. We ran GA on m=20 hyperparameters for 200 generations on a subset of the COCO dataset. Our choice of m=20 is based on the Vapnik–Chervonenkis (VC) inequality:
P[ |E_in - E_out| > ϵ ] ≤ 4 m_h(2N) e^(-ϵ^2 N / 8)
where E_in is the error on the validation set, E_out is the error on the test set, N is the number of validation samples, and m_h is the growth function of a hypothesis set defined by m. For a small ϵ = 0.05 and with probability 95%, we choose N ≥ 10× VC-dimension by rule of thumb for good generalization (E_in≈ E_out), where VC-dimension <cit.> is the number of independent parameters, which is upper-bounded by m. Since the COCO validation data contains N=5000 images, choosing m=20 satisfies the above inequality, which keeps the difference between validation and test error small according to (<ref>). Therefore, our mAP estimates computed from validation data will be reliable to use and are a good proxy for mAP on test data.
The hyperparameters of GA-based optimization are as follows. It uses the SGD optimizer with momentum 0.937, weight decay of 0.005, a learning rate that linearly increases from 0.0033 to 0.01 for the first three epochs, and then decreases using the Cosine decay strategy to a final value of 0.001. The total number of epochs is 200, which is chosen based on our observation as shown in Fig. 6, where the model enters the overfitting region beyond 200 epochs.
In neural network training, a larger batch size can lead to faster training, but it also requires more memory. On the other hand, a smaller batch size may require more training steps, but it may also be more memory efficient. When training on a GPU, the available memory is a limiting factor on the maximum batch size. Therefore, we trained our YOGA nano and small models on 4 V-100 32 GB GPUs with a batch size of 128, and the medium and large models with batch size 32. We employed CIoU loss for objectness and cross-entropy loss for classification. To mitigate overfitting, we applied several data augmentation techniques following YOLOv5,
including photometric distortions of hue, saturation, and value, as well as geometric distortions such as translation, scaling, shearing, fliplr and flipud. Multi-image enhancement techniques such as mosaic and cutmix were also employed.
For baselines, we use their best hyperparameters stated in their respective papers, or given in their respective online repositories.
§.§ Results
With no test-time augmentation, we compare YOGA with baselines at the image resolution of 640 × 640. Table <ref> reports the results on the validation dataset (5000 images with ground truth). Table <ref> reports the results on the test-dev dataset (20000 images with no public ground truth). In order to obtain the accuracy on test-dev, we submitted all our predictions to the CodaLab COCO Detection Challenge (Bounding Box) <cit.> in JSON files.
The APS/APM/APL in Table <ref> and <ref> means AP obtained on small/medium/large objects (not model scales). To simplify notation (e.g., in tables and figures), this section denotes by YOLO the YOLOv5 latest version v6.1 release in February 2022.
The scales of the YOLO-v7 models (tiny-6.2M, base-36.9M, and X-71.3M) do not match ours or those of the other baseline models, and thus prevent a fair comparison. For example, YOGA-n/m/l has only 1.9/16.3/33.6 M parameters, hence we did not compare YOGA with YOLOv7. However, we still included YOLO-v7 results in both Tables <ref> and <ref> for reference.
figure:teaser provides a comparison of YOGA with multiple SOTA models in terms of AP and number of parameters. The four YOGA points correspond to YOGA-n/s/m/l. Similarly, the points of other models correspond to their respective model sizes too. The results show that YOGA has the best AP at every model scale, or equivalently the lightest model for any target AP. For instance, PPYOLO <cit.> has an AP of 22.7 at 4.20M parameters, while YOGA achieves an AP of ∼35 (interpolated) with the same number of parameters, amounting to a 54% improvement. More thorough and detailed comparisons are presented in Tables <ref> and <ref> and discussed below.
Nano and Small models. With the same number (1.9M) of parameters, YOGA-n achieves an AP of 32.3 which is 15.35% higher than the best-performing model, YOLO-n, whose AP is 28.0. In the comparison on small objects, YOGA-n achieves an improvement of 7.4% APS over YOLO-n.
Similarly, our YOGA-s achieves 40.7 AP while the best-performing YOLO-s achieves 37.4, with almost the same number of parameters and FLOPs, which indicates an 8.8% increase in the AP value. Our YOGA-s achieves 23.0 AP on small objects (APS) while the best-performing YOLO-s achieves 21.09 APS, a +9.0% increase in the APS.
Compared to state-of-the-art models on test-dev (Table <ref>), our YOGA-n model compared to YOLO-n achieves an improvement of 15% AP (+4.2 AP). When we compare on object scales, on small objects, there is an improvement of 12% APS (+1.5 APS);
on medium objects, there is an improvement of 11% APM (+3.4 APM ), and on large objects, there is an improvement of 22% APL (+7.7 APL ).
Similarly, our YOGA-s model compared to YOLO-s, achieves an improvement of 8.6% AP (+3.2 AP). When we compare on object scales, on medium objects, there is an improvement of 5% APM (+2.0 APM);
on large objects, there is an improvement of 17% APL (+7.9 APL).
Medium and Large models. When compared to YOLO-m, our YOGA-m achieves a similar AP value of 45.2 but significantly reduces parameters and FLOPs by 23% and 29%, respectively. Similarly, compared with the YOLOX-M model, our YOGA-m achieves a significant reduction in parameters and FLOPs of 35% and 53%.
Compared to YOLO-l, our YOGA-l model achieves the same AP (48.9) but with a significantly lower number of parameters and FLOPs: 27.7% and 34% lower, respectively. Similarly, compared with YOLOX-L, our YOGA-l model achieves a significant reduction in parameters and FLOPs of 38% and 53.8%.
When compared to state-of-the-art models on test-dev (Table <ref>), YOGA-m achieves an AP of 46.4, a 2% improvement over YOLO-m, and it uses only 16.3 M parameters and 34.6 BFLOPs, which are significantly (23% and 29%) lower than YOLO-m.
Visual Comparison.
We compare the bounding box predictions of the YOLO-n and our YOGA-n model on two random sample images from the COCO validation dataset. In fig:object_comparison, we see that the top row image has two ground truth objects (blue boxes) and our YOGA-n detects both objects (purple boxes), while YOLO-n (green boxes) fails to detect one object (the bench). Similarly, the bottom row image has a total of eleven ground truth objects, and YOLO-n detects only eight objects while our YOGA-n model detects nine objects, including an additional object, the baseball bat.
§.§ Hardware Implementation and Evaluation
To assess the edge suitability of YOGA and its usability in the wild, we migrate YOGA code to NVIDIA Jetson Nano 2GB, which comes with 2 GB 64-bit LPDDR4 25.6 GB/s RAM and 32 GB MicroSD storage
and is the lowest-end deep learning hardware product from NVIDIA.
fig:hardware_setup shows the hardware setup for our edge inference experiments, where we have set up the runtime environment (Ubuntu 20.04, PyTorch 1.12, Jetpack 4.6) for evaluation. We measure the inference time of YOGA to see how near-real-time it can be when performing object detection. The results are reported in tab:nano_inference, where we see that YOGA-n achieves an inference time of 0.57 sec per image (each image is of large size 640x640) which is close to real-time. We highlight an important fact that, as seen from fig:hardware_setup (in oval shapes), the 2GB memory on Jetson Nano was fully utilized at peak time and 1.828 GB swap space on the disk had to be used to compensate for the memory shortage. This means that the disk I/O had throttled the performance substantially, and it is therefore reasonable to anticipate a significantly better performance on the 4 GB Jetson Nano and even better on Jetson TX2, which would no longer or rarely need to use swap space, making the inference indeed (near)real-time.[Both before and at the time of writing, the global market has been undergoing a severe GPU product shortage and many products have been out of stock in the market. As a consequence, we could not procure more hardware for testing.]
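For reference, per-image latency of the kind reported above can be measured with a simple loop like the following sketch; warm-up iterations and CUDA synchronization are included, and the run counts are arbitrary choices.

```python
import time
import torch

@torch.no_grad()
def measure_latency(model, image, n_warmup=5, n_runs=50):
    model.eval()
    for _ in range(n_warmup):                 # warm-up passes
        model(image)
    if image.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(image)
        if image.is_cuda:
            torch.cuda.synchronize()          # wait for GPU kernels to finish
    return (time.perf_counter() - start) / n_runs
```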
§.§ Ablation study
We also design experiments to investigate the individual effect of our new backbone and neck: specifically, how our CSPGhostNet backbone compares to the YOLO backbone (arguably the best backbone so far) and how our AFF-PANet neck compares to the naive concatenation as used in all SOTA architectures. Moreover, we also evaluate the effect of label smoothing on the gradient descent convergence. We conduct these ablation studies using YOGA-n.
The results for backbone and neck are given in Table <ref>. We observe that, using the AFF-PANet neck architecture consistently leads to improved performance compared to using the Naive Concat (PANet) neck architecture. Additionally, using our CSPGhostNet backbone leads to better performance than using the existing YOLO backbone in both cases. Overall, these results suggest that both the AFF-PANet neck and the CSPGhostNet backbone contribute positively to the performance of YOGA.
For label smoothing, we observed during our training that it helped the model converge to a desirable AP[0.5:0.95] and recall in roughly 10% fewer epochs than without label smoothing.
§ CONCLUSION
This paper presents YOGA, a novel object detection model with an efficient convolutional backbone and an enhanced attention-based neck. It is a deep yet lightweight object detector with high accuracy, which we have validated with extensive evaluation benchmarked against more than 10 state-of-the-art modern deep detectors. For instance, in its Nano version, our YOGA-n outperforms the current best-performing model YOLOv5n by 15.35% in AP, with similar number of parameters and FLOPs; this improvement further increases to 22% on detecting large objects (APL) on the test-dev dataset. In its Medium version, our YOGA-m achieves the same AP (45.2) as the best-performing model YOLOv5m but with 23% fewer parameters and 29% fewer FLOPs. In its Large version, our YOGA-l achieves the same AP (48.9) as the best-performing model YOLOv5-l but with 27.7% fewer parameters and 34% fewer FLOPs.
We have also implemented and assessed YOGA on the lowest-end deep learning device from NVIDIA, Jetson Nano 2GB, and the results affirmed that YOGA is suitable for edge deployment in the wild. For instance, YOGA-n runs at 0.57 sec per 640x640 image, which is close to real-time.
The main limitation of YOGA is that it could be prone to overfitting when training extremely large models. Nonetheless, such extra-large models are rather unlikely to be adopted in edge deployments. Future directions for improving YOGA or object detection in general, include: (1) investigating different attention mechanisms such as incorporating self-attention or transformer architectures; (2) exploring ways to further optimize the model for specific hardware platforms such as mobile devices; (3) extending YOGA to handle additional tasks or challenges such as semantic segmentation, instance segmentation, and object tracking.
In summary, YOGA represents a new contribution to the field of object detection by ushering in high run-time efficiency, low memory footprint, and superior accuracy simultaneously. In addition, its flexible scalability makes it applicable to a wide range of applications with different hardware constraints in IoT, edge and cloud computing.
|
http://arxiv.org/abs/2307.04398v1 | 20230710075950 | The tt-geometry of permutation modules. Part II: Twisted cohomology | [
"Paul Balmer",
"Martin Gallauer"
] | math.RT | [
"math.RT",
"math.CT"
] |
The tt-geometry of permutation modules. Part II: Twisted cohomology
We continue our analysis of the derived category of permutation modules over a finite group.
In Part I (https://arxiv.org/abs/2210.08311), we identified the spectrum of this tensor-triangulated category as a set.
Here we describe its topology, by reducing the problem to elementary abelian groups and then using a twisted form of cohomology to realize the spectrum as a Dirac scheme.
2020 Mathematics Subject Classification: 20C20; 14C15, 18F99, 18G90
First-named author supported by NSF grant DMS-2153758. Second-named author partially supported by the Max-Planck Institute for Mathematics in Bonn.
The authors thank the Hausdorff Institute for Mathematics in Bonn for its hospitality during the 2022 Trimester on “Spectral Methods in Algebra, Geometry, and Topology”.
§ INTRODUCTION
Unless mentioned otherwise, G stands for a finite group and for a field of positive characteristic p, with p typically dividing the order of G.
§.§ Executive summary
We study the homotopy category of bounded complexes of permutation -modules, idempotent-completed:
(G):=((G;))^♮ = ((G;)^♮).
This (G) is a tensor-triangulated category (`tt-category' for short).
We determined all the points of its tt-spectrum in Part I of this series; see <cit.>. The present paper aims at elucidating its topology.
This knowledge will give us, among other things, the classification of thick ⊗-ideals in (G).
Our main results are the following.
On the one hand, we show that the space Spc(K(G)) is a colimit of the spaces Spc(K(E)) over a suitable category of elementary abelian p-groups E that appear as subquotients of G.
On the other hand, when E is elementary abelian, we describe the spectrum Spc(K(E)) as a `Dirac scheme' in the sense of Hesselholt-Pstrągowski <cit.>.
Combining these results yields a description of the topological space Spc(K(G)) for all G.
Let us now explain these ideas.
§.§ The colimit theorem
To discuss the tt-geometry of K(G), it is instructive to keep in mind the bounded derived category of finitely generated kG-modules, D_b(kG), which is a localization of our K(G) by <cit.>.
A theorem of Serre <cit.>, famously expanded by Quillen <cit.>, implies that Spc(D_b(kG)) is the colimit of the Spc(D_b(kE)), for E running through the elementary abelian p-subgroups of G; see <cit.>. The indexing category for this colimit is an orbit category: Its morphisms keep track of conjugations and inclusions of subgroups.
In Part I, we proved that Spc(K(G)) is set-theoretically partitioned into spectra of derived categories Spc(D_b(kW)) for certain subquotients of G, namely the Weyl groups W = (N_G K)/K of p-subgroups K ≤ G.
It is then natural to expect a more intricate analogue of Quillen's result for the tt-category (G), in which subgroups are replaced by subquotients.
This is precisely what we prove.
The orbit category has to be replaced by a category whose objects are elementary abelian p-sections E=H/K, for p-subgroups K H≤ G.
The morphisms in keep track of conjugations, inclusions and quotients.
See <Ref>.
This allows us to formulate our reduction to elementary abelian groups:
[<Ref>]
There is a canonical homeomorphism
colim_{E} Spc(K(E)) ≅ Spc(K(G)),
the colimit being taken over the category of elementary abelian p-sections of G just described.
The category has been considered before, in Bouc-Thévenaz <cit.>.
Every morphism in this category is the composite of three special morphisms (<Ref>)
E ↞ E' → E'' ≃ E'''
where E = E'/N is a quotient of E' (sic!), where E' ≤ E'' is a subgroup of E'' and where E''' is a G-conjugate of E''.
The tt-category K(E) is contravariant in E and the tt-functors corresponding to (<ref>)
K(E''') →^{≃} K(E'') →^{Res} K(E') →^{Ψ^N} K(E)
yield the standard conjugation isomorphism, the standard restriction functor, and the less standard modular N-fixed-points functor Ψ^N introduced in Part I.
The latter is a type of Brauer quotient that makes sense on the homotopy category of permutation modules.
Such functors Ψ^N do not exist on derived or stable categories and they distinguish our results and their proofs from the classical theory.
It is an open question whether they will also play a role in the generality of <cit.>.
§.§ Twisted cohomology
The above discussion reduces the analysis of to the case of elementary abelian p-groups E.
As often in modular representation theory, this case is far from trivial and can be viewed as the heart of the matter.
So let E be an elementary abelian p-group.
Our methods will rely on ⊗-invertible objects u_N in K(E) indexed by the set ℳ(E) = { N < E | [E:N] = p } of maximal subgroups.
These objects are of the form u_N = (0 → k(E/N) → k(E/N) → k → 0) for p odd and u_N = (0 → k(E/N) → k → 0) for p = 2. See <Ref>.
We use these ⊗-invertibles u_N to construct a multi-graded ring
(E) = ⊕_{s∈ℤ} ⊕_{q∈ℕ^{ℳ(E)}} Hom_{K(E)}(𝟙, (q)[s]),
where (q) is the ⊗-invertible ⊗_{N∈ℳ(E)} u_N^{⊗q(N)} for every tuple q, that we refer to as a `twist'.
Without these twists we would obtain the standard ℤ-graded endomorphism ring End^•(𝟙) := ⊕_{s∈ℤ} Hom(𝟙,𝟙[s]) of the unit, which for D_b(kE) is identified with the cohomology H^•(E,k), but for K(E) is reduced to the field k and is therefore rather uninteresting.
We call (E) the (permutation) twisted cohomology of E.
Some readers may appreciate the analogy with cohomology twisted by line bundles in algebraic geometry, or with Tate twists in motivic cohomology.
We can employ this multi-graded ring (E) to describe :
[<Ref>]
The space ((E)) identifies with an open subspace of the homogeneous spectrum of (E) via a canonical `comparison map'.
The comparison map in question generalizes the one of <cit.>, which landed in the homogenous spectrum of ^() without twist.
We also describe in <Ref> the open image of this map by explicit equations in (E).
§.§ Dirac geometry
If the reader is puzzled by the multi-graded ring (E), here is another approach based on a special open cover {H}_H≤ E of indexed by the subgroups of E and introduced in <Ref>. Its key property is that over each open H all the ⊗-invertible objects u_N are trivial: (u_N)H≃[s] for some shift s∈ depending on H and N. For the trivial subgroup , the open 1 is the `cohomological open' of Part I, that corresponds to the image under (-) of the localization (E)( E). See <Ref>.
At the other end, for H=E, we show in <Ref> that the open E is the `geometric open' that corresponds to the localization of (E) given by the geometric fixed-points functor. Compare <cit.>. For E cyclic, these two opens 1 and E are all there is to consider. But as the p-rank of E grows, there is an exponentially larger collection {H}_H≤ E of open subsets interpolating between 1 and E.
This cover {H}_H≤ E allows us to use the classical comparison map of <cit.> locally. It yields a homeomorphism between each H and the homogeneous spectrum of the -graded endomorphism ring ^_H() in the localization (E)H. In compact form, this can be rephrased as follows (a Dirac scheme is to a usual scheme what a -graded ring is to a non-graded one):
[<Ref>]
The space ((E)), together with the sheaf of -graded rings obtained locally from endomorphisms of the unit, is a Dirac scheme.
§.§ Elementary abelian take-home
Let us ponder the -graded endomorphism ring of the unit ^() for a moment.
Recall from <cit.> that the spectrum of the derived category, (( E)), is homeomorphic to the homogeneous spectrum of
the cohomology ring ^(E,)≅^_( E)(),
see <cit.>.
Such a result cannot hold for (E) since the ring ^_(E)()= is too small to provide geometric information.
So we have developed two substitutes.
Our first approach is to replace the usual -graded ring ^() by a richer multi-graded ring involving twists.
This leads us to twisted cohomology (E) and to <Ref>.
The second approach is to hope that the endomorphism ring ^(), although useless globally, becomes rich enough to control the topology locally on , without leaving the world of -graded rings. This is what we achieve in <Ref> thanks to the open cover {H}_H≤ E.
As can be expected, the two proofs are intertwined.
§.§ Touching ground
Combining <Ref> ultimately describes the topological space for all G, in terms of homogeneous spectra of graded rings.
In <Ref> we improve and apply these results as follows.
In <Ref>, we explain how to go from the `local' rings ^_H() over the open H, for each subgroup H≤ E, to the `global' topology of .
In <Ref>, we give a finite presentation by generators and relations of the reduced -algebra (^_H())_, generalizing the usual one for cohomology.
In <Ref>, we express for a general finite group G as the quotient of a disjoint union of for the maximal elementary abelian p-sections E of G by maximal relations.
In <Ref>, we prove that the irreducible components of correspond to the maximal elementary abelian p-sections of G up to conjugation. It follows that the Krull dimension of is the sectional p-rank of G, the maximal rank of elementary abelian p-sections.
(For comparison, recall that for the derived category these irreducible components correspond to maximal elementary abelian p-subgroups, not sections, and the Krull dimension is the usual p-rank.)
And of course, we discuss examples. Using our techniques, we compute for some notable groups G, in particular Klein-four (<Ref>) and the dihedral group (<Ref>).
The latter will lead us to the following picture, whose precise meaning will be explained in <Ref>.
Hopefully, its beauty will entice the reader to proceed beyond the present introduction.
[Figure: the spectrum of K(G) for G a dihedral group; see <Ref>.]
§.§ The toolbox
The outline of the paper should now be clear from the above discussion and from the table of contents presented upfront.
It would be an oversell to pretend that Part II is a stand-alone paper.
We import several technical results from Part I, as black boxes.
The ones invoked most often are gathered in <Ref>.
Here is some standard notation used throughout the text.
We write Spc(K) for the tt-spectrum of a tt-category K.
For an object x ∈ K, we write U(x) = { 𝔭 ∈ Spc(K) | x ∈ 𝔭 } to denote the open complement of the support supp(x).
We write p(G) for the set of p-subgroups of G and p(G)/_G for its G-orbits under G-conjugation.
For each subgroup H ≤ G, its Weyl group is W_G(H) = (N_G H)/H where N_G H = { g ∈ G | H^g = H }.
Let us remind the reader of the essentials of Part I <cit.>.
The canonical localization Υ_G : K(G) → D_b(kG) gives us an open piece V_G := Spc(D_b(kG)) ≅ Spec^h(H^•(G,k)) of the spectrum, that we call the `cohomological open'. We write υ_G = Spc(Υ_G) : V_G ↪ Spc(K(G)) for the inclusion.
For every H∈p(G) we denote by Ψ^H(G)→() the modular H-fixed-points tt-functor constructed in <cit.>. It is characterized by Ψ^H((X))≃(X^H) on permutation modules and by the same formula degreewise on complexes.
We write Ψ̌^H=Υ_∘Ψ^H for the composite (G)→()(()) all the way down to the derived category of .
For every H∈p(G), the tt-prime (H)=(Ψ̌^H) is a closed point of .
It is also (H)=(^H) where ^H=^_1∘Ψ^H(G)→()→().
All closed points of are of this form by <cit.>.
We write ψ^H=(Ψ^H)(())→ for the continuous map induced by Ψ^H and ψ̌^H=(Ψ̌^H)υ(())ψ^H for its restriction to the cohomological open of .
If we need to specify the ambient group we write ψ^H G for ψ^H, etc.
We saw in <cit.> that ψ^H is a closed map, and a closed immersion if H G is normal.
Every prime ∈ is of the form =_G(H,):=ψ̌^H() for a p-subgroup H≤ G and a point ∈ in the cohomological open of the Weyl group of H, in a unique way up to G-conjugation; see <cit.>.
Hence the pieces G(H):=ψ̌() yield a partition =⊔_H∈p(G)/_GG(H) into relatively open strata G(H), homeomorphic to .
The crux of the problem is to understand how these strata G(H)≃ attach together topologically, to build the space .
The authors thank Ivo Dell'Ambrogio and Beren Sanders for comments and suggestions.
§ THE COLIMIT THEOREM
To reduce the determination of to the elementary abelian case, we invoke the category of elementary abelian p-sections of a finite group G.
Recall that a section of G is a pair (H,K) of subgroups with K normal in H.
We denote by the category whose objects are pairs (H,K) where K H are p-subgroups of G such that H/K is elementary abelian.
Morphisms (H,K)→ (H',K') are defined to be elements g∈ G such that
H^g∩ K'≤ K^g≤ H^g≤ H'.
Composition of morphisms is defined by multiplication in G.
Let us highlight three types of morphisms in .
*
We have an isomorphism g (H,K) (H^g,K^g) in for every g∈ G.
Intuitively, we can think of this as the group isomorphism c_g H/K H^g/K^g.
*
For every object (H',K') in and every subgroup H≤ H', we have a well-defined object (H,H∩ K') and the morphism 1 (H,H∩ K')→ (H',K').
Intuitively, we think of it as the inclusion H/(H∩ K') H'/K' of a subgroup.
*
For (H,K) in and a subgroup L̅=L/K of H/K, for K≤ L≤ H, there is another morphism in associated to , namely 1(H,L)→ (H,K).
This one does not correspond to an intuitive group homomorphism H/L H/K, as K is smaller than L.
Instead, H/L is the quotient of H/K by L̅ H/K.
This last morphism will be responsible for the modular L̅-fixed-points functor.
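To make the three types of morphisms concrete, here is a small illustrative example added to this text (it is not taken from the original): take the Klein-four group G = C_2 × C_2 at p = 2 and write N_1, N_2, N_3 for its three subgroups of order 2. In each case the defining condition H^g ∩ K' ≤ K^g ≤ H^g ≤ H' is immediate to check.

% Illustrative sketch only: one morphism of each type in the category of p-sections of G = C_2 x C_2.
\[
\underbrace{1\colon (G,N_1)\longrightarrow (G,1)}_{\text{quotient type: } G/N_1 \text{ is a quotient of } G/1=G}
\qquad
\underbrace{1\colon (N_1,1)\longrightarrow (G,1)}_{\text{inclusion type: } N_1/1 \le G/1}
\qquad
\underbrace{g\colon (H,K)\longrightarrow (H^g,K^g)}_{\text{conjugation type (trivial here, as } G \text{ is abelian)}}
\]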
Every morphism g (H,K)→ (H',K') in is a composition of three morphisms of the above types <ref>, <ref> and <ref> in the following canonical way:
(H,K) ⟶ (H, H∩^gK') ⟶ (^gH', ^gK') ⟶ (H',K')
where the first two are given by 1∈ G and the last is given by g.
In particular, the rank of the elementary abelian group H/K increases or stays the same along any morphism (H,K)→ (H',K') in this category, as this is true with <ref>, <ref> and <ref>.
To every object (H,K) in , we associate the tt-category (H/K)=((H/K;)).
For every morphism g (H,K)→ (H',K') in , we set K̅=K/(H∩^gK') and we define a functor of tt-categories:
K(g) : K(H'/K') →^{c_g^*} K(^gH'/^gK') →^{Res} K(H/(H∩^gK')) →^{Ψ^K̄} K(H/K)
using that H/(H∩^gK') is a subgroup of ^gH'/^gK' for the restriction, and using that (H/(H∩^gK'))/K̅=H/K for the modular fixed-points functor Ψ^K̅.
It follows from <cit.> that K(−) is a contravariant (pseudo) functor on the category of elementary abelian p-sections, with values in tt-categories.
We can compose this with Spc(−), which incidentally makes the coherence of the 2-isomorphisms accompanying (<ref>) irrelevant, and obtain a covariant functor from the category of sections to topological spaces.
Let us compare this diagram of spaces (and its colimit) with the space .
For each (H,K)∈, we have a tt-functor
K(G) →^{Res^G_H} K(H) →^{Ψ^K} K(H/K)
which yields a natural transformation from the constant functor (H,K)↦(G) to the functor → of (<ref>).
The above Ψ^K is Ψ^K H.
Since H ≤ N_G K, the tt-functor (<ref>) is also the composite Res^{W_G K}_{H/K} ∘ Ψ^{K,G} : K(G) → K(W_G K) → K(H/K).
Applying Spc(−) to this observation, we obtain a commutative square:
Spc(K(H/K)) ──ψ^{K,H}──→ Spc(K(H))
      │ρ_{H/K}                     │ρ_H
      ↓                            ↓
Spc(K(W_G K)) ──ψ^{K,G}──→ Spc(K(G))
whose diagonal we baptize φ_(H,K).
In summary, we obtain a continuous map
φ : colim_{(H,K)} Spc(K(H/K)) ⟶ Spc(K(G))
whose component φ_(H,K) at (H,K) is the diagonal map in (<ref>).
* Each of the maps ((g))((H/K))→((H'/K')) in the colimit diagram (<ref>) is a closed immersion.
* Each of the components φ_(H,K)((H/K))→ of (<ref>) is closed and preserves the dimension of points (the Krull dimension of their closure).
These statements follow from two facts, see <Ref>:
When N G is normal the map ψ^N((G/N))((G)) is a closed immersion.
When H≤ G is any subgroup, the map ρ_H((H))→((G)) is closed, hence lifts specializations, and it moreover satisfies `Incomparability' by <cit.>.
We are now ready to prove <Ref>:
For any finite group G, the map φ in (<ref>) is a homeomorphism.
Each component φ_(H,K) is a closed map and thus φ is a closed map.
For surjectivity, by <Ref>, we know that is covered by the subsets ψ^K(), over all p-subgroups K≤ G. Hence it suffices to know that the (ρ_E) cover =(()) as E≤ runs through all elementary p-subgroups. (Such an E must be of the form H/K for an object (H,K)∈.) This holds by a classical result of Quillen <cit.>; see <cit.>.
The key point is injectivity.
Take ∈((H/K)) and '∈((H'/K')) with same image in .
Write =_H/K(L/K,) for suitable arguments (K≤ L≤ H, ∈H/L) and note that the map induced by 1(H,L)→ (H,K) in sends _H/L(1,)∈((H/L)) to . So we may assume L=K.
By <cit.>, the image of =_H/K(1,) in is _G(K,ρ̅()) where ρ̅H/K→ is induced by restriction.
Similarly, we may assume '=_H'/K'(1,') for '∈H'/K' and we have _G(K,ρ̅())=_G(K',ρ̅'(')) in and need to show that and ' are identified in the colimit (<ref>).
By <cit.>, the relation _G(K,ρ̅())=_G(K',ρ̅'(')) can only hold because of G-conjugation, meaning that there exists g∈ G such that K'=K^g and ρ̅'(')=ρ̅()^g in GK'.
Using the map g(H,K)→ (H^g,K^g) in we may replace H,K, by H^g,K^g,^g and reduce to the case K=K'. In other words, we have two points =_H/K(1,)∈((H/K)) and '=_H'/K(1,')∈((H'/K)) corresponding to two p-subgroups H,H'≤ G containing the same normal subgroup K and two cohomological primes ∈H/K and '∈H'/K such that ρ̅()=ρ̅'(') in under the maps ρ̅ and ρ̅' induced by restriction along H/K≤ and H'/K≤ respectively.
If we let G̅==(N_G K)/K, we have two elementary abelian p-subgroups H̅=H/K and H̅'=H'/K of G̅, each with a point in their cohomological open, ∈H̅ and '∈H̅', and those two points have the same image in the cohomological open G̅ of the `ambient' group G̅. By Quillen <cit.> (or <cit.>) again, we know that this coalescence must happen because of an element g̅∈G̅, that is, a g∈ N_G K, and a prime ∈H̅∩^gH̅' that maps to and to ' under the maps H̅∩^gH̅'̅→H̅ and H̅∩^gH̅'̅→H̅'̅ respectively.
But our category contains all such conjugation-inclusion morphisms coming from the orbit category of G. Specifically, we have two morphisms
1(H∩^gH',K)→ (H,K) and g(H∩^gH',K)→ (H',K) in , under which the point _(H∩^gH')/K(1,) maps to _H/K(1,)= and _H'/K(1,')=' respectively. This shows that =' in the domain of (<ref>) as required.
By <cit.>, the space is noetherian. Hence the topology is entirely characterized by the inclusion of primes.
Now, suppose that is the image under φ_(H,K)→ of some '∈ for an elementary abelian subquotient E=H/K corresponding to a section (H,K)∈.
Then the only way for another prime ∈ to belong to the closure of is to be itself the image of some point ' of in the closure of '.
This follows from <Ref>.
In other words, the question of inclusion of primes can also be reduced to the elementary abelian case.
§ INVERTIBLE OBJECTS AND TWISTED COHOMOLOGY
In this section we introduce a graded ring whose homogeneous spectrum helps us understand the topology on ((G)), at least for G elementary abelian.
This graded ring, called the twisted cohomology ring (<Ref>), consists of morphisms between and certain invertible objects.
It all starts in the cyclic case.
Let C_p=σ|σ^p=1 be the cyclic group of prime order p, with a chosen generator.
We write kC_p = k[σ]/(σ^p−1) as k[τ]/τ^p for τ = σ−1.
Then the coaugmentation and augmentation maps become:
η : k → kC_p, 1 ↦ τ^{p−1},  and  ε : kC_p → k, τ ↦ 0.
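As a small worked illustration, added here and following directly from the definitions above, these maps read as follows for the first two primes:

% Illustrative computation, not part of the original text.
\[
p=2:\quad \eta(1)=\tau=\sigma-1,\quad \varepsilon(1)=1,\ \varepsilon(\tau)=0;
\qquad
p=3:\quad \eta(1)=\tau^{2}=\sigma^{2}-2\sigma+1,\quad \varepsilon(\tau)=0.
\]

In particular ε∘η = 0, since ε(τ^{p−1}) = 0 for every prime p.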
For p odd, we denote the first terms of the `standard' minimal resolution of k by
u_p = (0 → kC_p →^{τ} kC_p →^{ε} k → 0).
We view this in K(C_p) with k in homological degree zero.
One can verify directly that u_p is ⊗-invertible, with u_p^{⊗−1} = u_p^∨ ≅ (0 → k →^{η} kC_p →^{τ} kC_p → 0).
Alternatively, one can use the conservative pair of residue functors K(C_p) → D_b(k) associated to H ∈ {C_p, 1} (namely Res_1 ∘ Ψ^H), corresponding to the only closed points (C_p) and (1) of Spc(K(C_p)). Those functors map u_p to the ⊗-invertibles k and k[2] in D_b(k), respectively.
For p = 2, we have a similar but shorter ⊗-invertible object in K(C_2)
u_2 = (0 → kC_2 →^{ε} k → 0)
again with k in degree zero.
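For p = 2 one can carry out the analogous check by hand; the following sketch is added here for illustration and only uses the description Ψ^H(k(X)) ≅ k(X^H) recalled in the introduction:

% Illustrative check of the two residues of u_2 (added, not in the original).
\[
\Psi^{C_2}(u_2) = \bigl(0 \to k(C_2^{\,C_2}) \to k \to 0\bigr) = \bigl(0 \to 0 \to k \to 0\bigr) \simeq k,
\qquad
\mathrm{Res}_1(u_2) = \bigl(0 \to k^{\oplus 2} \xrightarrow{\ \varepsilon\ } k \to 0\bigr) \simeq k[1],
\]

since C_2 acts freely on itself (so the fixed points are empty) and ε is surjective with one-dimensional kernel sitting in degree 1.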
To avoid constantly distinguishing cases, we abbreviate
2' := 2 if p > 2, and 2' := 1 if p = 2.
For any finite group G and any index-p normal subgroup N, we can inflate the ⊗-invertible u_p of <Ref> along π : G ↠ G/N ≅ C_p to a ⊗-invertible in K(G).
Let N ⊴ G be a normal subgroup of index p. We define
u_N := (0 → k(G/N) →^{τ} k(G/N) →^{ε} k → 0)  if p is odd,  and  u_N := (0 → k(G/N) →^{ε} k → 0)  if p = 2,
with k in degree zero. We also define two morphisms
a_N : 𝟙 → u_N  and  b_N : 𝟙 → u_N[−2']
as follows. The morphism a_N is given by the identity of k in degree zero and by zero in every other degree, independently of p.
The morphism b_N is given by η : k → k(G/N) in degree zero and by zero in every other degree, where the target u_N is shifted once to the right for p = 2 and twice for p > 2, so that the degree-zero term of u_N[−2'] is k(G/N).
When p is odd there is furthermore a third morphism c_N : 𝟙 → u_N[−1], defined to be η : k → k(G/N) in degree zero.
This c_N will play a lesser role.
In statements made for all primes p, simply ignore c_N in the case p=2 (or think c_N=0).
Here is an example of such a statement, whose meaning should now be clear: The morphisms a_N and b_N, and c_N (for p odd), are inflated from G/N.
Technically, u_N depends not only on an index-p subgroup N ⊴ G but also on the choice of a generator of G/N, to identify G/N with C_p. If one needs to make this distinction, one can write u_π for a chosen epimorphism π : G ↠ C_p. This does not change the isomorphism type of u_N, namely Ker(π) = Ker(π') implies u_π ≅ u_{π'}.
(We expand on this topic in <Ref>.)
Let N G be a normal subgroup of index p and let q≥ 1. Then there is a canonical isomorphism in (G)
u_N^{⊗q} ≅ ( 0 → k(G/N) →^{τ} k(G/N) →^{τ^{p−1}} ⋯ →^{τ} k(G/N) →^{ε} k → 0 )
where the first k(G/N) sits in homological degree 2'·q and k sits in degree 0.
It is an exercise over the cyclic group C_p.
Then inflate along G G/N.
The morphism b_N : 𝟙 → u_N[−2'] of <Ref> is a quasi-isomorphism and the fraction
ζ_N := (b_N[2'])^{−1} ∘ a_N : 𝟙 → 𝟙[2']
is a well-known morphism ζ_N ∈ Hom_{D_b(kG)}(𝟙,𝟙[2']) = H^{2'}(G,k) in the derived category D_b(kG). For G elementary abelian, these ζ_N generate the cohomology k-algebra H^•(G,k), on the nose for p = 2 and modulo nilpotents for p odd.
We sometimes write ζ^+_N=a_N/b_N for ζ_N in order to distinguish it from the inverse fraction ζ^-_N:=b_N/a_N that exists wherever a_N is inverted.
Of course, when both a_N and b_N are inverted, we have ζ^-_N = (ζ^+_N)^{−1} = ζ_N^{−1}.
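For the reader's convenience we recall the classical computation alluded to here; these are standard facts about group cohomology, added for background and not claims made in this text. For E elementary abelian of rank r one has:

% Standard background, recorded here for reference.
\[
H^{\bullet}(E,k) \;\cong\;
\begin{cases}
k[\zeta_1,\dots,\zeta_r], & \deg\zeta_i=1, \quad p=2,\\
\Lambda(\xi_1,\dots,\xi_r)\otimes k[\zeta_1,\dots,\zeta_r], & \deg\xi_i=1,\ \deg\zeta_i=2, \quad p \text{ odd},
\end{cases}
\]

so that in both cases the reduced ring H^{\bullet}(E,k)_{red} is a polynomial ring on r generators of degree 2'.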
The switch of factors (12) u_N⊗ u_N≅ u_N⊗ u_N can be computed directly to be the identity (over C_p, then inflate).
Alternatively, it must be multiplication by a square-one element of ()=^×, hence ±1.
One can then apply the tensor-functor Ψ^G(G)→(), under which u_N goes to , to rule out -1.
It follows that for p odd, u_N[-1] has switch -1, and consequently every morphism → u_N[-1] must square to zero. In particular c_N⊗ c_N=0.
This nilpotence explains why c_N will play no significant role in the topology.
We can describe the image under modular fixed-points functors of the ⊗-invertible objects u_N and of the morphisms a_N and b_N. (We leave c_N as an exercise.)
Let H ⊴ G be a normal p-subgroup. Then for every index-p normal subgroup N ⊴ G, we have in K(G/H)
Ψ^H(u_N) ≅ u_{N/H} if H ≤ N,  and  Ψ^H(u_N) ≅ 𝟙 if H ≰ N,
and under this identification
Ψ^H(a_N) = a_{N/H} if H ≤ N,  and  Ψ^H(a_N) = 1_𝟙 if H ≰ N,
and
Ψ^H(b_N) = b_{N/H} if H ≤ N,  and  Ψ^H(b_N) = 0 if H ≰ N.
Direct from <Ref> and Ψ^H((X))≅(X^H) for X=G/N.
For restriction, there is an analogous pattern but with the cases `swapped'.
Let H ≤ G be a subgroup. Then for every index-p normal subgroup N ⊴ G, we have in K(H)
Res^G_H(u_N) ≅ 𝟙[2'] if H ≤ N,  and  Res^G_H(u_N) ≅ u_{N∩H} if H ≰ N,
and under this identification
Res^G_H(a_N) = 0 if H ≤ N,  and  Res^G_H(a_N) = a_{N∩H} if H ≰ N,
and
Res^G_H(b_N) = 1_𝟙 if H ≤ N,  and  Res^G_H(b_N) = b_{N∩H} if H ≰ N.
Direct from <Ref> and the Mackey formula for ^G_H((G/N)).
We can combine the above two propositions and handle Ψ^H for non-normal H, since by definition Ψ^H G=Ψ^H N_G H∘^G_N_G H.
Here is an application of this.
Let H≤ G be a p-subgroup and N G of index p. Recall the `residue' tt-functor ^H=_1∘Ψ^H(G)→() at the closed point (H).
* If H≰N then ^H(a_N) is an isomorphism.
* If H≤ N then ^H(b_N) is an isomorphism.
We apply <Ref> for N_G H≤ G and <Ref> for H N_G H.
For (a), H≰N forces N_G H≰N and H≰N∩ N_G H. Hence Ψ^H(a_N)=Ψ^H N_G H_N_G H(a_N)=Ψ^H N_G H(a_N∩ N_G H)=1_ is an isomorphism.
Similarly for (b), if N_G H≤ N then Ψ^H(b_N) is an isomorphism and if N_G H≰N it is the quasi-isomorphism b_(N∩ N_G H)/H.
Thus ^H(b_N) is an isomorphism in ().
Let us now prove that the morphisms a_N and b_N, and c_N (for p odd), generate all morphisms from the unit to tensor products of u_N's.
This is a critical fact.
Let N_1,…,N_ℓ be index-p normal subgroups of G and abbreviate u_i:=u_N_i for i=1,…,ℓ and similarly a_i:=a_N_i and b_i:=b_N_i and c_i:=c_N_i (see <Ref>).
Let q_1,…,q_ℓ ∈ ℕ be non-negative integers and s ∈ ℤ. Then every morphism f : 𝟙 → u_1^{⊗q_1} ⊗ ⋯ ⊗ u_ℓ^{⊗q_ℓ}[s] in K(G) is a k-linear combination of tensor products of (a `polynomial' in) the morphisms a_i and b_i, and c_i (for p odd).
We proceed by induction on ℓ. The case ℓ=0 is just _(G)^()=. Suppose ℓ≥ 1 and the result known for ℓ-1.
Up to reducing to ℓ-1, we can assume that the N_1,…,N_ℓ are all distinct.
Set for readability
v := u_1^{⊗q_1} ⊗ ⋯ ⊗ u_{ℓ−1}^{⊗q_{ℓ−1}}[s],   N := N_ℓ,   u := u_ℓ = u_N   and   q := q_ℓ,
so that f is a morphism of the form
f : 𝟙 → v ⊗ u^{⊗q}.
We then proceed by induction on q≥ 0. We assume the result known for q-1 (the case q=0 holds by induction hypothesis on ℓ).
The proof will now depend on p.
Suppose first that p=2. Consider the exact triangle in (G)
u^{⊗(q−1)} ──a_N──→ u^{⊗q} ──→ k(G/N)[q]
where k sits in degree zero and, in terms of the description of <Ref>, the first map is the evident inclusion of complexes (the identity in degrees 0,…,q−1 and zero in degree q) while the second map is the projection onto the degree-q component k(G/N).
Tensoring the above triangle with v and applying Hom_G(𝟙,−) := Hom_{K(G)}(𝟙,−) we get an exact sequence
Hom_G(𝟙, v⊗u^{⊗(q−1)}) ──·a_N──→ Hom_G(𝟙, v⊗u^{⊗q}) ──→ Hom_G(𝟙, v⊗k(G/N)[q]).
Our morphism f belongs to the middle group. By adjunction, the right-hand term is _N(,^G_N(v)[q]). Now since all N_1,…,N_ℓ=N are distinct, we can apply <Ref> to compute ^G_N(v) and we know by induction hypothesis (on ℓ) that the image f' of our f in this group _N(,^G_N(v)[q]) is a -linear combination of tensor products of a_N_i∩ N and b_N_j∩ N for 1≤ i,j≤ℓ-1, performed over the group N. We can perform the `same' -linear combination of tensor products of a_i's and b_j's over the group G, thus defining a morphism f”∈_G(,v[q]). We can now multiply f” with b_Nq→ u_Nq[-q] to obtain a morphism f” b_N^q in the same group _G(,v⊗ uq) that contains f. Direct computation shows that the image of this f”b_N^q in _N(,_N(v)[s]) is also equal to f'. The key point is that b_Nq is simply η→(G/N) in degree q and this η is also the unit of the ^G_N_N^G adjunction.
In other words, the difference f-f” b_N^q comes from the left-hand group _G(,v⊗ u(q-1)) in the exact sequence (<ref>), reading
f=f”b_N^q+f”' a_N
for some f”'∈_G(,v⊗ u(q-1)). By induction hypothesis (on q), f”' is a polynomial in a_i's and b_j's. Since f” also was such a polynomial, so is f.
The proof for p odd follows a similar pattern of induction on q, with one complication. The cone of the canonical map a_N u_N(q-1)→ u_Nq is not simply (G/N) in a single degree as in (<ref>) but rather the complex
C := ( 0 → k(G/N) →^{τ} k(G/N) → 0 )
with (G/N) in two consecutive degrees 2q and 2q-1. So the exact sequence
Hom_G(𝟙, v⊗u^{⊗(q−1)}) ──·a_N──→ Hom_G(𝟙, v⊗u^{⊗q}) ──→ Hom_G(𝟙, v⊗C)
has a more complicated third term than the one of (<ref>). That third term _G(,v⊗ C) itself fits in its own exact sequence associated to the exact triangle (G/N)[2q-1]τ(G/N)[2q-1]→ C →(G/N)[2q]. Each of the terms _G(,v⊗(G/N)[∗])≅_N(,^G_N(v)[∗]) can be computed as before, by adjunction.
The image of f in _N(,^G_N(v)[2q]) can again be lifted to a polynomial f'b^q_N:→ v⊗ uq so that the image of the difference f-f'b^q_N in _G(,v⊗ C) comes from some element in _N(,^G_N(v)[2q-1]).
That element may be lifted to a polynomial f”b^q-1_Nc_N:→ v⊗ uq, and we obtain
f=f'b_N^q+f”b_N^q-1c_N+f”'a_N
for some f”'∈_G(,v⊗ u(q-1)) similarly as before.
We can now assemble all the hom groups of <Ref> into a big graded ring.
We denote the set of all index-p normal subgroups of G by
=(G):=N G [G:N]=p.
Let ^=^(G)={q(G)→} be the monoid of twists, tuples of non-negative integers indexed by this finite set.
Consider the (×^)-graded ring
(G)=(G;) := ⊕_s∈ ⊕_q∈^_(G)( , ⊗_N∈u_Nq(N)[s]).
Its multiplication is induced by the tensor product in (G).
We call (G) the (permutation) twisted cohomology ring of G.
It is convenient to simply write
(q)=⊗_N∈(u_N)q(N)
for every twist q∈^(G) and thus abbreviate ^s,q(G)=(,(q)[s]).
The graded ring (G) is graded-commutative by using only the parity of the shift, not the twist; see <Ref>.
In other words, we have
h_1· h_2= (-1)^s_1· s_2 h_2· h_1whenh_i∈^s_i,q_i(G).
For instance, for p odd, when dealing with the morphisms a_N and b_N, which land in even shifts of the object u_N, we do not have to worry too much about the order.
This explains the `unordered' notation ζ_N=a_N/b_N used in <Ref>.
The critical <Ref> gives the main property of this construction:
The twisted cohomology ring (G) of <Ref> is a -algebra generated by the finitely many elements a_N and b_N, and c_N (for p odd), of <Ref>, over all N G of index p. In particular (G) is noetherian.
The reader can verify by hand that (C_2) = k[a_N, b_N], without relations, and that (C_p) = k[a_N, b_N, c_N]/⟨c_N^2⟩ for p odd, where in both cases N = 1 is the only N ∈ ℳ(C_p).
This example is deceptive, for the {a_N,b_N,c_N}_N∈ usually satisfy some relations, as the reader can already check for G=C_2× C_2 for instance.
We systematically discuss these relations in <Ref>.
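Assuming the polynomial description of (C_2) just stated, one can spell out its bigraded pieces explicitly; this small computation is added for illustration and only uses that a has degree (s,q) = (0,1) while b has degree (−1,1):

% Illustrative consequence of T(C_2) = k[a,b] (added, not in the original).
\[
\mathrm{Hom}_{K(C_2)}\bigl(\mathbb{1},\, u^{\otimes q}[s]\bigr)\;=\;
\begin{cases}
k\cdot a^{\,q+s}\,b^{\,-s}, & -q\le s\le 0,\\
0, & \text{otherwise.}
\end{cases}
\]

For instance the twist-one component is spanned by a in shift 0 and by b in shift −1, and Hom(𝟙, u_N[1]) = 0.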
We conclude this section with some commentary.
The name `cohomology' in <Ref> is used in the loose sense of a graded endomorphism ring of the unit in a tensor-triangulated category.
However, since we are using the tt-category (G) and not (), the ring (G) is quite different from ^(G,) in general.
In fact, (G) could even be rather dull.
For instance, if G is a non-cyclic simple group then (G)=∅ and (G)=.
We will make serious use of (G) in <Ref> to describe for G elementary abelian.
In that case, ^(G,) is a localization of (G).
See <Ref>.
By <Ref>, there is no `collision' in the twists: If there is an isomorphism (q)[s]≃(q')[s'] in (G) then we must have q=q' in ^ and s=s' in .
The latter is clear from ^G(u_N)≅ in (), independently of N. We then conclude from ^N((q))≃[2'q(N)] in (), for each N∈.
We only use positive twists q(N) in (<ref>).
The reader can verify that already for G=C_p cyclic, the ^2-graded ring ⊕_(s,q)∈^2(,u_pq[s]) is not noetherian.
See for instance <cit.> for p=2.
However, negatively twisted elements tend to be nilpotent.
So the ×^-graded version of (G) may yield the same topological information as our ×^-graded one.
We have not pushed this investigation of negative twists, as it brought no benefit to our analysis.
§ AN OPEN COVER OF THE SPECTRUM
In this section, we extract some topological information about from the twisted cohomology ring (G) of <Ref> and the maps a_N and b_N of <Ref>, associated to every index-p normal subgroup N in =(G).
Recall from <cit.> that we can use tensor-induction to associate to every subgroup H≤ G a Koszul object [G]H=_H^G(0→1→ 0). It generates in (G) the tt-ideal (^G_H), see <cit.>:
[G]H_(G)=(^G_H(G)→(H)).
Let N G be a normal subgroup of index p. Then we have:
*
In (G), the object (a_N) generates the same thick subcategory as (G/N). In particular, ((a_N))=((G/N)).
*
In (G), the object (b_N) generates the same thick tensor-ideal as [G]N. In particular, ((b_N))=([G]N)=((^G_N)).
For p=2, we have (a_N)=(G/N)[1] so the first case is clear. For p odd, we have (a_N)[-1]≃(0→(G/N)τ(G/N)→ 0)=(τ(G/N)). Hence (a_N)∈((G/N)). Conversely, since τ^p=0, the octahedron axiom inductively shows that (G/N)∈((τ(G/N))). This settles <ref>.
For <ref>, the complex s:=(b_N)[2'] becomes split exact when restricted to N since it is inflated from an exact complex on G/N. In degree one we have s_1=(G/N), whereas s_0=. Hence <cit.> tells us that the complex s generates the tt-ideal (^G_N(G)→(N)). We conclude by (<ref>).
Let N G be of index p. Then (a_N)⊗(b_N)=0.
By <Ref> it suffices to show (G/N)⊗[G]N=0. By Frobenius, this follows from ^G_N([G]N)=0, which holds by (<ref>).
We now relate the spectrum of (G) to the homogeneous spectrum of (G), in the spirit of <cit.>. The comparison map of <cit.> is denoted by ρ^ but we prefer a more descriptive notation (and here, the letter ρ is reserved for ()).
There is a continuous `comparison' map
_G((G))
mapping a tt-prime to the ideal generated by those homogeneous f∈(G) whose cone does not belong to .
It is characterized by the fact that for all f
the preimage of Z(f) equals
supp(cone(f)) = { 𝔭 | f is not invertible in K(G)/𝔭 },
where Z(f) = { 𝔭 | f ∈ 𝔭 } is the closed subset of Spec^h((G)) defined by f.
The fact that the homogeneous ideal _G() is prime comes from <cit.>.
Equation (<ref>) is essentially a reformulation of the definition.
The usual notation for Z(f) would be V(f), and D(f) for its open complement. Here, we already use V for G and for G(H), and the letter D is certainly overworked in our trade.
So we stick to Z(f) and Z(f)^c.
In view of <Ref>, for any f, the open subset of Spc(K(G))
U(f) := U(cone(f)) = { 𝔭 | f is invertible in K(G)/𝔭 }
is the preimage, under the comparison map Spc(K(G)) → Spec^h((G)), of the principal open Z(f)^c = { 𝔭 | f ∉ 𝔭 }.
It is the open locus of where f is invertible.
In particular, our distinguished elements a_N and b_N (see <Ref>) give us the following open subsets of , for every N∈(G):
U(a_N), the preimage of Z(a_N)^c, the open where a_N is invertible, and
U(b_N), the preimage of Z(b_N)^c, the open where b_N is invertible.
Since (c_N)^2=0 by <Ref>, we do not have much use for (c_N)=∅.
With notation as above, we have for every N G of index p
U(a_N) ∪ U(b_N) = Spc(K(G)).
We compute U(a_N) ∪ U(b_N) = U(cone(a_N)) ∪ U(cone(b_N)) = U(cone(a_N) ⊗ cone(b_N)) = U(0) = Spc(K(G)), using <Ref>.
Every object u_N is not only ⊗-invertible in (G) but actually locally trivial over , which is a stronger property in general tt-geometry.
Indeed, <Ref> tells us that around each point of , either u_N becomes isomorphic to via a_N, or u_N becomes isomorphic to [2'] via b_N.
This holds for one invertible u_N.
We now construct a fine enough open cover of such that every u_N is trivialized on each open.
Let H ≤ G be a p-subgroup. Define an open subset H = [G]H of Spc(K(G)) by
[G]H := ⋂_{N∈ℳ : H≰N} U(a_N) ∩ ⋂_{N∈ℳ : H≤N} U(b_N).
Then the closed point (H)∈ belongs to this open H. Consequently {H}_H∈p(G) is an open cover of .
The point (H)=(^H) belongs to H by <Ref>.
It follows by general tt-geometry that {H}_H is a cover:
Let ∈; there exists a closed point in , that is, some (H) that admits as a generalization; but then (H)∈H forces ∈H since open subsets are generalization-closed.
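As a sanity check, added here and anticipating the cyclic example treated later in this section's sequel, one can unwind this definition for G = C_p, whose unique index-p subgroup is the trivial one, N = 1:

% Illustrative instance of the open cover for G = C_p (added, not in the original).
\[
[C_p]1 \;=\; U(b_N)\;\ni\;(1),
\qquad
[C_p]C_p \;=\; U(a_N)\;\ni\;(C_p),
\qquad
[C_p]1 \,\cup\, [C_p]C_p \;=\; \mathrm{Spc}(K(C_p)),
\]

the last equality being the lemma above stating that U(a_N) ∪ U(b_N) covers the spectrum.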
For a p-group, we now discuss H at the two extremes H=1 and H=G.
Let G be a p-group and F=F(G)=∩_N∈(G)N be its Frattini subgroup. So F G and G/F is the largest elementary abelian quotient of G.
Let G be a p-group with Frattini subgroup F.
The closed complement of the open [G]1 is the support of [G]F, the closed support of the tt-ideal (^G_F) of (G).
In particular, if G is elementary abelian then [G]1 is equal to the cohomological open G=(( G))≅(^(G,)).
By definition, 1=∩_N∈(b_N). By <Ref>, its closed complement is ∪_N∈([G]N).
By <cit.>, for every K≤ G
supp([G]K) = { 𝔭_G(H,𝔮) | H ≰_G K }
(taking all possible points 𝔮 in the corresponding cohomological open). It follows that our closed complement of [G]1 is
⋃_{N∈ℳ(G)} supp([G]N) = { 𝔭_G(H,𝔮) | ∃ N ∈ ℳ(G) such that H ≰_G N }   (by (<ref>))
 = { 𝔭_G(H,𝔮) | H ≰ ⋂_{N∈ℳ(G)} N }
 = { 𝔭_G(H,𝔮) | H ≰ F }
 = supp([G]F)   (by (<ref>)).
The statement with (^G_F) then follows from (<ref>). Finally, if G is elementary abelian then F=1 and (^G_1)=(G) is the tt-ideal of acyclic complexes. The complement of its support is ((G)/(G))=(())=G.
In the above proof, we showed that ∪_N∈(N)=(F) thanks to the fact that ∩_N∈N=F. So the very same argument gives us:
Let G be a p-group and let N_1,…,N_r∈(G) be some index-p subgroups such that N_1∩⋯∩ N_r is the Frattini subgroup F.
(This can be realized with r equal to the p-rank of G/F.)
Then [G]1=∩_i=1^r(b_N_i) already.
Hence if ∈(b_N_i) for all i=1,…,r then ∈(b_N) for all N∈(G).
Let us turn to the open [G]H for the p-subgroup at the other end: H=G.
Let G be a p-group. Then the complement of the open [G]G is the union of the images of the spectra ((H)) under the maps ρ_H=(_H), over all the proper subgroups H≨ G.
By <Ref>, the closed complement of G=∩_N∈(a_N) equals ∪_N∈((G/N)).
For every H≤ G, we have ((G/H))=(ρ_H); see <cit.> if necessary.
This gives the result because restriction to any proper subgroup factors via some index-p subgroup, since G is a p-group.
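Continuing the cyclic sanity check (an illustration added here, combining the proposition above with the earlier description of supp(cone(a_N))): for G = C_p the only proper subgroup is the trivial one, so

% Illustrative instance for G = C_p (added, not in the original).
\[
\mathrm{Spc}(K(C_p)) \smallsetminus [C_p]C_p \;=\; \mathrm{Im}(\rho_1) \;=\; \mathrm{supp}(kC_p) \;=\; \{(1)\},
\]

since Spc(D_b(k)) is a single point and its image under ρ_1 is the kernel of Res_1, which is the closed point (1) recalled in the introduction.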
Let G be a p-group.
This open complement G of ∪_H≨ G(ρ_H) could be called the `geometric open'.
Indeed, the localization functor
Φ^G : K(G) ⟶ K(G) / ⟨ k(G/H) : H ⪇ G ⟩
corresponding to G is analogous to the way the geometric fixed-points functor is constructed in topology. For more on this topic, see <cit.>.
For G not a p-group, the open G is not defined (we assume H∈p(G) in <Ref>) and the `geometric open' is void anyway as we have (ρ_P)= for any p-Sylow P≨ G.
The strategy to analyze non-p-groups is to first descend to the p-Sylow, using that _P is faithful.
We saw in <Ref> that the complement of G is covered by the images of the closed maps ρ_H=(_H) for H≨ G. We could wonder whether another closed map into covers G itself. The answer is the closed immersion ψ^F((G/F))((G)) induced by the modular fixed-points functor Ψ^F with respect to the Frattini subgroup F G.
This can be deduced from the results of <Ref> or verified directly, as we now outline.
Indeed, every prime =_G(K,) for K≤ G and ∈ comes by Quillen from some elementary abelian subgroup E=H/K≤=(N_G K)/K.
One verifies that unless N_G K=G and H=G, the prime belongs to the image of ρ_G' for a proper subgroup G' of G. Thus if belongs to G, we must have E=H/K=G/K for K G. Such a K must contain the Frattini and the result follows.
§ TWISTED COHOMOLOGY UNDER TT-FUNCTORS
Still for a general finite group G, we gather some properties of the twisted cohomology ring (G) introduced in <Ref>.
We describe its behavior under specific tt-functors, namely restriction, modular fixed-points and localization onto the open subsets [G]H.
Recall that =(G)=N G[G:N]=p.
Twisted cohomology (G) is graded over a monoid of the form .
The ring homomorphisms induced by the above tt-functors will be homogeneous with respect to a certain homomorphism γ on the corresponding grading monoids, meaning of course that the image of a homogeneous element of degree (s,q) is homogeneous of degree γ(s,q).
The `shift' part (in ) is rather straightforward.
The `twist' part (in ^ℓ) will depend on the effect of said tt-functors on the u_N.
Let us start with modular fixed-points, as they are relatively easy.
Let H G be a normal subgroup.
By <Ref>, the tt-functor Ψ^H(G)→(G/H) maps every u_N for N≱H to , whereas it maps u_N for N≥ H to u_N/H.
This defines a homomorphism of grading monoids
γ = γ_{Ψ^H} : ℤ × ℕ^{ℳ(G)} ⟶ ℤ × ℕ^{ℳ(G/H)}
given by γ(s,q)=(s,q̅) where q̅(N/H)=q(N) for every N/H∈(G/H). In other words, q↦q̅ is simply restriction ^(G)^(G/H) along the canonical inclusion (G/H)(G).
By <Ref>, for every twist q∈^(G), we have a canonical isomorphism Ψ^H((q))≅(q̅).
Therefore the modular fixed-points functor Ψ^H defines a ring homomorphism, also denoted
Ψ^H : (G) ⟶ (G/H),   ( f : 𝟙 → (q)[s] ) ⟼ ( Ψ^H(f) : 𝟙 → Ψ^H((q)[s]) ≅ (q̄)[s] ),
which is homogeneous with respect to γ_Ψ^H in (<ref>).
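For illustration (added here; it follows from the modular fixed-points formulas above together with the example (C_2) = k[a,b] recorded earlier, and uses that the trivial group has empty ℳ, so its twisted cohomology is just k), the cyclic case with H = C_2 and N = 1 reads:

% Illustrative instance of the induced homomorphism (added, not in the original).
\[
\Psi^{C_2}\colon (C_2)=k[a,b] \;\longrightarrow\; (C_2/C_2)=k,
\qquad a\mapsto 1,\quad b\mapsto 0,
\]

since C_2 ≰ N = 1, so that Ψ^{C_2}(u_1) ≅ 𝟙, Ψ^{C_2}(a_1) = 1_𝟙 and Ψ^{C_2}(b_1) = 0.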
Restriction is a little more subtle, as some twists pull-back to non-trivial shifts.
Let α G'→ G be a group homomorphism. Restriction along α defines a tt-functor α^*=^α_G'∘^G_α(G)→(α)→(G').
Combining <Ref> for ^G_α with the obvious behavior of the u_N under inflation (by construction), we see that α^*(u_N)≅[2'] if N≥α and α^*(u_N)≅ u_α(N) if N≱α (which is equivalent to α(N)∈(G')).
Hence for every (s,q)∈×^(G) we have a canonical isomorphism α^*((q)[s])≅(q')[s'] where s'=s+2'∑_N≥αq(N) and q'(G')→ is defined for every N'∈(G') as
q'(N')=∑_N∈(G) s.t. α(N)=N'q(N).
(In particular q'(N')=0 if N'≱(α).)
These formulas define a homomorphism (s,q) ↦ (s',q') of abelian monoids that we denote
γ=γ_α^*×^(G)→×^(G').
The restriction functor α^* defines a ring homomorphism
@R=.1emα^*-2em
(G) [r]
(G')
(f(q)[s])
@|->[r]
(α^*(f)α^*((q)[s]) ≅(q')[s'])
which is homogeneous with respect to γ_α^* in (<ref>).
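Similarly, here is an added illustration, now using the restriction formulas of the previous section for the inclusion α : 1 ↪ C_2 of the trivial subgroup (so that H = 1 ≤ N = 1 and, for p = 2, 2' = 1):

% Illustrative instance of the restriction homomorphism (added, not in the original).
\[
\alpha^*=\mathrm{Res}^{C_2}_{1}\colon (C_2)=k[a,b] \;\longrightarrow\; (1)=k,
\qquad a\mapsto 0,\quad b\mapsto 1,
\]

with γ_{α^*}(s,q) = (s+q, 0), reflecting that Res^{C_2}_1(u_1) ≅ 𝟙[1].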
For instance, α G G/H can be the quotient by a normal subgroup H G. In that case α^* is inflation, which is a section of modular fixed-points Ψ^H. It follows that the homomorphism Ψ^H in (<ref>) is split surjective. (This also means that the composed effect on gradings γ_Ψ^H∘γ_α^*=𝕀 is trivial.)
Without changing the group G, we can also localize the twisted cohomology ring (G) by restricting to an open H of , as defined in <Ref>.
Recall the elements a_N,b_N∈(G) from <Ref>.
Let H≤ G be a p-subgroup. Let S_H⊂(G) be the multiplicative subset of the graded ring (G) generated by all a_N such that H≰N and all b_N such that H≤ N, for all N∈(G). Recall that the a_N and b_N are central by <Ref>.
We define a -graded ring
:=((G)[S_H])
as the twist-zero part of the localization of (G) with respect to S_H.
Explicitly, the homogeneous elements of consist of fractions f/g where f,g∈(G) are such that g→(q)[t] is a product of the chosen a_N,b_N in S_H, meaning that (q)[t] is the ⊗-product of the corresponding u_N for a_N and u_N[-2'] for b_N, whereas f→(q)[s] is any morphism in (G) with the same -twist q as the denominator.
Thus is -graded by the shift only: The degree of f/g is the difference s-t between the shifts of f and g.
It follows from <Ref> (and <Ref>) that the -graded ring is generated as a -algebra by the elements
{ ζ^+_N, ξ^+_N : H ≤ N } ∪ { ζ^-_N, ξ^-_N : H ≰ N }
where ζ^+_N=a_N/b_N is of degree +2' and ζ^-_N=b_N/a_N of degree -2' as in <Ref>, and where (only for p odd) the additional elements ξ^±_N are ξ^+_N:=c_N/b_N of degree +1, and ξ^-_N:=c_N/a_N of degree -1. (For p=2, simply ignore the ξ^±_N.)
In general, all these elements satisfy some relations; see <Ref>.
Beware that here ξ^-_N is never the inverse of ξ^+_N. In fact, both are nilpotent.
In fact, we can perform the central localization of the whole category (G)
(H)=_G(H):=(G)[S_H]
with respect to the central multiplicative subset S_H of <Ref>.
The tt-category (H)=(G)[S_H] has the same objects as (G) and morphisms x→ y of the form f/g where g→ u belongs to S_H, for u a tensor-product of shifts of u_N's according to g (as in <Ref>) and where f x→ u⊗ y is any morphism in (G) with `same' twist u as the denominator g.
This category K(G)[S_H^{-1}] is also the Verdier quotient of K(G) by the tt-ideal ⟨ cone(g) : g ∈ S_H ⟩ and the above fraction f/g corresponds to the Verdier fraction
x ──f──→ u ⊗ y ←──g⊗1── y.
See <cit.> if necessary.
The -graded endomorphism ring ^_(H)() of the unit in (H)=(G)[S_H] is thus the -graded ring (S_H(G))= of <Ref>.
There is a general localization U of a tt-category over a quasi-compact open U⊆ with closed complement Z. It is defined as U=(/_Z)^♮.
If we apply this to U=H, we deduce from (<ref>) that U=∩_g∈ S_H(g) has closed complement Z=∪_g∈ S_H((g)) whose tt-ideal (G)_Z is the above (g)g∈ S_H.
In other words, the idempotent-completion of our _G(H)=(G)[S_H] is exactly (G)H.
As with any localization, we know that (_G(H)) is a subspace of , given here by U=∩_g∈ S_H(g)=H.
For G=E elementary abelian and the subgroup H=1, the category _E(1)=(E)1 in <Ref> is simply the derived category _E(1)=(E), by <Ref>.
In that case, E(1)≅^(E;) is the actual cohomology ring of E.
Since H=1≤ N for all N, we are inverting all the b_N and no a_N.
As noted in <Ref>, we obtain the same ring (the cohomology of E) as soon as we invert enough b_N_1,…,b_N_r, namely, as soon as N_1∩⋯∩ N_r=1.
We again obtain an induced homomorphism of multi-graded rings.
Let H≤ G be a p-subgroup and consider the above central localization (-)H(G)_G(H).
As explained in <Ref>, the morphisms a_N and b_N give us explicit isomorphisms (u_N)H≅ if N≱H and (u_N)H≅[2'] if N≥ H.
This yields a homomorphism on the grading
γ=γ_H×^(G)→
defined by γ(s,q)=s+2'∑_N≥ Hq(N) and we obtain a ring homomorphism
(-)H(G) ^__G(H)()=
which is homogeneous with respect to the homomorphism γ_H of (<ref>).
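Again in the cyclic case, here is an added illustration of the formula just given, with ℳ(C_p) = {1}:

% Illustrative values of the grading homomorphism (added, not in the original).
\[
\gamma_{1}(s,q) \;=\; s + 2'q
\qquad\text{and}\qquad
\gamma_{C_p}(s,q) \;=\; s,
\]

matching the fact that over [C_p]1 the generator a becomes ζ^{+} = a/b of degree 2', while over [C_p]C_p the generator b becomes ζ^{-} = b/a of degree −2'.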
It is easy to verify that the continuous maps induced on homogeneous spectra by the ring homomorphisms constructed above are compatible with the comparison map of <Ref>.
In other words, if F : K(G) → K(G') is a tt-functor and if the induced homomorphism F : (G) → (G') is homogeneous with respect to γ = γ_F : ℤ×ℕ^{ℳ(G)} → ℤ×ℕ^{ℳ(G')}, for instance F = Ψ^H or F = α^* as in <Ref>, then the following square commutes:
Spc(K(G')) ──Spc(F)──→ Spc(K(G))
      │                         │
      ↓                         ↓
Spec^h((G')) ──Spec^h(F)──→ Spec^h((G))
where the vertical maps are the comparison maps of <Ref>.
This follows from F((f))≃(F(f)) in (G') for any f∈(G).
Similarly, for every H∈p(G) the following square commutes
-2em [G]H=(_G(H)) @^(->[r] [d]_-_(H) ((G)) [d]^-_G
() @^(->[r]
((G))
where the left-hand vertical map is the classical comparison map of <cit.> for the tt-category _G(H) and the ⊗-invertible [1].
The horizontal inclusions are the ones corresponding to the localizations with respect to S_H, as in <Ref>.
In fact, it is easy to verify that the square (<ref>) is cartesian, in view of [G]H=⋂_g∈ S_H(g)=⋂_g∈ S_H_G(Z(g)^c) by <Ref> and (<ref>).
We can combine the above functors. Here is a useful example.
Let H G be a normal subgroup such that G/H is elementary abelian.
Then we have a commutative square
@C=1.5em-2emG/H=(((G/H))) [r]^-ψ̌^H[d]_-_((G/H))^-≃ ((G)) [d]^-_G
(^(G/H,k)) @^(->[r]
((G))
and in particular, its diagonal _G∘ψ̌^H is injective.
The functor Ψ̌^H(G)→((G/H)) is the modular fixed-points functor Ψ^H(G)→(G/H) composed with Υ_G/H(G/H)((G/H)), which is the central localization (-)1 over the cohomological open, by <Ref>; see <Ref>.
Thus we obtain two commutative squares (<ref>) and (<ref>):
@C=1.5em(((G/H))) @^(->[r]^-υ_G/H[d]_-_((G/H))^-≃ ((G/H)) [d]^-_G/H@->[r]^(.5)ψ^H ((G)) [d]^-_G
(^(G/H,k)) @^(->[r]^- ((G/H)) @^(->[r]^- ((G))
the left-hand one for the central localization of (G/H) over the open [G/H]1=G/H, and the right-hand one for the tt-functor Ψ^H(G)→(G/H).
Note that the bottom-right map is injective because the ring homomorphism in question, Ψ^H(G)→(G/H) defined in (<ref>), is surjective by <Ref>.
§ THE ELEMENTARY ABELIAN CASE
In this central section, we apply the general constructions of <Ref> in the case of G=E elementary abelian.
We start with a key fact that is obviously wrong in general (for a non-cyclic simple group, the target space is just a point).
Let E be an elementary abelian group.
The comparison map
Spc(K(E)) ⟶ Spec^h((E))
of <Ref> is injective.
Let H,N≤ E with [E:N]=p. Suppose first that H≰N.
We use the map ψ̌^H=(Ψ̌^H)E/H→ of <Ref>.
Then
(ψ̌^H)^{-1}(U(b_N)) = (ψ̌^H)^{-1}(U(cone(b_N)))   (by definition, see (<ref>))
 = U(cone(Ψ̌^H(b_N)))   (by general tt-geometry)
 = U(cone(0 : 𝟙 → 𝟙[−2']))   (by <Ref>, since H ≰ N)
 = U(𝟙[−2'] ⊕ 𝟙[1]) = ∅,
the last step because this object contains shifted copies of the unit and hence has full support.
Thus (ψ̌^H) does not meet (b_N) when H≰N.
Suppose now that H≤ N. A similar computation as above shows that (ψ̌^H)((b_N))=(((E/H))) since in that case Ψ̌^H(b_N) is an isomorphism in ((E/H)). Therefore (ψ̌^H)⊆(b_N) when H≤ N. Combining both observations, we have
(ψ̌^H)∩(b_N)≠∅ ⟺ H≤ N.
Let now ,∈((E)) be such that _E()=_E() in ((E)).
Say =_E(H,) and =_E(K,) for H,K≤ E and ∈E/H and ∈E/K.
(See <Ref>.)
The assumption _E()=_E() implies that ∈(f) if and only if ∈(f), for every f∈(E). In particular applying this to f=b_N, we see that for every index-p subgroup N E we have ∈(b_N) if and only if ∈(b_N).
By (<ref>), this means that for every N∈(E) we have
H≤ N ⟺ K≤ N.
Since E is elementary abelian, this forces H=K. So we have two points ,∈E/H that go to the same image under E/Hψ̌^H((E))_E((E)) but we know that this map in injective by <Ref> for G=E.
In fact, we see that the open H of defined in <Ref> matches perfectly the open () of ((E)) in <Ref>.
Let E be an elementary abelian p-group. Let H ≤ E be a subgroup. Then the comparison map of <Ref> restricts to a homeomorphism
[E]H ≅ Spec^h( End^•_{K_E(H)}(𝟙) )
between the open [E]H and the homogeneous spectrum of the ℤ-graded endomorphism ring of the unit in the localization K_E(H) of K(E) over the open [E]H.
Recall the tt-category (H)=_E(H):=(E)[S_H] of <Ref>, where S_H⊂(E) is the multiplicative subset generated by the homogeneous elements a_NH≰N∪b_NH≤ N of <Ref>.
In view of <Ref>, it suffices to show that the map _(H)((H))→() is a homeomorphism.
We have injectivity by <Ref>. We also know that is noetherian by <Ref>. It follows from <cit.> that _(H) is surjective. Hence it is a continuous bijection and we only need to prove that it is a closed map.
We claim that (H) is generated by its ⊗-unit . Namely, let =_(H)() be the thick subcategory of (H) generated by and let us see that =(H).
Observe that is a sub-tt-category of (H).
Let N∈ be an index-p subgroup. We claim that (E/N) belongs to .
If N≱H, then a_N is inverted in (H), so (E/N)=0 in (H) by <Ref> <ref>.
If N≥ H, then b_N→ u_N[-2'] is inverted, so u_N∈ and we conclude again by <Ref> <ref> since a_N→ u_N is now a morphism in . For a general proper subgroup K<E, the module (E/K) is a tensor product of (E/N) for some N∈. (Here we use E elementary abelian again.) Hence (E/K) also belongs to as the latter is a sub-tt-category of (H).
In short contains all generators (E/H) for H≤ E. Therefore (H)= is indeed generated by its unit.
It follows from this and from noetherianity of ^_(H)()= that ^_(H)(x,y) is a finitely generated -module for every x,y∈(H).
We conclude from a general tt-geometric fact, observed by Lau <cit.>, that the map must then be closed.
Let E be an elementary abelian p-group.
Let 𝒪_E be the sheaf of ℤ-graded rings on Spc(K(E)) obtained by sheafifying U ↦ End^•_{K(E)|_U}(𝟙).
Then (Spc(K(E)), 𝒪_E) is a Dirac scheme in the sense of <cit.>.
We identified an affine cover {H}_H≤ E in <Ref>.
This result further justifies the notation for the ring E(H) in <Ref>.
Indeed, this (H) is also the ring of sections E(H) of the -graded structure sheaf over the open H of <Ref>.
Let E be an elementary abelian p-group.
Then the comparison map of <Ref> is an open immersion. More precisely, it defines a homeomorphism between Spc(K(E)) and the following open subspace of Spec^h((E)):
{ 𝔭 ∈ Spec^h((E)) | for all N ⊴ E of index p, either a_N ∉ 𝔭 or b_N ∉ 𝔭 }.
By <Ref>, the (continuous) comparison map is injective.
Therefore, it being an open immersion can be checked locally on the domain.
By <Ref>, the open H form an open cover of ((E)). <Ref> tells us that each H is homeomorphic to the following open of ((E))
U'(H) := ⋂_{N ≱ H} Z(a_N)^c ∩ ⋂_{N ≥ H} Z(b_N)^c
(recall that Z(f)^c = { 𝔭 | f ∉ 𝔭 } is our notation for a principal open).
So it suffices to verify that the union ∪_H≤ E U'(H), is the open subspace of the statement (<ref>).
Let ∈ U'(H) for some H≤ E and let N∈(E); then clearly either N≱H in which case a_N∉, or N≥ H in which case b_N∉.
Conversely let belong to the open (<ref>) and define H=∩_M∈ s.t. b_M∉M.
We claim that ∈ U'(H). Let N∈. If N≱H then b_N∈ by construction of H and therefore a_N∉.
So the last thing we need to prove is that N≥ H implies b_N∉. One should be slightly careful here, as H was defined as the intersection of the M∈ such that b_M∉, and certainly such M's will contain H, but we need to see why every N≥ H satisfies b_N∉.
This last fact follows from <Ref> applied to E/H.
Consider the spectrum of (C_p) for the cyclic group C_p of order p.
By <Ref>, the reduced ring C_p(1)_ is k[ζ^+] with ζ^+=a/b in degree 2' while C_p(C_p)_=k[ζ^-] with ζ^-=b/a.
(The former is also <Ref>.)
Each of these has homogeneous spectrum the Sierpiński space and we easily deduce that
Spc(K(C_p)) consists of three points: two closed points, one in the stratum of C_p and one in the cohomological open (the stratum of 1), lying above a common generic point,
confirming the computation of ((C_p^n)) in <cit.> for n=1.
We can also view this as an instance of <Ref>.
Namely, still by <Ref>, the reduced ring (C_p)_ is [a,b] with a in degree 0 and b in degree -2'.
Its homogeneous spectrum has one more point at the top:
Spec^h((C_p)) consists of four points: the generic point ⟨0⟩ at the bottom, the two primes ⟨b⟩ (lying in the open Z(a)^c) and ⟨a⟩ (lying in Z(b)^c) above it, and one more closed point ⟨a,b⟩ on top,
and this superfluous closed point ⟨ a,b⟩ lies outside of the open subspace (<ref>).
Let K≤ H≤ E.
The functor Ψ^K(E)→(E/K) passes, by <Ref>, to the localizations over [E]H and [E/K]H/K, respectively.
On the ℤ-graded endomorphism rings, we get a homomorphism induced by Ψ^K, which on the generators a_N, b_N is given by the formulas of <Ref>.
By <Ref> this induced homomorphism is surjective.
For every elementary abelian group E, the spectrum admits a unique generic point η_E, namely the one of the cohomological open E.
We proceed by induction on the p-rank.
Let us write η_E=_E(1,√(0)) for the generic point of E, corresponding to the ideal √(0) of nilpotent elements in ^(E;).
Similarly, for every K≤ E, let us write η_E(K)=_E(K,η_E/K) for the generic point of the stratum E(K)≃E/K.
We need to prove that every point η_E(K) belongs to the closure of η_E=η_E(1) in .
It suffices to show this for every cyclic subgroup H<E, by an easy induction argument on the rank, using the fact that ψ^H((E/H)) is closed.
So let H≤ E be cyclic.
Note that inflation ^E/H_E(E/H)→(E) passes to the localization of the former with respect to all b_N/H for all N∈(E) containing H (which is just the derived category of E/H) and of the latter with respect to the corresponding b_N:
Infl^{E/H}_E : D_b(k(E/H)) ⟶ K(E)[ b_N^{-1} : N ≥ H ].
This being a central localization of a fully-faithful functor with respect to a multiplicative subset in the source, it remains fully-faithful. One can further localize both categories with respect to all non-nilpotent f∈^(E/H;) in the source, to obtain a fully-faithful
^E/H_E ((E/H))[ff∉√(0)]
where is obtained from (E) by first inverting all b_N for N≥ H as in (<ref>) and then inverting all ^E/H_E(f) for f∈^(E/H;)√(0).
At the level of spectra, () is a subspace of . By construction, it meets the closed subset (ψ^H)≅((E/H)) of only at the image of the generic point η_E(H). Indeed, inverting all b_N for N≥ H on (ψ^H) corresponds to inverting all b_N/H in (E/H), hence shows that ()∩(ψ^H) is in the image under ψ^H of the cohomological open E/H. Similarly, inverting all f∉√(0) removes all non-generic points of E/H. In particular, the generic point η_E(H) of E(H) is now a closed point of the subspace () of .
Using that (<ref>) is fully-faithful and that the endomorphism ring of the source is the cohomology of E/H localized at its generic point (in particular not a product of two rings), we see that is not a product of two tt-categories and therefore () is not disconnected.
Also η_E belongs to () and is distinct from η_E(H).
Hence the closed point η_E(H)∈() cannot be isolated.
Thus η_E(H) belongs to the closure of some other point in ().
Let then ∈ be a point in the subspace (), such that ≠η_E(H) and η_E(H)∈, which reads ⊊η_E(H).
We know by <cit.> that this can only occur for =(H',) with H'≤ H, that is, either H'=H or H'=1 since here H was taken cyclic.
The case H'=H is excluded, as in the subspace () the only prime of the form (H,) that remained was η_E(H) itself, and is different from η_E(H). Thus H'=1, which means that ∈E=η_E(1) and we therefore have η_E(H)∈⊆η_E(1) as claimed.
We can now determine the Krull dimension of the spectrum of (E).
Let E be a elementary abelian p-group. Then the Krull dimension of is the p-rank of E.
By <Ref>, the dimension of is the maximum of the dimensions of the open subsets H, for H≤ E.
Each of these spaces has the same generic point η_E (by <Ref>) and a unique closed point (H) by <Ref>
(and the fact that (K)∈H forces K and H to be contained in the same subgroups N∈(G) by <Ref>, which in turn forces K=H because E is elementary abelian).
Using <Ref> we translate the problem into one about the graded ring .
Let η_E=_0⊊_1⊊⋯⊊_n=(H) be a chain of homogeneous prime ideals in .
Note that _n-1 belongs to the open Z(f)^c of () for some f=ζ^+_N, H≤ N, or some f=ζ^-_N, H≰N.
Each of these has non-zero degree so the graded ring [f] is periodic.
We deduce that (()) is the maximum of 1+(R) where R ranges over the ungraded rings R=[f]_(0) for f as above.
The reduced ring R_ is a finitely generated -algebra with irreducible spectrum, hence a domain. Therefore (R)=(R_) is the transcendence degree of the residue field at the unique generic point.
As observed above, this generic point is the same for all H≤ E, namely the generic point of 1=(( E)).
We conclude that ()=((( E))) which is indeed the p-rank of E.
In fact, the proof shows that all closed points (H)∈ have the same codimension (height), namely the p-rank of E.
Thus for E elementary abelian, the Krull dimension of is the same as the Krull dimension of the classical cohomological open (( E))≅(^(E,)).
In other words, the spectrum of (E) is not monstrously different from that of ( E), at least in terms of dimension, or `vertical complexity'.
There is however `horizontal complexity' in : each H has its own shape and form, and there are as many H as there are subgroups H≤ E.
We give a finite presentation of the corresponding -algebras in <Ref>.
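As a concrete instance of the corollary and this remark, added here for illustration: for the Klein-four group the spectrum is two-dimensional, with every closed point of maximal codimension.

% Direct instance of the dimension formula (added, not in the original).
\[
\dim \mathrm{Spc}(K(C_2\times C_2)) \;=\; 2 \;=\; \dim \mathrm{Spec}^{h}\bigl(H^{\bullet}(C_2\times C_2,k)\bigr),
\]

and more generally the rank-r elementary abelian case has Krull dimension r.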
§ CLOSURE IN ELEMENTARY ABELIAN CASE
In this section, E is still an elementary abelian p-group. Following up on <Ref>, we can now use <Ref> to analyze inclusion of tt-primes , in (E), which amounts to asking when belongs to in .
Using again that every ψ^H((E/H)) is a closed immersion, induction on the p-rank easily reduces the above type of questions to the case where the `lower' point belongs to [E]1=E.
More generally, given a closed piece Z of the cohomological open E, we consider its closure Z̅ in =⊔_H≤ EE(H) and we want to describe the part Z̅∩E(H) in each stratum E(H)≅E/H for H≤ E.
Let H≤ E be a subgroup of our elementary abelian group E. Consider the open subsets [E]H of <Ref>, the cohomological open [E]1=E and their intersection [E]H∩E. Consider also the stratum E(H)=ψ̌^H(E/H), that is a closed subset of [E]H homeomorphic to E/H via ψ̌^H:
namely the cohomological open of E/H maps into [E]H via ψ̌^H (onto the closed stratum), while the intersection [E]H ∩ [E]1 includes into both [E]H and the cohomological open [E]1 of E.
On graded endomorphism rings of the unit (<Ref>) this corresponds to the following maps: the ℤ-graded ring of [E]H surjects onto H^•(E/H;k) via Ψ^H and maps to the ℤ-graded ring of [E]H ∩ [E]1 via a localization Q, while the cohomology H^•(E;k) of [E]1 maps to the ring of [E]H ∩ [E]1 via a localization Q',
where Q is the localization of with respect to ζ^-_N=b_N/a_N for all N≱H, where Q' is the localization of E(1)=^(E,) with respect to ζ^+_N=a_N/b_N for all N≱H and where Ψ^H is the epimorphism of <Ref> for K=H.
With above notation, let I⊆^(E,) be a homogeneous ideal of the cohomology of E.
Define the homogeneous ideal J = Ψ^H( Q^{-1}( ⟨Q'(I)⟩ ) ) in the cohomology H^•(E/H;k) of E/H by `carrying around' the ideal I along (<ref>): first extend I along Q' to the ideal ⟨Q'(I)⟩ of the ring of [E]H ∩ [E]1, then take its preimage Q^{-1}(⟨Q'(I)⟩) in the ring of [E]H, and finally take the image of that preimage under the epimorphism Ψ^H.
Let Z be the closed subset of E defined by the ideal I.
Then the closed subset of E/H defined by J is exactly the intersection Z̅∩E(H) of the closure Z̅ of Z in ((E)) with the subspace E/H, embedded via ψ̌^H.
Once translated by <Ref>, it is a general result about the multi-graded ring A=(E). We have two open subsets, H=∩_s∈ S_HZ(s)^c and E=1=∩_s∈ S_1Z(s)^c for the multiplicative subsets S_H and S_1 of <Ref>.
These open subsets are `Dirac-affine', meaning they correspond to the homogenous spectra of the -graded localizations S_H(A)= and S_1 (A)=E(1)=^(E;), where (-) refers to `zero-twist', as before. The intersection of those two affine opens corresponds to inverting both S_H and S_1, that is, inverting b_N/a_NN≱H from and a_N/b_NN≱H from ^(E;).
This explains the two localizations Q and Q' and why their targets coincide.
The intersection H∩Z̅ coincides with the closure in H of H∩ Z.
The latter is a closed subset of H∩E defined by the ideal Q'(I). The preimage ideal Q(Q'(I)) then defines that closure H∩Z̅ in H.
Finally, to further intersect this closed subset of H with the closed subset E/H=((Ψ^H)), it suffices to project the defining ideal along the corresponding epimorphism Ψ^H^(E/H;).
Before illustrating this method, we need a technical detour via polynomials.
Let I be a homogeneous ideal of the cohomology ^(E,) and let 1≠ H≨ E be a fixed non-trivial subgroup.
Suppose that the only homogeneous ideal containing I and all the elements ζ_N for N≥ H (<Ref>) is the maximal ideal ^+(E,).
Then there exists in I a homogeneous ([ The grading is the usual -grading in which all the ζ_N have the same degree 2'. In particular, the first term ∏_M≱Hζ_M^d in f has degree 2'· d· |M∈M≱H|.]) polynomial f of the form
f = ∏_M≱Hζ_M^d + ∑_mλ_m ·∏_N∈ζ_N^m(N)
for some integer d≥ 1 and scalars λ_m∈ and finitely many exponents m∈^ that satisfy the following properties:
m(N)≥ 1 for at least one N≥ H
and
m(N')<d for all N'≱H.
For simplicity, we work in the subring ^∗⊆^(E,) generated by the ζ_N. (For p=2, this is the whole cohomology anyway and for p odd we only miss nilpotent elements, which are mostly irrelevant for the problem, as we can always raise everything in sight to a large p-th power.)
Let us denote the maximal ideal by =ζ_N| N∈.
It is also convenient to work in the quotient -graded ring
A^∗:=^∗(E,)/I
which is generated, as a -algebra, by the classes ζ̅_N of all ζ_N modulo I.
The assumption about Z(I+ζ_NN≥ H)={} implies that has some power contained in I+ζ_NN≥ H.
In other words when N'≱H we have
(ζ̅_N')^d∈ζ̅_N| N≥ H_A^∗
for d≫1 that we take large enough to work for all the (finitely many) N'≱H.
Consider this ideal J= ζ̅_N| N≥ H of A^∗ more carefully. It is a -linear subspace generated by the classes θ̅_m of the following products in ^*
θ_m:=∏_N∈(ζ_N)^m(N)
with m∈^ such that m(N)≥ 1 for at least one N≥ H.
We claim that J is in fact generated over by the subset of the θ̅_m for the special m∈^ satisfying (<ref>).
Indeed, let J'⊆ J be the -subspace generated by the θ̅_m for the special m.
Then we can prove that the class θ̅_m of each product (<ref>) belongs to J', by using (<ref>) and (descending) induction on the number ∑_N≥ Hm(N).
We conclude that J=J'.
By (<ref>), the monomial ∏_M≱H(ζ̅_M)^d belongs to J and therefore to J': It is a -linear combination of monomials of the form θ̅_m for m∈^ satisfying (<ref>). Returning from A^∗=^∗(E,)/I to ^∗(E,), the difference between ∏_M≱H(ζ_M)^d and the same -linear combination of the lifts θ_m in ^∗(E,) is an element of I, that we call f and that fulfills the statement.
Let Z⊂E be a non-empty closed subset of the cohomological open and let 1≠ H≨ E be a non-trivial subgroup.
Suppose that in E, the subset Z intersects the image of the cohomological open of H in the smallest possible way:
Z∩ρ_H(H) = {(1)}.
Consider the closure Z̅ of Z in the whole spectrum .
Then Z̅ does not intersect the stratum E(H)=ψ^H(E/H).
Hence (H) does not belong to Z̅.
Let I⊂^(E,) be the homogeneous ideal that defines Z. The closed image ρ_H(H) is given by the (partly redundant) equations ζ_N=0 for all N≥ H. It follows that the intersection Z∩ρ_H(H) is defined by the ideal I+ ζ_N| N≥ H.
So our hypothesis translates exactly in saying that I satisfies the hypothesis of <Ref>. Hence there exists a homogeneous element of I
f = ∏_M≱Hζ_M^d + ∑_mλ_m ∏_N∈ζ_N^m(N)
for scalars λ_m∈ and finitely many exponents m∈^ satisfying (<ref>).
We can now use <Ref> and follow Diagram (<ref>) with the ideal I and particularly with its element f.
The element Q'(f) is just f seen in ([E]H∩E). But it does not belong to the image of under Q because f contains some b_M with M≱H in denominators in the ζ_M's.
Still, we can multiply Q'(f) by ∏_M≱H(b_M/a_M)^d=∏_M≱H(ζ_M)^-d to get a degree-zero homogeneous element
f̃=1 + ∑_mλ_m ∏_N∈ζ_N^m'(N)
in the ideal Q'(f), where we set the exponent m'(N):=m(N)-d if N≱H and m'(N):=m(N) if N≥ H.
Note that by (<ref>) the exponent m'(N) is negative if N≱H and is non-negative if N≥ H and strictly positive for at least one N≥ H.
Both types of exponents of ζ_N are allowed in , namely, when N≱H, the element ζ^-_N=b_N/a_N exists in .
In other words, the element f̃∈Q'(f) satisfies
f̃ = Q(1 + g̃)
where g̃∈ belongs to the ideal ζ_N| N≥ H in and must be of degree zero by homogeneity.
Now, for N≥ H, we have Ψ^H(ζ_N)=ζ_N/H by <Ref>.
It follows that Ψ^H(g̃) belongs to the maximal ideal ζ_N̅|N̅∈(E/H)⊆^+(E/H,) of ^(E/H,) and still has degree zero. This forces Ψ^H(g̃)=0 and therefore Ψ^H(1+g̃)=1 in ^(E/H,).
In the notation of <Ref>, we have shown that J contains 1, which implies that Z̅∩E(H)=∅.
Let Z⊂E be a closed subset of the cohomological open, strictly larger than the unique closed point (1) of E. Suppose that in E, the subset Z intersects the images of all proper subgroups trivially, Z∩(⋃_H≨ Eρ_H(H)) = {(1)}.
Then the closure Z̅ of Z in the whole spectrum has only one more point, namely Z̅=Z∪{(E)}.
By <Ref>, we see that Z̅ does not meet any stratum E(H) for H≠ E. Thus the only point of outside of Z itself, hence outside of E, that remains candidate to belong to Z̅ must belong to ((E))∪_H≨ EE(H)=E(E)={(E)}. We know that (E)=(E/H)| H≨ E in (E), by <cit.>. Take ∈ Z different from (1). Since does not belong to any (ρ_H)=((E/H)) by assumption, it must contain (E/H). Consequently, (E)⊆, meaning that (E)∈⊆Z̅.
Let E be an elementary abelian group of rank r. Let be a point of height r-1 in the cohomological open E, that is, a closed point of the classical projective support variety 𝒱_E():=E{(1)}≅(^(E,))≅^r-1_.
Suppose that does not belong to the image ρ_H(H) of the support variety of any proper subgroups H≨ E.
Then the closure of in is exactly the following
{}={(E),,(1)}.
Apply <Ref> to Z={,(1)}, the closure of in E.
We can review the proof of <Ref> in the special case of <Ref>, to see how elements like f∈(1) and f̃∈ come into play.
We do it in the special case where is a -rational point ( if is algebraically closed).
Let 1≠ H≨ E be a non-trivial subgroup (the case r=1 being trivial).
Choose N_0,N_1 E index-p subgroups with H≤ N_0 and H≰N_1.
They define coordinates ζ_0,ζ_1 in ^r-1 (where ζ_i=ζ_N_i as in <Ref>).
There exists a hyperplane of ^r-1
λ_0ζ_0+λ_1ζ_1=0, [λ_0:λ_1]∈^1(),
going through the rational point .
Note that λ_1≠ 0 as ∉ Z(ζ_0)=ρ_N_0(N_0), by assumption.
As in <Ref>, the following two localizations agree
E(H)[(ζ^-_N)| H≰N]=E(1)[(ζ^+_N)| H≰N]
where N E ranges over the index-p subgroups as usual.
We find a lift
f̃:=λ_0 ζ_0ζ^-_N_1+λ_1 ∈(H)
of the element f=λ_0ζ_0+λ_1ζ_1∈(1) of (<ref>) suitably multiplied by ζ_1=ζ^-_N_1 in the localization (<ref>).
Then we have Ψ^H(ζ^-_N_1)=0 since H≰N_1, by <Ref>, so Ψ^H(f̃)=λ_1∈^× is an isomorphism.
We deduce that (f̃) belongs to (H), which shows that does not specialize to (H).
Let E=C_2× C_2 be the Klein-four group.
Let us justify the description of ((E)) announced in <cit.> in some detail:
[Diagram (<ref>): the space ((E)) for E=C_2×C_2. Top row: the very closed points (E), (N_0), (N_1), (N_∞), (1). Middle row: the brown points η_E(N_0), η_E(N_1), η_E(N_∞), the green _2-rational points 0, 1, ∞ and the remaining closed points  of ^1_. Bottom: the generic point η_E. Lines record specializations; the undulated lines indicate that all points of  behave alike, as spelled out in the discussion below.]
By <Ref>, we have a partition of the spectrum as a set
=E(E) ⊔ E(N_0) ⊔ E(N_1) ⊔ E(N_∞) ⊔ E,
where we write N_0,N_1,N_∞ for the three cyclic subgroups C_2 and where E=E(1) is the cohomological open as usual. Let us review those five parts E(H)=ψ^H(E/H) separately, in growing order of complexity, from left to right in (<ref>).
For H=E, the stratum E(E)=ψ^E(E/E)={(E)} is just a closed point.
For each cyclic subgroup N_i<E, the quotient E/N_i≃ C_2 is cyclic, so (E/N_i) is the space of <Ref>.
Its image under ψ^N_i is {_E(E),η_E(N_i),_E(N_i)}, defining the (brown) point η_E(N_i):=ψ^N_i(η_E/N_i), as in the proof of <Ref>.
The stratum E(N_i) is the image of the cohomological open E/N_i only, that is, the Sierpiński space {η_E(N_i),(N_i)}, whose non-closed point η_E(N_i) is the generic point of the irreducible {_E(E),η_E(N_i),_E(N_i)} in .
Finally, for H=1, the cohomological open E=(( E))≅([ζ_0,ζ_1]) is a ^1_ with a closed point (1) on top.
We denote by η_E the generic point of as in <Ref> and by 0,1,∞ the three _2-rational points of ^1_ (in green).
The notation refers to all remaining points of ^1_.
The undulated lines indicate that all points of have the same behavior.
Namely, η_E specializes to all points of and every point of specializes to (1) and the (red) undulated line towards (E) indicates that all points of specialize to (E), as follows from <Ref>.
(Note that the latter was rather involved: Its proof occupies most of this section, and relies on technical <Ref>.)
We have described the closure of every point in , except for the _2-rational points 0,1,∞.
For this, we use the closed immersion ρ_N_i((N_i)) induced by restriction _N_i.
The point i is the image of the generic point η_N_i of the V-shaped space ((N_i)) of <Ref>. Hence its closure is (ρ_N_i)={(N_i),i,(1)}. So specializations are exactly those of (<ref>).
We revisit this picture in more geometric terms in <Ref>.
It is possible to extend <Ref> to a general finite group G by means of the Colimit <Ref>.
Let Z⊆ be a one-dimensional irreducible closed subset.
Write its generic point as =(K,) for (unique) K∈p(G)_/G and ∈.
By Quillen applied to G̅=, there exists a minimal elementary abelian subgroup E≤G̅ such that ∈(ρ_EE→G̅), also unique up to G̅-conjugation. This E≤G̅=(N_GK)/K is given by E=H/K for H≤ N_G K containing K. Then =φ_(H,K)() where ∈ is given by =_E(1,) for some ∈E.
By <Ref>, the map φ_(H,K)→ is closed and preserves the dimension of points.
It follows that is also the generic point of a one-dimensional irreducible in .
By minimality of E, the point ∈E does not belong to H' for any proper subgroup H'<E.
By <Ref>, we have ={_E(E),,_E(1)} in .
The map φ_(H,K) sends this subset to {_G(H), , _G(K)}.
In summary, every one-dimensional irreducible subset of is of the form Z={(H),,(K)}, where H and K are uniquely determined by the generic point via the above method.
§ PRESENTATION OF TWISTED COHOMOLOGY
We remain in the case of an elementary abelian group E.
In this section we want to better understand the local -graded rings that played such an important role in <Ref>.
Thankfully they are reasonable -algebras.
Recall that we write C_p= σ|σ^p=1 for the cyclic group of order p with a chosen generator σ.
For brevity we call an -linear surjection π E C_p a coordinate.
For two coordinates π,π' we write π∼π' if (π)=(π').
Finally, for a subgroup H, we often abbreviate H|π to mean H≤(π).
Recall from <Ref> and <Ref> that each coordinate π yields an invertible object u_π=π^*u_p in (E).
It comes with maps a_π,b_π,c_π→ u_π[∗].
If π∼π' then there exists a unique λ∈^× such that π'=π^λ.
Hence, if p=2 then necessarily π=π' and u_π=u_π'.
On the other hand, if p>2 is odd then we still have u_π≅ u_π' as already mentioned.
Explicitly, consider the automorphism λ C_p→ C_p that sends σ to σ^λ.
The isomorphism u_π=π^*u_pπ^*λ^*u_p=(π^λ)^*u_p=u_π' will be the pullback π^*Λ along π of an isomorphism of complexes Λ u_pλ^* u_p.
This isomorphism Λ can be given explicitly by the identity in degree 0 and by the C_p-linear maps C_p→λ^* C_p in degree 1 (resp. 2) determined by 1↦ 1 (resp. 1↦ 1+σ+⋯σ^λ-1).
One checks directly that Λ∘ a_p=a_p and Λ∘ b_p=λ· b_p. By applying π^* we obtain
(π^*Λ)∘ a_π=a_λπand
(π^*Λ)∘ b_π=λ· b_λπ.
Given coordinates π_1≁π_2 set π_3=π_1π_2.
Write u_i, a_i and b_i for u_π_i, a_π_i and b_π_i in (E).
Then we have the relation
a_1b_2b_3+b_1a_2b_3+b_1b_2a_3 =0
as a map from to (u_1⊗ u_2⊗ u_3)[-2'· 2] in (E). (See <Ref> for 2'.)
Let N_i=(π_i) for i=1,2,3, which are all distinct.
Let N=N_1∩ N_2∩ N_3 be the common kernel, which is of index p^2 in E.
By inflation along E E/N, it suffices to prove the lemma for E=C_p× C_p and π_1 and π_2 the two projections on the factors.
We abbreviate u for the complex of permutation E-modules u:=u_1⊗ u_2⊗ u_3.
Consider the permutation module
M:=kC_p⊗ kC_p⊗ kC_p≅ k(E/N_1)⊗ k(E/N_2)⊗ k(E/N_3) which appears as a summand in various degrees of the complex u.
One element in M is of particular interest:
m :=∑_i_1,i_2=0^p-1σ^i_1⊗σ^i_2⊗σ^-i_1-i_2.
It is easy to check that m is E-invariant, thus defines an E-linear map m̃: k→ M, which can be used to define the required homotopies. This depends on p.
If p=2, the homotopy is given by m̃ when viewed from to the only M-entry of u[-2] in degree one.
If p>2, the homotopy is given by (m̃,m̃,m̃) as a map from to the three M-entries of u[-4] in degree one.
Verifications are left to the reader.
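As a quick sanity check (ours, not part of the argument), the following Python sketch verifies the E-invariance of m for p=3, under the assumed convention that the three tensor factors carry the E-action through π_1, π_2 and π_3=π_1π_2 respectively.

# check that m = sum_{i1,i2} sigma^{i1} (x) sigma^{i2} (x) sigma^{-i1-i2} is E-invariant,
# with the (assumed) convention that E = C_p x C_p acts on the three tensor factors
# through pi_1, pi_2 and pi_3 = pi_1*pi_2 respectively
p = 3

m = {}
for i1 in range(p):
    for i2 in range(p):
        key = (i1, i2, (-i1 - i2) % p)
        m[key] = m.get(key, 0) + 1

def act(gen, vec):
    # apply a generator of E to a vector written in the monomial basis of kC_p (x) kC_p (x) kC_p
    out = {}
    for (a, b, c), coeff in vec.items():
        if gen == "g1":   # pi_1(g1) = sigma, pi_2(g1) = 1, pi_3(g1) = sigma
            key = ((a + 1) % p, b, (c + 1) % p)
        else:             # g2:  pi_1(g2) = 1, pi_2(g2) = sigma, pi_3(g2) = sigma
            key = (a, (b + 1) % p, (c + 1) % p)
        out[key] = out.get(key, 0) + coeff
    return out

assert act("g1", m) == m and act("g2", m) == m
print("m is E-invariant for p =", p)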
We construct a commutative -algebra E(H) by generators and relations. Its generators are indexed by coordinates π E C_p (<Ref>)
ζ_π^+π s.t. H≤(π) ∪ ζ_π^-π s.t. H≰(π).
These generators come equipped with a degree in : If the generator ζ^+_π is set to have degree 2', whereas if the generator ζ^-_π is set to have degree -2'.
We impose the following four families of homogeneous relations. First for every coordinate π and every λ∈^× (for p odd), we have a rescaling relation
*
ζ_π^λ^+=λζ^+_π if H|π and ζ_π^λ^-=λ^-1ζ^-_π if H∤π
and whenever π_3=π_1π_2 and π_1≁π_2, writing ζ_i^±:=ζ_π_i^±, we impose one of the following relations, inspired by <Ref>:
*
ζ_1^++ζ_2^++ζ^+_3=0, if H|π_1 and H|π_2 (and therefore H|π_3)
*
ζ^-_1+ζ^-_2+ζ^-_1ζ^-_2ζ_3^+=0, if H∤π_1 and H∤π_2 but H|π_3
*
ζ^-_1ζ^-_2+ζ^-_2ζ^-_3+ζ^-_3ζ^-_1=0 if H∤π_i for all i=1,2,3.
Since these relations are homogeneous, the ring E(H) is a -graded ring.
We could also define a multi-graded commutative -algebra E generated by all a_π,b_π subject to the relations in (<ref>) and <Ref>.
This algebra E would be ×^-graded with a_π in degree (0,1_(π)) and b_π in degree (-2',1_(π)).
Then E(H) is simply the `zero-twist' part of the localization of E with respect to the a_π,b_π that become invertible in U(H), that is, those a_π such that H∤π and those b_π such that H|π, following the pattern of <Ref>.
By (<ref>) and <Ref>, there exists a canonical homomorphism
E(H)→
mapping ζ^+_π to a_π/b_π and ζ^-_π to b_π/a_π.
Let H=1. Recall from <Ref> that E(1) is the cohomology ring. Then the homomorphism (<ref>) is the standard one E(1)→^(E;), that maps ζ^+_π to the usual generator ζ_π=π^*(ζ_C_p). Note that here H=1|π for all π, so there is no ζ^-_π.
For E elementary abelian, it is well-known that this homomorphism E(1)→^(E;) is an isomorphism modulo nilpotents.
See for instance <cit.>.
For two subgroups H,K≤ E, the open subsets [E]H and [E]K can intersect in . Similarly, we can discuss what happens with the rings E(H).
Let H,K≤ E be two subgroups. Define S=S(H,K)⊂E(H) to be the multiplicative subset generated by the finite set
ζ^+_π for π with H|π and K∤π∪ζ^-_π for π with H∤π and K|π
and similarly, swapping H and K, let T=S(K,H)⊂E(K) be the multiplicative subset generated by ζ^+_πH∤π and K|π∪ζ^-_πH|π and K∤π. Then we have a canonical isomorphism of (periodic) -graded rings
SE(H)≅ TE(K)
and in particular of their degree-zero parts. Thus the open of (E(H)) defined by S is canonically homeomorphic to the open of (E(K)) defined by T.
The left-hand side SE(H) is the (`zero-twist' part of the) localization of the multi-graded ring E of <Ref> with respect to
a_πH∤π∪b_πH|π∪a_πH|π, K∤π∪b_πH∤π, K|π
=a_πH∤π or K∤π∪b_πH|π or K|π
which is symmetric in H and K.
This completes the proof.
The above isomorphism is compatible with the homomorphism (<ref>), namely the obvious diagram commutes when we perform the corresponding localizations on E(H) and E(K).
Let K≤ H≤ E.
There is a canonical split epimorphism `Ψ^KE(H)E/K(H/K) whose kernel is ζ^-_π|K∤π. It is compatible with the homomorphism Ψ^K of <Ref>, in that the following diagram commutes
[Diagram: a commutative square whose horizontal maps are the homomorphisms (<ref>) for the pair (E,H) on top and for (E/K,H/K) on the bottom, and whose vertical maps are the epimorphisms `Ψ^K on the left and Ψ^K on the right.]
Set H̅:=H/K≤E̅:=E/K. Similarly, for every coordinate π E C_p such that K|π, let us write π̅ E/K C_p for the induced coordinate.
The morphism `Ψ^K will come from a morphism “Ψ^KE→E/K, with respect to the homomorphism of gradings (<ref>).
As these algebras are constructed by generators and relations (<Ref>), we need to give the image of generators.
Inspired by <Ref> we define “Ψ^KE→E/K on generators by
a_π↦ a_π̅ if K|π
1 if K∤π
b_π↦ b_π̅ if K|π
0 if K∤π.
It is easy to see that the relations in E are preserved; thus the map “Ψ^K is well-defined.
Let ϖ E E/K and for every π̅E̅ C_p consider the coordinate π=π̅∘ϖ E C_p. Then H̅|π̅ if and only if H|π.
It follows that the morphism passes to the localizations `Ψ^KE(H)E/K(H/K) as announced.
The statement about its kernel is easy and commutativity of the square follows from the fact (<Ref>) that Ψ^K treats the a_π and b_π according to the same formulas.
The section of `Ψ^K is inspired by inflation.
Namely, a_π̅↦ a_π and b_π̅↦ b_π defines a map of graded -algebras E̅→E that is already a section to “Ψ^K and passes to the localizations.
The canonical homomorphism (<ref>) induces an isomorphism
E(H)__
of reduced -graded -algebras.
It follows from <Ref> that the map is surjective.
We will now show that the closed immersion ()(E(H)) is surjective—this will complete the proof, by the usual commutative algebra argument, which can be found in <cit.> for the graded case.
By <Ref>, this is equivalent to showing the surjectivity of the composite with _E, that we baptize β^H
β^H : [E]H → (E(H)), the composite of the homeomorphism _E of <Ref> with the closed immersion above.
We proceed by induction on the order of the subgroup H. If H=1 the result follows from <Ref>.
So suppose that H≠ 1 and pick a homogeneous prime ∈(E(H)). We distinguish two cases.
If for every coordinate π E C_p such that H∤π we have ζ^-_π∈ then belongs to V(ζ^-_πH∤π), which we identify with the image of (E/H(1)) by <Ref> applied to K=H.
Namely, we have a commutative square
[Diagram: a commutative square with top row ψ^H : [E/H]1 → [E]H, vertical maps β^1 on the right and β^H on the left, and bottom row ((`Ψ^H)) : (E/H(1)) → (E(H)).]
and since the right-hand vertical arrow is surjective by the case already discussed, we conclude that belongs to the image of β^H in (<ref>) as well.
Otherwise, there exists a coordinate π_1 such that H∤π_1 and ζ^-_π_1∉. Let K:=H∩(π_1) and let S=S(H,K) be defined as in <Ref>:
S=ζ^-_πfor π with H∤π and K|π.
We claim that belongs to the open of (E(H)) defined by S.
Indeed, let ζ_π_2^-∈ S, that is for π_2 with H∤π_2 and K|π_2, and let us show that ζ_π_2^-∉.
If π_2∼π_1 this is clear from ζ^-_π_1∉ and the relation <ref> in E(H).
If π_2≁π_1, let h∈ H K (so that h generates the cyclic group H/K≅ C_p).
As π_1(h)≠ 1 and π_2(h)≠ 1 we may replace π_1 by an equivalent coordinate π̃_1:=π_1^λ such that π̃_1(h)=π_2(h) and therefore
H|π_3:=π̃_1π_2.
Then relation <ref> exhibits ζ_π̃_1^- as a multiple of ζ_π_2^-.
As the former does not belong to (by the previous case), neither does ζ_π_2^-.
At this point we may apply <Ref> for our subgroups H and K.
By <Ref>, we have a commutative triangle:
[Diagram: a commutative triangle with apex [E]H∩[E]K, whose two legs β^H and β^K land in (E(H)[S]) and (E(K)[T]) respectively, these two spectra being identified by the homeomorphism ≈ of <Ref>.]
We just proved that belongs to the open subset in the bottom left corner.
As K is a proper subgroup of H, we know that β^K is surjective by induction hypothesis and we conclude that belongs to the image of β^H as well.
In <Ref> we have proved something slightly more precise, namely that the map
E(H)→/⟨ξ^±_π⟩
(where π ranges over all coordinates)
is surjective with nilpotent kernel.
We expect that E(H) is already reduced, which would imply that (<ref>) is in fact an isomorphism of graded rings.
In particular, for p=2 we expect that E(H)E(H).
§ APPLICATIONS AND EXAMPLES
In this final section, we push our techniques further and compute more examples.
For an elementary abelian group E, <Ref> allow us to think of the geometry of , beyond its mere topology, by viewing as a Dirac scheme.
Consider further the `periodic' locus of , which is the open complement of the closed points (H)H≤ E; see <Ref>.
This is analogous to considering the projective support variety (^(E,))≅^r-1_ by removing the `irrelevant ideal' (1)=^+(E,) from (^(E,)). To avoid confusion with the phrase `closed points', we now refer to the (H) as very closed points, allowing us to speak of closed points of ^r-1_ in the usual sense (as we did in <Ref>).
Removing those finitely many `irrelevant' points allows us to draw more geometric pictures by depicting the (usual) closed points of the periodic locus, as in classical algebraic geometry.
In fact, for any finite group G, we can speak of the periodic locus of to mean the open '((G)):=(H)H∈p(G) obtained by removing the `irrelevant' very closed points.
However, we do not endow these spectra with a scheme-theoretic structure beyond the elementary abelian case, since we do not have <Ref> in general.
We postpone a systematic treatment of the periodic locus to later work. For now we focus on examples.
Let us revisit Klein-four, with the notation of <Ref>.
From the picture in (<ref>) we see that the union of the open subsets [E]1 and [E]E only misses (three) very closed points hence covers the periodic locus.
We have
E(1) =[ζ^+_N_0,ζ^+_N_1,ζ^+_N_∞]/⟨ζ^+_N_0+ζ^+_N_1+ζ^+_N_∞⟩ (=^*(E;)),
E(E) =[ζ^-_N_0,ζ^-_N_1,ζ^-_N_∞]/⟨ζ^-_N_0ζ^-_N_1+ζ^-_N_1ζ^-_N_∞+ζ^-_N_∞ζ^-_N_0⟩
and their homogeneous spectra are both a projective line with a unique closed point added.
(For E(E), the coordinate transformation for i=0,1, ζ^-_N_i↦ζ̃^-_i:=ζ^-_N_i+ζ^-_N_∞, identifies the ring with [ζ̃^-_0,ζ̃^-_1,ζ^-_N_∞]/⟨ζ̃^-_0ζ̃^-_1+(ζ^-_N_∞)^2⟩, which corresponds to the image of a degree-two Veronese embedding of ℙ^1 in ℙ^2.)
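A small sympy sketch (our illustration, under the reading that ζ^-_N equals 1/ζ^+_N in the localization) double-checks the two claims above: dividing the linear relation of E(1) by ζ^+_N_0ζ^+_N_1ζ^+_N_∞ produces the quadratic relation of E(E), and over _2 the substitution ζ̃^-_i:=ζ^-_N_i+ζ^-_N_∞ identifies the latter with the quadric ζ̃^-_0ζ̃^-_1+(ζ^-_N_∞)^2.

import sympy as sp

z0, z1, zinf = sp.symbols('z0 z1 zinf')   # play the role of zeta^-_{N_0}, zeta^-_{N_1}, zeta^-_{N_infty}

# (a) dividing zeta^+_{N_0}+zeta^+_{N_1}+zeta^+_{N_infty}=0 by the product of the three
#     generators, and writing zeta^- = 1/zeta^+, yields the quadratic relation of E(E)
lhs = sp.expand((1/z0 + 1/z1 + 1/zinf) * (z0 * z1 * zinf))
assert sp.simplify(lhs - (z0*z1 + z1*zinf + zinf*z0)) == 0

# (b) over F_2 the substitution zt_i = z_i + zinf turns the quadric zt0*zt1 + zinf**2
#     into the same relation (the difference is 2*zinf**2, i.e. zero in characteristic 2)
diff = sp.expand((z0 + zinf)*(z1 + zinf) + zinf**2 - (z0*z1 + z1*zinf + zinf*z0))
assert all(c % 2 == 0 for c in sp.Poly(diff, z0, z1, zinf).coeffs())
print("both identities hold")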
Removing the very closed points (<Ref>), it is a straightforward exercise to check that the two lines are glued along the open complement of the _2-rational points, according to the rule (ζ^+_N_i)=ζ_N_i^-.
In other words, we obtain the following picture of '((E)):
To translate between this picture and the one in (<ref>), think of the blue part as , the three green points as the _2-rational points i=0,1,∞ in [E]1=E and the brown points as the η_E(N_i) in [E]E.
In view of later applications let us consider the action induced on spectra by the involution on E=C_2× C_2 that interchanges the two C_2-factors.
Let us say that the two factors correspond to the subgroups N_0 and N_1.
On the generators ζ^±_N_i of E(1) and E(E) in (<ref>), the effect of the involution is
ζ^±_N_0↦ζ^±_N_1 ζ^±_N_1↦ζ^±_N_0 ζ^±_N_∞↦ζ^±_N_∞.
The subrings of invariants in E(1) and E(E) are, respectively,
[e_1^+, e_2^+,ζ^+_N_∞]/⟨e_1^++ζ^+_N_∞⟩≅[e_2^+,ζ^+_N_∞]
and [e_1^-,e_2^-,ζ^-_N_∞]/⟨e_1^-ζ^-_N_∞+e_2^-⟩≅[e_1^-,ζ^-_N_∞]
where e_1^±=ζ^±_N_0+ζ^±_N_1 and e_2^±=ζ^±_N_0ζ^±_N_1 are the first and second symmetric polynomials in ζ^±_N_0 and ζ^±_N_1.
Thus e_i^± has degree ± i.
The homogeneous spectra of these rings (with unique very closed point removed) are again two projective lines ([ More precisely, as already in <Ref>, we are dealing with weighted projective spaces which happen to be isomorphic to projective lines.]) and they are glued together along the complement of two points.
In other words, the quotient of '((E)) by the involution is a ℙ^1_ with two doubled points:
Alternatively, the topological space underlying this quotient may be obtained more directly at the level of <Ref>.
Indeed, this involution fixes the two colored points corresponding to ∞, fixes no other points, and swaps the points corresponding to 0 with the points corresponding to 1, respecting the color.
So, again, the quotient can be pictured as a ℙ^1_ with only two doubled points as in <Ref>.
Let us return to general finite groups. We want to optimize the Colimit <Ref> by revisiting the category of elementary abelian p-sections .
In <Ref>, we gave a `raw' version of the morphisms in the indexing category , which could be fine-tuned without changing the colimit (<ref>).
As with any colimit, we can quotient-out the indexing category by identifying any two morphisms that induce the same map by the functor under consideration, here ((-)). We then still have
_(H,K)∈((H/K)).
The same holds for any intermediate quotient pG. For instance if Z(G) denotes the center of G, we can consider the category pG obtained from by modding out the obvious right action of the group Z(G)· H' on each hom set _((H,K),(H',K')).
Let us illustrate how such reductions can be used in practice.
Let G=C_p^n be the cyclic group of order p^n.
As with any abelian group, using Z(G)=G, the reduced category discussed in <Ref> just becomes a poset.
Here, if we denote by 1=H_n<H_n-1<⋯<H_1<H_0=G the tower of subgroups of G then the poset looks as follows:
(H_0,H_0) ⟶ (H_1,H_0) ⟵ (H_1,H_1) ⟶ (H_2,H_1) ⟵ ⋯ ⟶ (H_n,H_n-1) ⟵ (H_n,H_n)
From <Ref> we deduce that is the colimit of the diagram
∗ ⟶ V ⟵ ∗ ⟶ V ⟵ ⋯ ⟶ V ⟵ ∗   (n copies of V and n+1 copies of ∗)
with ∗=((1)) and V=((C_p)) the V-shaped space in (<ref>). In the above diagram, the arrow to the right (resp. left) captures the left-most (resp. right-most) point of V.
We conclude that the spectrum of (C_p^n) is equal to
[Diagram: a fence-shaped poset with n+1 closed points on the top row and n generic points on the bottom row; for each i=1,…,n the i-th generic point specializes exactly to the (i-1)-st and i-th closed points.]
This example reproves <cit.>. It will provide the starting point for our upcoming work on the tt-geometry of Artin motives over finite fields.
The category of elementary abelian p-sections is a finite EI-category, meaning that all endomorphisms are invertible.
The same is true of its reduced versions pG and in <Ref>.
<Ref> then implies formally that is the quotient of the spectra for the maximal elementary abelian p-sections by the maximal relations.
Let us spell this out.
Let I be a finite EI-category.
The (isomorphism classes of) objects in I inherit a poset structure with x≤ y if _I(x,y)≠∅.
Maximal objects (I)⊆ I are by definition the maximal ones in this poset.
Now, let x_1,x_2 be two objects in I, possibly equal.
The category (x_1,x_2) of spans x_1← y→ x_2 (or `relations') between x_1 and x_2, with obvious morphisms (on the y part, compatible with the spans), is also a finite EI-category and we may consider its maximal objects.
We denote by (G) the set of maximal objects in .
A word of warning: In general, there can be more maximal elementary abelian p-sections than just the elementary abelian p-sections of maximal rank.
Let G be a finite group.
The components φ_(H,K) of (<ref>) induce a homeomorphism between the following coequalizer in topological spaces
coeq( ∐_(E_1 ⟵^g_1 L ⟶^g_2 E_2) ((L)) ⇉^((g_1))_((g_2)) ∐_E∈(G) ((E)) )
and , for `maximal relations' in or any variant of <Ref>.
Applying <Ref> we obtain
≃ coeq( ∐_(E_1 ⟵^g_1 L ⟶^g_2 E_2) ((L)) ⇉^((g_1))_((g_2)) ∐_E ((E)) )
where E ranges over all elementary abelian p-sections and (g_1,g_2) over all relations.
There is a canonical map from the coequalizer in the statement to this one and it is straightforward to produce an inverse, as with any finite EI-category.
We can apply <Ref> to find the irreducible components of .
The set of irreducible components of is in bijection with the set (G) of maximal elementary abelian p-sections of G up to conjugation, via the following bijection with generic points:
(G)_/G ∼⟷^0
(H,K) ⟼φ_(H,K)(η_H/K).
In particular, ()=p-rank_(G) is the sectional p-rank of G.
We use coequalizer (<ref>).
Recall from <Ref> that for an elementary abelian p-group E is always irreducible.
We get immediately that the map (G)_/G^0 is a surjection.
Assume now that φ_E(η_E)=φ_E'(η_E') for E,E'∈(G) and let us show that E and E' are conjugate p-sections.
By <Ref>, there exists a finite sequence of maximal relations responsible for the identity φ_E(η_E)=φ_E'(η_E') and we will treat one relation at a time.
More precisely, assuming that the generic point in ((E_1)) is in the image of (the map on spectra induced by) some relation E_1Lg_2E_2, with E_1,E_2∈(G), we will show below that both g_i are conjugation isomorphisms (type <ref> in <Ref>).
In particular, E_1,E_2 are conjugate.
And as conjugation identifies the unique generic points in the spectra for E_1 and E_2 one can apply induction on the number of relations to conclude.
As the map induced by g_1 is a closed immersion (<Ref>) it must be a homeomorphism once its image contains the generic point.
From this, we deduce that g_1 itself must be an isomorphism.
(Indeed, the map induced by restriction to a proper subgroup of E_1 is not surjective, already on the cohomological open.
And similarly, the map induced by modular fixed-points with respect to a non-trivial subgroup of E_1 does not even meet the cohomological open.)
Hence L≃ E_1 is maximal too and therefore g_2 is also an isomorphism.
The only isomorphisms in are conjugations (<Ref>) and we conclude.
The second statement follows from this together with <Ref>.
For G not elementary abelian, we already saw with the example of G=Q_8 in <cit.> that can have larger Krull dimension than (()).
And indeed, Q_8 has sectional p-rank two and p-rank one.
For every maximal (H,K)∈(G), since φ_(H,K) is a closed map, it yields a surjection φ_(H,K)φ_H,K(η_H/K) from the spectrum of the elementary abelian E=H/K onto the corresponding irreducible component of . We illustrate this with G=D_8 in <Ref> below, where said surjection coincides with the folding of <Ref>.
We will now explain the meaning of <Ref> and in effect compute ((D_8)) for G=D_8=⟨ r,s| r^4=s^2=1, rs=sr^-1⟩ the dihedral group of order 8.
We label its subgroups as follows ([ The two Klein-four subgroups are called K and K'. The names L_0 and L_1 for the cyclic subgroups of K (resp. L'_0 and L'_1 in K') are chosen to evoke N_0 and N_1 in <Ref>. The third cyclic subgroup, N_∞, corresponds to C_2=Z(D_8) and is common to K and K'.]):
[Subgroup lattice of D_8: the group D_8 on top; below it the maximal subgroups K=⟨ r^2,s⟩, C_4=⟨ r⟩ and K'=⟨ r^2,r^3s⟩; below these the order-two subgroups L_0=⟨ s⟩, L_1=⟨ r^2s⟩, C_2=⟨ r^2⟩, L'_0=⟨ rs⟩ and L'_1=⟨ r^3s⟩, with L_0,L_1,C_2≤ K, C_2≤ C_4 and C_2,L'_0,L'_1≤ K'; the trivial subgroup 1 at the bottom.]
Since L_0 and L_1 (resp. L'_0 and L'_1) are G-conjugate, by the element r, we have exactly eight very closed points (H) for H∈p(G)_/G. We shall focus on the open complement of these very closed points, the periodic locus '((D_8)) of <Ref>, which is of Krull dimension one.
Since all maps in the coequalizer diagram (<ref>) preserve the dimension of points (<Ref>) we may first remove these very closed points and then compute the coequalizer.
Let us describe (D_8) and the maximal relations.
In addition to the maximal elementary abelian subgroups K and K' there is one maximal elementary abelian subquotient D_8/C_2. So we have three maximal sections: (D_8)={(K,1),(K',1),(D_8,C_2)}.
We compute the relations in the category 2D_8 which is obtained from 2D_8 by quotienting each hom-set ((H,M),(H',M')) by the action of H', as in <Ref>.
One then easily finds by inspection five non-degenerate ([ that is, not of the form x𝕀x𝕀x (which would not affect the coequalizer (<ref>) anyway)]) maximal relations up to isomorphism, pictured as follows:
[Diagram: the three maximal sections (K,1), (K',1) and (D_8,C_2) sit at the corners of a triangle. The bottom edge is the relation (K,1) ⟵ (C_2,1) ⟶ (K',1), both arrows drawn in green; the left edge is (K,1) ⟵ (K,C_2) ⟶ (D_8,C_2), with the arrow towards (K,1) in brown and the arrow towards (D_8,C_2) in green; the right edge (K',1) ⟵ (K',C_2) ⟶ (D_8,C_2) is coloured likewise. In addition, (K,1) and (K',1) each carry a loop labelled r.]
Here, the loops labeled r represent the relations (K_,1)1(K_,1)r(K_,1), and similarly for K'.
All unlabeled arrows are given by 1∈ D_8, as in <Ref> <ref>-<ref>.
We explain below the brown/green color-coding in the other three relations.
Hence the space '((D_8)) is a quotient of three copies of the space '((E)) for E the Klein-four group, equal to ^1_ with three doubled points as in <Ref>.
Let us discuss the relations. We start with the self-relation corresponding to the loop r on (K_,1).
As the conjugation by r on K_ simply swaps the subgroups L_0 and L_1, we deduce from <Ref> that the quotient of '((K_)) by this relation is a ℙ^1_ with two doubled points, as in <Ref>.
The same is true for K'.
At this stage we have identified the three irreducible components (see <Ref>) and the three remaining relations will tell us how to glue these components.
The three sides of the `triangle' (<ref>) display maximal relations that identify a single point of one irreducible component with a single point of another.
Indeed, each of the middle sections K/C_2, K'/C_2 and C_2/1 is a C_2, whose periodic locus is a single point η_C_2 (<Ref>).
Each edge in (<ref>) identifies the image of that single point η_C_2 in the two corresponding irreducible components in <Ref>.
The color in (<ref>) records the color of that image: A brown point or a green point in the ^1_ with doubled points.
Let us do all three.
First, the relation between the two Klein-fours, K and K', at the bottom of (<ref>), identifies the two green points corresponding to C_2, as we are used to with projective support varieties.
Then, the last two relations in (<ref>), on the sides, identify a brown point in the K- or K'-component with the green point in the D_8/C_2-component corresponding to K_/C_2 and K'_/C_2, respectively.
This is a direct verification, for instance using that (ψ^C_2)((ρ^D_8_K))=(ψ^C_2)(_D_8((D_8/K)))=_D_8/C_2(Ψ^C_2((D_8/K)))=_D_8/C_2((D_8/K))=(ρ^D_8/C_2_K/C_2) in ((D_8/C_2)).
Thus we obtain '((D_8)) from these three identifications on the space of <Ref>.
The result was depicted in <Ref>.
|
http://arxiv.org/abs/2307.04197v1 | 20230709150552 | Vacuum Integration: UV- and IR-divergencies | [
"I. V. Anikin"
] | hep-ph | [
"hep-ph",
"hep-th"
] |
Vacuum Integration: UV- and IR-divergencies
I. V. Anikin
===========================================
§ INTRODUCTION
In different QFT models, at the classical level, the effects of spontaneous symmetry breaking are very important in the context of the geometrical analysis of the Goldstone theorem. In this connection, the study of the vacuum state as the minimum of the potential plays a significant role <cit.>.
Meanwhile, the quantum corrections, which usually tend to distort this geometrical picture and are computed within effective potential (EP) methods, allow one to return to the classical geometrical analysis of models with spontaneous symmetry breaking.
In the standard EP approaches, the quantum corrections are given by vacuum integrations with massive propagators. However, special interest is attached to vacuum integrations with massless propagators, mostly dictated by the use of conformal symmetry (see for example <cit.>).
On the other hand, working with massless vacuum integrations demands careful consideration.
Indeed, general dimensional analysis suggests that all vacuum integrations with massless propagators lead to zero <cit.>. This is true except for the particular case of a dimensionless integrand, where only the ultraviolet (UV), or infrared (IR), momentum region is under consideration.
In this case, the arguments of dimensional analysis cannot be applied.
In <cit.>, it has been shown that the vacuum integration of dimensionless and massless integrand
is proportional to δ(n-D/2) where the space dimension is defined as D=d-2ε
(d=2, 4, 6, etc.) and n denotes the propagator index.
The delta-function, as a singular generalized function (distribution), is a well-defined linear functional on a suitable space of compactly supported test functions. In the case of dimensional regularization, this space should be realized with the integration measure dε φ(ε), where φ has localized support.
However, it is not always convenient, or even possible, to deal with the measure dε <cit.>. Moreover, owing to the symmetry properties, the delta-function usually hides the information on the UV- (or IR-) divergency.
Following the Gorishnii-Isaev method <cit.>, we present all the necessary details on the vacuum integration, where the delta-function is treated in the framework of the sequential approach <cit.>.
We also demonstrate how the delta-function represents the UV (IR) regimes.
§ Δ_F(0)-SINGULARITY
Let us consider the simplest case of scalar massless propagator Δ_F(0) giving the tad-pole diagram.
Using the Fourier transform, the propagator Δ_F(0) can be write as
[For the sake of shortness, here and in what follows the momentum loop normalization is hidden in (d^D k).
Moreover, the Euclidian measure of momentum integrations has been implies.]
Δ_F(0) = ∫(d^D k)/k^2=
∫ (d^D k) { C^-1(D,1) ∫ d^Dz e^-i k z/(z^2)^D/2-1}
=C^-1(D,1) ∫ d^Dz δ(z)/(z^2)^D/2-1≡Γ(D/2-1) ∫ (d^Dz) δ(z)/(z^2)^D/2-1,
where the integration measure (d^D z) absorbs the normalization constant
i(-π)^D/2 arising from
C^-1(D,n)=i(-π)^D/2Γ(D/2-n)/Γ(n).
If we assume that D/2-1=0,
then the propagator in Eqn. (<ref>) takes a form of
Δ_F(0) = Γ(0) ∫ (d^Dz) δ(z) ⇒Γ(0),
where, as is well known, the singularity of the Γ-function can be presented as
Γ(0) = lim_ϵ→ 0Γ(ϵ)= lim_ϵ→ 0{1/ϵ + ....}.
It is worth noticing that
the condition given by D/2-1=0 should
be applied before the integration over (d^D k) in Eqn. (<ref>) in order to avoid the uncertainty, see also Sec. <ref>.
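As an aside (our illustration, not the author's), the pole structure of Γ(ϵ) quoted in Eqn. (<ref>) can be reproduced symbolically:

import sympy as sp

eps = sp.symbols('epsilon')
# Laurent expansion of Gamma(epsilon) around epsilon = 0
print(sp.series(sp.gamma(eps), eps, 0, 2))
# -> 1/epsilon - EulerGamma + epsilon*(EulerGamma**2/2 + pi**2/12) + O(epsilon**2)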
On the other hand, according to <cit.>, the vacuum integration method applied to
the Feynman propagator results in the delta-function. Let us remind a key moment of Gorishni-Isaev's method.
Using the spherical system (in the momentum Euclidian space), Δ_F(0) can be represented as
Δ_F(0) = ∫(d^D k)/k^2=
1/2∫ dΩ∫_0^∞ dβ β^D/2-2,
where dΩ gives the finite angle measure of integration.
The replacement β = e^y leads to the following expression
Δ_F(0) =
1/2∫ dΩ∫_-∞^∞ (dy) e^iy [(-i)(D/2-1)]=
1/2 | i |δ( D/2 -1 ) ∫ dΩ
or, restoring all coefficients, it reads
Δ_F(0)= - 2i π^1+D/2 δ(1-D/2) |_D=2 = - 2i π^2 δ(0).
So, for the case of D=2, the matching of Eqns. (<ref>) and (<ref>) gives the following
representation
(-i) Δ_F(0)=Γ(0) = - 2 π^2 δ(0).
With this, we may conclude that the δ(0)-singularity can be treated as the singularity of Γ(0), see Eqn. (<ref>).
The same inference has been reached by a different method, see <cit.>.
Notice that the physical (UV or IR) nature of the mentioned singularity has been somewhat hidden.
In the dimensional regularization, the UV- and IR-divergencies are associated with the small positive (ε >0)
and negative (ε < 0) regularized parameter ε, respectively.
In this connection, using the α-parametrization, we rewrite Eqn. (<ref>) as
Δ_F(0) = ∫(d^D k)/k^2=
Γ(D/2-1) ∫ (d^Dz) δ(z)/(z^2)^D/2-1
=
∫ (d^Dz) δ(z) {∫_0^∞ dα α^D/2-2 e^-α z^2}=∫_0^∞ (dα) α^D/2-2.
Hence, one gets (modulo the normalization factor which is now irrelevant)
Δ_F(0) = ∫(d^D k)/k^2=
∫_0^∞ (dα) α^D/2-2⇒1/D/2-1{lim_α→∞α^D/2-1 - lim_α→ 0 α^D/2-1}.
From Eqn. (<ref>), one can see that the first term corresponds to the UV-divergency, while the second term –
to the IR-divergency. That is, we have
lim_α→∞α^D/2-1 = [∞]_UV if D >2,
lim_α→ 0 α^D/2-1 = [∞]_IR if D < 2.
In other words, if the dimensional parameter ϵ in D= d - 2ϵ is small one, | ϵ | < 1,
and it varies from the negative to positive variables,
we have the following representation for Δ_F(0)
Δ_F(0) |_d=2 ⇒1/D/2-1{Θ(D>2 | ϵ < 0)
lim_α→∞α^D/2-1 -
Θ(D<2 | ϵ > 0)
lim_α→ 0 α^D/2-1} = 0
⇒δ( 1- D/2 ) |_D≠ 2=0,
where ϵ should be considered as an external independent parameter.
From Eqns. (<ref>) and (<ref>), in the dimensional regularization, one can see that the
positive small ϵ is regularizing the UV-divergency but not IR-divergency.
Thus, every of the methods gives the same final conclusion.
To conclude this section, we remind the other useful representation given by
Δ_F(0) = lim_z^2→ 0Δ_F(z^2) =
lim_z^2→ 01/4πδ_+(z^2)= δ(0), z∈𝔼^4
which is in agreement with Eqns. (<ref>) and (<ref>).
§ VACUUM INTEGRATION AS A LIMIT OF NON-VACUUM INTEGRATION
We now address the relation between vacuum and non-vacuum integrations.
In the dimensional regularization procedure, we begin with the consideration of
two-point 1PI massless Green function given by
ℐ(p^2)= ∫(d^D k)/k^2(k^2+p^2)=
(c.c.) (p^2)^D/2-2 G(1,1),
where (c.c) implies the coefficient constant and
G(1,1)=Γ(-D/2+2) Γ^2(D/2-1)/Γ(D-2).
Using D=4-2ϵ, we get
ℐ(p^2)= ∫(d^D k)/k^2(k^2+p^2)=(c.c.) (p^2)^-ϵ Γ(ϵ) Γ^2(1-ϵ)/Γ(2-2ϵ).
In Eqns. (<ref>) and (<ref>), the scale dependence of μ^2 is hidden as irrelevant one.
The vacuum integration can be obtained from Eqn. (<ref>) with the help of the corresponding limit as
𝒱_2≡∫(d^D k)/(k^2)^2=
lim_p^2→ 0ℐ(p^2).
There are, however, some subtleties of this limit which are now under our considerations.
Indeed, having used the α-representation, let us calculate the integral of Eqn. (<ref>).
We have the following
ℐ(p^2)= (c.c.)
∫_0^∞ dα dβe^-p^2 αβ/α + β/[α+β]^D/2
=(c.c.) ∫_0^∞λλ^1-D/2∫_0^1 dx e^-p^2λ x x̅,
where
α =λ x_1, β= λ x_2, λ∈ [0, ∞].
The next stage of calculations is to make a replacement as
λ̃= p^2 λ x x̅, dλ̃= p^2 x x̅ dλ
in the exponential function. This replacement simplifies the integrals and it leads to the corresponding combination
of Γ-functions denoted as G(1,1) <cit.>. Ultimately, we reproduce the result
presented by Eqns. (<ref>) and (<ref>).
Now, the first mathematical subtlety is that if we suppose the limits p^2→ 0 and ϵ→ 0 are taken consecutively, not simultaneously, it is clear that these limits do not commute, i.e.
[ lim_p^2→ 0, lim_ϵ→ 0] ≠ 0.
On the other hand, if the limits are simultaneous ones we deal with the uncertainty of [0]^0 which should be somehow resolved.
The second subtlety is related to the limit p^2→ 0 and the replacement of Eqn. (<ref>).
Namely, in order to avoid the mentioned uncertainty, we have to implement the limit p^2→ 0 before the possible replacement.
In this case, the limit of p^2→ 0 is well-defined operation and we finally obtain that
lim_p^2→ 0ℐ(p^2) = (c.c.) ∫_0^∞ dλλ^1-D/2 =
1/2- D/2{lim_λ→∞λ^2-D/2 - lim_λ→ 0 λ^2- D/2}
≡∫(d^D k)/(k^2)^2 =𝒱_2.
§ Δ(0)-SINGULARITY
We are now in a position to discuss the treatment of the δ(0)-singularity (or δ(0)-uncertainty).
To this aim, we follow the sequential approach to singular generalized functions (distributions).
On the one hand, based on dimensional analysis, we may conclude that all massless vacuum integrations disappear, i.e.
𝒱_n=∫(d^D k)/[k^2]^n=0 for n≠ D/2.
However, the case of n=D/2 (or n=2 if ε→ 0) requires special consideration because the dimensional-analysis argument no longer works.
Nevertheless, the nullification of 𝒱_D/2 still takes place, but for a different reason: it turns out that the ultraviolet and infrared divergencies cancel each other.
Hence, if only the ultraviolet divergencies are under consideration, 𝒱_D/2 is not equal to zero.
To demonstrate this, we dwell on the vacuum integration which is externally IR-regularized.
It is necessary to recall that, in the space with D=d-2ϵ, a positive value of ϵ allows one to avoid the UV-divergency.
In the
spherical co-ordinate system, we write the following
representation
𝒱_2=∫_UV(d^D k)/[k^2]^2≡π^D/2/Γ(D/2)∫_μ^2^∞ dββ^D/2-3 with β=|k|^2,
where μ^2 plays a role of IR-regularization and the angular integration given by the measure dΩ is calculated explicitly.
Next, calculating β-integration, we reach the representation as
𝒱_2=
π^2-εμ^-2ε/Γ(2-ε) 1/ε|_ε→ 0,
where it is shown that the ϵ-pole corresponds to the UV-divergency only, because the IR-divergency is absent by construction thanks to μ^2.
This is a very well-known representation used, for example, in <cit.>.
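The radial β-integration behind this formula can be checked symbolically; the short sketch below (ours, with the angular factor dropped) confirms that for ϵ>0 the integral produces the single pole μ^-2ϵ/ϵ.

import sympy as sp

beta = sp.symbols('beta', positive=True)
eps, mu = sp.symbols('epsilon mu', positive=True)

# radial part of V_2 with D = 4 - 2*epsilon: integral of beta**(D/2 - 3) = beta**(-1 - epsilon)
F = sp.integrate(beta**(-1 - eps), beta)                 # antiderivative -beta**(-epsilon)/epsilon
radial = sp.limit(F, beta, sp.oo) - F.subs(beta, mu**2)
print(sp.powsimp(radial))                                # -> mu**(-2*epsilon)/epsilon
print(sp.limit(eps * radial, eps, 0))                    # residue of the UV pole -> 1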
On the other hand, we are able to calculate the vacuum integration by
the Gorishnii-Isaev method <cit.>. In this case, 𝒱_n reads
𝒱_n=∫(d^D k)/[k^2]^n =
2i π^1+D/2/(-1)^D/2 Γ(D/2)δ(n-D/2).
Supposing D=4-2ε, the only contribution is given by
𝒱_2=∫(d^D k)/[k^2]^2 =
2i π^3-ε/Γ(2-ε)δ(ε) ≠ 0.
Hence, the delta-function of argument ϵ reflects the UV-divergency.
We specially stress that the representations of 𝒱_2 given by Eqns. (<ref>) and (<ref>)
are equivalent.
The delta-function, as a generalized function (distribution), is a linear singular functional (which cannot be generated by any locally integrable function) defined on a suitable space of test functions.
Such a definition is perfectly consistent, but it is not the unique one. Namely, the delta-function can be understood with the help of fundamental sequences of regular functionals equipped with the corresponding weak limit,
see for example <cit.>.
Besides, one of the delta-function representations is related to the following realization
δ(t)=lim_ε→ 0δ_ε(t)≡lim_ε→ 0 St.F. (-ε≤ t ≤ 0)/ε,
where St.F.(-ε≤ t ≤ 0) implies the well-known step-function without any uncertainties.
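A one-line sympy check (ours) of this realization with the concrete test function φ(t)=cos t: the regularized pairing converges to φ(0) as ε→0.

import sympy as sp

t = sp.symbols('t', real=True)
eps = sp.symbols('epsilon', positive=True)
phi = sp.cos(t)                                   # any smooth test function

pairing = sp.integrate(phi, (t, -eps, 0)) / eps   # < delta_eps , phi >
print(sp.limit(pairing, eps, 0))                  # -> 1 = phi(0)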
Going back to Eqn. (<ref>), one can see that the treatment of δ(ε) as the linear (singular)
functional on the finite function space with dμ(ε)=dεϕ(ε) meets some difficulties
within the dimensional regularization approach. Indeed, for practical use,
ε is not a convenient variable for the construction of the test-function space, because we ultimately need
to focus on the limit ϵ→ 0.
Meanwhile, within the sequential approach <cit.>, the delta-function might be considered as the usual singular (meromorphic)
function and the δ(0)-singularity/uncertainty can be treated as
a pole of the first order <cit.>,
δ(0)=lim_ε→ 0δ_ε(0)≡lim_ε→ 01/ε.
For the demanding mathematician, the representation of Eqn. (<ref>) should be understood merely as a symbol.
That is, δ(0) denotes alternatively the limit of 1/ϵ.
This representation is also backed by the obvious fact that Eqns. (<ref>) and (<ref>)
are equivalent ones.
It is worth noticing that the representation of δ(0) through the pole of an arbitrary meromorphic function
should be used very carefully.
For example, if we suppose that (here, z∈𝔼^4 and the delta-function is assumed to be a functional on
the finite function space)
[ δ(z)]^2 = δ(0) δ(z),
the representation given by
δ(z) = lim_ϵ→ 0δ_ϵ(z), δ_ϵ(z) = 1/π^2 ϵ^4
e^- z^2/ϵ^2⇒δ(0) ∼δ_ϵ(0)=1/π^2 ϵ^4
does not satisfy the condition of Eqn. (<ref>).
Another informative example can be found in <cit.>.
§ CONCLUSION
To conclude, we have presented important clarifications regarding massless vacuum integrations.
In this note, we have demonstrated the advantage of the sequential approach, where singular generalized functions (distributions) are treated as fundamental sequences of regular functionals. Due to this treatment, the δ(0)-uncertainty can be resolved via a meromorphic function with a first-order pole.
Also, it has been shown in detail how the delta-function represents either the UV-regime or the IR-regime.
§ ACKNOWLEDGEMENTS
Our special thanks go to S.V. Mikhailov and L. Szymanowski for very useful and stimulating discussions.
99
Vasilev:2004yr
A. N. Vasilev,
“The field theoretic renormalization group in critical behavior theory and stochastic dynamics,”
Boca Raton, USA: Chapman & Hall/CRC (2004) 681 p
Anikin:2020dlh
I. V. Anikin,
Phys. Part. Nucl. Lett. 18, 290 (2021)
Anikin:2023wkk
I. V. Anikin,
“Conformal Symmetry and Effective Potential: I. Vacuum V_z,x-operation for the Green functions,”
arXiv:2306.15373 [hep-ph]
Anikin:2023ogb
I. V. Anikin,
“Conformal Symmetry and Effective Potential: II. Evolution,”
arXiv:2306.17018 [hep-ph]
Grozin:2005yg
A. Grozin,
“Lectures on QED and QCD,” arXiv:hep-ph/0508242 [hep-ph].
Grozin:2007zz
A. Grozin,
“Lectures on QED and QCD: Practical calculation and renormalization of one- and multi-loop Feynman diagrams,”
World Scientific, 2007,
ISBN 978-981-256-914-1, 978-981-256-914-1
Gorishnii:1984te
S. G. Gorishnii and A. P. Isaev,
Theor. Math. Phys. 62, 232 (1985)
[Teor. Mat. Fiz. 62, 345 (1985)]
Antosik:1973
Antosik P., Mikusinsky Y., Sikorsky R., “Theory of Generalized Functions: A Sequential Approach,”
(PWN – Polish Scientific Publisher, 1973)
Gelfand:1964
I. M. Gelfand and G. E. Shilov,
“Generalized Functions Vol 1 Properties And Operations,”
Academic Press, 1964, ISBN-0-12-279501-6
Efimov:1973pjo
G. V. Efimov,
Int. J. Theor. Phys. 10, 19 (1974)
|
http://arxiv.org/abs/2307.04568v1 | 20230710140046 | Global synchronization on time-varying higher-order structures | [
"Md Sayeed Anwar",
"Dibakar Ghosh",
"Timoteo Carletti"
] | cond-mat.stat-mech | [
"cond-mat.stat-mech",
"math-ph",
"math.DS",
"math.MP",
"nlin.AO",
"nlin.CD",
"nlin.PS"
] |
Physics and Applied Mathematics Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, India
Department of Mathematics and Namur Institute for Complex Systems, naXys, University of Namur, 2 rue Grafé, Namur B5000, Belgium
Synchronization has received a lot of attention from the scientific community for systems evolving on static networks or higher-order structures, such as hypergraphs and simplicial complexes. In many relevant real-world applications, the latter are not static but evolve in time. In this paper we thus discuss the impact of the time-varying nature of higher-order structures on the emergence of global synchronization.
To achieve this goal we extend the master stability formalism to account, in a general way, for the additional contributions arising from the time evolution of the higher-order structure supporting the dynamical systems. The theory is successfully challenged against two illustrative
examples, the Stuart-Landau nonlinear oscillator and the Lorenz chaotic oscillator.
Global synchronization on time-varying higher-order structures
Md Sayeed Anwar, Dibakar Ghosh, and Timoteo Carletti
==============================================================
§ INTRODUCTION
In the realm of complex systems, synchronization refers to the intriguing ability of coupled nonlinear oscillators to self-organize and exhibit a collective unison behavior without the need for a central controller <cit.>. This phenomenon, observed in a wide range of human-made and natural systems <cit.>, continues to inspire scientists seeking to unravel its underlying mechanisms.
To study synchronization, network science has proved to be a powerful and effective framework. Here, the interconnected nonlinear oscillators are represented as nodes, while their interactions are depicted as links <cit.>. However, the classical static network representation has its limitation in modeling many empirical systems, such as social networks <cit.>, brain networks <cit.>, where the connections among individual basic units are adaptable enough to be considered to evolve through time. Therefore, the framework of networks has been generalized as to include time-varying networks <cit.>, whose connections vary with time. The results presented in this framework support the claim that synchronization is enhanced by the dynamics of the supporting medium <cit.>.
Another intrinsic limitation of networks is due to their capability to only model pairwise interactions. To go beyond this issue, scholars have brought to the fore the relevance of higher-order structures, which surpass the traditional network setting that models the interactions between individual basic units only through pairwise links <cit.>. By considering the simultaneous interactions of many agents, higher-order structures, namely hypergraphs <cit.> and simplicial complexes <cit.>, offer a more comprehensive understanding of complex systems. These higher-order structures have been proven to produce novel features in various dynamical processes, including consensus <cit.>, random walks <cit.>, pattern formation <cit.>, synchronization <cit.>, social contagion and epidemics <cit.>. Nevertheless, the suggested framework is not sufficiently general for describing systems with many-body interactions that vary with time. As an example, group interactions in social systems have time-varying nature as the interactions among groups of individuals are not always active but rather change throughout time <cit.>. Some early works have begun to investigate the time-varying aspect of many-body interactions in various dynamical processes. For instance, time-varying group interactions have been demonstrated to influence the convergence period of consensus dynamics <cit.> and to predict the onset of endemic state in epidemic spreading <cit.>.
The present work is motivated by these recent research directions, and it aims to take one step further by considering the impact of time-varying higher-order structures in the synchronization of nonlinear oscillators. In this context, a preliminary effort has been reported in <cit.>, that investigates synchronization in time-varying simplicial complexes, limited only to fast switching <cit.> among distinct static simplicial configurations, implying that the time scale of the simplicial evolution is exceedingly fast compared to that of the underlying dynamical system. In contrast, in the present work, we allow the higher-order structures to evolve freely with time, thus removing any limitations on the imposed time evolution of the higher-order structure. We present the results in the framework of hypergraphs, but they hold true also for simplicial complexes. Under such broad circumstances, we develop a theory to determine the conditions ensuring the stability of a globally synchronized state that generalizes the Master Stability Equation <cit.> to a setting where the time evolution of underlying higher-order structures is explicitly considered. The generalized framework we discuss here assumes that the coupling functions cancel out when the dynamics of individual oscillators are identical, which is a necessary condition that must be met for the extended system to have a synchronous solution and it has been frequently used in the literature across various domains. The developed theory reveals that the consideration of temporality in group interactions can induce synchronization more easily than static group interactions, tested on higher-order structures of coupled Stuart Landau oscillators and paradigmatic Lorenz systems.
§ THE MODEL
To start with, let us consider a m-dimensional dynamical system whose time evolution is described by the following ordinary differential equation
dx⃗/dt = f⃗(x⃗) ,
where x⃗∈ℝ^m denotes the state vector and f⃗:ℝ^m→ℝ^m some smooth nonlinear function; let us assume moreover that system (<ref>) exhibits an oscillatory behavior, being the latter periodic or irregular; we are thus considering the framework of generic nonlinear oscillators. Let us now consider n identical copies of system (<ref>) coupled by a symmetric higher-order structure; namely, we allow the nonlinear oscillators to interact in couples, as well as in triplets, quadruplets, and so on, up to interactions among D+1 units. We can thus describe the time evolution of the state vector of the i-th unit by
ẋ⃗̇_i = f⃗(x⃗_⃗i⃗) + ∑_d=1^D q_d∑_j_1,…,j_d=1^n A_ij_1… j_d^(d)(t)g⃗^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d) ,
where for d=1,…,D, q_d>0 denotes the coupling strength, g⃗^(d):ℝ^(d+1)m→ℝ^m the nonlinear coupling function and 𝐀^(d)(t) the tensor encoding which units are interacting together. More precisely, A^(d)_ij_1… j_d(t)=1 if the units i,j_1,… ,j_d do interact at time t. Observe that such a tensor depends on time: both the intensity of the coupling and which units are coupled change in time. Finally, we assume the time-varying interaction to be symmetric, namely if A^(d)_ij_1… j_d(t)=1, then A^(d)_π(ij_1… j_d)(t)=1 for any permutation π of the indexes i,j_1,… , j_d. Let us emphasize that we consider the number of nodes to be fixed and only the interactions change in time; one could relax this assumption by considering a sufficiently large reservoir of nodes, from which the core of the system can recruit new nodes or deposit unused ones.
Let us fix a periodic reference solution, s⃗(t), of system (<ref>). We are interested in determining the conditions under which the orbit (s⃗(t),…,s⃗(t))^⊤ is a solution of the coupled system (<ref>), and moreover it is stable, namely the n units globally synchronize and behave at unison. A necessary condition is that the coupling functions vanish once evaluated on such orbit, i.e., g⃗^(d)(s⃗,…,s⃗)=0, for d=1,…, D. This assumption is known in the literature as non-invasive condition.
For the sake of pedagogy, we will hereby consider a particular case of non-invasive couplings and we will refer the interested reader to Appendix <ref> for a general discussion. We are thus assuming the coupling functions g⃗^(d) to be diffusive-like, namely for each d there exists a function h⃗^(d):ℝ^dm→ℝ^m such that
g⃗^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d)=h⃗^(d)(x⃗_j_1,…,x⃗_j_d)-h⃗^(d)(x⃗_i,…,x⃗_i) .
In this way we can straightforwardly ensure that the coupling term in Eq. (<ref>) vanishes once evaluated on the orbit (s⃗(t),…,s⃗(t))^⊤, allowing thus to conclude that the latter is also a solution of the coupled system.
To study the stability of the reference solution, let us now perturb the synchronous solution (s⃗(t),…,s⃗(t))^⊤ with a spatially inhomogeneous term, meaning that ∀ i∈{1,…,n} we define x⃗_i=s⃗+δx⃗_i. Substituting the latter into Eq. (<ref>) and expanding up to the first order, we obtain
δẋ⃗̇_i = ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) ∑_ℓ=1^d∂h⃗^(d)/∂x⃗_j_ℓ|_(s⃗,…,s⃗)δx⃗_j_ℓ ,
where
B_ij_1(t) = A_ij_1^(1)(t)- k^(1)_i(t)δ_ij_1 ,
B_ij_1j_2(t) = A_ij_1j_2^(2)(t)-2k_i^(2)(t)δ_ij_1j_2 , …
B_ij_1j_2… j_D(t) = A_ij_1j_2… j_D^(D)(t)-D!k_i^(D)(t)δ_ij_1j_2… j_D ,
being δ_ij_1j_2… j_D the generalized multi-indexes Kronecker-δ, and the (time-varying) d-degree of node i is given by
k_i^(d)(t)=1/d!∑_j_1,..,j_d=1^n A_ij_1… j_d^(d)(t) ,
which represents the number of hyperedges of order d incident to node i at time t. Observe that if 𝐀^(d) is weighted, then k_i^(d)(t) counts both the number and the weight, it is thus the generalization of the strength of a node. Let us now define
k_ij^(d)(t)=1/(d-1)!∑_j_1,...,j_d-1^n A_ijj_1… j_d-1^(d)(t) ,
namely the number of hyperedges of order d containing both nodes i and j at time t. Again, if 𝐀^(d) is weighted, then k_ij^(d)(t) generalizes the link strength. Let us observe that, because of the invariance of 𝐀^(d) under index permutation, we can conclude that k_ij^(d)(t)=k_ji^(d)(t). Finally, we define the generalized time-varying higher-order Laplacian matrix for the interactions of order d as
L_ij^(d)(t)=
-d!k_i^(d)(t) if i=j
(d-1)!k_ij^(d)(t) if i≠ j .
Observe that such a matrix is symmetric because of the symmetry assumption on the tensors 𝐀^(d). Let us also notice the difference in sign with respect to other notations used in the literature.
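For concreteness, the generalized degrees and the higher-order Laplace matrix just defined can be assembled directly from the adjacency tensors; the following minimal Python sketch (the function names are ours and purely illustrative) does so for a small dense tensor evaluated at a fixed time t.

import numpy as np
from math import factorial
from itertools import permutations

def degree(A_d):
    """Generalized d-degree k_i^(d): (1/d!) times the sum of A^(d) over all remaining indexes."""
    d = A_d.ndim - 1
    return A_d.reshape(A_d.shape[0], -1).sum(axis=1) / factorial(d)

def pair_degree(A_d):
    """k_ij^(d): (1/(d-1)!) times the sum of A^(d)_{i j j_1 ... j_(d-1)} over j_1, ..., j_(d-1)."""
    d = A_d.ndim - 1
    n = A_d.shape[0]
    return A_d.reshape(n, n, -1).sum(axis=2) / factorial(d - 1)

def higher_order_laplacian(A_d):
    """L^(d): (d-1)! k_ij^(d) off the diagonal, -d! k_i^(d) on the diagonal."""
    d = A_d.ndim - 1
    L = factorial(d - 1) * pair_degree(A_d)
    np.fill_diagonal(L, -factorial(d) * degree(A_d))
    return L

# toy check: n = 3 nodes carrying one (unweighted) triangle
n = 3
A2 = np.zeros((n, n, n))
for p in permutations(range(n)):
    A2[p] = 1.0
L2 = higher_order_laplacian(A2)
print(L2)                 # symmetric, zero row sums, non-positive diagonal
print(L2.sum(axis=1))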
We can then rewrite Eq. (<ref>) as follows
δẋ⃗̇_i = ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d[∑_j_1=1^n ∂h⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)δx⃗_j_1∑_j_2,…,j_d=1^n B_ij_1… j_d(t) +…+ ∑_j_d=1^n ∂h⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)δx⃗_j_d∑_j_1,…,j_d-1=1^n B_ij_1… j_d(t)]
= ∂f⃗/∂x⃗_i|_s⃗δx⃗_i+∑_d=1^D q_d∑_j=1^n L^(d)_ij(t)[∂h⃗^(d)/∂x⃗_j_1 +…+ ∂h⃗^(d)/∂x⃗_j_d]_(s⃗,…,s⃗)δx⃗_j ,
where we used the fact that ∂h⃗^(d)/∂x⃗_j_1 +…+ ∂h⃗^(d)/∂x⃗_j_d is independent of the indexes, the latter being just placeholders identifying the variable with respect to which the derivative is taken. Finally, by defining
𝐉_f := ∂f⃗/∂x⃗_i|_s⃗(t) and
𝐉_h^(d) := ∑_ℓ=1^d ∂h⃗^(d)/∂x⃗_j_ℓ|_(s⃗(t),…,s⃗(t))∀ d∈{1,…,D} ,
we can rewrite Eq. (<ref>) in compact form
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j=1^n L^(d)_ij(t)𝐉_h^(d)δx⃗_j .
This is a non-autonomous linear differential equation determining the stability of the perturbation δx⃗_i, which can be assessed, for instance, by computing the largest Lyapunov exponent. To make some analytical progress in the study of Eq. (<ref>), we will consider two main directions: either the functions h⃗^(d) satisfy the condition of natural coupling (see Section <ref>) or the higher-order structures exhibit regular topologies (see Section <ref>). The aim of each assumption is to disentangle the dependence of the nonlinear coupling functions from the higher-order Laplace matrices and thus achieve a better understanding of the problem under study.
§.§ Natural coupling
Let us assume the functions h⃗^(d) to satisfy the condition of natural coupling, namely
h⃗^(D)(x⃗,…,x⃗)=…=h⃗^(2)(x⃗,x⃗)=h⃗^(1)(x⃗) ,
that implies 𝐉_h^(1)=𝐉_h^(2)=…=𝐉_h^(D)
and it allows to eventually rewrite Eq. (<ref>) as follows
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_j=1^n M_ij(t)𝐉_h^(1)δx⃗_j ,
where
M_ij(t) := ∑_d=1^D q_d L^(d)_ij(t) ∀ i,j=1,… n .
Let us observe that the matrix 𝐌(t) is a Laplace matrix; it is non-positive definite (as is each of the matrices 𝐋^(d)(t) for any d=1,…, D and any t>0, since q_d>0), it admits μ^(1)=0 as eigenvalue associated with the eigenvector ϕ^(1)=(1,…,1)^⊤ and it is symmetric. Hence there exists an orthonormal time-varying eigenbasis, ϕ^(α)(t), α=1,…,n, of 𝐌(t) with associated eigenvalues μ^(α)≤ 0. Let us define <cit.> the n× n time dependent matrix 𝐜(t) that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely
d ϕ⃗^(α)(t)/dt=∑_βc_αβ(t)ϕ⃗^(β)(t) ∀α=1,…, n .
By recalling the orthonormality condition
(ϕ⃗^(α)(t))^⊤·ϕ⃗^(β)(t)=δ_αβ ,
we can straightforwardly conclude that 𝐜 is a real skew-symmetric matrix with a null first row and first column, i.e., c_αβ+c_βα=0 and c_1α=0.
To go one step further, we consider Eq. (<ref>) and project it onto the eigendirections, namely we introduce δx⃗_i=∑_αδx̂⃗̂_αϕ^(α)_i and, recalling the definition of 𝐜, we obtain
dδx̂⃗̂_β/dt = ∑_α c_βα(t)δx̂⃗̂_α+[𝐉_f+ μ^(β)(t)𝐉_h^(1)]δx̂⃗̂_β .
Let us observe that the latter formula and the following analysis differ from those presented in <cit.>, where the perturbation is assumed to align onto a single mode, a hypothesis that ultimately translates into the stationarity of the Laplace eigenvectors, that is 𝐜=0. The same assumption is also at the root of the results by <cit.>; indeed, commuting time-varying networks imply a constant eigenbasis. In conclusion, Eq. (<ref>) returns the most general description of the projection of the linearized dynamics onto a generic time-varying Laplace eigenbasis, thus allowing us to draw general conclusions without unnecessary simplifying assumptions.
§.§ Regular topologies
An alternative approach to study Eq. (<ref>) is to assume regular topologies <cit.>, namely hypergraphs such that 𝐋^(d)(t) = α_d 𝐋^(1)(t), for d=1,…,D, with α_1=1 and α_d∈ℝ_+. Indeed we can use this assumption to obtain from Eq. (<ref>)
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_j=1^n L^(1)_ij(t)𝐉_ĥδx⃗_j ,
where
𝐉_ĥ := ∑_d=1^D q_d α_d 𝐉_h^(d) ,
which results in a sort of weighted nonlinear coupling term. We can now make use of the existence of a time-varying orthonormal basis of 𝐋^(1)(t), namely ψ^(α)(t), α=2,…,n, associated with eigenvalues Λ^(α) <0, together with ψ^(1)(t)=(1,…,1)^⊤ and Λ^(1)=0, to project δx⃗_i onto the n eigendirections, δx⃗_i=∑_αδx̃⃗̃_αψ^(α)_i. Because the latter vary in time, we need to define a second n× n time dependent matrix 𝐛(t) given by
d ψ⃗^(α)(t)/dt=∑_βb_αβ(t)ψ⃗^(β)(t) ∀α=1,…, n ,
which is again real and skew-symmetric, with a null first row and first column, i.e., b_αβ+b_βα=0 and b_1α=0, because of the orthonormality condition on the eigenvectors. By projecting Eq. (<ref>) onto ψ^(α)(t), we get
dδx̃⃗̃_β/dt = ∑_α b_βα(t)δx̃⃗̃_α+[𝐉_f+ Λ^(β)(t)𝐉_ĥ]δx̃⃗̃_β .
Let us conclude by observing that the latter equation has the same structure as Eq. (<ref>). Those equations constitute the generalization of the Master Stability Equation to the case of time-varying higher-order structures. The time-varying signature of the topology is captured by the matrices 𝐜(t) or 𝐛(t) and by the eigenvalues μ^(α)(t) or Λ^(α)(t), while the dynamics (resp. the coupling) is encoded in the Jacobian 𝐉_f (resp. 𝐉_h^(1) or 𝐉_ĥ).
It is important to notice that, as the eigenvalues μ^(1)=0, Λ^(1)=0 and the skew-symmetric matrices 𝐜(t), 𝐛(t) have null first row and column, in analogy with the MSF approaches carried over static networks <cit.> and higher-order structures <cit.>, also in the case of time-varying higher-order structures we can decouple the Master Stability Equation into two components. One component describes the movement along the synchronous manifold, while the other represents the evolution of the modes transverse to the synchronous manifold. The Maximum Lyapunov Exponent (MLE) associated with the transverse modes measures the exponential growth rate of a tiny perturbation in the transverse subspace. It serves as an enhanced form of the Master Stability Function (MSF) and provides valuable insights into the stability of the reference orbit. For the synchronous orbit to be stable, the MLE associated with all transverse modes must be negative. Moreover, the MSF approaches applied to static networks and higher-order structures can be simplified by examining the evolution of the perturbation along each independent eigendirection associated with distinct eigenvalues of the Laplacian matrix. Let us observe that this is not possible in the present case, because the matrices 𝐜(t) and 𝐛(t) mix the different modes and introduce a complex interdependence among them, making it challenging to disentangle their individual contributions. For this reason, one has to address the problem numerically <cit.>.
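A minimal numerical recipe is the standard one: integrate the linearized system along the reference orbit and average the logarithmic growth rate of the perturbation norm, with periodic renormalization. The sketch below assumes user-supplied callables jac_f(t), jac_h(d, t) and L^(d)(t), all evaluated along s⃗(t) (e.g., from a precomputed interpolation of the isolated orbit); it is meant only as an illustration of the procedure, not as an optimized implementation.

import numpy as np
from scipy.integrate import solve_ivp

def make_rhs(jac_f, jac_h, laplacians, q, n, m):
    """Right-hand side of the linearized system: the perturbation is stored as a flat
    vector of length n*m, jac_f(t) and jac_h(d, t) return the m x m Jacobians along
    the reference orbit, and laplacians[d-1](t) returns the n x n matrix L^(d)(t)."""
    def rhs(t, dx_flat):
        dX = dx_flat.reshape(n, m)                      # row i is delta x_i
        out = dX @ jac_f(t).T                           # J_f delta x_i
        for d, (q_d, L_d) in enumerate(zip(q, laplacians), start=1):
            out += q_d * (L_d(t) @ dX) @ jac_h(d, t).T  # q_d sum_j L^(d)_ij J_h^(d) delta x_j
        return out.reshape(-1)
    return rhs

def largest_lyapunov(rhs, dim, t_max=2000.0, n_steps=2000, seed=0):
    """MLE estimate obtained by averaging the logarithmic norm growth of a random
    perturbation, renormalized every t_max/n_steps time units."""
    rng = np.random.default_rng(seed)
    dx = rng.normal(size=dim)
    dx /= np.linalg.norm(dx)
    dt, t, log_sum = t_max / n_steps, 0.0, 0.0
    for _ in range(n_steps):
        dx = solve_ivp(rhs, (t, t + dt), dx, rtol=1e-8, atol=1e-10).y[:, -1]
        norm = np.linalg.norm(dx)
        log_sum += np.log(norm)
        dx /= norm
        t += dt
    return log_sum / t_max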
To demonstrate the theory introduced above and to emphasize the outcomes arising from the modified Master Stability Equations (<ref>) and (<ref>), we present two key examples in the following sections. Indeed, we will utilize the Stuart-Landau limit cycle oscillator and the chaotic Lorenz system as prototype dynamical systems anchored to each individual node. To simplify the calculations, we assume that the hypergraph consists of only three nodes, three links and one triangle (face), whose weights change in time. Additionally, the eigenvector projection matrices 𝐜(t) and 𝐛(t) do not vary in time; this assumption results from a suitable choice of the Laplace eigenbasis, as explained in Appendix <ref>. Finally, to simplify the analysis, we also assume the Laplace eigenvalues to be constant in time. Let us stress that, despite such assumptions, the proposed framework is very general and can be applied to any time-varying hypergraph.
§ SYNCHRONIZATION OF STUART-LANDAU OSCILLATORS COUPLED VIA TIME-VARYING HIGHER-ORDER NETWORKS
The aim of this section is to present an application of the theory introduced above. We use the Stuart-Landau (SL) model as a prototype example for two reasons: first, it provides the normal form for a generic system close to a supercritical Hopf bifurcation; second, because of its structure, the Jacobian of the reaction part becomes constant once evaluated on the reference orbit, which simplifies the presentation of the results.
A SL oscillator can be described by a complex amplitude w that evolves in time according to ẇ=σ w-β |w|^2w, where σ=σ_ℜ+iσ_ℑ and β=β_ℜ+iβ_ℑ are complex model parameters. The system admits a limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t, where ω=σ_ℑ-β_ℑσ_ℜ/β_ℜ, that is stable provided σ_ℜ>0 and β_ℜ>0, conditions that we hereby assume.
To proceed in the analysis, we couple together n identical SL oscillators, each described by a complex amplitude w_j, with j=1,...,n, anchored to the nodes of a time-varying hypergraph as prescribed in the previous section, namely
dw_j/dt= σ w_j-β w_j|w_j|^2 + ∑_d=1^D q_d∑_j_1,…,j_d=1^n A_jj_1… j_d^(d)(t)g⃗^(d)(w_j,w_j_1,…,w_j_d) .
For the sake of simplicity, we restrict our analysis to pairwise and three-body interactions, namely D=2 in Eq. (<ref>). We hereby present and discuss the SL synchronization under the diffusive-like coupling hypothesis and by using two different assumptions: regular topology and natural coupling. The case of non-invasive coupling will be presented in Appendix <ref>.
§.§ Diffusive-like and regular topology
Let us thus assume the existence of two functions h^(1)(w) and h^(2)(w_1,w_2) such that g^(1) and g^(2) do satisfy the diffusive-like assumption, namely
[ g^(1)(w_j,w_j_1) = h^(1)(w_j_1)-h^(1)(w_j) and; ; g^(2)(w_j,w_j_1,w_j_2) = h^(2)(w_j_1,w_j_2)-h^(2)(w_j,w_j) . ]
For the sake of definiteness, let us fix
h^(1)(w)=w and h^(2)(w_1,w_2)=w_1w_2 ,
let us observe that the latter functions do not satisfy the condition for natural coupling; indeed, h^(1)(w)=w≠ w^2=h^(2)(w,w).
Let us assume to deal with a regular topology, namely 𝐋^(2)=α_2𝐋^(1). Hence, following Eq. (<ref>), we can define 𝐉_ĥ = q_1 𝐉_h^(1)+q_2 α_2 𝐉_h^(2). Let us perturb the limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t by defining w_j=W_LC(1+ρ_j)e^iθ_j, where ρ_j and θ_j are real and small functions for all j. A straightforward computation allows us to write the time evolution of ρ_j and θ_j as
d/dt(ρ_j, θ_j)^⊤ = ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 )(ρ_j, θ_j)^⊤ + ∑_ℓ L_jℓ^(1)[ ( q_1,ℜ -q_1,ℑ; q_1,ℑ q_1,ℜ ) + 2α_2 √(σ_ℜ/β_ℜ)( cos(ω t) -sin(ω t); sin(ω t) cos(ω t) )( q_2,ℜ -q_2,ℑ; q_2,ℑ q_2,ℜ ) ](ρ_ℓ, θ_ℓ)^⊤ ,
where ω =σ_ℑ-β_ℑσ_ℜ/β_ℜ is the frequency of the limit cycle solution.
By exploiting the eigenvectors ψ^(α)(t) and eigenvalues Λ^(α)(t) of 𝐋^(1)(t) to project the perturbation ρ_j and θ_j we obtain:
d/dt(ρ_β, θ_β)^⊤ = ∑_α b_βα(ρ_α, θ_α)^⊤ + { ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 ) + Λ^(β)[ ( q_1,ℜ -q_1,ℑ; q_1,ℑ q_1,ℜ ) + 2α_2 √(σ_ℜ/β_ℜ)( cos(ω t) -sin(ω t); sin(ω t) cos(ω t) )( q_2,ℜ -q_2,ℑ; q_2,ℑ q_2,ℜ ) ] }(ρ_β, θ_β)^⊤ ,
where the matrix 𝐛 has been defined in Eq. (<ref>).
For the sake of definiteness, and to focus on the impact of the time-varying topology, we hereby consider a simple higher-order network structure composed of n=3 nodes, three links and one triangle. Moreover, the eigenvalues are assumed to be constant, and the time derivatives of the associated eigenvectors, once projected onto the eigenbasis, are assumed to return a constant matrix 𝐛, for a given Ω≥ 0
𝐛=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] .
One can show (see Appendix <ref> and <cit.>) that those assumptions on the hypergraph correspond to two eigenvectors rotating in a plane orthogonal to the constant eigenvector ψ^(1)∼ (1,…,1)^⊤ with frequency Ω>0. The case Ω=0 corresponds thus to a static higher-order network structure.
Under those assumptions, Eq. (<ref>) determines a time periodic linear system whose stability can be determined by using Floquet theory. In order to illustrate our results, we let q_1, and q_2, to freely vary in the range [-5,5], while keeping fixed to generic values the remaining parameters, and we compute the Floquet eigenvalue with the largest real part, corresponding thus to the Master Stability Function (MSF) of Eq. (<ref>), as a function of q_1, and q_2,. The corresponding results are shown in Fig. <ref> for Ω=0 (panel (a)) and Ω = 2 (panel (b)). By a direct inspection, one can clearly conclude that the parameters region associated with a negative MSF (black region), i.e., to the stability of the SL limit cycle and thus to global synchronization, is larger for Ω >0 than for Ω=0.
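For completeness, we sketch below the Floquet computation underlying these MSF evaluations for the two transverse modes of the projected system: the monodromy matrix is obtained by propagating the canonical basis over one period T=2π/ω, and the MSF is the largest Floquet exponent. The parameter values are illustrative placeholders (the text only fixes them at generic nominal values), the couplings are set to the reference values q_1,0=q_2,0=0.1-0.5i used below, the transverse eigenvalues are those of the Appendix hypergraph, and the subscripts ℜ and ℑ denote real and imaginary parts.

import numpy as np
from scipy.integrate import solve_ivp

# illustrative parameter values
sigma_re, sigma_im = 1.0, 4.3
beta_re, beta_im = 1.0, 1.1
q1, q2 = 0.1 - 0.5j, 0.1 - 0.5j          # reference couplings q_{1,0}, q_{2,0}
alpha2, Omega = 2.0, 2.0
Lam = (-1.0, -2.0)                        # constant transverse Laplace eigenvalues

omega = sigma_im - beta_im * sigma_re / beta_re
T = 2.0 * np.pi / omega                   # period of the linearized system

F = np.array([[-2.0 * sigma_re, 0.0],
              [-2.0 * beta_im * sigma_re / beta_re, 0.0]])
Q1 = np.array([[q1.real, -q1.imag], [q1.imag, q1.real]])
Q2 = np.array([[q2.real, -q2.imag], [q2.imag, q2.real]])

def M(t):
    """4x4 coefficient matrix for the transverse modes (rho_2, theta_2, rho_3, theta_3)."""
    c, s = np.cos(omega * t), np.sin(omega * t)
    R = np.array([[c, -s], [s, c]])
    coup = Q1 + 2.0 * alpha2 * np.sqrt(sigma_re / beta_re) * (R @ Q2)
    out = np.zeros((4, 4))
    out[:2, :2] = F + Lam[0] * coup       # block of mode beta = 2
    out[2:, 2:] = F + Lam[1] * coup       # block of mode beta = 3
    out[:2, 2:] = Omega * np.eye(2)       # b_23 = +Omega
    out[2:, :2] = -Omega * np.eye(2)      # b_32 = -Omega
    return out

# monodromy matrix: propagate the canonical basis over one period
Phi = np.column_stack([
    solve_ivp(lambda t, x: M(t) @ x, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
    for e in np.eye(4)])
msf = np.log(np.abs(np.linalg.eigvals(Phi))).max() / T   # largest Floquet exponent
print(msf)   # msf < 0 -> stable global synchronization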
To study the combined effect of both coupling strengths q_1 and q_2, we set q_1=ϵ_1q_1,0 and q_2=ϵ_2q_2,0, and we compute the MSF as a function of ϵ_1 and ϵ_2, having fixed without loss of generality q_1,0=0.1-0.5i and q_2,0=0.1-0.5i. The corresponding results are presented in Fig. <ref> for static (Ω=0, panel (a)) and time-varying (Ω=2, panel (b)) higher-order structure. We can again conclude that the region of parameters corresponding to global synchronization (black region) is larger in the case of time-varying hypergraph than in the static case.
Our last analysis concerns the relation between the frequency Ω and the size of the coupling parameters ϵ_1, ϵ_2, still assuming q_1=ϵ_1q_1,0 and q_2=ϵ_2q_2,0, on the onset of synchronization. In Fig. <ref> we report the MSF in the plane (Ω,ϵ_1) for a fixed value of ϵ_2 (panel (a)), and in the plane (Ω,ϵ_2) for a fixed value of ϵ_1 (panel (b)). Let us observe that, for a fixed Ω, synchronization is achieved more easily the smaller the value of ϵ_j, j=1,2, for which the MSF becomes negative. Let us thus define ϵ̂_1(Ω)=min{ϵ >0 : MSF(ϵ,ϵ_2,Ω)<0}, for fixed ϵ_2, and similarly ϵ̂_2(Ω). The results of Fig. <ref> clearly show that ϵ̂_1(Ω)<ϵ̂_1(0)∼ 3.5 and ϵ̂_2(Ω)<ϵ̂_2(0)∼ 4.2, and thus support our claim that time-varying structures make synchronization easier to achieve.
To support our analysis, we performed numerical simulations of the SL model defined on the simple 3-node time-varying hypergraph. We selected (ϵ_1,ϵ_2)=(2.5,0.5) and the remaining parameter values as in Fig. <ref>. By observing the latter figure, we conclude that for the chosen parameters the MSF is positive if Ω=0 and negative if Ω=2; hence the SL oscillators should globally synchronize on the time-varying hypergraph, while they would not achieve this state in the static case. The results of Fig. <ref> confirm these conclusions; indeed, we can observe that the (real part of the) complex state variable is in phase for all i in the case Ω=2 (right panel), while this is clearly not the case for Ω=0 (left panel).
§.§ Diffusive-like and natural coupling
The aim of this section is to replace the condition of regular topology with the condition of natural coupling, while still considering a diffusive-like coupling. Let us thus consider two functions h^(1)(w) and h^(2)(w_1,w_2) satisfying the natural coupling assumption, namely
h^(1)(w)=h^(2)(w,w) .
For the sake of definiteness, let us fix
h^(1)(w)=w^3 and h^(2)(w_1,w_2)=w_1(w_2)^2 .
Let us again perturb the limit cycle solution w_LC(t)=√(σ_ℜ/β_ℜ)e^iω t by defining w_j=W_LC(1+ρ_j)e^iθ_j, where ρ_j and θ_j are real and small functions for all j. A straightforward computation allows us to write the time evolution of ρ_j and θ_j as
d/dt(ρ_j, θ_j)^⊤ = ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 )(ρ_j, θ_j)^⊤ + 3σ_ℜ/β_ℜ ∑_ℓ M_jℓ( cos(2ω t) -sin(2ω t); sin(2ω t) cos(2ω t) )(ρ_ℓ, θ_ℓ)^⊤ ,
where ω =σ_ℑ-β_ℑσ_ℜ/β_ℜ is the frequency of the limit cycle solution and 𝐌 is the matrix q_1 𝐋^(1)(t)+q_2 𝐋^(2)(t) (see Eq. (<ref>)). Let us observe that in this case the coupling parameters q_1 and q_2 should be real numbers if we want to deal with real Laplace matrices, a hypothesis that we hereby assume to hold true.
By invoking the eigenvectors ϕ^(α)(t) and eigenvalues μ^(α)(t) of 𝐌(t), and the matrix 𝐜 (see Eq. (<ref>)), we can project the perturbation ρ_j and θ_j on the eigenbasis and thus rewrite the time variation of the perturbation as follows
d/dt(ρ_β, θ_β)^⊤ = ∑_α c_βα(ρ_α, θ_α)^⊤ + [ ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 ) + 3σ_ℜ/β_ℜ μ^(β)( cos(2ω t) -sin(2ω t); sin(2ω t) cos(2ω t) ) ](ρ_β, θ_β)^⊤ .
Let us assume again to deal with a hypergraph made of 3 nodes and consider a time-independent matrix 𝐜
𝐜=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] ,
for some Ω≥ 0. The eigenvalue μ^(1)=0 of 𝐌 determines the dynamics parallel to the synchronous manifold. On the other hand, the equations obtained for μ^(2) and μ^(3) give the dynamics of the modes transverse to the synchronization manifold. Hence the MSF can be obtained by solving the latter equations, providing the conditions for a globally stable synchronous solution to exist. In Fig. <ref>, we show the level sets of the MSF as a function of the eigenvalues μ^(2) and μ^(3), while keeping the remaining parameters in Eq. (<ref>) fixed at generic nominal values. In panel (a) we consider a static hypergraph, i.e., Ω=0, while in panel (b) a time-varying one, i.e., Ω=2; negative values of the MSF are reported in black and thus correspond to a globally synchronous state, while positive values of the MSF are shown in yellow. One can clearly appreciate that in the case of the time-varying hypergraph the MSF is negative for a much larger set of eigenvalues μ^(2) and μ^(3), and thus the SL system can synchronize more easily.
§ SYNCHRONIZATION OF LORENZ SYSTEMS NONLINEARLY COUPLED VIA TIME-VARYING HIGHER-ORDER NETWORKS
The aim of this section is to show that our results hold true beyond the example of the dynamical system considered above, i.e., the Stuart-Landau oscillator. We thus present an application to the synchronization of chaotic systems on a time-varying higher-order network. For the sake of definiteness, we use the paradigmatic chaotic Lorenz model for the evolution of the individual nonlinear oscillators.
We consider again the scenario of regular topology with the toy-model hypergraph composed of n=3 nodes described previously; the whole system can thus be described by
ẋ_i =a_1(y_i-x_i)+ϵ_2∑_j=1^N∑_k=1^NA^(2)_ijk(x_j^2x_k-x_i^3)
ẏ_i =x_i(a_3-z_i)-y_i+ϵ_1∑_j=1^NA^(1)_ij(y_j-y_i)
ż_i =x_iy_i-a_2z_i ,
where the system parameters are kept fixed at a_1=10, a_2=8/3, a_3=28, for which each isolated node exhibits a chaotic trajectory. The pairwise and higher-order structures are related to each other by 𝐋^(2)=α_2𝐋^(1). We assume the eigenvalues of the Laplacian 𝐋^(1) to be constant and the matrix 𝐛 to be given by
𝐛=[ 0 0 0; 0 0 Ω; 0 -Ω 0 ] for some Ω≥ 0.
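Before turning to the stability analysis, let us note that the coupled system above can also be integrated directly by using the explicit time-varying weights of the 3-node hypergraph derived in Appendix <ref>; a minimal sketch is reported below, with the coupling strengths and Ω set to illustrative values taken from the figures, and the maximum pairwise mismatch of the x_i components used as a synchronization error.

import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3 = 10.0, 8.0 / 3.0, 28.0        # Lorenz parameters as in the text
eps1, eps2, Omega = 0.2, 0.01, 3.0        # coupling strengths and Omega (illustrative)

def A1(t):
    """Time-varying link weights of the 3-node hypergraph (see Appendix)."""
    a12 = 0.5 - np.cos(np.pi / 3.0 + 2.0 * Omega * t) / 3.0
    a13 = 0.5 + np.cos(2.0 * Omega * t) / 3.0
    a23 = 0.5 - np.cos(np.pi / 3.0 - 2.0 * Omega * t) / 3.0
    return np.array([[0.0, a12, a13], [a12, 0.0, a23], [a13, a23, 0.0]])

def A2_123(t):
    """Weight of the single (fully symmetric) 3-hyperedge."""
    return 1.0 - 2.0 / 3.0 * np.cos(np.pi / 3.0 + 2.0 * Omega * t)

def rhs(t, u):
    x, y, z = u.reshape(3, 3)             # rows hold (x_i), (y_i), (z_i)
    W, w123 = A1(t), A2_123(t)
    dx, dy, dz = np.empty(3), np.empty(3), np.empty(3)
    others = [(1, 2), (0, 2), (0, 1)]
    for i in range(3):
        j, k = others[i]
        triad = w123 * (x[j] ** 2 * x[k] + x[k] ** 2 * x[j] - 2.0 * x[i] ** 3)
        dx[i] = a1 * (y[i] - x[i]) + eps2 * triad
        dy[i] = x[i] * (a3 - z[i]) - y[i] + eps1 * (W[i] @ y - W[i].sum() * y[i])
        dz[i] = x[i] * y[i] - a2 * z[i]
    return np.concatenate([dx, dy, dz])

u0 = np.array([1.0, 1.1, 0.9, 0.0, 0.1, -0.1, 20.0, 20.5, 19.5])
sol = solve_ivp(rhs, (0.0, 50.0), u0, rtol=1e-9, atol=1e-9)
x = sol.y[:3]
sync_error = np.abs(x - x.mean(axis=0)).max(axis=0)
print(sync_error[-1])   # decays towards zero whenever the MSF is negative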
Let us thus select as reference solution s⃗(t) a chaotic orbit of the isolated Lorenz model and consider, as done previously, the time evolution of a perturbation about such trajectory. Computations similar to those reported above allow us to obtain a linear non-autonomous system ruling the evolution of the perturbation, whose stability can be numerically inferred by computing the largest Lyapunov exponent, i.e., the MSF. We first considered the impact of the coupling strengths ϵ_1 and ϵ_2 on synchronization; the results are reported in Fig. <ref>, where we present the level sets of the MSF as a function of the above parameters by using a color code: black dots refer to a negative MSF while yellow dots to a positive one. Panel (a) refers to a static hypergraph, i.e., Ω=0, while panel (b) to a time-varying one, i.e., Ω=3. One can thus appreciate that the latter setting yields a negative MSF for a larger range of the parameters ϵ_1 and ϵ_2, and hence we can conclude that time-varying hypergraphs enhance synchronization also in the case of chaotic oscillators.
We conclude this analysis by studying again the relation between the frequency Ω and the size of the coupling parameters ϵ_1, ϵ_2 on the onset of synchronization. In Fig. <ref> we show the MSF in the plane (Ω,ϵ_1) for a fixed value of ϵ_2=0.01 (panel (a)), and in the plane (Ω,ϵ_2) for a fixed value of ϵ_1=0.2 (panel (b)). By using again ϵ̂_1(Ω)=min{ϵ >0: MSF(ϵ,ϵ_2,Ω)<0}, for fixed ϵ_2, and similarly ϵ̂_2(Ω), we can conclude that ϵ̂_1(Ω)<ϵ̂_1(0)∼ 1.4 and ϵ̂_2(Ω)<ϵ̂_2(0)∼ 0.04, thus again supporting our claim that time-varying structures make synchronization easier to achieve.
§ CONCLUSIONS
To sum up, we have introduced and studied a generalized framework for the emergence of global synchronization on time-varying higher-order networks and developed a theory for its stability, without imposing strong restrictions on the functional time evolution of the higher-order structure. We have demonstrated that the latter can be examined by extending the Master Stability Function technique to the novel framework for specific cases, based either on the inter-node coupling scheme or on the topology of the higher-order structure. Our findings reveal that the time variation of the higher-order network is encoded in a skew-symmetric, time-dependent matrix derived from the evolution of the eigenvectors of the higher-order Laplacian. Additionally, the eigenvalues associated with these eigenvectors can also vary over time and contribute to shaping the evolution of the perturbation. We have validated the proposed theory on time-varying hypergraphs of coupled Stuart-Landau oscillators and chaotic Lorenz systems, and the results obtained indicate that incorporating temporal aspects into group interactions can facilitate synchronization in higher-order networks compared to static ones.
The framework and concepts presented in this study create opportunities for future research on the impact of temporality in systems where time-varying group interactions have been observed but not yet thoroughly explored, due to the absence of a suitable mathematical setting. Importantly, the fact that our theory does not require any restriction on the time evolution of the underlying structure could make it applicable to a diverse range of problems beyond synchronization.
§ NON-INVASIVE COUPLINGS
Here we discuss the results corresponding to a slightly more general hypothesis on g⃗^(d), namely that it is non-invasive, i.e.,
g⃗^(d)(s⃗,…,s⃗)=0 ∀ d=1,…,D ,
whose goal is again to guarantee that the coupling term in Eq. (<ref>) vanishes once evaluated on the orbit (s⃗(t),…,s⃗(t))^⊤. Indeed by using again x⃗_i=s⃗+δx⃗_i and expanding Eq. (<ref>) up to the first order we get
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) [ ∂g⃗^(d)/∂x⃗_i|_(s⃗,…,s⃗)δx⃗_i+∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)δx⃗_j_1+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)δx⃗_j_d] ;
from Eq. (<ref>) we can obtain
∂g⃗^(d)/∂x⃗_i|_(s⃗,…,s⃗)+∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)=0 ,
and thus rewrite (<ref>) as follows
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d∑_j_1,…,j_d=1^n B_ij_1… j_d(t) [ ∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)(δx⃗_j_1-δx⃗_i)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)(δx⃗_j_d-δx⃗_i)] .
Recalling the definition of k^(d)_ij given in Eq. (<ref>) we get
δẋ⃗̇_i = 𝐉_fδx⃗_i+∑_d=1^D q_d (d-1)![∑_j_1=1^n k^(d)_ij_1(t) ∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)(δx⃗_j_1-δx⃗_i)+ …+ ∑_j_d=1^n k^(d)_ij_d(t) ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)(δx⃗_j_d-δx⃗_i)] .
By using the definition of the higher-order Laplace matrix (<ref>) we eventually obtain
δẋ⃗̇_i = 𝐉_fδx⃗_i-∑_d=1^D q_d∑_j=1^n L^(d)_ij(t) [∂g⃗^(d)/∂x⃗_j_1|_(s⃗,…,s⃗)+ …+ ∂g⃗^(d)/∂x⃗_j_d|_(s⃗,…,s⃗)]δx⃗_j .
Let us now consider a particular case of non-invasive function: we assume there exists a function φ⃗:ℝ^m→ℝ^m such that φ⃗(0)=0, and define
g^(d)(x⃗_i,x⃗_j_1,…,x⃗_j_d)=∑_ℓ=1^dφ⃗(x⃗_i-x⃗_j_ℓ) ,
then
∂g⃗^(d)/∂x⃗_j_ℓ = -𝐉_φ(0) ,
where 𝐉_φ(0) is the Jacobian of the function φ⃗ evaluated at 0. In conclusion, Eq. (<ref>) rewrites as follows
δẋ⃗̇_i = 𝐉_fδx⃗_i-∑_d=1^D q_d∑_j=1^n L^(d)_ij(t) (-d)𝐉_φ(0) δx⃗_j=𝐉_fδx⃗_i+∑_j=1^n G_ij(t)𝐉_φ(0) δx⃗_j ,
where 𝐆(t)=∑_d=1^Dd q_d𝐋^(d)(t) can be considered as an effective time-varying simplicial complex or hypergraph.
Let us now observe that the effective matrix 𝐆(t) is a Laplace matrix; it is non-positive definite (as is each of the matrices 𝐋^(d)(t) for any d=1,…, D and any t>0), it admits μ^(1)=0 as eigenvalue associated with the eigenvector ϕ^(1)=(1,…,1)^⊤ and it is symmetric. Hence there exists an orthonormal time-varying eigenbasis, ϕ^(α)(t), α=1,…,n, of 𝐆(t) with associated eigenvalues μ^(α)≤ 0. Similarly to before, we define the n× n time dependent matrix 𝐜(t) that quantifies the projections of the time derivatives of the eigenvectors onto the independent eigendirections, namely
d ϕ⃗^(α)/dt(t)=∑_βc_αβ(t)ϕ⃗^(β)(t) ∀α=1,…, n .
By recalling the orthonormality condition (ϕ⃗^(α)(t))^⊤·ϕ⃗^(β)(t)=δ_αβ we can again straightforwardly conclude that 𝐜 is a real skew-symmetric matrix with a null first row and first column, i.e., c_αβ+c_βα=0 and c_1α=0.
Thereafter, we consider Eq. (<ref>), and we project it onto the eigendirections, namely we introduce δx⃗_i=∑_αδx̂⃗̂_αϕ^(α)_i and recalling the definition of 𝐜 we obtain
dδx̂⃗̂_β/dt = ∑_α c_βα(t)δx̂⃗̂_α+[𝐉_f+ μ^(β)(t)𝐉_φ(0)]δx̂⃗̂_β .
This is the required Master Stability Equation; solving it, i.e., computing the associated maximum Lyapunov exponent, provides the condition for the stability of the synchronous solution.
§.§ Synchronization of Stuart-Landau oscillators with non-invasive coupling assumption
To validate the above results, we again consider the SL oscillator with a particular case of non-invasive coupling function; namely, we assume there exists a real function φ such that φ(0)=0, φ^'(0)≠ 0 and
[ g^(1)(w_1,w_2)=φ(w_1-w_2) , and; g^(2)(w_1,w_2,w_3)=φ(w_1-w_2)+φ(w_1-w_3) . ]
By reasoning as before, we get
d/dt(ρ_j, θ_j)^⊤ = ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 )(ρ_j, θ_j)^⊤ + φ^'(0)∑_ℓ(q_1 L^(1)_jℓ + q_2 L^(2)_jℓ)( 1 0; 0 -1 )(ρ_ℓ, θ_ℓ)^⊤ .
By using again the eigenvectors ϕ^(α)(t), eigenvalues μ^(α)(t) of 𝐆(t) and the matrix 𝐜 (see Eq. (<ref>)), we can rewrite the previous formula as
d/dt(ρ_β, θ_β)^⊤ = ∑_α c_βα(ρ_α, θ_α)^⊤ + [ ( -2σ_ℜ 0; -2β_ℑσ_ℜ/β_ℜ 0 ) + φ^'(0)μ^(β)( 1 0; 0 -1 ) ](ρ_β, θ_β)^⊤ .
Figure <ref> presents the results under the non-invasive coupling assumption. Here, we consider a non-invasive function with φ^'(0)=1, and the skew-symmetric projection matrix 𝐜 is kept constant throughout the analysis, as before. We show the level sets of the MSF as a function of the eigenvalues μ^(2) and μ^(3), while keeping the remaining parameters in Eq. (<ref>) fixed at generic nominal values. In panel (a) we consider a static hypergraph, i.e., Ω=0, while in panel (b) a time-varying one, i.e., Ω=2; negative values of the MSF are reported in black and correspond to a globally synchronous state, while positive values of the MSF are shown in yellow. One can clearly appreciate that in the case of the time-varying hypergraph the MSF is negative for a much larger set of eigenvalues μ^(2) and μ^(3), and thus the SL system can achieve synchronization more easily.
§ STRUCTURE OF THE SMALL HYPERGRAPH
The goal of this section is to provide more details about the construction of the simple time-varying hypergraph used to support the numerical simulations in the main text. To start with, we need the time evolution of the eigenvectors ψ⃗^(α)(t), which follows the equation
[ dψ⃗^(α)/dt=∑_β b_αβψ⃗^(β) , ]
where the matrix 𝐛 has been given in Eq. (<ref>). The eigenvector associated with the least eigenvalue Λ^(1)=0 is constant and is given by ψ⃗^(1)=1/√(3)(1,1,1)^⊤. The other two eigenvectors are obtained by solving the previous equation and can be represented as ψ⃗^(2)(t)=v⃗_1cos(Ω t)+v⃗_2sin(Ω t) and ψ⃗^(3)(t)=-v⃗_1sin(Ω t)+v⃗_2cos(Ω t), where v⃗_1 and v⃗_2 are unknown vectors to be determined by imposing that the eigenbasis is orthonormal for every t.
After a few steps of calculation, we obtain the other two eigenvectors as follows
[ ψ⃗^(2)(t)=1√(6)[ 1; -2; 1 ]cos(Ω t)+1√(2)[ -1; 0; 1 ]sin(Ω t) ,; ψ⃗^(3)(t)=-1√(6)[ 1; -2; 1 ]sin(Ω t)+1√(2)[ -1; 0; 1 ]cos(Ω t). ]
Now recalling our assumption about constant eigenvalues and using the relation 𝐋^(1)_ij(t)=∑_αΛ^(α)ψ⃗^(α)_i(t)ψ⃗^(α)_j(t), we can obtain the entries of the pairwise Laplace matrix as
[ L^(1)_ij(t)=Λ^(2)ψ⃗^(2)_i(t)ψ⃗^(2)_j(t)+Λ^(3)ψ⃗^(3)_i(t)ψ⃗^(3)_j(t), ]
where we use the fact that Λ^(1)=0 for all times t. Finally, by using the relation between the pairwise adjacency and Laplace matrices, L^(1)_ij(t)=A^(1)_ij(t) for i≠ j, we obtain the temporal evolution of the links as
[ A^(1)_12(t)=1/2-(1/3)cos(π/3+2Ω t) ,; ; A^(1)_13(t)= 1/2+(1/3)cos(2Ω t) ,; ; A^(1)_23(t)=1/2-(1/3)cos(π/3-2Ω t) , ]
where we have used the fact that the non-zero eigenvalues are given by Λ^(2)=-1 and Λ^(3)=-2.
Again from the regular structure of the hypergraph, we have 𝐋^(2)(t)=α_2𝐋^(1)(t), for all t. Therefore, following the relation (<ref>), entries of the 2nd-order Laplacian 𝐋^(2) can be represented as,
[ L^(2)_ij(t)=α_2[Λ^(2)ψ⃗^(2)_i(t)ψ⃗^(2)_j(t)+Λ^(3)ψ⃗^(3)_i(t)ψ⃗^(3)_j(t)]. ]
Now, the definition of the higher-order Laplacian implies that L^(2)_ij(t)=∑_kA^(2)_ijk(t) for i≠ j. Hence, using the above relation and Eq. (<ref>), we can obtain the temporal evolution of the 3-hyperedge as
[ A^(2)_123(t)=1-(2/3)cos(π/3+2Ω t) , ]
where we have again used the fact that the non-zero eigenvalues are Λ^(2)=-1 and Λ^(3)=-2, and the value of the parameter α_2 has been set to α_2=2. Due to the assumption of an undirected hypergraph, we also trivially have A^(2)_123(t)=A^(2)_π(123)(t), where π(123) indicates any permutation of (123). Fig. <ref> portrays the temporal evolution of the link and 3-hyperedge weights. To better understand the evolution of the hypergraph, we provide its graphical evolution in the accompanying Supplementary Movie, together with the time evolution of the weights of the links A^(1)_ij(t) and of the hyperedge A^(2)_123(t).
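The whole construction can also be verified numerically; the following minimal sketch reconstructs 𝐋^(1)(t) from its constant spectrum and checks the closed-form expressions for the link and hyperedge weights.

import numpy as np

Lam2, Lam3, alpha2, Omega = -1.0, -2.0, 2.0, 2.0
v1 = np.array([1.0, -2.0, 1.0]) / np.sqrt(6.0)
v2 = np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0)

def psi2(t): return v1 * np.cos(Omega * t) + v2 * np.sin(Omega * t)
def psi3(t): return -v1 * np.sin(Omega * t) + v2 * np.cos(Omega * t)

def L1(t):
    """Pairwise Laplacian reconstructed from its constant spectrum."""
    p2, p3 = psi2(t), psi3(t)
    return Lam2 * np.outer(p2, p2) + Lam3 * np.outer(p3, p3)

t = 0.7
L = L1(t)
a12 = 0.5 - np.cos(np.pi / 3.0 + 2.0 * Omega * t) / 3.0
a13 = 0.5 + np.cos(2.0 * Omega * t) / 3.0
a23 = 0.5 - np.cos(np.pi / 3.0 - 2.0 * Omega * t) / 3.0
assert np.allclose([L[0, 1], L[0, 2], L[1, 2]], [a12, a13, a23])   # link weights
A2_123 = alpha2 * L[0, 1]                                          # from L^(2) = alpha_2 L^(1)
assert np.isclose(A2_123, 1.0 - 2.0 / 3.0 * np.cos(np.pi / 3.0 + 2.0 * Omega * t))
print("closed-form weights verified at t =", t)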
|
http://arxiv.org/abs/2307.04781v1 | 20230710121715 | Demonstrations of the Potential of AI-based Political Issue Polling | [
"Nathan E. Sanders",
"Alex Ulinich",
"Bruce Schneier"
] | cs.CY | [
"cs.CY"
] |
Demonstrations of the Potential of AI-based Political Issue Polling
August 12, 2023
===================================================================
Nathan E. Sanders,*, Alex Ulinich, Bruce Schneier
Berkman Klein Center, Harvard University, 23 Everett St #2, Cambridge, Massachusetts, 02138
Mountain View High School, 3535 Truman Avenue, Mountain View, CA 94040
Harvard Kennedy School, 79 JFK Street, Cambridge, Massachusetts USA 02138
*[email protected]
Political polling is a multi-billion dollar industry with outsized influence on the societal trajectory of the United States and nations around the world.
However, in recent years it has been severely challenged by rising nonresponse rates and other factors that stress its cost, availability, and accuracy.
At the same time, artificial intelligence (AI) chatbots such as ChatGPT have become highly compelling stand-ins for a wide range of human behavior, powered by increasingly sophisticated large language models (LLMs).
Because these LLMs are trained on huge corpora of writing by diverse people captured from across the Internet, they are potentially capable of representing a wide range of beliefs on many policy issues.
Could AI chatbots be an effective tool for anticipating public opinion on controversial issues to the extent that they could be used by campaigns, interest groups, and polling firms?
We have developed a prompt engineering methodology for eliciting human-like survey responses from ChatGPT, which simulate the response to a policy question of a person described by a set of demographic factors, and produce both an ordinal numeric response score and a textual justification.
We execute large scale experiments using this method, querying GPT for thousands of simulated responses at a cost more than three orders of magnitude lower than human surveys.
We compare this simulated data to human issue polling data from the Cooperative Election Study (CES).
We find that ChatGPT is effective at anticipating both the mean level and distribution of public opinion on a variety of policy issues such as abortion bans and approval of the US Supreme Court, particularly in their breakdown along partisan lines (correlation typically >85%).
However, it is much less successful at anticipating demographic (age, race, and gender) differences between respondents.
Moreover, ChatGPT tends to overgeneralize its conception of ideological differences to new policy issues that arose after its training data was collected, such as American support for involvement in the war in Ukraine.
Our work has implications for our understanding of the strengths and limitations of the current generation of AI chatbots as virtual publics or online listening platforms, future directions for LLM development, and applications of AI tools to the political domain.
§ INTRODUCTION
While survey experiments and polling have been powerful tools for political campaigns, parties, and advocacy organizations in the US and around the world for centuries <cit.>, in recent years the cost and difficulty of operating polls has grown dramatically.
Political polling firms commonly recruit panels intended to be representative of, and to achieve high coverage of, their targeted population, such as eligible voters nationally or likely voters in a voting district.
Reaching these populations has become harder primarily because of the growth in survey nonresponse internationally: the failure to contact or refusal of potential participants to be surveyed due to factors such as lack of time, disinterest, and distrust <cit.>.
Moreover, the migration of respondents to new technologies such as cell phones and the Internet, which have uneven and evolving penetration and usage across regions and demographic groups, has constrained the coverage of survey samples .
These effects have generated simultaneous challenges for the quality and cost of political polling, as biases in political engagement and hyper-polarization manifest on response rates <cit.>.
A vast literature has developed on statistical methodologies for designing and postprocessing survey data to overcome these challenges, including methods such as demographic weighting and poststratification <cit.>.
In particular, pollsters have explored methodologies that enable meaningful public opinion research from digital platforms such as Facebook and other social media platforms, where traditional techniques of probability sampling cannot be applied because of the lack of a conventional sampling frame and researcher-controlled contact mechanism .
These various methodologies seem to have been successful at maintaining the predictive accuracy of election polling thus far, even as nonresponse has proliferated <cit.>, and yet there is widespread interest in finding transformative new models for measuring public opinion that could lead to more cost-effective, sustainable. and more reliable polling results <cit.>.
As statistical methodologies have come to play a critical role in collecting, processing, and interpreting political polling data, machine learning (ML) and artificial intelligence (AI) systems may further revolutionize this domain.
In particular, large language models (LLMs) such as ChatGPT, which can be incorporated into AI chatbots and other systems capable of providing human-like responses to natural language prompts, have a wide variety of potential applications in democratic processes, such as assisting lobbying firms <cit.>, helping citizens and stakeholders to formulate and advocate for their opinions <cit.>, facilitating connections between candidates and voters <cit.>, and even helping humans social engineer or hack political systems <cit.>.
Already, researchers have experimented with a variety of social science research and public polling applications of LLMs, such as coding open-ended survey responses <cit.>, inferring the ideology of a politician <cit.>, simulating economic behavior <cit.>, and simulating election results <cit.>.
Because they are trained on wide Internet corpora including opinion writing from a diverse range of people, LLMs have a compelling ability to represent different perspectives and to perform a wide range of tasks without specialized training <cit.>.
We therefore hypothesize that they may be effective at generating individualized responses to policy preference questions that can account for the same factors that influence human respondents, such as demographics.
However, the nature of LLMs limits their potential effectiveness as opinion sampling tools.
Like platforms such as social media, AI chatbots do not have well defined sample frames or well understood coverage characteristics.
Moreover, unlike true survey platforms, using LLMs does not actually involve any solicitation of opinion from an authentic human individual.
Instead, LLMs generate a response predicted to be most acceptable to the user on the basis of a training process such as reinforcement learning with human feedback , which may therefore reflect the incomplete, biased, or even stereotyping properties of its training dataset.
Some specific biases of Internet corpora-trained LLMs are coming in to focus.
One study attempted to assess the age and gender characteristics of ChatGPT by prompting it to express a demographic profile, finding that its responses are biased towards a young (<30 years old) and female profile .
Other investigators identified that an earlier model, GPT-2, is biased in its representation of the opinions of people from nations underrepresented in Internet usage .
Regardless of their ability to reflect the perspectives of a given demographic group, AI models may also exhibit bias in the text they generate; for example, in an analysis of the BERT model, researchers found that neural embeddings learn harmful stereotypes about persons with disabilities .
In this work, we seek to test the capability of current generation AI tools to accurately reflect distributions of public opinion, and to gain insight into their effective sociodemographic coverage as a polling instrument, using a generally available LLM and real public opinion survey questionnaires.
We have developed experimental methods (<ref>) to prompt the AI chatbot ChatGPT to generate public polling-like responses such that it can simulate a survey panel.
We test the model's ability to reflect the shift in valence between demographic groups across a variety of issues, as well as reasonably reproduce the key arguments appealed to by each demographic (<ref>).
We provide an interpretation of this capability in the context of prior Internet-assisted approaches to public opinion research, discuss the limitations of this approach and the current generation of tools, and the implications these capabilities may have as they improve (<ref>), before concluding (<ref>).
§ METHODS
We explore the viability of AI language models to simulate public opinion polling responses by developing a system that automates querying an LLM based on the questionnaire of a survey previously given to people, so that the resulting AI responses are aligned and comparable to human data.[We will publish the code associated with this work at the time the article is accepted.]
§.§ Large Language Model
We use the OpenAI Chat Completion API endpoint, through OpenAI's openai python library,[<https://github.com/openai/openai-python>] to query the gpt-3.5-turbo-0301 LLM for polling responses.
This model was the most recent model from OpenAI optimized for chat applications and made generally available as of April 2023; it is trained on data samples written as late as September 2021.[See <https://platform.openai.com/docs/models/gpt-3-5>]
We generate a balanced sample of n=20 responses per prompt per demographic cross-tab per issue across ideology (in five bins) and three demographic fields with simple categorizations (age in four bins, “man” or “woman” gender, and “white” or “non-white” race), for a total of 1,600 responses across each of seven issue prompts (see Table <ref>) for 11,200 total responses.
Note that this balanced sample does not, therefore, represent any particular target population such as US adults, as our focus is on understanding the performance of LLM's in representing the viewpoints within and across distinct demographic groups.
Because LLMs offer the opportunity to generate data for arbitrary sub-populations at arbitrary sizes, the process to generate a sample representative of a population with defined demographic characteristics is trivial, if the model is successful at accurately reproducing the views of each demographic group.
Regarding our selected demographic classes, we acknowledge that binary categorizations for gender and race are reductive and far from representative of the full spectrum of human gender and racial identity.
Our reason for focusing on these broad classes is to enable initial statistical comparisons with demographic groups well sampled in the CES dataset.
Future work should further explore the representation of AI generated responses associated with nonbinary gender and more diverse racial identities.
These queries were executed at a cost of about $3 USD through the OpenAI API, whereas an online survey of 10,000+ responses on a human population would cost at least 1,000 times that much.
LLMs can be sensitive to the way questions are phrased and what information is provided to prime them before answering a question.
We arrived at a prompt suitable for simulating public polling responses aligned to an established survey questionnaire through several iterations of trial and error in prompt engineering. We used the following prompt template when querying the LLM,
Please write a 1 paragraph letter to the editor from the perspective of a {gender} in the age range of {age} years who identifies as {white} expressing a clear point of view on the policy proposal to: “{issue}”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a {cardinality}-point scale, where 1 represents the position “{low_level}” and {cardinality} represents the position “{high_level}”.
where {gender}, {age}, and {white} are demographic features; {issue} represents the question text from a survey given to humans (<ref>); {cardinality} is the maximum value of the numeric response scale; and {low_level} and {high_level} are descriptions of the bottom and top end of the response scale as defined in the polling questionnaire. The prompt component describing the “Position score:” successfully formats the output so that an ordinal numeric response value can be extracted from the plaintext completion with a simple regular expression. Additionally, we extract the textual descriptors of the top and bottom options on the original scale from the survey questionnaire to align the LLM outputs to the scale the human respondents used.
The prompt template defined above evolved significantly over the course of our experimentation.
Initially, we did not include a “Position score” requirement in the prompt.
We first tested the model's ability to generate realistic-seeming textual arguments in response to policy issue questions, from various demographically-aligned points of view.
Having initially vetted this capability, we then added a brief instruction to the prompt to assign a score on a 1-5 rating and verified that the generated ratings generally agreed with the textual letter generated by the model.
However, we identified two further challenges: 1) the generated position score would be formatted inconsistently and was difficult to extract from the generated text without manual review and, 2) the model would sometimes flip the polarity of the scoring scale, such that a given position would be variously represented as a score of 1 or 5.
To address issue 1, we added far more explicit formatting instructions (“Before the letter, summarize their position with...”), which succeeded at enforcing a formatting for the score that could be reliably extracted.
To address issue 2, we added explicit definitions to the template of the low and high position levels.
In addition to ensuring a consistent directionality of the position score, this instruction has the added benefit of aligning the model-generated score to the rating scale used in the human survey.
We use the default OpenAI system prompt of, “You are a helpful assistant”.
The LLM is not prompted sequentially; rather, each query is executed as an individual user prompt immediately following the system prompt.
Importantly, this allows us to simulate semi-independent poll responses, where the LLMs are probabilistically generating various isolated responses similar to sampling a human population.[
In contrast, a methodology that queried a system like ChatGPT with sequential user prompts would entail state evolution that biases subsequent responses, as iterative prompting is known to change the responses of LLM-based dialog systems, see e.g. .
]
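A minimal sketch of a single query is reported below, using the pre-1.0 interface of the openai python library that was current for gpt-3.5-turbo-0301. As in the example exchange that follows, the ideology qualifier is folded into the {gender} slot of the template, and a regular expression extracts the ordinal score; the snippet is illustrative rather than a verbatim excerpt of our pipeline.

import re
import openai

openai.api_key = "sk-..."   # supplied via an environment variable in practice

TEMPLATE = (
    "Please write a 1 paragraph letter to the editor from the perspective of a "
    "{gender} in the age range of {age} years who identifies as {white} expressing a "
    "clear point of view on the policy proposal to: “{issue}”. Before the letter, "
    "summarize their position with a “Position score:” statement followed by a single "
    "number (strictly numeric, with no other description) representing the person's "
    "position on the issue on a {cardinality}-point scale, where 1 represents the "
    "position “{low_level}” and {cardinality} represents the position “{high_level}”."
)

def simulate_response(**slots):
    """Issue one chat completion and return the extracted ordinal score and the letter."""
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "system", "content": "You are a helpful assistant"},
                  {"role": "user", "content": TEMPLATE.format(**slots)}],
    )
    text = completion["choices"][0]["message"]["content"]
    match = re.search(r"Position score:\s*(\d+)", text)
    return (int(match.group(1)) if match else None), text

score, letter = simulate_response(
    gender="politically Liberal Man", age="(16, 30]", white="non-white",
    issue="On the topic of abortion, do you support or oppose a policy to make "
          "abortions illegal in all circumstances?",
    cardinality=2, low_level="support", high_level="oppose")
print(score)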
Figure <ref> illustrates an example survey response distribution from this method.
A typical exchange looks like,
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30][Note that we provide the age range in interval notation reflecting bins from the CES data; the GPT model demonstrates through its completions that it interprets the interval notation accurately.] years who identifies as non-white expressing a clear point of view on the policy proposal to: “Improve background checks to give authorities time to check the juvenile and mental health records of any prospective gun buyer under the age of 21”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”.
Response: Position score: 1
As a young, politically liberal man who identifies as non-white, I fully support the proposal to improve background checks for prospective gun buyers under the age of 21. It is unacceptable that individuals with a history of mental illness or juvenile offenses can easily acquire firearms. This new policy would ensure that authorities have enough time to properly check a person's background before they are allowed to purchase a gun. We have seen far too many tragic shootings in recent years, and it is time that we take meaningful action to prevent future tragedies. By implementing stricter background checks, we can protect ourselves and our loved ones from senseless violence.
§.§ Human Polling Data
As a human polling comparison for our AI-generated responses, we use the 2022 preliminary data release of the Cooperative Election Study (CES) <cit.>.
The CES is an annual online survey of ∼60,000 nationally representative US respondents administered by YouGov.
The full CES Common Content dataset consists of nearly 700 demographic, voting, and issue response variables, covering a wide range of policy- and politics-relevant factors and questions.
We selected policy issue polling questions from the CES dataset on the basis of their ability to test the LLM's ability to represent distinctive demographic groups.
In particular, we looked for questions that are fairly strongly correlated with demographic factors such as age and gender, yet relatively poorly correlated with ideological factors.
In particular, we selected questions on the basis of the empirical correlation calculated between the question-specific ordinal response and the respondent-specific political affiliation in the CES data.
Because of the high degree of partisan polarization in the US political system for so many issue, these questions provide a better test of the demographic response simulation abilities of the LLM than would more ideologically driven questions.
We make some manipulations to the survey data to accommodate generation of equivalent LLM completions. In particular, we constrain policy issue responses to an ordinal scale by removing categories such as “Not sure” (and dropping any associated responses) and replace multi-selection responses “selected” and “not selected” with “strongly agree” and “strongly disagree,” respectively. We also coarsely bin (aggregate) the age demographic variable (which is provided as a birth year integer in the raw dataset).
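These recoding steps can be sketched with pandas as follows. The file name, the birth-year column ('birthyr'), and the issue column labels below are assumptions for illustration and should be checked against the CES codebook; the short issue labels from Table <ref> stand in for the corresponding CES item names.

import pandas as pd

ces = pd.read_csv("CES22_Common.csv")      # hypothetical file name for the 2022 release

# coarsely bin respondent age (survey fielded in 2022) into the brackets used for GPT
age = 2022 - ces["birthyr"]                # 'birthyr' assumed per the CES codebook
ces["age_bin"] = pd.cut(age, bins=[16, 30, 45, 60, 120])

# constrain an ordinal item to its numeric scale by dropping "Not sure" responses
ces = ces[ces["scotus_approval"] != "Not sure"]

# recode a multi-selection item onto the two-point agreement scale
ces["gun_background_checks"] = ces["gun_background_checks"].map(
    {"selected": "strongly agree", "not selected": "strongly disagree"})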
§ RESULTS
We systematically compare the AI-generated and human respondent issue polling data across the seven queried issues, ideology, and three demographics to understand the quality of the AI-driven approach through its correspondence to a human population.
Figure <ref> illustrates an example of this demographic level comparison for the police_safety question.
This figure demonstrates the general level of correspondence between CES and GPT-generated survey data at the finest granularity of our demographic groups for one question.
The two datasets exhibit a similar pattern of increasing safety reported from the liberal (top of figure) to conservative (bottom) ends of the spectrum.
However, some trends present in the CES data are not reproduced in the GPT results; for example, the significant, age-mediated variation across demographic subgroups among `Very liberal' CES respondents is not present in the GPT data; the GPT model seems to be over-confident in the expected response for the ideological group, regardless of other factors.
In the remainder of this section, we interrogate this correspondence statistically across survey questions and demographic properties.
In some cases, the GPT model demonstrates an excellent capacity to precisely reproduce the public polling response for individual population crosstabs (subgroups of age, gender, race, and ideological identity).
Figure <ref> shows that for the SCOTUS approval questions, there is a ρ=86% Pearson correlation between the CES and GPT polling results across all demographic crosstabs, and an even higher 95% correlation when looking at ideological subgroups only.
Beyond the correlation measure, the absolute reconstruction of the ordinal response is also highly accurate, with a mean absolute percentage error (MAPE) across demographic subgroups of ≲10% in both cases.
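The agreement statistics used throughout this section (Pearson correlation and MAPE between crosstab means) can be computed as in the following sketch; the data frame layout and column names are illustrative.

import numpy as np
import pandas as pd

def agreement(gpt, ces, groupby):
    """Pearson correlation and MAPE between the mean ordinal responses of the
    GPT-simulated and CES panels, aggregated over the given demographic crosstabs."""
    g = gpt.groupby(groupby)["response"].mean().rename("gpt")
    c = ces.groupby(groupby)["response"].mean().rename("ces")
    merged = pd.concat([g, c], axis=1).dropna()
    rho = merged["gpt"].corr(merged["ces"])                        # Pearson by default
    mape = 100.0 * np.mean(np.abs((merged["gpt"] - merged["ces"]) / merged["ces"]))
    return rho, mape

# e.g. ideology-only aggregation, or the full set of crosstabs:
# agreement(gpt_df, ces_df, ["ideology"])
# agreement(gpt_df, ces_df, ["ideology", "age_bin", "gender", "race"])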
Naturally, the AI polling results are less impressive in some other cases.
In the following subsections, we explore the level of correspondence between the GPT and CES results in more depth by question and demographic field.
§.§ Ideological alignment
The AI model demonstrates an excellent ability to predict the alignment of different ideological subgroups across a range of policy issues (Figure <ref>).
The correlation between the AI-generated responses and the CES survey results, aggregated by ideological identification, is extremely high (>85%) for not only the scotus_approval question (Figure <ref>b), but also the abortion_ban (98% correlation), police_safety (94%), and increase_fuel_production (86%) issues.
For the prescription_import (ρ=67%) and gun_background_checks (91%) issues, the AI results are directionally consistent with the survey results and the correlations are still quite strong, but differ in the range and shape of the response, as the GPT results show a step-function-like difference between conservatives and liberals versus the gradual change in the survey data.
These trends are generally reflected in the MAPE values.
Like scotus_approval, abortion_ban has both an excellent correlation and MAPE (5%).
In contrast, the discontinuity in the prescription_import and gun_background_checks response pattern is reflected with higher MAPE values (31% and 29%, respectively).
The increase_fuel_production MAPE value is intermediate (21%).
Lastly, police_safety has a high MAPE (35%) relative to its correlation.
In this case, the high correlation reflects a consistently monotonic relationship between the GPT and CES demographic means, but a mis-calibration such that the GPT responses overestimate the decrease in perceived safety associated with the liberal groups (i.e. the ordinal response value is inflated at the liberal end).
(For discussion of the remaining queried issue, regarding the Ukraine war, see <ref>).
§.§ Distributional similarity
We further investigate the ability of the probabilistic output of the AI models to represent the distributional responses of the human panel. Figure <ref> illustrates the correspondence between question response distributions on each policy issue.
(The widths of these distributions are also illustrated by the error bar lengths in Figures <ref>, <ref>, and <ref>).
The distribution similarity is generally fairly good, with particularly good matches for the binary-valued abortion_ban and prescription_import questions.
The GPT model gets the absolute level of support wrong for the binary-valued questions increase_fuel_production and gun_background_checks; the AI model substantially underestimates the policy provisions' level of support.
For the multi-valued questions police_safety and scotus_approval, the level of matching is intermediate.
The spread of the distributions is similar.
However, as observed above, the GPT responses favor higher ordinal values for police_safety than in the CES data.
For scotus_approval, the median ordinal value (2) is over-represented in GPT responses.
(For discussion of the ukraine_war question, see <ref>.)
§.§ Demographic alignment
Because of the substantial polarization of the American public, ideological differences dominate issue response for many public policy issues and for the CES questions we studied.
It is difficult to find any policy question on which there are large, systematic differences in public opinion that cannot be explained primarily in partisan terms.
Therefore, the success of the GPT model in predicting ideological issue response is significant, and may be sufficient for many polling applications where the ideological distribution of the respondent pool is known (e.g., from jurisdictional voting records).
However, beyond ideological differences, the GPT model fails at anticipating several of the most important demographic trends relevant to these issues (Figure <ref>).
Among the strongest demographic trends in the CES data is that older respondents feel significantly more safe around police than do younger respondents.
While the GPT model does marginally reflect that trend, it does so at a vastly smaller magnitude than the survey data.
There is also a racial difference on this issue, such that non-white respondents are somewhat less likely to feel safe than white respondents, and the GPT model correctly anticipates the size and direction of this effect.
Likewise, the largest demographic trend for the fossil fuel production issue is also by age; younger respondents are more likely to disfavor increasing fuel production. The GPT model misses this trend entirely.
In some cases, the GPT model makes a prediction about demographic trends that seem reasonable, but overshoots what appears in the human sample.
In particular, the AI presumes a large difference in opposition to a complete abortion ban by gender, which seems plausible given this issue's strong identification with women's rights, as in this example:
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Moderate Woman in the age range of (45, 60] years who identifies as non-white expressing a clear point of view on the policy proposal to: “On the topic of abortion, do you support or oppose a policy to make abortions illegal in all circumstances?”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “support” and 2 represents the position “oppose”.
Response: Position score: 2
As a politically moderate woman in my late 50s who identifies as non-white, I oppose the policy proposal to make abortions illegal in all circumstances. This issue is not as simple as being pro-life or pro-choice, and I firmly believe that a woman should have the right to make decisions about her own body without interference from the government or anyone else. Making abortions illegal would only lead to unsafe and potentially life-threatening procedures, and it would be a violation of a woman's fundamental rights. Instead, we should focus on increasing access to education and resources that help prevent unwanted pregnancies in the first place, and provide support for women who do choose to carry a pregnancy to term. It is important to respect women's autonomy and trust them to make the best decisions for themselves and their families.
However, the CES data shows a far smaller gender difference on this question (but note that there are a wide variety of other questions addressing this complex issue in the CES survey).
In contrast, the GPT model misses an important gender trend on the question about prescription drug imports.
In the human data, women are significantly more likely to oppose the idea of allowing states to import drugs from other countries, a trend that persists strongly across ideological segments.
While GPT overshoots the ideological associations for this issue, it misses the gender skew entirely.
§.§ Generalization
While political and environmental factors are constantly shifting for all the issue questions studied in this work, there was one issue that changed overwhelmingly between when the GPT training data was collected (through September 2021) and when the CES survey was fielded (fall 2022): the Ukraine war.
CES survey respondents would clearly have in mind the interstate conflict that principally began in February 2022 with the Russian invasion of Ukraine, whereas the data used to train the GPT model would have contained only data relevant to prior Russian actions, such as the Russian annexation of Crimea in 2014, the war in Donbas, or the troop buildup that began in spring 2021.
To give the GPT model specific prompting, we modify the question from the CES survey with an explicit year to specifically reference the 2022 invasion.
The CES results reflect widespread, nearly uniform public support for US “involvement” (the question does not specify what means of intervention).
There is little evidence of an ideological signal in the CES data, although a poll taken months later might reflect the growing divide among conservatives over American support for the Ukrainian effort <cit.>.
There is a small gender difference, in that men are more likely to support involvement.
In contrast, the GPT results suggest a much lower level of overall support and a wider range of support (and lack thereof) across all ideological groups (Figure <ref>).
Moreover, the GPT model strongly predicts that “very liberal” respondents would not support US involvement.
This seems like a natural extension of anti-interventionist sentiment in left-wing political opinion exhibited in response to, for example, the US invasion of Iraq, as illustrated by this example,
Prompt: Please write a 1 paragraph letter to the editor from the perspective of a politically Liberal Man in the age range of (16, 30] years who identifies as non-white expressing a clear point of view on the policy proposal to: “The United States should not get involved in response to Russia's 2022 invasion of Ukraine.”. Before the letter, summarize their position with a “Position score:” statement followed by a single number (strictly numeric, with no other description) representing the person's position on the issue on a 2-point scale, where 1 represents the position “strongly agree” and 2 represents the position “strongly disagree”.
Response: Position score: 1
As a politically liberal man in the age range of 16-30 who identifies as non-white, I strongly believe that the United States should not get involved in response to Russia's 2022 invasion of Ukraine. As much as we want to be the global police, we simply do not have the resources nor the legitimacy to intervene in every conflict around the world. American imperialism has caused more harm than good in many cases, and it's time for us to realize that we cannot solve every problem with force. Furthermore, getting involved in another country's conflict has the potential to escalate the situation and put American lives at risk. Let us focus on repairing relationships with our allies and strengthening our own nation instead of stirring up more conflict.
And yet the GPT responses do not well capture the dynamics specific to the Ukraine war, including the nature of the Russian aggression associated with the war, the reporting on possible war crimes and atrocities associated with the conflict, and the vocal support of the Ukrainian cause from the Democratic leader, President Joe Biden.
We will discuss the potential to include such additional information in model inference in <ref>.
§ DISCUSSION
This work demonstrates the potential of AI chatbot models to generate synthetic public opinion polling data that realistically reproduces human responses.
It extends the work of <cit.>, for example, to issue polling.
We provide multiple ways of thinking about how these capabilities arise (<ref>), and discuss limitations, and potential mitigations, for these abilities (<ref>).
This demonstration has significant potential implications for the political polling and market research industries and for consumers of issue polling data such as political campaigns and advocates (<ref>).
§.§ Interpretation
The mechanism by which LLMs can generate synthetic polling data can be viewed alternatively as accessing a virtual public or as a new form of AI-assisted online listening platform.
Under the virtual public framework, we consider the LLM to be simulating a population of individual synthetic respondents akin to a human survey panel.
The multi-head attention architecture used by leading LLMs has a natural interpretation in these terms; to the extent that they capture distinguishable semantic information, each attention head can effectively represent a different perspective on an issue <cit.>.[
In deep learning models, “attention” is a widely used mechanism to differentially weight components of a layer input, effectively guiding the focus of the model.
In transformer models, multiple versions of attention are learned (attention heads) to produce independent attention mechanisms, which may correspond to recognition of distinct lexical patterns such as detecting named entities, representing entity relations, word parts of speech, or even semantic information.
See for further information.
]
Combined with the increasingly human-like reasoning performance and natively probabilistic nature of autoregressive LLMs, these features provide a basis by which models like ChatGPT can generate text emanations and survey responses that appear as if they came from a diverse panel of human respondents.
The online listening interpretation places models like ChatGPT alongside tools for online social media, news, and opinion aggregation like Brandwatch <cit.>, Meltwater <cit.>, and MediaCloud <cit.>, tools widely used by market researchers, brands, and political actors to understand public sentiment and reactions to recent events.
Like those online listening platforms, the source of the LLM's capabilities is a large corpus of Internet-derived training data that reflects a broad range of perspectives that, in aggregate, reflect public opinion and, when disaggregated, can elucidate trends with respect to demographics and other variables.
A substantial advantage of LLMs in principle is that they have reasoning capacity, allowing them to generalize beyond their training data to make predictions about hypothetical events or those that occur outside of the context of their sources.
While the results of <ref> illustrate the limited abilities of current generation LLMs to succeed at this task, this ability represents a major long-term advantage of LLMs and AI generally that is sure to be exploited by companies and other users <cit.>.
Beyond their capability to generalize to new issues, LLMs are more akin to a virtual public than an online listening platform in that they offer AI-assisted pollsters an opportunity to manipulate context and state.
When using online listening tools, researchers are limited to the questions and context that actual people have been exposed to and responded to, which makes it impossible to simulate a longform questionnaire like that used in the CES survey.
In a longform questionnaire, respondents (or subsets of respondents) answer questions in sequence and can be primed with certain information, such as factual evidence or talking points, in an effort to measure that context's influence on their responses.
Because LLMs are capable of accepting sequential prompts and (at some level) of generalizing beyond the specific examples in their training data, they can simulate this kind of longitudinal questionnaire.
§.§ Limitations
A primary challenge in the design of AI polling tools is prompt engineering, as prompting strategies can dramatically affect the reasoning skills and accuracy of LLMs <cit.>.
The LLM model must be prompted not only to elicit demographically accurate differences in real public opinion associated with complex policy issues, but also, preferably, to align its response to established public polling datasets and methodologies.
As a step towards that level of alignment, in this work we have established a methodology (<ref>) for prompting LLMs to generate both numerical responses aligned to the questionnaire of a real public polling sample and explanations of their policy positions.
Improved alignment on numerical responses can lend additional credence to the textual responses generated by the AI models.
The imperfect correspondence between the AI-generated results and the real human survey data presented in <ref> is surely due in part to inadequacies of the LLM used in this work, and in part to the imperfection of the prompt engineering.
Even with existing LLMs like GPT-3.5, a variety of additional model parameters and prompt considerations could enable improvements upon our results. In particular, systematic modification of the LLM's temperature parameter,[<https://platform.openai.com/docs/api-reference/chat/create#chat/create-temperature>] which adjusts variance in the probabilistic generative text output, may have the effect of controlling the spread in opinion responses returned for a given demographic and issue configuration.
Moreover, because GPT models are autoregressive, their outputs may be sensitive to the instructions in our prompt about where to place the numeric “Position score.”
In particular, since chain of thought prompting is known to affect reasoning in LLMs <cit.>, asking it to assert a score before generating the text may significantly condition that response.
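To make these prompting and sampling considerations concrete, the sketch below shows one way such responses could be collected and scored programmatically. It is illustrative only: `query_model` is a hypothetical callable standing in for whatever chat-completion API is used, and the temperature argument simply mirrors the sampling parameter discussed above; nothing here reproduces the exact pipeline used in this work.

```python
import re
from typing import Callable, List, Optional

def parse_position_score(response_text: str) -> Optional[float]:
    """Extract the number following 'Position score:' from a model response."""
    match = re.search(r"Position score:\s*([0-9]+(?:\.[0-9]+)?)", response_text)
    return float(match.group(1)) if match else None

def sample_positions(query_model: Callable[..., str], prompt: str,
                     n_samples: int = 50, temperature: float = 1.0) -> List[float]:
    """Repeatedly query the model with one demographic prompt and collect scores.

    `query_model` is a hypothetical wrapper around a chat API; it is assumed to
    accept a prompt and a temperature and to return the response text.
    """
    scores = []
    for _ in range(n_samples):
        text = query_model(prompt, temperature=temperature)
        score = parse_position_score(text)
        if score is not None:          # skip malformed responses
            scores.append(score)
    return scores
```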
Among the most critical ethical considerations in using LLMs is their potential to repeat biases from their training data, including harmful stereotypes and misinformation <cit.>.
In some cases, these biases may reflect actual (if objectionable) distributions of human opinion and beliefs, and in other cases they may reflect the over-representation of those beliefs in certain online sources.
This vulnerability would not only weaken the usefulness of LLMs for public opinion measurement, but could actively create harm from their use.
Similarly, there are biases (perceived and legitimate) in human political polling that limit its usefulness for actionable public opinion measurement <cit.>.
Another key limitation is the availability of training data relevant to novel policy issues.
In particular, the current generation of LLMs are typically trained with fixed datasets that halt at a certain time (e.g., GPT-3.5 was trained on data collected through September 2021), and their training corpora may lack coverage of certain issues (e.g., Internet corpora may reflect a systematic silencing of certain issues, see, e.g., ).
To the extent that LLMs are limited to “parroting” memorized training samples <cit.>, they cannot be expected to accurately extrapolate to the likely reactions of human respondents to truly novel world events.
Moreover, absent highly detailed prompting about the state of the world at the time, LLMs may lack context that would be determinative of human responses; for example, the repeal of the Supreme Court precedent from Roe v. Wade is important context for Americans surveyed on the question of abortion rights in 2023.
This limitation could be mitigated by further development of continuously trained or diachronic LLMs, which can be updated with new training data over time and are aware of the time sensitivity of their training samples <cit.>.
Furthermore, LLMs can be augmented with capabilities to access new sources such as by browsing the web <cit.>, giving them access to new information to inform their responses at prediction time.
§.§ Implications
If this impressive but nascent ability of LLMs to realistically reflect ideological and demographic issue alignment were to improve, it would raise significant challenges and potential benefits for the future of the survey and polling industries.
Given the rapid dissemination and low-cost inference of powerful LLMs and AI chatbot systems such as ChatGPT over the past year, an accurate AI-based polling system would become a highly cost-effective alternative to human surveying.
This cost advantage could democratize access to the tool of survey research, giving smaller institutions and individuals greater access to public opinion research.
If problems of survey nonresponse continue (or grow), they may compel survey consumers to increasingly turn to alternative approaches, such as LLMs, which are capable of generating data at arbitrary speed and resolution.
Moreover, the nearly instantaneous response rate from AI models (when not subject to rate limits from the companies that control them) provides an attractive capability to iterate on survey results.
When days or weeks are not required to re-field a survey instrument, marketers and pollsters have a much greater ability to refine and update their questionnaires and collect new data.
However, these abilities will only be actionable to marketers or political users if the significant challenges associated with the current generation of LLMs can be overcome.
It remains to be fully assessed how bias inherent to LLM training data and model design will become imprinted on its outputs, and how that could shape decisions informed by simulated market research studies or simulated polling.
It may be that the web datasets commonly used to train modern LLMs <cit.> will appropriately reflect the distribution of real world public thought, but perhaps only if curated to reflect a specific jurisdiction (e.g., sources primarily from one country) and to be balanced across the ideological spectrum.
At present, these biases and their dependence on the properties of large pretraining datasets are both difficult to quantify and costly to measure <cit.>.
And it is unclear to what extent such a system could capture rapidly evolving market and political dynamics, either historically or in real time, which is key to most practical uses of survey data
(see <ref> for further discussion).
§ CONCLUSIONS
By sampling from the OpenAI ChatGPT model (GPT-3.5) at scale (>11,000 responses), we have demonstrated the ability of LLMs to generate synthetic political issue polling data that realistically simulates American popular opinion across a variety of controversial topics in some respects.
In particular, we have shown that AI-generated responses have an excellent correlation (typically ρ>85%) with human data within ideological subgroups for many issues.
However, we have also shown the limitations of the AI-based approach in accurately matching trends in non-ideological demographic factors such as age, race, and gender, and in extrapolating to public opinion on novel events that occurred after the harvesting of the training data (such as the 2022 war in Ukraine).
We have interpreted these results in terms of multiple frameworks for the role of LLMs, as either virtual publics or online listening tools, and discussed their potential implications on the political polling and market research industries.
While additional development of capabilities for dynamic updating of LLMs, bias reduction, and generalization to novel issue topics is needed for AI tools to robustly supplement human opinion surveying, this study demonstrates the potential utility of even the current generation of AI tools to reduce cost, increase speed, and widen the accessibility of issue polling.
§.§ Acknowledgments
We thank Henry Farrell for thoughtful conversations on the role of AI in democracy, Beth Friedman for her helpful edits, and Xiao-Li Meng and an anonymous editor for their feedback.
|
http://arxiv.org/abs/2307.05295v1 | 20230711143850 | Optimization of Rate-Splitting Multiple Access in Beyond Diagonal RIS-assisted URLLC Systems | ["Mohammad Soleymani", "Ignacio Santamaria", "Eduard Jorswieck", "Bruno Clerckx"] | cs.IT | ["cs.IT", "eess.SP", "math.IT"] |
Optimization of Rate-Splitting Multiple Access in Beyond Diagonal RIS-assisted
URLLC Systems
Mohammad Soleymani, Member, IEEE,
Ignacio Santamaria, Senior Member, IEEE,
Eduard Jorswieck, Fellow, IEEE, and Bruno Clerckx, Fellow, IEEE
Mohammad Soleymani is with the Signal and System Theory Group, Universität Paderborn, Germany, http://sst.upb.de (email: <[email protected]>).
Ignacio Santamaria is with the Department of Communications Engineering, University of Cantabria (email: <[email protected]>).
Eduard Jorswieck is with the Institute for Communications Technology, Technische Universität Braunschweig, 38106 Braunschweig, Germany
(e-mail: <[email protected]>)
Bruno Clerckx is with the Department of Electrical and Electronic Engineering,
Imperial College London, London SW7 2AZ, U.K and with Silicon Austria Labs (SAL), Graz A-8010, Austria (e-mail: <[email protected]>;
<[email protected]>).
The work of Ignacio Santamaria was funded by MCIN/ AEI /10.13039/501100011033, under Grants PID2019-104958RB-C43 (ADELE) and PID2022-137099NB-C43 (MADDIE). The work of Eduard Jorswieck was supported by the Federal Ministry of Education and Research (BMBF, Germany) through the Program of “Souverän. Digital. Vernetzt.” joint Project 6G-RIC, under Grant 16KISK031.
August 12, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper proposes a general optimization framework for rate splitting multiple access (RSMA) in beyond diagonal (BD) reconfigurable intelligent surface (RIS) assisted ultra-reliable low-latency communications (URLLC) systems. This framework can solve a large family of optimization problems in which the objective and/or constraints are linear functions of the rates and/or energy efficiency (EE) of users. Using this framework, we show that RSMA and RIS can be mutually beneficial tools when the system is overloaded, i.e., when the number of users per cell is higher than the number of base station (BS) antennas. Additionally, we show that the benefits of RSMA increase when the packets are shorter and/or the reliability constraint is more stringent. Furthermore, we show that the RSMA benefits increase with the number of users per cell and decrease with the number of BS antennas. Finally, we show that RIS (either diagonal or BD) can highly improve the system performance, and BD-RIS outperforms regular RIS.
Beyond diagonal reconfigurable intelligent surface, energy efficiency, MISO broadcast channels, rate splitting multiple access, spectral efficiency, ultra-reliable low-latency communications.
§ INTRODUCTION
The sixth generation (6G) of communication systems should be around 100 times more reliable than 5G networks and, at the same time, provide around 10 times lower latency compared to 5G networks <cit.>. Moreover, it is expected that 6G networks become around 100 times more energy efficient and around 10 times more spectrally efficient than 5G systems <cit.>. To fulfill such ambitious goals, 6G should employ promising technologies such as reconfigurable intelligent surface (RIS) and rate splitting multiple access (RSMA) <cit.>.
In this paper, we propose a general optimization framework to improve the spectral efficiency (SE) and energy efficiency (EE) of ultra-reliable and low-latency communication (URLLC) systems by employing RIS and RSMA. Moreover, we investigate whether and how RIS and RSMA can be beneficial in URLLC systems, and show how their possible benefits can vary at different operational points depending on the latency and reliability constraints.
§.§ Related works
To support low latency, we cannot operate in the very-long-packet regime and have to employ shorter packet lengths, for which the Shannon rates are no longer accurate.
In <cit.>, it was shown that the rate for single-input single-output (SISO) point-to-point channel with Gaussian signals can be approximated as
r = C - Q^-1(ϵ)√(V/n_t),
where C is the Shannon rate, n_t is the packet length in bits, Q^-1 is the inverse of the Gaussian Q-function, ϵ is the decoding error probability of the message, and V is the channel dispersion.
The finite-block-length (FBL) rate approximation in (<ref>) is known as the normal approximation (NA). The accuracy of the NA with different operational point has been vastly discussed in <cit.>.
As can be easily verified from (<ref>), the FBL rates are smaller than Shannon rates. In addition, as expected, the shorter the packet length is and/or the more stringent the reliability constraint is, the lower rate can be achieved. Indeed, we have to transmit at a lower rate to ensure a more reliable communication with low latency. Resource allocation and transmission schemes for FBL regimes based on the NA have been studied in <cit.>. In <cit.>, the authors proposed power optimization and beamforming schemes for a broadcast channel (BC). In <cit.>, the authors proposed schemes to maximize the weighted sum rate of a multiple-input single-output (MISO) orthogonal frequency division multiple access (OFDMA) URLLC system. Moreover, the paper <cit.> proposed schemes to maximize the minimum rate and minimum EE of users in a cell-free massive multiple-input multiple-output (MIMO) URLLC system.
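As a small numerical illustration of the normal approximation in (<ref>), the sketch below evaluates the FBL rate for a point-to-point channel using the dispersion V = 1 - 1/(1+γ)^2 given later in Section II. The numbers are arbitrary and rates are expressed in nats (a log2(e) factor would convert them to bits); it is a sketch, not part of the proposed framework.

```python
import numpy as np
from scipy.stats import norm

def normal_approx_rate(snr, n, eps):
    """r = C - Q^{-1}(eps)*sqrt(V/n), with C = ln(1+snr) and V = 1 - 1/(1+snr)^2."""
    C = np.log(1.0 + snr)
    V = 1.0 - 1.0 / (1.0 + snr) ** 2
    return float(max(C - norm.isf(eps) * np.sqrt(V / n), 0.0))

snr = 10.0                                     # roughly 10 dB
for n in (100, 200, 1000):                     # shorter packets -> larger rate penalty
    print(n, normal_approx_rate(snr, n, eps=1e-5), np.log(1.0 + snr))
```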
One of the targets of 6G is to significantly improve the SE and EE, which can be even more important in URLLC systems. A promising technology that will enable us to meet this target is RIS, which has been shown to enhance the performance of various interference-free and interference-limited systems with Shannon rates and/or the NA in (<ref>) <cit.>.
The use of RIS in FBL regimes has been investigated in
<cit.>. For instance, <cit.> showed that RIS can improve the performance of SISO/MISO BCs with treating interference as noise (TIN) in URLLC systems. Moreover, <cit.> showed that RIS can increase the sum rate of a multi-cell MISO orthogonal frequency division multiplexing (OFDM) BC.
There are different RIS technologies and architectures. In most of the early works, the matrix modeling the RIS coefficients is assumed to be diagonal. However, the system performance can be improved by relaxing the diagonality assumption and using beyond diagonal (BD) RIS architectures <cit.>. In a BD-RIS, each RIS element can be connected to other elements through a circuit <cit.>. There are three different possibilities for BD-RIS based on the connectivity of RIS elements: single-connected, group-connected and fully-connected architectures <cit.>.
Indeed, a regular passive RIS can be considered as a special case of BD-RIS, which can be referred to as single-connected architecture. In a fully-connected BD-RIS, all the BD-RIS elements are connected to each other, while in a group-connected BD-RIS, each element is connected to only a group of elements, which reduces the implementation complexities.
The superiority of BD-RIS over a regular RIS has been studied in <cit.>.
For instance, in <cit.>, the authors proposed a scheme that results in a non-diagonal phase shift RIS matrix, showing that BD-RIS can improve the performance of single- and multi-user MISO BCs.
Moreover, it was shown in <cit.> that BD-RIS can outperform RIS in
dual-function radar-communication
systems. Additionally, <cit.> showed that BD-RIS (with the group- and fully-connected architectures) can enhance sum rate of a MISO BC.
Another promising technology to enhance SE and EE is RSMA, which includes many other technologies such as TIN, non-orthogonal multiple access (NOMA), multicasting, broadcasting, and spatial division multiple access (SDMA) <cit.>. In rate splitting (RS) schemes, there are two types of messages: common and private. Each private message is intended for only a specific user, while common messages are decoded by all or by a group of users depending the employed RSMA scheme <cit.>. Indeed, there are different RSMA schemes based on the number/format of common messages. The simplest RSMA scheme is the 1-layer RS, which is very practical and efficient <cit.>. In 1-layer RS, there is only one common message, which is decoded by all the users, while treating the private messages as noise. Moreover, each user decodes its own private message after decoding and canceling the common message from the received signal. Note that when interference is very weak, the optimal strategy is to treat the interference as noise for sum rate maximization or in terms of the generalized degrees of freedom <cit.>. Furthermore, if interference is strong, the interfering signal should be decoded and canceled from the received signal, which is widely known as successive interference cancellation (SIC) <cit.>. RSMA bridges TIN and SIC, which makes RSMA very flexible and powerful. Based on the interference level at users, RSMA can switch between TIN and SIC, without the need to order users, thus reducing the design complexities.
For a more detailed overview on RSMA, we refer the reader to <cit.>.
The performance of RSMA with FBL has been studied in <cit.>. In <cit.>, the authors proposed a flexible RSMA scheme for a MISO BC with FBL and showed that their proposed scheme outperforms SDMA and NOMA. In <cit.>, it was shown that RSMA can achieve the same minimum or sum rate as NOMA and SDMA with a smaller packet length, meaning that RSMA may reduce latency for a given target rate. In <cit.>, the authors proposed resource allocation schemes for RSMA in a MISO URLLC BC, showing that RSMA provides a higher effective throughput than NOMA. Additionally, they showed that RSMA can reduce latency and enhance reliability.
Finally, we summarize some of the most related works in Table <ref>. As indicated, RIS and RSMA are powerful tools to improve SE and EE of various systems.
However, there are only a limited number of papers that considered the performance of RSMA in RIS-assisted URLLC systems (i.e., <cit.>). To the best of our knowledge, the paper <cit.> is the only paper on RSMA in multiple-antenna RIS-assisted URLLC systems.
Moreover, there is only one paper considering EE metrics in URLLC systems with RSMA, i.e., <cit.>, in which, it was shown that RSMA can increase the global EE of a single-cell MISO RIS-assisted BC.
Thus, the performance of RSMA in multi-antenna RIS-assisted systems should be further studied. Additionally, more energy-efficient RSMA schemes should be developed for URLLC systems.
§.§ Motivation
RSMA is a very effective and flexible tool to manage interference that encompasses a large variety of multiple-access technologies such as NOMA, TIN, SDMA, broadcasting and multi-casting. Moreover, RIS enables optimizing environment by modulating channels, which can be employed to neutralize interference and/or to improve coverage. Hence, one might expect that the RSMA benefits are reduced when RIS is employed since RIS can manage interference in some scenarios especially in multiple-antenna systems. However, in <cit.>, it was shown that RSMA and RIS can be mutually beneficial tools in overloaded systems with Shannon rate. Unfortunately, in FBL regimes, the Shannon rates are not accurate, which makes optimizing parameters more complicated and can bring some new challenges/tradeoffs.
Moreover, the solutions for Shannon rates cannot be used in FBL regime, which further motivates developing specific RSMA techniques for RIS-assisted URLLC systems.
In this work, we study the role of RIS and RSMA in multiple-antenna URLLC systems, and particularly, provide an answer to the following question: What is the impact of RIS on the performance of RSMA in multiple-antenna URLLC systems? We show that RIS can enhance the benefits of RSMA in overloaded systems, but the RSMA benefits decrease (or even become negligible) by optimizing RIS components in underloaded systems. In other words, RIS impacts RSMA performance differently depending on the operating point of the system. Moreover, in this work, we investigate the impact of the packet length, which is related to the latency constraint, and of the reliability constraint on the RSMA performance. We show that the benefits of RSMA increase when packets are shorter or when the maximum tolerable decoding error probability is smaller.
§.§ Contribution
The main goal of this work is to investigate the overall system performance as well as the specific role of RIS and RSMA in URLLC systems. We not only show that RIS and/or RSMA can significantly improve the system performance, but also clarify the role of RIS and RSMA in such improvements.
To this end, we propose an optimization framework for RSMA to enhance SE and EE of MISO (BD-)RIS-assisted URLLC systems, which can be applied to every interference-limited system with 1-layer RS. As shown in Table <ref> and discussed in Section <ref>, there are a limited number of works on RSMA in RIS-assisted URLLC systems, and to the best of our knowledge, the SE of RSMA in multiple-antenna RIS-assisted systems with FBL regimes has not been studied yet.
Thus, it is required to develop
a general optimization framework for RSMA in multiple-antenna RIS-assisted URLLC systems that can solve a large family of optimization problems, including various SE and EE metrics such as the minimum weighted rate, weighted sum rate, minimum weighted EE, and global EE. In this paper, we address this issue by proposing a framework to solve every optimization problem
in which the objective and/or constraints are linear functions of the rates and/or EE of users and/or the received powers.
To clarify the role of RIS/RSMA, we define two operational regimes based on the number of users and the number of BS antennas. We call the system underloaded if the number of BS antennas is higher than the number of users per cell.
Otherwise, we call the system overloaded. We show that RIS and RSMA are mutually beneficial tools in overloaded systems; however, the benefits of RSMA decrease by employing RIS in underloaded systems.
The reason is that
the interference level is lower in underloaded systems than in overloaded systems.
Hence, interference in underloaded systems can be managed in a simpler way by SDMA and optimizing channels through RIS. However, in overloaded systems, we require more powerful interference-management techniques such as RSMA to mitigate interference. We also show that RIS with TIN may even perform worse than RSMA without employing RIS in an overloaded system, which shows the importance of RSMA. To sum up, the role of RSMA is to manage interference especially in overloaded systems. The role of RIS is mainly to improve the coverage in both underloaded and overloaded systems, as well as to partly manage interference in underloaded systems.
We, moreover, aim at investigating the impact of the reliability and latency constraints on the performance of RSMA. We show that the benefits of RSMA in overloaded systems increase when the reliability constraint is more stringent and/or when the packet lengths are shorter. This shows that RSMA can enhance reliability and ensure a lower latency. We also show that the benefits of RSMA increase with the number of users per cell. The reason is that the interference level increases with the number of users per resources, which makes RSMA as an interference-management technique more beneficial. Additionally, we show that the benefits of RSMA decrease with the number of antennas at BSs. When the number of BS antennas increases, indeed the number of resources increases, which makes it easier to manage interference by SDMA. We also show that RIS can significantly improve the EE and SE of the system even with a relatively low number of RIS elements per users.
In this paper, we also develop optimization techniques for BD-RIS with a group-connected architecture of group size two. To the best of our knowledge, this is the first work that studies BD-RIS in URLLC systems.
We consider two feasibility sets for optimizing the BD/diagonal RIS elements and show that
RIS (either regular or beyond diagonal) can significantly improve the system performance. Moreover, we show that BD-RIS with group-connected architecture of group size two can outperform a regular RIS.
§.§ Paper outline
This paper is organized as follows. Section <ref> presents the system model and formulates the problem.
Section <ref> proposes schemes to optimize the beamforming vectors.
Section <ref> provides solutions for optimizing the BD-RIS elements.
Section <ref> presents some numerical results.
Finally, Section <ref> concludes the paper.
§ SYSTEM MODEL
We propose an optimization framework for 1-layer RS, which can be applied to any interference-limited (BD-)RIS-assisted URLLC system and is able to solve a large family of optimization problems in which the objective/utility function and/or constraints are linear functions of the rates and/or EE of users and/or the received power.
As an illustrative example, we consider a multicell broadcast channel (BC) with L multiple-antenna base stations (BSs) with N_BS transmit antennas, as shown in Fig. <ref>. We assume that each BS serves K single-antenna users, and there are M≥ L BD-RIS and/or regular RISs with N_RIS components each.
§.§ RIS model
In this paper, we consider single-sector BD-RISs in the reflective mode with group-connected architecture of group size two. In this case, the channel between BS i and user k associated to BS l, denoted by u_lk, is
𝐡_lk,i({Θ}) = ∑_m=1^M𝐟_lk,mΘ_m𝐆_mi_Links through RIS + 𝐝_lk,i_Direct link ∈ℂ^1× N_BS,
where 𝐝_lk,i∈ℂ^1× N_BS is the direct link between BS i and u_lk, 𝐆_mi∈ℂ^N_RIS× N_BS is the channel matrix between BS i and BD-RIS m, 𝐟_lk,m∈ℂ^1× N_RIS is the channel vector between BD-RIS m and u_lk, and {Θ}={Θ_m} is the set containing all the BD-RIS components. For a regular RIS, Θ_m is a diagonal matrix, given by
Θ_m = diag(θ_m_1, θ_m_2,⋯,θ_m_N_RIS),
where θ_m_i is the coefficient corresponding to the i-th element of RIS m.
However, in a BD-RIS, the diagonality assumption is relaxed, and Θ_m is a symmetric non-diagonal matrix. For BD-RIS with group-connected architecture of group size two, Θ_m is a block-diagonal matrix as
Θ_m=diag(Θ_m_1,Θ_m_2,⋯,Θ_m_G),Θ_m_g=Θ_m_g^T,
where Θ_m_g for all m,g is a 2-by-2 symmetric matrix, and G=N_RIS/2. Note that without loss of generality, we assume that N_RIS is an even number.
There can be two different constraints for the symmetric matrices Θ_m_gs.
First, we have the convex constraint Θ_m_gΘ_m_g^H≼ I for all m,g, which results in the following feasibility set:
𝒯_U={Θ_m_g=Θ_m_g^T,Θ_m_gΘ_m_g^H≼ I,∀ m,g}
Second, we have Θ_m_gΘ_m_g^H= I, which yields
𝒯_I={Θ_m_g=Θ_m_g^T,Θ_m_gΘ_m_g^H= I,∀ m,g}
Note that 𝒯_I⊂𝒯_U. Indeed, 𝒯_U includes 𝒯_I as a special case and should not perform worse than 𝒯_I.
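As a small illustration of the group-connected structure and of the two feasibility sets, the sketch below assembles a block-diagonal Θ_m from 2-by-2 symmetric blocks and checks the 𝒯_I condition. The random construction of the blocks is purely illustrative; the optimized blocks come from the algorithms of Section IV.

```python
import numpy as np
from scipy.linalg import block_diag

def random_group2_bdris(n_ris, unitary=True, rng=None):
    """Assemble a block-diagonal Theta_m from G = n_ris/2 symmetric 2x2 blocks.

    unitary=True gives blocks with Theta_g Theta_g^H = I (set T_I); otherwise the
    blocks are only scaled so that Theta_g Theta_g^H <= I (set T_U).
    """
    rng = np.random.default_rng(rng)
    blocks = []
    for _ in range(n_ris // 2):
        a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
        if unitary:
            u, _ = np.linalg.qr(a)       # unitary factor
            blk = u @ u.T                # symmetric and unitary
        else:
            blk = (a + a.T) / 2          # symmetric
            blk /= max(np.linalg.norm(blk, 2), 1.0)   # spectral norm <= 1
        blocks.append(blk)
    return block_diag(*blocks)

theta = random_group2_bdris(20, unitary=True)
assert np.allclose(theta @ theta.conj().T, np.eye(20), atol=1e-10)   # T_I check
```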
Note that if u_lk (or BS i) is not in the reflection space of RIS m, then we have 𝐟_lk,m=0 (or 𝐆_mi= 0). In other words, in order to get a signal through a reflective BD-RIS, the transceivers should be in the reflection space of the BD-RIS. For more details on different architectures of BD-RIS, we refer the reader to <cit.>.
Hereafter, for notational simplicity, we drop the dependency of the channels on Θ_m and represent the channels as 𝐡_lk,i for all i,l,k.
§.§ Signal model
We consider the 1-layer RS to manage intra-cell interference. In the 1-layer RS, each BS transmits one common message, to be decoded by all its associated users, in addition to K private messages intended for each individual user as
𝐱_l = 𝐱_c,ls_c,l_Common message + ∑_k=1^K𝐱_p,lks_p,lk_Private messages ∈ℂ^N_BS× 1,
where s_c,l∼𝒞𝒩(0,1) is the common message of BS l, and s_p,lk∼𝒞𝒩(0,1) is the private message intended for u_lk.
Moreover, 𝐱_c,l and 𝐱_p,lk are, respectively, the beamforming vectors corresponding to the common message s_c,l and private message s_p,lk. Note that s_c,l and s_p,lk for all l,k are independent and identically distributed
proper Gaussian signals.
The received signal for the user u_lk is
y_lk = 𝐡_lk,l𝐱_c,ls_c,l_Desired C. signal + 𝐡_lk,l𝐱_p,lks_p,lk_Desired P. signal + 𝐡_lk,l∑_j=1,j≠ k^K𝐱_p,ljs_p,lj_Intracell interference + ∑_i=1,i≠ l^L𝐡_lk,i𝐱_i_Intercell interference + n_lk_Noise,
where 𝐡_lk,i∈ℂ^1× N_BS is the channel between BS i and u_lk, given by (<ref>), and n_lk∼𝒞𝒩(0,σ^2) is additive white Gaussian noise, which is independent of the transmitted signals.
§.§ Rate and energy-efficiency expressions
Each user first decodes the common message, treating all other signals as noise.
Thus, the rate of decoding s_c,l at u_lk is <cit.>, <cit.>
r̅_c,lk = log(1+γ_c,lk)_Shannon Rate - Q^-1(ϵ_c)√(V_c,lk/n_c)_δ_c,lk({𝐱},{Θ}),
where V_c,lk is the channel dispersion for decoding s_c,l at u_lk, n_c is the packet length of the common message in bits,
ϵ_c is the decoding error probability of the common message,
and γ_c,lk is the corresponding SINR given by
γ_c,lk = |𝐡_lk,l𝐱_c,l|^2/(σ^2+∑_i≠ l|𝐡_lk,i𝐱_c,i|^2+∑_ij|𝐡_lk,i𝐱_p,ij|^2).
The optimal channel dispersion is <cit.>
V_c,lk^opt=1-1/(1+γ_c,lk)^2,
which is not achievable by Gaussian signals in the presence of interference <cit.>.
An achievable channel dispersion for Gaussian signals in interference-limited systems is <cit.>
V_c,lk=2γ_c,lk/(1+γ_c,lk).
The common message s_c,l should be decodable for all the users associated to BS l. Hence, the transmission rate of s_c,l must be smaller than or equal to the minimum achievable rate of decoding s_c,l at the users associated to BS l, i.e.,
r_l ≤min_k{r̅_c,lk}≜ r_c,l.
Each user decodes and cancels the common message. After this, it decodes its own private message, treating the remaining signals as noise. Thus, the decoding rate of s_p,lk at u_lk is <cit.>, <cit.>
r_p,lk = log(1+γ_p,lk)_Shannon Rate - Q^-1(ϵ_p)√(V_p,lk/n_p)_δ_p,lk({𝐱},{Θ}),
where ϵ_p is the decoding error probability of the private message, n_p is the packet length of the private message in bits, and γ_p,lk is the SINR for decoding s_p,lk given by
γ_p,lk = |𝐡_lk,l𝐱_p,lk|^2/(σ^2+∑_i≠ l|𝐡_lk,i𝐱_c,i|^2+∑_[ij]≠ [lk]|𝐡_lk,i𝐱_p,ij|^2),
where ∑_[ij]≠ [lk]|𝐡_lk,i𝐱_p,ij|^2=∑_ij|𝐡_lk,i𝐱_p,ij|^2-|𝐡_lk,l𝐱_p,lk|^2. Similarly, an achievable channel dispersion is <cit.>
V_p,lk=2γ_p,lk/(1+γ_p,lk).
Finally, the rate of u_lk is
r_lk=r_p,lk+r_c,lk,
where r_c,lk≥ 0 is the portion of the rate that u_lk gets from the common message. Note that ∑_kr_c,lk≤ r_c,l, where r_c,l is given by (<ref>).
Finally, the EE of u_lk is defined as <cit.>
e_lk=r_lk/(p_c+η(𝐱_p,lk^H𝐱_p,lk+𝐱_c,l^H𝐱_c,l/K)),
where η^-1 is the power efficiency of the BSs, and p_c is the constant power consumption to transmit data to each user, given by <cit.>.
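To make the rate and EE expressions above concrete, the following sketch evaluates them for one BS with illustrative SINRs, powers, and an equal split of the common rate. All numbers are arbitrary, rates are in nats, and in the proposed framework the beamformers (and hence the SINRs and the common-rate split) are optimization variables rather than fixed inputs.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(sinr, n_bits, eps):
    """FBL rate with the achievable dispersion V = 2*sinr/(1+sinr) (rates in nats)."""
    V = 2.0 * sinr / (1.0 + sinr)
    return float(max(np.log(1.0 + sinr) - norm.isf(eps) * np.sqrt(V / n_bits), 0.0))

# Illustrative SINRs for the K = 3 users of one BS
sinr_common  = np.array([3.0, 5.0, 2.0])      # gamma_c,lk
sinr_private = np.array([4.0, 6.0, 1.5])      # gamma_p,lk
n_c = n_p = 200
eps_c = eps_p = 1e-3

# Decodability: the common rate is limited by the weakest user
r_c_total = min(fbl_rate(g, n_c, eps_c) for g in sinr_common)
# Example split of the common rate (any nonnegative split summing to r_c_total is feasible)
r_c_share = np.full(3, r_c_total / 3)
r_private = np.array([fbl_rate(g, n_p, eps_p) for g in sinr_private])
r_users = r_private + r_c_share               # r_lk = r_p,lk + r_c,lk

# EE: e_lk = r_lk / (p_c + eta*(||x_p,lk||^2 + ||x_c,l||^2 / K)), illustrative powers
p_c, eta, K = 1.0, 2.0, 3
p_private = np.array([0.3, 0.3, 0.4])         # ||x_p,lk||^2
p_common = 1.0                                # ||x_c,l||^2
ee_users = r_users / (p_c + eta * (p_private + p_common / K))
```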
§.§ Discussion on reliability and latency constraints
The reliability constraint is modeled by the decoding error probabilities, ϵ_c and ϵ_p. The total decoding error probability for a user can be approximated as
ϵ_t=ϵ_c+(1-ϵ_c)ϵ_p≈ϵ_c+ϵ_p.
In general, the error probability of decoding private and/or common messages can be different at each user, based on their service requirements. To simplify the notations, we consider a symmetric system in which the decoding error probability of the private messages are the same at all users. However, this framework can be easily modified for asymmetric scenarios.
The latency constraint can also be translated into a rate constraint. The reason is that, if the latency for a packet with length n bits should be less than T seconds, then its transmission rate should be higher than r ≥ n/(β T) (b/s/Hz), where β is the used bandwidth.
Thus, the rate of u_lk should satisfy r_lk≥ r_lk^th=(n_p+n_c)/(β T) (b/s/Hz), where T is the latency constraint.
Note that ϵ_t and r_lk^th are upper bounds for the decoding error probability and the latency, respectively, since it may happen that u_lk receives its rate from only the common message <cit.>. In this case, ϵ_t=ϵ_c and r_lk^th=n_c/(β T). However, in this paper, we consider the upper bounds to ensure that the latency and reliability constraints are met.
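As a quick numerical illustration of this conversion with arbitrary values (not taken from the simulation setup): for n_p = n_c = 200 bits, β = 1 MHz, and T = 1 ms, the threshold is r_lk^th = (200+200)/(10^6 × 10^-3) = 0.4 b/s/Hz, and halving the allowed latency to T = 0.5 ms doubles the required rate to 0.8 b/s/Hz.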
§.§ Problem statement
We consider a general optimization problem, similar to, e.g., <cit.>, as
max_{{𝐱}∈𝒳,{Θ}∈𝒯,𝐫_c}  f_0({𝐱},{Θ})
s.t.  f_i({𝐱},{Θ})≥0, ∀ i,
      r_lk≥ r_lk^th, ∀ l,k,
      ∑_k=1^Kr_c,lk≤min_k{r̅_c,lk}≜ r_c,l({𝐱}), ∀ l,
      r_c,lk≥ 0, ∀ l,k,
where {𝐱} is the set of the beamforming vectors, 𝒳 is the feasibility set for the beamforming vectors, 𝐫_c={r_c,lk,∀ lk} is the set of the common rates, 𝒯 is the feasibility set for RIS components, which can be either 𝒯_I or 𝒯_U,
f_is are linear functions of rates/EEs and/or concave/convex/linear functions of beamforming vectors and/or channels,
(<ref>) is the latency constraint, (<ref>) is the decodability constraint of each common message, (<ref>) is because of non-negative rates. The variables
𝐫_c, {𝐱} and {Θ} are the optimization parameters.
The feasibility set 𝒳 is
𝒳={𝐱_p,lk,𝐱_c,l: 𝐱_c,l^H𝐱_c,l+∑_k𝐱_p,lk^H𝐱_p,lk≤ p_l, ∀ l},
where p_l is the power budget of BS l.
The general optimization problem (<ref>) includes a large family of optimization problems for enhancing spectral and energy efficiency of the system. For instance, (<ref>) includes the minimum-weighted-rate maximization (MWRM), weighted-sum-rate maximization (WSRM), minimum-weighted-EE maximization (MWEEM), global EE maximization (GEEM) problems, to mention a few.
§ PROPOSED OPTIMIZATION FRAMEWORK FOR OPTIMIZING BEAMFORMING VECTORS
Our proposed optimization framework is an iterative optimization technique, based on majorization minimization (MM) and alternating optimization (AO). That is, we first fix RIS components to {Θ^(t-1)} and solve (<ref>) over the beamforming vectors {𝐱} to obtain {𝐱^(t)}. We then alternate the optimization parameters and solve (<ref>) for fixed beamforming vectors {𝐱^(t)} to obtain {Θ^(t)}.
We continue this procedure until a convergence metric is met. In this section, we present our schemes to optimize beamforming vectors as well as the RSMA parameters.
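Before detailing the per-block updates, the overall AO loop just described can be sketched as follows. This is a high-level, illustrative sketch only: the two solver callables stand for the surrogate convex problems of Sections III and IV, and `objective` evaluates the chosen SE/EE metric f_0.

```python
def alternating_optimization(solve_beamformers, solve_ris, x0, theta0,
                             objective, tol=1e-4, max_iter=100):
    """High-level sketch of the AO/MM loop described above."""
    x, theta = x0, theta0
    prev = objective(x, theta)
    for _ in range(max_iter):
        x = solve_beamformers(theta)    # update {x} for fixed {Theta}
        theta = solve_ris(x)            # update {Theta} for fixed {x}
        cur = objective(x, theta)
        if abs(cur - prev) <= tol * max(abs(prev), 1.0):
            break                       # convergence criterion met
        prev = cur
    return x, theta
```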
To update {𝐱}, we solve (<ref>) for fixed {Θ^(t-1)}, which is a complicated non-convex optimization problem. To this end, we employ an MM-based approach.
That is, we first find suitable surrogate functions for the rates and then,
solve the corresponding surrogate optimization problem.
The surrogate functions should be concave lower bounds for the rates, satisfying the three conditions in <cit.>.
In the following lemma, we present concave lower bounds for r_p,lk and r̅_c,lk that satisfy the conditions in <cit.>.
Concave and quadratic lower bounds for r_p,lk and r̅_c,lk are, respectively,
r_p,lk≥r̃_p,lk= a_p,lk +2c_lkℜ{(𝐡_lk,l𝐱_p,lk^(t-1))^*𝐡_lk,l𝐱_p,lk} +∑_i≠ lf_p,lkd_lkℜ{(𝐡_lk,i𝐱_c,i^(t-1))^*𝐡_lk,i𝐱_c,i} +∑_[ij]≠ [lk]f_p,lkd_lkℜ{(𝐡_lk,i𝐱_p,ij^(t-1))^*𝐡_lk,i𝐱_p,ij} -b_p,lkd_lk(∑_ij|𝐡_lk,i𝐱_p,ij|^2+∑_i≠ l|𝐡_lk,i𝐱_c,i|^2),
r̅_c,lk≥r̃_c,lk= a_c,lk +2d_lkℜ{(𝐡_lk,l𝐱_c,l^(t-1))^*𝐡_lk,l𝐱_c,l} +∑_ije_lkf_c,lkℜ{(𝐡_lk,i𝐱_p,ij^(t-1))^*𝐡_lk,i𝐱_p,ij} +∑_i≠ le_lkf_c,lkℜ{(𝐡_lk,i𝐱_c,i^(t-1))^*𝐡_lk,i𝐱_c,i} -b_p,lke_lk(∑_ij|𝐡_lk,i𝐱_p,ij|^2+∑_i|𝐡_lk,i𝐱_c,i|^2),
where t is the number of the current iteration, a_c,lk, b_c,lk, a_p,lk, b_p,lk, c_lk, d_lk, e_lk, f_p,lk, and f_c,lk
are constants, given by
a_p,lk = log(1+γ_p,lk^(t-1)) - γ_p,lk^(t-1) - Q^-1(ϵ_p)/√(n_p)(√(V_p,lk^(t-1))/2+1/√(V_p,lk^(t-1))) + (f_p,lk-b_p,lk)d_lkσ^2,
a_c,lk = log(1+γ_c,lk^(t-1)) - γ_c,lk^(t-1) - Q^-1(ϵ_c)/√(n_c)(√(V_c,lk^(t-1))/2+1/√(V_c,lk^(t-1))) + (f_c,lk-b_c,lk)e_lkσ^2,
b_p,lk = γ_p,lk^(t)+ζ_p,lk^(t-1)f_p,lk/2,
b_c,lk = γ_c,lk^(t)+ζ_c,lk^(t-1)f_c,lk/2,
f_p,lk = 2Q^-1(ϵ_p)/√(n_pV_p,lk^(t-1)),
f_c,lk = 2Q^-1(ϵ_c)/√(n_cV_c,lk^(t-1)),
c_lk = (σ^2+∑_i≠ l|𝐡_lk,i𝐱_c,i^(t-1)|^2+∑_[ij]≠ [lk]|𝐡_lk,i𝐱_p,ij^(t-1)|^2)^-1,
d_lk = (σ^2+∑_ij|𝐡_lk,i𝐱_p,ij^(t-1)|^2+∑_i≠ l|𝐡_lk,i𝐱_c,i^(t-1)|^2)^-1,
e_lk = (σ^2+∑_ij|𝐡_lk,i𝐱_p,ij^(t-1)|^2+∑_i|𝐡_lk,i𝐱_c,i^(t-1)|^2)^-1
where γ_c,lk^(t-1), V_c,lk^(t-1), γ_p,lk^(t-1) and V_p,lk^(t-1) are, respectively, obtained by replacing {𝐱^(t-1)} in (<ref>), (<ref>), (<ref>) and (<ref>). Moreover, ζ_p,lk^(t-1)=d_lkc_lk^-1 and ζ_c,lk^(t-1)=e_lkd_lk^-1.
Please refer to Appendix <ref>.
Substituting r̃_p,lks and r̃_c,lks in f_is gives the surrogate functions f̃_is and consequently, the following surrogate optimization problem
max_{{𝐱}∈𝒳,𝐫_c}  f̃_0({𝐱},{Θ})
s.t.  f̃_i({𝐱},{Θ})≥0, ∀ i,
      r̃_lk=r̃_p,lk+r_c,lk≥ r_lk^th, ∀ l,k,
      ∑_k=1^Kr_c,lk≤min_k{r̃_c,lk}≜r̃_c,l({𝐱}), ∀ l,
      r_c,lk≥ 0, ∀ l,k,
This optimization problem is convex for spectral efficiency metrics, which can be efficiently solved by numerical tools.
We can employ Dinkelbach-based algorithms to find the
solution of (<ref>) for energy efficiency metrics such as GEE and weighted minimum EE. Due to a space restriction, we do not provide the solutions here and refer the reader to <cit.> for more details.
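For completeness, a generic Dinkelbach iteration has the following shape. This is only a sketch under simplifying assumptions: `solve_parametric` stands for the convex surrogate problem solved at each inner step, and f and g for the numerator and denominator of the EE objective; the toy usage at the end is not related to the system model above.

```python
import numpy as np

def dinkelbach(solve_parametric, f, g, x0, tol=1e-8, max_iter=50):
    """Generic Dinkelbach iteration for maximizing f(x)/g(x) with g(x) > 0."""
    x, lam = x0, f(x0) / g(x0)
    for _ in range(max_iter):
        x = solve_parametric(lam)               # argmax_x { f(x) - lam*g(x) }
        if abs(f(x) - lam * g(x)) <= tol:       # F(lam) -> 0 at the optimal ratio
            break
        lam = f(x) / g(x)
    return x, lam

# Toy usage: maximize ln(1+p)/(1+p) over a grid of power levels p
grid = np.linspace(0.0, 10.0, 2001)

def f(p):
    return np.log(1.0 + p)

def g(p):
    return 1.0 + p

def solve_parametric(lam):
    return grid[np.argmax(f(grid) - lam * g(grid))]

p_opt, ee_opt = dinkelbach(solve_parametric, f, g, x0=1.0)   # p_opt is close to e - 1
```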
§ PROPOSED OPTIMIZATION FRAMEWORK FOR OPTIMIZING BD-RIS COMPONENTS
In this section, we solve (<ref>) for fixed {𝐱^(t)} to update the BD-RIS components.
To this end, we employ an MM-based algorithm to obtain a suboptimal solution for the complicated optimization problem.
For fixed {𝐱^(t)}, (<ref>) is non-convex because of two reasons. First, the rates are not concave in {Θ}. Second, the set 𝒯_I is non-convex due to the constraint ΘΘ^H= I. To solve (<ref>) for fixed {𝐱^(t)}, we have to handle these two issues. To this end, we first obtain suitable surrogate functions for the rates. Then we try to convexify 𝒯_I. The rates have a similar structure with respect to the channels and beamforming vectors. Thus, we can employ an approach similar to Lemma <ref> to find concave lower bounds for the rates with respect to {Θ}. In the following corollary, we state the concave lower bound for r̅_c,lk. Due to a strict space restriction, we do not provide the concave lower bound for r_p,lk since it is straightforward to obtain it from Lemma <ref> and Corollary <ref>.
A concave lower bound for r̅_c,lk is
r̅_c,lk≥r̂_c,lk= a_c,lk +2d_lkℜ{(𝐡_lk,l^(t-1)𝐱_c,lk^(t))^*𝐡_lk,l𝐱_c,lk^(t)} +∑_ije_lkf_c,lkℜ{(𝐡_lk,i^(t-1)𝐱_p,ij^(t))^*𝐡_lk,i𝐱_p,ij^(t)} +∑_i≠ le_lkf_c,lkℜ{(𝐡_lk,i^(t-1)𝐱_c,i^(t))^*𝐡_lk,i𝐱_c,i^(t)} -b_p,lke_lk(∑_ij|𝐡_lk,i𝐱_p,ij^(t)|^2+∑_i|𝐡_lk,i𝐱_c,i^(t)|^2),
where 𝐡_lk,i^(t-1)=𝐡_lk,i(Θ^(t-1)), and the other parameters are defined the same as in Lemma <ref>.
Let us call the concave lower bound for r_p,lk as r̂_p,lk.
Substituting r̂_p,lks and r̂_c,lks in f_is yields the surrogate functions f̂_is as well as the following problem
max_{{Θ}∈𝒯,𝐫_c}  f̂_0({𝐱},{Θ})
s.t.  f̂_i({𝐱},{Θ})≥0, ∀ i,
      r̂_lk=r̂_p,lk+r_c,lk≥ r_lk^th, ∀ l,k,
      ∑_k=1^Kr_c,lk≤min_k{r̂_c,lk}≜r̂_c,l({Θ}), ∀ l,
      r_c,lk≥ 0, ∀ l,k,
which is convex only for 𝒯_U. Note that 𝒯_U contains the convex constraint Θ_m_gΘ^H_m_g≼ I, which may not be suitable for implementing in some existing numerical solvers. In the following, we propose a suboptimal approach to rewrite the constraint Θ_m_gΘ^H_m_g≼ I as a series of inequality constraints on scalar optimization parameters, which are referred to as disciplinary convex constraints that can be easily handled in numerical solvers. Moreover, we propose an approach to convexify 𝒯_I in Section <ref>.
§.§.§ Making Θ_m_gΘ^H_m_g≼ I a disciplinary convex constraint
The constraint Θ_m_gΘ^H_m_g≼ I can be equivalently expressed as T= I-Θ_m_gΘ^H_m_g≽ 0.
The matrix T can be written as
T = I-Θ_m_gΘ^H_m_g = I-Θ_m_gΘ^*_m_g = [[ 1-|θ_11|^2-|θ_12|^2,  -θ_11^*θ_12-θ_12^*θ_22;  -(θ_11^*θ_12+θ_12^*θ_22)^*,  1-|θ_12|^2-|θ_22|^2 ]].
Since T is a 2× 2 Hermitian matrix, T is PSD if and only if the following constraints hold
|θ_11|^2+|θ_12|^2 ≤ 1,
|θ_12|^2+|θ_22|^2 ≤ 1,
ζ_m_g=(1-|θ_11|^2-|θ_12|^2)(1-|θ_12|^2-|θ_22|^2)-|θ_11^*θ_12+θ_12^*θ_22|^2 ≥ 0.
Note that since Θ_m_g is symmetric, we have Θ_m_g^H=Θ_m_g^*.
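As a quick numerical sanity check (illustrative only), the scalar conditions above can be compared against a direct eigenvalue test of T for a randomly generated symmetric block:

```python
import numpy as np

def scalar_psd_test(theta_g):
    """Evaluate the diagonal and determinant conditions for T = I - Theta_g Theta_g^H
    of a symmetric 2x2 block, and compare with a direct eigenvalue test."""
    t11, t12, t22 = theta_g[0, 0], theta_g[0, 1], theta_g[1, 1]
    c1 = abs(t11) ** 2 + abs(t12) ** 2 <= 1 + 1e-12
    c2 = abs(t12) ** 2 + abs(t22) ** 2 <= 1 + 1e-12
    zeta = ((1 - abs(t11) ** 2 - abs(t12) ** 2) * (1 - abs(t12) ** 2 - abs(t22) ** 2)
            - abs(np.conj(t11) * t12 + np.conj(t12) * t22) ** 2)
    scalar_ok = c1 and c2 and zeta >= -1e-12
    T = np.eye(2) - theta_g @ theta_g.conj().T
    eig_ok = bool(np.all(np.linalg.eigvalsh(T) >= -1e-12))
    return scalar_ok, eig_ok

rng = np.random.default_rng(0)
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = (B + B.T) / 2                      # symmetric block
B /= 1.1 * np.linalg.norm(B, 2)        # spectral norm < 1, so T should be PSD
assert scalar_psd_test(B) == (True, True)
```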
The constraints (<ref>) and (<ref>) are convex.
Moreover, the constraint (<ref>) can be simplified to
ζ_m_g= 1-2|θ_12|^2-|θ_11|^2-|θ_22|^2-|θ_12|^2|θ_11^*+θ_22|^2_Concave Part +|θ_12|^4+|θ_11|^2|θ_22|^2+|θ_12|^2|θ_22|^2+|θ_11|^2|θ_12|^2_Convex Part≥ 0,
which is not a convex constraint since ζ is not a concave function.
However, we can apply CCP to convexify this constraint. That is, we keep the concave part of ζ and find a linear lower bound for the convex part of ζ by the first order Taylor expansion as
ζ_m_g≥ζ̂_m_g= 1-2|θ_12|^2-|θ_11|^2-|θ_22|^2-|θ_12|^2|θ_11^*+θ_22|^2 +4|θ_12^(t-1)|^2ℜ(θ^(t-1)_12θ_12^*)-3|θ_12^(t-1)|^4 +2|θ_22^(t-1)|^2ℜ(θ^(t-1)_11θ_11^*)+2|θ_11^(t-1)|^2ℜ(θ^(t-1)_22θ_22^*)-3|θ_11^(t-1)|^2|θ_22^(t-1)|^2 +2|θ_22^(t-1)|^2ℜ(θ^(t-1)_12θ_12^*)+2|θ_12^(t-1)|^2ℜ(θ^(t-1)_22θ_22^*)-3|θ_12^(t-1)|^2|θ_22^(t-1)|^2 +2|θ_11^(t-1)|^2ℜ(θ^(t-1)_12θ_12^*)+2|θ_12^(t-1)|^2ℜ(θ^(t-1)_11θ_11^*)-3|θ_11^(t-1)|^2|θ_12^(t-1)|^2 ≥ 0.
Unfortunately, even though (<ref>) is a convex constraint, it is still not a disciplinary constraint because of the convex function |θ_12|^2|θ_11^*+θ_22|^2.
To address this issue, we employ the first order Taylor expansion to approximate -|θ_12|^2|θ_11^*+θ_22|^2 as a linear function of θ_11, θ_12 and θ_22, which yields ζ̃_m_g.
The constraint ζ̃_m_g≥ 0 is a disciplinary convex constraint, which can be easily implemented in existing numerical solvers.
Finally, by inserting the corresponding constraints into (<ref>), we have the following disciplinary convex problem
max_{{Θ},𝐫_c}  f̂_0({𝐱},{Θ})
s.t.  (<ref>)-(<ref>),
      ζ̃>1-ϵ, (<ref>),(<ref>), ∀ m,g.
Let us call the solution of (<ref>) as Θ_m_g^(⋆).
Unfortunately, fulfilling the constraints ζ̃_m_g≥ 0, (<ref>) and (<ref>) does not guarantee Θ_m_g^(⋆)Θ_m_g^(⋆)^H≼ I.
To ensure obtaining a feasible point, we first check if Θ_m_g^(⋆)Θ_m_g^(⋆)^H≼ I holds. If the largest eigenvalue of Θ_m_g^(⋆)Θ_m_g^(⋆)^H is greater than 1, i.e., λ_m_g>1, then we choose Θ̂_m_g=Θ_m_g^(⋆)/√(λ_m_g).
Finally, to ensure generating a sequence of non-decreasing f_0, we update Θ_m_g as
{Θ^(t)} = {Θ̂} if f_0({Θ̂}) > f_0({Θ^(t-1)}), and {Θ^(t)} = {Θ^(t-1)} otherwise.
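The feasibility repair and monotone update just described can be sketched as follows (illustrative only; `objective` stands for f_0 evaluated at a full set of RIS blocks, and the blocks themselves come from the convex problem above):

```python
import numpy as np

def repair_and_accept(blocks_star, blocks_prev, objective):
    """Rescale any block violating Theta Theta^H <= I by sqrt(lambda_max), then keep
    the repaired set only if it improves f_0 over the previous iterate."""
    repaired = []
    for blk in blocks_star:
        lam_max = np.linalg.eigvalsh(blk @ blk.conj().T).max()
        repaired.append(blk / np.sqrt(lam_max) if lam_max > 1 else blk)
    return repaired if objective(repaired) > objective(blocks_prev) else blocks_prev
```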
§.§ Convexifying 𝒯_I
The constraints Θ_m_gΘ^H_m_g= I and Θ_m_g=Θ^T_m_g can be simplified as
Θ_m_gΘ^H_m_g = [[ θ_11, θ_12; θ_12, θ_22 ]][[ θ_11^*, θ_12^*; θ_12^*, θ_22^* ]] = [[ |θ_11|^2+|θ_12|^2, θ_11^*θ_12+θ_12^*θ_22; (θ_11^*θ_12+θ_12^*θ_22)^*, |θ_12|^2+|θ_22|^2 ]] = [[ 1, 0; 0, 1 ]],
which results in
|θ_11|^2+|θ_12|^2 =1,
|θ_12|^2+|θ_22|^2 =1,
θ_11^*θ_12+θ_12^*θ_22 =0.
The constraints (<ref>)-(<ref>) are equivalent to |θ_11|=|θ_22|, and
θ_11^*θ_12+θ_12^*θ_22=|θ_12|e^j∠θ_12(θ_11^*+θ_22e^-j2∠θ_12)=0,
which yields θ_22=θ_11^*e^j(2∠θ_12+π).
The constraints (<ref>) and (<ref>) are not convex. To convexify (<ref>), we rewrite it as the following two constraints:
|θ_11|^2+|θ_12|^2 ≤1,
|θ_11|^2+|θ_12|^2 ≥1.
The constraint (<ref>) is a convex constraint, but (<ref>) is not since |θ_11|^2+|θ_12|^2 is a jointly convex function in |θ_11|^2 and |θ_12|^2. Thus, we can employ CCP to convexify |θ_11|^2+|θ_12|^2≥1
and relax it for a faster convergence as
|θ_11^(t-1)|^2+2ℜ(θ_11^(t-1)(θ_11-θ_11^(t-1))^*)+|θ_12^(t-1)|^2+2ℜ(θ_12^(t-1)(θ_12-θ_12^(t-1))^*) ≥ 1-ϵ,
where ϵ>0. Similarly, we can convexify (<ref>) by considering the following constraints:
|θ_22|^2+|θ_12|^2 ≤1,
|θ_22^(t-1)|^2+2ℜ(θ_22^(t-1)(θ_22-θ_22^(t-1))^*)+|θ_12^(t-1)|^2+2ℜ(θ_12^(t-1)(θ_12-θ_12^(t-1))^*) ≥ 1-ϵ,
In addition to (<ref>) and (<ref>), the constraint (<ref>) (or θ_22=θ_11^*e^j(2∠θ_12+π)) is not convex either. A suboptimal way to convexify this constraint is to fix the phase of θ_12.
For instance, if θ_12 is real (or pure imaginary), then θ_22=-θ_11^* (or θ_22=θ_11^*).
Note that when θ_12=0, the BD-RIS with group-connected architecture of group size two is equivalent to the diagonal RIS, and the constraint (<ref>) is automatically satisfied. Thus, in this case, θ_22 is independent of θ_11, and there is no need to consider θ_22=-θ_11^* (or θ_22=θ_11^*).
As a result, this algorithm never performs worse than the diagonal RIS.
Finally, the corresponding optimization problem in this case is
max_{{Θ},𝐫_c}  f̂_0({𝐱},{Θ})
s.t.  (<ref>)-(<ref>),
      (41),(43)-(45), ∀ m,g,
      θ_22=-θ_11^* if θ_12≠0, ∀ m,g,
which is convex and can be efficiently solved.
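As a small illustrative check of the structure used in this convexification (arbitrary numbers): with θ_12 real and θ_22 = -θ_11^*, any block with |θ_11|^2 + θ_12^2 = 1 is symmetric and satisfies Θ_gΘ_g^H = I, i.e., it lies in 𝒯_I.

```python
import numpy as np

# With theta_12 real and theta_22 = -conj(theta_11), a block with
# |theta_11|^2 + theta_12^2 = 1 is symmetric and unitary.
t12 = 0.6
t11 = np.sqrt(1.0 - t12 ** 2) * np.exp(1j * 0.8)   # arbitrary phase for theta_11
theta_g = np.array([[t11, t12], [t12, -np.conj(t11)]])
assert np.allclose(theta_g, theta_g.T)
assert np.allclose(theta_g @ theta_g.conj().T, np.eye(2), atol=1e-12)
```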
§ NUMERICAL RESULTS
In this section, we provide some numerical results for the MWRM and MWEEM problems. We consider ϵ_c=ϵ_p and n_p=n_c.
The propagation parameters are based on <cit.>. Moreover, the simulation scenario is a two-cell MISO BC, as depicted in <cit.>, where the BSs/RISs/users positions are chosen similar to <cit.>. Additionally, the considered schemes in the simulations are represented as:
* RS-BD-RIS_U (or RS-BD-RIS_I): The proposed scheme for BD-RIS-assisted systems with rate splitting, BD-RIS with group-connected architecture of group size two and feasibility set 𝒯_U (or 𝒯_I).
* RS-RIS_X (or TIN-RIS_X): The proposed algorithm for RIS-assisted systems with rate splitting (or TIN), regular RIS and feasibility set 𝒯_X, where X can be U or I.
* RS-RIS_R (or TIN-RIS_R): The proposed algorithm for RIS-assisted systems with rate splitting (or TIN), regular RIS and random RIS elements.
* RS-No-RIS (or TIN-No-RIS): The RSMA (or TIN) scheme for systems without RIS.
* Sh-RS-BD-RIS_U: The proposed scheme for BD-RIS-assisted systems with rate splitting, BD-RIS with group-connected architecture of group size two, feasibility set 𝒯_U and considering Shannon rates.
§.§ Fairness rate
In this subsection, we present some numerical results for the fairness rate, which is defined as the maximum of the minimum rate of users. We consider the impact of the reliability constraint, packet length, number of users per cell and power budget on the performance of RSMA and RIS. Additionally, we compare the performance of regular RIS with BD-RIS with group connected architecture of group size two.
§.§.§ Impact of the reliability constraint
Fig. <ref> shows the average fairness rate and performance improvement by RSMA versus ϵ^c for multi-cell MISO BC with N_RIS=20, L=2, M=2, n_t=200 bits, K=6, N_BS=5 and P=10dB.
As expected, the fairness rate for all schemes decreases when the reliability constraint is more stringent.
Moreover, we can observe that the RSMA scheme with BD-RIS with group-connected architecture of group size two and the feasibility set 𝒯_U can outperform the other schemes. Additionally, we observe that RIS (either regular or BD) can significantly improve the system performance and enhance the RSMA benefits. Note that in this example, the number of users per cell is higher than the number of BS antennas, which means that the considered system is overloaded. Interestingly, we can observe that RSMA without RIS can outperform TIN with regular RIS, which shows the importance of employing an effective interference-management technique in overloaded systems.
Finally, we observe that the benefits of RSMA increase as ϵ^c decreases, which shows that managing interference is even more important in URLLC systems. Furthermore, we observe that the RSMA benefits increase when RIS elements are properly optimized.
§.§.§ Impact of the packet length
Fig. <ref> shows the average fairness rate and performance improvement by RSMA versus n_t for a multi-cell MISO BC with N_RIS=20, L=2, M=2, K=6, ϵ^c=0.001 and P=10dB. As can be observed, the RSMA scheme with BD-RIS with group-connected architecture of group size two and the feasibility set 𝒯_U can outperform the other schemes. Moreover, we can observe that RIS and RSMA can significantly increase the average fairness rate. As expected, the average fairness rate increases with n_t for all the schemes. However, the benefits of RSMA decrease with n_t. Indeed, when the packet lengths are shorter (or the latency constraint is more stringent), RSMA can provide higher gains, which indicates that RSMA can be even more effective in URLLC systems.
§.§.§ Impact of the number of users per cell
Fig. <ref> shows the impact of the number of users per cell as well as optimizing RIS components on the performance of RSMA.
As can be observed, the benefits of RSMA vary considerably depending on whether the RIS elements are optimized. Interestingly, we observe that RIS may decrease the RSMA benefits in underloaded systems, while it enhances the RSMA gain in overloaded systems.
This happens because RIS can modulate the channels to mitigate the interference in underloaded systems. Thus, a proper design of RIS elements may decrease the RSMA benefits in underloaded systems.
However, as the number of users grows, the interference becomes more severe, and RIS alone cannot completely mitigate it. As a result, we observe that the RSMA benefits monotonically increase with K when RIS elements are properly designed.
§.§.§ Impact of power budget
Fig. <ref> shows the average fairness rate versus the BS power budget P for N_BS=8, N_RIS=20, L=2, M=2, n_t=200, ϵ^c=0.001 and different K.
In this figure, the number of users per cell is less than the number of BS antennas, i.e., the system is underloaded. As can be observed, RSMA cannot provide any considerable benefits in RIS-assisted systems for K=4. However, RSMA can significantly outperform TIN for K=5, especially in higher SNR regimes. Indeed, this shows that RSMA can still be beneficial in underloaded systems. Additionally, we observe that RSMA can significantly outperform TIN with random RIS elements in underloaded scenarios. However, such benefits are lower (or may even vanish) when the RIS elements are optimized. This result indicates that RIS can be employed as an interference-management technique in underloaded BCs. However, we should employ more advanced interference-management techniques to fully reap the RIS benefits in overloaded systems, which is in line with the findings in <cit.>.
§.§.§ Comparison of RIS technologies
In Fig. <ref> and Fig. <ref>, we compare the performance of various schemes, including regular RIS and BD-RIS with different feasibility sets. Here, we provide a further comparison between regular RIS and BD-RIS with group-connected architecture of group size two.
Fig. <ref> shows the average fairness rate versus the BS power budget P for N_BS=5, N_RIS=20, L=2, M=2, n_t=200, and ϵ^c=0.001.
As can be observed, the BD-RIS with group-connected architecture of group size two and constraint ΘΘ^H≼ I can significantly outperform a regular diagonal RIS.
As indicated in Section <ref>, BD-RIS is a more general architecture than regular RIS that includes regular RIS as a special case. Thus, BD-RIS never performs worse than regular RIS.
§.§ Fairness EE
In this subsection, we present some numerical results for the minimum EE of users, which is referred to as the fairness EE.
In Fig. <ref>, we show the average fairness EE versus P_c for N_RIS=20, L=2, M=2, n_p=n_c=200, ϵ_c=ϵ_p=0.001, N_BS=5 and K=7. As can be observed, RIS can substantially increase the fairness EE for both the RSMA and TIN schemes when the RIS components are properly optimized. We can also observe that RSMA can outperform TIN with and without RIS. In the following, we provide an in-depth discussion of the benefits of RIS and RSMA from an EE point of view.
In Fig. <ref>, we show the average fairness EE improvement by RSMA for RIS-assisted systems. As can be observed, the benefits of employing RSMA increase with K. The reason is that, as the number of users grows, the interference level increases, which in turn improves the benefits of employing a powerful interference-management technique such as RS. Indeed, the more overloaded the system is, the more gain RSMA can provide. We also observe that the RSMA benefits increase with P_c. Indeed, when the constant power consumption is higher, it is more important to properly design the system to achieve better EE performance.
Fig. <ref> shows the average performance improvement by RIS for the RSMA schemes versus P_c for N_RIS=20, L=2, M=2, n_t=200, ϵ^c=0.001, and N_BS=5. As can be observed, the RIS benefits decrease with the number of users for a fixed N_RIS. However, the RIS benefits are still significant even when the number of RIS elements per user is relatively low (less than 3 per user). This suggests that RIS may be promising in practical scenarios.
We also observe that the RIS benefits increase with P_c, which is in line with the results in Fig. <ref>.
§.§ Summary
Our main findings in the numerical section can be summarized as follows:
* RSMA and RIS can significantly improve the SE and EE of the system. Moreover, the combination of the RSMA and BD-RIS with group-connected architecture of group size two outperforms the other schemes.
* The use of RIS increases the benefits of RSMA in overloaded systems. However, the relative benefits of RSMA decrease when RIS is employed in underloaded systems.
* The benefits of RSMA increase with the number of users, K, since the interference level increases when there are more users in the system. However, the benefits of RIS decrease with K when N_RIS is fixed. The reason is that the number of RIS elements per user decreases with K for a fixed N_RIS, which in turn reduces the RIS benefits.
* RSMA provides higher gains when the packet lengths are shorter or when the reliability constraint is more stringent. Therefore, RSMA is more effective in URLLC systems, especially in highly overloaded systems.
§ CONCLUSION
In this paper, we showed that RSMA and RIS can significantly improve the spectral and energy efficiency of MISO multi-cell BC URLLC systems.
Moreover, we investigated the role of RSMA and RIS in URLLC systems, analyzing the impact on performance of different parameters, such as the reliability constraint, packet length, number of users per cell and BS power budget.
We showed that the use of RIS has a different impact on the benefits of RSMA at different operating points. Specifically, RIS decreases the RSMA benefits in underloaded systems, while it enhances the RSMA gain in overloaded systems. Indeed,
RSMA and RIS can be mutually beneficial tools in overloaded systems.
In addition, RSMA provides higher gains when the reliability constraint is more stringent and/or when the packet lengths are shorter. Finally, we showed that BD-RIS with group-connected architecture of group size two can outperform regular RIS.
§ PROOF OF LEMMA <REF>
Due to space restrictions, we only provide a proof for the inequality in (<ref>). It is straightforward to extend the proof to obtain the lower bound r̃_p,lk in (<ref>). To prove (<ref>), we obtain a concave lower bound for the Shannon rate as well as a convex upper bound for δ_c,lk({𝐱},{Θ}).
Employing <cit.>, we can obtain a lower bound for the Shannon rate as
log(1+γ_c,lk) ≥ log(1+γ_c,lk^(t-1)) - γ_c,lk^(t-1) + 2 d_lkℜ{(𝐡_lk,l𝐱_c,l^(t-1))^* 𝐡_lk,l𝐱_c,l} - γ_c,lk^(t)e_lk( σ^2 + ∑_ij|𝐡_lk,i𝐱_p,ij|^2 + ∑_i|𝐡_lk,i𝐱_c,i|^2 ),
which is quadratic and concave in {𝐱}.
To obtain a convex upper bound for δ_c,lk({𝐱},{Θ}), we employ the inequality in <cit.>, which results in
√(V_c,lk) ≤ √(V_c,lk^(t))/2 + γ_c,lk/(√(V_c,lk^(t))(1+γ_c,lk))
= √(V_c,lk^(t))/2 + (1/√(V_c,lk^(t)))( 1 - (σ^2 + ∑_ij|𝐡_lk,i𝐱_p,ij|^2 + ∑_i≠ l|𝐡_lk,i𝐱_c,i|^2) / (σ^2 + ∑_ij|𝐡_lk,i𝐱_p,ij|^2 + ∑_i|𝐡_lk,i𝐱_c,i|^2) ).
Unfortunately, the upper bound on the right-hand side of (<ref>) is not convex in {𝐱}.
To obtain a convex upper bound for the right-hand side of (<ref>), we employ <cit.>, which yields
√(V_c,lk) ≤ √(V_c,lk^(t))/2 + (1/√(V_c,lk^(t)))( 1 - 2σ^2e_lk - 2∑_ije_lkℜ{(𝐡_lk,i𝐱_p,ij^(t-1))^*𝐡_lk,i𝐱_p,ij} - 2∑_i≠ le_lkℜ{(𝐡_lk,i𝐱_c,i^(t-1))^*𝐡_lk,i𝐱_c,i} + e_lkζ_c,lk^(t-1)( ∑_ij|𝐡_lk,i𝐱_p,ij|^2 + ∑_i|𝐡_lk,i𝐱_c,i|^2 ) ).
Finally, we can obtain the lower bound in (<ref>) by substituting (<ref>) and (<ref>) in (<ref>).
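To make the final step explicit with a schematic (the exact prefactor of the dispersion term follows the finite-blocklength rate definition given earlier in the paper and is not restated here), assume that δ_c,lk enters (<ref>) with a positive coefficient Q^-1(ϵ^c)/√(n_c). Substituting the two bounds then gives
r_c,lk ≥ r̃_c,lk = [right-hand side of (<ref>)] - (Q^-1(ϵ^c)/√(n_c)) [right-hand side of (<ref>)],
where the first bracket is the quadratic concave lower bound on log(1+γ_c,lk) and the second bracket is the convex upper bound on √(V_c,lk). Since a concave function minus a positively weighted convex function is concave, r̃_c,lk is concave in {𝐱}, as required.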
|
http://arxiv.org/abs/2307.04833v1 | 20230710181143 | 3D Simulations of Magnetoconvection in a Rapidly Rotating Supernova Progenitor | [
"Vishnu Varma",
"Bernhard Mueller"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.HE"
] |
We present a first 3D magnetohydrodynamic (MHD) simulation of oxygen, neon and carbon shell burning in a rapidly rotating 16 M_⊙ core-collapse supernova progenitor. We also run a purely hydrodynamic simulation for comparison. After ≈180 s (≈ 15 and 7 convective turnovers, respectively), the magnetic fields in the oxygen and neon shells achieve saturation at 10^11 G and 5×10^10 G. The strong Maxwell stresses become comparable to the radial Reynolds stresses and eventually suppress convection. The suppression of mixing by convection and shear instabilities results in the depletion of fuel at the base of the burning regions, so that the burning shells eventually move outward to cooler regions, thus reducing the energy generation rate. The strong magnetic fields efficiently transport angular momentum outwards, quickly spinning down the rapidly rotating convective oxygen and neon shells and forcing them into rigid rotation. The hydrodynamic model shows complicated redistribution of angular momentum and develops regions of retrograde rotation at the base of the convective shells. We discuss implications of our results for stellar evolution and for the subsequent core-collapse supernova. The rapid redistribution of angular momentum in the MHD model casts some doubt on the possibility of retaining significant core angular momentum for explosions driven by millisecond magnetars. However, findings from multi-D models remain tentative until stellar evolution calculations can provide more consistent rotation profiles and estimates of magnetic field strengths to initialise multi-D simulations without substantial numerical transients. We also stress the need for longer simulations, resolution studies, and an investigation of non-ideal effects.
stars: massive –- stars: magnetic fields –- stars:interiors – stars:rotation -– MHD –- convection
§ INTRODUCTION
In recent years, multi-dimensional effects during the advanced convective burning stages of massive stars have received significant interest for multiple reasons, and have been studied extensively by means of hydrodynamic simulations. It has been recognised that seed instabilities from convection play an important dynamical role in core-collapse supernova explosions of massive stars
<cit.>. There is also the question of whether
convective boundary mixing by turbulent entrainment and shell mergers may lead to changes in the pre-collapse structure of supernova progenitors compared to current spherically symmetric stellar evolution models <cit.> and affect the nucleosynthesis outcomes from massive stars <cit.>. Finally, multi-dimensional simulations of late-stage convective burning are starting to shed light on angular momentum transport and magnetic evolution inside massive stars <cit.>, which is particularly relevant for our understanding of neutron star birth spin rates and magnetic fields and hyperenergetic supernova explosions that are probably driven by rotation and magnetic fields.
Three-dimensional simulations of convection during advanced burning stages in massive stars have so far largely disregarded two important aspects of real stars – rotation and magnetic fields. The effects of rotation had only been touched upon by the seminal work of <cit.>, and studies in axisymmetry (2D) of <cit.>, while 3D simulations have only started to explore rotation in recent years <cit.>. Similarly, magnetic fields during the advanced burning stages have only been considered recently by <cit.>, and their work was limited to the non-rotating case.
Outside the context of advanced burning stages in massive stars, magnetohydrodynamic (MHD) simulations have been used extensively as a means to study convection over the years, primarily in the context of the Sun and solar-like stars <cit.>. Given the high quality of spatially and temporally resolved solar data <cit.>, these simulations often aim to explain more detailed observational time-varying features on the solar surface <cit.> and envelope convection, which includes understanding the formation of the solar rotation profile <cit.>.
Studies of stars more massive than the sun are currently limited to just a handful of core dynamo simulations of A and B-type stars <cit.>.
Simulations of magnetoconvection during the late burning stages, both in rotating and non-rotating stars, are a necessity for several reasons. Even in slowly rotating massive stars, magnetic fields have been shown to impact the dynamics of the subsequent neutrino-driven explosions <cit.>. For the magnetorotational explosion scenario (e.g., <cit.>; see also early work on explosions driven by millisecond magnetars, e.g., <cit.>), a better understanding of the interplay between convection, rotation, and magnetic fields in supernova progenitors is even more critical. In this mechanism, a rapidly rotating core and very strong initial magnetic fields are required to launch the very energetic explosion. Such magnetorotational explosions are thought to explain rare, unusually energetic “hypernovae” with energies of up to ∼10^52 erg <cit.>.
The magnetorotational explosion mechanism is linked to the problem of rotation and magnetism in massive stars. Initial conditions for magnetorotational explosion simulations currently come from “1.5D” stellar evolution models that assume shellular rotation and include effective recipes for magnetic field generation and angular momentum transport by hydrodynamic and magnetohydrodynamic processes <cit.>.
There are still many open questions about the treatment of rotation and magnetic fields in stellar evolution models.
Aside from purely hydrodynamic instabilities <cit.>, the interaction of rotation and magnetic fields is a critical issue. Since convective regions are usually assumed to rotate rigidly (as this is the only allowed rotational state in thermal equilibrium; ), attention has usually focused on angular momentum transport and dynamos in non-convective regions.
The dynamo mechanism often implemented in these 1.5D stellar models to generate magnetic fields relies on sufficiently strong differential rotation in convectively stable regions of the star to stretch poloidal magnetic fields into toroidal fields. The dynamo loop is closed by the development of a pinch-type (Pitts-Tayler) instability <cit.>. This mechanism developed by <cit.> is often referred to as the Tayler-Spruit dynamo.
Recently, <cit.> have tried to improve the Tayler-Spruit dynamo mechanism, arguing that the Tayler instability saturates via turbulent dissipation of unstable magnetic field perturbations. This mechanism has a smaller energy dissipation rate and thus allows for stronger magnetic fields and more efficient angular momentum transport than the traditional Tayler-Spruit dynamo.
Other attempts to understand magnetic stellar evolution models have been to derive scaling relationships in convective regions <cit.> and to explore the role of the magnetorotational instability (MRI) <cit.>, driven, in part, by global 3D simulations such as that in <cit.>. These simulations have suggested that the interaction between the different instabilities and flows can be quite intricate, and may induce not only the pinch instability but can also be strongly affected by MRI and magnetic buoyancy.
Since 1.5D stellar evolution models implementing the Tayler-Spruit dynamo predict magnetic fields that are rather weak and predominantly toroidal, the general notion has long been that field amplification processes after the collapse are critical in magnetorotational explosions <cit.>, although this has recently been challenged <cit.>. In particular, for sufficiently strong seed fields in the progenitor, the initial
field strengths and geometry could have a significant
impact on the development of magnetorotational explosions after collapse <cit.>, making an understanding of the pre-collapse magnetic fields in 3D indispensable.
In this study, we present a first simulation of rotating magnetoconvection
during the final phases of shell burning using the
ideal MHD approximation.
This simulation constitutes a first step beyond spherically symmetric prescriptions in stellar evolution models
to predict the magnetic field strength and geometry, as well as its role in angular momentum transport
encountered in the inner shells of massive stars at the pre-supernova
stage. We also compare to a corresponding non-magnetic model of the same progenitor to gauge the feedback of magnetic fields on the convective flow and rotation profiles.
Our paper is structured as follows. In Section <ref>, we describe the
numerical methods, progenitor model, and initial conditions
used in our study. The results of the simulations are presented in Section <ref>. We first focus on the strength and geometry of the emerging magnetic field and then analyse the impact of magnetic fields on the convective flows and rotation, with a focus on the turbulent mixing and angular momentum transport within and between the burning shells. We summarise our results and discuss their implications in Section <ref>.
§ NUMERICAL METHODS AND SIMULATION SETUP
We simulate oxygen, neon, and carbon shell burning
with and without magnetic fields
in a rapidly rotating 16 M_⊙ solar-metallicity helium star from <cit.> with a strong differential rotation profile calculated using the stellar evolution code Kepler. The same progenitor model has previously been used in the PROMETHEUS rotating shell convection simulation of <cit.>.
The structure of the stellar evolution model at
the time of mapping to 3D is illustrated in Figure <ref>.
For our 3D simulations we employ the Newtonian magnetohydrodynamic (MHD) version of the CoCoNuT code as described in <cit.>.
The MHD equations are solved in spherical
polar coordinates using the HLLC (Harten-Lax-van Leer-Contact) Riemann solver <cit.>. The divergence-free condition ∇·𝐁 = 0 is maintained using a modification
of the original hyperbolic divergence cleaning scheme of <cit.> that allows for a variable cleaning speed while still maintaining total energy conservation as described in <cit.>
(building on similar ideas by ).
The extended system of MHD equations for the density ρ, velocity 𝐯, magnetic field 𝐁, total energy density ê, mass
fractions X_i, and the rescaled Lagrange multiplier ψ̂ reads,
∂_t ρ + ∇·(ρ𝐯) = 0,
∂_t (ρ𝐯) + ∇·(ρ𝐯𝐯 - 𝐁𝐁/4π + P_tℐ) = ρ𝐠 - (∇·𝐁)𝐁/4π,
∂_t ê + ∇·[(ê+P_t)𝐯 - (𝐁(𝐯·𝐁) - c_hψ̂𝐁)/4π] = ρ𝐠·𝐯 + ρϵ̇_nuc,
∂_t 𝐁 + ∇·(𝐯𝐁 - 𝐁𝐯) + ∇(c_hψ̂) = 0,
∂_t ψ̂ + c_h∇·𝐁 = -ψ̂/τ,
∂_t (ρ X_i) + ∇·(ρ X_i 𝐯) = ρẊ_i,
where 𝐠 is the gravitational acceleration, P_t is the total (gas and magnetic) pressure, ℐ is the Kronecker tensor, c_h is the hyperbolic cleaning speed, τ is the damping time scale for divergence cleaning, and ϵ̇_nuc and Ẋ_i are energy and mass fraction
source terms from nuclear reactions. This
system conserves the volume integral of a
modified total energy density ê,
which also contains the cleaning field ψ̂,
ê = ρ(ϵ+v^2/2) + (B^2+ψ̂^2)/(8π),
where ϵ is the mass-specific internal energy.
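As a simple illustration of this conserved quantity (a sketch in Python, not code from CoCoNuT; the array names and Gaussian-cgs units are assumptions), ê can be evaluated on a snapshot as:

import numpy as np

def modified_total_energy(rho, eps, v, B, psi_hat):
    # ê = ρ(ε + v²/2) + (B² + ψ̂²)/(8π), with v and B given as (3, ...) component arrays
    v2 = np.sum(v ** 2, axis=0)
    B2 = np.sum(B ** 2, axis=0)
    return rho * (eps + 0.5 * v2) + (B2 + psi_hat ** 2) / (8.0 * np.pi)

Summing ê times the cell volumes over the grid should then remain constant up to the source terms and boundary fluxes.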
The simulations are conducted on a grid
with 400×128×256 zones in radius r, colatitude θ, and longitude φ
with an exponential grid in r and uniform
spacing in θ and φ.
To reduce computational costs, we excise the non-convective inner core up to 3,000 km and replace the excised core with a point mass. The grid extends to a radius
of 40,000 km and includes a small part
of the silicon shell, the entire convective oxygen, neon and carbon shells. Our simulations cover the full sphere (4π solid angle).
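For orientation, a grid with these proportions can be generated as in the following sketch (an illustration only, not the CoCoNuT grid generator; the exact radial stretching law used by the code may differ):

import numpy as np

# Radial zones spaced exponentially (uniform in log r) between 3,000 km and 40,000 km,
# colatitude and longitude spaced uniformly over the full sphere.
n_r, n_theta, n_phi = 400, 128, 256
r = np.geomspace(3.0e8, 4.0e9, n_r + 1)          # cell interfaces in cm
theta = np.linspace(0.0, np.pi, n_theta + 1)
phi = np.linspace(0.0, 2.0 * np.pi, n_phi + 1)
r_c = 0.5 * (r[1:] + r[:-1])                     # cell-centre radii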
In the MHD simulation, we impose a homogeneous magnetic field with B_z = 10^7 G parallel to the grid axis as initial conditions.
We implement reflecting and periodic boundary conditions in θ and φ, respectively. For the hydrodynamic variables, we use hydrostatic
extrapolation <cit.> at the inner and outer boundary, and
impose an effectively slip-free inner boundary.
Different from the hydrodynamic simulations of <cit.> and <cit.>, we do not contract
the inner boundary to follow the contraction and
collapse of the core.
The inner and outer boundary conditions for the
magnetic fields are less trivial.
In simulations of magnetoconvection in the Sun,
various choices such as vertical boundary conditions
(B_x=B_y=0), radial boundary conditions (B_θ=B_φ=0), vanishing tangential
electric fields or currents, perfect-conductor
boundary conditions, or extrapolation
to a potential solution have been employed
<cit.>.
Since our domain boundaries are separated from
the convective regions by shell interfaces with
significant buoyancy jumps, we opt for the
simplest choice of boundary conditions and merely
fix the magnetic fields in the ghost zones
to their initial values for a homogeneous vertical magnetic field. We argue that due to the buffer regions at our radial boundaries, and the lack of rotational shear (due to the slip-free boundary conditions), our choice of magnetic boundary conditions should not have a significant impact on the dynamically relevant regions of the star.
Similar to the non-rotating magnetoconvection simulations done in <cit.>, our models will not (and are not
intended to) provide an exact representation of the
pre-collapse state of the particular 16 M_⊙ star
that we are simulating. We would expect, e.g., that
for the particular 16 M_⊙ model, the burning rate
and hence the convective velocities would increase
until the onset of collapse due
to the contraction of the convective oxygen shell. As a consequence
of accelerating convection and flux compression, the magnetic
fields will likely also be somewhat higher at the onset of collapse.
The model is rather meant to reveal the physical principles
governing late-stage magnetoconvection in rapidly rotating massive stars,
and to be representative of the
typical conditions in the burning shells with the understanding
that there are significant variations in convective Mach
number and shell geometry at the onset of collapse <cit.>,
which will also be reflected in the magnetic field strengths in the
interiors of magnetorotational supernova progenitors.
§ RESULTS
§.§ Evolution of the magnetic fields
We simulate two rapidly rotating 16 M_⊙ models, one with and one without magnetic fields. The magnetic model is initialised with a homogeneous magnetic field of 10^7 G. We then allow the geometry of the magnetic field to evolve naturally under the influence of rapid rotation and convection.
The evolution of the root mean square (RMS), volume-averaged magnetic fields in the three convective burning shells we simulate —
the oxygen, neon and carbon burning shell —
are shown in Figure <ref>. We see an initial period of exponential growth of magnetic field strength in each shell before a plateau forms after ≈200s in each shell. The field strengths in the oxygen and neon shells both appear to follow a very similar trajectory, achieving a peak at ≈190s, before a gradual decline sets in.
In each of the shells, convection takes a different amount of time to fully develop, which explains the slight delay from the start of the simulation to the beginning of the exponential field growth. In particular, the carbon shell has a much longer convective turnover timescale τ_c than the other two shells
with an initial value
τ_c≈300 s compared to ≈ 15 s and 25 s for the oxygen and neon shell, respectively, which considerably delays the growth of magnetic fields. The growth of the magnetic fields in the carbon shell already becomes apparent after ≈ 60 s, even without convection being fully developed. This is due to field amplification from strong differential rotation which develops at the base of the shell, and turbulent fluctuations that developed alongside the convective plumes.
Due to the development of convection in the shells, coupled with rapid differential rotation (maximum rotation rate of Ω ≈ 0.104 rad s^-1), we expect the field amplification to be dominated by the αΩ dynamo mechanism, which is often proposed as the mechanism that sustains the solar magnetic field <cit.>.
The mechanism stretches the poloidal magnetic fields into toroidal fields via differential rotation (Ω-mechanism), and the toroidal field is then stretched into a poloidal field due to convective motions (α-mechanism), completing the cycle and amplifying the seed field.
To test if this is the case, we plot the expected growth of the αΩ dynamo for the oxygen shell in Figure <ref>.
To this end, we approximate the growth of the magnetic field via the αΩ mechanism in the oxygen shell by assuming the magnetic field evolves via the simplified evolution equation:
∂B_rms/∂t = Γ_αΩB_rms,
which has a solution for the magnetic field growth of the form B_rms = B_0e^Γ_αΩΔ t, where B_0 is the initial field strength. We take the growth rate of the αΩ dynamo to be Γ_αΩ = (v/L)(Ωτ_c)^1/2 as presented in <cit.> based on dimensional arguments, where v is the convective velocity, L is the radial extent of the convective zone, Ω the rotation rate and τ_c is the convective turnover timescale.
Since τ_c∼ L/V, this effectively amounts to a growth time scale τ_αΩ of the order of the geometric mean of the rotation period P=2πΩ^-1 and the convective turnover time, τ_αΩ=Γ_αΩ^-1∼ (2 π)^-1/2√(P τ_c).
For the evolution plotted in Figure <ref>, we first calculate the RMS-averaged values of v and Ω in the oxygen shell, as well as the angle-averaged radial extent of the oxygen shell, L. The convective turnover time is calculated from these averaged quantities as τ_c = L/v. The averaged quantities then determine the growth rate Γ_αΩ at each time step, which is used to evolve the magnetic field.
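A minimal sketch of this bookkeeping is given below (illustrative only, not the analysis script used for the paper); the shell-averaged time series v_rms, omega_rms and L_shell are assumed to be arrays sampled at the simulation output times, in cgs units:

import numpy as np

def alpha_omega_growth(times, v_rms, omega_rms, L_shell, B0=1.0e7):
    # Piecewise-exponential integration of dB/dt = Γ_αΩ B with
    # Γ_αΩ = (v/L) sqrt(Ω τ_c) and τ_c = L/v.
    B = np.empty_like(times, dtype=float)
    B[0] = B0
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        tau_c = L_shell[i - 1] / v_rms[i - 1]
        gamma = (v_rms[i - 1] / L_shell[i - 1]) * np.sqrt(omega_rms[i - 1] * tau_c)
        B[i] = B[i - 1] * np.exp(gamma * dt)
    return B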
The expected growth rate of the αΩ dynamo follows the growth of the magnetic field in the oxygen shell very closely for the first ≈140 s, after which the field growth in the simulation slows down; the field eventually decays after reaching a peak strength of ≈ 10^11 G and 7×10^10 G in the oxygen and neon burning shells, respectively. We see that the expected growth rate of the αΩ dynamo also decreases at later times due to two factors. First, the convective velocities drop due to the suppression of convection by strong magnetic stresses. Second, the rotation rate drops due to large angular momentum fluxes. We will discuss these effects further in Sections <ref> and <ref>.
These effects stop the magnetic field from being amplified via the αΩ dynamo. Since the convection dies down, it
is reasonable to expect that late-time field amplification and saturation is determined by differential rotation alone. Interestingly, the saturation of the field appears to be well described by an amplification mechanism that is driven by the MRI. For the MRI, <cit.> argue that the saturation field is roughly given by
B_sat^2 ∝ 4πρ r^2 Ω^2 dlnΩ/dln r,
where ρ is the density, Ω the rotation rate, r the radius, and dlnΩ/dln r quantifies the amount of differential rotation present.
<cit.> derive
Equation (<ref>) by assuming that saturation of the magnetic field is achieved in the star when the characteristic mode scale
l_mode ≈ v_A (dlnΩ/dln r)^-1 is equal to the local radius r, since the wavelength of the
mode cannot be larger than the physical size of the unstable region. Here, v_A is the Alfvén velocity (v_A = B/√(4πρ)). On dimensional grounds, one
can expect Equation (<ref>) to hold not just specifically for the MRI, but more broadly for amplification mechanisms driven solely by differential rotation in the ideal MHD regime with negligible resistivity.
Figure <ref> shows that the magnetic field in the oxygen shell saturates at a very similar level to Equation (<ref>).
The strong magnetic fields also result in very rapid redistribution of angular momentum, which slows the rotation rate of the oxygen and neon shells dramatically. Since the magnetic field saturation depends on the rate of rotation, this in turn leads to a drop in the average magnetic field strength by over 50% in these shells, as we see in Figure <ref>. The consequences of this will be discussed in more detail in the next sections.
The carbon shell behaves somewhat differently from its neighbours as the shell is much larger, and already more slowly rotating at the beginning of the simulation (Figure <ref>). Due to the slower rotation and density of the shell, the magnetic fields in the carbon shell saturate at a lower field strength. But unlike the two inner shells, the magnetic stresses here always remain below the radial kinetic stresses (Figure <ref>), so convection continues unimpeded by the magnetic fields, and coupled with differential rotation, sustains a relatively constant magnetic field strength. Unfortunately, due to the very long convective turnover times, we are only able to resolve about one convective turnover in this shell. Pushing this simulation further becomes untenable as the convection in the carbon shell has begun to interact strongly with our outer domain boundary.
Aside from the equilibrium field strength, it is worth investigating the field geometry that has naturally developed in the saturation state.
To this end, we show radial profiles of the
RMS averaged field strength
and of the dipole field strength in Figure <ref>.
The dipole field is calculated by extracting just the ℓ=1 component of the spherical harmonic decomposition:
M̂_ℓ = √(∑_m=-ℓ^ℓ|∫ Y_ℓ m^*(θ,φ) B dΩ|^2).
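A sketch of this projection for a single field component on a regular (θ, φ) grid is shown below (illustrative only; it is applied here to B_r, θ and φ are assumed to be 1D arrays of uniformly spaced cell-centre coordinates, and the quadrature is a simple Riemann sum):

import numpy as np
from scipy.special import sph_harm

def multipole_strength(Br, theta, phi, ell=1):
    # M̂_ℓ = sqrt( Σ_m | ∫ Y*_{ℓm} B_r dΩ |² ), with θ the colatitude and φ the longitude
    TH, PH = np.meshgrid(theta, phi, indexing="ij")
    dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    total = 0.0
    for m in range(-ell, ell + 1):
        Y = sph_harm(m, ell, PH, TH)   # scipy convention: sph_harm(m, l, azimuth, polar)
        total += np.abs(np.sum(np.conj(Y) * Br * dOmega)) ** 2
    return np.sqrt(total)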
Close to the end of the simulation, the RMS field strength appears almost flat throughout the simulated domain, varying only between ≈5×10^9 G and 10^10 G. The dipole component of the magnetic field reaches about one third of the total field strength in the inner oxygen and neon burning shells, which are no longer convective at this point. Further out, however, the dipole is weaker by comparison to the RMS-averaged field. In the slowly rotating and still convective carbon shell, the field may be concentrated in smaller-scale structures similar to those in the non-rotating convective shell presented in <cit.>. However, as the carbon shell has only completed 1–2 convective turnovers, it is difficult to say if this structure will be maintained at later times.
§.§ Impact of magnetic fields on convection and energy generation
As we already briefly mentioned above, the amplification of the magnetic fields in our simulation leads to a very rapid suppression of convection, as well as fast transport of angular momentum out of the affected shells. Here, we attempt to understand the consequences of these dynamical changes by comparing the MHD simulation to a purely hydrodynamical simulation of this progenitor.
As the magnetic field grows, it eventually becomes strong enough to affect the bulk flow in the convection zones. To illustrate this, we compare the spherically-averaged diagonal components of the kinetic (Reynolds) and magnetic (Maxwell) stress tensors R_ij and M_ij. R_ij and M_ij are computed as
R_ij = ⟨ρ v_i v_j⟩,
M_ij = 1/8π⟨ B_i B_j⟩,
where angled brackets denote volume-weighted averages. Note that we do not subtract the mean rotational flow for R_ϕϕ here.
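For concreteness, such shell-averaged diagonal stresses can be extracted from a snapshot as in the following sketch (not the analysis pipeline itself; v and B are assumed to be dictionaries of component arrays on one spherical shell, and weights the corresponding cell volumes):

import numpy as np

def shell_averaged_stresses(rho, v, B, weights):
    # R_ii = <ρ v_i v_i> and M_ii = <B_i B_i>/(8π) as volume-weighted shell averages
    w = weights / np.sum(weights)
    R = {c: np.sum(w * rho * v[c] ** 2) for c in ("r", "theta", "phi")}
    M = {c: np.sum(w * B[c] ** 2) / (8.0 * np.pi) for c in ("r", "theta", "phi")}
    return R, M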
In Figure <ref>
we present the stresses in the MHD model at ≈ 180 s, where the Maxwell stresses begin to be comparable to the radial Reynolds stress, and near the final time-step of the simulation at ≈ 480 s.
In Figure <ref>(a), the radial and meridional magnetic stresses are comparable to the radial kinetic stresses in the innermost regions of the star. This corresponds to pseudo-equipartition of these stress components throughout the oxygen and neon burning shells out to a mass coordinate of ≈ 2.8. The magnetic stresses then generate backreaction against the convective flows in these shells. As shown in Figure <ref>(b) at a later time in the simulation, the backreaction greatly suppresses the convective velocities in these shells, lowering the radial kinetic stresses by several orders of magnitude. We plot the RMS angle-averaged radial kinetic energy near the end of the simulation (≈480 s) in Figure <ref> for both the MHD model (top row) and for the purely hydrodynamic case (bottom row). At this time, the suppression of convective motions in the oxygen and neon shells by strong magnetic fields cause the radial kinetic energy in the inner 2.8 to be about three orders of magnitude lower than what is seen in the hydrodynamic model (≈ 10^16-10^17g cm^-1 s^-2 compared to ≈ 10^19-10^20g cm^-1 s^-2). As mentioned above, the carbon shell has only had time for about one convective turnover, i.e., convection is yet to reach a fully developed state. At the end of our simulation, the carbon shell is still convective, and the kinetic stresses in this shell remain higher than the magnetic stresses.
At early times, we see that the angular Reynolds stresses are the dominant components due to the rapid rotation. This, along with the sharp gradients that develop in these stresses, means that shear instabilities mix material more efficiently than in the underlying 1D stellar evolution models, where there is little mixing beyond the convective zones on dynamical time scales.
Shear mixing outside the convective regions initially plays a significant role in the MHD model as well. The R_rr stress component, which is indicative of radial motions that contribute to turbulent mixing, stays high outside the convective regions in the MHD model initially
(Figure <ref>(a)).
However, in the MHD model, the transport of angular momentum flattens the rotation profile
(as discussed in detail in Section <ref>)
, which we can see from the change in R_θθ and R_ϕϕ, significantly reducing the shear mixing at late times. In our purely hydrodynamic model, however, the Reynolds stresses remain roughly the same throughout our simulation, leading to continuous enhanced mixing compared to the MHD counterpart. We find that this enhanced shear mixing compared to the initial expectation from the 1D stellar evolution model means that the burning occurs at different regions, outside the initial location of the convection zones.
The consequence of this can be seen by analysing how the mass fractions of several key elements evolve. In Figure <ref>, we compare the mass fractions of silicon, oxygen and neon in the MHD simulation and the purely hydrodynamic simulation. We also plot the radial kinetic energy at the end of the simulations, to further stress the differences in turbulent mixing when magnetic fields are introduced. The plots are limited to the inner 3 of the enclosed mass to focus on the oxygen and neon burning shells, which are most strongly affected by magnetic fields.
The radial profiles of all three elemental mass fractions in the hydrodynamic and MHD model evolve very similarly at early times (up to ≈120 s in Figure <ref>), when the magnetic fields are not strong enough to significantly affect mixing. At later times, however, the differences in mixing become quite apparent. The plots of the silicon mass fraction show that large fractions of silicon are mixed outwards to an enclosed mass of ≈2.65 in the hydrodynamic model, while there is little mixing beyond ≈2.5 in the MHD simulation.
The inhibition of mixing also means that material that gets burnt at the base of a shell is less efficiently replenished by fresh material or not replenished at all.
This effect is seen in the oxygen shell as it burns material in a relatively narrow region around 1.85. In the MHD simulation, a sharp drop in the oxygen and neon mass fractions develops in this region, which gets steeper at later times due to the rapid burning (mostly of oxygen), which is no longer replenished by convective and shear mixing. It is clear that this is caused by the very strong magnetic fields and a significant reduction in turbulent mixing by comparing the mass fractions to the purely hydrodynamic simulation. Without magnetic fields, there is no sharp drop in mass fraction at the bottom of the burning shell. The profiles remain smoother within the shell and are even smoothed beyond the boundaries by shear mixing. The oxygen and neon mass fractions in the oxygen burning region even increase
over time, as the rapid rotation and convection act to continually entrain fresh material into the oxygen and neon shells.
We also find reduced mixing in the neon burning shell between ≈2.20 and 2.40. Here, instead of a sharp drop, we see the steep gradient of neon, where neon is consumed, move outwards after the initial transient where material is mixed inwards.
This has the effect of shifting the location of the base of the burning shell. We show this phenomenon more clearly in Figure <ref>, which presents angle-averaged radial profiles of the energy generation rate at 110 s, 220 s and 450 s (dotted, dashed and solid lines, respectively). We show their evolution in both mass and radial coordinates in Figures <ref>(a) and (b) respectively, from the inner boundary of our simulation 1.75 to an enclosed mass of 3.0, and radius of 3000 km to 12000 km for both the hydrodynamic model and the MHD model. The energy generation profiles for both models show three clear peaks, which correspond to the three burning shells (oxygen, neon and carbon). We see that both simulations quickly deviate from each other, with the energy generation rate in the oxygen and neon shells in the hydrodynamic model receding backwards, and getting stronger at later times, while the opposite is seen in the MHD simulation. We note that the relative change in the position of these energy generation peaks are largely similar in both mass and radial coordinates.
Figure <ref> clearly shows that the second energy generation peak in the MHD model, which corresponds to neon shell burning, moves from ≈ 2.35 (≈ 5400 km) at early times to ≈ 2.50 (≈ 5800 km) later on, compatible with the change in the neon mass fraction profiles in Figure <ref>. Due to the lack of turbulent mixing, the peak of neon burning moves radially outward as neon gets burnt at its initial position. However, since the neon burning rate is extremely temperature-sensitive (∝ T^50, <cit.>), the energy generation rate drops significantly when burning is moved to a slightly cooler region.
We see a similar effect for the oxygen shell. At early times, before the magnetic field heavily suppresses turbulent mixing, both the MHD and hydrodynamic model mix material rapidly. The convective boundary mixing (CBM), particularly at the lower boundary of the oxygen shell, entrains material, increasing the size of the oxygen shell, causing it to move inwards (both in mass and in radius). We see the peak energy generation move from ≈ 1.95 (≈ 3900 km) to ≈ 1.85 (≈ 3500 km).
After turbulent convection is suppressed in the MHD simulation, however, the nuclear energy generation peak in the oxygen shell behaves similarly to the neon burning shell, i.e., it moves outward in mass and radius to where oxygen is burned at a lower temperature. The hydrodynamic model, however, continues to entrain material from beneath the oxygen shell, moving the peak of its energy generation deeper into the core. Oxygen burning is also a very temperature-sensitive reaction (∝ T^33, <cit.>), so these shifts in the burning shells lead to noticeable changes in the total energy generation rate over time.
The consequences of the suppression of mixing in the MHD model are seen more clearly in Figure <ref>. Here, we plot the total (volume-integrated) energy generation in the oxygen, neon and carbon burning shells with time for both the hydrodynamic and MHD models. As we see in Figure <ref>, the lack of mixing in the MHD model leads to the location of the oxygen and neon shells moving radially outwards with time, to cooler regions of the star. This causes a gradual decrease in energy generation after the magnetic fields in these shells first achieve saturation at ≈200 s. The increased mixing in the hydrodynamic model, on the other hand, causes a subsequent increase in peak energy generation rate as the shells move deeper into the star.
Interestingly, after ≈ 220 s of similar energy generation in the carbon shell in both models, the energy generation rate starts to increase in the MHD model and decreases in the hydrodynamic case. This is likely due to the slight change in the position of the carbon burning shell for the hydrodynamic model while it remains mostly stationary in radius in the MHD model. From the profiles in Figure <ref>, this is likely caused by the radial expansion of the neon shell in the hydrodynamic model as its energy generation rate increases, pushing the carbon shell further outwards.
In the MHD model, by contrast, the energy generation rate in the neon shell
has dropped over time, and hence the carbon shell is not driven outward.
§.§ Evolution of rotation and angular momentum transport
In addition to its effect on turbulent mixing, the development of strong magnetic fields leads to rapid redistribution of angular momentum.
The evolution equation for the angular momentum can be obtained by taking the cross product of the position vector 𝐫 with the fluid momentum equation <cit.>.
When including magnetic stresses in the momentum equation and integrating over spherical shells, one obtains <cit.>,
∂⟨ρ v_ϕ r sinθ⟩/∂t + ∇_r·⟨ρ v_r v_ϕ r sinθ - (B_rB_ϕ/4π) r sinθ⟩ = 0,
where ∇_r is the radial component of the divergence operator and angled brackets denote averages over solid angle.
We then perform a Reynolds/Favre decomposition <cit.> around a base state with constant angular velocity Ω_z,
Ω_z = ⟨ρΩ_z r^2 sin^2 θ⟩/⟨ρ r^2 sin^2 θ⟩ = v_ϕ r sinθ/i_zz,
on spheres, as in <cit.>. Here i_zz = ⟨ρ r^2 sin^2 θ⟩/ρ. Note that we use hats and primes for volume-weighted Reynolds averages and their fluctuating components:
X(r) = ⟨ X ⟩ = 1/4π∫ X dω
X'(r,θ,ϕ) = X - X,
where dω = sinθ dθ dϕ is the solid angle element. We denote mass-weighted averages and fluctuations with tildes or angled brackets and double primes:
X(r) = ∫ρ X dω/∫ρ dω
X”(r,θ,ϕ) = X - X.
Applying the usual rules for Favre averages,
⟨ρ X⟩ = ρ̂X, ⟨ρXY⟩ = ρ̂XY and ⟨ρXY”⟩ = 0, we get
∂(ρΩ_zi_zz)/∂ t + ∇_r · ( ⟨ρv_r(Ω_z + Ω”_z) r^2 sin^2 θ⟩ + ⟨ρ v”_rΩ_z r^2 sin^2 θ⟩ + ⟨ρ v”_r Ω”_z r^2 sin^2 θ⟩ - ⟨(B_r B_ϕ/4π) r sinθ⟩ ) = 0.
We see from the decomposed angular momentum Equation (<ref>) that angular momentum is transported by four distinct flux terms. In the order they are listed above we have an advective term, a meridional circulation term and turbulent transport, as well as an additional magnetic stress term. From this equation, aside from the usual hydrodynamic terms, the radial flux of angular momentum also depends on the strength of the magnetic fields.
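As an illustration of how these fluxes can be evaluated at a given radius (a rough sketch rather than the paper's analysis code; the mean-flow and meridional circulation contributions are lumped together and subtracted from the total advective flux to estimate the fluctuating part):

import numpy as np

def radial_angmom_fluxes(rho, v_r, v_phi, B_r, B_phi, r, sin_th, dOmega):
    # Angle-averaged radial fluxes of specific angular momentum j = v_phi r sinθ
    def favre(x):                              # mass-weighted average on the sphere
        return np.sum(rho * x * dOmega) / np.sum(rho * dOmega)
    j = v_phi * r * sin_th
    F_total = np.sum(rho * v_r * j * dOmega) / (4.0 * np.pi)
    F_mean = np.sum(rho * dOmega) / (4.0 * np.pi) * favre(v_r) * favre(j)
    F_fluct = F_total - F_mean                 # turbulent + circulation part
    F_maxwell = -np.sum((B_r * B_phi / (4.0 * np.pi)) * r * sin_th * dOmega) / (4.0 * np.pi)
    return F_total, F_fluct, F_maxwell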
The evolution of the angle-averaged rotation rates in the MHD model and the purely hydrodynamic model is depicted in Figure <ref>. The top row shows the rotation rates, Ω, over time for both models, and the bottom row presents angle-averaged specific angular momenta. We note that, in comparison to its hydrodynamic counterpart, the MHD model exhibits, as expected, more efficient angular momentum transport. In the innermost region of our domain, the rotation rate is lowered by over an order of magnitude due to outward transport of angular momentum (from over 0.10 rad s^-1 initially to ≈ 0.01 rad s^-1). The rotation profile is flattened considerably.
The angular momentum is taken up by the carbon shell outside 2.8, whose rotation rate increases. We note that Figure <ref> only shows a limited inner portion of the total simulated carbon shell.
The hydrodynamic model displays much less smooth rotation profiles than the MHD model. It is noteworthy that at the various convective shell boundaries, the rotation profile shows significant dips. Figure <ref>(b) shows dips in rotation at ≈1.85, which is the base of the oxygen shell, between the oxygen and neon shells at a location that varies over time between ≈2.20 and ≈2.40, and finally between the neon and carbon shells at ≈2.80.
It is particularly interesting that some of these dips even reach negative values, i.e., there are shells with net retrograde rotation, which remain quite stable.
On closer inspection of the rotation profile of the MHD model, we see that these dips in the rotation profile also begin to form early on in this case (Figure <ref> (a) at 120s). However, the rotation profile is quickly smoothed once angular momentum transport due to magnetic fields become efficient.
Unlike in the MHD model, where angular momentum is simply transported outward, the redistribution of angular momentum in the hydrodynamic model appears more complicated (Figure <ref> (d)). Effectively, positive angular momentum is transported into convective shells from the shell boundaries. This leads to an interesting non-monotonic rotation profile, where the fastest
rotation rate is not reached at the inner boundary but instead at 2.4–2.5 at late times, as well as the aforementioned dips between the convective shells. Redistribution of angular momentum also increases the rate of rotation in the inner carbon shell (around 2.8) and
induces strong (radial) differential rotation there.
To better understand the emerging rotation pattern in the hydrodynamic model and the counterintuitive phenomenon of retrograde rotation, we consider zonal and temporal averages
of the meridional velocity and the rotation rate in Figure <ref>. In Figure <ref> (a), the meridional velocity plotted is an average over ϕ (zonal average) and time of |𝐯_r + 𝐯_θ|, while Figure <ref> (b) plots the same averages of the angular velocity, Ω = v_ϕ/(r sinθ), where r is the radius. Both figures plot the cube root of their original values to retain the direction of the flow, but reduce the dynamic range. The positive (red) meridional velocity represents clockwise motion, with the negative (blue) flows showing counter-clockwise flows (viewed from the North pole).
The left halves of Figure <ref> (a) and (b) show the zonal flow averages of the quantities, while the right halves show snapshots of the two quantities on a meridional slice. From Figure <ref>(b), it is clear that the retrograde rotation occurs mostly at the base of the carbon shell, and to a lesser extent, at the base of the oxygen shell. The snapshot on the right half of Figure <ref>(b) shows that regions of retrograde rotation form at the base of the neon shell as well, but these are more transient and do not show up in the zonal averages. We also observe that for the hydrodynamic model, although the retrograde rotation appears to form as a shell at the base of the carbon shell, in general, the rotation pattern that forms is not shellular. At the base of both the oxygen and carbon shell, we rather see indications of anti-solar differential rotation with faster rotation near the poles.
Such a rotation pattern has been observed before in simulations of surface convection zones.
<cit.> attribute the development of different rotation profiles, in part, to a link between the differential rotation and the meridional circulation in the convection zone. Anti-solar rotation profiles are attributed to inward angular momentum transport, which establishes a single-celled meridional circulation profile throughout the convection zone. This circulation transports angular momentum polewards, spinning up the poles relative to the equator.
Although our model exhibits antisolar-like rotation profiles at the base of the carbon (8400 km) and oxygen (≈3400 km) shells in Figure <ref>(b), the corresponding meridional circulation in Figure <ref>(a) does not exhibit the single-celled structure expected from <cit.>. We instead find that the meridional flows are not clearly structured, but appear more similar to a case with multi-celled circulation.
We see an analogous meridional circulation flow develop in the inner regions of the carbon shell of our model (above 8400km), with two large cells of material (Figure <ref>(a)). One noticeable difference shown in our model is that the circulation velocities do not drop off towards the poles, instead, they remain very strong.
This may be part of the reason why the model exhibits a stable retrograde shell of material at 8400 km that extends across almost all latitudes as shown in Figure <ref>(b),
rather than being confined to the equatorial region as in the surface convection models presented in <cit.>.
One major difference that our simulations have that likely alters the meridional flows, is the proximity of other convective shells. While in surface convection simulations, the shell is bounded only by a radiative core, in our model, we have three adjacent convective shells. Although these shells are initially separated by thin radiative zones in the 1D stellar structure input model, the rapid rotation and turbulent convection increases the amount of mixing that occurs at the convective shell boundaries in the 3D model, causing the convective shells to start interacting. This adds additional complications to the transport of angular momentum, and in turn, to the meridional circulation.
From extensive studies in solar physics, it is often expected that the stable differential rotation in our Sun is maintained by rotating turbulent convection <cit.>, due to the interplay of buoyancy and inertial forces. We suspect that, similarly, the complicated rotation profile that develops in our purely hydrodynamic model occurs due to an interaction between the Coriolis and buoyancy forces. Similar retrograde rotation patterns have been found in studies of rotating solar-like stars, and depend on the Rossby number of the flow <cit.>. To confirm that our model is in the relevant regime where Coriolis forces are strong enough to make such an effect plausible, we analyse how its Rossby number compares to that of the MHD model, which develops a flat rotation profile. We define the Rossby number of any given burning shell as
Ro = v_conv/2Ω_shellΔ r,
where v_conv is the convective velocity, Ω_shell is the rotation rate, and Δ r is the radial extent of each burning shell.
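In code form this is a trivial helper, shown only to fix conventions (the inputs are the shell-averaged values defined above):

def rossby_number(v_conv, omega_shell, delta_r):
    # Ro = v_conv / (2 Ω_shell Δr)
    return v_conv / (2.0 * omega_shell * delta_r)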
The carbon shells of both models clearly have not reached a convective steady state and hence not even a transient steady state in the Rossby number can be discerned. The MHD case initially transports angular momentum out of the star more efficiently. We see in Figure <ref>(c) that angular momentum from the rapidly rotating inner regions (inner 2.8), is transported out to the carbon shell (2.8 - 3.0). Focusing on the lines at 215 s in Figure <ref>(a), we see that at these times, rotation rate in the inner carbon shell increases in the MHD case compared to the purely hydrodynamic model. This leads to the deviation in Rossby number initially seen from ≈ 150 s, causing the Rossby number in the MHD model to be lowered compared to the hydrodynamic model. After ≈260 s, the Rossby number in the hydrodynamic model starts decreasing gradually. We associate this with the decrease in energy generation compared to the MHD model seen after ≈220 s in Figure <ref>, which would have the effect of decreasing the convective velocity in the carbon shell. The opposite trend is seen for the MHD Rossby number due to the corresponding increase in energy generation.
At first glance, we find a result that appears in opposition to what is found in solar differential rotation simulations
and the shell burning simulation of <cit.> with a very similar setup. As summarised in <cit.>, retrograde rotation is sometimes seen when buoyancy forces dominate the flow (i.e. Ro≥ 1), where solar-like stars develop retrograde rotation at the equators and faster rotation at the poles (anti-solar rotation), and faster rotation at the equator for Ro≤ 1. Figure <ref> shows that although we develop strong retrograde rotation in the hydrodynamic model, the Coriolis force dominates its flow in each shell (i.e. Ro≤ 1).
We note, however, a key difference between these models. The anti-solar rotation profiles in low-mass stars develop throughout the convective zone, whereas we find the retrograde motion to be largely concentrated at the base of the burning shells. The different phenomenology could be explained by the rather disparate conditions in surface convection zones in solar-like stars and convection during advanced burning stages. Solar-like stars have an inner radiative zone and a single convective shell above it, and radiation diffusion plays a role both for the internal structure of the convection zone and especially for the structure of the convective boundaries. Our progenitor has three interacting convective shells, radiative effects are unimportant (and not included in the model), and the structure of the boundary is determined by turbulent entrainment.
We suspect that the cause of the retrograde rotation in our simulation is similar to what is described in <cit.> from hydrodynamic simulations of ice giants, and for solar-like simulations in <cit.>, where convective rolls exhibit a preferred “tilt” in the positive ϕ direction.
The tilted flow structures create a correlation between flows moving inward (outward) and those moving in the negative (positive) ϕ direction.
Due to the strong turbulent mixing between shells, however, it is difficult to see in our models whether this tilted flow structure truly arises. For low Rossby convection in the solar case, this usually results in net angular momentum transport away from the rotation axis, which tends to speed up the equator.
Since the usual Rossby number characterises the flow in the convection zone globally, it alone would not give us insight into dynamics at the convective boundaries or between convective shells. To understand the interplay between the Coriolis and buoyancy forces at the convective boundaries, we instead plot the angle-averaged magnitude of the buoyancy and Coriolis forces (f_B and f_C) per unit mass of the hydrodynamic model in Figure <ref>.
f_B = gδρ/ρ̂
f_C = 2|Ω×𝐯|
Here g is the gravitational acceleration, ρ̂ the RMS averaged density, δρ is the RMS averaged fluctuation from the average density, Ω is the angular velocity vector of the rotation and 𝐯 is the velocity. For simplicity, we assume rotation is confined to v_ϕ as in our initial conditions, i.e., Ω points in the z-direction, giving us only a component for f_C pointing away from the rotation axis. We plot the RMS average of the absolute value of f_C in Figure <ref>. This is to allow for greater clarity in comparing the two forces, since the retrograde rotation would lead to regions of negative f_C. We plot these forces at 57 s, 115 s and 190 s (dotted, dashed and solid lines, respectively). These times were chosen to represent approximate times before the development of retrograde rotation in the hydrodynamic model, when retrograde motion initially begins at the base of the carbon shell, and when it begins at the base of the oxygen shell.
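A sketch of this diagnostic is given below (illustrative only; Ω is assumed to point along the z-axis as stated above, and the inputs are the RMS-averaged quantities on a radial shell):

import numpy as np

def buoyancy_and_coriolis(g, delta_rho, rho_hat, omega_z, v_xyz):
    # f_B = g δρ/ρ̂ and f_C = 2 |Ω × v| per unit mass, with Ω = Ω_z e_z
    f_B = g * delta_rho / rho_hat
    Omega = np.array([0.0, 0.0, omega_z])
    f_C = 2.0 * np.linalg.norm(np.cross(Omega, v_xyz), axis=-1)
    return f_B, f_C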
For our simulation, we find the convective boundary overshooting at lower convective boundaries leads to buoyancy forces dominating the Coriolis force in between convective shells (≈ 1.85 and 2.8, see Figure <ref>), which is likely why retrograde motion in our models are confined to the shell boundaries. This is further supported by the fact that we see the “stable” retrograde shell start to form at the base of the oxygen shell (≈ 1.85) when the buoyancy force surpasses the Coriolis force. The evolution of the Coriolis force in Figure <ref> shows that this effect develops due to the transport of angular momentum away from convective boundaries into the convective shells. The magnetic model initially shows a similar force ratio, however, the ratio of buoyancy to Coriolis forces is soon reduced by the rapid rise of magnetic field strength and the subsequent suppression of convection, and hence buoyancy force.
§ CONCLUSIONS
We investigated the evolution of magnetic fields
during advanced convective burning stages in massive stars and their backreaction on the flow in a simulation of the oxygen, neon and carbon shells in a rapidly rotating 16 M_⊙ progenitor shortly before core collapse. For comparison, we also conducted a purely hydrodynamic simulation of the same progenitor.
The simulations were run for about 8 minutes of physical time, corresponding to about 32 convective turnovers in the oxygen shell.
Rapid differential rotation and convection initially amplify the magnetic fields exponentially via the αΩ-dynamo. However, strong magnetic stresses eventually dominate the radial kinetic stresses. The backreaction of the fields on the flow stops the exponential growth
and suppresses convection in the oxygen and neon burning shells. These shells are effectively turned into convectively stable shells by the strong magnetic stresses and continue to burn fuel at the base of the shells without mixing of fuel and ashes. The magnetic field reaches saturation in the oxygen and neon shells after 180 s (corresponding to ≈12 and 7 convective turnovers respectively). It peaks at 10^11 G in the oxygen shell but decays to 3×10^10 G by the end of the simulation. In the carbon shell, the field appears to saturate at 10^10 G, but this shell has only completed about one turnover during our entire simulation, so a steady state likely has not been reached.
The strong magnetic fields that develop also transport angular momentum much more efficiently than in the purely hydrodynamic model.
Already within the short duration of this simulation, the structure transitions from strong differential rotation into a nearly uniform rotation profile, with significant spin-down of the inner shells.
In the purely hydrodynamic model, shell convection is sustained and strong differential rotation is maintained. However, it develops a much more complicated rotation profile than in the underlying
1.5D stellar evolution models.
The emerging rotation profile shows sharp drops at convective boundaries and, during some phases, even shells with retrograde rotation.
We hypothesise that this is due to an instability that occurs when rapid changes of the rotational and convective velocities occur at the convective boundaries, coupled with strong meridional flows towards the poles. The regions with retrograde
rotation are conspicuously associated with spikes in the local Rossby number, i.e., the ratio of the RMS-averaged buoyancy and Coriolis force.
To better understand this phenomenon, we require further studies with different progenitors, varying Rossby number flows, and different grid geometries.
The transition of the oxygen and neon shells into slowly and rigidly rotating, non-convective regions significantly reduces turbulent mixing. While the hydrodynamic model rapidly mixes new material into regions that are burning, the MHD model exhibits sharp drops in oxygen and neon mass fractions in the narrow burning regions. One consequence of this difference is that the hydrodynamic model entrains material deeper into the star, moving the peak of the energy generation of the oxygen shell radially inwards, while the location of peak energy generation of the same shell in the MHD simulation moves outwards.
Due to the strong temperature sensitivity of oxygen burning (∝ T^33), this small change in shell position leads to a noticeable change in energy generation between the two models, resulting in increasing nuclear energy generation in the hydrodynamic model and inhibition of nuclear energy generation in the MHD model at late times.
Our results have important implications for core-collapse supernova modelling. For this particular rotating progenitor model, we predict pre-collapse fields of ≈2× 10^10 G in the oxygen shell, similar to what we find for the non-rotating case in <cit.>. Our rotating model exhibits a more gradual drop in field strength with radius. With relatively strong seed fields, we expect less of a delay until magnetic fields become relevant for shock revival, i.e.,
by providing an additional “boost” to neutrino heating, as seen in <cit.>. Due to the suppressed convective flows, the perturbation-aided mechanism <cit.> may be less effective, however, asymmetries seeded by the strong magnetic fields may be enough to deliver a similar effect <cit.>. Perhaps most importantly,
the very rapid redistribution of angular momentum away from the inner shells casts doubt on the viability of a fast magnetorotational explosion powered by a “millisecond magnetar”.
For such an explosion to be launched, a mechanism would be required to spin up the proto-neutron star during or after core collapse.
However, there is still work to be done until the findings from simulations of magnetoconvection in rotating stars can be incorporated into models of magnetically- or magnetorotationally-driven explosions. For example, future simulations will need to include the core and self-consistently follow its contraction and incipient collapse to provide initial conditions for supernova simulations.
However, multi-D simulations of rotating massive stars face a much more fundamental challenge.
The MHD model, and to some extent the hydrodynamic model, rapidly diverges from the initial structure of the stellar evolution model. Current stellar evolution models are clearly far from the actual quasi-steady state conditions that would emerge under the influence of rotation, convection and magnetic fields. Ideally, 3D simulations should cover significantly longer time scales to follow the relaxation of the structure into equilibrium and then study the subsequent evolution on secular time scales, but this is clearly beyond current computational resources. It is therefore very important to make 1D stellar evolution models and MHD models more consistent with each other to minimise deleterious effects from big initial transients that limit the fidelity of 3D simulations. This will require improved formalisms for stellar evolution with rotation and magnetic fields <cit.>.
Developing the appropriate methodology for solving the problem of stellar evolution with rotation and magnetic fields by a combination of 1D and 3D modelling is bound to remain an extraordinary and exciting challenge.
§ ACKNOWLEDGEMENTS
We acknowledge fruitful discussions with R. Hirschi and A. Heger. VV acknowledges support from the STFC (Science and Technology Facilities Council; ST/V000543/1).
BM was supported by ARC Future Fellowship FT160100035. We acknowledge computer time allocations from Astronomy Australia Limited's ASTAC scheme, the National Computational Merit Allocation Scheme (NCMAS), and
from an Australasian Leadership Computing Grant.
Some of this work was performed on the Gadi supercomputer with the assistance of resources and services from the National Computational Infrastructure (NCI), which is supported by the Australian Government, and through support by an Australasian Leadership Computing Grant. Some of this work was performed on the OzSTAR national facility at Swinburne University of Technology. OzSTAR is funded by Swinburne University of Technology and the National Collaborative Research Infrastructure Strategy (NCRIS).
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request
to the authors, subject to considerations of intellectual property law.
|
http://arxiv.org/abs/2307.04030v1 | 20230708184619 | Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain | [
"Mohsen Sombolestan",
"Quan Nguyen"
] | cs.RO | [
"cs.RO"
] |
Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain
Mohsen Sombolestan and Quan Nguyen
M. Sombolestan and Q. Nguyen are with the Department of Aerospace and Mechanical Engineering, University of Southern California, Los Angeles, CA 90089, email: [email protected], [email protected].
=========================================================================================================================================================================================================================================
Agile-legged robots have proven to be highly effective in navigating and performing tasks in complex and challenging environments, including disaster zones and industrial settings.
However, these applications normally require the capability of carrying heavy loads while maintaining dynamic motion.
Therefore, this paper presents a novel methodology for incorporating adaptive control into a force-based control system.
Recent advancements in the control of quadruped robots show that force control can effectively realize dynamic locomotion over rough terrain.
By integrating adaptive control into the force-based controller, our proposed approach can maintain the advantages of the baseline framework while adapting to significant model uncertainties and unknown terrain impact models.
Experimental validation was successfully conducted on the Unitree A1 robot.
With our approach, the robot can carry heavy loads (up to 50% of its weight) while performing dynamic gaits such as fast trotting and bounding across uneven terrains.
Adaptive control, Model predictive control (MPC), Quadruped robots, Unknown impact model.
§ INTRODUCTION
Legged robots have numerous potential uses, from search and rescue operations to autonomous construction. To perform these tasks effectively, it is important for the robot to have an accurate understanding of the environment it will be operating in. However, due to the complexity of the robot and the environment, the model of the robot itself might contain a significant level of uncertainty and affect the robot's stability, particularly when performing agile movements. To overcome these challenges, there is a need for the development of a control framework that can effectively compensate for these uncertainties in real-time.
The utilization of convex model predictive control (MPC) with the single rigid body (SRB) model in legged robots <cit.> has greatly enhanced the real-time implementation of diverse walking gaits. Unlike the balance controller based on quadratic programming <cit.>, MPC offers the capability to perform agile motions like jumping <cit.> and high-speed bounding <cit.> for quadruped robots. Additionally, MPC exhibits robustness in traversing rough and uneven terrains. However, it is important to note that MPC assumes perfect knowledge of the dynamic model.
To enhance trajectory tracking in the presence of unknown and changing disturbances, researchers have explored the combination of MPC with adaptive control techniques <cit.>. Additionally, parameter estimation techniques have been employed to further improve the robustness of the control system <cit.>. These approaches aim to adapt the controller and estimate system parameters to effectively compensate for uncertainties and disturbances, leading to improved trajectory tracking performance. It is worth noting that all of these studies were conducted using a position-based controller model.
In this work, we tackle the legged robot locomotion issue in real-world scenarios with a significant level of uncertainty. The uncertainty can come from both the robot model and the environment. Since our proposed method is based on a force controller, it retains the advantage of robustness to uneven terrain. Thanks to MPC as our baseline controller, our framework can be extended to different locomotion gaits and trajectories without adjusting the controller parameters. Additionally, by incorporating the adaptive controller, our control system can handle significant model uncertainty. As a result, our approach enables legged robots to move across different terrains with unknown impact models.
§.§ Related Works
§.§.§ Offline Learning
The offline learner can either leverage a model-based control approach or learn the control system from scratch. Using a model-based method, researchers mainly target learning the dynamics to improve controller performance <cit.>. One example of this approach is the integration of deep learning with MPC, in which the proposed model learns the cost or dynamics terms of an MPC <cit.>. This hybrid method shows considerable improvement for an aerial robot <cit.> when learning the dynamic model from experimental data.
The major limitation of this method is that it is restricted to the dynamic model learned during the training phase. However, the dynamic model is prone to frequent changes in real-world scenarios due to environmental uncertainties and external disturbances.
To overcome the limitations of previous approaches, there has been growing interest in utilizing reinforcement learning (RL) to train models from scratch. The key advantage of RL models is their ability to adapt swiftly to changes in real-world environments due to being trained in diverse environments with varying properties. In the case of quadruped robots, an RL model can directly predict appropriate joint torques for traversing different types of terrain, as demonstrated by Chen et al. <cit.>. Additionally, Bellegarda et al. <cit.> enable quadrupeds to run quickly while carrying unknown loads by training the model to learn foot positions. However, these methods heavily rely on domain randomization during training to generalize well to challenging environments. Yang et al. <cit.> also propose an end-to-end RL method that utilizes proprioceptive states and visual feedback to predict environmental changes.
§.§.§ Online Learning
To address inaccuracies in model-based controllers, researchers have explored an alternative approach using online learning, particularly supervised learning methods <cit.>. In this approach, the focus is on learning disturbances online <cit.>, and in some cases, researchers also aim to learn the dynamics of the system itself <cit.>. Furthermore, this approach has been successfully applied for online calibration of kinematic parameters in legged robots <cit.>.
In addition to that, in a recent study, a Lipschitz network method has been developed to bridge the model-reality gap in real-time <cit.>. The online learning method shares a close relationship with adaptive control, and numerous studies have explored the combination of these two approaches <cit.>. This combination aims to leverage the advantages of both methods, allowing for dynamic adaptation and continuous learning from real-time data to improve control system performance.
Perhaps closest to our work in terms of online adaptation is the learning method presented in <cit.> for legged robots. The authors correct the model behind the controller using a supervised learner while the robot is walking in an unknown environment. The data is collected during the robot's operation to learn a linear residual model that can compensate for system errors. However, in the transition from simulation to experiment, the acceleration estimates become noisy, degrading the data required for training the model. As a result, the method is only applied to estimate the linear terms, since the angular-term data proved too noisy to be useful in the model.
§.§.§ Adaptive Control
The goal of adaptive control is to tune the controller's variables online during deployment <cit.>. Adaptive control has been applied for manipulation tasks to robotic arms <cit.>, mobile robots <cit.>, and quadruped robots <cit.>.
The conventional Model Reference Adaptive Control (MRAC) architecture was originally designed for controlling linear systems in the presence of parametric uncertainties <cit.>. However, it lacks the ability to characterize the input/output performance of the system during the transient phase. To address this limitation and improve the transient performance of adaptive controllers, the L_1 adaptive control offers several advantages over traditional MRAC, such as decoupling adaptation and robustness within a control framework <cit.>.
In addition, by incorporating a low-pass filter in adaptation law, the L_1 adaptive control can provide stability <cit.> and transient performance <cit.>. Therefore, the L_1 adaptive control technique guarantees robustness with fast adaptation <cit.>, an essential criterion in dynamic robotics applications. Recently, by integrating L_1 adaptive controller and Bayesian learner, researchers leverage the fast adaption performance of the L_1 adaptive controllers and introduce a safe simultaneous control and learning framework <cit.>.
For legged robots, the adaptive controller has also been employed to find the value and location of the center of mass <cit.>. Our work on L_1 adaptive control for bipedal robots <cit.> considers a Control Lyapunov Function (CLF)-based controller as a closed-loop nonlinear reference model for the L_1 adaptive controller. It was validated for the robot's walking <cit.> and running <cit.>. However, the control framework in this prior work is based on Hybrid Zero Dynamics <cit.>, which uses joint position control to track the desired trajectory from optimization for each robot joint.
Moreover, in <cit.>, an adaptive control based on a CLF is designed for quadrupeds to interact with unknown objects. Then, they combined the criteria derived by adaptive control as a constraint in an MPC framework. However, adding more inequality constraints to MPC makes the controller more complex in terms of computation. In our approach, we compute a residual vector for compensating dynamic uncertainty, which makes the controller more time-efficient. Additionally, by employing our method, the robot is able to adapt to terrains with unknown impact models.
§.§ Contributions
A preliminary version of this research previously appeared in <cit.>; however, this paper presents several novel contributions to the prior work. This work incorporates the L_1 adaptive controller into the model predictive control (MPC).
The proposed control system leverages MPC due to its robustness to uneven terrain, its handling of contact constraints, and its generalization to different locomotion gaits. Moreover, by integrating adaptive control into MPC, the proposed model can compensate for significant model uncertainty. In the previous work <cit.>, the robot could only perform quasi-static walking; in this work, the robot can perform dynamic motions thanks to MPC. Finally, the authors present new hardware experiments to demonstrate the effectiveness of the proposed adaptive MPC (as illustrated in fig: first fig).
The main contributions of the paper are as follows:
* We introduce a novel control system that combines the L_1 adaptive control into the force-based control system, designed to address the challenges posed by model uncertainty in real-world applications.
* Thanks to MPC, our approach offers greater versatility as it can be adapted to a wide range of locomotion gaits and trajectories. Moreover, our method can handle terrain uncertainty, allowing the robot to navigate rough terrains, such as grass and gravel, as well as high-sloped terrain.
* By integrating the adaptive control into MPC, it is possible for quadruped robots to carry an unknown heavy load (up to 50% of the robot's weight) across challenging terrains, with the capability of executing dynamic gaits such as fast trotting and bounding. This is a significant improvement compared to our previous work, which only allowed the robot to perform quasi-static walking.
* The combination of using MPC for both the reference model and the real model in the adaptive controller makes the control system computationally expensive, leading to potential delays in computation. To ensure real-time performance, we have developed an update frequency scheme for the control system, which allows for the optimized allocation of processing resources to each control component.
* Our proposed approach enables the control system to adapt to terrains with unknown impact models, such as soft terrain. Traversing soft terrain is a challenging task for quadruped robots. The A1 robot can walk on double-foam terrain in different directions using our method. In comparison, the robot cannot maintain its balance using the baseline controller, resulting in a collapse.
The remainder of the paper is organized as follows.
sec: background presents the baseline control architecture for quadruped robots and provides some knowledge on force-based controllers.
In sec: control overview, we will briefly present an overview of our control approach. Then, our proposed adaptive force-based controller using balance controller and MPC will be elaborated in sec: adaptive control and sec: adaptive MPC, respectively.
Furthermore, the numerical and experimental validation are shown in sec: Results. Finally, sec: conclusion provides concluding remarks.
§ PRELIMINARIES
In this section, we present the background on the control architecture of quadruped robots and describe each control component. According to <cit.>, the robot's control system consists of several modules, including a high-level controller, low-level controller, state estimation, and gait scheduler as presented in fig: ControlOverview.
A reference trajectory can be generated for high-level control from user input and state estimation. The gait scheduler defines the gait timing and sequence to switch between each leg's swing and stance phases. The high-level part controls the position of the swing legs and optimal ground reaction force for stance legs based on the user commands and gait timing. As the baseline for the stance leg controller, we will use two common approaches: 1) quadratic program (QP) based balancing controller <cit.> and 2) model predictive control (MPC) <cit.>.
The low-level leg control converts the command generated by high-level control into joint torques for each motor. These modules of the control architecture will be described briefly in the following subsections. More details can be found in <cit.>.
§.§ Gait Scheduler
The A1's gait is defined by a finite state machine using a leg-independent phase variable to schedule contact and swing phases for each leg <cit.>. The gait scheduler utilizes independent boolean variables to define the scheduled contact states s_ϕ∈{1 = contact, 0 = swing} and to switch each leg between the swing and stance phases. Based on the contact schedule, the controller executes either position control during swing or force control during stance for each leg.
In our previous work <cit.>, we focus on the application of load-carrying tasks, where the load is unknown to the robot or the control system. Having more legs on the ground during walking could also mean that the robot could produce a larger total ground reaction force to support the heavy load. Therefore, we used a quasi-static walking gait to maximize the number of legs on the ground during walking (i.e., 3 stance legs and 1 swing leg throughout the gait).
However, in this paper, our framework is not limited by any specific gait. Similar to the baseline MPC control approach <cit.>, the approach can work for different gaits by only changing the gait definition in the gait scheduler.
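To make the scheduling logic concrete, a minimal phase-based scheduler can be sketched as follows; the gait period, phase offsets, and class interface are illustrative assumptions, not the values or API used on the A1.

```python
import numpy as np

# Minimal sketch of a phase-based gait scheduler (illustrative, not the A1
# firmware): each leg carries a phase offset and a stance fraction, and the
# boolean contact state s_phi follows directly from the gait phase.
class GaitScheduler:
    def __init__(self, period, offsets, stance_fraction):
        self.period = period                    # gait period [s]
        self.offsets = np.asarray(offsets)      # per-leg phase offset in [0, 1)
        self.stance_fraction = stance_fraction  # fraction of the period in stance

    def contact_state(self, t):
        """Return s_phi for the 4 legs: 1 = contact (stance), 0 = swing."""
        phase = (t / self.period + self.offsets) % 1.0
        return (phase < self.stance_fraction).astype(int)

# Example: a trotting gait pairs the diagonal legs.
trot = GaitScheduler(period=0.4, offsets=[0.0, 0.5, 0.5, 0.0], stance_fraction=0.5)
print(trot.contact_state(t=0.1))   # e.g. [1 0 0 1]
```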
§.§ Desired Trajectory
The desired trajectory is generated based on the robot's velocity command. The robot operator commands the xy-velocity and yaw rate; the xy-position and yaw are then obtained by integrating the corresponding velocities. The z-position is held at a constant value of 0.3 m, and the remaining states (roll, roll rate, pitch, pitch rate, and z-velocity) are set to zero.
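A minimal sketch of this trajectory generator is given below; whether the velocity command is interpreted in the body or world frame is an implementation detail, and here we assume a body-frame command rotated by the current yaw.

```python
import numpy as np

# Sketch of the reference-trajectory generator: integrate the commanded
# xy-velocity and yaw rate, hold z at 0.3 m, and zero out the remaining states.
def update_desired_state(x_d, cmd_vx, cmd_vy, cmd_yaw_rate, dt):
    """x_d = [x, y, z, roll, pitch, yaw, vx, vy, vz, wx, wy, wz] (world frame)."""
    yaw = x_d[5]
    vx_w = np.cos(yaw) * cmd_vx - np.sin(yaw) * cmd_vy   # body -> world (assumed)
    vy_w = np.sin(yaw) * cmd_vx + np.cos(yaw) * cmd_vy
    x_d[0] += vx_w * dt                 # integrate xy-position
    x_d[1] += vy_w * dt
    x_d[2] = 0.3                        # constant body height [m]
    x_d[3] = x_d[4] = 0.0               # zero roll and pitch
    x_d[5] += cmd_yaw_rate * dt         # integrate yaw
    x_d[6], x_d[7], x_d[8] = vx_w, vy_w, 0.0
    x_d[9], x_d[10], x_d[11] = 0.0, 0.0, cmd_yaw_rate
    return x_d
```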
§.§ Single Rigid Body (SRB) Model of Robot
Due to the complexity of the legged robot, a simplified rigid-body model is used to represent the system's dynamics. This model enables us to calculate the ground reaction forces (GRFs) in real-time. A few assumptions have been made to obtain the simplified robot dynamics <cit.>:
Assumption 1: The robot has low inertia legs, so their effect is negligible.
Assumption 2: For small values of roll (ϕ) and pitch (θ), the rotation matrix R which transforms from the body to world coordinates, can be approximated as the rotation matrix corresponding to the yaw angle (ψ):
R≅R_z(ψ) = [[ cos(ψ) -sin(ψ) 0; sin(ψ) cos(ψ) 0; 0 0 1 ]]
Therefore, by defining the robot's orientation as a vector of Z-Y-X Euler angles Θ = [ϕ, θ, ψ]^T, the rate of change of the robot's orientation can be approximated as <cit.>:
Θ̇≅R_z(ψ) ω_b
where ω_b is the robot's angular velocity in the world frame.
Assumption 3: For small angular velocity, the following approximation can be made:
d/dt(I_Gω_b) = I_Gω̇_b + ω_b × (I_Gω_b) ≈I_Gω̇_b
where I_G∈ℝ^3 × 3 is the moment of inertia in the world frame.
Based on the above assumptions, the state representation of the system is as follows <cit.>:
[[ ṗ_c; Θ̇; p̈_c; ω̇_b ]] = [[ 0_3 0_3 1_3 0_3; 0_3 0_3 0_3 R_z(ψ); 0_3 0_3 0_3 0_3; 0_3 0_3 0_3 0_3 ]]_D∈ℝ^12 × 12[[ p_c; Θ; ṗ_c; ω_b ]]_X∈ℝ^12 +
[[ 0_6 × 12; M^-1A ]]_H∈ℝ^12 × 12F + [[ 0_6 × 1; G ]]
with
M = [[ m 1_3 0_3; 0_3 I_G ]] ∈ℝ^6 × 6
A = [[ 1_3 … 1_3; [p_1 - p_c] × … [p_4 - p_c] × ]] ∈ℝ^6 × 12
G = [[ g; 0_3 × 1 ]] ∈ℝ^6
where m is the robot's mass, g∈ℝ^3 is the gravity vector, p_c∈ℝ^3 is the position of the center of mass (COM), p_i∈ℝ^3 (i ∈{1,2,3,4}) are the positions of the feet, p̈_c∈ℝ^3 is the body's linear acceleration, ω̇_b∈ℝ^3 is the angular acceleration, and F = [F_1^T, F_2^T, F_3^T, F_4^T]^T ∈ℝ^12 are the ground reaction forces acting on each of the robot's four feet. The term [p_i - p_c] × is the skew-symmetric matrix representing the cross product (p_i - p_c) ×F_i.
Note that p_i and F_i are presented in the world frame. Therefore, the state representation of the system can be rewritten in the compact form:
Ẋ = DX + HF + [[ 0_6 × 1; G ]]
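For concreteness, the matrices of this compact form can be assembled as in the following sketch; the mass, inertia, foot positions, and the gravity sign convention are placeholders to be matched to the actual implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

# Sketch of the single-rigid-body matrices in the compact state-space form.
def srb_matrices(m, I_G, p_c, p_feet, yaw):
    """Return D (12x12), H (12x12), the gravity block G (6,), and M, A."""
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                   [np.sin(yaw),  np.cos(yaw), 0.0],
                   [0.0, 0.0, 1.0]])
    D = np.zeros((12, 12))
    D[0:3, 6:9] = np.eye(3)       # d(p_c)/dt = linear velocity
    D[3:6, 9:12] = Rz             # d(Theta)/dt ~ Rz(yaw) * omega_b

    M = np.zeros((6, 6))
    M[0:3, 0:3] = m * np.eye(3)
    M[3:6, 3:6] = I_G
    A = np.zeros((6, 12))
    for i, p_i in enumerate(p_feet):          # four feet
        A[0:3, 3*i:3*i+3] = np.eye(3)
        A[3:6, 3*i:3*i+3] = skew(p_i - p_c)
    H = np.vstack([np.zeros((6, 12)), np.linalg.solve(M, A)])
    # Gravity block: sign convention must match how g is defined in the dynamics;
    # here g is taken as the downward acceleration vector (an assumption).
    G = np.array([0.0, 0.0, -9.81, 0.0, 0.0, 0.0])
    return D, H, G, M, A
```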
§.§ Balance Controller
One of the baseline control approaches for calculating GRFs for quadruped robots is the balance controller presented in <cit.>, based on a quadratic program (QP) solver. Based on the assumptions presented in sec: simplified robot dynamic, the approximated dynamic model relating the body acceleration and the GRFs is as follows:
[[ 1_3 … 1_3; [p_1 - p_c] × … [p_4 - p_c] × ]]_A∈ℝ^6 × 12F = [[ m (p̈_c +g); I_Gω̇_b ]]_b∈ℝ^6
and the vector b in (<ref>) can be rewritten as:
b = M ([[ p̈_c; ω̇_b ]] + G).
Since the model (<ref>) is linear, the controller can naturally be formulated as the following QP problem <cit.>, which can be solved in real-time at 1 kHz:
F^* = F∈ℝ^12argmin (AF - b_d)^T S (AF - b_d)
+ γ_1 ||F||^2 + γ_2 ||F - F_prev^*||^2
d≤CF≤d̅
F_swing^z=0
where b_d is the desired dynamics. The idea of designing b_d will be elaborated in sec: closed_loop. The cost function in (<ref>) includes terms that consider three goals, including (1) driving the COM position and orientation to the desired trajectories; (2) minimizing the force commands; and (3) minimizing the change of the current solution F^* with respect to the solution from the previous time-step, F^*_prev.
The priority of each goal in the cost function is defined by the weight parameters S∈ℝ^6 × 6, γ_1, γ_2 respectively.
The constraints in the QP formulation enforce friction constraints, input saturation, and contact constraints.
The constraint d≤CF≤d̅ ensures that the optimized forces lie inside the friction pyramid and the normal forces stay within a feasible range. More details can be found in <cit.>.
Besides the friction constraint, we will enforce the force constraints for the swing legs, F_swing=0. The swing legs are then kept in the posing position until it switches to the stance phase. More details on swing leg control are provided in sec: swing leg.
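A compact sketch of this QP is given below. The weights, friction coefficient, and force limits are illustrative, and a generic solver (SciPy's SLSQP) stands in for the dedicated QP solver used on the robot.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the QP-based balance controller: cost as in the formulation above,
# friction-pyramid and swing-leg constraints enforced per leg.
def balance_controller(A, b_d, F_prev, contact, mu=0.6, f_min=0.0, f_max=120.0,
                       S=None, gamma1=1e-3, gamma2=1e-2):
    S = np.eye(6) if S is None else S

    def cost(F):
        r = A @ F - b_d
        return r @ S @ r + gamma1 * F @ F + gamma2 * (F - F_prev) @ (F - F_prev)

    cons, bounds = [], []
    for i in range(4):
        fx, fy, fz = 3 * i, 3 * i + 1, 3 * i + 2
        if contact[i]:
            # friction pyramid: |Fx| <= mu*Fz and |Fy| <= mu*Fz (written as >= 0)
            cons += [
                {'type': 'ineq', 'fun': lambda F, fx=fx, fz=fz: mu * F[fz] - F[fx]},
                {'type': 'ineq', 'fun': lambda F, fx=fx, fz=fz: mu * F[fz] + F[fx]},
                {'type': 'ineq', 'fun': lambda F, fy=fy, fz=fz: mu * F[fz] - F[fy]},
                {'type': 'ineq', 'fun': lambda F, fy=fy, fz=fz: mu * F[fz] + F[fy]},
            ]
            bounds += [(-f_max, f_max), (-f_max, f_max), (f_min, f_max)]
        else:
            bounds += [(0.0, 0.0)] * 3            # swing legs carry zero force
    res = minimize(cost, F_prev, method='SLSQP', bounds=bounds, constraints=cons)
    return res.x
```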
§.§ SRB-based Convex MPC
The calculation of GRFs in quadruped robots is often approached through Model Predictive Control (MPC) <cit.>. This method determines the optimal sequence of inputs over a finite-time horizon, taking into account any constraints within the dynamic model. Every time MPC is executed in the control system, only the first computed control input from the MPC cycle is applied. The inputs determined over the finite time horizon are only used for the optimization problem and are not directly applied in the control system.
To express the dynamic equation in a convenient state-space form, gravity is added to the state. The system can then be represented as:
Ẋ^c = D^c X^c + H^c F
where
X^c = [[ p_c; Θ; ṗ_c; ω_b; ||g|| ]] ∈ℝ^13
D^c = [[ 0_3 0_3 1_3 0_3 0_3 × 1; 0_3 0_3 0_3 R_z(ψ) 0_3 × 1; 0_3 0_3 0_3 0_3 g/||g||; 0_3 0_3 0_3 0_3 0_3 × 1; 0_1 × 3 0_1 × 3 0_1 × 3 0_1 × 3 0 ]] ∈ℝ^13 × 13
H^c = [[ 0_6 × 12; M^-1A; 0_1 × 12 ]] ∈ℝ^13 × 12
We consider a linear MPC problem with horizon length k as follows:
min_F_i ∑_i=0^k-1e_i+1^T Q_i e_i+1 + F_i^T R_i F_i
s.t. X^c_i+1 = D_t,iX^c_i + H_t,iF_i
d≤CF_i≤d̅
where
F_i is the vector of computed ground reaction forces at time step i, Q_i and R_i are diagonal positive semi-definite matrices, and D_t,i and H_t,i are the discrete-time system dynamics matrices. The term e_i+1 is the system state error at time step i+1, defined as e = [e_p, ė_p]^T ∈ℝ^12, with
e_p = [[ p_c-p_c,d; log(R_d R^T) ]]∈ℝ^6, ė_p = [[ ṗ_c-ṗ_c,d; ω_b -ω_b,d ]]∈ℝ^6,
where p_c,d∈ℝ^3 is the desired position of the COM, ṗ_c,d∈ℝ^3 is the desired body linear velocity, and ω_b,d∈ℝ^3 is the desired body angular velocity. The desired and actual body orientations are described using the rotation matrices R_d∈ℝ^3 × 3 and R∈ℝ^3 × 3, respectively. The orientation error is obtained using the exponential map representation of rotations <cit.>, where log(.):ℝ^3 × 3→ℝ^3 maps a rotation matrix to the associated rotation vector <cit.>.
The constraint d≤CF_i ≤d̅ is equivalent to the constraint in equation (<ref>) at time step i.
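The following sketch shows how this MPC problem can be condensed into a single quadratic program in the force sequence U = [F_0, …, F_{k-1}]; for brevity the discrete dynamics matrices are held constant over the horizon, and the stacked friction-cone inequalities (which would be passed to the QP solver alongside) are omitted.

```python
import numpy as np

# Sketch of the condensed MPC: stack the discrete dynamics over the horizon and
# build the quadratic cost 0.5 * U^T Hqp U + g^T U in the force sequence U.
def build_condensed_mpc(D_t, H_t, Q, R, x0, X_ref, k):
    """D_t, H_t: discrete dynamics; Q, R: stage weights; x0: current state;
    X_ref: stacked reference states x_ref,1 .. x_ref,k; k: horizon length."""
    n, m = D_t.shape[0], H_t.shape[1]
    A_qp = np.zeros((k * n, n))
    B_qp = np.zeros((k * n, k * m))
    D_pow = np.eye(n)
    for i in range(k):
        D_pow = D_pow @ D_t                       # D_t^(i+1)
        A_qp[i*n:(i+1)*n, :] = D_pow
        for j in range(i + 1):
            # x_{i+1} picks up D_t^(i-j) H_t F_j
            B_qp[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(D_t, i - j) @ H_t
    Q_bar = np.kron(np.eye(k), Q)
    R_bar = np.kron(np.eye(k), R)
    Hqp = 2.0 * (B_qp.T @ Q_bar @ B_qp + R_bar)
    g = 2.0 * B_qp.T @ Q_bar @ (A_qp @ x0 - X_ref)
    return Hqp, g
```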
§.§ Swing Leg Control
For the swing legs, the final footstep location for each leg is calculated from the corresponding hip location using a linear combination of Raibert heuristic <cit.>, and a feedback term from the capture point formulation <cit.>. The final footstep locations (p_f,i) are projected on an assumed ground plane and are calculated by:
p_f,i = p_h,i + (T_c_ϕ/2)ṗ_c,d + √(z_0/g)(ṗ_c - ṗ_c,d)
where T_c_ϕ is the scheduled stance time, z_0 is the height of locomotion, and p_h,i∈ℝ^3 is the position of the corresponding hip i. A Bézier curve generates the desired swing trajectory (including the desired position p_d,i and velocity v_d,i) for the swing legs, starting from the initial lift-off position p_0,i and ending at the final touch-down location p_f,i.
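A direct transcription of this footstep heuristic reads as follows; the nominal height, gravity constant, and function signature are illustrative placeholders.

```python
import numpy as np

# Sketch of the footstep heuristic above (Raibert term + capture-point feedback).
def footstep_location(p_hip, v_com, v_com_des, T_stance, z0=0.3, g=9.81):
    p_f = (p_hip
           + 0.5 * T_stance * v_com_des                 # Raibert heuristic term
           + np.sqrt(z0 / g) * (v_com - v_com_des))     # capture-point feedback
    p_f[2] = 0.0                                        # project onto the ground plane
    return p_f
```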
§.§ Low-level Control
The low-level leg control can generate joint torque commands from the high-level controller. For low-level force control, the controller transforms the force vector to the hip frame by rotation matrix R. Then, joint torques are calculated as follows:
τ_stance, i = -J(q_i)^TR^TF_i
where J(q_i)∈ℝ^3 × 3 is the leg Jacobian matrix and q_i is the joints angle of leg i-th.
To track the desired swing trajectory for each foot, a PD controller with a feedforward term is used to compute joint torques <cit.>:
τ_swing, i = J(q_i)^T[K_p,p(p_d,i - p_i)+K_d,p(v_d,i-v_i)]
where p_d,i and v_d,i are desired foot position and velocity, respectively, p_i and v_i are actual foot position and velocity in the robot's frame, K_p,p∈ℝ^3 × 3 and K_d,p∈ℝ^3 × 3 are the diagonal matrices of the proportional and derivative gains.
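The torque mappings of this subsection amount to a few lines; the Jacobian, rotation matrix, and gain matrices are supplied by the kinematics and the high-level controller, and the function names here are our own.

```python
import numpy as np

# Sketch of the low-level torque computation for stance and swing legs.
def stance_torque(J_i, R, F_i):
    """tau = -J^T R^T F for a stance leg (F expressed in the world frame)."""
    return -J_i.T @ R.T @ F_i

def swing_torque(J_i, p_des, v_des, p, v, Kp, Kd):
    """Cartesian PD with feedforward-free gains, mapped to joint torques."""
    return J_i.T @ (Kp @ (p_des - p) + Kd @ (v_des - v))
```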
§ OVERVIEW OF THE PROPOSED APPROACH
This section will present an overview of our novel control architecture to incorporate adaptive control into the force control framework. While our approach is not limited to any specific adaptive control approach, we decide to use L_1 adaptive control <cit.> thanks to its advancement in guaranteeing fast adaptation and smooth control signals. Note that our proposed control system is designed for the stance leg control part in the control architecture of the quadruped robot (see fig: ControlOverview).
Our prior work <cit.> introduced an adaptive control based on Hybrid Zero Dynamics (HZD) <cit.> for bipedal robots. HZD is a common control approach for bipedal robots since it can handle hybrid and underactuated dynamics associated with this kind of robot.
In this paper, however, our approach leverages the combination of the adaptive control and force control system, which calculates ground reaction forces (GRFs) to achieve highly dynamic locomotion for quadrupeds <cit.>. The use of force control in legged robot systems has several key benefits, including increased robustness in the presence of challenging terrains <cit.> and the ability to accommodate a wide range of dynamic movements <cit.>, such as various types of locomotion gaits. By combining force control with adaptive control strategies that compensate for model uncertainty, achieving an enhanced control system with these advantages is possible.
The overview of our proposed adaptive force-based control system is presented in fig: main adaptive structure. By incorporating a L_1 adaptive controller, we aim to design a combined controller. The force-based controller calculates the optimal GRFs for following the desired trajectory. The adaptive controller calculates the residual parameters for compensating the nonlinear model uncertainty θ in the system dynamics. Therefore, the goal is to adjust the adaptive control signal u_a as well as the adaptation law so that the model uncertainty is estimated correctly (θ̂) and the real model follows the reference model. For the reference model, we employ a linear model similar to the one described in (<ref>), and we update the reference model in real-time using an ODE solver. Moreover, the vector of uncertainty estimates θ̂ typically has high-frequency content due to the fast estimation in the adaptation law. Thus, we employ a low-pass filter to obtain smooth control signals. We use the same swing leg control for both models to keep the reference and real models appropriately synchronized; this means that we also use the real model's foot positions for the reference model.
In the following sections, we will elaborate on integrating two different force-based control as the baseline controller into the adaptive control. First, in sec: adaptive control, we will describe the proposed method using a QP-based balancing controller, as presented in fig: ControlDiagram_QP. Then, in sec: adaptive MPC, we will show how to incorporate MPC into the adaptive controller in detail, as illustrated in fig: ControlDiagram_MPC.
§ ADAPTIVE FORCE-BASED CONTROL USING THE BALANCE CONTROLLER
In this section, we use the balance controller as the force-based controller, previously demonstrated in <cit.>.
In sec: adaptive MPC, we will present our control framework for integrating the L_1 adaptive control into MPC.
§.§ Closed-loop Dynamics
The L_1 adaptive control is fundamentally designed for trajectory tracking; however, the goal of the balance controller is to compute optimal GRFs. Hence, to integrate the balance controller presented in sec: balance controller into the L_1 adaptive control, we relate the linear model described in (<ref>) to the closed-loop dynamics.
Let us consider the system state error (e) according to equation (<ref>) as the state variable. Therefore, the closed-loop error dynamics in state-space form can be represented as follows:
ė = D_l e + Bu,
where
D_l = [ [ 0_6 1_6; 0_6 0_6 ]]∈ℝ^12 × 12, B = [ [ 0_6; 1_6 ]] ∈ℝ^12 × 6
and u∈ℝ^6 is the control input function. By employing a PD control law, we have
u = [ -K_P -K_D ]e,
where K_P ∈ℝ^6 × 6 and K_D ∈ℝ^6 × 6 are diagonal positive definite matrices. According to definition of matrices D_l and B, from equation (<ref>) it can be obtained that:
ë_p = [[ p̈_c - p̈_c,d; ω̇_b - ω̇_b,d ]] = u,
where ë_p is the derivative of ė_p presented in (<ref>), and p̈_c,d and ω̇_b,d are the desired COM linear acceleration and the desired angular acceleration, respectively. Since the desired trajectory is obtained from the velocity command, both desired accelerations p̈_c,d and ω̇_b,d are zero vectors. Then, from (<ref>) and (<ref>), the desired dynamics can be given by:
b_d = M (u + G),
where M and G are defined in (<ref>).
By substituting (<ref>) into the QP problem (<ref>), we can obtain the optimal GRFs as the input for the low-level leg controller.
The objective of the QP formulation in equation (<ref>) is to find a solution that ensures the actual dynamics AF match the desired dynamics b_d. In general, the QP-based balance controller is capable of achieving the desired control input function outlined in equation (<ref>), thus keeping the error e within a certain range. However, if the desired dynamics vector b_d violates any of the inequality constraints, such as force limits or friction constraints, the controller may yield an optimal solution F^* that may not completely align with the desired dynamics. With this solution, the optimal dynamic b_d^* and u^* can be written as:
b_d^* = AF^*,
u^* = M^-1 b_d^* - G
where in the appendix, we will show that the u^* remains within a bounded range.
Note that the optimal ground reaction force F^* serves as the control input for the robot and the variable u^* acts as an input for the closed-loop dynamic. The closed-loop structure for the robot is depicted in fig: ControlDiagram_QP (the green dashed line).
§.§ Effects of Uncertainty on Dynamic
If we consider uncertainty in the dynamic equation (<ref>) and assume that the matrices D and H are not accurate, then we need to express the dynamics in terms of the nominal matrices D̅, H̅. The model uncertainty mostly comes from inaccurate values for the mass, inertia, and foot positions with respect to the center of mass. In addition, various terrains (e.g., rough or soft terrain) might have different impacts on the robot, which are unknown in practical situations. Therefore, terrain uncertainty should also be considered in the dynamic model. In this section, we derive our control equations based solely on the model uncertainty. In sec: terrain, we will elaborate on how our proposed control system can also handle terrain uncertainty.
There is another parameter involved in the dynamic equation, namely the yaw angle. This angle is obtained through the state estimation, and we assumed that the state estimation has minimal uncertainty.
According to the definition of matrices D and H in (<ref>), the inaccurate value of the dynamic parameter mentioned above reflects on the H matrix. Therefore, the dynamic equation in the presence of uncertainty can be represented as:
Ẋ = DX+ (H̅+H̃) F + [[ 0_6 × 1; G ]]
where H̃ represent the uncertainty in matrix H.
It is worth noting that according to the definition of H in equation (<ref>), the first six rows of H consist of zeros. Thus, we can rephrase the dynamic equation (<ref>) as follows:
Ẋ = DX + H̅F +BG + Bθ
where θ∈ℝ^6
is the vector of uncertainty for six corresponding equations and is defined as follows:
θ≜B^T H̃F
With reference to the state representation given by equation (<ref>), the vector θ can be interpreted as a time-varying disturbance affecting the body and orientation accelerations.
The uncertainty vector θ depends on both time t and F. Since F is obtained through the QP problem (<ref>), it is a function of b_d. Furthermore, b_d is a function of u according to (<ref>). Considering that u is determined by the PD control (<ref>), we can conclude that θ is a function of both the tracking error e and time t. As a result, for any given time t, it is always possible to find α(t)∈ℝ^6 and β(t)∈ℝ^6 satisfying <cit.>:
θ(e,t)=α(t)||e||+β(t).
§.§ Designing Adaptive Controller for Compensating the Uncertainty
By incorporating the L_1 adaptive controller, we want to design a combined controller u=u_1+u_2, where u_1 is the control input that follows the desired trajectory for the nominal model as presented in (<ref>), and u_2 compensates for the nonlinear model uncertainty θ. Therefore, the goal is to adjust the control signal u_2 so that the real model follows the reference model. For the reference model, we employ a linear model similar to the one described in (<ref>), in which the nominal matrix M̅ is used instead of M. The diagram of our proposed force-based adaptive control based on the balance controller is presented in fig: ControlDiagram_QP.
The counterpart of equation (<ref>) for the state-space representation in (<ref>), with the combined controller u=u_1+u_2, is as follows:
ė=D_l e+Bu_1 + B (u_2+θ).
Note that the uncertainty vectors θ in equations (<ref>) and (<ref>) are not the same, since the state vector of equation (<ref>) is X while the state vector of equation (<ref>) is the system error e.
The state representation for the reference model can be expressed as follows:
ê̇=D_l ê+Bû_1+B (u_2+θ̂),
where,
θ̂=α̂||e||+β̂,
and û_1 is defined as:
û_1 = [ -K_P -K_D ]ê.
To compensate the estimated uncertainty θ̂, we can just simply choose u_2=-θ̂ to obtain
ê̇=D_l ê+Bû_1.
However, θ̂ typically has high frequency due to fast estimation in the adaptation law. Therefore, we employ a low-pass filter to obtain smooth control signals as follows:
u_2=-C(s)θ̂,
where C(s) is a second-order low-pass filter with a magnitude of 1:
C(s) = ω_n^2/(s^2 + 2 ζω_n s+ ω_n^2) .
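A simple discrete-time realization of C(s) is sketched below, assuming the loop period dt is small relative to 1/ω_n; the explicit integration scheme and the class interface are our own choices, not necessarily those of the implementation.

```python
import numpy as np

# Discrete-time sketch of the second-order low-pass filter C(s).
class SecondOrderLPF:
    def __init__(self, omega_n, zeta, dim, dt):
        self.omega_n, self.zeta, self.dt = omega_n, zeta, dt
        self.y = np.zeros(dim)       # filtered output
        self.dy = np.zeros(dim)      # its time derivative

    def update(self, u):
        # y'' = omega_n^2 (u - y) - 2 zeta omega_n y'
        ddy = self.omega_n**2 * (u - self.y) - 2.0 * self.zeta * self.omega_n * self.dy
        self.dy += ddy * self.dt
        self.y += self.dy * self.dt
        return self.y

# Usage inside the adaptation loop: u_2 = -lpf.update(theta_hat)
```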
According to (<ref>), b_d for the real model in the presence of uncertainty takes the following form:
b_d = M̅ (u_1 + u_2 + G).
Respectively, b̂_d for reference model is as follows:
b̂_d = M̅ (û_1 + u_2 + θ̂ + G).
The QP solver outlined in equation (<ref>) allows us to obtain the optimal GRFs for the real model. Similarly, the optimal GRFs F̂ for the reference model can be obtained as follows:
F̂^* = F̂∈ℝ^12argmin (ÂF̂ - b̂_d)^T S (ÂF̂ - b̂_d)
+ γ_1 ||F̂||^2 + γ_2 ||F̂ - F̂_prev^*||^2
d≤CF̂≤d̅
F̂_swing^z=0 .
Defining the difference between the reference model and the real model as ẽ=ê-e, we then have
ẽ̇=D_l ẽ+Bũ_1+B (α̃||e||+β̃),
where
ũ_1=û_1-u_1, α̃=α̂-α, β̃=β̂-β.
As a result, we estimate θ indirectly through α and β, i.e., through the values α̂ and β̂ computed by the following adaptation laws based on projection operators <cit.>,
α̇̂̇=ΓProj(α̂,y_α), β̇̂̇=ΓProj(β̂,y_β)
where Γ∈ℝ^6 × 6 is a symmetric positive definite matrix. The projection functions y_α∈ℝ^6 and y_β∈ℝ^6 are:
y_α =-B^T Pẽ||e||,
y_β =-B^T Pẽ,
where P∈ℝ^12 × 12 is a positive definite matrix that is defined according to the stability criteria using the Lyapunov equation. Moreover, the stability proof of the system is provided in the appendix.
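The adaptation update can be sketched as follows. The smooth projection operator of the L_1 literature is approximated here by a hard clamp onto an assumed bound, so this is a simplified illustration rather than the exact operator used in the analysis.

```python
import numpy as np

# Sketch of one adaptation step; the clamp bound and step size are assumptions.
def adaptation_step(alpha_hat, beta_hat, e, e_tilde, B, P, Gamma, dt, bound=50.0):
    y_alpha = -B.T @ P @ e_tilde * np.linalg.norm(e)   # projection function y_alpha
    y_beta = -B.T @ P @ e_tilde                        # projection function y_beta
    # Hard clamp stands in for the smooth projection operator Proj(.)
    alpha_hat = np.clip(alpha_hat + Gamma @ y_alpha * dt, -bound, bound)
    beta_hat = np.clip(beta_hat + Gamma @ y_beta * dt, -bound, bound)
    theta_hat = alpha_hat * np.linalg.norm(e) + beta_hat
    return alpha_hat, beta_hat, theta_hat
```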
§ ADAPTIVE FORCE-BASED CONTROL USING MPC
Model predictive control (MPC) has been widely used across various fields, from finance to robotics. One of MPC's main advantages is its ability to handle complex systems with multiple inputs and outputs while considering hard control constraints <cit.>.
MPC has also been applied to quadruped robots, providing stable locomotion <cit.>. Thanks to dynamic prediction in MPC, by using the same control framework, it can achieve different dynamic locomotion gaits.
However, MPC's limitations become evident when dealing with significant uncertainty in the dynamic model. For instance, in the case of a quadruped robot carrying an unknown heavy load, MPC fails to track the desired state trajectory, resulting in unstable behavior and deviation from the desired trajectory, especially with dynamic gaits like bounding.
Furthermore, the ability of a robot to traverse soft terrain, where the impact model is unknown, can present a significant challenge. Our proposed approach can tackle this challenge effectively, and we will discuss the details of how it handles the terrain unknown impact model in sec: terrain.
In the previous section sec: adaptive control, we presented an adaptive force-based control framework based on the balance controller.
The balance controller relies on a quadratic program (QP) solver, which is simple to put into practice and well-suited for motions that are slow and safe, like standing and quasi-static walking.
Additionally, the balance controller is an instantaneous control technique, meaning it does not predict the robot's future movement. As a result, the balance controller proves to be ineffective in fast-paced, highly dynamic scenarios. On the other hand, MPC has shown great potential in handling agile motions, even when it comes to underactuated gaits such as bounding.
In this section, we will present a novel control architecture to integrate adaptive control into the MPC framework.
By this proposed framework, we can achieve fast and robust locomotion in the presence of uncertainties. This framework can also be extended to accommodate various dynamic gaits, such as trotting and bounding, in legged robots. As we discussed in a previous section, our approach is not restricted to a specific type of adaptive control, but we have chosen to utilize L_1 adaptive control, which has demonstrated advantages over other adaptive control techniques. The first step in integrating L_1 adaptive control and MPC is to understand the importance of a reference model and the challenges in synchronizing the real model and reference model. We then present our proposed adaptive MPC, which combines conventional MPC <cit.> with adaptive control.
Finally, we address the challenge of real-time computation while having two MPCs in our control system. We will elaborate on how to adjust the frequency of each control component in an optimized manner to allocate enough computation resources for critical control parts and achieve real-time computation.
§.§ Reference Model
Our method aims to design a combined controller based on MPC and L_1 adaptive control such that the real model follows the reference model.
In accordance with our previous discussion in sec: L1_adaptive, the combined controller incorporates a control signal u_2 to account for model uncertainty, as indicated in equation (<ref>).
In this section, the auxiliary control signal for this purpose is u_a ∈ℝ^6; thus, the uncertain dynamic equation (<ref>) can be rewritten as follows:
Ẋ = DX + H̅F + BG + B (u_a + θ).
The reference model is similar to the quasi-linear model described in (<ref>), in which the nominal matrix H̅ is used instead of H. The proposed adaptive MPC diagram is presented in fig: ControlDiagram_MPC.
We consider a reference model for L_1 adaptive control that arises from MPC. The MPC method is computationally expensive, but replacing it with simpler control methods, such as the balance controller, is not possible when the robot performs dynamic gaits such as bounding. The reason is that in the bounding gait, only the robot's two front or two rear feet touch the ground at each time step, making it challenging to accurately control the height and pitch angle. The MPC approach balances the error in the height and pitch angle and, based on the predicted dynamics of the system in the future, computes the optimal ground reaction forces. As seen in fig: bounding snapshot, the center of mass (COM) height oscillates around the desired value. Thus, the underactuated nature of certain gaits like bounding necessitates the use of MPC as the control system for the reference model.
When implementing MPC for a reference model, one challenge is ensuring that the reference model is synchronized with the real model. This is particularly important when the robot performs a gait with a periodic behavior, such as bounding (see fig: bounding snapshot). In order to correctly compare the real model with the reference model, both should have the same gait schedule. Additionally, the adaptive MPC proposed for legs in the stance phase is independent of the swing leg control. However, the foot position is crucial in calculating the moment of ground reaction force around the center of mass. Therefore, to maintain consistency between the real and reference models, it is important to ensure that the real robot's foot position is fed into the reference model as shown in fig: ControlDiagram_MPC.
The reference model can be expressed as follows:
Ẋ̂̇ = DX̂ + H̅F̂ + BG + B(u_a+θ̂),
where
θ̂=α̂||e||+β̂.
In this case, similar to sec: adaptive control, we use a second-order low-pass filter, same as (<ref>). Therefore, the auxiliary control signal would be:
u_a=-C(s)θ̂.
By defining the difference between the real model and the reference model X̃=X̂-X, we then have:
Ẋ̃̇=DX̃+H̅F̃+B(α̃||e||+β̃),
where
F̃=F̂-F, α̃=α̂-α, β̃=β̂-β.
Since the desired trajectory for both the real model and the reference model is the same (X_d = X̂_d), the difference between the real model and reference model can be defined as:
X̃ = (X̂ - X̂_d) - (X - X_d) = ê - e = ẽ.
Therefore, equation (<ref>) is equal to the following equation:
ẽ̇=Dẽ+H̅F̃+B(α̃||e||+β̃).
The adaption laws and projection functions for computing the value of α and β are the same as equations (<ref>) and (<ref>), respectively. Moreover, the stability of the control system can be proven using the same logic provided in the appendix.
§.§ Adaptive MPC
After computing the auxiliary control signal u_a using the adaptive controller presented in the previous subsection, we integrate u_a with the conventional MPC for legged locomotion <cit.> to obtain our adaptive MPC framework. We treat the auxiliary control signal u_a as a residual vector in the system's equation that compensates for dynamic uncertainty. Therefore, u_a is appended to the state vector, and equation (<ref>) can be written as follows:
η̇ = D^eη + H̅^eF + B^eθ
with the following extended matrices:
η = [[ X^c; u_a ]] ∈ℝ^19
D^e = [[ D^c_13 × 13 [[ 0_6 × 6; 1_6 × 6; 0_1 × 6 ]]; 0_6 × 13 0_6 × 6 ]] ∈ℝ^19 × 19
H̅^e = [[ H̅^c; 0_6 × 12 ]] ∈ℝ^19 × 12
B^e = [[ B; 0_7 × 6 ]] ∈ℝ^19 × 6
where H̅^c is the nominal value of H^c. The definition of X^c, D^c, and H^c can be found in (<ref>). Although u_a is considered a part of the state vector in (<ref>), it is just a residual vector for compensating dynamic uncertainty. Therefore, u_a is constant in the state space equation and over the horizons. To this end, the components associated with u_a in matrices D^e and H̅^e are assigned zero, which means u̇_a = 0. Note that the value of u_a will be updated according to the adaptive law, but it is constant during the prediction horizons.
The state representation in (<ref>) is also convenient for discretization methods such as zero-order hold <cit.> for MPC.
Therefore, our adaptive MPC can be designed according to (<ref>) and based on the following discrete-time dynamic:
η_i+1 = D^e_t,iη_i + H̅^e_t,iF_i
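A sketch of the extended matrices and their zero-order-hold discretization is given below; the matrix-exponential discretization is one standard choice, shown here for illustration under the assumption that D^c and H^c are the 13-state matrices defined earlier.

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the extended state-space used by the adaptive MPC.
def extended_matrices(D_c, H_c):
    D_e = np.zeros((19, 19))
    D_e[:13, :13] = D_c
    D_e[6:12, 13:19] = np.eye(6)          # u_a enters the acceleration rows
    H_e = np.vstack([H_c, np.zeros((6, 12))])
    return D_e, H_e

def zero_order_hold(D_e, H_e, dt):
    """Discretize the pair (D_e, H_e) with the augmented matrix exponential."""
    n, m = D_e.shape[0], H_e.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = D_e
    M[:n, n:] = H_e
    Md = expm(M * dt)
    return Md[:n, :n], Md[:n, n:]         # (D_t, H_t)
```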
§.§ Real-time Computation
The main challenge in executing our proposed adaptive MPC framework is ensuring that the computation required is fast enough to be performed in real-time for hardware experiments. If the controller is unable to perform updates at a high frequency, it could result in the robot collapsing during dynamic motion. The control system comprises two MPCs, each with 13 to 19 states predicted over ten horizons. To ensure the robot's balance and allocate sufficient computation resources to each control component, we have devised a scheme, as depicted in fig: ControlDiagram_MPC, to update each control component in an optimized manner.
The robot's sensory data updates in real-time with a frequency of 1 kHz. Thus, the reference model should update with the same frequency to compare the reference model states (X̂) and real model states (X) correctly. The yellow dashed line in fig: ControlDiagram_MPC indicates the update frequency for the reference model. We use the odeint package from Boost software in C++ <cit.> to solve the ODE problem associated with the dynamic equation for the reference model.
One of the critical components in our proposed framework is the adaptive MPC, which is responsible for computing the ground reaction forces for the robot, as shown in fig: ControlDiagram_MPC. Through our experimentation, we have determined that for robust locomotion with dynamic gaits, the optimal update frequency for the adaptive MPC is 300 Hz. In contrast, the reference MPC, which plays a supporting role in the control system, is less sensitive and runs at a slower rate of 30 Hz. In addition, there is a two-millisecond offset between the runs of the adaptive MPC and the reference MPC to ensure sufficient computational resources are allocated to each component. This means that the two MPC frameworks do not run simultaneously in our control system.
§ ADAPTATION TO UNKNOWN IMPACT MODEL
The dynamic formulation presented in sec: adaptive control and sec: adaptive MPC considers the presence of model uncertainty in real-world situations. It assumes that the terrain is hard enough for the robot to receive the desired force as ground reaction forces on its feet. However, this assumption may not hold if the robot walks on soft or elastic terrain with an unknown impact model, which may not generate the desired force needed for stable locomotion.
Some previous studies have included terrain knowledge and contact models in their balancing controllers to address the soft terrain challenge, mainly using a spring-damper model to characterize the soft terrain <cit.>. Some control frameworks for adapting to soft terrain in real-time have also been developed using iterative learning <cit.> and whole-body control <cit.>, without prior knowledge about the terrain. This section demonstrates that the proposed method in sections sec: adaptive control and sec: adaptive MPC can also handle unknown impact models from terrain, allowing the robot to maintain stability while walking on soft terrains.
Assume the force F computed by the MPC in (<ref>) cannot be achieved perfectly when walking on soft terrain. Then, equation (<ref>) can be rewritten as follows:
Ẋ = DX + H̅ (F_a + F̃_a) + BG + Bθ
where F_a is the actual ground reaction force exerted on the robot and F̃_a is the difference between the desired ground reaction force and the actual reaction force.
Given that F̃_a depends on the tracking error e and time, the uncertainty vector arising from the ground reaction force can be incorporated with θ. Therefore, we can reformulate equation (<ref>) as follows:
Ẋ = DX + H̅F_a + BG + B (θ + θ_F).
where the uncertainty vector θ_F is defined as follow:
θ_F ≜B^T H̅F̃_a
Equation (<ref>) has the same form as equation (<ref>), but uses the actual ground reaction force instead of the desired one. Therefore, all formulations for implementing the adaptive controller remain valid in situations with an unknown impact model.
§ RESULTS
In this section, we validate our control approach in simulation and hardware experiments on a Unitree A1 robot.
All computation for the hardware experiments runs on a single PC (Intel i7-6500U, 2.5 GHz, 64-bit). For simulation, the control system is implemented in ROS Noetic with the Gazebo 11 simulator, which provides a high-fidelity simulation of the A1 robot. A video showcasing the results accompanies this paper[<https://youtu.be/QmwyysdTk1k>].
We set the control parameters for the MPC, the adaptation law, and the low-pass filter as presented in Table <ref>. We use one set of parameters for all the experiments with different locomotion gaits, indicating that our approach is easily generalizable.
The following subsections will introduce different experiment results in terms of model and environment uncertainty (see fig: terrain experiment). In each experiment, the robot starts by using a balance controller to stand up and then switches to the MPC framework for walking or running.
§.§ Comparative Analysis
In order to evaluate the performance of our proposed adaptive MPC method, we conduct a comparative experiment with the conventional MPC method presented in <cit.>. The objective is to understand the advantages of integrating the adaptive controller into MPC for quadrupedal locomotion.
§.§.§ Walking with significant model uncertainty
The experiment involves the robot walking and rotating in different directions, using both the adaptive and non-adaptive controllers while carrying an unknown load. The results show that the adaptive controller provides robust locomotion, with small tracking error, even when carrying an unknown 5 kg load. On the other hand, the non-adaptive controller results in a considerable error in the COM height and eventually collapses under the weight of just a 3 kg load. The comparative results for the adaptive and non-adaptive controllers are shown in fig: comparison exp.
§.§.§ Walking on soft terrain
To evaluate the capability of our proposed control method in handling unknown impact models, we conducted an experiment where the robot walks on double foam, which represents a soft terrain. The performance of both the adaptive and non-adaptive controllers was evaluated and compared. The results are depicted in fig: soft terrain exp, which shows the robot's roll angle. The figure clearly illustrates that the adaptive controller was able to maintain the robot's balance on the soft terrain, while the non-adaptive controller was unable to do so, leading to the collapse of the robot.
§.§ Running with Multiple Gaits
To demonstrate the superiority of our proposed approach for dynamic gaits, we conducted experiments with the robot running while carrying an unknown load. These experiments were carried out for both the trotting and bounding gaits, with an unknown load of 5 kg and 3 kg, respectively. The results of these experiments are shown in <ref>. It can be seen from the figure that the tracking of the center of mass height during the bounding gait is more unstable compared to the trotting gait, which is due to the inherent underactuated nature of the bounding gait.
§.§ Time-varying Load
To demonstrate the effectiveness of our proposed adaptive force control in adapting to model uncertainty, we conducted simulations where the robot carries a time-varying load of up to 92% of its weight during walking. As shown in fig: time_varying result, our approach can enable the robot to adapt to time-varying uncertainty. In the simulation, the robot starts with an unknown 5 kg load. While increasing the robot's velocity, the robot is subjected to a varying external force in the z-direction that rises to 60 N, resulting in an additional unknown 11 kg load. These results indicate that our proposed approach effectively handles high levels of model uncertainty.
§.§ Terrain Uncertainty
To demonstrate the capability of our proposed method to handle terrain uncertainty, we tested the robot navigating various terrains while carrying an unknown 5 kg load. To this end, we conducted walking experiments on multiple rough terrains as well as high-sloped terrain and obtained impressive results.
§.§.§ Rough terrain
We tested the robot navigating various rough terrains such as grass and gravel. The robot walks and rotates in multiple directions while carrying an unknown 5 kg load. Some snapshots of the robot walking on diverse rough terrain are presented in fig: terrain experiment. Our approach is based on a force controller and retains the robustness features of the baseline framework, allowing the robot to handle the rough terrain effectively.
§.§.§ Sloped terrain
To enable the robot to climb the sloped terrain perfectly without vision, we need to adjust its orientation to make its body parallel to the walking surface. This is done by using the footstep location to estimate the slope of the ground. For each i-th leg, we can measure the foot position p_i = (p_x,i, p_y,i, p_z,i) and build the vector of feet x-position (p_x), y-position (p_y), and z-position (p_z). Thus, we can model the walking surface as a plane:
z(x,y) = a_0 + a_1 x + a_2 y
and the coefficients (a_0, a_1, and a_2) will be obtained through the solution of the least square problem using the p_x, p_y, and p_z data (see <cit.> for more details).
Note that the desired roll and pitch angles for the robot will be modified on the slope according to the following:
roll = arctan (a_2) , pitch = arctan(a_1).
As a result, the reference model's desired pitch and roll angles must be adjusted to the non-zero values determined as described above. It's important to note that the reference model utilizes the actual foot position of the robot, so there is no need to make any changes to the reference model's footstep planning when the robot is attempting to climb a slope.
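The slope estimation above reduces to a small linear least-squares problem. The following Python sketch is our own illustration (the foot positions and dimensions are made-up values, not taken from the A1 robot); it fits the plane coefficients and converts them to the desired roll and pitch as in the equations above:

import numpy as np

def slope_orientation(p):
    """Fit z(x, y) = a0 + a1*x + a2*y to the foot positions and return (roll, pitch)."""
    p = np.asarray(p, dtype=float)
    A = np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1]])   # columns [1, x, y]
    coeffs, *_ = np.linalg.lstsq(A, p[:, 2], rcond=None)       # least-squares a0, a1, a2
    a0, a1, a2 = coeffs
    roll = np.arctan(a2)    # rotation induced by the slope in y
    pitch = np.arctan(a1)   # rotation induced by the slope in x
    return roll, pitch

# Example: four feet standing on a 10-degree slope along the x-direction.
feet = np.array([[ 0.2,  0.15,  0.2 * np.tan(np.radians(10))],
                 [ 0.2, -0.15,  0.2 * np.tan(np.radians(10))],
                 [-0.2,  0.15, -0.2 * np.tan(np.radians(10))],
                 [-0.2, -0.15, -0.2 * np.tan(np.radians(10))]])
print(np.degrees(slope_orientation(feet)))   # approximately (0.0, 10.0)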
§ CONCLUSION
In conclusion, a novel control system has been presented that incorporates adaptive control into force control for legged robots walking under significant uncertainties. We have demonstrated our proposed approach's effectiveness using numerical and experimental validations. The experiments show the success of the implementation of the proposed adaptive force control on quadruped robots, allowing them to walk and run while carrying an unknown heavy load on their trunk. The results are remarkable, with the robot being able to carry a load of up to 5 kg (50% of its weight) while keeping the tracking error within a small range and maintaining stability in all directions. The experiments demonstrate that the proposed adaptive force control can not only adapt to model uncertainty but also leverage the benefits of force control in navigating rough and soft terrain. In contrast, the baseline non-adaptive controller fails to track the desired trajectory and causes the robot to collapse under uncertainty.
§ ACKNOWLEDGMENTS
The authors would like to thank Yiyu Chen at Dynamic Robotics and Control lab (DRCL) for his help in conducting the hardware experiments.
§.§ Linear Quadratic Lyapunov Theory
According to Lyapunov theory <cit.>, the PD control described in (<ref>) will asymptotically stabilize the system if
A_m = [ 0_6 1_6; -K_P -K_D ]∈ℝ^12 × 12
is Hurwitz. This means that by choosing a control Lyapunov function candidate as follows:
V(e) = e^TPe,
where P∈ℝ^12 × 12 is the solution of the Lyapunov equation
A_m^T P + PA_m = -Q_L,
and Q_L∈ℝ^12 × 12 is any symmetric positive-definite matrix. We then have:
V̇(e,u) + λ V(e) = e^T (D_l^T P + PD_l) e + λ V(e) + 2 e^T PBu ≤ 0,
where,
λ = λ_min(Q_L)/λ_max(P) > 0.
As a result, the state variable e and the control input u always remain bounded:
e≤δ_η, u≤δ_u.
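For concreteness, the Lyapunov equation and the decay rate λ above can be evaluated numerically once K_P, K_D, and Q_L are fixed. The Python sketch below is only an illustration with placeholder gains (not the gains used in the experiments):

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 6
K_P = 100.0 * np.eye(n)      # placeholder proportional gain
K_D = 20.0 * np.eye(n)       # placeholder derivative gain
Q_L = np.eye(2 * n)          # symmetric positive-definite choice

A_m = np.block([[np.zeros((n, n)), np.eye(n)],
                [-K_P,             -K_D     ]])

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q; with a = A_m^T, X = P
# this is exactly A_m^T P + P A_m = -Q_L.
P = solve_continuous_lyapunov(A_m.T, -Q_L)

lam = np.min(np.linalg.eigvalsh(Q_L)) / np.max(np.linalg.eigvalsh(P))
print("P positive definite:", bool(np.all(np.linalg.eigvalsh(P) > 0)), " lambda =", lam)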
However, the control signal u^* (<ref>) we construct by solving QP problem (<ref>), is not always the same as u. Based on the friction constraints present in equation (<ref>), the value of F^* is always bounded. Besides, according to the definition of A, M, and G, these matrices also have bounded values. Thus, it implies that:
u^* ≤δ_u^*.
Therefore, the vector of difference between u and u^* can be defined as:
Δ = u^* - u
which is also bounded according to (<ref>) and (<ref>):
Δ≤δ_Δ.
By substituting u^* in (<ref>), we have:
V̇(e,u^*) + λ V(e) ≤ 2 e^T PBΔ≤ϵ_V,
where
ϵ_V = 2 Pδ_ηδ_Δ.
§.§ Stability Analysis
Theorem: Consider the system dynamics with uncertainty described by (<ref>), and a reference model described by (<ref>). Assume the use of an L_1 adaptive controller with the optimal closed-loop control signal given by (<ref>), the adaptive control signal given by (<ref>), and the adaptation laws given by (<ref>). Then, under the aforementioned L_1 adaptive controller, the tracking error between the real model and reference model denoted as ẽ, as well as the errors between the real and estimated uncertainty, denoted as α̃ and β̃, respectively, are bounded.
Proof: Let us consider the following control Lyapunov candidate function:
Ṽ=ẽ^TPẽ+α̃^TΓ^-1α̃+β̃^TΓ^-1β̃.
Therefore, its time derivative will be
Ṽ̇ = ẽ̇^TPẽ + ẽ^TPẽ̇ + α̇̃̇^TΓ^-1α̃ + α̃^TΓ^-1α̇̃̇ + β̇̃̇^TΓ^-1β̃ + β̃^TΓ^-1β̇̃̇,
in which we have
ẽ̇^TPẽ + ẽ^TPẽ̇ = (D_lẽ+BF̃)^TPẽ + ẽ^TP(D_lẽ+BF̃) + α̃^TB^T||e||Pẽ + ẽ^TPBα̃||e|| + β̃^TB^TPẽ + ẽ^TPBβ̃.
Because ẽ=ê-e satisfies the condition imposed by (<ref>), it implies that:
(D_lẽ+BF̃)^TPẽ + ẽ^TP(D_lẽ+BF̃) ≤ -λẽ^TPẽ + ϵ_Ṽ,
where
ϵ_Ṽ = 2 Pδ_ẽδ_Δ̃.
Furthermore, with the property of the projection operator <cit.>, we have the following:
(α̂-α)^T(Proj(α̂,y_α)-y_α)≤ 0,
(β̂-β)^T(Proj(β̂,y_β)-y_β)≤ 0.
From (<ref>) and (<ref>), we can infer that
α̃^TΓ^-1α̇̃̇≤α̃^Ty_α-α̃^TΓ^-1α̇,
β̃^TΓ^-1β̇̃̇≤β̃^Ty_β-β̃^TΓ^-1β̇.
We now replace (<ref>), (<ref>) and (<ref>) to (<ref>), which results in
Ṽ̇ ≤ -λẽ^TPẽ + ϵ_Ṽ + α̃^T(y_α+B^TPẽ||e||) - α̃^TΓ^-1α̇ + (y_α^T+ẽ^TPB||e||)α̃ - α̇^TΓ^-1α̃ + β̃^T(y_β+B^TPẽ) - β̃^TΓ^-1β̇ + (y_β^T+ẽ^TPB)β̃ - β̇^TΓ^-1β̃
So, by using the chosen projection functions (<ref>), then we conclude that:
Ṽ̇ + λṼ ≤ ϵ_Ṽ + λα̃^TΓ^-1α̃ + λβ̃^TΓ^-1β̃ - α̃^TΓ^-1α̇ - α̇^TΓ^-1α̃ - β̃^TΓ^-1β̇ - β̇^TΓ^-1β̃.
We assume that the uncertainties α, β, and their time derivatives are bounded. Furthermore, the projection operators (<ref>) will also keep α̃ and β̃ bounded (see <cit.> for a detailed proof about these properties.) We define these bounds as follows:
||α̃|| ≤ α̃_b , ||β̃|| ≤ β̃_b , ||α̇|| ≤ α̇_b , ||β̇|| ≤ β̇_b.
Combining this with (<ref>), we have,
Ṽ̇+λṼ≤λδ_Ṽ,
where
δ_Ṽ=2||Γ||^-1(α̃_b^2+β̃_b^2+1/λα̃_bα̇_b+1/λβ̃_bβ̇_b) + 1/λϵ_Ṽ.
Thus, if Ṽ≥δ_Ṽ then Ṽ̇≤ 0. As a result, we always have Ṽ≤δ_Ṽ.
In other words, by choosing the adaptation gain Γ sufficiently large and P relatively small, we can limit the Control Lyapunov Function (<ref>) in an arbitrarily small neighborhood δ_Ṽ of the origin.
According to (<ref>) and (<ref>), achieving a small value for P depends on choosing a proper value for K_P, K_D, and Q_L. Therefore, the value of PD gains affects the stability of the whole system.
Finally, the tracking errors between the dynamics model (<ref>) and the reference model (<ref>), ẽ, and the error between the real and estimated uncertainty, α̃, β̃ are bounded as follows:
||ẽ|| ≤√(δ_Ṽ/||P||) ,
||α̃|| ≤√(||Γ||δ_Ṽ) ,||β̃|| ≤√(||Γ||δ_Ṽ).
Mohsen Sombolestan
received his B.Sc. degree in mechanical engineering in 2017 from Sharif University of Technology, Tehran, Iran, and his M.Sc. degree in mechanical engineering in 2020 from Isfahan University of Technology, Isfahan, Iran. He is working toward a Ph.D. in mechanical engineering at the University of Southern California, Los Angeles, CA, USA.
His research interests include control system design in robotic applications, especially legged robots, focusing on adaptive control and reinforcement learning.
Quan Nguyen
is an assistant professor of Aerospace and Mechanical Engineering at the University of Southern California (USC). Before joining USC, he was a Postdoctoral Associate in the Biomimetic Robotics Lab at the Massachusetts Institute of Technology (MIT). He received his Ph.D. from Carnegie Mellon University (CMU) in 2017 with the Best Dissertation Award.
His research interests span different control and optimization approaches for highly dynamic robotics, including nonlinear control, trajectory optimization, real-time optimization-based control, and robust and adaptive control. His work on the MIT Cheetah 3 robot leaping on a desk was featured widely in many major media channels, including CNN, BBC, NBC, ABC, etc. Nguyen won the Best Presentation of the Session at the 2016 American Control Conference (ACC) and the Best System Paper Finalist at the 2017 Robotics: Science & Systems Conference (RSS).
|
http://arxiv.org/abs/2307.07225v1 | 20230714084147 | Degaussing Procedure and Performance Enhancement by Low-Frequency Shaking of a 3-Layer Magnetically Shielded Room | [
"Fabian Allmendinger",
"Benjamin Brauneis",
"Werner Heil",
"Ulrich Schmidt"
] | physics.ins-det | [
"physics.ins-det",
"hep-ex",
"nucl-ex",
"physics.app-ph"
] |
[email protected]
Physikalisches Institut, Universität Heidelberg
Physikalisches Institut, Universität Heidelberg
Institut für Physik, Universität Mainz
Physikalisches Institut, Universität Heidelberg
We report on the performance of a Magnetically Shielded Room (MSR) intended for next level ^3He/^129Xe co-magnetometer experiments which require improved magnetic conditions. The MSR consists of three layers of Mu-metal with a thickness of 3 mm each, and one additional highly conductive copper-coated aluminum layer with a thickness of 10 mm. It has a cubical shape with a walk-in interior volume with an edge length of 2560 mm. An optimized degaussing (magnetic equilibration) procedure using a frequency sweep with constant amplitude followed by an exponential decay of the amplitude will be presented. The procedure for the whole MSR takes 21 minutes and measurements of the residual magnetic field at the center of the MSR show that |B|< 1nT can be reached reliably. The chosen degaussing procedure will be motivated by online hysteresis measurements of the assembled MSR and by Eddy current simulations showing that saturation at the center of the Mu-metal layer is reached. Shielding Factors can be improved by a factor ≈ 4 in all directions by low frequency (0.2 Hz), low current (1 A) shaking of the outermost Mu-metal layer.
Degaussing Procedure and Performance Enhancement by Low-Frequency Shaking of a 3-Layer Magnetically Shielded Room
Ulrich Schmidt
August 12, 2023
=================================================================================================================
§ INTRODUCTION
Magnetically Shielded Rooms (MSRs) have become increasingly important both in fundamental physics and applied research due to technological developments in the field of large-volume, and thus walk-in, accessible shielding rooms. Residual magnetic fields of less than 1 nT with field gradients below 10 pT/cm with strong damping of electromagnetic distortions over a wide range of frequencies can be achieved in a volume of >1 m^3.
There are various applications in applied research like Biomagnetism <cit.> or ultra-low field nuclear magnetic resonance <cit.>. Magnetic shielding is also crucial for experiments in fundamental physics, prominently in searches for Electric Dipole Moments (EDMs) of elementary particles or composite particles like the neutron <cit.> or the neutral ^199Hg atom <cit.>.
The MSR described in this work is intended for the ^3He/^129Xe co-magnetometer experiment, a high precision experiment at low energies which can address a variety of fundamental questions associated with symmetry violations in nature <cit.>. Worth mentioning here are: the measurement of the CP-violating EDM of the ^129Xe atom <cit.>, looking for a violation of Lorentz Invariance <cit.>, and searching a spin-dependent P- and CP-violating nucleon-nucleon interaction mediated by Axions or axion-like particles <cit.>. In short, the measurement principle is: Two co-located spin samples (hyperpolarized ^3He and ^129Xe gas) are used as a sensitive probe for these non-magnetic spin interactions <cit.>. Their Larmor frequencies measured by low-temperature SQUIDs are compared while the effect under investigation is varied by, for example, periodically inverting an applied electric field leading to a corresponding modulation of the Larmor frequency of ^129Xe for a non-zero EDM.
Next level ^3He/^129Xe co-magnetometer experiments require improved magnetic conditions as follows: Firstly, statistical uncertainties are inversely proportional to the Signal-to-Noise Ratio (SNR) which can be increased by suppressing the effect of external noise sources. A common parameter to describe the performance of magnetic shields is the Shielding Factor (SF), the ratio of the magnetic flux density B measured at the center of the shielded volume and the magnetic flux density without shielding at the same position. In the relevant frequency range (≈ 1 to 20 Hz) shielding factors have to exceed 3000. As the shielding material in itself is a noise source due to Johnson noise <cit.>, a larger distance (>1 m) of the magnetometers to the shielding material is necessary. Secondly, measurement sensitivity is influenced by the stability and homogeneity of the magnetic field inside the MSR. Spin coherence times T_2^* of several hours can be achieved only with low magnetic field gradients on the order of 10 pT/cm <cit.>. Statistical uncertainties are proportional to (T_2^*)^-3/2, so that measurement sensitivity benefits strongly from low gradients in a central volume (20x20x20 cm^3) containing the spin samples. Experience from previous experiments teaches that our measurement method using co-magnetometry works well, if the relative drift of the resulting inner magnetic field is less than 10^-3 per hour. Since the guiding field is ≈1 µT, this corresponds to a drift of the residual field of less than 1 nT/h. As static magnetic field gradients can be further compensated by a gradient coil system inside the MSR, residual fields of less than 1 nT should be aimed for to keep the resulting gradients and their potential temporal drifts sufficiently low.
It should be noted that the magnetic requirements for the ^3He/^129Xe co-magnetometer experiment are more relaxed compared to neutron EDM experiments, e. g., at PSI <cit.>, which need the specified residual field values and field homogeneity over a larger volume of 1x1x1 m^3 due to the larger EDM spectrometer size.
To meet these requirements, a magnetically shielded room was installed at Physikalisches Institut at Heidelberg, Germany. The design of this MSR, its degaussing procedure and resulting magnetic properties are the focus of this paper.
The outline of the work is as follows:
In the first part, specifications and design details of the magnetic shielding are given, followed by a description of the degaussing (demagnetization) procedure with its electronics setup and degaussing sequence. The next part covers the magnetic performance including residual field and shielding factors after proper degaussing without and with shielding factors enhancement by low-frequency shaking. The third part motivates the unusual choice of the degaussing parameters (especially the low frequency of 0.5 Hz) by means of an online hysteresis measurement of the assembled MSR and Eddy-current simulations. Finally, an outlook will be given pointing to future magnetic gradients measurements and an additional active residual field and gradients compensation.
§ PROPERTIES OF THE MAGNETICALLY SHIELDED ROOM
To attain the required magnetic conditions the usual approach of passive magnetic shielding was adopted: The sensitive experiment is enclosed by a high-permeability material which acts as a flux shunt for the external field that may vary in time (earth magnetic field, moving ferromagnetic objects like trucks, electric equipment). In this case, the MSR, manufactured by Vacuumschmelze <cit.>, has a cubical shape and consists of three concentric layers of Mu-metal, a NiFe-alloy, with a thickness of 3 mm each. The edge lengths are 2965 mm, 2605 mm and 2560 mm, respectively. The construction of each of the three Mu-metal layers was as follows: Layers of thin (600 µm) Mu-metal sheets were laminated crosswise to form larger (up to 1300x1300 mm^2) and thicker (3 mm) modules (flat sheets and edge pieces). Low magnetic resistance between modules was achieved by overlapping areas. The thickness of the modules was reduced at overlapping areas to ensure a constant thickness of 3 mm throughout the Mu-Metal wall. This way, air gaps were avoided.
An additional highly conductive copper-coated aluminum layer with a thickness of 10 mm between the outer and middle Mu-metal layer serves as an electromagnetic shield (Eddy-current layer) for higher frequencies (including RF shielding).
All shielding layers are fixed to an aluminum beam support structure that also acts as a rigid mounting frame for all components of the experiment inside the MSR, thereby preventing or reducing vibrations of individual components relative to each other and the Mu-metal walls.
The whole MSR rests on a rigid platform built from aluminum beams and plates with a total height of 210 mm. The reasons are, firstly, to increase the distance from the bottom Mu-metal floor to the lab floor, which contains magnetized steel reinforcement, and secondly, to allow future installation of active compensation coils for the ambient magnetic field.
The MSR is accessible through a door (clearance 2000 mm x 1000 mm) that is placed at the center of the front wall, slightly shifted to the bottom to ensure level entrance to the inside. The door is opened and closed manually. Latching clamps are activated pneumatically and ensure a constant and well-defined pressure between the overlapping shielding material of wall and door, and thereby, good magnetic contact. Similarly, good electrical contact between the individual parts of the Eddy-current layer prevents a degrading shielding factor at higher frequencies.
Inside the MSR, a walkable wooden floor allows the manual installation of components. The wooden floor rests on the support structure. All components of the measurement setup inside the MSR (like coils, magnetometers) can be attached to the aluminum mounting frame. Therefore, the two inner Mu-metal layers have cutouts for 9 mounting points per wall including ceiling and floor, except the front wall, which has 6.
There are several RF-shielded ports and openings with a diameter between 60 and 160 mm that are used for ventilation, transfer of polarized gases etc. The ports are positioned closely to corners and edges of the MSR to minimize the disturbance of the magnetic field inside.
The MSR is equipped with degaussing coils consisting of 5 turns of 2.5 mm^2 copper wire around each of the 12 edges of each Mu-metal layer. There are cutouts at the corners of the respective Mu-metal layers for feeding through the coils and connecting wires. Four coils each are connected in series to a degaussing coil unit generating a closed magnetic flux loop through four of the Mu-metal walls. Fig. <ref> shows the magnetization vectors inside a Mu-metal layer generated by such a single degaussing coil unit (result of a simulation using Radia <cit.>). The magnetization is homogeneous inside four walls, while the magnetic flux leakage into the remaining two walls (in our example the top and bottom walls) of the Mu-metal layer is negligibly small. Usually all three spatial directions are degaussed successively, covering each wall twice. However, the flux loops using only two degaussing coil units cover all six walls already, which can be utilized in time-saving degaussing procedures (see Tab. <ref>). The other two Mu-metal layers have an identical coil configuration, so that there are in total 9 degaussing coil units. Each degaussing coil unit is connected to the external current source via twisted pair feed lines with an additional electric shielding. Whenever a degaussing coil unit is currently not in use, double switching relays break both connections to the amplifier to avoid feeding RF noise into the MSR.
The experiments coordinate system is indicated in Fig. <ref>. The main axes are parallel to the Mu-metal walls with the x-axis left to right, y-axis bottom to top, and z-axis back to front wall (seen from outside the MSR, facing the door; the front wall contains the door). The origin of the coordinate system coincides with the center of the MSR.
There are three quadratic Helmholtz-like coil pairs for shielding factor measurement with 7 turns per single coil of 0.75 mm² copper wire along the outer edges of the MSR. The dimensions are approximately 3 m by 3 m with a spacing of 3 m. The hypothetical magnetic field produced by these coils at the center of the MSR if no Mu-metal was present can be calculated according to
[ B_x; B_y; B_z ] = [ 2.196 0 0; 0 2.162 0; 0 0 2.196 ] · [ I_x; I_y; I_z ] · µT/A
where I_x, I_y and I_z are the coil currents.
§ DEGAUSSING PROCEDURE
The inner two Mu-metal walls of the MSR will be magnetized if exposed to stronger magnetic fields which is usually the case when the door of the MSR has been opened, for example. To reach reproducible magnetic conditions (residual field and shielding factors), degaussing, i. e., the elimination of a remnant magnetization of the whole MSR is necessary. This is usually achieved by applying a decreasing sinusoidal current to the degaussing coil units. The following conditions must be met to effectively demagnetize: Firstly, the maximum peak applied magnetic field (or current) must be sufficiently high to saturate the magnetic material in every region of the shielding. Secondly, the amplitude decrease must be slow enough, so that consecutive maxima have a small difference (rule of thumb: 1%), leading to a random orientation of magnetic domains with zero magnetization <cit.>.
§.§ Degaussing setup
The degaussing hardware consists of the following main building blocks as depicted in the bottom part of Fig. <ref>: waveform generation, power amplifier, and degaussing coil units. The waveform is generated in software using a microcontroller which updates a digital-to-analog converter (DAC) with a resolution of 20 bits (Analog Devices AD5790) periodically with an update rate of 1 kHz. Specific degaussing waveforms are discussed in the following paragraph. After low-pass filtering (cutoff frequency ≈200 Hz) the signal is fed into a 4-quadrant power amplifier (Töllner TOE 7621-20, <cit.>) with maximum voltage and current output of ±20 V and ±16 A. The output of the power amplifier is connected to the different degaussing coil units by double switching relays controlled by the microcontroller.
Note that there is no transformer in our setup, which is often used to eliminate any current offset <cit.>. Here, we use a different approach: Feeding back the digitized current and voltage output of the power amplifier to the microcontroller allows for an online offset compensation in software, i. e., slowly adjusting an offset to the DAC output to compensate offset errors further down the signal chain. The main reason lies in the unusually low degaussing frequency below 1 Hz (needed to effectively degauss the 3 mm thick Mu-metal walls, see Sec. <ref>), which makes transformer coupling impractical.
§.§ Degaussing waveforms and sequence
The concept of using a microcontroller and a DAC allows for freely programmable degaussing waveforms including a combination of amplitude and frequency modulation. Several degaussing sequences have been tested and compared to each other with respect to shielding performance and total degaussing duration (see Tab. <ref>). In our case it proved useful to describe waveforms as a combination of amplitude and frequency modulation as depicted in Fig. <ref>. Waveforms are subdivided into four sections: In the first section (duration: a few seconds) the waveform is a sinusoidal signal with frequency f_0 and linearly increasing amplitude, followed by a second section (duration: a few seconds) with constant frequency and constant amplitude. In the third section ("sweep"), the frequency increases linearly from f_0 to f_1 within a period of t_sweep while the amplitude stays constant. In the last section, the waveform is a sinusoidal signal with frequency f_1 and an exponentially decreasing amplitude, where r gives the ratio of consecutive maxima. The fourth section lasts until the remaining amplitude is too small to be resolved by the 20-bit DAC.
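A minimal Python sketch of such a waveform generator is given below. The section durations, full-scale amplitude, and the cutoff corresponding to the DAC resolution are assumptions chosen for illustration; only the overall structure (ramp, hold, linear frequency sweep, exponential decay with ratio r between consecutive maxima) follows the description above:

import numpy as np

def degauss_waveform(f0=0.5, f1=10.0, t_ramp=4.0, t_hold=4.0, t_sweep=100.0,
                     r=0.99, amp=1.0, amp_min=1e-6, fs=1000.0):
    dt = 1.0 / fs
    # decay time constant tau so that consecutive maxima at frequency f1 have ratio r
    tau = -1.0 / (f1 * np.log(r))
    t_decay = -tau * np.log(amp_min / amp)   # stop once the amplitude falls below amp_min
    plan = [(t_ramp,  lambda t: f0,                               lambda t: amp * t / t_ramp),
            (t_hold,  lambda t: f0,                               lambda t: amp),
            (t_sweep, lambda t: f0 + (f1 - f0) * t / t_sweep,     lambda t: amp),
            (t_decay, lambda t: f1,                               lambda t: amp * np.exp(-t / tau))]
    phase, out = 0.0, []
    for T, f_of_t, a_of_t in plan:
        for t in np.arange(0.0, T, dt):
            phase += 2.0 * np.pi * f_of_t(t) * dt   # accumulate phase for a continuous signal
            out.append(a_of_t(t) * np.sin(phase))
    return np.asarray(out)

w = degauss_waveform()
print(len(w) / 1000.0, "s total, peak amplitude", round(float(w.max()), 3))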
As all three layers have to be demagnetized, the question arises which layer sequence and there in turn which ordering of the two or three flux loop directions should be used. We defined and tested degaussing sequences as shown in Tab. <ref>. Here, we define the degaussing directions as follows: y-direction means the closed flux loop in the x-z-plane, i. e., a magnetic flux through the left, back, right, front walls as depicted in Fig. <ref>. Correspondingly, we have the x-direction with a closed flux loop in the y-z-plane, and the z-direction with a closed flux loop in the x-y-plane.
The impact of different degaussing waveforms and sequences on the MSR shielding performance (especially the residual field) will be covered in the next section.
§ SHIELDING PERFORMANCE MEASUREMENT SETUP AND RESULTS
The magnetic shielding performance is usually characterized by 1) Shielding Factors (SFs), i. e., the attenuation factors for external magnetic perturbations which generally are frequency and amplitude dependent, 2) the residual magnetic field inside the MSR (typically measured at the center) and its drift over time, and 3) magnetic field gradients in a volume of interest <cit.>. The measurements of these characteristic values are covered in the following sections.
§.§ Magnetometer gain and offset calibration
Here, we use the following Fluxgate magnetometer models from Stefan Mayer Instruments <cit.>: the tri-axial model FLC3-70 with an amplitude noise density of ρ=120 pT/√(Hz) at 1 Hz, and the single axis model FL1-100 with lower noise of ρ=20 pT/√(Hz) at 1 Hz. We calibrated the magnetometers using a long solenoid with precisely known coil factor and a sinusoidal current (AC measurement, f=1 Hz). Gain drifts are small and of minor concern in the context of this paper. However, magnetometer offsets (and their drift) have to be precisely known to measure small residual fields of order 1 nT reliably, as this is close to the sensitivity limit of the Fluxgates. The solution is: We monitor the offset by rotating the sensor on a non-magnetic 2-axis turntable, so that each axis of the sensor is inverted at least once. The sensor reading due to an actual magnetic field changes sign; however, the offset stays constant. Fig. <ref> shows the mechanical setup with the non-magnetic 2-axis turntable made of plastic which can be turned by two sets of non-magnetic Bowden cables (polyamide string in a PEEK or polyamide cable housing, respectively). The part outside the MSR pulling the Bowden cables is actuated by two stepper motors. To reduce mechanical vibrations, the setup is rotated slowly; a 360^∘ rotation takes typically 20 s. The sensor is first rotated by 360^∘ in the horizontal plane, then by 360^∘ around the symmetry axis of the cylindrical sensor housing. A sinusoidal fit to the data gives the sensor offsets (for each sensor axis individually) and the actual magnetic field amplitudes. The order of magnitude of sensor offsets is 10 nT, with drifts of less than 0.5 nT/h.
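The offset extraction amounts to fitting a constant plus one sinusoid period to the readings recorded during a full rotation. A Python sketch with synthetic data (the field, offset, and noise values are invented for illustration):

import numpy as np

theta = np.linspace(0.0, 2 * np.pi, 200, endpoint=False)   # rotation angle during one 360-degree turn
true_field, offset_true = 0.8, 12.3                        # nT, made-up values
reading = offset_true + true_field * np.sin(theta + 0.4) + 0.02 * np.random.randn(theta.size)

# Linear least-squares fit of reading = c0 + c1*cos(theta) + c2*sin(theta)
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
c0, c1, c2 = np.linalg.lstsq(A, reading, rcond=None)[0]
print("offset:", c0, "nT, field amplitude:", np.hypot(c1, c2), "nT")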
§.§ Shielding Factors
Shielding Factors have been determined separately for each main axis for a set of different frequencies and amplitudes using the following method: A signal generator and a 4-quadrant amplifier (Töllner model TOE 7621-40, max. voltage ± 40 V, max. current ± 8 A) control a current through one of the three Helmholtz-like coil pairs at the outer edges of the MSR corresponding to the x-, y-, and z-direction, respectively, generating an excitation field. The calibrated vector Fluxgate (tri-axial model FLC3-70, Stefan Mayer Instruments) was positioned at the center point of the MSR. Sensor output and current monitor output of the amplifier are digitized using a multi-channel 24-bit ADC. SF measurements were performed after degaussing with sequence #3 (see Tab. <ref>). A typical external signal (calculated signal at the position of the sensor according to Eqn. <ref> without shielding generated by one of the Helmholtz coil pairs with AC current amplitude I and frequency f) and the corresponding internal signal are shown in Fig. <ref>. Amplitudes and phases are extracted by fitting a sinusoidal model including an offset and linear drift, as well as up to 3rd-order harmonics to the data (harmonics generation is a result of the non-linear behavior of Mu-metal). SFs are calculated by dividing the theoretical excitation field amplitude without shielding at the sensor position (according to Eqn. <ref>) by the measured absolute signal amplitude. It is interesting to note that, firstly, an anisotropic behaviour of the MSR is possible, meaning that an excitation in z-direction, for example, can lead to measurable field amplitudes not only in the z-sensor, but also in the x- and y-sensors. Therefore, the absolute value has been used in the denominator in the shielding value definition above. Secondly, the resulting inner field is phase shifted with respect to the excitation field outside the MSR. Measured phase shifts are given in Fig. <ref>.
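For a single sensor axis, the amplitude extraction and SF computation can be sketched as follows (Python; the internal signal is synthetic, and in practice the absolute amplitude over all three sensor axes is used, as noted above):

import numpy as np

fs, f, T = 200.0, 0.1, 60.0                   # sampling rate [Hz], excitation frequency [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
# synthetic internal signal: fundamental + small 2nd harmonic + offset + slow drift
internal = 0.5e-9 * np.sin(2*np.pi*f*t + 0.7) + 0.05e-9 * np.sin(2*np.pi*2*f*t) + 1e-9 + 1e-12 * t

cols = [np.ones_like(t), t]                   # offset and linear drift
for k in (1, 2, 3):                           # fundamental and harmonics up to 3rd order
    cols += [np.cos(2*np.pi*k*f*t), np.sin(2*np.pi*k*f*t)]
coef = np.linalg.lstsq(np.column_stack(cols), internal, rcond=None)[0]
amp_internal = np.hypot(coef[2], coef[3])     # fitted fundamental amplitude

B_external = 1.75e-6                          # T, computed excitation amplitude without shielding
print("SF:", B_external / amp_internal)       # about 3500 for this synthetic signal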
Shielding factors do not only depend on frequency and amplitude, but also on the direction of the external signal and slightly on the magnetization status of the MSR. SFs decrease only after the MSR was massively magnetized which is a rare occurrence in practice (e. g., using tools containing permanent magnets like electric screwdrivers inside the MSR). SFs for a wider range of parameters are given in Figs. <ref> and <ref>.
Shielding Factors for all three directions as a function of frequency for an external amplitude of 1.75 µT are shown in Fig. <ref> (bottom data points, no shaking). SFs are constant at low frequencies up to 0.01 Hz and increase substantially with higher frequencies. At high frequencies, shielding is dominated by Eddy currents inside the highly conductive copper plated aluminum layer. Measurement uncertainties increase above 5 Hz due to the small internal signal (sensor noise limit) resulting from high SFs. Therefore, the SF for 10 Hz was measured using the single-axis Fluxgate magnetometer FL1-100 (Stefan Mayer Instruments) in the z-direction only.
Fig. <ref> shows the amplitude dependency of SFs
which increase with increasing excitation field strength. In practice, SFs at low excitation below 1 µT are most relevant as typical noise and disturbances of the ambient magnetic field are low (see Fig. <ref>, top). A quadratic model was fitted to the data (fit results see Fig. <ref>).
§.§ Shielding Factor enhancement by low frequency, low current Mu-metal shaking
The enhancement of Shielding Factors of a high-permeability magnetic shield using "shaking", i. e., the impression of an alternating magnetic field on the ferromagnetic material, has been known for a long time <cit.>. Shaking frequencies f_s were typically the power line frequency (50 or 60 Hz) or even higher frequencies f_s=400 Hz to 1 kHz <cit.>. The origin of this effect is not fully understood; D. Cohen states in <cit.>: "One concludes that there is no obvious explanation for the shielding enhancement; apparently the [magnetic] domains need only be in motion to obtain enhancement."
Here, we show that SFs can be improved by a factor 4 by low frequency, low current shaking of the outermost Mu-metal layer: The signal generator, power amplifier and one of the degaussing coil units (here: z-direction) are used to generate the continuous alternating magnetic field inside the Mu-metal walls. SFs were measured as before using Helmholtz-like coil pairs at the outer edges of the MSR and a tri-axial Fluxgate at the center point of the MSR. Fig. <ref> shows the resulting SF with shaking applied for a set of different shaking frequencies f_s as a function of the shaking current I_s. For a given f_s one finds a SF maximum at a certain current: I_s≈1 A for f_s=0.1 and 0.2 Hz. Optimal I_s increases with increasing f_s and probably lies outside the inspected range for f_s=20 Hz. Optimized SFs (e. g., for f_s=0.1, 0.2 or 5 Hz) are a factor ≈4 above SFs without shaking for a wide frequency range of the external signal generated by the Helmholtz coil pairs (see. Fig. <ref>) up to 1 Hz. Above 1 Hz, they approach SFs without shaking. This is to be expected as the induction of Eddy currents in the aluminum layer is the dominant shielding mechanisms for higher frequencies, which is independent of shaking.
Two observations: First, one would assume that the (symmetry-breaking) choice of the shaking direction would lead to an-isotropic shielding performance. However, we found that shaking improves shielding factors in all directions almost equally. Second, shaking improves shielding performance not only below the shaking frequency f_s, but also above.
The advantages of shaking at low frequencies are: The required electrical power for shaking at f_s=0.2 Hz and I_s=1 A is only 1 W compared to e. g. ≈250 W at f_s=20 Hz and I_s=16 A (see Fig. <ref>). This results in less heat deposited in the degaussing coils causing smaller temperature drifts of the surrounding Mu-metal. The smaller shaking current leads to a smaller shaking signal leakage into the interior of the MSR (which is already very small due to the closed magnetic flux loop inside Mu-metal walls). Limiting shaking to the outermost Mu-metal layer, which is the case here, effectively reduces shaking-signal leakage into the MSR, because the two inner Mu-metal layers remain for shielding the shaking signal. Measured shaking leakage at f_s=0.2 Hz and I_s=1 A at the center of the MSR is less than 8 pT (measured with a lock-in technique with an integration time of several hours). If shaking-signal leakage into the MSR is found to be of potential concern, tuning of the exact shaking frequency so as not to interfere with the actual measurement signal is possible. In our case, the Larmor frequencies of ^129Xe and ^3He are ≈5 Hz and ≈12 Hz, respectively, so that shaking leakage at f_s=0.2 Hz is of no concern. Furthermore, 0.2 Hz is already in the elevated 1/f-noise region of the SQUID magnetometers <cit.>.
§.§ Residual magnetic field and drift
The calibrated vector fluxgate (tri-axial model FLC3-70, Stefan Mayer Instruments) was positioned at the center point of the MSR. After leaving the door open for 30 minutes to create a reproducible magnetized state of the MSR, the door was closed and one of the different degaussing sequences according to Tab. <ref> was applied. Once the sequence was finished, the internal magnetic field was monitored for at least 24 hours. Fluctuations of the magnetic field outside the MSR were monitored by an identical fluxgate positioned 1 m in front of the center of the front wall.
Fig. <ref> (blue) shows the typical drift of the residual field at the center of the MSR over one day after applying degaussing procedure #3. The residual field is slightly below 1 nT immediately after degaussing, then decreases to ≈ 200 pT within the next three hours, and stays stable with fluctuations of less than 200 pT. This drift of the inner magnetic field within the first hours is reproducible. Most likely it is caused by a magnetic relaxation or disaccommodation <cit.> of the innermost Mu-metal layer as there is no obvious correlation between the internal and external magnetic field (see Fig. <ref>).
The upper limits of absolute residual fields after applying the different degaussing sequences are listed in Tab. <ref>.
The results show, that an excellent residual magnetic field inside the MSR can be achieved with a degaussing sequence using a frequency sweep (sequence # 3 in Tab. <ref>) in 21 minutes.
§.§ Estimation of magnetic field gradients
As magnetic field homogeneity strongly influences the spin coherence time and thereby the measurement sensitivity, the gradients in a central volume (200x200x200 mm^3) containing the spin samples are of great interest. Therefore, a vector Fluxgate (tri-axial model FLC3-70, Stefan Mayer Instruments) was mounted on a non-magnetic rail system. This allowed for residual-field measurements not only at the center of the MSR, but also over the larger span from wall to wall. We chose a straight line along the x-direction, i. e., from x= -1020 to +1020 mm, through the center of the MSR (y=z=0). Magnetometer offsets were determined, and then, shortly after degaussing with sequence # 3 (see Tab. <ref>), the sensor was moved in steps of 60 mm by pulling on strings which were guided outwards via pulleys. Immediately after the measurement process was completed, magnetometer offsets were determined again. The field components as a function of x are shown in Fig. <ref>. Magnetic field gradients were determined by taking the derivatives with respect to x of 3rd-order polynomial fits to the measurement data. The magnitude of the individual gradients was less than 10 pT/cm in the central volume. Note that only three of the five independent magnetic gradients were determined; however, all are expected to be on the same order of magnitude due to the symmetry of the MSR. The field homogeneity achieved is high enough so that transverse spin relaxation times T_2^* of many hours will be reached under measurement conditions described, e. g., in <cit.>.
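The gradient estimate from the rail scan is a simple polynomial fit followed by a derivative; a Python sketch with placeholder field values standing in for the fluxgate readings:

import numpy as np

x = np.arange(-1020.0, 1021.0, 60.0) / 10.0     # sensor positions along the rail [cm]
Bx = 0.2 + 0.004 * x + 1e-5 * x**2              # made-up field components [nT]
By = -0.1 - 0.006 * x
Bz = 0.05 + 0.002 * x

for name, B in (("dBx/dx", Bx), ("dBy/dx", By), ("dBz/dx", Bz)):
    p = np.polyfit(x, B, 3)                     # 3rd-order polynomial fit
    dp = np.polyder(p)                          # derivative polynomial
    print(name, "at x = 0:", np.polyval(dp, 0.0) * 1000, "pT/cm")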
§ MOTIVATION FOR THE CHOSEN DEGAUSSING PROCEDURE
In this section, we motivate the unusual choice of our degaussing procedure (especially the low frequency of 0.5 Hz and the frequency sweep, see Fig. <ref>) by means of an online hysteresis measurement of the assembled MSR and Eddy-current simulations. The measured material properties are used as input parameters for the simulations.
§.§ Hysteresis measurements
The material properties, especially the magnetic permeability μ_r, of Mu-metal in the assembled MSR are of great interest, as mechanical stress during assembly might lead to a degradation of the magnetic properties, e. g., reduced μ_r. Measuring the magnetic hysteresis curve, i. e., the magnetic flux density B vs. the magnetic field H gives access not only to μ_r, but also to a possible remanence. The degaussing setup (see Fig. <ref>) can be used to measure the magnetic hysteresis curve of the Mu-metal of the fully assembled MSR assuming homogeneous material properties (wall thickness and permeability, for example) by the following technique:
The magnetic field inside the closed flux loop (as depicted in Fig. <ref>) with total length l=4·3 m embracing the four walls is given by
H=N_1· I/l .
Here, N_1=20 is the winding number of the excitation coil (=degaussing coil unit), carrying a current of I.
A pickup coil with winding number N_2=10 was placed along the edge connecting two walls of the outer Mu-metal layer. A high impedance measurement of the induction voltage U_ind gives access to the magnetic flux density B via
U_ind = -N_2dΦ/dt=-N_2AdB/dt
⇒ B = -1/N_2A∫ U_ind dt+C .
Here, A=3·0.003 m^2 is the cross section of the Mu-metal wall covered by the pickup coil. The integration constant C is of no further importance and will be adjusted later so that the hysteresis curve is symmetrical around B=0. In practice, U_ind is sampled with a frequency f_ADC, so that the integral in Eqn. <ref> can be approximated using
S=∫ U_ind dt ≈ ∑_i U_ind(t=i/f_ADC)·1/f_ADC .
The measurement process is as follows (see Fig. <ref>): A sinusoidal signal with frequency f is fed to the power amplifier that delivers a proportional current with a typical amplitude of I_0=15 A to the excitation coil. The current-monitor output of the power amplifier and the pickup-coil voltage are monitored using a 18-bit ADC with adjustable sampling frequency (typically f_ADC=10 kHz). The current and induction-voltage data are averaged over several periods (typically 10 to 100) to reduce noise. Then the voltage integral S is calculated using Eqn. (<ref>). Typical I, U_ind and S curves for f=0.5 Hz are shown in Fig. <ref>. Using Eqns. (<ref>) and (<ref>), and plotting B(t) vs. H(t) gives the hysteresis curve (see Fig. <ref>, top). For low frequencies f=0.02 Hz to f=0.5 Hz, one can easily identify the two branches for increasing and decreasing H and the saturation value at B≈ 0.4 T. The slope gives the magnetic permeability μ_0μ_r with μ_r≈ 2.5·10^5 at the steepest part of the narrow f=0.02 Hz curve.
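The reconstruction of the hysteresis curve from the sampled current and pickup voltage can be sketched as follows (Python; the waveforms here are synthetic stand-ins for the averaged ADC data, generated from a saturating B(H) toy model purely for illustration):

import numpy as np

N1, N2 = 20, 10                       # winding numbers of excitation and pickup coil
l, A = 4 * 3.0, 3.0 * 0.003           # flux-path length [m], wall cross-section [m^2]
f_adc = 10000.0                       # sampling frequency [Hz]

def hysteresis_from_samples(I, U_ind):
    H = N1 * I / l                                # Eqn. for H
    S = np.cumsum(U_ind) / f_adc                  # running integral of the induction voltage
    B = -S / (N2 * A)                             # Eqn. for B, up to the constant C
    B -= 0.5 * (B.max() + B.min())                # choose C so the loop is symmetric around B = 0
    return H, B

# Placeholder waveforms standing in for one averaged excitation period at 0.5 Hz:
t = np.arange(0.0, 2.0, 1.0 / f_adc)
I = 15.0 * np.sin(2 * np.pi * 0.5 * t)
U = -N2 * A * np.gradient(0.4 * np.tanh(1.26 * N1 * I / l), t)   # consistent fake pickup signal
H, B = hysteresis_from_samples(I, U)
print("H range:", H.min(), "to", H.max(), "A/m;  B range:", B.min(), "to", B.max(), "T")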
The strongly non-linear relation between the magnetic flux density B and the magnetic field H inside the Mu-metal can be modelled according to:
B(H) = α (2/π) arctan(β H) + γ H
with the three free parameters α (saturation), β (linked to μ_r) and γ (a linear contribution). Note that, here, we use a purely phenomenological model with no remanence (Mu-metal is very soft-magnetic) with the sole purpose of extracting the relevant material properties that will be used as input parameters for Eddy-current simulations covered in the next section. From hysteresis measurements of the MSR walls at very low frequencies (f=0.01 Hz) and H between ≈-24 and +24 A/m, we extracted the parameters α=0.4 T, β=1.26 m/A and γ=1.6·10^-3 Tm/A. Note that the model according to Eqn. (<ref>), especially the linear contribution, is only valid for small magnetic fields |H|<24 A/m. The linear increase was also observed in <cit.> for a similar magnetic field of H≈20 A/m.
For increasing frequency, the curves become broader and approach ellipses. The reason lies in the induction of Eddy currents within the 3 mm thick Mu-metal walls which effectively shield the internal material, i. e., the resulting magnetic field at the center of the Mu-metal walls is reduced, with the consequence that saturation is not reached. Details and a numerical simulation of this effect will be covered in the next section.
In conclusion, our investigations have shown that the shape of measured hysteresis curves of a given high-permeability material depends strongly on the measurement frequency (see Fig. <ref>) and on the sample geometry (especially the material thickness). Hysteresis-curve measurements of assembled MSRs with a wall thickness >1 mm should be performed with low frequencies f<0.1 Hz. In our case we have extracted a μ_r≈ 2.5·10^5 of the assembled Mu-metal layer.
§.§ Eddy-current simulation
We wish to investigate the effective shielding of internal material within the Mu-metal wall (actual thickness: 3 mm) caused by Eddy currents.
The thickness of the Mu-Metal wall is small compared to its length and width. For our simulation, we consider a Mu-metal plate parallel to the xz-plane with infinite height and infinite length and finite thickness 2b=3 mm in y-direction (spanning from y=-b to y=+b). Note that, here, we defined a coordinate system which differs from the one introduced at the beginning (Fig. <ref>). The degaussing coils in our simulation now produce an external homogeneous oscillating field in z-direction of
H⃗(x⃗, t) = H_0 sin(2 π f t) e⃗_⃗z⃗ .
As the frequency f of the oscillating field is very low, we can use the quasi-static Maxwell equations and Ohm's law:
∇×H⃗ = j⃗
∇×E⃗ = -∂_t B⃗
∇·B⃗ = 0
j⃗ = σE⃗ .
with current density j⃗, electric field E⃗, and conductivity σ.
Furthermore, we need the relation between the magnetic flux density B and the magnetic field H inside the Mu-metal according to Eqn. (<ref>).
Taking the curl (∇×) of Eqn. (<ref>) and using the curl-of-the-curl identity, then applying Eqns. (<ref>) and (<ref>) we obtain:
∇(∇·H⃗) - ΔH⃗ = -σ∂_t (B⃗(H⃗))
We aim to find a solution to the set of differential Eqns. (<ref>) and (<ref>) with the boundary condition in Eqn. (<ref>).
To do so, we take advantage of the symmetry of the problem: As the Eddy currents always point in the x-direction and H⃗ always points in the z-direction, one can reduce the problem to one dimension:
H⃗=(0,0,h(y,t)) .
Then, the partial differential Eqn. (<ref>) can be simplified to
∂_y^2 h(y,t) = σ∂_t(h(y,t)) B^'(h(y,t))
with the boundary conditions
h(-b, t) = h(b, t) = H_0sin(2π f t)
h(y, 0) = 0 .
Here, B^' is the derivative of B with respect to its argument. The input parameters are the material properties of Mu-metal: the electrical conductivity σ=8.72·10^5 S/m <cit.>, and the B(H)-dependency, modelled according to Eqn. (<ref>) with the free parameters α=0.4 T, β=1.26 m/A and γ=1.6·10^-3 Tm/A, extracted from hysteresis measurements (see Sec. <ref>).
Numerical solutions of the differential equation were found using the FEM-Solver of Wolfram Mathematica on a rectangular 200x5000 grid in y- and t-direction. Simulated "hysteresis loops" can be extracted by numerically integrating the flux density in space to get the magnetic flux "seen" by the pick up coil (see Fig. <ref>, bottom). Simulation results reproduce the measured hysteresis loops for a broad range of frequencies. We therefore conclude, that our Mu-metal model is accurate enough for the intended purpose of studying the effect of the Eddy-current shielding of the Mu-metal core. Note that the hysteresis loop broadening with higher frequencies is solely a result of induced Eddy currents as the B(H) dependency in Eqn. (<ref>) models Mu-metal as completely soft-magnetic.
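As an alternative to the FEM solution, the one-dimensional problem can also be integrated with a simple explicit finite-difference scheme. The Python sketch below uses the material parameters quoted above; the grid resolution and the explicit time step are our own choices, selected only for numerical stability, and are not the settings of the paper's simulation:

import numpy as np

sigma = 8.72e5                           # S/m, conductivity of Mu-metal
alpha, beta, gamma = 0.4, 1.26, 1.6e-3   # B(H) model parameters from the hysteresis fit
H0, f, b = 24.1, 10.0, 1.5e-3            # excitation amplitude [A/m], frequency [Hz], half wall thickness [m]

def Bprime(h):
    # derivative of B(H) = alpha*(2/pi)*arctan(beta*H) + gamma*H
    return alpha * (2.0 / np.pi) * beta / (1.0 + (beta * h) ** 2) + gamma

Ny = 41
y = np.linspace(-b, b, Ny)
dy = y[1] - y[0]
dt = 0.4 * dy**2 * sigma * gamma         # explicit stability limit governed by min B'(h) ~ gamma
n_per = int(round(1.0 / (f * dt)))       # time steps per excitation period

h = np.zeros(Ny)
h_center = []
for k in range(3 * n_per):               # three periods: let the start-up transient settle
    t = k * dt
    h[0] = h[-1] = H0 * np.sin(2 * np.pi * f * t)        # boundary condition at the surfaces
    lap = (h[2:] - 2.0 * h[1:-1] + h[:-2]) / dy**2       # second derivative in y
    h[1:-1] = h[1:-1] + dt * lap / (sigma * Bprime(h[1:-1]))
    h_center.append(h[Ny // 2])

peak = np.max(np.abs(h_center[-n_per:]))
print("max |H(center)| / H0 over the last period:", peak / H0)   # roughly 0.5 at 10 Hz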
Fig. <ref> shows the magnetic field H normalized to the excitation field amplitude H_0=24.1 A/m as a function of time at the center (y=0, red curve) and at the surface (y=1.5 mm, black curve) of the 3 mm thick Mu-metal wall for f=0.5 Hz (top) and 10 Hz (bottom). One observes that for f=10 Hz, the maximum of the internal field (y=0) is reduced to H/H_0≈0.5 (and shifted in time), while for the lower frequency no relevant reduction of the internal field is observed. Fig. <ref> shows this effect, i. e., the maximum normalized field at the Mu-metal core, for a broader range of frequencies for the actual wall thickness 2· b=3 mm (black), and for 2· b=6 mm (red). Doubling the Mu-metal wall thickness reduces the frequency where H/H_0=0.5 from 10 to 2.5 Hz. Fig. <ref> shows that (for 2· b=3 mm) the maximum H is not only reduced at the center but also in a larger central volume, e. g., for f=10 Hz H/H_0≈0.5 for more than half of the Mu-metal volume (y from -0.9 to 0.9 mm).
The Eddy-current simulations have the following possible limitations: The assumption of infinite Mu-metal walls in two dimensions is a good approximation (3 mm wall thickness compared to the 3 m wall height and length, furthermore four walls generate a closed loop flux path); however, the edges might behave differently. Secondly, we assumed homogeneous material properties; variations in material thickness (due to small overlaps), permeability, or conductivity (which might be smaller between laminated sheets) are not accounted for. Finally, for a deeper understanding of the degaussing process, the magnetic properties of Mu-metal have to be described by a more detailed model: e. g., the frequency-dependent Jiles-Atherton model <cit.> considering the energy changes of magnetic domains during magnetization (or degaussing), or the computation-time efficient phase-shift approach <cit.>.
§.§ Consequences for degaussing and shaking method
For effective degaussing leading to a reproducible low residual field inside the MSR, the maximum peak applied magnetic field must be sufficiently high to saturate the magnetic material in every region of the shielding. This calls for a low degaussing frequency to saturate the innermost material. We chose f=0.5 Hz to have a large safety margin accounting for edge effects and material inhomogeneity like variations in thickness (due to small overlaps) or permeability. At f=0.5 Hz saturation is reached at the core of the Mu-metal even for a wall thickness of 6 mm (see Fig. <ref>). Note that for degaussing effectively at f=0.5 Hz, H_0≈24 A/m is a sufficiently large initial magnetic field strength. The reason is that for f=0.5 Hz the two branches of the hysteresis curve coincide in the region |H|>15 A/m (see Fig. <ref>).
Then, the amplitude decrease (see last section in Fig. <ref>) must be slow enough, so that consecutive maxima have a small difference (ratio of consecutive maxima r≈0.99), leading to a random orientation of magnetic domains. This is usually achieved by slowly decreasing the current through the degaussing coils which leads to long time constants at low degaussing frequencies and therefore long total duration (see #1 in Tab. <ref>).
An alternative can be deduced from Fig. <ref>:
Increasing the frequency of the degaussing current while the amplitude stays constant leads to decreasing resulting fields in the center of the Mu-metal. We chose a linear frequency increase (sweep) from f=0.5 to 10 Hz within t_sweep=100 s. Subsequently, f=10 Hz stays constant and the current amplitude decreases exponentially. Due to the higher frequency, the time constant is substantially shorter for a given r. Together with time-saving modifications of the degaussing sequence and r, a reproducible residual magnetic field below 1 nT can be achieved in 21 minutes (see #3 in Tab. <ref>).
Similarly, the Eddy-current simulations motivate shaking at low frequencies. A low-frequency shaking field penetrates to the core of the Mu-metal layer leading to increased shielding performance, while the shaking current can be drastically reduced (see Fig. <ref>).
§ SUMMARY AND OUTLOOK
We described the properties of the Magnetically Shielded Room (MSR), manufactured by Vacuumschmelze, intended for next level ^3He/^129Xe co-magnetometer experiments which require improved magnetic conditions. The degaussing (magnetic equilibration) procedure was improved for the 3 mm thick Mu-metal layers. The key is reaching saturation at the center of the Mu-metal layer by using a low initial degaussing frequency while reducing the necessary time for degaussing using a frequency sweep with constant amplitude followed by an exponential decay of the amplitude. Degaussing parameters were found by online hysteresis measurements and by Eddy-current simulations. The investigations have shown that Eddy currents have to be taken into account in hysteresis-curve measurements of high-permeability material like Mu-metal. Such measurements, e. g., on assembled MSRs with a wall thickness >1 mm, have to be performed with low frequencies f<0.1 Hz. Using higher frequencies causes a substantial broadening of the measured hysteresis curve and might lead to a false interpretation of the magnetic properties. In our case we have extracted a remarkably high permeability μ_r≈ 2.5·10^5 of the assembled Mu-metal layer. The degaussing procedure for the whole MSR takes 21 minutes and measurements of the residual magnetic field using Fluxgate magnetometers show that |B|< 1nT can be reached reliably. Shielding Factors can be improved by a factor ≈ 4 in all directions by low frequency (0.2 Hz), low current (1 A) shaking of four walls of the outermost Mu-Metal layer.
Determining the properties of the residual field inside the MSR, especially gradients and their temporal stability, is essential for the intended use of the MSR for ^3He/^129Xe co-magnetometer experiments. As a first estimate, we found that the order of magnitude of the gradients is 10 pT/cm in the central volume. However, noise and offset drift of Fluxgate magnetometers quickly limit the precision that can be achieved with the method above. Consequently, we intend to use the following method: Magnetic field gradients can be extracted very precisely and accurately from transverse relaxation rates of precessing spin samples, e. g., gaseous, nuclear spin polarized ^3He and ^129Xe atoms in a spherical cell, which was demonstrated in previous work <cit.>, where a resolution below pT/cm was reached. Finally, gradient compensation by a coil system is intended: systematically adjusting the gradient coil currents and simultaneously monitoring and minimizing transverse relaxation rates according to a downhill simplex algorithm <cit.> leads to minimized magnetic field gradients in the volume of interest which in this case is the volume occupied by the spin sample.
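The intended gradient minimization can be prototyped with an off-the-shelf downhill simplex implementation. In the Python sketch below, measure_relaxation_rate is a hypothetical stand-in for setting the gradient-coil currents and measuring the transverse relaxation rate; its internal model is invented for illustration only:

import numpy as np
from scipy.optimize import minimize

def measure_relaxation_rate(currents):
    # Placeholder: in the experiment this would set the coil currents, record the
    # precession signal, and return the fitted transverse relaxation rate.
    residual_gradient = np.array([8.0, -5.0, 3.0]) + 2.0 * np.asarray(currents)
    return 1e-3 + 1e-4 * np.sum(residual_gradient**2)

result = minimize(measure_relaxation_rate, x0=np.zeros(3), method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-9, "maxiter": 200})
print("optimal coil currents:", result.x, " minimized rate:", result.fun)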
We acknowledge the excellent support during the planning and construction phase of the MSR provided by L. Bauer, M. Hein, M. Wüst, M. Staab and J. Gerster of VACUUMSCHMELZE GmbH, Germany, and are grateful for fruitful discussions about amplitude-dependent shielding factors. We thank H.-J. Krause, FZ Jülich and S. Hummel, PI Heidelberg for their technical support. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), Grant No. 455449148.
§ AUTHOR DECLARATIONS
§.§ Conflict of Interest
The authors have no conflicts to disclose.
§.§ Author contributions
Fabian Allmendinger: Conceptualization (lead); methodology (lead); investigation (lead); formal analysis (lead); writing – original draft (lead); writing – review and editing (equal).
Benjamin Brauneis: Methodology; investigation; software; writing – review and editing (equal).
Werner Heil: Methodology; writing – review and editing (equal).
Ulrich Schmidt: Supervision; methodology; writing – review and editing (equal).
§ DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
http://arxiv.org/abs/2307.05129v1 | 20230711091122 | DFR: Depth from Rotation by Uncalibrated Image Rectification with Latitudinal Motion Assumption | [
"Yongcong Zhang",
"Yifei Xue",
"Ming Liao",
"Huiqing Zhang",
"Yizhen Lao"
] | cs.CV | [
"cs.CV"
] |
DFR: DEPTH FROM ROTATION BY UNCALIBRATED IMAGE RECTIFICATION
WITH LATITUDINAL MOTION ASSUMPTION
This work is supported by the National Key R&D Program of China (No. 2022ZD0119003), Nature Science Foundation of China (No. 62102145), and Jiangxi Provincial 03 Special Foundation and 5G Program (Grant No. 20224ABC03A05).
Yongcong Zhang^1,∗*Both authors contributed equally to this research. Yifei Xue^2,∗ Ming Liao^2 Huiqing Zhang^1 Yizhen Lao^1,††Corresponding author:[email protected].
^1 Hunan University
^2
Jiangxi Provincial Natural Resources Cause Development Center
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================
Despite the increasing prevalence of rotating-style capture (e.g., surveillance cameras), conventional stereo rectification techniques frequently fail due to the rotation-dominant motion and small baseline between views. In this paper, we tackle the challenge of performing stereo rectification for uncalibrated rotating cameras. To that end, we propose Depth-from-Rotation (DfR), a novel image rectification solution that analytically rectifies two images with two-point correspondences and serves for further depth estimation.
Specifically, we model the motion of a rotating camera as the camera rotates on a sphere with fixed latitude. The camera's optical axis lies perpendicular to the sphere's surface. We call this latitudinal motion assumption. Then we derive a 2-point analytical solver from directly computing the rectified transformations on the two images. We also present a self-adaptive strategy to reduce the geometric distortion after rectification. Extensive synthetic and real data experiments demonstrate that the proposed method outperforms existing works in effectiveness and efficiency by a significant margin.
Structure-from-Motion, stereo rectification, image matching
§ INTRODUCTION
Image rectification is vital for efficient stereo matching by forcing the point correspondence restricted in the same scan line (row). This process significantly reduces the computational cost for further depth estimation and thus is widely used in 3D vision applications, such as robotics, autonomous driving, and augmented reality.
Motivations. Classical image rectification techniques apply homographies on a pair of images (i.e., the master and the slave image) whose epipolar geometry is pre-computed. Thus the epipolar lines in the original images map to horizontally aligned lines in the transformed images <cit.>.
However, we found that ubiquitous 2D rotating cameras, such as surveillance cameras and tripod head cameras on UAVs (Fig. <ref>), fail under conventional image rectification due to three shortcomings shared by existing works (Fig. <ref>):
1) Cumbersome calibration. The works of <cit.> require off-line intrinsic calibration and have to fix the camera setting, such as focal length. However, such cumbersome and stringent requirements are challenging to maintain in real-world applications.
2) Poor epipolar geometry estimation with short baseline. Another type of rectification methods <cit.> demand estimated epipolar geometry as input. Note that accurate epipolar computation is highly dependent on establishing a sufficient baseline between images so that the rectification can be reliably estimated. However, the rotating cameras produce extremely short baselines that violate such an assumption.
3) Over-distorted rectification. Existing solutions could lead to significant geometric distortion among rectified images once the slave and master images have significant relative rotation, which is not preferred for high-quality depth-based applications <cit.>.
Contributions. This paper tackles the challenge of stereo rectification for uncalibrated rotating cameras with extremely short baselines. To this end, we propose a novel image rectification solution, dubbed Depth-from-Rotation (DfR), that analytically computes the rectifying transformations of the two images from only two point correspondences and serves as the basis for further depth estimation. Specifically, we model the motion of a rotating camera as the camera rotating on a sphere at a fixed latitude, with its optical axis perpendicular to the sphere's surface. We call this the "latitudinal motion assumption". We then derive a 2-point analytical solver that computes the rectifying transformations for the two images directly. We also present a self-adaptive strategy to reduce the geometric distortion after rectification.
Previous research <cit.> presents a structure-from-motion (SfM) solution for spherical motion, which is similar to our latitudinal motion. However, we highlight that <cit.> requires accurate pre-calibration of the intrinsics, while the proposed DfR is calibration-free. Besides, the radius of the sphere in <cit.> is assumed to be arm length (≈0.8 m), whereas it is only on the order of a centimeter in the rotating-camera case. The most related work <cit.> rectifies uncalibrated cameras with slight movements. Nevertheless, that method assumes a small but pure translation and neglects the rotation. In contrast, we consider both the rotation and the translation produced by latitudinal motion.
Differing from previous methods, our contributions are:
∙ We describe the rotating camera as a latitudinal motion and find that rectified transformations for such a case can be computed directly and efficiently with only two-point correspondences.
∙ Extensive experiments demonstrate that the proposed rectification method DfR outperforms state-of-the-art techniques in alignment accuracy and distortion suppression. The code has been uploaded to GitHub[https://github.com/zhangtaxue/DFR].
§ RELATED WORK
Finding the epipolar geometry is the critical step in the previous approach to launching the rectification. A detailed review of the related techniques is in <cit.>. Typically, <cit.> proposes to compute the fundamental matrix first and then extract the homographies followed by a decomposing to reduce the geometric distortion.
Gluckman and Nayar <cit.> present a rectification approach that
minimizes the re-sampling effect. <cit.> introduces an implicit image rectification approach by computing homographies directly from point correspondences.
<cit.> alternatively utilizes a Quasi-Euclidean-based rectification algorithm.
More recently, <cit.> proposes a direct uncalibrated image rectification solution for the monocular camera with a small translation.
As opposed to existing methods that require calibrated stereo rig, estimated epipolar geometry, or pure translation assumption, we propose a practical solution called DfR to rectify two uncalibrated images captured by a rotating camera with an extremely short baseline.
§ METHODOLOGY
§.§ The Geometry of Latitudinal Camera Motion
We highlight that the rotating camera, such as a surveillance camera, follows a "latitudinal camera motion" trajectory. With this assumption, the camera rotates at a constant latitude ⌢C^1C^2 with a fixed distance from an origin, and the optical axis is aligned with the ray between the origin and camera center | OC^1 | (Fig. <ref>(a)(c)).
Assuming a 3D point 𝐏_i=[X_i,Y_i,Z_i]^⊤ is captured by C^1 and C^2 as 𝐩_i^1=[x_i^1,y_i^1] and 𝐩_i^2=[x_i^2,y_i^2]:
α_i [𝐩_i^j,1]^⊤ = 𝐊^j[𝐑^j|𝐭^j][𝐏_i,1]^⊤
where 𝐑^j∈ SO(3) and 𝐭^j∈ℝ^3 are rotation matrix and translation vector of j^th camera. α_i is a scalar w.r.t depth.
𝐊 is a 3 × 3 matrix known as the calibration matrix, containing the intrinsic parameters of a camera. Since the majority of existing works assume that the principal point is close to the image center and that 𝐊 remains constant between the two views <cit.>, we adhere to this assumption in this paper as well. Thus 𝐊 = 𝐊^1=𝐊^2 is defined as:
𝐊 = diag(f_x,f_y,1)
where f_x and f_y are focal lengths along x and y axes.
§.§ Finding the Rectified Matrices
The primary aim of the rectification we seek is to transform the stereo pair under latitudinal camera motion into a laterally displaced stereo setting (Fig. <ref>(d)). We present a novel rectification solution that achieves such a transformation.
Specifically, we derive the rotation-based rectification (Sect. <ref>) and introduce the analytical solver for rectified matrices (Sect. <ref>). We also present a self-adaptive strategy to suppress the geometric distortion (Sect. <ref>) and integrate these steps into a complete rectification pipeline (Sect. <ref>).
§.§.§ Derivation of Rectified Matrices to C^1 and C^2
We show that stereo C^1↔ C^2 under latitudinal camera motion can be rectified to laterally displaced stereo C̃^1 ↔C̃^2 by applying two homographies 𝐇_1 and 𝐇_2 to C^1 and C^2 respectively.
Proposition 1. When two views C^1 and C^2 have a constant but unknown focal length and under latitudinal camera motion, after applying the 𝐇_1 and 𝐇_2 defined as
𝐇^1=[
[ cos(β)cos(α) -f_xsin(α)sin(β)/f_y -f_xcos(α)sin(β); f_ycos(β)sin(α)/f_x cos(α) -f_ysin(α)sin(β); sin(β)/f_x 0 cos(β) ]]
𝐇^2=[
[ cos(β)cos(α) f_xsin(α)sin(β)/f_y f_xcos(α)sin(β); -f_ycos(β)sin(α)/f_x cos(α) -f_ysin(α)sin(β); -sin(β)/f_x 0 cos(β) ]]
to C^1 and C^2 respectively, they will be transformed into C̃^1 and C̃^2, which form a perfectly laterally displaced stereo pair, where α and β are two angles that control the rotations of C^1 and C^2.
Proof. As shown in Fig. <ref>(a), we assume the frames of C^1 and C^2 are w.r.t world coordinate system O. Thus, with latitudinal camera motion assumption, the poses of C^1 and C^2 are:
[𝐑^1|𝐭^1] = [𝑅(α,-β,0),𝐳]
[𝐑^2|𝐭^2] = [𝑅(-α,β,0),𝐳]
where 𝐳=[0,0,-1]^⊤ and 𝑅(α,β,γ) = R_z(α )R_y(β )R_x(γ) is the matrix multiplication of atomic matrices whose yaw, pitch, and roll angles are α,β, γ. Thus, 𝐑^1 and 𝐑^2 are:
𝐑^1=[
[ cos(β)cos(α) sin(α) cos(α)sin(β); -cos(β)sin(α) cos(α) -sin(β)sin(α); -sin(β) 0 cos(β) ]]
𝐑^2=[
[ cos(β)cos(α) -sin(α) -cos(α)sin(β); cos(β)sin(α) cos(α) -sin(β)sin(α); sin(β) 0 cos(β) ]]
Obviously, we can rotate C^1 and C^2 to coincide with the world frame by rotation matrices 𝐑^1^-1 and 𝐑^2^-1. Note that with such a setting, the two novel views are laterally displaced stereo. Thus, the homographies which can perform the rectification are:
𝐇^1=𝐊𝐑^1^-1𝐊^-1 𝐇^2=𝐊𝐑^2^-1𝐊^-1
Finally, by substituting Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), we obtain Eq. (<ref>) and Eq. (<ref>).
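For concreteness, the closed-form matrices of Proposition 1 can be evaluated directly, as in the minimal Python sketch below. The focal lengths and angles passed in are illustrative placeholders only; the actual DfR solver never estimates these quantities explicitly but recovers the rectifying entries from point correspondences (Sect. <ref>).

```python
import numpy as np

def rectifying_homographies(fx, fy, alpha, beta):
    """Evaluate the closed-form H^1 and H^2 of Proposition 1."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    H1 = np.array([
        [cb * ca,           -fx * sa * sb / fy, -fx * ca * sb],
        [fy * cb * sa / fx,  ca,                -fy * sa * sb],
        [sb / fx,            0.0,                cb],
    ])
    H2 = np.array([
        [cb * ca,            fx * sa * sb / fy,  fx * ca * sb],
        [-fy * cb * sa / fx, ca,                -fy * sa * sb],
        [-sb / fx,           0.0,                cb],
    ])
    return H1, H2

# Illustrative values only; DfR itself never estimates fx, fy, alpha, beta.
H1, H2 = rectifying_homographies(fx=800.0, fy=800.0,
                                 alpha=np.deg2rad(5.0), beta=np.deg2rad(10.0))
```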
§.§.§ Computation of 𝐇^1 and 𝐇^2
∙ Decomposition of 𝐇^1 and 𝐇^2.
Following the homography decomposition introduced in <cit.>, we express the rectifying homographies in
Proposition 1 as the matrix multiplication of two 3×3 matrices:
𝐇^1 = 𝐇_s^1 𝐇_y^1 𝐇^2 = 𝐇_s^2 𝐇_y^2
where,
𝐇_s^1 = [ S_a S_b 0; 0 1 0; 0 0 1 ] 𝐇_s^2 = [ S_a -S_b 0; 0 1 0; 0 0 1 ]
𝐇_y^1 = [ 1 0 0; h_21 h_22 h_23; h_31 0 h_33 ]𝐇_y^2 = [ 1 0 0; -h_21 h_22 h_23; -h_31 0 h_33 ]
We use 𝐇^1_y and 𝐇^2_y to align the correspondences along the y-axis while 𝐇^1_s and 𝐇^2_s serve to reduce the geometric distortion of the transformed images.
∙ Computation of 𝐇^1_y and 𝐇^2_y. Notice that the elements of 𝐇^1_y and 𝐇^2_y share the same values h_21, h_22, h_23, h_31, and h_33 but with different signs.
Thus, we can compute 𝐇^1_y and 𝐇^2_y by recovering the values of these elements.
Proposition 2. Given two-point correspondences 𝐩_1^1↔𝐩_1^2 and 𝐩_2^1↔𝐩_2^2, we first arbitrarily set the values of h_22 and h_23 and then obtain
h_31 = t_1/h_22,
h_33 = 1/h_22,
h_21=t_1h_23+t_2h_22
where t_1 and t_2 can be extracted by 𝐭=𝐀^-1𝐛 with following definition:
[ -(x^2_1y^1_1+x^1_1y^2_1) (x^1_1+x^2_1); -(x^2_2y^1_2+x^1_2y^2_2) (x^1_2+x^2_2) ]_𝐀[ t_1; t_2 ]_𝐓 = [ -y^1_1+y^2_1; -y^1_2+y^2_2 ]_𝐛
where 𝐀 and 𝐛 are 2×2 matrix and 2×1 vector consisted by the coordinates of the two correspondences.
Proof. Given a point correspondence 𝐩_i^1↔𝐩_i^2, we minimize the vertical alignment error after applying 𝐇^1_y and 𝐇^2_y via
[𝐇_y^1, 𝐇_y^2] = min∑_i(𝐡^1_2𝐩^1_i/𝐡^1_3𝐩^1_i-𝐡^2_2𝐩^2_i/𝐡^2_3𝐩^2_i)^2
where 𝐡_x^y is the x^th row of 𝐇^y. Let
h^1_2p^1_i/h^1_3p^1_i-h^2_2p^2_i/h^2_3p^2_i = 0,
we can substitute elements of 𝐇 in Eq. (<ref>) and coordinates of 𝐩_i into Eq. (<ref>)
h_22h_31(-x^2_iy^1_i-x^1_iy^2_i) +(-h_23h_31+h_21h_33)(x^1_i+x^2_i)=(y^2_i-y^1_i)
where we force h_22h_33=1 since a homography matrix is defined up to scale. Then, with the two point correspondences i=1,2, we obtain Eq. (<ref>).
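A minimal Python sketch of the two-point solver in Proposition 2 is given below. It assumes pixel coordinates are already expressed relative to the image center (consistent with the principal-point assumption in Sect. <ref>); h_22 and h_23 are kept as free values here, with h_22 later replaced by the distortion-suppressing choice of Sect. <ref>.

```python
import numpy as np

def solve_Hy(p1_l, p1_r, p2_l, p2_r, h22=1.0, h23=0.0):
    """Two-point solver for H_y^1 and H_y^2 (Proposition 2)."""
    (x1l, y1l), (x1r, y1r) = p1_l, p1_r
    (x2l, y2l), (x2r, y2r) = p2_l, p2_r
    A = np.array([[-(x1r * y1l + x1l * y1r), x1l + x1r],
                  [-(x2r * y2l + x2l * y2r), x2l + x2r]])
    b = np.array([y1r - y1l, y2r - y2l])
    t1, t2 = np.linalg.solve(A, b)           # t = A^{-1} b
    # h22 may instead be set by the distortion-suppression rule,
    # h22 = sqrt((4 - W**2 * t1**2) / 2), once the image width W is known.
    h31 = t1 / h22
    h33 = 1.0 / h22                          # enforces h22 * h33 = 1
    h21 = t1 * h23 + t2 * h22
    Hy1 = np.array([[1.0, 0.0, 0.0], [ h21, h22, h23], [ h31, 0.0, h33]])
    Hy2 = np.array([[1.0, 0.0, 0.0], [-h21, h22, h23], [-h31, 0.0, h33]])
    return Hy1, Hy2, (t1, t2)
```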
∙ Computation of 𝐇^1_s and 𝐇^2_s. We employ the method introduced in <cit.> to compute S_a and S_b. Since this section does not contain our contributions, we provide only the essential information for understanding the remainder of this paper. More details of the algorithm are available in <cit.>.
§.§.§ Geometric Distortion Suppression
Recall that proposition 1 can hold with arbitrary value settings of h_22 and h_23. However, we point out that their values decide the geometric distortion level. Thus, a good value choice of h_22 and h_23 suppresses the distortion after rectification.
Experimentally, we found that the value of h_23 has little influence on the shape of the output frame. Thus, it is critical to decide the value of h_22. As shown in Fig. <ref>(b), we assume the vertices of the original frame are 𝐩_1=[-W/2,-H/2,1]^⊤, 𝐩_2=[W/2,-H/2,1]^⊤, 𝐩_3=[-W/2,H/2,1]^⊤, and 𝐩_4=[W/2,H/2,1]^⊤ (H and W are the height and width of the image). After rectification by applying 𝐇^1 and 𝐇^2, the four vertices become p̃_1, p̃_2, p̃_3, and p̃_4. Based on the description of 𝐇^1 and 𝐇^2 in Eq. (<ref>), we can measure the heights of the left edge and the right edge as
{[ l_left = p̃_3^y - p̃_1^y = 2Hh_22^2/(2-t_1W) - H; l_right = p̃_4^y - p̃_2^y = 2Hh_22^2/(2+t_1W) - H ].
To reduce the geometric distortion, we force l_left+l_right = 2H, which yields the following rule for setting h_22:
[ h_22 = √((4-W^2t_1^2)/2) ]
§.§ Pipeline
Note that only two point correspondences are required to perform the rectification. In practice, we propose a RANSAC-like framework to robustly estimate the rectifying matrices 𝐇^1 and 𝐇^2 using the vertical alignment error (VAE) between the y coordinates of a rectified correspondence p̃_l=[x̃_l, ỹ_l]^⊤ and p̃_r=[x̃_r, ỹ_r]^⊤:
VAE = 1/n∑_i=1^n|ỹ_l,i - ỹ_r,i|
We describe the complete rectification pipeline in Alg. <ref>.
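The RANSAC-like selection can be sketched as follows, assuming the `solve_Hy` helper from the previous snippet and matched point arrays of shape (n, 2); the iteration count is an arbitrary choice, not a value prescribed by Alg. <ref>.

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an (n, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def vae(Hy1, Hy2, pts_l, pts_r):
    """Vertical alignment error between rectified correspondences."""
    return np.mean(np.abs(apply_h(Hy1, pts_l)[:, 1] - apply_h(Hy2, pts_r)[:, 1]))

def dfr_ransac(pts_l, pts_r, iters=500, seed=0):
    rng = np.random.default_rng(seed)
    best_err, best = np.inf, None
    for _ in range(iters):
        i, j = rng.choice(len(pts_l), size=2, replace=False)
        try:
            Hy1, Hy2, _ = solve_Hy(pts_l[i], pts_r[i], pts_l[j], pts_r[j])
        except np.linalg.LinAlgError:
            continue                          # degenerate two-point sample
        err = vae(Hy1, Hy2, pts_l, pts_r)
        if err < best_err:
            best_err, best = err, (Hy1, Hy2)
    return best, best_err
```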
§ EXPERIMENT
§.§ Synthetic Data
∙ Comparison Methods. We compare the proposed method DfR against to two state-of-the-art works:
∘ Loop <cit.>: Classical uncalibrated stereo rectification approach.
∘ DSR <cit.>: Direct stereo rectification with small motion assumption.
∘ DfR: The proposed method in this paper.
∙ Metrics. We evaluate the solutions by two metrics:
∘ VAE: Vertical alignment error defined in Eq. (<ref>).
∘ NVD: We measure the geometric distortion by a normalized vertex distance. Let 𝐯_1=[0 0 1],𝐯_2=[W-1 0 1],𝐯_3=[0 H-1 1],𝐯_4=[W-1 H-1 1] be the four vertices. Then
NVD=d_1+d_2+d_3+d_4/√(W^2+H^2)
where d_i is the distance from 𝐯_i to its rectified point.
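As a reference, NVD can be computed as in the short sketch below for a single rectifying homography; whether the score is reported per view or accumulated over both views is an implementation choice not fixed by the definition above.

```python
import numpy as np

def nvd(H, w, h):
    """Normalized vertex distance for one homography and a w x h image."""
    V = np.array([[0, 0], [w - 1, 0], [0, h - 1], [w - 1, h - 1]], dtype=float)
    Vh = np.hstack([V, np.ones((4, 1))]) @ H.T
    Vr = Vh[:, :2] / Vh[:, 2:3]               # rectified vertex positions
    return np.linalg.norm(Vr - V, axis=1).sum() / np.hypot(w, h)
```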
∙ Setting. We generated two uncalibrated pinhole cameras filming a cube scene under latitudinal motion. We set the radius to 1 cm and varied the scene depth from 0.5 m to 200 m, the roll angle between the two views from 0 to 45 deg, and the pitch angle from 0 to 90 deg. The results are obtained by averaging the errors over 50 trials at each setting.
∙ Results. The results in Fig. <ref> show that the classical rectification method Loop <cit.> provides the most unstable estimation and the largest VAE, while DSR achieves more robust results and higher accuracy. The proposed method DfR outperforms both Loop and DSR by a significant margin. Similarly, DfR produces rectified results with much smaller and more stable NVD scores than DSR and Loop, which indicates that the proposed method introduces less geometric distortion.
Fig. <ref> explores in more detail the effect of different conditions on the image rectification results. We mainly vary three conditions: the pitch angle ry of the rotating camera swinging left and right, the roll angle rx of the camera swinging up and down, and the image noise. For each point in Fig. <ref>, the three conditions are given by the vertical axis, the horizontal axis, and the subheading. Under those conditions, the point is colored green if our proposed method is superior, blue if the method of Loop <cit.> is superior, and red if DSR <cit.> is superior. DfR achieves the best performance in both VAE and NVD under various stereo relative pose settings (roll and pitch).
∙ Ablation Study. We ablate the geometric distortion suppression strategy (Sec. <ref>) used in the proposed DfR. As shown in Fig. <ref>, DfR visually produces significantly less distortion than Loop and DSR. Quantitatively, Loop yields the largest NVD while DfR achieves the smallest. Notably, the geometric distortion becomes larger when we remove the proposed suppression strategy.
∙ Running Time. The experiment was run on a laptop with an Intel i5 CPU. The running times are reported in Tab. <ref> with 720×960 pixel images as input. The results show that DfR achieves a 4× and 400× speedup over DSR and Loop, respectively.
§.§ Real Data
To validate the effectiveness on realistic images, we use rotating surveillance cameras to collect 300 pairs of images in various scenarios. We use SIFT features for detection and matching, followed by stereo rectification with Loop, DSR, and DfR, respectively. Then we perform SGM dense matching on the three rectified results to extract depth images.
Results in Fig. <ref> show that Loop and DSR produce blurred depth maps while the proposed approach DfR outputs clean and sharp ones instead. This verifies the effectiveness of DfR in aligning the point matches into the same row.
§ CONCLUSION
This paper presents a novel image rectification solution to uncalibrated cameras with latitudinal motion assumption.
The proposed DfR achieves high accuracy in alignment with minor geometric distortion. Extensive experiments demonstrate the effectiveness and efficiency of the proposed solution. The presented DfR can be applied as a pre-processing step to stereo matching for many applications, such as 3D visual surveillance and 3D perception of robotics.
hartley2003multiple Richard Hartley and Andrew Zisserman, Multiple view
geometry in computer vision, 2003.
fusiello2000compact Andrea Fusiello, Emanuele Trucco, and Alessandro
Verri, “A compact algorithm for rectification of stereo
pairs,” Machine vision and applications, 2000.
ventura2016structure Jonathan Ventura, “Structure from motion on a sphere,”
in ECCV, 2016.
loop1999computing Charles Loop and Zhengyou Zhang, “Computing rectifying homographies for stereo vision,” in CVPR, 1999.
sweeney2019structure Chris Sweeney, Aleksander Holynski, Brian Curless,
and Steve M Seitz, “Structure from motion for
panorama-style videos,” arXiv, 2019.
xiao2018dsr Ruichao Xiao, Wenxiu Sun, Jiahao Pang, Qiong Yan, and Jimmy Ren, “Dsr: Direct self-rectification for uncalibrated dual-lens cameras,” in 3DV, 2018.
zhang1998determining Zhengyou Zhang, “Determining the epipolar geometry and its uncertainty: A review,” IJCV, 1998.
gluckman2001rectifying Joshua Gluckman and Shree K Nayar, “Rectifying transformations that minimize resampling effects,” in CVPR, 2001.
isgro1999projective Francesco Isgro and Emanuele Trucco, “Projective rectification without epipolar geometry,” in CVPR, 1999.
fusiello2011quasi Andrea Fusiello and Luca Irsara, “Quasi-euclidean epipolar rectification of uncalibrated images,” Machine
Vision and Applications, 2011.
|
http://arxiv.org/abs/2307.05616v1 | 20230711021418 | Image Reconstruction using Enhanced Vision Transformer | [
"Nikhil Verma",
"Deepkamal Kaur",
"Lydia Chau"
] | cs.CV | [
"cs.CV"
] |
SAM-U: Multi-box prompts triggered uncertainty estimation for reliable SAM in medical image
Guoyao Deng1, Ke Zou1,3, Kai Ren2, Meng Wang3, Xuedong Yuan2, Sancong Ying2 and Huazhu Fu3
August 12, 2023
==============================================================================================
Removing noise from images is a challenging and fundamental problem in the field of computer vision. Images captured by modern cameras are inevitably degraded by noise, which limits the accuracy of any quantitative measurements on those images. In this project, we propose a novel image reconstruction framework which can be used for tasks such as image denoising, deblurring or inpainting. The model proposed in this project is based on the Vision Transformer (ViT), which takes 2D images as input and outputs embeddings that can be used for reconstructing denoised images. We incorporate four additional optimization techniques in the framework to improve the model's reconstruction capability, namely Locality Sensitive Attention (LSA), Shifted Patch Tokenization (SPT), Rotary Position Embeddings (RoPE) and an adversarial loss function inspired by Generative Adversarial Networks (GANs). LSA, SPT and RoPE enable the transformer to learn from the dataset more efficiently, while the adversarial loss function enhances the resolution of the reconstructed images. Based on our experiments, the proposed architecture outperforms the benchmark U-Net model by more than 3.5% structural similarity (SSIM) for the reconstruction tasks of image denoising and inpainting. The proposed enhancements further show an improvement of ~5% SSIM over the benchmark for both tasks.
§ INTRODUCTION
Image reconstruction is an active research area where the main goal is to produce clear images in some limited environment. Digital image reconstruction involves removing noise and blur from the input images. Deblurred and denoised images are essential in applications such as healthcare where images like Magnetic Resonance Imaging (MRI) are hard to obtain and are often blurred since subjects may move during scanning, making the images hard to interpret. Recently, Vision Transformer (ViT) <cit.> has shown great success on image classification tasks. However, its usage for image reconstruction tasks is limited.
In this project, we propose a novel image reconstruction framework using ViT by enhancing various components of the ViT architecture. The inputs to a standard ViT are the pixel patches, called tokens, generated from the original image. These tokens are concatenated with the positional embeddings and fed to the transformer, after which attention is applied. In this project, the enhancements we propose to the ViT are as follows:
* Use the SPT technique <cit.> to improve the tokenization process by generating overlapping tokens.
* Use the RoPE method <cit.> to improve the way positions are encoded with the tokens.
* Enhance the attention mechanism to avoid smoothing of attention scores by using a learnable temperature employing the LSA technique <cit.>.
* Use a discriminator to calculate the binary cross entropy loss of the reconstructed images to further improve the resolution of the reconstruction.
We tested our architecture on two reconstruction tasks, image denoising and inpainting, and compared the results against the benchmark U-Net model and the baseline vanilla ViT. We also performed a comprehensive analysis of the various proposed enhancements. The proposed framework can be used for various critical applications such as MRI.
§ RELATED WORK
For a computer vision system, image processing is a key component but considered as a low-level analysis of images. The results of processing visual data largely affect the high-level tasks such as image recognition or object detection. Deep learning has been widely used for a plethora of these tasks such as image super-resolution, deblurring, denoising, inpainting and colorization of images. The tasks that this project focuses on are as follows:
* Denoising: Image noising is the addition of some noise function to each image pixel such as Gaussian noise, Brownian noise or impulse-valued noise <cit.>. These noises can be caused by sharp and sudden disturbances in image signals. Impulse-valued noise (salt and pepper noise) presents itself as sparsely occurring white and black pixels in a gray image. Denoising is the task of removing such noise.
* Inpainting: It is the task of reconstructing missing regions in an image <cit.>. The missing/ damaged parts are filled such that the reconstructed image looks realistic. Commonly introduced paintings are vertical or horizontal bars with all pixel values replaced by 0, i.e., black pixels.
For constructing high resolution images from low-resolution data, SRCNN is a pioneering research work <cit.>. Similarly for denoising, deep Convolutional Neural Network (CNN) architectures have been experimented with varying layer configurations and activations. Some significant works include the multi-scale CNN-based model proposed by Nah et al. <cit.> that uses the coarse-to-fine approach. Later, Lim et al. <cit.> proposed a deep spectral-spatial network that considers both spectral as well as spatial aspects in a cascade of two networks. The success of CNNs is often partially attributed to the inductive biases inherent in CNNs, allowing impressive data efficiency. U-shaped architectures are also popular for denoising tasks and serve as a benchmark for comparison. Various enhancements have since been made to the U-Net architecture such as Thesia et al. <cit.> used latent features of various U-Nets to determine the input image noise distribution which help with denoising.
Based on the success of transformer-based models in text processing tasks, the next important landmark in computer vision is the use of attention-based models for image processing. Some significant works include introduction of spatial attention for image augmentation <cit.> and replacement of CNN with self-attention blocks. Wu et al. <cit.> proposed the use of transformer-based pre-training for image recognition and Chen et al. <cit.> then proposed the use of GPT model for image classification.
Transformers for vision, called vision transformers <cit.> are convolution-free architecture that have shown superior performance compared to the state-of-the-art CNNs for image classification when trained on millions of images. Recently, authors in <cit.> used attention mechanism along with convolution to account for the fact that in CNNs, as the depth of neural network increases, shallow layers start losing their effect as compared to deep layers in attention guided CNNs.
Some variants of U-Net involving efficient self-attention working on high resolution images <cit.> have also been proposed. The authors in <cit.> enhanced the local attention in transformer by shifting windows, which increased the locality bias and resulted in data efficiency inspired from <cit.>. However, not much work has been done on using transformers for low-level image tasks like reconstruction, specifically denoising and inpainting, which are the focus of this project.
Recently, Generative Adversarial Networks (GANs) for image deblurring task are also being used often. Generative models such as DeblurGAN <cit.> and conditional GANs <cit.> to locate and sharpen the blurred edges have shown great success. Previous studies have shown that GANs can be used to synthesize high-resolution photo-realistic images <cit.>. Duran et al. <cit.> used transformer-based generator and convolutional discriminator for image reconstruction. The authors showed that this approach shows better reconstructions while retaining the benefits of attention-based generator.
§ FORMAL DESCRIPTION
This section describes the standard ViT architecture and its limitations, followed by descriptions of the various proposed enhancements for two image reconstruction tasks, image denoising and inpainting.
§.§ Vanilla ViT
The ViT is based on transformer architecture. For image classification, the input image is spatially divided into a sequence of N equally sized image patches. These patches are then input as a sequence of linear embeddings to the transformers, called as patch embeddings. Since the transformer encoder itself does not inherit any notion of positional information, learnable position embeddings are introduced. A learnable classification token (CLS) is prepended to the sequence of patch embeddings. These N + 1 feature vectors serve as input to the transformer encoder. For image classification task, only the output representation of the classification token is fed into a classification head, which returns the estimated class label of the input image. The final output is learned using a cross-entropy loss using softmax over the last layer’s output.
In this case, the CLS token head helps in classification and all the other context token heads generated from the actual image patches are discarded. For the task of image reconstruction, we discard the classification token as it becomes redundant for image reconstruction. The classification head is then replaced by a reconstruction head that maps the transformer output back to a visual image. The high-level architecture is shown in Fig <ref>.
However, the problem with this ViT architecture is that it requires a lot of data and memory for effective learning. Therefore, we need to make some modifications to make use of the limited compute resources. Another problem is that ViT inherently lacks locality inductive bias. Two main reasons for this behaviour are:
* When ViT generates patches, the patches are non-overlapping, and thus, at a time pixels in one patch are only able to attend to each other and can’t attend to other patches.
* Since images have a large number of features, the distribution of attention scores for these features often becomes too smooth and the goal of attention to focus on certain tokens is lost.
We present some techniques to deal with these issues in the following sections.
§.§ Shifted Patch Tokenization (SPT)
SPT aims to alleviate the problem of non-overlapping patches. The idea is to consider the interactions between neighbouring pixels during token creation. The receptive fields of the tokens are determined by the tokenization. The patch generation process in ViT is similar to a convolution operation whose kernel size and stride are equal to the patch size of ViT. However, the receptive field of a standard ResNet50 model is about 30 times larger than that of ViT, hence the lack of locality inductive bias.
To resolve this, SPT tries to increase the receptive field. First, the input image is shifted by half the patch size in 4 directions. Then these 4 images are cropped to the same size as the original image and the remaining pixels are padded with zeros. Then all the cropped images are concatenated with the original image. These concatenated features are finally divided into non-overlapping patches, which are flattened for input to the model. The flattened patches are converted into tokens through layer normalization and projection.
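A PyTorch sketch of this tokenization is shown below. The zero-padded half-patch shifts follow the description above, while the patch size and embedding dimension are arbitrary illustrative choices rather than the values used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def shift2d(x, dx, dy):
    """Zero-padded spatial shift of a (B, C, H, W) tensor by (dx, dy) pixels."""
    _, _, H, W = x.shape
    x = F.pad(x, (max(dx, 0), max(-dx, 0), max(dy, 0), max(-dy, 0)))
    y0, x0 = max(-dy, 0), max(-dx, 0)
    return x[:, :, y0:y0 + H, x0:x0 + W]

class ShiftedPatchTokenizer(nn.Module):
    def __init__(self, in_ch=1, patch=8, dim=256):
        super().__init__()
        self.patch = patch
        feat = 5 * in_ch * patch * patch      # original + 4 shifted copies
        self.norm = nn.LayerNorm(feat)
        self.proj = nn.Linear(feat, dim)

    def forward(self, x):                     # x: (B, C, H, W)
        s = self.patch // 2
        views = [x] + [shift2d(x, dx, dy)
                       for dx, dy in ((s, 0), (-s, 0), (0, s), (0, -s))]
        x = torch.cat(views, dim=1)           # (B, 5C, H, W)
        x = F.unfold(x, kernel_size=self.patch, stride=self.patch)
        x = x.transpose(1, 2)                 # (B, N, 5C * patch * patch)
        return self.proj(self.norm(x))        # (B, N, dim)
```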
§.§ Rotary Position Embeddings (RoPE)
As mentioned earlier, the ViT input embeddings are a combination of patch embeddings and positional embeddings. While SPT enhances the interactions between tokens by changing how patch embeddings are generated, RoPE enhances the positional embeddings. The standard ViT uses absolute embeddings, where each patch together with its position is encoded into a single embedding. RoPE instead encodes relative positions <cit.>, so that the model retains information about which parts are more relevant to each other. The difference between the absolute positions of two tokens is encoded into a single embedding, ensuring that as the relative distance between two patches increases, the RoPE-induced attention contribution decays, which is desirable for images.
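The sketch below applies standard rotary embeddings to the query/key tensors of an attention head. It uses a 1D token index for brevity (2D/axial variants are common for image patches), so it should be read as an illustration of the mechanism rather than the exact embedding used in our model.

```python
import torch

def rope(x, positions, base=10000.0):
    """Apply rotary embeddings to x of shape (..., N, head_dim), head_dim even."""
    d = x.shape[-1]
    inv_freq = base ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)
    ang = positions.float()[:, None] * inv_freq[None, :]     # (N, d/2)
    cos, sin = ang.cos(), ang.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin      # rotate each 2D pair of channels
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# q, k of shape (B, heads, N, head_dim); idx = torch.arange(N)
# q, k = rope(q, idx), rope(k, idx)  ->  q @ k.transpose(-2, -1) now depends
# only on relative token offsets.
```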
§.§ Locality Self-Attention (LSA)
Another technique that we use is an enhancement to the attention mechanism. The equations below summarize the attention mechanism of the standard ViT.
R(x) = xE_q(xE_k)^T
SA(x) = softmax(R/√(d_k))xE_v
The similarity matrix R in Equation <ref> is obtained as the product of the query and key projections. The diagonal entries of R represent self-token relations and the off-diagonal entries represent inter-token relations. The final attention scores are obtained by taking a softmax of the similarity matrix divided by a temperature term, which is the square root of the key dimension. However, the dot product between the query and key of the same token becomes too large and dominates the other terms. Hence, Equation <ref> gives relatively high scores to self-token relations and small scores to inter-token relations. The second problem is that the √(d_k) term is introduced to prevent softmax from producing very small gradients, but in the case of images d_k can be very large, which smooths out the attention scores.
So, to avoid this, two techniques are proposed <cit.>. The first is diagonal masking. Since self-tokens smooth out attention, it is best to remove those so that scores are more uniformly distributed over the inter-token relations. The other solution is to use a temperature parameter that is learnt during training. This new temperature is found to be lower, and hence, doesn’t smooth out attention scores.
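A compact PyTorch sketch of LSA combining both techniques, diagonal masking and a learnable temperature initialized at √(d_k), is given below; the head count and projection layout are illustrative choices.

```python
import torch
import torch.nn as nn

class LSAttention(nn.Module):
    """Self-attention with diagonal masking and a learnable temperature."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.out = nn.Linear(dim, dim)
        # learnable temperature, initialised at sqrt(d_k)
        self.temperature = nn.Parameter(torch.tensor((dim / heads) ** 0.5))

    def forward(self, x):                                   # x: (B, N, dim)
        B, N, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, N, self.heads, -1).transpose(1, 2) for t in (q, k, v))
        sim = (q @ k.transpose(-2, -1)) / self.temperature  # (B, h, N, N)
        eye = torch.eye(N, dtype=torch.bool, device=x.device)
        sim = sim.masked_fill(eye, float('-inf'))           # diagonal masking
        attn = sim.softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.out(y)
```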
§.§ Adversarial Loss
The model is trained using a combined SSIM loss and adversarial cross-entropy loss <cit.>. Since a ViT-based generator is unstable when trained with a CNN-based discriminator, the proposed model uses a ViT-based discriminator <cit.> whose structure resembles that of the ViT generator. Additional regularization techniques, such as overlapping input image patches <cit.> and L2 attention <cit.>, are used to further stabilize the training process. The architecture of the discriminator is shown in Fig <ref>.
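The combined objective can be sketched as follows. The relative weight `lam` between the SSIM term and the adversarial term, as well as the use of the third-party `pytorch_msssim` package for SSIM, are assumptions for illustration and not values reported here.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # assumed third-party SSIM implementation

def generator_loss(recon, target, disc_logits_fake, lam=0.1):
    rec = 1.0 - ssim(recon, target, data_range=1.0)           # SSIM-based loss
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))  # fool the critic
    return rec + lam * adv

def discriminator_loss(disc_logits_real, disc_logits_fake):
    real = F.binary_cross_entropy_with_logits(
        disc_logits_real, torch.ones_like(disc_logits_real))
    fake = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.zeros_like(disc_logits_fake))
    return 0.5 * (real + fake)
```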
§ EXPERIMENTAL SETUP
This section describes the details of the proposed architecture such as dataset used, metrics measured and hardware/ memory constraints.
§.§ Dataset
The proposed ViT architecture and the baseline (U-Net) architecture were evaluated based on the Tiny ImageNet dataset, which is a subset of the ImageNet dataset from the famous ILSVRC challenge <cit.>. The dataset contains 100,000 images of 200 classes downsized to 64×64 images. Due to time and computational resource constraints, subsets of Tiny ImageNet training and testing datasets were used. 20,000 images were used for training and 4,000 images were used for testing. The images were also converted into grayscale for training and evaluation. Data augmentation techniques, such as mirroring and randomly rotating some images by multiples of 90^∘, were used to ensure that the model performs well regardless of image orientation.
§.§ Evaluation System and Metrics
All the models were evaluated on the two image reconstruction tasks using three metrics: Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM) and Normalized Mean Square Error (NMSE). For the image denoising task, the images were corrupted by adding Gaussian noise with a variance of 0.05, and the systems were evaluated on their ability to reconstruct a denoised image. For the image inpainting task, rows of pixels in the images were randomly blacked out, and the models were evaluated on their ability to reconstruct the images without the blacked-out pixels.
Regarding the evaluation metrics, PSNR was used to compare the maximum power of a signal and the power of the noise in the images. In particular, PSNR is sensitive to Gaussian additive noise <cit.>. SSIM and NMSE both measure the similarity between the original image and the reconstructed image. SSIM quantifies the visibility of differences between a distorted image and a reference image based on properties of the human visual system <cit.>. A higher SSIM or lower NMSE is preferred and indicates that the reconstructed image is closer to the original image before adding noise or masks.
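The three metrics can be computed as in the sketch below, assuming grayscale images scaled to [0, 1] and using scikit-image for PSNR and SSIM; NMSE is computed directly as the squared error normalized by the reference energy.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(reference, reconstruction):
    """reference, reconstruction: 2D grayscale arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
    ssim = structural_similarity(reference, reconstruction, data_range=1.0)
    nmse = np.sum((reference - reconstruction) ** 2) / np.sum(reference ** 2)
    return {"PSNR": psnr, "SSIM": ssim, "NMSE": nmse}
```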
§.§ Compute hardware and memory used
All the models were run on a 8 core NVIDIA RTX A6000 GPU with 32G memory and different batch sizes were chosen for different experiments such that the maximum number of images could fit into each batch, given the memory. Based on our testing, the models are not sensitive to batch sizes and changing batch size does not cause noticeable difference in metrics.
§ EXPERIMENTS AND RESULTS
Four experiments were performed to examine the performance of the benchmark U-Net architecture, ViT architecture and individual enhancement techniques, and to explore the optimal combination of enhancements proposed. The experiments are discussed in the following sections.
§.§ Experiment 1: Comparing U-Net and ViT
In the first experiment, we compared the performance of vanilla ViT model and the benchmark U-Net model to evaluate whether the image reconstruction tasks with ViT architecture can match or outperform the conventional U-Net model. Based on the results shown in Table <ref>, vanilla ViT architecture outperforms the U-Net architecture on the three metrics for both denoising and inpainting task by huge margins. Thus, the vanilla ViT for image reconstruction forms a good baseline for further experiments.
§.§ Experiment 2: Using LSA, SPT and RoPE in ViT
The second experiment was performed to analyze the impact of individual enhancement techniques. As shown in Table <ref>, for LSA, there is no significant difference in metrics before and after adding LSA to the vanilla ViT. It performed slightly better than the vanilla ViT in the inpainting task but underperformed in the denoising task. This might be because the smoothing of attention scores may not be a problem for this dataset since the images are scaled down to 64x64 pixels. LSA might be able to show more improvement for another dataset with larger images. For SPT and RoPE, the metrics for both enhancement methods outperformed the vanilla ViT in denoising and inpainting tasks. RoPE model showed the best results out of all the enhancement techniques. This means that the ViT benefited the most from changing the positional embedding.
§.§ Experiment 3: Adding adverserial loss function
After experimenting with individual enhancements, we conducted experiments on the discriminator. The setup of the third experiment was similar to the second experiment, except that the discriminator described in section <ref> was added to the architecture and the network was trained based on adversarial loss. As shown in Table <ref>, the discriminator improved the performance of the models overall, but the improvement is marginal. This means that this dataset does not benefit much from the addition of the discriminator. Also, similar to the last experiment, the model with RoPE and discriminator outperformed other models in all cases.
§.§ Experiment 4: Combination of techniques
The last experiment was conducted to explore the optimal combinations of enhancement techniques so as to propose one final model for each task. For image denoising, the best results were seen when LSA, SPT, RoPE and discriminator were used together. However, there was only marginal improvement in comparison to using only RoPE. For image inpainting, the combination of all enhancement techniques also resulted in the best metrics. Note that other combinations were also tested but only the best performing ones have been listed in Table <ref>. The images generated from the best combination for both the tasks are shown in Fig <ref> and <ref>.
§ LIMITATIONS AND FUTURE SCOPE
Based on a thorough analysis of the proposed techniques, we identified some limitations of our project that could be addressed in the future. On comparing the original and reconstructed images, we noticed that the proposed system tends to overly smooth out some of the image details, for example by smudging the edges (see Fig <ref> for an illustration). This suggests that the current system is not perfect at distinguishing between image details and noise, and further enhancements may be required.
Furthermore, to ensure that the experimental analysis can be generalized well to other cases, more comprehensive experiments need to be performed. For example, it would be beneficial to perform experiments on more datasets and colored images. Datasets with larger image sizes may help uncover some more interesting details, such as LSA can be expected to perform better for larger images. Also, the performance of the models can be examined based on a wider variety of image reconstruction tasks such as adding different noise filter, image super-resolution, deblurring etc.
It would also be interesting to explore in depth how the different techniques interact with each other as part of the whole architecture and whether two techniques counteract the effect of each other.
§ CONCLUSION
In this project, we demonstrated that ViT-based architecture can be used for image reconstruction tasks such as image denoising and inpainting, and potentially outperforms the conventional U-Net architecture. We proposed a novel architecture involving four enhancements to various components of the ViT - LSA, SPT, RoPE and adding a discriminator. We showed that the latter three techniques improve the image reconstruction capability of the ViT architecture on the Tiny Imagenet dataset. Based on the experiments, out of all the techniques, RoPE performed best individually and the combination of all enhancement techniques resulted in the best performance for both denoising and inpainting tasks. The proposed architecture can be used for a wide variety of applications where reconstructing noiseless images is critical.
§ ACKNOWLEDGEMENT
The authors thank the CSC2547 course staff, including Prof. Anthony Bonner for his constant guidance and motivation for the project. The authors are also grateful to the Department of Computer Science, University of Toronto for providing computational resources, without which the project would not have been possible.
|
http://arxiv.org/abs/2307.07568v1 | 20230714181931 | Variational Prediction | [
"Alexander A. Alemi",
"Ben Poole"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Sparsified Simultaneous Confidence Intervals for High-Dimensional Linear Models
Xiaorui Zhu, Yichen Qin, and Peng WangXiaorui Zhu is Assistant Professor in the Department of Business Analytics & Technology Management, Towson University. Yichen Qin is Associate Professor in the Department of Operations, Business Analytics, and Information Systems, University of Cincinnati. Peng Wang is Associate Professor in the Department of Operations, Business Analytics, and Information Systems, University of Cincinnati.
University of Cincinnati
April 20, 2022
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
|
http://arxiv.org/abs/2307.04292v1 | 20230710005828 | A Demand-Driven Perspective on Generative Audio AI | [
"Sangshin Oh",
"Minsung Kang",
"Hyeongi Moon",
"Keunwoo Choi",
"Ben Sangbae Chon"
] | eess.AS | [
"eess.AS",
"cs.AI"
] |
A Demand-Driven Perspective on Generative Audio AI

Sangshin Oh*, Minsung Kang*, Hyeongi Moon, Keunwoo Choi, Ben Sangbae Chon
Gaudio Lab, Inc., Seoul, South Korea
*Equal contribution. Correspondence to: Ben Sangbae Chon <[email protected]>
To achieve successful deployment of AI research, it is crucial to understand the demands of the industry. In this paper, we present the results of a survey conducted with professional audio engineers, in order to determine research priorities and define various research tasks. We also summarize the current challenges in audio quality and controllability based on the survey. Our analysis emphasizes that the availability of datasets is currently the main bottleneck for achieving high-quality audio generation. Finally, we suggest potential solutions for some revealed issues with empirical evidence.
§ INTRODUCTION
The use of audio generative models has the potential to significantly impact a variety of industries. Although essential, the process of creating foley effects is often tedious, non-reproducible, and lacks scalability. Moreover, the utilization of pre-recorded sounds is not conducive to real-time or interactive applications, rendering it inadequate for fields like gaming, metaverse, or any domain requiring the simulation of lifelike environments. The advent of generative audio AI offers a promising solution to address these limitations, significantly impacting areas like film production, gaming, social platforms, and more.
Audio synthesis research has a long history <cit.>, but we will focus on the data-driven approaches as they are the recent pioneers with huge potential.
The current generative audio AI is still in its early stages, necessitating further advancements in various aspects. We present this paper to provide a demand-driven perspective on task definitions, challenges, and potential solutions within audio generation. Specifically, our focus is on general audio, excluding speech and music.
The key contributions of this paper include:
* A survey with individuals working in movie sound productions to share insights into the industry-side demands.
* Detailed definitions and review of distinct tasks in audio generation regarding input types and conditions.
* A summary of the related challenges towards industrial demands and a proposal on potential solutions supported by empirical evidence, including a method with which we achieved 2nd place in the foley synthesis challenges at DCASE 2023.
§ DEMANDS FROM INDUSTRY
To gather insights regarding the impact of audio generative models on the industry, we first interviewed two professionals from the field of movie sound production. They highlighted that their role extends beyond that of sound technicians, as they contribute to the artistic dimension of creating immersive and captivating sound experiences.
Despite the inevitable laborious nature of foley and sound effect recording, they are compelled to record new sounds since existing sounds are hardly reusable. While they have a vast library of previous sound stems, there is effectively no efficient method at hand for searching and finding suitable sounds. Even if they find a suitable sound, they have to spend time on editing the time synchronization and sound tone.
Based on this knowledge, we conducted a survey involving 18 individuals working in movie sound production, addressing the topic of AI audio generation. We first presented them with some examples of AI image generation applications and a demo page[<https://audioldm.github.io/>] of a recent text-to-audio model <cit.>.
We then asked three following primary questions with multiple-choice options.
Q1. What are the major challenges faced in foley recording? The most frequently selected option for this question was the time synchronization problem. Following that, respondents expressed the importance of audio quality and consistency in tone with the synchronous recording.
In the additional comments, respondents emphasized again that for foley sound, audio quality, synchronization with the scene, and consistency in tone with other sound sources are crucial – to the point that without a good synchronization, some might only consider using AI-generation for ambient sounds.
This indicates that relying solely on text-based conditioning may not be sufficient for a majority of use-cases.
Q2. What is the limitation(s) of the current text-conditioned audio generation as a product?
The survey result is plotted in Figure <ref>. In this question, it was found that audio quality presents the most significant challenge for practical usage. According to their comments, the concerns about quality encompass other aspects such as low fidelity, low sampling rate, roughness, and other related factors. A majority of respondents expressed complaints regarding the sample rate. It is noteworthy that while the industry requires full-band signals at 48kHz or higher, most of the current systems still operate within the 16kHz-24kHz range <cit.>.
Creativity, the second most frequently chosen category, refers to the generation of new sounds that fulfill artistic intentions, e.g., creating “the sound of a lightsaber in Star Wars." The options edit and text, which received the third and fourth highest numbers of votes, point to problems of controllability.
Q3. How would you like to condition the audio generation? As in Figure <ref>, the most frequently chosen option is the utilization of video for time synchronization and achieving an appropriate sound tone. More than half of the respondents were interested in generating sounds similar to reference audio samples. The third and fourth most popular options, namely interp. and consistn., are related to refining the generated audio based on reference audio samples. Respondents' answers to Q3 reflected their hopes for a more efficient workflow, in contrast to the expectations they expressed in Q2.
This survey result presents important remarks on generative audio research. First, text and video inputs are complementary to each other in building a more complete generative audio system. Second, sound and event synchronization is an important topic that deserves more attention. Third, although it deviates somewhat from our topic, high-quality audio indexing, search, and separation may also be a solution to some of the problems that generative audio AI aims to solve.
Based on this understanding, we delve into the current state and challenges of the audio generation field in the following sections.
§ TASK DEFINITIONS
In a recent proposal paper on the foley sound synthesis challenge <cit.>, the audio generative AI task is specified based on the input and output types. The authors outline three distinct input types: i) category index, ii) text description, and iii) videos. While the categorization of output types is not explicitly stated, it can be inferred as follows: i) individual foley sounds representing a single event, ii) a combination of multiple events and/or ambient sounds, and iii) a comprehensive soundtrack comprising foley sounds, ambient elements, and spatially enhanced mixing. We will focus on the input types since the determination of output types is primarily governed by technical feasibility, allowing only a limited scope with the current technology.
§.§ Input Types
First, a category index, that indicates a single type of audio event, would be the simplest form of input type for a sound synthesis system. This was adopted in some previous works <cit.> and this year's DCASE Task 7 <cit.>. Solutions with this approach would improve foley recording processes for some popular categories such as dog barks, door slams, or footsteps.
The second type would be text descriptions as employed in recent research <cit.>, relying on audio caption datasets.
There are several promising aspects associated with this text-to-audio approach. i) Extensive research has already been conducted on text-to-X generation (e.g., text-to-image generation studies <cit.>), which simplifies its adaptation for audio generation purposes. ii) The familiarity of users with UI/UX utilizing text inputs further supports the feasibility of this approach.
However, there are difficulties as well. i) Compared to text-image pairs, there is a scarcity of text-audio pairs available for training models <cit.>.
For example, AudioCaps <cit.>, the largest audio captioning dataset, contains only 0.013% as many items as (i.e., it is 7561 times smaller than) LAION-400M, a text-image pair dataset <cit.>.
ii) Text input has limitations in providing highly detailed descriptions at a professional level, as audio engineers rely on precise controls like knobs and sliders to make fine adjustments to the sound (e.g., equalizers).
Third, video input types have pros and cons. Unlike the previous input types, videos may provide the exact timings of events <cit.>. As discussed in Section 2, there is huge potential for improving the video creation workflow in this scenario through efficient time synchronization. However, video alone does not provide complete information: not everything visible should make a sound, and not everything that sounds is visible.
Additionally, there are deliberate artistic intentions involved in video creation such as muting/exaggerating certain sounds. These artistic decisions may vary significantly. Therefore, when developing video-to-sound generation methods, the ability to edit and manipulate the generated audio becomes crucial, just as it is important for text-based generation approaches as we will discuss in the following section.
§.§ Conditioning
Conditioning can be viewed as a form of input in a broader sense and is deeply related to controllability and editability. AudioLDM pioneered sound editing through text-based approaches <cit.>, and we believe that this direction of research will continue toward more diverse, intuitive, and fine-grained conditioning. For example, users may want to control factors such as sound bandwidth, F0 contours, temporal and spectral envelopes, etc. Our exploration of these product development considerations will continue in the following sections.
§ CHALLENGES
§.§ Dataset Improvement for Audio Quality
Recently, there have been some generative AI products successfully deployed on language and image <cit.>. However, the current state of audio generation research does not seem mature enough to be adopted into professional sound production. As audio quality was the most prominent issue as in Figure <ref>, we focus on the issues and potential solutions on datasets to improve the generated audio quality in this section.
Sound effect library sources (footnote URLs):
<https://sound-effects.bbcrewind.co.uk>
<https://www.epidemicsound.com/sound-effects/>
<https://www.freetousesounds.com/all-in-one-bundle/>
<https://sonniss.com/gameaudiogdc>
<https://wesoundeffects.com/we-sound-effects-bundle-2020/>
<https://www.paramountmotion.com/odeon-sound-effects>
First of all, the current data scarcity degrades model training and the resulting audio quality. Compared to image generation datasets that exceed a few billion pairs <cit.>, there is far less text-paired audio data available <cit.>. Moreover, most such paired datasets are weakly labeled, i.e., their labels or captions lack time resolution. This is problematic because it is common practice to slice audio signals for training efficiency and memory reasons.
Since the text in these pairs describes the audio only coarsely along the time axis, there is a risk of text-audio mismatch when the signal is sliced into smaller segments.
Augmentation methods <cit.> or a contrastive embedding network <cit.> can mitigate this, but they are not a complete remedy.
The characteristics of audio itself exacerbate the problem. Separating foreground and background audio sources is difficult, and obtaining isolated audio recordings remains costly. The spatial characteristics of the recording environment often negatively affect the recording quality. Altogether, many factors make it tricky to create a studio-quality audio dataset. We list the available audio datasets in Table <ref>. Since the largest datasets in the list are collected or curated from crowd-sourced audio <cit.> or video <cit.>, their recording conditions vary and are usually poor. Thus, samples from those datasets often suffer from severe background noise, low recording bandwidth/bit rate, and various types of distortion. Clean datasets are limited to a few commercial sound effect libraries.
To address this trade-off between more data and clean data, we propose a solution called quality-aware training (QAT). This can be done simply by prompting, i.e., appending labels indicating the quality of each dataset to the text input. QAT makes it possible to utilize a broader range of datasets. During training, the model learns from both clean and noisy datasets with quality labels. As a result, the model learns not only the concepts of different audio events but also their audio quality; i.e., it acquires compositionality over audio events and audio quality. At inference time, we can force the model to generate clean signals by conditioning it accordingly, i.e., by appending `clean' labels to the text input.
This enabled us to use all data pairs regardless of their quality without degrading the output quality.
In our experience, this approach allowed us to control the audio quality, reverberation, signal bandwidth, and audio events independently, and helped us achieve 2nd place in the recent foley synthesis challenge at DCASE 2023 <cit.>. Details of the experiments are provided in Appendix B.
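A minimal sketch of QAT-style prompting is given below. The exact wording of the quality tags and the per-dataset quality map are illustrative assumptions; the essential idea is that quality descriptors are appended to captions at training time and a clean tag is enforced at inference time.

```python
# Hypothetical quality map: dataset names and tag phrasing are placeholders.
QUALITY_TAG = {
    "studio_sfx_library": "high quality, clean, dry recording",
    "crowdsourced_audio": "low quality, noisy, band-limited recording",
}

def training_caption(caption: str, dataset_name: str) -> str:
    # Append the quality label of the source dataset during training.
    return f"{caption}, {QUALITY_TAG[dataset_name]}"

def inference_caption(caption: str) -> str:
    # Condition the model on the clean label regardless of training mixture.
    return f"{caption}, high quality, clean, dry recording"
```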
§.§ Methodological Improvement for Controllability
Controllability was another major concern in our survey, as audio engineers have specific intentions about how the generated output should sound. Audio generation may take a long time; hence, it is crucial for deployable audio AI systems to offer effective controllability.
Classifier-free guidance is a widely adopted solution to this problem across diffusion-based and Transformer-based generative models. At the cost of some sample quality, extrapolating intermediate features or logits introduces diversity, which makes exploration easier for users of generative audio AI systems. Most recent text-to-audio generation research has adopted this technique <cit.>.
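For reference, classifier-free guidance in a diffusion-based text-to-audio model amounts to extrapolating the conditional and unconditional noise predictions with a guidance weight w, as in the sketch below; the function and argument names are placeholders rather than any specific model's API.

```python
import torch

@torch.no_grad()
def guided_eps(model, x_t, t, text_emb, null_emb, w=3.0):
    eps_cond = model(x_t, t, text_emb)    # prediction conditioned on the caption
    eps_uncond = model(x_t, t, null_emb)  # prediction with an empty prompt
    # w = 0 recovers the unconditional model; larger w strengthens the prompt.
    return eps_uncond + w * (eps_cond - eps_uncond)
```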
Controllability can be also attained by introducing new features or new modalities, for example, a reference audio or a conditioning video as in Figure <ref>. As AudioLDM demonstrated audio manipulation without fine-tuning <cit.>, we believe text-guided audio-to-audio generation is a compelling research direction towards deployable generative audio AI. Video-based foley generation has been less popular, but it would be an interesting direction for future research along with the existing research <cit.>. Finally, conventional signal features such as F0 contour or envelopes can be a great user interface for experienced audio engineers. As those features are easy to extract from audio signals, it is plausible to use them as one of the inputs during the training phase, then build a user interface that allows control of the generated output by modifying the features.
§ CONCLUSION
In this paper, we presented a survey conducted with sound engineers in the movie industry. Based on the survey results, we have provided task definitions for audio generation research and identified related research challenges. Our objective was to bridge the gap between current research and industry practices, offering potential solutions to address the challenges of audio quality and controllability.
Surprisingly, there are limited opportunities for researchers to gain insights from the industry side. We believe that this work serves as a valuable starting point for understanding the difficulties faced by both researchers and potential users, ultimately aligning our efforts to solve real-world problems.
While our perspective focuses on the movie industry, it is important to acknowledge that neighboring industries may face different challenges with varying priorities. For example, the demand for real-time generation systems may be stronger in the virtual reality or gaming industry, while the standards for audio quality or artistic intent may be lower for non-professional movie creation platforms such as YouTube. We hope that our work represents a meaningful step towards comprehending the diverse demands placed on generative audio AI across its many applications.
§ DETAILS OF SURVEY IN SECTION <REF>
§.§ Exact expression of the options in Figure <ref> and Figure <ref>
§.§ Results on the other questionnaire
§ EXPERIMENT RESULTS FOR SECTION <REF>
|
http://arxiv.org/abs/2307.04308v1 | 20230710022738 | CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training | [
"Chao Ye",
"Guoshan Lu",
"Haobo Wang",
"Liyao Li",
"Sai Wu",
"Gang Chen",
"Junbo Zhao"
] | cs.LG | [
"cs.LG"
] |
Zhejiang University
Hangzhou
China
[email protected]
Chao Ye and Guoshan Lu are co-first authors of the article.
Zhejiang University
Hangzhou
China
[email protected]
Zhejiang University
Hangzhou
China
[email protected]
Zhejiang University
Hangzhou
China
[email protected]
Zhejiang University
Hangzhou
China
[email protected]
Zhejiang University
Hangzhou
China
[email protected]
Junbo Zhao is the corresponding author.
Zhejiang University
Hangzhou
China
[email protected]
Tabular data — also known as structured data — is one of the most common data forms in existence, thanks to the stable development and scaled deployment of database systems in the last few decades.
At present, however, despite the breakthroughs brought by large pre-trained models in other domains such as ChatGPT <cit.> or SAM <cit.>, how to extract common knowledge across tables at a scale that may eventually lead to generalizable representations for tabular data remains largely unexplored.
Indeed, there have been a few works on this topic. Most (if not all) of them are limited to the scope of a single table or a fixed schema.
In this work, we first identify the crucial research challenges behind tabular data pre-training, particularly towards the cross-table scenario.
We position the contribution of this work in two folds:
(i)-we collect and curate nearly 2k high-quality tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information.
(ii)-we propose a novel framework that allows cross-table pre-training, dubbed CT-BERT.
Noticeably, in light of pioneering scaled cross-table training, CT-BERT is fully compatible with both supervised and self-supervised schemes, where the specific instantiation of CT-BERT is very much dependent on the downstream tasks.
We further propose and implement a contrastive-learning-based objective and a masked table modeling (MTM) objective in CT-BERT, inspired by the computer vision and natural language processing communities but carefully tailored to tables.
Extensive empirical results on 15 datasets demonstrate CT-BERT's state-of-the-art performance, where both its supervised and self-supervised setups significantly outperform prior approaches.
CT-BERT: Learning Better Tabular Representations Through Cross-Table Pre-training
Junbo Zhao
=================================================================================
§ INTRODUCTION
With the extensive application of database management systems and the vigorous development of the internet industry, tabular data — also known as structured data — truly abounds.
Indeed, the accumulation of large-scale tables stored in databases has brought significant value to industry and individuals, through technology stacks such as data mining and the development of OLAP databases.
Notably, over the past decade, various large-scale collections of tabular datasets have been proposed <cit.>, and they were used for tasks like tableQA <cit.>, table interpretation <cit.>, table expansion <cit.>, etc.
Despite that, how to enable large-scale, distributed, cross-table pre-training remains largely untapped.
This, unfortunately, is in stark contrast to the other communities such as computer vision and natural language processing.
In both of these domains, techniques like pre-training followed by fine-tuning have long established a dominant methodological status, such as BERT <cit.>, CLIP <cit.>, ChatGPT <cit.>, GPT4 <cit.>, SAM <cit.>, etc.
In hindsight, the successes of these large-scale models lie in their ability to extract common semantic structure from the seen/unseen input and condense this knowledge/common sense into a vectorial representation.
The emergence of this capacity stems from a scaled pre-training process on a gigantic amount of text or vision data across the domains.
Recently, a few works have attempted to learn contextualized representation from tabular data through neural networks, or more specifically the transformer model <cit.>, such as TabTransformer <cit.>, VIME <cit.>, TabNet <cit.>, SAINT <cit.>, etc.
While the concept is truly promising, these approaches are limited to single-table training with a fixed schema.
Most closely related to our work are TransTab <cit.> and PTab <cit.>. Both approaches note the importance of cross-table learning. However, they process the table into a form close to text data, for instance by converting a sample row in the table into a sentence, without much adaptation specific to structured data.
This weakened coupling of the data values in the tables with the schema/meta/column names has arguably prevented these approaches from scaling and absorbing common knowledge.
§.§ Challenges
In what follows, we identify the core challenges that remained in scaled and cross-table pre-training.
C1. How can pre-training models accept inputs from heterogeneous tables as there are significant differences between different tables? For instance, the feature value "apple" appears under the column names "fruit" and "My_Laptop" in two different tables, conveying completely different meanings.
C2.
Unlike image or text data, where the pixels and word/character tokens are ordered, arbitrarily permuting a table's rows or columns does not change its semantic meaning. We dub this property, unique to tabular data, permutation invariance.
Thus, how can the pre-training mechanism be compatible with this nature of tabular data?
C3. Again driven by the differences from common vision or text data, how should a suitable cross-table pre-training objective be designed, given that there is no obvious context or spatial structure in tabular data?
§.§ Key Idea behind CT-BERT
Ideally, in order for the pre-trained model to properly acquire the common knowledge from multiple heterogeneous tables, the model should be encouraged to learn the innate similarities or dissimilarities among the tabular data distribution.
However, as we posited in the challenges, directly utilizing the original form of the data (or its corresponding embedding) may cause unentangleable confusion.
Let us give a concrete example: given two tables with similar schemas, the two values “10 meters" and “10 kg" are superficially identical in form. Despite that, directly converting them to embeddings may confuse the model and adversely impact convergence and training difficulty.
Abstracting away from this example, to cope with this challenge, the pre-training methodology must be capable of reconciling different measurement systems and notations.
It is true that we could write heuristic rules to tackle this problem, but the number of rules required would surely be insurmountable.
In that regard, we outline the core idea behind CT-BERT.
In a nutshell, any provided table can always be decomposed into features, denoting the data organized column-wise, together with tokens drawn from the schema information, such as the column name or other textual meta-information.
Instead of following a normal embedding-based encoding approach, we proactively combine the feature with the token information, by casting them into a form of textual representation.
For example, we convert the feature value "apple", combined with the schema information, to "fruit is apple", which we dub a phrase, as the atomic representation of the cell value in tabular data. This allows CT-BERT to distinguish the same feature value "apple" in the columns "fruit" and "My_Laptop", respectively.
We postulate that this brings several merits. In particular, challenge C1 can be addressed both theoretically and empirically, and this formulation avoids most heuristic rules, except for the template that sticks the feature and the token together.
§.§ Our Methodology:
Essentially, CT-BERT bases itself upon the phrase as the atomic representation of each unit in any provided table, combining the feature (column name/meta) with the feature value.
We then process each atomic element similarly to word embedding in NLP.
Towards the challenge C2 of the permutation invariance property, we propose a novel transformer <cit.> encoding architecture that is adapted to cater to this nature of tabular data.
As a pioneer work to enable cross-table pre-training, we devise CT-BERT to be compatible with both supervised and self-supervised scenarios.
In that regard, we categorize the available tables drawn from databases by whether a clear label column exists or not, and direct them to the supervised and self-supervised learning paradigms, respectively.
On one hand, for supervised learning, we propose a supervised contrastive learning-based objective to better cluster samples with the same label while allowing different labels to be uniformly distributed over the hypersphere of tabular representations.
On the other hand, in order to take advantage of large-scale unsupervised data, we propose another pre-training method, masked table modeling (which we call MTM) — adapted from the MLM objective in the NLP community <cit.> — which masks some features in the atomic representation and then lets the model predict their recovery (addressing challenge C3).
We believe that if the model can predict the masked features from the retained features, then the model can learn the underlying relationship between the features.
Similar to CV and NLP, this relationship serves as the foundation for the shareable knowledge that is transferred across tables.
§.§ Contributions
To wrap up, the contribution of this article is deemed two-fold.
For one thing, we collect and curate nearly 2,000 tabular datasets, each of which is guaranteed to possess clear semantics, clean labels, and other necessary meta information. We treat these high-quality and labeled datasets as the foundation to launch large-scale pre-training.
For another, we propose a generic and efficient cross-table pre-training solution, dubbed the Cross-Table pre-Training framework (CT-BERT).
CT-BERT promotes several novel developments, including but not limited to: (i)-a novel paradigm compatible with both supervised and self-supervised objectives, (ii)-contrastive learning and masked table modeling (MTM) objectives for pre-training on tables, and (iii)-a novel transformer architecture tailored to the permutation-invariant nature of tabular data. Our pre-trained tabular model can support fine-tuning or few-shot learning for prediction on tables of any shape.
The remainder of the paper is organized as follows. In Section <ref>, we detail the table pre-training dataset we contribute. In Section <ref>, we present the proposed cross-table pre-training framework. In Section <ref>, we conduct extensive experiments to evaluate the effectiveness and superiority of CT-BERT.
§ RELATED WORKS
We provide a brief background on representation learning, models for tabular data, and self-supervised pre-training methods.
§.§ Representation Learning
In recent years, with the development of pre-trained large language models ("LLMs") like GPT-3 <cit.>, the pre-training then fine-tuning and prompting paradigms have attracted attention. These methods typically train models with self-supervised representation learning on large-scale unstructured text and structured knowledge bases, and then fine-tune them or use them for various downstream tasks. In early work in natural language processing, including Word2Vec <cit.> and GloVe <cit.>, pre-training distributed representations of words provided significant improvements over randomly initialized parameters. However, these methods cannot model the use of words in different linguistic contexts. This dilemma has prompted the development of word representations that can capture context and contextual relationships <cit.>, and these pre-trained language models have achieved tremendous success and produced state-of-the-art results in various NLP tasks <cit.>. Similarly, self-supervised representation learning can also be used for structured data such as knowledge bases (KBs) and databases, where entities and relationships in the KB can be embedded into continuous vector spaces and then utilized for various downstream tasks, such as KB completion <cit.>, relation extraction <cit.>, entity resolution <cit.>, etc. Although representation learning on text and KBs has been successful, few works have explored directly learning self-supervised representations on large-scale tabular data for tabular modeling. In this work, we introduce CT-BERT, which is the first method for self-supervised pre-training on large-scale tabular data, and the pre-trained model can be fine-tuned for various downstream tabular prediction tasks.
§.§ Models for Tabular Data
For a long time, traditional machine learning (ML) methods such as tree-based methods <cit.> have dominated this field and have been the preferred choice for most practitioners and data mining competitions (e.g., Kaggle) <cit.>. Recently, many researchers have proposed new neural network-based architectures <cit.> to model tabular data, attempting to challenge the dominance of tree-based models in this field. For example, TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, TabTransformer <cit.> leverages transformers <cit.> to learn categorical features in tables, and AutoInt <cit.> utilizes the attention mechanism <cit.> to model the relationship between user and item features in click-through rate prediction tasks. However, only very few of these neural network-based works <cit.> attempt to investigate how to handle heterogeneous tabular inputs. As a result, a key advantage of deep learning methods, namely the ability to pre-train on large-scale datasets, cannot be fully exploited. As described in Sections <ref> and <ref>, our proposed CT-BERT not only accepts inputs from heterogeneous tables but also achieves permutation invariance over feature columns and leverages semantic knowledge from table headers and textual features. These advancements pave the way for CT-BERT to be pre-trained on large-scale datasets for cross-table prediction.
§.§ Self-supervised pre-training
One of the key reasons for the great success of deep learning in computer vision and natural language processing is that knowledge from a large amount of unlabeled data is learned through a self-supervised pre-training task and then generalized to downstream tasks through fine-tuning. For instance, the masked language modeling (MLM) self-supervised pre-text task <cit.> is employed to learn contextual relationships in natural language processing. In computer vision, masked image modeling (MIM) <cit.> and contrastive learning <cit.> have been used to train powerful image representations. Some studies have attempted to extend the success of self-supervised learning to tabular data. These approaches can be roughly categorized into three types: 1) reconstruction of masked inputs <cit.>; 2) contrastive learning similar to that in SimCLR <cit.>; 3) a combination of the first two. For example, VIME <cit.> utilizes autoencoders to reconstruct corrupted table inputs. SCARF <cit.> randomly selects certain features and replaces them with samples from the corresponding empirical marginal distributions to construct different views of the same sample. We argue that contrastive learning methods similar to that in SCARF <cit.> are not applicable to large-scale unlabeled cross-table pre-training tasks. Assuming the existence of a priori true labels for these unlabeled samples, such contrastive learning methods are highly likely to push apart samples with the same label, especially for tables whose labels have few classes. We are more inclined to believe that methods like masked language modeling (MLM) and masked image modeling (MIM) have greater potential. Therefore, in this work, for the first time, we formalize this series of approaches as masked table modeling (MTM) tasks. Additionally, we propose a novel masked table modeling method that incorporates semantic cues from table headers, which is more suitable for learning cross-table knowledge.
§ PRELIMINARY
§.§ Problem Formulation
Consider a tabular dataset D=(𝐱_i,y_i)_i=1^n, where n refers to the number of samples. 𝐱_i={𝐱_i^cat, 𝐱_i^num}, where 𝐱_i^cat={x_i^1, x_i^2, … ,x_i^a} denotes the a categorical features and 𝐱_i^num∈ℝ^b denotes the b numerical features. y_i∈{1, 2, … , T}, where T refers to the total number of label classes. All samples share the same table header descriptions (column names) 𝐂={c^1, c^2, …, c^a+b}. Our goal is to find the best possible prediction function f_θ to model the mapping between features and labels:
f_θ(𝐱_i; 𝐂) = y_i,
where θ refers to all trainable parameters of the function f.
§.§ Pre-training then Fine-tuning Paradigm in Tabular Domain
Given a generic architecture, often called a backbone (such as a Transformer), and a projection head for mapping to specific tasks, the model is first pre-trained on a large dataset with self-supervised or unsupervised tasks (e.g., contrastive learning or MLM). The individual feature columns of each sample {𝐱^cat, 𝐱^num} are converted to the input format 𝐱_i={𝐞^CLS, 𝐞^1, 𝐞^2, ..., 𝐞^a+b}, which is fed into the Transformer model, and the model is optimized using the self-supervised or unsupervised objectives.
Then, in the downstream task-specific fine-tuning stage, the pre-trained backbone module is retained, the pre-trained projection head is discarded, a classification head for the new task is constructed, and the output 𝐞^CLS is used for multi-class classification, optimized via a cross-entropy loss <cit.>.
§ TABPRETNET: A LARGE-SCALE SEMANTIC TABULAR DATABASE
In recent years, the field of cross-table pre-training has been relatively underexplored. One major challenge lies in the lack of a clean and high-quality tabular dataset. Just as the proposal of ImageNet <cit.> greatly propelled the advancement of computer vision representation learning and influenced various other domains, such as self-supervised learning and transfer learning, a similar catalyst is needed for the domain of tabular representation learning. Therefore, in this work we contribute a large-scale semantic tabular database, which we call TabPretNet, to better train our CT-BERT. TabPretNet is a high-quality, large-scale tabular database built from various public tabular dataset websites through strict data cleaning. These tabular datasets are collected from OpenML[https://www.openml.org/], UCI[https://archive.ics.uci.edu/datasets], CATALOG[https://catalog.data.gov/dataset], and Kaggle[https://www.kaggle.com/]. We have open-sourced TabPretNet[https://drive.google.com/file/d/1-2m1tyejUV5_bZduqZw1ZXS1BUSkhzVl/view?usp=drive_link] and hope it will facilitate future research in the field of tabular representation learning.
With the advent of the Big Data era, the proliferation of database technologies has led to an explosion of tabular data on the Internet. These numerous tabular datasets can help more complex and powerful models and algorithms learn more general tabular representations, and representations are the standard signal linking many machine learning applications today. This means that more novel AI techniques can be made accessible to databases, such as allowing large language models (e.g., ChatGPT <cit.>) to understand databases. However, the quality of tables in Internet databases varies greatly, which can significantly impact the learning performance of models. For example, column names in some tabular datasets are often anonymized or unclear to avoid compromising privacy (e.g., named f1, f2, etc.), which loses important semantic knowledge needed to better understand the tabular data. In addition, some tabular datasets suffer from too many missing values, redundant feature columns, a lack of consistent formatting, etc. Therefore, in this work, we spent considerable effort filtering and cleaning the tabular data collected from Internet databases. Specifically, for each table, our data cleaning includes the following steps:
(1) Check the semantic degree of the column names for each feature. For example, the column names {user_age, weight, monthly_income} carry high semantic information, while the column names {f1, f2, xyz} carry almost no semantic information. We compute a cumulative semantic relevance score for each table. In our cleaning protocol, we discard tables in which fewer than 50% of the features have column names with actual semantic information.
(2) Check the missing values. For example, datasets with more than 40% missing values are discarded, because too many missing values can easily lead to biased or inaccurate results. For the retained tables, we fill the missing values with the mode of the corresponding column.
(3) For categorical features in the tables, we aim to restore them to their original textual values. As for numerical features, we employ min-max normalization. This is done to mitigate the impact of inconsistent measurement units across different tables (e.g., kilograms vs. grams).
(4) For labeled tables with more than 100 features, feature filtering based on Random Forest importance <cit.> is performed, and features with a lower importance ranking are discarded. A condensed sketch of this cleaning pipeline is given below.
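The following pandas/scikit-learn sketch mirrors the four steps above; the thresholds (50%, 40%, 100 features) follow the text, while the semantic_score heuristic is only a placeholder assumption, since the exact column-name scoring procedure is not specified.

```python
# Illustrative sketch of the table-cleaning pipeline (pandas / scikit-learn).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def semantic_score(col_name: str) -> float:
    # Placeholder heuristic: treat names containing an alphabetic word of
    # length >= 3 (e.g. "monthly_income") as semantic; the real scoring is unspecified.
    return 1.0 if any(t.isalpha() and len(t) >= 3 for t in str(col_name).lower().split("_")) else 0.0

def clean_table(df: pd.DataFrame, label_col=None, max_features: int = 100):
    # (1) Drop the table if fewer than 50% of the columns have semantic names.
    if sum(semantic_score(c) for c in df.columns) < 0.5 * len(df.columns):
        return None
    # (2) Drop the table if more than 40% of its cells are missing,
    #     otherwise impute missing values with the per-column mode.
    if df.isna().mean().mean() > 0.4:
        return None
    df = df.fillna(df.mode().iloc[0])
    # (3) Min-max normalise the numerical columns (label column excluded).
    num_cols = [c for c in df.select_dtypes("number").columns if c != label_col]
    df[num_cols] = (df[num_cols] - df[num_cols].min()) / (
        df[num_cols].max() - df[num_cols].min() + 1e-12)
    # (4) For labelled, wide tables keep only the most important features
    #     according to random-forest importance.
    if label_col is not None and df.shape[1] - 1 > max_features:
        X = pd.get_dummies(df.drop(columns=[label_col]))
        rf = RandomForestClassifier(n_estimators=100, random_state=0)
        rf.fit(X, df[label_col])
        keep = X.columns[rf.feature_importances_.argsort()[::-1][:max_features]]
        df = pd.concat([X[keep], df[label_col]], axis=1)
    return df
```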
At present, TabPretNet contains about 17 GB of data, comprising approximately 1000 labeled datasets and 1000 unlabeled datasets. High-quality and semantically rich labeled datasets are usually more difficult to obtain, while unlabeled tabular datasets are easier to collect.
Therefore, in supervised pre-training, the theoretical upper bound of model performance is expected to be limited by the quantity of available labeled tabular datasets. In contrast, self-supervised pre-training has the potential for a higher performance upper bound. As suggested in previous research <cit.>, contrastive learning is poorly suited to tables whose labels are not rich in classes: since randomly drawn negatives are then likely to share the same label, the chance of sampling true negatives is low. This is why we propose a novel self-supervised masked table modeling (MTM) pre-training approach.
We believe that the contrastive-learning-based pre-training approach is more suitable for lightweight labeled scenarios, with its upper limit determined by the number of available labeled tabular datasets. The self-supervised pre-training approach, on the other hand, may require a large amount of training data but theoretically has more room for improvement.
§ METHODS
Previously proposed table pre-training methods <cit.> have all been pre-trained on an individual tabular task dataset. As a result, these pre-trained models exhibit notably poor generalization performance on downstream tasks involving other tables. In this section, we detail our proposed novel cross-table pre-training framework CT-BERT, which improves the generalization ability of pre-trained models by learning shareable knowledge across different tables. The overall architecture is provided in Figure <ref>.
As we have discussed before, cross-table pre-training needs to address three key challenges C1-C3.
For C1, in Section <ref> we propose to use a natural language-like approach to process the input of heterogeneous tables and enhance cross-table transfer learning by leveraging semantic knowledge in the schema. For C2, in Section <ref> we use an adapted transformer encoder <cit.> without positional encoding to model feature-level interactions. For C3, in Section <ref> we propose a novel masked table modeling (MTM) self-supervised pre-training task for large-scale unlabeled dataset scenarios and a contrastive learning-based supervised pre-training task for lightweight labeled dataset scenarios, respectively. At last, in Section <ref> we introduce fine-tuning the pre-trained model on downstream tasks.
§.§ Input Processor on Heterogeneous Tables
Feature columns among tables from diverse domains often exhibit significant variations. Therefore, previous works <cit.> often use a table-specific feature extractor, also called a "feature tokenizer" in the literature. This greatly hinders the model from performing cross-table learning. In CT-BERT, we observe that a table is essentially multimodal structured data, which contains both text (e.g., column names and discrete categorical values) and continuous values. Based on this observation, we use a natural-language-like approach and combine the column-name schema information to convert every feature into a uniformly formatted feature phrase, e.g., "[column name] is [value]". This design has two advantages. First, our model can accept inputs from heterogeneous tables without any table-specific operation. This serves as a necessary condition for enabling cross-table pre-training. Second, the knowledge learned from pre-training can transfer between similar features across different tables via the semantic information in the schema. For example, suppose gender features are recorded in two tables: in one table the column name is "gender" and the value is "male", and in the other table the column name is "sex" and the value is "man". Our model can encode the two feature phrases "gender is male" and "sex is man" into two nearby embeddings (e.g., with high cosine similarity) based on semantic information.
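To make the textualisation step concrete, a few lines of Python suffice; the template follows the "[column name] is [value]" pattern above, and the function is only an illustrative sketch (numerical columns are kept as (header, value) pairs for the separate treatment described next).

```python
# Turn one table row into feature phrases of the form "[column name] is [value]".
def row_to_phrases(row: dict, categorical_cols: set):
    phrases = []
    for col, val in row.items():
        if col in categorical_cols:
            phrases.append(f"{col} is {val}")   # e.g. "fruit is apple"
        else:
            phrases.append((col, float(val)))   # numerical: keep (header, value) pair
    return phrases

print(row_to_phrases({"gender": "male", "age": 0.42}, categorical_cols={"gender"}))
# -> ['gender is male', ('age', 0.42)]
```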
For each feature phrase, we convert it into a low-dimensional embedding and employ it to model the feature interaction in the subsequent phase. The right part of Figure <ref> illustrates the details about how we handle the categorical and numerical features separately to get the feature embedding.
Categorical Feature. For each sample x_i, each discrete category has a corresponding text description (e.g., 1 for "man", 2 for "woman"). We concatenate the column name and the original categorical description to form a feature phrase. Then, we use a pre-trained BERT <cit.> model, which contains generic semantic knowledge, to tokenize the phrase and generate the corresponding embedding for each token. Further, we pool the token embeddings of the j-th feature into one feature embedding 𝐞_i^j∈𝐑^d. In our experiments, we tried average, self-attention <cit.>, and other pooling methods. See Section <ref> for ablation experiments on these pooling strategies. Among them, the average pooling strategy performs the best. Therefore, unless otherwise specified, average pooling is used by default.
Numerical Feature. It is known that, at least for now, pre-training token embeddings of continuous values is ineffective <cit.>. For numerical features, we process their column names in the same way as for categorical features to obtain the header embedding 𝐜^𝐣∈𝐑^d. Then we multiply the normalized numerical value with the corresponding header embedding to get the feature embedding 𝐞_i^j=x_i^j×𝐜^𝐣∈𝐑^d. Note that the normalization of the numerical values is important here, as it helps knowledge transfer across different tables, because the same numerical feature may use different measurement units in different tables. For example, height may be recorded in meters in one table but in centimeters in another.
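A minimal sketch of the two embedders, assuming the HuggingFace bert-base-uncased checkpoint and the default average pooling; the projection from BERT's 768-dimensional output to the model dimension d is noted in a comment rather than implemented.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

@torch.no_grad()
def text_embedding(text: str) -> torch.Tensor:
    """Average-pool BERT token embeddings of a phrase or a column name."""
    toks = tokenizer(text, return_tensors="pt")
    hidden = bert(**toks).last_hidden_state   # (1, n_tokens, 768)
    # A learnable linear layer would map this 768-d vector to the model dim d.
    return hidden.mean(dim=1).squeeze(0)      # (768,)

def categorical_embedding(col: str, value: str) -> torch.Tensor:
    return text_embedding(f"{col} is {value}")        # e_j from the feature phrase

def numerical_embedding(col: str, x_normalised: float) -> torch.Tensor:
    return x_normalised * text_embedding(col)         # e_j = x_j * c_j (header embedding)
```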
We note that previous works <cit.> have also tried to combine column names to convert each sample into a sequence of text tokens, with the subsequent learning built at the token level. We think that such token-level interactions are more suitable for extracting textual semantic information from tables (e.g., the TableQA task <cit.>), but are not well-suited for our target column prediction task. For example, for a "work" column with the value "associate professor", this feature will first be converted into three token embeddings: [work], [associate], and [professor]. The subsequent model will then learn the relationship between the [associate] token and the [professor] token within the same column, which is unreasonable. The experimental results in Section <ref> also validate this observation. In our design, however, one column corresponds to one feature embedding, and the subsequent model learns at the feature level. This is a straightforward but effective enhancement. At the same time, for tables with a large number of features, such a design also improves computational efficiency and memory usage.
§.§ Feature Interaction
There is no inherent order relationship among the columns of a table. In other words, tables possess permutation invariance in the column dimension. Previous tabular modeling works <cit.> often overlooked this aspect by directly employing the transformer architecture <cit.>. Therefore, we make certain modifications to the standard transformer encoder to adapt it to tabular data. Specifically, we 1) discard positional encoding and 2) use a shared-parameter fully connected feed-forward network in each transformer encoder block. As a result, our adapted transformer encoder block contains two sub-layers: a multi-head self-attention layer and a shared-parameter fully connected feed-forward layer. In addition, a residual connection <cit.> is applied around each sub-layer, followed by layer normalization <cit.>. The multi-head self-attention mechanism is the key to modeling feature interactions. It learns the relationships between features through Query, Key, and Value matrices and is calculated as follows:
MultiHead(𝐇^l) = Concat(head_1, …, head_i, …, head_h))𝐖^O,
head_i = Attention(𝐇^l𝐖_i^Q,𝐇^l𝐖_i^K,𝐇^l𝐖_i^V),
Attention(𝐐,𝐊,𝐕)=Softmax(𝐐𝐊^T/√(d))𝐕,
where 𝐇^l∈ℝ^n × d is the input of the l-th layer; 𝐖^O∈ℝ^d × d is a parameter matrix; 𝐖_i^Q, 𝐖_i^K and 𝐖_i^V ∈ℝ^d × d_head. d_head=d/h is the dimension of each attention head. Inspired by BERT <cit.>, we add a special classification token (𝐞^CLS∈ℝ^d) at the first position of the input sequence in each layer. This special token is used as the aggregate sample representation and then serves the subsequent pre-training and downstream tasks. As described in Section <ref>, we obtain the processed feature embeddings 𝐄={𝐞^1, 𝐞^2, …, 𝐞^a+b} from the raw tabular data. So the first-layer input is 𝐇^0=[𝐞^CLS, 𝐄]. Finally, we can model higher-order feature interactions step by step through the following calculation:
𝐇^l+1=LayerNorm(𝐇̂+linear(𝐇̂)),
𝐇̂=LayerNorm(𝐇^l+MultiHead(𝐇^l)).
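The sketch below shows one such adapted encoder block in PyTorch: no positional encoding, multi-head self-attention and a feed-forward sub-layer, each wrapped in a residual connection and layer normalization. The hyper-parameters mirror the implementation details reported later (d=128, 8 heads, hidden size 256, dropout 0.3), but the code is an illustrative approximation rather than the exact training implementation.

```python
import torch
import torch.nn as nn

class TabularEncoderBlock(nn.Module):
    """One adapted encoder block: self-attention + feed-forward, each with a
    residual connection and LayerNorm. No positional encoding is used, so the
    output at the [CLS] position is invariant to the order of the columns."""
    def __init__(self, d: int = 128, heads: int = 8, d_ff: int = 256, p: float = 0.3):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, dropout=p, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(),
                                nn.Dropout(p), nn.Linear(d_ff, d))
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, 1 + n_features, d), position 0 holds the [CLS] token.
        a, _ = self.attn(h, h, h)
        h = self.ln1(h + a)                  # H_hat = LayerNorm(H + MultiHead(H))
        return self.ln2(h + self.ff(h))      # H_{l+1} = LayerNorm(H_hat + FFN(H_hat))
```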
§.§ Pre-training Across the Tables
Our work is the first to explore large-scale cross-table pre-training. Supervised and self-supervised pre-training are two major approaches in the field of deep learning. As described in Section <ref>, we contribute a cross-table pre-training dataset collected from various domains that includes approximately 1000 labeled tables and 1000 unlabeled tables. In this work, based on the nature of the collected dataset TabPretNet, we simultaneously explore supervised and self-supervised cross-table pre-training approaches. First, for the relatively more easily learnable labeled tabular datasets, we propose a randomly subsampled supervised contrastive learning approach adapted to the cross-table pre-training task. Second, for large-scale unlabeled tabular datasets, some studies have discussed the limitations of contrastive-learning-based methods in unlabeled tabular scenarios <cit.>. So, in order to fully leverage the potential of shareable knowledge within unlabeled tabular data, in CT-BERT we propose a novel masked table modeling (MTM) self-supervised cross-table pre-training method.
Details of the two cross-table pre-training approaches are as follows:
Supervised contrastive learning. In the labeled tabular scenario, we observe that samples with the same labels tend to have similar feature sets. Based on this observation, we make a bold hypothesis: a powerful representation should model the invariant factors of feature sets sharing the same label. We therefore propose a random overlapping subsampling method to construct positive and negative samples for contrastive learning.
Figure <ref> illustrates how we randomly sample subsets and divide positive and negative pairs. Specifically, for each row (𝐱_i,y_i) we randomly sample k feature subsets {𝐬_i^1, 𝐬_i^2, …, 𝐬_i^k} and set all their labels to y_i. Features may partially overlap between subsets. In this way, feature subsets with the same label form positive pairs, and subsets with different labels form negative pairs. The overall contrastive loss is:
ℒ_pretrain^CL(𝐗,𝐲)=1/| B |∑_i∈ B1/| P(i) |∑_p ∈ P(i)Ψ(𝐳_i^CLS,𝐳_p^CLS),
Ψ(𝐳_i^CLS,𝐳_p^CLS)=-log(exp(sim(𝐳_i^CLS,𝐳_p^CLS)/τ)/∑_i'∈ Bexp(sim(𝐳_i^CLS,𝐳_i'^CLS)/τ)),
where B is the set of samples in a batch and P(i)={p|p∈ B, p≠ i, y_i=y_p}. The previous tabular contrastive learning work SCARF <cit.> focused only on constructing different views of the same samples, simply treating all other samples as negative pairs. This only works when the label classes are so numerous that the sample labels within a batch are almost all different. Compared to the vertical, fixed-partition tabular contrastive learning method <cit.>, our method can learn more robust sample representations from richer feature subsets via random sampling.
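A compact sketch of the subset sampling and of the loss above, operating on the [CLS] embeddings of the sampled subsets; the subset size, the number of subsets k, and the temperature value are illustrative placeholders rather than the values used in training.

```python
import random
import torch
import torch.nn.functional as F

def sample_feature_subsets(columns, k=3, frac=0.6, seed=None):
    """Draw k (possibly overlapping) column subsets from one row; every subset
    inherits the row's label y_i."""
    rng = random.Random(seed)
    m = max(1, int(frac * len(columns)))
    return [rng.sample(list(columns), m) for _ in range(k)]

def supervised_contrastive_loss(z, labels, tau=0.07):
    """z: (B, d) [CLS] embeddings of subsets in a batch, labels: (B,) row labels."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                   # pairwise similarities / tau
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # exclude the anchor itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask  # the set P(i)
    loss = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```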
Self-supervised MTM. For large-scale unlabeled scenarios, we propose a novel masked table modeling (MTM) self-supervised cross-table pre-training task. For each sample row in every table, we mask a percentage of the features and then reconstruct them from the retained features.
We argue that if the model is able to successfully reconstruct the masked features from the retained features, then it can learn the underlying relationships between features, which can be transferred as shareable knowledge between different tables with similar feature columns; this eventually brings the representations of samples with similar feature relationships closer together.
The middle part of Figure <ref> shows the overview of our self-supervised MTM pre-training method, which can be divided into three steps.
In the first step, we select the features to be masked. Given an input table, we first convert all features of each sample into feature embeddings, as described in Section <ref>. Then we mask approximately a fraction p^mask of the features in each row (p^mask is set to 35% in our experiments; further ablation results are shown in Section <ref>). Specifically, we generate a binary mask vector 𝐦 = [m^1, m^2, …, m^a+b]∈{0, 1}^a+b where m^j is randomly sampled from a Bernoulli distribution with probability p^mask. A "1" in 𝐦 indicates a masked feature and a "0" indicates keeping the original feature.
In the second step, we replace the masked features with a shared, learnable vector 𝐞^mask∈ℝ^d, also called the mask token. Note that for each mask token we add the corresponding header embedding, obtained by pooling the text token embeddings of the column name, because there is no order relationship between the columns of a table. Here, header embeddings play the role that position embeddings play in masked language modeling (MLM) <cit.> and masked image modeling (MIM) <cit.> tasks.
In the third step, we reconstruct the masked features. We feed the masked sample row 𝐱={𝐞^j|m^j=0}∪{𝐞^mask+𝐜^j|m^j=1} into the L-layer transformer encoder to get the encoded representations 𝐇={𝐡^𝐣}_j=1^a+b. For masked numerical features, we pass the encoded representation through a numerical projection matrix 𝐌_pro^num∈ℝ^d× 1 and then calculate the mean squared error loss against the original feature values. For masked categorical features, we pass it through a categorical projection matrix 𝐌_pro^cat∈ℝ^d× d and then compute the cosine similarity with the original feature embedding 𝐞^j. Here the feature embedding 𝐞^j is calculated in the same way as in Section <ref> but with the column names removed. We formulate the masked table modeling pre-training loss as follows:
ℒ_pretrain^mask(𝐗)= 1/| B |∑_i∈ BΦ(𝐱_𝐢,𝐞_𝐢,𝐳_𝐢),
Φ(𝐱_𝐢,𝐞_𝐢,𝐳_𝐢) = 1/N^num∑_j=1^N^num(x_i^j-z_i^j)^2 + 1/N^cat∑_j'=1^N^cat(1-sim(𝐞_i^j', 𝐳_i^j'))
where B is the set of samples in a batch; z_i^j=𝐡_i^j𝐌_pro^num; 𝐳_𝐢^𝐣'=𝐡_i^j'𝐌_pro^cat; N^num refers to the number of numerical features; and N^cat refers to the number of categorical features. We do not compute the traditional cross-entropy loss for categorical features because the same category in the same feature column may be labeled inconsistently in different tables, which can cause confusion during cross-table pre-training. For example, for the "gender" column, one table may have "man" corresponding to label "1" and "woman" corresponding to label "2", while another table might be the exact opposite, with "man" corresponding to label "2" and "woman" corresponding to label "1".
Rather than using a completely random masking strategy, we adjust the proportion of masked numerical and categorical features according to the downstream task. When the downstream scenario is a regression task, the model needs to predict a continuous value, so a pre-training task that predicts masked numerical features is more helpful. Similarly, for classification downstream tasks, masking is biased towards categorical features. The downstream tasks in our experiments are mainly classification, so we set the masking ratio of categorical to numerical features to 7:3 during pre-training, with an overall mask rate of 35%.
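The sketch below illustrates one plausible way to realise the masking and the loss above for a single row; the way the 7:3 categorical-to-numerical split is turned into per-column Bernoulli probabilities is our own simplifying assumption, and proj_num / proj_cat stand for the projection matrices 𝐌_pro^num and 𝐌_pro^cat.

```python
import torch
import torch.nn.functional as F

def make_mask(n_cat: int, n_num: int, p_mask: float = 0.35, cat_share: float = 0.7):
    """Per-row Bernoulli mask: overall rate roughly p_mask, with about cat_share
    of the masked positions falling on categorical columns (7:3 by default)."""
    total = p_mask * (n_cat + n_num)
    p_cat = min(1.0, cat_share * total / max(n_cat, 1))
    p_num = min(1.0, (1.0 - cat_share) * total / max(n_num, 1))
    m_cat = torch.bernoulli(torch.full((n_cat,), p_cat)).bool()
    m_num = torch.bernoulli(torch.full((n_num,), p_num)).bool()
    return m_cat, m_num

def mtm_loss(h_cat, e_cat, m_cat, h_num, x_num, m_num, proj_cat, proj_num):
    """h_*: encoder outputs for one row, e_cat: target categorical feature
    embeddings (built without column names), x_num: true numerical values."""
    loss = h_num.new_zeros(())
    if m_num.any():
        pred = proj_num(h_num[m_num]).squeeze(-1)       # M_pro^num: d -> 1
        loss = loss + F.mse_loss(pred, x_num[m_num])    # MSE on masked numerical values
    if m_cat.any():
        pred = proj_cat(h_cat[m_cat])                   # M_pro^cat: d -> d
        loss = loss + (1.0 - F.cosine_similarity(pred, e_cat[m_cat], dim=-1)).mean()
    return loss
```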
§.§ Fine-Tuning on Downstream Tabular Tasks
After cross-table pre-training, we discard the original projection head and add a new task layer on top of the Transformer encoder. We then fine-tune the parameters on the downstream task datasets. The downstream scenario in our experiments mainly consists of classification tasks, so we employ a simple linear classifier as the task layer. We use softmax <cit.> to calculate the probability of each label category and use the cross-entropy loss as our empirical supervised loss:
ℒ_task(𝐗,𝐲)=-1/N∑_i=1^N∑_j=1^T y_ijlog(f_θ(𝐱_i)_j),
where the label y_i uses one-hot encoding and T is the total number of label categories.
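A minimal PyTorch sketch of this fine-tuning setup: the pre-trained encoder is kept, a fresh linear head is attached to the [CLS] output, and training minimises the cross-entropy above. The assumption that the encoder returns a tensor of the same shape as its input is ours.

```python
import torch.nn as nn

class FineTunedClassifier(nn.Module):
    """Pre-trained backbone + a fresh linear task layer on the [CLS] output."""
    def __init__(self, encoder: nn.Module, d: int = 128, n_classes: int = 2):
        super().__init__()
        self.encoder = encoder                      # pre-trained, kept and fine-tuned
        self.classifier = nn.Linear(d, n_classes)   # new task layer, trained from scratch

    def forward(self, feature_embeddings):          # (batch, 1 + n_features, d)
        h = self.encoder(feature_embeddings)        # assumed to return the same shape
        return self.classifier(h[:, 0])             # logits from the [CLS] position

# Training would then minimise nn.CrossEntropyLoss() on these logits,
# i.e. softmax + cross-entropy as in the equation above.
```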
§ EXPERIMENTS
In this section, we evaluate the effectiveness and superiority of CT-BERT on several benchmark tabular datasets. Specifically, we conduct extensive experiments to demonstrate the following two points:
* How does our backbone, which can accept heterogeneous table inputs, compare with the current state-of-the-art tabular neural network framework when faced with a fixed single table downstream task without pre-training?
* (key) Whether our large-scale cross-table pre-training helps improve performance on downstream tasks, via self-supervised masked table modeling pre-training in large-scale unlabeled scenarios and supervised contrastive learning pre-training in lightweight labeled scenarios, respectively.
§.§ Experimental Setup
§.§.§ Datasets
The experimental dataset consists of two parts: upstream large-scale cross-table pre-training datasets and downstream tabular tasks for evaluating the effectiveness of our model and pre-training.
Large-scale cross-table pre-training dataset: We collected more than 2000 high-quality datasets with semantically meaningful column names and performed data cleaning; the collection includes 1000 labeled datasets and 1000 unlabeled datasets. We call this dataset TabPretNet and describe it in detail in Section <ref>.
Public downstream tabular tasks: We selected 15 common and high-quality tabular datasets from OpenML-CC18 <cit.> to evaluate the effectiveness of our model and pre-training method. These downstream datasets contain both binary and multi-class classification tasks. We included the details and source of each dataset in Table <ref> & <ref> in the Appendix <ref>.
§.§.§ Competing Methods
We conduct experiments on the following "shallow" (e.g., tree-based) and neural network-based methods to show the efficacy and efficiency of CT-BERT on tabular learning.
Shallow baselines:
* Logistic Regression <cit.> is a linear classification algorithm that models the relationship between input variables and a binary outcome using a logistic function. It is widely used due to its simplicity, interpretability, and ability to handle large datasets efficiently.
* XGBoost <cit.> is an advanced implementation of gradient boosting algorithms. It has gained great popularity in machine learning competitions (e.g., Kaggle) and has been considered the dominant approach to modeling tabular data for a long time.
* LightGBM <cit.> is another gradient boosting tree framework. It employs a novel approach called "Gradient-based One-Side Sampling" (GOSS) to achieve faster training speeds and lower memory usage.
Neural network-based baselines:
* MLP (Multilayer Perceptron) <cit.> is a basic feed-forward fully connected artificial neural network architecture, but is considered a competitive neural network approach on tabular data.
* TransTab <cit.> is a newly proposed tabular framework that combines column description and table cells as the raw input to a transformer and is the current state-of-the-art tabular model.
* FT-Transformer <cit.> is an adaptation of the Transformer architecture <cit.> for tabular data (Feature Tokenizer + Transformer).
* TabNet <cit.> uses sequential attention to simulate the process of tree decision-making, enabling interpretability and more efficient learning on tabular data.
* VIME <cit.> is a self- and semi-supervised learning framework specifically designed for tabular data.
* SAINT <cit.> is a newly proposed hybrid deep learning approach to solving tabular data problems and performs attention over both rows and columns.
* DCN-v2 <cit.> is an improved version of the Deep & Cross Network (DCN), claimed to automatically and efficiently capture feature interactions in tabular data.
* AutoInt <cit.> is a model for click-through rate prediction, a type of structured-data task. It uses a multi-head self-attentive neural network to learn high-order interactions of the input features.
§.§.§ Metrics
We follow previous work <cit.> in using AUC <cit.> as the main evaluation metric and report the result averaged over 5-fold cross-validation <cit.>. Note that within each fold of the training set, we partitioned 20% as a validation set, which was utilized for hyperparameter selection and early stopping. For the sake of fairness, we employed the identical dataset splitting setting for all baseline algorithms and CT-BERT on all downstream task datasets.
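An illustrative scikit-learn sketch of this evaluation protocol, assuming numpy arrays, a binary task, and an estimator exposing fit/predict_proba; how the held-out 20% is consumed (early stopping, hyper-parameter choice) is model-specific and only indicated by a comment.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import roc_auc_score

def cross_validated_auc(make_model, X, y, n_splits=5, seed=0):
    """5-fold CV; within each fold, 20% of the training split is held out for
    hyper-parameter selection / early stopping, as described above."""
    aucs = []
    for tr, te in StratifiedKFold(n_splits, shuffle=True, random_state=seed).split(X, y):
        X_tr, X_val, y_tr, y_val = train_test_split(
            X[tr], y[tr], test_size=0.2, stratify=y[tr], random_state=seed)
        model = make_model()
        model.fit(X_tr, y_tr)   # X_val / y_val would drive early stopping where supported
        aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
    return float(np.mean(aucs))
```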
§.§.§ Implementation Details
For details of all baseline implementations, see Appendix <ref>; the settings for all baselines remain consistent across all experiments unless otherwise specified. In the data pre-processing phase, we scale numerical features to [0, 1] by min-max normalization in all methods. For categorical features, we use ordinal codes to represent them in all baselines. However, note that in our CT-BERT, we use the raw textual values of the categorical features in order to better exploit their semantic information. CT-BERT uses a 4-layer transformer, where the embedding dimension of the tokens is 128, the hidden dimension of the intermediate dense layer is 256, and the self-attention module has 8 heads. We use a dropout of 0.3 in all attention layers and feed-forward layers. We choose ReLU for all activation functions. The supervised pre-training method is trained on 1000 labeled datasets, and the self-supervised pre-training method is trained on all 2000 datasets. We train using the Adam <cit.> optimizer with a learning rate in {5e-5, 1e-4, 3e-4}, where the learning rate in the fine-tuning phase is smaller than that in the pre-training phase. The batch size is in {64, 128, 256}. We use a pre-trained BERT-base-uncased <cit.> model from Hugging Face[https://github.com/huggingface] to obtain token embeddings that are rich in semantic information. In the pre-training phase, we set the maximum number of training epochs to 500 for both the supervised contrastive learning and the self-supervised masked table modeling tasks. In the fine-tuning phase, the maximum number of training epochs is 200 and the patience value is set to 20 for early stopping. Experiments were conducted with 8 V100 GPUs, an Intel(R) Xeon(R) Gold 6240 CPU @ 2.60GHz, and 128GB RAM. We use the DeepSpeed <cit.> framework for parallel computation acceleration. DeepSpeed offers a range of optimization techniques, including model parallelism, data parallelism, and mixed-precision training. It improves the efficiency of our large-scale cross-table pre-training, which occupies a large portion of the computational resources in our experiments.
§.§ Overall Performance
In this section, we report the overall performance of CT-BERT. The results are shown in Table <ref>.
§.§.§ Supervised Learning from Scratch
As can be seen in Table <ref>, CT-BERT_NoPT outperforms all existing works on the standardized benchmarking datasets on average. Although TransTab <cit.> greatly outperforms the baseline methods, CT-BERT_NoPT is still slightly higher than TransTab by 0.8% on average. We attribute this to the fact that CT-BERT_NoPT models at the feature level, while TransTab <cit.> models at the token level, which may not be reasonable for tabular data. The experimental results also show that TransTab's performance drops sharply on some datasets, such as car and phishingweb. In addition, we found that CT-BERT_NoPT is also comparable to FT-Transformer <cit.> and SAINT <cit.>.
We believe this is because, on a single table, CT-BERT_NoPT is essentially similar to these methods, which extract features from the table data and then model feature interactions using a similar transformer encoder.
However, the difference is that CT-BERT_NoPT can receive input from heterogeneous tables. This gives our approach a natural advantage in cross-table pre-training, which is detailed in Section <ref>.
§.§.§ Cross-table Pre-training.
We mainly compare against the supervised-from-scratch variant of CT-BERT.
Supervised: In labeled scenarios, our supervised contrastive learning cross-table pre-training model CT-BERT_P_S achieves state-of-the-art average performance. As evident from the results in Table <ref>, CT-BERT_P_S outperforms supervised training from scratch (CT-BERT_NoPT) by 1.29% on average and achieves better performance on 10 out of 15 diverse downstream tabular tasks.
Moreover, we observe that CT-BERT_P_S achieves competitive performance on average compared with the masked table modeling self-supervised cross-table pre-training method. We attribute this to CT-BERT_P_S's ability to fully leverage the label information, enabling the model to learn more powerful sample representations, whereas self-supervised methods may require a larger amount of training data to achieve significant gains.
Self-supervised: In large-scale unlabeled scenarios, as can be seen in Table <ref>, our masked table modeling self-supervised cross-table pre-training model CT-BERT_P_M outperforms supervised training from scratch (CT-BERT_NoPT) by 1.2% on average, and CT-BERT_P_M achieves better performance on 13 out of 15 diverse downstream tabular tasks. It is noteworthy that our cross-table pre-trained model exhibits significant improvements on the cylinder-bands, higgs, and Amazon datasets. We hypothesize that this result can be attributed to the presence of certain tables in the pre-training data that bear close relevance to these downstream tasks. Therefore, we have reason to believe that the masked table modeling cross-table pre-training approach on ultra-large-scale datasets is highly promising on the path toward a comprehensive universal table model.
CT-BERT is the first attempt at such large-scale cross-table pre-training. Our experimental results demonstrate the feasibility of learning shareable knowledge across different tables through cross-table pre-training, which helps the model achieve better generalization on diverse downstream tasks. Both the supervised and self-supervised pre-training methods achieved good performance. We believe that supervised pre-training has higher dataset requirements but may be better suited for specific scenarios, while self-supervised pre-training has the potential for greater scalability with larger pre-training datasets in the future.
§.§ Few-shot Learning
As widely recognized, a significant advantage of pre-trained models is that they still work well when the downstream task data are relatively scarce, commonly referred to as few-shot learning
<cit.>. This capability stems from the fact that the model can learn rich shareable knowledge from large-scale upstream datasets. In the domain of tabular tasks, there are numerous practical application scenarios characterized by limited data resources, such as medical diagnosis <cit.>. In such contexts, the exceptional few-shot learning ability of pre-trained models becomes invaluable. Therefore, we conducted extensive experiments to explore the practical effectiveness of CT-BERT in few-shot learning settings.
Specifically, for each downstream classification tabular dataset, we randomly sampled 5/10/20 samples from each class to construct three new 5-shot/10-shot/20-shot tabular datasets. We then performed both supervised training from scratch and pre-training followed by fine-tuning on these new few-shot datasets. The experimental results are presented in Table <ref>. The self-supervised and supervised pre-trained models significantly outperform the learning-from-scratch baseline in the few-shot setting. In the 5-shot case, CT-BERT_P_S outperforms training from scratch (CT-BERT_NoPT) by 8.4% on average, and CT-BERT_P_M also surpasses it by 3.58% on average. Furthermore, we observe that the pre-trained models exhibit a greater performance improvement when the number of samples is smaller: the improvement is most significant in the 5-shot case, while it is relatively weaker in the 20-shot case. We consider this a reasonable phenomenon, since the shareable knowledge learned through cross-table pre-training is relatively more valuable when the training data are scarce. In conclusion, all these experimental results strongly demonstrate the tremendous potential of cross-table pre-training in the context of few-shot learning.
§.§ Ablation Studies
In order to demonstrate that modeling at the feature level is more effective than the previously used word-token-level modeling for tabular data, we conducted ablation experiments. Specifically, we do not pool the word token embeddings into one feature embedding but feed them directly into the transformer layers for learning. The experimental result is presented in Table <ref> and shows that feature-level modeling is significantly better than word-token-level modeling. Additionally, we further evaluated different pooling strategies: average pooling, max pooling, and self-attention <cit.> pooling. The results are shown in Table <ref>. Among these strategies, average pooling gives the best results. A possible reason is that max pooling may fail to distinguish between different feature values in some cases; for example, the maximum value may come from a word token embedding of the column name, which is identical for all sample rows. The self-attention mechanism may be overly complex for this simple information-extraction step, whereas average pooling handles it simply and efficiently.
§.§ Further Analysis
§.§.§ Convergence Curves
Figure <ref> compares the convergence curves of two paradigms: "training from scratch" and "pre-training then fine-tuning". We observe that pre-training then fine-tuning leads to faster convergence and better results. This demonstrates that CT-BERT has learned shareable knowledge beneficial for downstream tasks through cross-table pre-training. Furthermore, pre-training then fine-tuning can achieve reasonable results within a short period of time. This significantly improves the efficiency of executing downstream tasks that do not require high precision. It also partially alleviates the longer training times associated with neural networks compared to traditional tree-based machine learning methods <cit.>.
§.§.§ Masking Ratio
Previous research <cit.> has suggested that a higher mask rate is required to achieve better performance in masked image modeling tasks, whereas a lower mask rate is sufficient for masked language modeling tasks. In this experiment, we further investigate the impact of the mask rate on masked table modeling tasks, as shown in Figure <ref>. We find that the model performs well with mask rates between 30% and 50%; an excessively high mask rate leads to a steep drop in performance, while an excessively low mask rate leads to a more moderate decline. We attribute this to the high information density of tabular data, where a change in a single feature value can significantly alter the meaning of a sample, so too high a mask rate makes it difficult for the model to learn the correct feature relationships.
§.§.§ Hyperparametric Sensitivity Analysis.
We analyzed the sensitivity to the number of randomly sampled partitions and to the learning rate. We randomly selected several datasets to experiment with the CT-BERT_P_S method. The experimental results are shown in Fig. <ref>. The settings are consistent with Section <ref> except for the corresponding hyperparameters. It can be seen that CT-BERT is robust to these hyperparameters.
§ CONCLUSION
With CT-BERT and TabPretNet, we hope to initiate scaled cross-table pre-training for the database and data mining communities.
Speaking humbly, we deem CT-BERT a pioneering work in scaling tabular data pre-training, one that works in either a supervised or a self-supervised manner.
We empirically demonstrate that facilitating the pre-training procedure across large-scale tabular datasets indeed offers decent efficacy benefits.
Viewed through the lens of the development of current LLMs, our model is still small (50M parameters), roughly the same size as BERT-base <cit.>, in spite of CT-BERT being the largest-scale pre-trained model in tabular modeling thus far.
We think that tabular data pre-training is still where NLP was in the BERT era a few years ago. That is to say, the model size and the dataset volume still fall far behind the development of LLMs such as ChatGPT or its rivals <cit.>.
On the bright side, the volume of available tabular data is truly gigantic — wherever a database system is deployed there will be tabular data — but perhaps much more decentralized than the text and vision data.
In the future, we hope to explore even further scaling CT-BERT and adapting it to more diversified data domains.
§ APPENDIX
§.§ Baseline architecture and implementation
The setup of our baselines follows previous work <cit.> and includes the following methods:
* Logistic Regression: Use the default setting of the package Scikit-Learn. The maximum number of estimators is set to 1000.
* XGBoost: Implemented based on the XGBoost package. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}.
* LightGBM: Implemented based on the LightGBM. We set the maximum number of estimators in {50, 100, 300} and the max depth in {5, 8, 10}.
* MLP: Dense layers with hidden dimensions {256, 256}. Dropout with a rate of 0.1 is used. They are trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 5 with 100 maximum epochs.
* TabNet: Use the official implementation with the default recommended parameters[https://github.com/dreamquark-ai/tabnet]. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈{1e-4, 1e-3, 2e-2}, n_a,n_b ∈ {8, 16, 64, 128}, γ ∈ {1.3, 1.5, 1.8}, categorical embedding dimension ∈ {1, 8, 16} and early stopping patience of 5 with 100 maximum epochs.
* DCN-v2: Use the implementation by paper <cit.>[https://github.com/Yura52/tabular-dl-revisiting-models]. The number of cross is 2. The dropout rate for the feedforward component is 0.1. MLP part has two dense layers of dimension {256, 128}. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* AutoInt: Use the implementation by paper <cit.><ref>. The attention layer number is set to 2. The attention head number is set to 2. MLP part has two dense layers of dimension 256, 128; dropout deactivated; trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5, 1e-4, 1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* SAINT: Use the official implementation[https://github.com/somepago/saint]. The embedding size is 32 dimensions. 6 transformer layers are used. The number of heads of attention is ∈ {4, 8}. The dropout rate is 0.1 in all attention layers and feed-forward layers. Inside the self-attention layer, the q, k, and v vectors are of dimension 16, and in the intersample attention layer, they are of size 64.
* FT-Transformer: Use the official implementation[https://github.com/Yura52/rtdl]. Feed-forward component has 128 dimensions. 2 transformer layers are used. The number of heads of attention is ∈ {2, 4, 8}. The dropout rate is 0.1.
* VIME: We reproduce it by PyTorch <cit.> based on the original official implementation[https://github.com/jsyoon0823/VIME]. We train the model on all training data taking mask rate 0.3, batch size 128, learning rate 1e-4, and 10 epochs. During the fine-tuning phase, we add a classifier after the encoder with three dense layers of 100 dimensions and ReLU activations. Trained with batch size ∈ {16, 32, 64, 128}, learning rate ∈ {5e-5,1e-4,1e-3}, and early stopping patience of 10 in 100 maximum epochs.
* TransTab: Use the official implementation[https://github.com/RyanWangZf/transtab]. Token embedding has 128 dimensions. 2 transformer layers are used. The number of heads of attention is 8. We train the model on all downstream task data taking batch size 64, learning rate 1e-4, dropout rate 0, and early stopping patience of 10 in 100 maximum epochs. We run the pre-training, transfer learning, and vanilla supervised training methods in the paper, and take the highest score.
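For illustration, the sketch below shows how the XGBoost hyperparameter grid listed above could be instantiated with scikit-learn's GridSearchCV. This is not the authors' code: the synthetic dataset and the ROC-AUC scoring are placeholders of our own, used only to make the grid concrete.

```python
# Illustrative sketch (not the original setup): grid search over the XGBoost
# baseline hyperparameters listed above.
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

# Placeholder data standing in for one downstream tabular dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "n_estimators": [50, 100, 300],  # maximum number of estimators
    "max_depth": [5, 8, 10],         # maximum tree depth
}

search = GridSearchCV(
    XGBClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",  # assumed metric; the paper's metric may differ
    cv=3,
)
search.fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```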
§.§ Details of the downstream task datasets
The downstream task datasets are mainly from the OpenML-CC18 benchmark <cit.>.
|
http://arxiv.org/abs/2307.04484v1 | 20230710111419 | Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging | [
"Raziye Kubra Kumrular",
"Thomas Blumensath"
] | cs.LG | [
"cs.LG",
"physics.app-ph",
"physics.comp-ph"
] |
Invertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications in Spectral X-ray Imaging
Raziye Kubra Kumrular and Thomas Blumensath
R. K. Kumrular and T. Blumensath are with the ISVR Signal Processing and
Audio Hearing Group, University of Southampton, Southampton SO17 1BJ, U.K.
(e-mail: [email protected] )
Received; accepted
=============================================================================================================================================================================================================================================
X-ray interaction with matter is an energy-dependent process that is contingent on the atomic structure of the constituent material elements. The most advanced models to capture this relationship currently rely on Monte Carlo (MC) simulations. Whilst these models are very accurate, they are of limited use in many problems in spectral X-ray imaging, such as data compression, noise removal, spectral estimation, and the quantitative measurement of material compositions, as these applications typically require the efficient inversion of the model, that is, the estimation of the best model parameters for a given spectral measurement. Current models that can be easily inverted, however, typically only work when modelling spectra in regions away from their K-edges, so they have limited utility when modelling a wider range of materials. In this paper, we thus propose a novel, non-linear model that combines a deep neural network autoencoder with an optimal linear model based on the Singular Value Decomposition (SVD). We compare our new method to alternative linear and non-linear approaches, a sparse model and an alternative deep learning model. We demonstrate the advantages of our method over traditional models, especially when modelling X-ray absorption spectra that contain K-edges in the energy range of interest.
Convolutional neural network (CNN), Denoising autoencoder, K-edge, Singular value decomposition (SVD), X-ray absorption spectrum
§ INTRODUCTION
X-ray Computed Tomography (XCT), which generates volumetric images based on measurements of X-ray transmission through an object, is a versatile imaging technique with applications in industry, security, medicine, and scientific investigation <cit.>. X-ray transmission is a function of X-ray energy, and the measurement of this dependency can be of significant importance in many applications. We are here interested in building models of this dependency that can help exploit it, by allowing us to remove measurement noise, compress measurement data and constrain the estimation from limited measurements. In these applications, using models that constrain the measurement whilst also allowing for easy estimation of the model parameters is crucial.
Whilst the physical interaction between photons and material can be modelled explicitly via very accurate Monte Carlo (MC) simulations<cit.>, these models do not allow for simple model inversion. We thus here develop models with few parameters (so-called low-dimensional models) that are easy to invert, that is, that allow us to easily compute optimal parameters for a given X-ray spectral observation. These models can then be used to create a parameterised function as a computational tool for spectral data analysis that has a range of significant applications in X-ray imaging. For example, traditional XCT reconstruction algorithms that ignore energy dependence produce image artefacts which can be removed when using invertible low-dimensional models <cit.>. Furthermore, low-dimensional models are crucial to remove measurement noise or constrain the ill-conditioned inverse problems that arise in several spectral imaging methods <cit.>.
Our work here is particularly motivated by our interest in measuring the spatial distribution of X-ray absorption spectra using commonly available lab-based X-ray tomography systems. There are several approaches to this. X-ray sources found in these systems generate X-ray photons with a range of energies (the X-ray source spectrum I_0(E)), though the X-ray detector does not normally differentiate different energy levels. To estimate absorption spectra, Dual-Energy CT uses two source spectra to allow spectral estimation <cit.> by utilising a two-parameter linear absorption spectral model. In Multi-Energy computed tomography (MECT), also called spectral X-ray tomography, spectrally resolved measurements are taken using photon counting detectors (PCD), though this comes at the cost of additional hardware requirements, a significant decrease in measurement speed, a significant increase in measurement noise as well as an increase in computational loads associated with the increase in measured data <cit.>. In all of these applications, a more accurate invertible low-dimensional model of the X-ray absorption spectra is of significant interest, especially when imaging a wide range of materials.
The attenuation of an X-ray beam with photons of a single energy (E) travelling along a path through an object is often modelled using the Beer-Lambert law:
I(E)=I_0(E)e^-∫μ(x,E) dx
where I(E) is the X-ray intensity measured by the detector, and I_0(E) is the X-ray intensity that would be measured by the detector without an object present. μ(x,E) is the energy-dependent X-ray linear attenuation coefficient (LAC) at position x along the X-ray beam and the integration is along the line of the X-ray path through the object.
For X-ray energies below about 1.02 MeV, X-ray material interactions are due to three primary phenomena: Rayleigh scattering (μ_R(E)); Compton scattering (μ_C(E)); and the Photoelectric effect (μ_P(E)) <cit.>. The total linear attenuation coefficient μ(E) can thus be written as:
μ(E)=μ_R(E)+ μ_C(E) +μ_P(E)
Figure <ref> shows the total linear attenuation of Aluminum (atomic-number(Z)=13) and Iodine (Z=53) and the contribution of each of these interactions.
We here show the LAC as a function of energy, focusing on the energy range between 20 keV and 150 keV commonly used in lab-based X-ray systems. Of particular interest for our paper will be the step in the LAC due to the Photoelectric effect (as seen in Fig <ref>b for Iodine at 33.17 keV), which appears at the K-shell binding energy of the atom. This step is known as the K-edge of the element and is unique for each element <cit.>.
The intrinsic dimensionality of the LACs using a linear principal component model has been studied previously in different settings <cit.>. We here instead investigate non-linear low-dimensional models of the X-ray absorption spectrum that work for all elements (Z≤ 92) and over energy ranges found in typical lab-based tomography systems (20keV≤ E≤150keV).
§ STATE OF THE ART ABSORPTION SPECTRUM MODELLING
§.§ Representation of Linear Attenuation Coefficient with Linear Models
Different X-ray absorption models have been proposed in the literature. These models typically assume absorption spectra to be a linear combination of two or more basis functions, which are assumed to be independent of the material <cit.>.
§.§.§ Photoelectric-Compton Basis (PCB) model
The first model is based on (<ref>), where Rayleigh scattering is assumed to be negligible <cit.>. The Photoelectric-Compton Basis (PCB) model thus only uses basis functions to represent the Photoelectric effect and Compton scattering, which are assumed to be material invariant.
This holds for energies far from the K-edge energy of a material (see Fig. <ref>a), where the Photoelectric absorption and Compton scattering phenomena can be approximated as <cit.>:
μ(E)=a_pf_p(E) + a_cf_KN(E)
where f_p(E) and f_KN(E) are functions of energy only and capture the energy dependence of the Photoelectric absorption and Compton scattering. a_p and a_c, on the other hand, are parameters that are independent of energy and instead only vary with the material (they are functions of the electron density (ρ_e) and the atomic number of the material). a_p and a_c are thus two parameters that can be used to fit this linear two-dimensional model to data <cit.>. For a single material, they can be derived as functions of ρ_e and Z as:
a_p=ρ_eC_PZ^m
a_c=ρ_e,
where C_P = 9.8x10^-24 <cit.> is a constant, and m= 3.8 was determined experimentally.
The energy dependence of the Photoelectric effect is approximated by f_p(E)=1/E^n and it is possible to approximate the energy dependence of Compton scattering using the Klein-Nishina function (<ref>)<cit.>.
f_KN(E) = (1+α)/α^2 [ 2(1+α)/(1+2α) - ln(1+2α)/α ] + ln(1+2α)/(2α) - (1+3α)/(1+2α)^2
where α is E/E_e and E_e ≈ 511 keV denotes the rest mass energy of an electron. This two-dimensional linear model is suitable for low atomic number materials (Z<18) that do not have a K absorption edge in the range of energies considered, though the approximation error increases close to the K-edge as well as for higher energies <cit.>.
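To make the PCB model concrete, the following sketch implements the two energy basis functions and fits a_p and a_c to a spectrum by least squares. It is only an illustration of the model above: the exponent n = 3, the 26-bin energy grid and the synthetic example spectrum are our own assumptions, not values from this work.

```python
# Minimal sketch of the Photoelectric-Compton Basis (PCB) model; the exponent n
# and the example data are illustrative assumptions.
import numpy as np

E_e = 511.0  # electron rest mass energy in keV

def f_p(E, n=3.0):
    """Energy dependence of the Photoelectric effect, ~1/E^n (n assumed here)."""
    return 1.0 / E**n

def f_kn(E):
    """Klein-Nishina function capturing the Compton-scattering energy dependence."""
    a = E / E_e
    return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
            + np.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a)**2)

def fit_pcb(energies, spectrum):
    """Least-squares fit of mu(E) ~ a_p * f_p(E) + a_c * f_KN(E)."""
    basis = np.column_stack([f_p(energies), f_kn(energies)])
    (a_p, a_c), *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
    return a_p, a_c, basis @ np.array([a_p, a_c])

# Example usage on an assumed 20-150 keV grid with a synthetic K-edge-free spectrum.
energies = np.linspace(20, 150, 26)
mu = 3e4 * f_p(energies) + 0.5 * f_kn(energies)
a_p, a_c, approx = fit_pcb(energies, mu)
```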
§.§.§ Material-basis (MB) model
The second linear model uses material-basis (MB) functions, which are two or more LAC functions taken from previously chosen reference materials<cit.>. This model is particularly popular in medical imaging, where the imaged object can be modelled using a basis function for the LAC of bone and one for soft tissue (or water)<cit.>. However, it is hard to express a wider range of materials with just two reference materials in the MB model, and this model does not provide a direct estimate of the electron density and effective atomic number <cit.>
§.§.§ Learned linear representations
Basis functions for low-dimensional modelling of X-ray absorption spectra can also be learned from training data. This can be done using the singular value decomposition (SVD), which also gives an estimate of the approximation error that can be achieved. The SVD computes the best linear approximation to a given training dataset in the mean squared error sense for a given size of subspace. There is a close relationship between SVD and principal component analysis (PCA) <cit.>, which has been used several times to derive low-dimensional linear models <cit.>.
For materials without K-edge in the energy range, it has been found that SVD models provide good approximations to LACs using two basis functions <cit.>. Furthermore, the learned basis functions are very similar to μ_p(E) and μ_c(E) <cit.>.
However, these models no longer work close to a K-edge <cit.>, though increasing the number of basis functions naturally has been found to increase performance also in these cases <cit.>.
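A minimal sketch of such a learned linear model is given below: the leading right-singular vectors of a matrix of training spectra serve as basis functions, and a new spectrum is approximated by projecting it onto them. The random training matrix is only a placeholder for real LAC data.

```python
# Sketch of an SVD-based low-dimensional linear model of absorption spectra.
# `train_spectra` is a placeholder (num_spectra x num_energy_bins) matrix.
import numpy as np

rng = np.random.default_rng(0)
train_spectra = rng.random((1000, 26))      # placeholder training data

def fit_svd_basis(train_spectra, k=2):
    """Return the k leading right-singular vectors as basis functions."""
    _, _, vt = np.linalg.svd(train_spectra, full_matrices=False)
    return vt[:k]                           # shape (k, num_energy_bins)

def project(spectrum, basis):
    """Best approximation of `spectrum` in the span of the (orthonormal) basis."""
    coeffs = basis @ spectrum
    return coeffs, coeffs @ basis

basis = fit_svd_basis(train_spectra, k=2)
coeffs, approx = project(train_spectra[0], basis)
```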
§.§ Non-linear models
Given the inability of low-dimensional linear models to capture K-edges in absorption spectra, non-linear models might be suitable alternatives. As there are no known analytic models that capture the change in absorption around the K-edge of all materials in a succinct parameterization, learned non-linear models are a viable alternative.
§.§.§ Sparse Model
We have already introduced the idea of using a material basis function model. It is possible to include a basis function for each element in the periodic table, but as a linear model, this would require us to fit many coefficients. Instead, to derive models with few non-zero coefficients when using a larger set of material basis functions, sparse models can be used <cit.>. Whilst the generative model here is still linear (i.e. a spectrum is modelled as a linear combination of basis functions), the estimation of the weights now becomes a non-linear process. To get a low-dimensional model, the basic assumption then is that a given spectrum represents a material that is a combination of only a few elements.
Sparse models have been suggested as a complement to traditional regression methods for better identification of spectra in Raman spectroscopy <cit.>, though to the best of our knowledge, have not yet been used to model X-ray absorption spectra.
§.§.§ Neural Network based Models
Recent advances in deep learning now allow the estimation of complex non-linear relationships in complex data.
A suitable model for our purpose is an autoencoder, which is a deep neural network that can learn a non-linear low-dimensional representation <cit.>.
The network consists of two main components: a non-linear encoder, which compresses the input into a latent space representation, and a non-linear decoder, which reconstructs the data from the low-dimensional representation <cit.>. For a single material, autoencoders have already demonstrated the ability to capture fine detail in the absorption spectrum around K-edge energies <cit.>.
To increase robustness and to incorporate noise suppression, an autoencoder is often trained as a denoising autoencoder (DAE), the only difference being that the input to the encoder is corrupted by noise during training.
§ MATERIAL AND METHODS
In this paper, we hypothesize that a single non-linear low-dimensional latent representation will allow us to model the X-ray absorption spectra of all elements, including those that have a K-edge in the energy range of interest. As low Z materials without a K-edge in the energy range under investigation are already well approximated using two linear basis functions, we propose to model the difference between a given spectrum and an optimal two-dimensional linear approximation.
§.§ Proposed Non-linear Model
Our proposed model combines a non-linear autoencoder with a two-dimensional SVD-based representation as shown in Fig. <ref>.
The SVD learns the effect of the Photoelectric effect and Compton scattering for materials with low atomic numbers, where we do not have a K-edge in the energy range of interest. The autoencoder then uses a 3-node latent representation to try and model the deviation of the true spectrum from the linear model of materials that have a K-edge in the energy range.
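The following sketch illustrates how the two components could be combined at inference time, assuming a fitted two-component SVD basis and a pre-trained autoencoder; the function name and interfaces are our own illustrative choices, not the exact implementation.

```python
# Sketch of the hybrid model: 2-component SVD approximation plus a non-linear
# correction of the residual predicted by a pre-trained autoencoder.
# `svd_basis` is a (2 x num_bins) matrix and `autoencoder` any callable mapping
# a residual vector to its reconstruction; both are assumed to exist already.
import numpy as np

def hybrid_reconstruct(spectrum, svd_basis, autoencoder):
    coeffs = svd_basis @ spectrum          # 2 linear parameters
    linear_approx = coeffs @ svd_basis     # best 2D linear approximation
    residual = spectrum - linear_approx    # deviation, e.g. around K-edges
    correction = autoencoder(residual)     # 3 latent parameters inside
    return linear_approx + correction      # final 5-parameter approximation
```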
§.§ Low Dimensional Representation of X-ray Absorption Spectrum with the Autoencoder
There are different network architectures that can be used as the autoencoder in our model. We here compare two convolutional neural networks (CNNs) and three fully connected neural networks (FCNNs). The most basic FCNN simply consists of a single input layer, the hidden (code) layer and an output layer, with ReLU non-linearities in the input and output layers. The other four network architectures are shown in Fig. <ref> and Fig. <ref>. These architectures all have 3 nodes in the latent space when used jointly with the two-component SVD model, or 5 nodes if used without an initial SVD approximation.
The main difference between FCNN2 and FCNN3 is the layer structure. In the FCNN3, the number of nodes shrinks gradually in the encoder and expands gradually in the decoder, which is a regular layer structure for the autoencoder, whilst, for FCNN2, the number of nodes in two consecutive layers first shrinks by about half before slightly expanding again in the next layer, a pattern that is repeated in the encoder and inverted in the decoder. Batch normalization is used to prevent overfitting. The main difference between the two convolutional networks is that CNN1 uses strided convolutions, whilst in CNN2 we use max pooling. To apply our idea to datasets sampled at different energy levels (131 energy levels), the CNN2 architecture is modified by creating much deeper layers but using the same node number in the latent space.
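As an illustration of this family of architectures, the sketch below defines a small fully connected autoencoder with a 3-node latent space in PyTorch. The exact layer widths, and the omission of batch normalization, are our own simplifications, since the full architectures are only specified in the figures.

```python
# Illustrative fully connected autoencoder with a 3-node latent space
# (layer widths are assumptions; they gradually shrink/expand as in FCNN3).
import torch
from torch import nn

class FCAutoencoder(nn.Module):
    def __init__(self, num_bins=26, latent_dim=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(num_bins, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
            nn.Linear(8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8), nn.ReLU(),
            nn.Linear(8, 16), nn.ReLU(),
            nn.Linear(16, num_bins),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = FCAutoencoder()
noisy = torch.randn(64, 26)   # placeholder batch of noisy residual spectra
recon = model(noisy)
```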
§.§ A Sparse Regularized Model for X-ray Absorption Spectrum
As a comparison, we also implement a sparse model using a material basis function matrix. Let Y ∈ R^N be the X-ray absorption spectrum of a chemical mixture, and A = [a_1, a_2, ..., a_M] ∈ R^(N×M) a matrix whose columns are the material basis functions of all elements of interest. To compute a sparse representation X, we solve the lasso problem:
min_X ‖Y - AX‖_2 + λ ‖X‖_1
where we use the FISTA algorithm for optimisation.
We here generate the material basis function matrix by using the LAC values for the 92 elements provided by the National Institute of Standards and Technology (NIST) database <cit.>. As the solution to the above lasso problem does potentially provide approximations of the data with more than 5 basis functions, for consistency with our 5 parameter model, we restrict the solution by selecting the 5 largest elements (in magnitude) of X and then fitting these values by computing a least squares solution using only the selected 5 material basis functions.
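A sketch of this procedure is given below, assuming the standard squared-error form of the lasso objective: a plain FISTA solver produces a sparse coefficient vector, the 5 largest coefficients (in magnitude) are selected, and a least-squares refit is computed on the corresponding columns. The random basis matrix is a placeholder for the NIST-derived one.

```python
# Sketch of the sparse model: FISTA for the lasso, then keep the 5 largest
# coefficients and refit by least squares on the selected columns.
# The basis matrix A (one column per element) is a random placeholder here.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista_lasso(A, y, lam, num_iters=500):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(num_iters):
        x_new = soft_threshold(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

def sparse_fit(A, y, lam, k=5):
    x = fista_lasso(A, y, lam)
    support = np.argsort(np.abs(x))[-k:]     # indices of the k largest coefficients
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return support, coeffs, A[:, support] @ coeffs

# Placeholder basis matrix (26 energy bins x 92 elements) and a target spectrum.
rng = np.random.default_rng(0)
A = rng.random((26, 92))
y = A[:, [10, 40]] @ np.array([0.7, 0.3])    # synthetic two-element mixture
support, coeffs, approx = sparse_fit(A, y, lam=0.01)
```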
§.§ Traditional methods
To compare the two non-linear models above to traditional linear and non-linear models, we furthermore implemented an SVD-based method, where we selected the largest 5 components to provide a 5-dimensional linear model. We also implemented our three autoencoder models without the initial 2-dimensional SVD model by extending the dimension of the hidden layer to 5. Thus, all our models could be used to fit 5 parameters into a spectrum.
§.§ Dataset of the simulated X-ray absorption spectrum
X-ray absorption spectra have been simulated using the linear attenuation coefficients of the 92 chemical elements with Z≤ 92. LAC values were obtained by multiplying MAC (Mass attenuation coefficient) with average mass densities obtained from the NIST <cit.>.
The energy range of interest was chosen to be between 20 keV to 150 keV, which is the available source energy range found in many lab-based X-ray tubes. For computational efficiency, spectra were re-sampled into 26 equally sized energy bins, though similar results can be achieved with a finer energy resolution.
We generated a range of different datasets, consisting of combinations of between 1 and 5 different elements, with some datasets having pre-specified numbers of elemental spectra with K-edges. All datasets are summarised in Table <ref>. Each mixture is generated by randomly choosing the elements (possibly with restrictions on the required numbers of K-edges) and then combining them by multiplying them by the standard elemental density for that material as well as a random scalar drawn from a uniform distribution in the range between 0 and 1. To ensure consistent data, the datasets were standardized after generating the combined LACs. To train the de-noising autoencoders, Gaussian noise with zero mean and a standard deviation of 0.1 was added to generate a noisy version of each dataset.
We created various datasets to perform the proposed method and compare it with other methods. Table <ref> shows the name of the generated datasets, where the subscript indicates the number of elements in each mixture in that dataset, e.g. each element in D_2E consists of two randomly selected elements, as well as the number of the elements in each mixture that have K-edges, e.g. each element in D_2E,2K contains two elements with K-edges in the energy range of interest (i.e. (Z>42)). The dataset containing 131 energy levels (D_2E,131) was generated the same way as the other datasets, the only difference being that it was quantized at every energy level.
Example spectra are shown in Figure <ref>a where we show noisy and noise-free spectra from D_2E, and Fig. <ref>b, where we show two example spectra from D_2E,0K.
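The mixture-generation procedure described above can be sketched as follows. The LAC table is a placeholder for the density-scaled NIST curves, and the presence of a K-edge in the energy range is approximated by the Z > 42 threshold mentioned above; everything else follows the description in this subsection.

```python
# Sketch of the simulated-mixture generation: pick random elements (optionally
# requiring a number of K-edge elements, approximated here as Z > 42), scale
# their LAC curves by a uniform random factor and sum them.
import numpy as np

rng = np.random.default_rng(0)
num_bins = 26
lac_table = rng.random((92, num_bins))        # placeholder LAC curves for Z = 1..92

def make_mixture(num_elements, num_k_edge=0):
    k_edge_pool = np.arange(43, 93)           # elements with a K-edge in range
    other_pool = np.arange(1, 43)
    z = list(rng.choice(k_edge_pool, num_k_edge, replace=False))
    z += list(rng.choice(other_pool, num_elements - num_k_edge, replace=False))
    weights = rng.uniform(0.0, 1.0, size=num_elements)
    return (weights[:, None] * lac_table[np.array(z) - 1]).sum(axis=0)

def make_dataset(num_samples, num_elements, num_k_edge=0, noise_std=0.1):
    clean = np.stack([make_mixture(num_elements, num_k_edge)
                      for _ in range(num_samples)])
    clean = (clean - clean.mean(axis=0)) / clean.std(axis=0)       # standardization
    noisy = clean + rng.normal(0.0, noise_std, size=clean.shape)   # DAE inputs
    return clean, noisy

clean, noisy = make_dataset(num_samples=1000, num_elements=2, num_k_edge=1)
```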
§.§ Loss function
To evaluate the performance of different methods, we use the normalised mean squared error (NMSE):
NMSE = ‖Y-Ŷ‖^2 / ‖Y‖^2,
where Y is the true X-ray absorption spectrum and Ŷ is the predicted X-ray absorption spectrum. ‖Y-Ŷ‖ is the l_2 norm of the error between the true spectrum and the predicted spectrum, while ‖Y‖ is the l_2 norm of the true spectrum.
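For reference, the metric translates directly into code:

```python
# Normalised mean squared error between a true and a predicted spectrum.
import numpy as np

def nmse(y_true, y_pred):
    return np.linalg.norm(y_true - y_pred) ** 2 / np.linalg.norm(y_true) ** 2
```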
§ EXPERIMENTS AND RESULTS
We test the sparse and machine learning-based non-linear models and compare them to the linear methods. In the rest of the paper, we refer to the proposed hybrid models as SVD/autoencoder and to the autoencoder models with 5 nodes in the latent space layer as 5-dimensional autoencoders. We also fit an SVD model using the largest 5 components, which we call the 5-dimensional SVD. The sparse model, where we fit the largest 5 components after sparse decomposition, is called the Fista model.
All models that include one of the autoencoders were trained using the same parameters, using an Adam optimiser with a batch size of 64 and running for 300 epochs with a mean squared error loss function.
Unless otherwise stated, all autoencoder-based models were trained on the data-set D_2E, which was divided by random training (72%), validation (20%) and test (8%). The validation set was used to validate the model performance during training.
For the SVD/autoencoder model, we trained the SVD and the autoencoder separately, starting by fitting the SVD using data without K-edges, namely D_2E,0K. We then trained the autoencoder in the SVD/autoencoder model with the training data from D_2E, which also included simulated absorption spectra with K-edges. For the training of the autoencoder part of the SVD/autoencoder model, each spectrum was first projected onto the SVD subspace and the residual error was used to train the autoencoder. The output of the autoencoder was then added back to the approximation computed with the SVD model to provide the spectral approximation (as shown in Fig. <ref>).
We also used the same training dataset from D_2E to fit the 5-dimensional SVD model. For the FISTA model, the sparsity parameter (λ) was optimised for optimal performance on the same dataset. As the SVD is known to provide the best linear low-dimensional approximation in the mean squared error sense, we do not report results for other linear models.
After the training step, all models were tested on the test dataset of D_2E, and each model was evaluated by plotting box-whisker plots of the NMSE for each spectrum in the test data. Fig. <ref> shows results for the SVD/autoencoder models, the 5-dimensional autoencoder models, the 5-dimensional SVD model as well as the Fista model. From these results, we see that CNN2 performs better as the non-linear model, both on its own and in conjunction with the initial SVD projection. (Similar results were found when analysing other datasets; results not shown for brevity.) For the remainder of this paper, we thus only report the results for the CNN2-based models, the 5-dimensional SVD model and the Fista model.
To investigate the performance of our ideas on the dataset with finer energy resolution, we followed the same training and testing steps as for the other architectures. In this experiment, we focused on two architectures that are extended versions of CNN2, which performed better than the others. Figure <ref> shows the results of the modified versions of SVD/CNN2 and CNN2, along with the Fista and 5-dimensional SVD results, for the D_2E,131 dataset. The average NMSE performance here is similar to that found for the D_2E dataset.
To further demonstrate this, the modelling performance of our approach was also tested using the D_3E,0K dataset (see Fig. <ref>) of 3-element mixtures without K-edges. For this dataset without K-edges, we again found that the SVD/autoencoder model no longer outperforms all other methods, and in fact, the 5-dimensional CNN2 now performed slightly better in terms of the mean of the NMSE errors. Crucially, the 5-dimensional SVD and Fista models showed almost the same performance. Of interest here is also the fact that the 5-dimensional SVD does not work as well as the non-linear models, which is likely due to the fact that the linear approximation used is not valid in energy ranges close to K-edges.
To see how the performance of our methods changes when the data has more materials with K-edges in their absorption spectra, we plot the NMSE of the datasets (D_3E,1K, D_3E,2K, D_3E,3K, and D_5E,5K) in Fig. <ref> for the SVD/CNN2, the 5-dimensional CNN2, the 5-dimensional SVD and Fista models. Whilst there is a decrease in the performance of the non-linear models if we increase the number of elements with K-edges, their relative performance is consistent, with Fista, SVD/CNN2 and CNN2 working better than the 5-dimensional SVD model in general.
We also trained our best architectures (SVD/CNN2 and CNN2), the 5-dimensional SVD and Fista with two different datasets of 5-element mixtures to see if their performance depends on the training sets. We here used D_5E and D_5E,0K. The training in this experiment is the same as in the previous one; the only difference is the dataset used for training. After the training, these architectures were tested with D_2E,2K and D_5E,5K. Figures <ref>a and <ref>b show the NMSE results of this experiment, and our models (SVD/CNN2, CNN2 and Fista) still have lower errors than the traditional models.
§ DISCUSSION AND CONCLUSIONS
Accurate and precise modelling of the X-ray absorption spectrum of objects has been important for reducing image artefacts <cit.>, estimating material distributions within the object<cit.>, and constraining the ill-conditioned inverse problems <cit.> that arise in
several spectral imaging methods. In this paper, we considered non-linear models of the energy-dependent X-ray absorption spectrum for all possible materials. We introduced a novel non-linear model, consisting of a linear SVD and a deep learning-based approach, that accurately represents the LACs of K-edge-containing materials using several parameters. Furthermore, we evaluated the performance of different deep learning architectures, traditional linear models, and a sparse model for various simulated objects.
As seen in Fig. <ref>, all complex architectures (except SVD/FCNN1 and FCNN1), and the Fista model have a lower approximation error than the best linear model (5-dimensional SVD). Crucially, the traditional linear model has almost the same error, which is 5% higher than the SVD/FCNN1 and FCNN1 models. This primarily shows that a non-linear model is useful for modelling K-edges. Furthermore, this result suggests that more layers should be used while designing the deep learning architectures for modelling. The last and most important result is that the SVD/CNN2 and CNN2 architectures showed the best performance compared to other architectures in the experiment with the D_2E test dataset.
The results of the experiments with the D_2E,131 dataset showed that our models retain their performance even at a finer energy resolution. Interestingly, they also show that if objects with finer resolution have a K-edge in their absorption spectra, the 5-dimensional SVD approach cannot capture it. As can be seen in Fig. <ref>, the SVD model has a higher error (10%) than all the other models. For computational efficiency, we did not conduct any further experiments with the 131-energy-level dataset, even though our models achieved better performance on it.
For objects whose K-edge in the X-ray absorption spectrum lies outside of the considered energy range, there is some loss in the SVD/autoencoder approach, as can be seen in Fig. <ref>. The main reason for this is that we have not trained the autoencoder part of the SVD/autoencoder model with non-K-edge materials. We trained the non-linear step in this model with the residual error (i.e. to model the K-edges), whilst the linear step is trained to model the non-K-edge X-ray absorption spectrum. Since the methods used to model the non-K-edge X-ray absorption spectrum are not the same (one non-linear, one linear), this is likely to affect the performance of our approach. However, the errors of the SVD/autoencoder, the 5-dimensional autoencoder and the Fista model are lower than those of the best linear model (below 2% for CNN2, and below 4% for SVD/CNN2 and Fista). Interestingly, the best linear model has a higher error than the other models, even for objects that do not contain K-edges in the X-ray absorption spectrum. Although traditional models are used in the literature to model the X-ray absorption spectra of non-K-edge materials in the selected energy range, these results show that our models can also be used for these spectra.
All experiments with objects with various numbers of K-edges in the X-ray absorption spectrum suggest that our models can be more accurate than the traditional model.
Furthermore, the errors of the SVD/autoencoder and the 5-dimensional autoencoder models increase when the number of K-edges in the X-ray absorption spectrum is increased, as seen in Fig. <ref>a, <ref>b, <ref>c and <ref>d. Interestingly, the errors of the Fista model nearly stayed the same for all experiments. The reason for this is that there is no training step in the Fista model (apart from fitting the sparsity parameter). Crucially, with the five-element dataset test (as shown in Fig. <ref>), we found that our models work better than the traditional model, even when trained with more complex datasets.
Our experimental results indicate that the SVD/autoencoder model approach has significant advantages over the linear model in the representation of the X-ray absorption spectrum of high atomic number materials. In addition, the 5-dimensional autoencoder method has been experimentally shown to work better than traditional linear methods for non-K-edge materials (low atomic number materials) and also for complex datasets. Whilst the Fista model did not show good performance for objects that do not have a K-edge, it has good accuracy for objects that have K-edges. The overall utility of our approach lies in the fact that such low-dimensional representations of the X-ray absorption spectrum can be a valuable tool for analyzing information about the scanned material.
|
http://arxiv.org/abs/2307.04481v1 | 20230710110332 | Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling | [
"Giuseppe Desolda",
"Andrea Esposito",
"Florian Müller",
"Sebastian Feger"
] | cs.HC | [
"cs.HC",
"cs.AI",
"H.5.2; I.2.1"
] |
Digital Modeling for Everyone
G. Desolda et al.
Department of Computer Science, University of Bari Aldo Moro, Bari, Italy
{giuseppe.desolda, andrea.esposito}@uniba.it
LMU Munich, Munich, Germany
{florian.mueller, sebastian.feger}@um.ifi.lmu.de
Digital Modeling for Everyone: Exploring How Novices Approach Voice-Based 3D Modeling
Giuseppe Desolda10000-0001-9894-2116 Andrea Esposito10000-0002-9536-3087 Florian Müller20000-0002-9621-6214
Sebastian Feger20000-0002-0287-0945
August 12, 2023
====================================================================================================================================================
Manufacturing tools like 3D printers have become accessible to the wider society, making the promise of digital fabrication for everyone seemingly reachable. While the actual manufacturing process is largely automated today, users still require knowledge of complex design applications to produce ready-designed objects and adapt them to their needs or design new objects from scratch. To lower the barrier to the design and customization of personalized 3D models, we explored novice mental models in voice-based 3D modeling by conducting a high-fidelity Wizard of Oz study with 22 participants. We performed a thematic analysis of the collected data to understand how the mental model of novices translates into voice-based 3D modeling. We conclude with design implications for voice assistants. For example, they have to: deal with vague, incomplete and wrong commands; provide a set of straightforward commands to shape simple and composite objects; and offer different strategies to select 3D objects.
§ INTRODUCTION
The digital fabrication revolution aims to democratize the way people create tangible objects <cit.>. With the widespread availability of 3D printing together with many other digital fabrication technologies such as laser cutters or CNC routers, end users are moving from passive consumers to active producers. While the actual manufacturing process is largely automated today, users are still required to have a profound knowledge of complex 3D modeling applications, when they adapt models to their needs or even design new objects from scratch <cit.>. Thus, even if the introduction of technologies such as 3D printers has revolutionized the hobbyist community, lowering the barrier of entry to manufacturing even for novices (who can now put their hands in the process of creating artifacts without relying on third parties), we argue that the design of the 3D objects to be manufactured still requires a high level of knowledge and expertise.
These limitations have pushed researchers to investigate natural interaction techniques to simplify 3D modeling tools <cit.>. For example, research explored gestures <cit.>, virtual/augmented reality <cit.>, eye tracking <cit.>, brain-computer interfaces <cit.> and their combination <cit.> as a multimodal approach. However, their adoption is reserved for technical users and is strongly limited by hardware costs and excessive size/weight that can make users easily fatigued <cit.>. As another possible solution, voice-based interaction has been explored, both to complement the traditional GUI interface (e.g., to enable shortcuts via voice commands) <cit.> and as the primary interaction paradigm (e.g., see <cit.>). Although voice-based interaction requires only a microphone, it does not yet provide adequate digital modeling support for everyone: existing solutions either do not consider final users at all <cit.>, or only target 3D experts <cit.>, and novices are not considered potential target beneficiaries of the proposed innovations.
To lower the barrier to the design and customization of personalized 3D models by exploiting the potential of voice-based interaction, this study aims to understand how the mental model of novices translates into voice-based 3D modeling. We conducted a high-fidelity WoZ study to elicit novices' mental model, for example, their expectation, beliefs, needs, and abilities. We recruited a total of 22 participants without skills in 3D modeling, who performed 14 tasks revolving around some basic concepts of 3D modeling like the creation of objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite objects. All the WoZ sessions' recordings were analyzed through thematic analysis. The findings of the study have been distilled in the form of lessons learned. For example, we found that: voice assistants must manage the corrections the novices do during and after the commands; deal with vague and incomplete commands; consider the prior novices' knowledge; provide only a simplified set of operations for creating simple and composite 3D objects; design a workflow similar to what novices would do if they were building real objects; understand chained commands; understand commands that are relative to the users’ point of view.
The contribution of this paper is two-fold. First, we report the results of our WoZ study presenting the themes that emerged from the thematic analysis. Second, based on these results, we provide a set of design implications for the future design of voice-based interaction paradigms for 3D modeling for novices.
§ BACKGROUND AND RELATED WORK
This study revolves around the concept of voice-based 3D modeling as a key factor for enabling the democratization of digital fabrication. This section starts by illustrating some of the existing solutions based on natural interaction that try to address the complexity of 3D modeling (<ref>). Next, we provide an overview of the requirements for interacting with voice assistants (<ref>). Finally, we provide a brief summary of the motivation of this study and introduce the research question that guided our work (<ref>).
§.§ Addressing the Complexity of 3D modeling
To mitigate the issues of traditional GUI-based CAD systems, researchers explored natural interaction paradigms like eye tracking <cit.>, brain-computer interface <cit.>, gestures <cit.>, virtual/augmented reality <cit.> and their combination <cit.> as a multimodal approach for 3D modeling. The goal of natural interactions with CAD systems is to increase their usability for both expert users and, especially, novice users. Specifically, they aim to:
* reduce the learning curve of the system;
* allow a more intuitive interaction process;
* enhance the design abilities of the designers
<cit.>.
An example of a multimodal system is “3D Palette” by Billinghurst et al.: a mix of tablet and pen inputs, electromagnetic sensors and voice commands is used to support the digital design process <cit.>. Similarly, Nanjundaswamy et al. explored a mix of gesture-based interaction, speech recognition, and brain-computer interfaces to reduce the initial learning curve of the design system <cit.>. A complete overview of the multimodal solutions for CAD is reported by Niu et al. <cit.>. Despite these potential benefits, such multimodal techniques require the adoption of specialized hardware (e.g., depth-sensing cameras for gesture recognition, headsets to recognize brain signals), whose use can be limited by price, size, weight, and complexity of use <cit.>. Thus, it is still hard for novice users to really adopt them in real and daily contexts <cit.>.
To overcome these limitations, researchers also investigated voice-based interaction because of its intuitive nature and the simplicity of the required hardware, i.e., a microphone, which nowadays is embedded in any laptop, tablet, or webcam <cit.>. Furthermore, considering the ubiquity of smartphones and the rise of AR and VR glasses, voice-based interaction can be generalized to technologies where other interaction modalities are not available. Attempts at integrating voice-based interaction into CAD systems date back as far as 1985 <cit.>. More recent work suggests the use of voice commands to allow users either to quickly search commands by simply stating their intention <cit.>, or to annotate 3D models <cit.>. Systems where the entire modeling process is carried out by voice commands have also been explored. An example is the solution presented by Kou and Tan, where voice commands related to a CAD-specific lexicon and grammar are understood by a context-aware algorithm <cit.>. A similar example was proposed by Xue et al., which improves on the previous solution by allowing free-form sentences <cit.>. Another example of a fully-working system is the one presented by Grigor et al.: it follows the same ideas as the previous ones but uses AI to understand the users' inputs, thus allowing for more freedom in the commands <cit.>. Similarly, Kou et al. proposed a flexible voice-enabled CAD system, where users are no longer constrained by predefined commands, by exploiting a knowledge-guided approach to infer the semantics of voice input <cit.>.
Among all the previous examples, it must be highlighted that the design of their paradigm was made without any kind of involvement of the final users <cit.> or by solely involving experts in the final testing phase <cit.>. For example, the study by Nanjundaswamy et al. evaluates a multimodal system using gestures, speech and a brain-computer interface by involving a group of five skilled people <cit.>. Similarly, Khan et al. involve a total of 41 skilled users from an architecture or engineering background to elicit the requirements of a CAD system based on gestures and speech commands <cit.>. As another example, Vyas et al. test the usability of a speech-based CAD system involving 6 students with backgrounds in engineering, architecture and visualization <cit.>.
The work proposed by Cuadra et al. investigated how novices use voice assistants to design 3D objects <cit.>. They performed a WoZ study to compare voice assistants with and without the use of a video channel showing the design in progress, investigating how the two approaches impact users' accuracy and satisfaction. Cuadra et al. validate the idea of using voice assistants, as participants are more satisfied with their objects and suffer less from cognitive overload when the design process is supported by video, but it does not provide any insight on the mental model of novices approaching the digital modeling task <cit.>.
§.§ Interacting with Voice Assistants
The first voice interaction solutions implementing speech recognition date back to 1952, when Davis et al. proposed a prototype able to recognize digits <cit.>. In recent years, the evolution of machine learning and AI has fostered the spread of powerful commercial voice assistants, often based on deep neural networks trained on a plethora of data.
However, such powerful speech recognition models alone are not sufficient to build an effective voice assistant, since the interaction with such systems must be considered in the design of the whole system <cit.>. This need, together with the growing availability of commercial voice assistants, has fostered a sharp uptick of studies on user interaction with voice assistants <cit.>. Aspects like the cues that drive the conversation <cit.>, the properties that a voice assistant should have <cit.>, the user's mental model <cit.>, emotions felt during the conversation <cit.>, conversational design patterns <cit.> have been investigated. In addition, solutions to design and evaluate interaction with voice assistants are beginning to be proposed (see, for example, <cit.>). Careful consideration of these design aspects gains importance when voice assistants aim to simplify challenging or technical operations (e.g., see <cit.>). Since 3D modeling represents such a demanding task for novices, the elicitation of the novices' mental model is crucial to lower the barrier for 3D modeling.
§.§ Summary and Research Question
The analysis of the literature highlights that to simplify the 3D modeling, often the existing solutions are based on multimodal techniques such as gestures, eye tracking, or brain-computer interfaces; however, their adoption in real contexts is strongly limited by the adoption of specialized hardware and, overall, they target technical users.
Voice interaction seems a promising paradigm that can overcome the limitations of multimodal solutions, but the existing voice-based solutions are still lacking for three important reasons:
* users are often not considered throughout the design phase, or they are only involved too late in testing phases;
* to the best of our knowledge, novices are never considered as target users;
* the voice-based interaction is built on top of the existing CAD systems (and their complexity), instead of designing from scratch the voice paradigm and the whole system.
Considering these limitations, to really democratize digital fabrication considering novices, users should be able to access 3D modeling tools even without special skills. All these motivations pushed us to explore novices' mental model in voice-based 3D modeling, in order to reduce the cost of their entry in the digital fabrication era. This is an aspect that has never been explored before and that deserves attention to really democratize digital fabrication. Therefore, our work addresses the following research question: How does the mental model of novices translate into voice-based 3D modeling?
§ METHOD
To answer our research question, we performed a high-fidelity WoZ study <cit.> because it has been proven successful in eliciting the user's mental model for voice-based interaction (e.g., see <cit.>). Then, we carried out an inductive thematic analysis <cit.> on the qualitative data, i.e., the transcriptions of the WoZ sessions and the answers of the participants to the open questions.
§.§ Participants
A total of 22 participants (F=15, M=7) have been recruited through convenience sampling <cit.> on the social circles of the authors of this article. This number of participants is in line with other similar studies (e.g., see <cit.>). Half of the participants were Italians while the other half were Germans. Their mean age was 24.1 years (σ = 3.7, min = 21, max = 34). The entire study was performed in English so as not to have results related to specific languages, which is out of the scope of this study. To ensure that the collected data is not biased toward knowledgeable users, we only recruited participants without any kind of experience with 3D modeling. Regarding the participants' level of education, around 45.45% already have a High School Diploma or a German A-level, 36.36% have a Bachelor's Degree, 13.64% have a Master's Degree, and only one participant (representing the remaining 4.55%) has not provided any information. Most participants (15 out of 22) do not have a STEM education, while 6 of the remaining 7 do not have any computational thinking skills, as they studied or worked in non-IT scientific fields (e.g., pharmaceutical and nutrition sciences). Regarding the participants' skills, they had an average level of IT knowledge (x̅ = 6.5/10; σ = 2.1), a medium-low level of knowledge of voice assistants (x̅ = 3.1/10; σ = 2.0) and very low knowledge of 3D modeling (x̅ = 1.6/10; σ = 1.1).
§.§ Tasks
A total of 14 tasks have been designed by two authors of this paper, both experts in 3D modeling, taking into account the most common and useful activities that are required to create simple and composite 3D objects. The resulting tasks revolve around basic concepts of 3D modeling, like the creation of simple objects, the manipulation of objects (e.g., scaling, rotating, and/or moving objects), and the creation of composite geometries. The details of the tasks are reported in
the task table in the attached appendix
(the list of all the graphical tasks is available in the attached appendix, sub-folder tasks). To reduce the impact of the primer effect <cit.> that providing a textual description of a task would have on the participants, we chose to provide the participants with graphical tasks: each task is composed of a brief prompt and a diagram showing the participants a 3D object or a 3D transformation that should be recreated (an example of graphical tasks is provided in <ref>). The representations chosen for each task were validated during a pilot study with 3 novices that were not considered in the final WoZ study.
§.§ Apparatus
We carried out the WoZ study remotely by using Zoom[<https://zoom.us>]. Four researchers have been involved: two Italians acted respectively as conductors and wizards for the Italian participants, while two German researchers acted as conductors and wizards for the German participants. In both groups, researchers switched roles to minimize the risk of bias introduced when conducting the test.
To create the illusion for participants that they are interacting with a real voice-based system for 3D modeling, we decided to use Blender[<https://www.blender.org>], explaining to participants that they can interact with it through voice commands. Blender has been selected since it is a free and open-source software that, among other features like sculpting or rendering, allows one to design and visualize 3D objects. One of the main features that made Blender the perfect choice for our WoZ study is the availability of API for the Python language[<https://docs.blender.org/api/current/>] that can be used inside a shell-like environment: this allows the Wizard to immediately create and modify the objects programmatically when the participants provide voice commands, thus preventing the participants from noticing anything odd and increasing the speed at which the Wizard is capable of satisfying the participants' requests. Taking advantage of this feature, we pre-defined a set of functions in a Python module to simplify the use of Blender's APIs for the purpose of this study (the module is available in the supplementary materials, sub-folder python module).
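The pre-defined helper module itself is not reproduced here, but the sketch below illustrates the kind of thin wrappers around Blender's bpy API that allow a Wizard to translate a participant's voice command into a one-line call. The function names and default values are our own illustrative choices, not the module actually used in the study.

```python
# Illustrative wrappers around Blender's Python API (bpy) of the kind a Wizard
# could use to quickly execute participants' voice commands.
import bpy
from math import radians

def create_cube(size=2.0, location=(0, 0, 0), name="Cube"):
    bpy.ops.mesh.primitive_cube_add(size=size, location=location)
    obj = bpy.context.active_object
    obj.name = name
    return obj

def create_cylinder(radius=1.0, depth=2.0, location=(0, 0, 0), name="Cylinder"):
    bpy.ops.mesh.primitive_cylinder_add(radius=radius, depth=depth, location=location)
    obj = bpy.context.active_object
    obj.name = name
    return obj

def move(name, dx=0.0, dy=0.0, dz=0.0):
    obj = bpy.data.objects[name]
    obj.location.x += dx
    obj.location.y += dy
    obj.location.z += dz

def scale(name, factor):
    obj = bpy.data.objects[name]
    obj.scale = tuple(s * factor for s in obj.scale)

def rotate_z(name, degrees):
    bpy.data.objects[name].rotation_euler.z += radians(degrees)

def duplicate(name, offset=(3, 0, 0)):
    src = bpy.data.objects[name]
    copy = src.copy()
    copy.data = src.data.copy()
    copy.location = tuple(a + b for a, b in zip(src.location, offset))
    bpy.context.collection.objects.link(copy)
    return copy
```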
To show the participants the task they had to complete, we overlaid the graphical tasks on the bottom-right side of the Blender's window. To this aim, we used Open Broadcaster Software (or, more commonly, OBS)[<https://obsproject.com>], a free and open-source software for video recording and live streaming. Using OBS, it was also possible to define animations and transitions to show when users are moving to the next task and to signal to the participants that the “voice assistant” (i.e., the Wizard) is listening to the user's command or it is actually performing it. In particular, for each task, both the Blender window and the graphical task are visible (see <ref>). When the participants activate the Blender voice assistant by saying “Hey Blender”, the “I'm listening” label indicates that participants can provide the command to solve the task (see <ref>). Then, when the voice command has been issued, a rotating icon indicates that the voice assistant is analyzing it, creating the illusion that there is a real voice assistant (see <ref>). During the loading, the Wizard writes the Python statements related to the user commands and the result is finally shown in Blender (see <ref>).
§.§ Procedure
For each participant, when the Zoom session started, both the conductor and the Wizard were connected on Zoom but the latter never appeared or interacted with the participant. While the conductor introduced the participant to the study, the Wizard shared his screen, in particular the window created by using OBS. The sessions were recorded using Zoom's built-in recorder. Before starting the recordings, participants were asked to sign (either in digital or in verbal form) a privacy policy. It is worth mentioning that our universities require approval by an ethics committee only in the case of medical and clinical studies. For other studies like ours, they require that test participants give consent in a written or digital form; thus, we informed participants about all the details of the study and asked them to agree before starting the study. All of them agreed.
As soon as the participant agreed to attend the study, the conductor invited the participant to complete a set of tasks. The webcam of the conductor was turned off during task execution to avoid disturbing the participant. To reduce the variability between sessions and between the Italian and German participants, the same introductory script was defined (available in the attached appendix, sub-folder "introductory script"). In summary, the conductor explains that the goal of the study was to validate a new voice assistant called Blender, which we created to assist novices in 3D modeling. Then, the conductor asks to complete a set of tasks and that, for each of them, a graphical representation appears on the right-bottom side of their screen. The conductor also specifies that the participant had to first activate the voice assistant by saying “Hey Blender” and then, once the “I'm listening” label appears, the participant can provide a sequence of voice commands that, in their opinion, is the best to solve the task (for example “create a cube”). No examples of voice commands have been provided to avoid introducing bias. At the end of each task, the participants had to communicate with the conductor to move on to the next task.
At the end of the session, each participant filled in a questionnaire that includes questions on demographics, as well as some usability-related questions to evaluate the effectiveness of the Blender voice assistant. Furthermore, since (to the extent of our knowledge) there were no previous examples of graphical tasks for a WoZ study, we have also chosen to add some questions to evaluate how easy it was for the user to understand the tasks (available in attached appendix, sub-folder questionnaire). The entire procedure lasted around 30 minutes for each participant. A graphical synthesis of the entire procedure and the data collected is shown in <ref>.
§.§ Data Analysis
The first analysis regarded the questionnaire answers that evaluate the choice of providing the tasks in graphical format. Specifically, we included a question that asked “How easy it was to understand the graphical tasks?” and it ranges from 1 (not simple at all) to 10 (very simple). Both the median and average scores are 8.2/10, with a standard deviation of 1.0. These results seem to validate the idea of presenting the tasks graphically, but it also highlights that for some tasks (the ones with an ambiguous representation) the conductor of the study must be able to guide the participants to the right interpretation (without the use of words that may introduce a primer effect <cit.>). In our study, this issue impacted only the 11th task for four participants and it was solved by turning the webcam on and mimicking the action depicted in the task, in case the user was showing difficulties in understanding a task or if he/she explicitly requested help.
After ensuring the quality of the graphical tasks, we analyzed the qualitative data collected during the study, which helped us answer the research question, i.e., video transcriptions, questionnaire responses and participants' comments. All the video recordings (a total of about 11 hours) were first transcribed and expanded by including the annotations that identify pauses, the start and the end of the processing by the WoZ, and eventual errors or over-correction by the WoZ. This dataset was completed by reporting the participants comments and the answers to the three open questions we included in the questionnaire:
* What did you like the most about the system used and the interaction with it?
* What did you like less about the system and the interaction with it? and
* Would you use a system like Blender to model in 3D? Please motivate your answer.
This data was analyzed in a systematic qualitative interpretation using Inductive Thematic Analysis <cit.>. The initial coding was conducted independently by four researchers, who are co-authors of this article and are experienced in qualitative data analysis: two of them analyzed the Italian results while the other two the German results. The two couples of researchers began with open coding independently. Once all the data was coded, the set of initial codes was further refined by merging the different codes. This first filtering phase allowed us to obtain a set of code groups that capture meaning at a higher level. The identified code groups were then used by each group to extract the main themes. At the end, both the codes and the themes of the two groups were compared to identify similarities and differences. With the exception of some minor differences related to their naming, both the codes and the themes identified by the two couples of researchers were identical in meaning. The final themes that will be presented here derive from a joint naming session carried out by all four researchers. Only a few small differences were identified, and they will be discussed as part of the design implications. The final codes and themes with the relationships among them are available in the attached appendix, sub-folder Codes and Themes.
§ RESULTS
The thematic analysis resulted in the description of five themes reported in the following sub-sections. For each theme, significant participant quotes are reported. For the sake of conciseness, we will refer to participants as “P” followed by the participant number, and to the WoZ system as simply “system”.
§.§ Basic Operations
This theme frames the strategies of interactions that novices have when they approach the 3D modeling activities of creation and manipulation.
§.§.§ Creation.
Novices tend to provide simple commands in the form “⟨verb⟩ ⟨shape name⟩”, where the verbs used are typically “create”, “draw”, and “build”, and examples of shape names are “cube”, “box”, or “cylinder”. This behavior has been observed in tasks that required the creation of simple or composite objects. Strictly related to this is object duplication. Novices usually keep the requests simple by asking the system to duplicate a specific object, as P4 did in task 12 when he said “duplicate the cube”. When the novices instead have to face the creation of multiple identical objects without using duplication requests (for example, because there was no previous copy in the scene), they simply use a basic creation request that also provides the number of copies: this is clearly exemplified by P5 in task 14 with “create four cylinders”.
§.§.§ Manipulation
The manipulation operations used by novices during the study are translation, rotation, and scaling. It is worth mentioning that the manipulation operations require some kind of reference frame to be performed; to this end, novices often use relative references (for more details see the “Gulf of Execution” theme, where the references used by the novices are discussed).
In more complex cases, novices provided commands containing both a creation request and an implicit manipulation request, where the manipulation is often expressed as a set of constraints on the final object. As an example, in task 14, P8 asked the system to “create four cylinders on the corners of the lower rectangle”: in this example, the multiple creation request is clearly visible, and it is put alongside a relative positioning request.
Finally, one of the most interesting identified open codes is the one that relates to moving objects with respect to implicit construction shapes. As an example, P4 during the last task asked “place the four cylinders at the four corners of a square.” In this example, the participant did not have a square in the scene but implicitly requested the system to create a square, place the cylinders at its corners, and delete the square once the operation was completed. This kind of operation was pretty common throughout the last task: around 45% of the participants provided a command that used a construction shape like the one previously cited.
§.§ Selection of Objects
This theme covers the strategies adopted to identify and select objects, specifically, absolute selection, relative selection, or implicit selection. In the case of absolute selection, most participants explicitly refer to the entire scene, or to a single object in a scene by using its name (the one shown in the “inspector” view in Blender, as P11 asked during task 14 by saying “should I call it Box 0001 if I want to move it?”) or by its shape (as P1 did during task 6 by saying “move the cube 20 cm downwards”). A specialization of the latter case is the reference to a shape using a 2D approximation. One example is echoed by P8 during task 14: “Hey blender, move the upper rectangle on the side of the lower one”. Here, the user referred to two 3D boxes by their 2D approximation (rectangles).
The relative selection resulted in four commonly used strategies to select objects, namely:
* their relative time of creation (e.g., P3 in task 14: “Blender, place the second box under the first”);
* their relative position (e.g., P8 in task 14: “Hey Blender, create four cylinders in the corners of the lower rectangle”);
* their dimensions (e.g., P11 in task 14: “Hey Blender, move the tallest box attaching it to the side of the other box”);
* by inverting the current selection, eventually applying additional filters (e.g., P3 in task 14: “Blender, place the other two cylinders like you placed the previous ones”).
Finally, users also often performed implicit selections of the objects in the scene, for example, by referring to a single object in the scene or by referring to the last edited object, either explicitly or implicitly (e.g., P1 in task 8 implicitly referred to the last edited object by saying “increase the volume by three times”).
It is worth remarking that novices do not differentiate nor have preferences between the various methods, and actually, often mix them to be sure that the selection is clear and precise (e.g.: in a previously shown example by P8 in task 14, “Hey blender, move the upper rectangle on the side of the lower one”, the user performs the selection by using both an absolute reference to the 2D approximation of the shape of an object, and a relative reference to the positioning of another object).
§.§ Errors
Due to the lack of geometry knowledge and/or 3D modeling expertise, novices often commit errors of which they are aware and errors of which they are not aware. In the first case, they try to prevent or correct the errors; for this reason, we named this sub-theme “error correction”. In the second case, when a user is either not aware of an error or does not care about trying to fix it, the error simply represents a mistake made during the task execution; for this reason, we named this sub-theme “execution errors”. We analyze the details of each case in the following paragraphs.
§.§.§ Error correction.
Different behaviors for correcting the errors have been observed, specifically during and after the command. Regarding the error corrections made during the command, some novices try to prevent their own errors when they recognize one while stating the command, by providing a correction in the same command. For example, P9 during the chair construction task says “Hey blender, create a rectangle over the quadrilateral of length – I mean, height 30 centimeters, depth 5 and side 20–22...”. This command contains multiple corrections, starting with the correction of the name of the dimension that the user wants to set to 30 centimeters, and then correcting the actual size of the side of the rectangle to 22 centimeters.
Regarding the corrections made after the commands, most of the participants expected some utility commands that are typically available in GUI-based software, like the “undo” and “redo” functions. As an example, P3 during task 14 provided both the command “Blender, undo the last operation”, and “place the other two cylinders as you've placed the previous ones.” This highlights how, although novices may not be familiar with the task of 3D modeling or voice-based interaction, they were able to transfer the knowledge of other software they may have used in the past, expecting that their previous experience would be applicable to the new, unknown system.
§.§.§ Execution errors.
Some of the mistakes committed by the novices are strictly related to slips of the tongue, lack of knowledge, or system shortcomings. In the case of slips of the tongue, some participants referred to shapes and objects using the wrong name (e.g., P10 was trying to refer to a box by calling it “cylinder” during task 14). In the case of lack of knowledge, errors range from wrong names used for dimensions and primitives to being unaware of the direction of the axes, perhaps because of previous knowledge obtained in school. For example, the Y axis in a 2D plane is usually the vertical one, thus some novices expect the Y axis to be the vertical one also in 3D. Finally, we identified system shortcomings, i.e., errors made by the wizard during the execution of the commands: all of these errors can be traced back to the incomprehension of the command, often due to its intrinsic vagueness (see the “Gulf of Execution” theme).
§.§ The Gulf of Execution
This theme represents the way novices translate their goals into commands. Throughout the sessions, before providing specific commands, we immediately noticed that novices often think aloud to understand what they have to do and how they can translate it into commands, as P16 did during task 14 by saying “so, the picture has a different point of view. I should move it a little bit. Ok. Hey Blender, make the cylinder bigger.” Then, by analyzing their commands, we identified three main aspects of the commands where the gulf of execution becomes critical, specifically:
[label=*)]
* relativity
* vagueness
* abstraction.
§.§.§ Relativity.
Here we summarize how novices think about positions, scale, rotation, and selection relative to other parts of the scene. Two main overall frames of reference are used by the novices: the axes and other objects.
To select an axis, novices adopt three approaches, namely:
[label=*)]
* axis relative direction: a common way of selecting axes is through their relative direction (depending on the user's point of view), as echoed by P9 during task 11, by saying “move the geometric shape 20 cm to the right”;
* the axis color: as an example, during the execution of the last task (the one of creating a chair), P2 referred to the Y axis by its color stating “turn of 180 degrees the box on the green axis”;
* axis name: some novices also refer to axes by their actual name, as P19 did during the 12th task by asking the system to “move the right cube 10 centimeters along the X axis.”.
When referring to objects' dimensions, novices adopted two main approaches for selection. A first approach consists of using the dimensions' name, as P3 has done in the task of chair creation by saying “move along the y axis of a length equal to the base of the second box the last cylinder”. A second approach used a relative comparison to other dimensions; for example, P3 during task 14 selected an object by stating “move the third cylinder under the highest box [...]”.
§.§.§ Vagueness.
Vagueness refers to a lack of information in the commands provided to reach the goals. In general, the lack of information is caused by:
* chaining of multiple commands to describe at a high level a composite shape, as shown by P22 during the chair creation task, by asking “create four cylinders with the same distance to each other.”;
* missing data that the system needs to execute the requests; as an example, novices forget to provide some or all dimensions of a shape (e.g., P1 in task 1 stated “create a cube” without providing any dimension), they forget to specify a parameter for a transformation (e.g., P7 in task 10 asked to “rotate of 30 degrees the figure” without specifying a direction).
§.§.§ Abstraction.
We noticed two behaviors related to the abstraction of the commands. The first one relates to a general abstraction over the process to reach the desired goal, as exemplified by P2, who tried to solve task 14 by saying “create a chair using two boxes and four cylinders”. The second one refers to how novices translate the desired 3D shapes into words. For example, shapes are created by providing a general description (e.g., P10 in task 4, by saying “create a 3D rectangle 30 cm high, 20 cm deep, and long 10 cm”, referred to a box as a “3D rectangle”, thus simply describing the shape) or by approximating the desired shape with a similar 2D shape (e.g., P8 during task 4 used “rectangle” instead of “box” by saying “create a rectangle of height 30, width 20, depth 10”). Furthermore, novices (especially the German participants) also refer to 3D shapes by using similar real-world objects (e.g., P17 during task 3 stated “create a dice with an edge length of 30 centimeters”, using “dice” instead of “cube”).
§.§ Users' Requests
We collected requests and suggestions provided by the participants, which provide useful insights on novices' mental model.
Among the most common requests, participants often asked to rotate the camera and change their point of view. As an example, P11, during the last task of creating a chair, asked “can I see it from below?” and “can I see it from above?” to perform some minor adjustments and corrections to the positions of the 3D objects. This behavior underlines the need to provide a way for novices to rotate their point of view. This functional requirement is strictly related to the “Selection of Objects” theme, as it may benefit from different interaction modalities that could be explored (e.g., using AR).
Another common request is related to the actual dimensions: when novices explicitly set a size in the command (for example, in the third task), they want to check that the system created an object of the right size. This is exemplified by P10, who explicitly asked “can I ask it to check the dimensions?” in the third task. This suggestion does not translate into an additional requirement for the AI model that recognizes users' commands, but it rather provides some insights on the requirements of the whole 3D modeling tool.
Other minor suggestions regarded the customization of the axis: some participants expected the Y axis to be the “vertical” one as it usually happens in 2D drawings, rather than the Z axis as it happens in 3D modeling tools like Blender. Providing such a customization option would surely reduce the error rate in a final system, as the novices could adapt it to their own knowledge.
§ DISCUSSION AND IMPLICATIONS
Based on the findings of the WoZ study, in the following we present design implications for the development of future voice-based 3D modeling tools for novice designers and relate them to the wider research literature around voice assistants and general user experience principles.
§.§.§ Understand user corrections and adapt to them.
This requirement stems from the errors the users are aware of (see the “Errors” theme). It poses requirements that impact two different facets of future voice-based digital modeling tools: the NLU layer and the conversation flow.
Regarding the NLU layer, systems must be able to intercept user corrections and aborted commands. Based on our findings, we note that recognizing uncertainty, hesitation, doubt, and error awareness early on is particularly crucial in the digital modeling context, as users displayed them frequently due to their unfamiliarity with 3D modeling <cit.>.
Regarding the conversation flow, after intercepting the error correction, it is important to design a dialog that helps users understand the error and recover from it <cit.>. Moore and Arar <cit.> provide valuable pointers through their Natural Conversation Framework which proposes a set of conversational patterns. Some of these patterns relate to user corrections and can be applied to voice-based digital modeling. An example inspired by this framework that relates to errors that users correct while they issue a 3D modeling command might be:
User: Hey blender, increase of 10 centimeters -no- of 20 centimeters the sides of the geometric figure
Agent: I'm sorry, I didn't understand. Do you mean an increase of 10 or 20 centimeters?
User: 20 centimeters.
Agent: Ok, I'm increasing of 20 centimeters the sides of the geometric figure.
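To make this implication more concrete, the following minimal sketch shows one possible way to intercept an in-command self-correction before intent matching and turn it into a clarification question, in the spirit of the dialog above. It is only an illustration, not the authors' implementation: the marker phrases, the splitting heuristic, and the wording of the prompt are assumptions.

import re

# Hypothetical marker phrases that often signal an in-command self-correction,
# e.g. "increase of 10 centimeters - no - of 20 centimeters ...".
CORRECTION_MARKERS = re.compile(r"\b(no|i mean|sorry|actually)\b", re.IGNORECASE)

def detect_correction(utterance):
    # Split the utterance at the first correction marker, if any.
    match = CORRECTION_MARKERS.search(utterance)
    if match is None:
        return None
    before = utterance[:match.start()].strip(" -,")
    after = utterance[match.end():].strip(" -,")
    return before, after

def clarification_prompt(utterance):
    # Ask the user instead of guessing which of the two values was intended.
    parts = detect_correction(utterance)
    if parts is None:
        return None  # no correction detected: proceed with normal intent matching
    before, after = parts
    return f"I noticed a correction. Should I use “{after}” instead of “{before}”?"

print(clarification_prompt(
    "increase of 10 centimeters - no - of 20 centimeters the sides of the geometric figure"))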
§.§.§ Deal with vague and incomplete commands.
We have identified numerous errors, caused by lack of knowledge and by the system's shortcomings, of which users were unaware (see the “Errors” theme). These errors are related to incomprehension due to the vagueness and abstraction of some commands. Self-repair strategies should be introduced to improve interaction <cit.>. To this aim, we identified two possible solutions. The first one consists of sensible defaults: in case of a vague command, the voice assistant fixes it by selecting a relevant parameter from a list of alternatives. For example, if the user says “create a cylinder on top of the cube”, the cylinder diameter is not specified. In this case, the system can assume that the diameter is equal to the side of the cube. This solution can also benefit from the dialog context: as suggested by Jain et al., resolving and maintaining the dialog context can help select the most appropriate sensible default from a list of alternatives <cit.>. For example, if other cylinders have been previously created with a given diameter on top of cubes, the same can be applied to the new ones in case of vague commands. This allows the system to be proactive, anticipating the users' requests as suggested by Völkel et al. <cit.>.
The second solution consists of interactively guiding the user by providing the missing information. With reference to the previous command of the box and cylinder, instead of using defaults, the voice assistant can explicitly ask the user for the desired radius. The strategy adopted by the voice assistant is informed by the degree of system autonomy or desired user control. A hybrid solution can also benefit from both approaches: the selected sensible default can be used by the voice assistant to ask the user if the default is right, for example, with reference to the previous case the voice assistant can reply: “OK, I'm creating a cylinder with a diameter equal to the side of the cube. Is it OK?”
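A minimal sketch of the hybrid strategy described above is given below: a missing parameter is first resolved from the dialog context, then from a geometric default, and the chosen value is read back to the user for confirmation. The scene representation, the parameter names, and the defaulting rule are illustrative assumptions rather than the actual Blender data model.

# Toy scene and dialog context; in a real system these would come from Blender
# and from the conversation history.
scene = {"cube_1": {"type": "cube", "side": 20.0}}      # sizes in centimeters
dialog_context = {"last_cylinder_diameter": None}

def resolve_create_cylinder(on_top_of, diameter=None):
    # Fill a missing diameter from the dialog context first, then from a
    # geometric default (the side of the supporting cube), and build the
    # confirmation question that the assistant reads back to the user.
    if diameter is None:
        diameter = dialog_context["last_cylinder_diameter"]
    if diameter is None:
        diameter = scene[on_top_of]["side"]
        prompt = (f"OK, I'm creating a cylinder with a diameter equal to the side "
                  f"of the cube ({diameter} cm). Is it OK?")
    else:
        prompt = f"OK, creating a cylinder of diameter {diameter} cm on top of the cube."
    params = {"type": "cylinder", "diameter": diameter, "on_top_of": on_top_of}
    return params, prompt

params, prompt = resolve_create_cylinder("cube_1")
print(prompt)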
§.§.§ Translate interaction conventions to voice-based digital modeling.
Users commonly apply their experience with software applications to other applications or even different domains. As an example, some participants expected to execute “undo” or “redo” commands, which are common across applications and domains. This is in line with the traditional Nielsen heuristics of “user control and freedom” and “consistency and standards” <cit.>. The latter states that “users should not have to wonder whether different words, situations, or actions mean the same thing”, thus the system should “follow platform and industry conventions” (from Nielsen <cit.>). For this reason, a voice-based 3D modeling system should provide such common operations, like the aforementioned “undo” and “redo” commands. Further exploration may be required to clearly define and match the set of expected commands to voice-based digital modeling.
§.§.§ Adopt simple operations even for the creation of composite 3D models.
Based on the “Basic Operations” theme, we note that most users follow similar and simple approaches even in complex tasks. For example, in task 13 (which consisted of creating a figure having a cylinder on top of a cube), multiple approaches might be adopted, but novices used only basic operations (creation and translation): they created a simple cube and a cylinder and then moved the latter on top of the former. This highlights that, although many technical operations may be implemented in voice assistants for digital modeling, it is important to provide novices with simple operations to create and compose 3D objects, rather than prescribing more complex operations like “extrusion” and “insetting”, which are most adequate for skilled users <cit.>.
§.§.§ Match digital modeling workflows with novices' expectations and experiences from building physical objects.
Related to the “Basic Operations” theme, but focusing on the last task (which consisted of the creation of a chair), we noticed that the majority of the users started by creating the base cylinders (almost all users started with a phrase like “create four cylinders”). This provides an interesting insight into how people approach the creation of composite 3D objects. By creating the base cylinders first, users are basically following an approach that starts from the bottom and proceeds upwards. This is no different from the approach that users would follow if they were composing physical shapes: by starting from the bottom, they are able to stack the various shapes without the risk of their composition “falling down”. This indication can be useful if wizard procedures are introduced to guide the creation of composite 3D objects; for example, the voice assistant can start the interaction by asking which shape, with its features, must be placed at the bottom, and then go on guiding the user to create other shapes on top of the previous ones.
§.§.§ Provide alternatives for the selection of 3D objects.
Reflecting on the “Selection of Objects” theme, we argue that it is among the most critical ones: most of 3D modeling revolves around the selection of objects to be composed. We found that several different techniques were adopted by the novices. For example, a common solution is represented by commands that select an object by referring to the entire scene, in other words in an absolute way. We also documented commands that use relative references, for example, the objects' relative time of creation, their relative position, their dimensions, or the inversion of the current selection. The last approach is represented by the implicit selection of the objects in the scene. These strategies represent different solutions the users can adopt to select a 3D object, and thus the voice assistant should accommodate all of them. To simplify the interaction, future voice assistants can be complemented with additional interaction modalities like gestures or eye tracking, where users could simply point <cit.> or gaze <cit.> at the object or surface they want to select.
§.§.§ Understand commands that are relative to the user's point of view.
As described in the “Gulf of Execution” and “Selection of Objects” themes, users often issue commands that are relative to their point of view, in particular, to change the camera perspective, to select an axis, and to select a 3D object. In other words, we found that a common way for novices to issue commands is through the “screen” coordinate system <cit.>, as provided by some professional 3D modeling systems[<https://shorturl.at/fGLRZ>], by using common words such as “left” and “right”, as P9 did during task 11 with the command “move the geometric shape 20 cm to the right”. Furthermore, novices often provided commands relative to both their point of view and other objects (as P10 did during task 13: “insert a cylinder on top of the cube”). This implies that future voice assistants must be equipped with some way of understanding the 3D context into which the command is provided, and they must take into account the user's point of view during the intent-matching process.
§.§.§ Grant multiple ways to refer to the axes.
Users referred to the axes of the 3D scene by adopting different approaches: by indicating the axis color, by referring to the user's relative direction, or by using the axis name (see the “Gulf of Execution” theme); some users also preferred to switch the Y and Z axes as the “vertical” axis (see the “Users' Requests” theme). This ambiguity is also found in professional systems, as some of them use the Z axis as vertical while others use the Y axis instead <cit.>. This behavior should be considered in the design of voice assistants for 3D modeling, since this is a core activity that, if not adequately supported, might lead to ineffective user interaction.
§.§.§ Design for complex commands.
Multiple chained commands were often issued to execute various actions. In our study, it was possible to accommodate multiple user commands thanks to the WoZ, but voice assistants are typically restricted to simple standalone commands. Similar to what Fast et al. already proposed for complex tasks <cit.>, voice-based systems for 3D modeling should also address this requirement, which strongly impacts the design of the NLU layer that must be able to understand and execute multiple chained commands.
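As a rough illustration of what such chaining support could look like at the NLU front end, the sketch below splits a compound utterance into sub-commands. The conjunction pattern and the list of action verbs are naive assumptions made only for this example.

import re

# Hypothetical splitter for chained commands: split on "and then" / "then" / "and"
# only when the resulting fragment starts with an action verb.
SPLIT_PATTERN = re.compile(r"\b(?:and then|then|and)\b", re.IGNORECASE)
ACTION_VERBS = ("create", "draw", "build", "move", "rotate", "scale", "duplicate", "place")

def split_chained_command(utterance):
    parts = [p.strip() for p in SPLIT_PATTERN.split(utterance)]
    commands, current = [], ""
    for part in parts:
        if part.lower().startswith(ACTION_VERBS):
            if current:
                commands.append(current)
            current = part
        else:
            # fragment was not a new command: merge it back into the previous one
            current = (current + " and " + part).strip() if current else part
    if current:
        commands.append(current)
    return commands

print(split_chained_command(
    "create a cube of side 20 centimeters and place a cylinder on top of it"))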
§.§.§ Favor explicit trigger words.
Previous work by Vtyurina et al. argued that forcing the use of explicit trigger words would constrain user interactions, suggesting the use of implicit conversation cues for driving the dialog <cit.>. On the contrary, during our experiments novices used implicit conversational cues while thinking about their workflow and as a natural reaction after a successful command execution (see the “Gulf of Execution” theme): this highlights the need for future voice-based systems to provide clear explicit activation cues and trigger words, to avoid any unintentional activation that would disrupt users' workflow.
§.§.§ Embrace diversity in naming approaches.
As novices usually have little to no knowledge of the 3D modeling domain, they often have to resort to different naming approaches when dealing with shapes for which they do not recall the “right” name. As already highlighted in the “Gulf of Execution” theme, novices can refer to shapes by providing high-level descriptions (e.g., “3D rectangle” instead of “box”), 2D approximations (“rectangle” instead of “box”), or by associating them with a real-world object (e.g., “dice” instead of “cube”). For this reason, future systems must be able to understand both analogies and descriptions of shapes. A concrete solution might be the adoption of a lexical ontology like WordNet <cit.> to infer the shape name related to the real object.
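As an illustration of this last point, the sketch below uses WordNet (through NLTK) to map an everyday object name to the closest primitive shape name. The list of primitives, the use of path similarity, and the threshold are assumptions made for the example, and the WordNet corpus must be downloaded beforehand.

from nltk.corpus import wordnet as wn  # requires: pip install nltk; nltk.download("wordnet")

PRIMITIVES = ["cube", "box", "cylinder", "sphere", "cone"]

def infer_primitive(word, threshold=0.2):
    # Return the primitive whose WordNet noun synsets are closest to the user's word.
    best_name, best_score = None, 0.0
    for synset in wn.synsets(word, pos=wn.NOUN):
        for name in PRIMITIVES:
            for target in wn.synsets(name, pos=wn.NOUN):
                score = synset.path_similarity(target) or 0.0
                if score > best_score:
                    best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(infer_primitive("dice"))  # expected to resolve to a cube-like primitive
print(infer_primitive("ball"))  # expected to resolve to "sphere"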
§ LIMITATIONS OF THE STUDY
Our study is an initial step toward understanding how novices approach voice-based 3D modeling. We have identified some limitations of our work. First, the novices' languages deserve a wider exploration: our study highlights very small differences between German and Italian participants due to their culture; however, a similar study where participants use their native languages might be useful to understand how language might impact the resulting mental model. Similarly, this study does not focus on how aspects like ethnicity, socio-economic status, and age might impact the novices' mental model. Another limitation regards the tasks: the ones used in the study are representative of the most common operations to design 3D models, but digital fabrication often implies the design of objects that are more complex than a chair. In addition, the set of proposed tasks does not cover all possible operations (e.g., selecting textures and making holes). Future work may also study differences between the mental model of lay users (the target of this study) and novices in 3D modeling who are domain experts (e.g., they have expertise in sculpting or 3D world composition, but do not know how to model). Similarly, the proposed voice-based interaction approach may be compared with alternative solutions based on mouse and keyboard or multi-modal approaches, to explore the pros and cons of each solution. Finally, Blender was selected as the 3D modeling tool because of the advantages reported in <ref>; however, its UI is designed for WIMP interaction and thus presents commands, buttons, functions, etc., that might bias or confuse novices. Despite our carefully hiding all the irrelevant parts of the Blender UI, a system purposely designed to better fit voice interaction might be adopted to elicit the mental model.
§ CONCLUSION
Voice interaction is emerging as a promising paradigm that can simplify 3D modeling for digital fabrication. However, novices' mental model is never considered when designing voice-based 3D modeling systems. In addition, voice interaction is usually built on top of WIMP systems instead of designing the voice paradigm and the whole system from scratch. This study addresses these limitations by investigating the novices' mental model in 3D modeling and contributes to the state-of-the-art by identifying a set of design implications that support the definition of voice-based interaction paradigms for the design and customization of personalized 3D models. This contribution aims to lower the barrier to 3D modeling thus supporting the wider democratization of digital fabrication.
As future work, we are now addressing the limitations reported in the previous section. We are also working on the development of a prototype of a voice assistant integrated into Blender: it is currently being developed in DialogFlow <cit.> and it has been designed considering the design implications proposed in this study. The aim is to study novices' behavior when interacting with real systems, also exploring if and how the design indications suggested in this study also accommodate the design of more complex objects in more realistic situations, for example, by proposing scenarios instead of tasks.
§.§.§ Acknowledgements
This work has been funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 952026 (<https://www.humane-ai.eu/>).
The research of Andrea Esposito is funded by a Ph.D. fellowship within the framework of the Italian “D.M. n. 352, April 9, 2022” - under the National Recovery and Resilience Plan, Mission 4, Component 2, Investment 3.3 - Ph.D. Project “Human-Centered Artificial Intelligence (HCAI) techniques for supporting end users interacting with AI systems”, co-supported by “Eusoft S.r.l.” (CUP H91I22000410007).
|
http://arxiv.org/abs/2307.04650v1 | 20230710154934 | Interaction between two overall neutral charged microscopically patterned surfaces | [
"Shiqi Zhou",
"Amin Bakhshandeh"
] | cond-mat.soft | [
"cond-mat.soft",
"cond-mat.stat-mech"
] |
School of Physics and Electronics, Central South University, Changsha, 410083, Hunan, China
[email protected]
Instituto de Física, Universidade Federal do Rio Grande do Sul, Caixa Postal 15051, CEP 91501-970, Porto Alegre, RS, Brazil.
We study the interaction between heterogeneously charged surfaces in an electrolyte solution by employing classical Density Functional Theory (cDFT) and Monte Carlo simulations. We observe a consistent behavior between cDFT and Monte Carlo simulations regarding force curves and two-dimensional density profiles. Armed with the validated cDFT, we explore the system’s behavior under parameters challenging to simulate directly. Our findings include impacts of domain size, domain charge, domain charge configuration, and bulk electrolyte concentration on the osmotic pressure. Remarkably, the force curve is more sensitive to the domain size for asymmetric configuration than symmetry configuration; the bulk concentration weakly influences the force curve independent of the system configurations.
Interaction between two overall neutral charged microscopically patterned surfaces
Amin Bakhshandeh
August 12, 2023
=====================================================================================
§ INTRODUCTION
Electrostatic interactions play a critical role in stabilizing colloidal systems, ionic chemical reactions, and biochemical and physical phenomena <cit.>. These interactions are responsible for many exciting phenomena; as a result, their study has led to significant advances in various scientific fields.
One intriguing phenomenon observed in colloidal suspensions with multivalent ions is the reversal of electrophoretic mobility <cit.>. Under certain conditions, a like-charge attraction between colloidal particles of the same charge sign can also occur <cit.>.
Understanding electrostatic interactions and their effects on colloidal systems is crucial for designing novel materials and processes in various applications, such as drug delivery, energy storage, and water treatment. Therefore, it is important to continue investigating the mechanisms and properties of electrostatic interactions in various systems.
When a charged surface is in contact with an electrolyte solution, an electrical potential difference is created across the interface, which attracts oppositely charged ions and creates an electric double layer (EDL). The first attempt to formulate the concept of the EDL was made by Helmholtz in 1853 <cit.>, who proposed a primitive model in which a boundary between the charged surface and the electrolyte solution constitutes an electrical double layer, with a layer of oppositely charged ions at the surface. However, Helmholtz's model did not consider the effect of the thermal motion of ions. This deficiency was corrected by Gouy-Chapman (GC) <cit.>, who introduced the concept of a diffuse double layer.
The stability of colloidal particles is more complicated and cannot be explained by the GC model alone. To provide a general understanding of the stability of colloidal suspensions, there are some approximate theories, such as Debye–Hückel and Derjaguin–Landau–Verwey–Overbeek (DLVO) theory. However, the problem with these theories is that they have limitations and can only be applied to systems such as 1:1 electrolytes with moderate concentrations. As the correlation of the system increases, these theories become inadequate and fail <cit.>.
Recent studies have shown that the behavior of charged surfaces in biological systems can be heavily influenced by the interaction between the charged surface and the surrounding biomolecules. In particular, the presence of charged lipids and other membrane components can significantly alter the behavior of charged surfaces on cell membranes, leading to new and exciting phenomena <cit.>. Recent developments in experimental techniques, such as single-molecule force spectroscopy, have provided new insights into the behavior of charged surfaces in biological systems <cit.>.
Recently, research has focused on the long-range interactions between heterogeneously charged surfaces. These surfaces are significant in nanotechnology, as it is possible to create periodically charged patterned surfaces using nano-fabrication techniques <cit.>. These systems exhibit unique behavior, such as the adsorption of polyelectrolytes and polyampholytes being dependent on their configuration <cit.>. It has been observed experimentally that the attraction between these surfaces can extend up to 500 Å <cit.>. One possible explanation for such an observation is a correlation between the charged domains on the surfaces <cit.>. If this assumption were correct, a rapid shear movement of the plates would make the attraction disappear: in this situation, the domains would not have time to adjust themselves and, as a result, the correlations and the attraction force would be weakened. However, it is known that correlations do not play any role in this observation, and the attraction force is due to pure electrostatic interaction <cit.>. In general, the study of these systems can be complex and often requires the use of molecular simulations and advanced approaches such as classical Density Functional Theory (cDFT) <cit.>.
The classical density functional theory provides a powerful tool for the calculation of the
structure and thermodynamic properties of heterogeneous fluid systems <cit.>. Most studies validating the accuracy of cDFT are primarily focused on one-dimensional cases, where the density distribution is solely dependent on a one-dimensional coordinate. Moreover, for three-dimensional cases, various scenarios have been compared with molecular simulations, resulting in different reported outcomes <cit.>.
The effectiveness of cDFT for the present two-dimensional model is still unknown, despite its demonstrated utility in studying complex systems, such as heterogeneously charged surfaces within electrolyte solutions <cit.>. Therefore, one of the aims of this article is to assess the performance of a commonly used cDFT version in a two-dimensional case using molecular simulation data, since, to the best of our knowledge, such a comparison does not exist.
Despite the advances that have been made in our understanding of charged surfaces in electrolyte solutions, there is still much to be learned. The behavior of these systems is highly dependent on the properties of the electrolyte solution, such as its concentration and ionic strength. Furthermore, the effects of thermal fluctuations and the presence of other molecules in the system can also significantly impact the behavior of charged surfaces. Additional research is required to fully comprehend the behavior of these complex systems and to create new theoretical and computational tools that can help us better predict their behavior in different environments.
The behavior of these systems is governed by two forces: electrostatic and entropic. The entropic force in an electrolyte solution arises from the collisions of an ion with other ions or surfaces within the solution, resulting in an exclusion volume effect. This effect causes the particles to experience a net force that is proportional to the density gradient of the surrounding particles.
This paper presents a comprehensive study of the interaction between two heterogeneously charged neutral surfaces (HCNS) using molecular simulation and cDFT. Through our simulations and theoretical calculations using cDFT, we aim to shed light on the complex interplay between electrostatic interactions and entropic forces that govern the behavior of these systems.
The paper is organized as follows. In Section <ref>, we provide a detailed description of our simulation approach and the theoretical model used to investigate the behavior of heterogeneously charged neutral surfaces. In Section <ref>, we describe the cDFT. In Section <ref>, we present and discuss our results, and in Section <ref>, we conclude our work.
§ SIMULATION METHOD
We utilize a two-dimensional model to investigate the behavior of heterogeneously charged surfaces immersed in an electrolyte solution. The model consists of two flat surfaces with dimensions L_x and L_y, separated by a distance H and surrounded by the electrolyte. For our simulations, we set L_x=L_y=400 Å. The solvent is treated as a uniform dielectric with permittivity ϵ_w, and the Bjerrum length is defined as λ_B = q^2/(ϵ_w k_B T), where q, k_B, and T denote the proton charge, Boltzmann constant, and temperature, respectively. We take λ_B to be 7.2 Å. Each plate consists of charged domains with dimensions of L × L_x. The cell configuration is shown in Fig. <ref>.
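As a quick consistency check of the parameters used here (not part of the simulation code itself), the short script below computes the Bjerrum length of water at room temperature and the Debye screening length of a 1:1 electrolyte; the dielectric constant and temperature (ϵ_r = 78.3, T = 298 K) are assumed values.

import math

e = 1.602176634e-19        # proton charge (C)
eps0 = 8.8541878128e-12    # vacuum permittivity (F/m)
kB = 1.380649e-23          # Boltzmann constant (J/K)
NA = 6.02214076e23         # Avogadro number (1/mol)
eps_r, T = 78.3, 298.0     # assumed dielectric constant and temperature

lambda_B = e**2 / (4 * math.pi * eps0 * eps_r * kB * T) * 1e10  # in angstroms
print(f"Bjerrum length: {lambda_B:.2f} A")                      # about 7.2 A

def debye_length(c_molar):
    # Debye length (angstroms) of a 1:1 electrolyte at concentration c (mol/L).
    n = c_molar * NA * 1e-27        # number density per cubic angstrom, per species
    kappa2 = 8 * math.pi * lambda_B * n
    return 1.0 / math.sqrt(kappa2)

print(f"Debye length at 2.5 M: {debye_length(2.5):.2f} A")      # about 1.9 A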
Our model considers ions as hard spheres with a radius of 2 Å, and we simulate the system using the grand canonical Monte Carlo (GCMC) algorithm, as described in previous studies <cit.>. To study the effect of the electrolyte solution, we put the system in contact with a salt reservoir at a concentration of c, and determine the excess chemical potential of the reservoir using the mean spherical approximation (MSA) <cit.>. Although the MSA is an approximation, it is very precise for 1:1 electrolytes <cit.>.
In this paper, we use the term "symmetric" when two plates have the same configuration, and "asymmetric" when the configurations of two plates are mutually crosswise arranged.
We implemented the GCMC simulation for salt concentrations of 20 and 100 mM and surface charge densities of 0.0998 and 0.2 Cm^-2. Each plate consists of 1600 charged particles arranged in different patterns. To evaluate the electrostatic energy, we used the 3D Ewald summation method with the correction of Yeh and Berkowitz for the slab geometry <cit.>.
In Fig. <ref>, we plotted the density profile of negative and positive ions in 3D for different patterns.
To evaluate the force on the plate, we consider both the electrostatic interactions and the entropic force resulting from the momentum transfer of ions colliding with the plates. To calculate the entropic force, we use the method proposed by Wu et al. <cit.>:
For each sample, we move the plate towards the other plate and count the number of overlaps with electrolyte ions <cit.>:
β F = < N >/Δ z ,
where N is the number of overlaps with ions and Δ z is the displacement of the wall, with Δ z → 0. The entropic pressure then becomes:
β P = < N >/(Δ z L_x L_y) .
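A minimal sketch of this overlap-counting estimate is given below: the wall at z = 0 is virtually displaced by Δz towards the fluid and the ions it would then overlap are counted. The ion configuration, the value of Δz, and the single-sample estimate are illustrative assumptions; in the actual simulations the count is averaged over many samples.

import numpy as np

Lx = Ly = 400.0       # plate dimensions (angstrom)
ion_radius = 2.0      # hard-sphere radius (angstrom)
dz = 0.1              # virtual displacement of the wall (angstrom)

def entropic_pressure(z_coords):
    # beta*P from one configuration of ion z-coordinates, for a wall at z = 0:
    # an ion overlaps the displaced wall if its centre lies below ion_radius + dz.
    n_overlap = np.count_nonzero(z_coords < ion_radius + dz)
    return n_overlap / (dz * Lx * Ly)

# fake configuration: 2000 ions uniformly distributed in a 40 A slab above the wall
rng = np.random.default_rng(0)
z = rng.uniform(ion_radius, ion_radius + 40.0, size=2000)
print(entropic_pressure(z))   # close to the ideal-gas contact value rho = 3.1e-4 A^-3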
§ CLASSICAL DENSITY FUNCTIONAL THEORY
We use a cDFT version <cit.> that has been repeatedly validated in several one-dimensional cases. In this version, the treatment of hard-sphere repulsive interactions incorporates a recently developed extended form <cit.> of the fundamental measure functional proposed by Kierlik and Rosinberg <cit.>. The long-range Coulomb interaction is treated with a mean-field approximation, and the coupling between hard-sphere and Coulomb interactions is treated with a second-order perturbation expansion based on the mean spherical approximation. Although this version has been repeatedly validated in one-dimensional cases, a two-dimensional validation has not yet been reported.
Consideration of a two-dimensional model increases the difficulty of algorithm convergence. For this reason, we take three measures. (i) The calculation is started from zero surface charge density; the charge density is gradually increased to the target value, and the density distribution output of the previous charge density is taken as input for the next charge density calculation. (ii) The nonlinear equations resulting from the discretized two-dimensional cDFT equation are solved by the Newton GMRES algorithm, which is implemented in the public-domain nonlinear Krylov solver NITSOL. (iii) In one-dimensional cases, the mesh size is typically very small, often around 0.025 times the molecular diameter. However, in the two-dimensional case, we set the mesh size to 0.1 times the molecular diameter for both dimensions. We conducted a validation test comparing grid sizes of 0.05 and 0.1 times the molecular diameter and found nearly identical results.
To evaluate the osmotic pressure, we begin by calculating the effective electrostatic interaction potential as a function of the plate separation H. This potential is obtained as the difference between the excess grand potential when the two plates are at a distance of H apart and the excess grand potential when the plates are sufficiently far apart. Next, the derivative of this interaction potential with respect to distance is computed to obtain the interaction force between the plates.
The excess grand potential is determined by subtracting the grand potential of the system with the two plates at a distance of H apart by the grand potential of a bulk system with the same volume but without the two plates. Within the framework of cDFT, it is possible to calculate the grand potential of a heterogeneous system, such as the two-plate system, by substituting the density distribution obtained from cDFT into the expression for the grand potential. Conversely, if the bulk density is used as input, the grand potential of the bulk system can be obtained.
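The post-processing step described above can be sketched as follows: tabulated excess grand potentials from the cDFT solver are turned into an interaction potential W(H) and then differentiated numerically to obtain the force and the osmotic pressure. The Ω values used here are placeholders, not actual cDFT output.

import numpy as np

H = np.array([6.0, 8.0, 10.0, 12.0, 14.0, 16.0])               # separations (angstrom)
omega_excess = np.array([-1.2, -0.4, 0.1, 0.05, 0.01, 0.0])    # placeholder values (kT)

W = omega_excess - omega_excess[-1]     # interaction potential, referenced to large H
force = -np.gradient(W, H)              # F = -dW/dH (kT per angstrom)
area = 400.0 * 400.0                    # plate area (angstrom^2)
pressure = force / area                 # osmotic pressure (kT per angstrom^3)

for h, p in zip(H, pressure):
    print(f"H = {h:4.1f} A   beta*P = {p: .2e}")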
§ RESULTS & DISCUSSION
In the first step, we compared the forces obtained by Monte Carlo (MC) simulations and cDFT for symmetric and asymmetric cases with a domain size of 200 × 400Å, and a surface charge density of 0.0998 Cm^-2 in the presence of 20 mM 1:1 electrolytes. As shown in Fig. <ref>(a-b) for symmetric and asymmetric cases, there is good agreement between simulation and cDFT.
Next, we investigate the effect of the electrolyte concentration on the interaction between the charged microscopically patterned surfaces. For the symmetric case shown in Fig. <ref>(a), with a domain size of 200 × 400 Å and a surface charge density of 0.0998 Cm^-2, the repulsion force does not change significantly when the concentration increases from 20 to 100 mM. The same observation holds for the asymmetric case, where the attraction exists. However, this is not the case for a higher surface charge density. As seen in Fig. <ref>(b), the blue curve, representing the case with a surface charge density of 0.25 Cm^-2, exhibits a significant change in the attraction curve. In Fig. <ref>(a), the osmotic pressure is plotted against the separation distance. As can be seen, despite increasing the electrolyte concentration by five times, the osmotic pressure for the symmetric case does not change significantly.
However, as seen in Fig. <ref>(b), the attraction is higher for the 20 mM electrolyte concentration than for 100 mM. The observed phenomenon can be attributed to the entropic forces: the increased collisions between the plates and the ions of the electrolyte at higher concentrations lead to a reduction in the attractive force. However, as the two plates get closer, the entropic forces decrease, since the number of particles between the plates decreases, so that both cases result in the same magnitude of the attraction force.
As shown in Fig. <ref>(a-b), for the symmetric case with a surface charge density of 0.2 Cm^-2, the repulsion does not change significantly for the different domain sizes of 20×200 Å^2 and 40×200 Å^2. However, there seems to be greater sensitivity to the domain size in the asymmetric cases, where the attractive force is present. It is observed that a bigger domain size results in a significantly higher attraction between the plates for asymmetric configurations.
We also compared the density profiles of ions obtained from Monte Carlo simulation and cDFT in Figs. <ref> and <ref>; to achieve this, we mapped the 3D density profiles of the ions to 2D. As is observed, the MC simulation results have higher fluctuations than cDFT, but overall the simulation and cDFT are consistent. The ion density profiles provide us with insight into the structure of the electric double layer around the charged patterned surfaces. The fluctuations of the density profiles are unavoidably large, as depicted in Figs. <ref>-<ref>, because small bins were used to obtain the 2D profiles. However, the overall agreement is acceptable. It is worth noting that, for the two-dimensional system, the force curves do not appear to be significantly influenced by the density profile. Although the density profile is a physically significant quantity that sensitively depends on the microscopic configuration, its impact on the force curves seems to be relatively minimal, and this observation is satisfactorily predicted by cDFT. Since the force is the negative derivative of the potential function with respect to distance, its effective prediction should depend more sensitively on the accuracy of the method; fortunately, cDFT demonstrates promising accuracy in predicting force curves. This particular issue is the subject of our future research.
After comparing the predictions of cDFT and Monte Carlo simulations, we observed a good agreement between the two methods. This comparison confirms the accuracy and reliability of our theoretical approach to studying the behavior of these systems. As a result, we relied solely on cDFT to investigate systems with higher electrolyte concentrations (2.5 M) and surface charge densities (2.5 Cm^-2). These parameters are challenging to simulate using the MC method due to the difficulty in reaching equilibrium. In Fig. <ref>, we plot the osmotic pressure for different domain sizes at different separation distances for asymmetric cases.
Fig. <ref> shows no attraction between the plates at high electrolyte concentrations for H>10 Å. This is opposite to the case of low electrolyte concentration, in which the attraction exists at separation distances up to 40 Å for asymmetrical pattern.
In Fig. <ref>(a), we show the osmotic pressure obtained from cDFT for asymmetric configurations of plates with surface charge density σ = 0.1 Cm^-2 and different domain sizes in the presence of a 2.5 M 1:1 electrolyte. As seen in Fig. <ref>(a), there are a maximum attraction force and a maximum repulsion force at around 5.9 and 8.8 Å, respectively. When the separation distance between the plates is small, the higher entropic forces encourage ions to migrate from the region between the plates to the reservoir; consequently, this leads to an increased Debye length and attraction force between the plates. However, as the separation distance H increases, the Helmholtz double layer (HDL) starts to form. The second maximum occurs at a distance of approximately twice the ion diameter (8 Å), which corresponds to the Helmholtz layers of the two plates coming into contact with each other. This double layer increases the excluded volume between the plates and, as a result, the entropic force; consequently, the plates do not experience any attraction and only an entropic force is observed. When the separation distance increases further, the entropic force gets smaller and, since the Debye length is around 1.9 Å, the interaction force rapidly goes to zero.
In Fig. <ref>(b), as the surface charge density increases, the first minimum (maximum attraction force) disappears due to the direct electrostatic interaction between the plates. However, the maximum repulsion force shifts to a smaller separation distance of around 8.1 Å, which can be attributed to the complete formation of the HDL on the domains. This is because the higher surface charge density leads to a stronger binding between the ions and the plates, which in turn leads to the formation of a more rigid HDL.
In Fig. <ref>, we plot the maximum and minimum forces observed in Fig. <ref> against the domain size, together with the separation H at which the maximum repulsion force appears, as a function of L.
As shown in Fig. <ref>(a), the maximum repulsion occurs at smaller distances for larger domain sizes. This suggests that the HDL forms more easily and completely for larger domains than for smaller ones. This is because the ions find it more difficult to approach small domains due to the repulsion from the neighboring domains; we show this schematically in Fig. <ref>. The maximum osmotic pressure, Fig. <ref>(b), reveals that as the domain size decreases, the maximum force also decreases. This, again, can be explained by the fact that the ions are firmly anchored to the charged surface within the HDL of the larger domains, where ions can more easily approach the domain due to the weaker repulsion of the neighboring domains. Additionally, as explained previously, Fig. <ref>(c) shows that the maximum attraction force is directly related to the domain size. This is due to the electrostatic interaction between domains, which overcomes the entropic forces.
In Fig. <ref>, the same information is provided for σ=0.1 Cm^-2; in addition, we also show the separation distance at which the maximum attraction force occurs, H_min, as a function of L in Fig. <ref>(a).
As is shown in Fig. <ref>(a and c), the separation distance at which maximum and minimum forces occur exhibits the same trend as Fig. <ref>. For maximum forces in Fig. <ref>(b), a similar trend to Fig. <ref> is also observed. However, in Fig. <ref>(d), it can be seen that a larger domain size leads to stronger attraction forces.
§ CONCLUSION
This study investigates the impact of domain size, domain charge, domain surface configuration, and bulk electrolyte concentration on the osmotic pressure of charged systems using Monte Carlo simulation and Classical Density Functional Theory (cDFT).
We examined the interaction force between plates with symmetric and asymmetric configurations and its relation with domain size. To this end, we studied surface charge densities of 0.0998 and 0.25 Cm^-2 and a 20mM 1:1 electrolyte for both asymmetric and symmetric configurations with domain sizes of 20, 40, and 200 Å. In all cases, we observed attraction between the plates in the asymmetric configurations, while repulsion was observed in the symmetric configurations. Furthermore, we compared the results obtained from recently developed cDFT for plates with non-homogeneous charge distribution with those from MC simulation and found a good agreement.
Our findings reveal that the domain size has minimal effect on the osmotic pressure in symmetric configurations. However, the
attraction between plates is sensitive to the size of the domain for the case of asymmetric configurations.
Furthermore, we analyze the behavior of the force curve with variations in bulk concentration for symmetric and asymmetric configurations. The force curve remains relatively unchanged with changes in bulk concentration, but higher domain charge densities can amplify its sensitivity. Moreover, asymmetric configurations exhibit more complex behavior at higher electrolyte concentrations. It was observed that in this case a maximum repulsion appears which can be explained by HDL.
In addition, our study confirms the validity of the classical density functional theory (cDFT) <cit.> for slab geometry with non-homogeneous charge distributions. In future work, we will employ the method to study systems with spherical geometry <cit.>; these systems are of great importance in colloidal science and in biological systems such as viruses.
§ ACKNOWLEDGMENTS
This project is supported by the National Natural Science Foundation of China (Grants 22173117), the High Performance Computing Center of Central South University and CAPES.
|
http://arxiv.org/abs/2307.04165v1 | 20230709130711 | On IMU preintegration: A nonlinear observer viewpoint and its application | [
"Bowen Yi",
"Ian R. Manchester"
] | eess.SY | [
"eess.SY",
"cs.SY"
] |
On IMU preintegration: A nonlinear observer viewpoint and its applications
Bowen Yi (Robotics Institute, University of Technology Sydney, NSW 2006, Australia; [email protected])
Ian R. Manchester (Australian Centre for Robotics, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006, Australia; [email protected])
Corresponding author: Bowen Yi (the work has partially been done when the first author was with The University of Sydney).
The inertial measurement unit (IMU) preintegration approach nowadays is widely used in various robotic applications. In this article, we revisit the preintegration theory and propose a novel interpretation to understand it from a nonlinear observer perspective, specifically the parameter estimation-based observer (PEBO). We demonstrate that the preintegration approach can be viewed as recursive implementation of PEBO in moving horizons, and that the two approaches are equivalent in the case of perfect measurements. We then discuss how these findings can be used to tackle practical challenges in estimation problems. As byproducts, our results lead to a novel hybrid sampled-data observer design and an approach to address statistical optimality for PEBO in presence of noise.
Nonlinear observer IMU preintegration Robotics Sampled-data estimation
August 12, 2023
===================
§ INTRODUCTION
State estimation and perception are fundamentally important for autonomous systems <cit.>. Initially, filtering approaches dominated the field of online state estimation due to the limitation of computational capacity <cit.>. In recent years full smoothing approaches which are based on nonlinear batch optimisation have gained popularity in numerous localisation problems, since they provide estimates with high accuracy <cit.>. However, the optimisation-based estimation framework is computationally demanding. This issue is currently becoming more urgent than ever as we have witnessed the trend of utilisation of monocular cameras with IMUs – known as the monocular visual-inertial system (VINS) – in real-world robotic systems. The VINS is an asynchronous sampled system, with IMUs providing measurements at a high rate. As a result, there is the need to calculate the “standard” inertial integration from initial conditions between two camera frames, which thus makes it a daunting task to solve in real time.
In <cit.>, Lupton and Sukkarieh propose the IMU preintegration approach to address the above-mentioned computational challenges. It allows pre-processing of the high-rate data from IMU to obtain low-rate pseudo measurements, in which initial conditions and the preintegrated quantities are separated, thus reducing on-line computational burden significantly. Later on, the preintegration approach was extended to kinematic models living on nonlinear manifolds <cit.>, and now is gradually becoming a popular result in the robotics community. More recently, it has been improved and elaborated from several different perspectives, e.g., analytical solutions for graph optimisation <cit.>, approximation via Gaussian process <cit.>, and generalisation on groups <cit.>, just to name a few. Since its introduction, the preintegration approach has been widely applied in various robotic systems, see e.g. <cit.>.
In this paper we prove that the preintegration approach can be derived following the observer theory for nonlinear systems, in particular the parameter estimation-based observer (PEBO). It is a novel kind of constructive observer technique recently proposed by Ortega et al. in <cit.> and later elaborated in <cit.>, in which state observation is translated into an on-line parameter identification problem; see <cit.> for a geometric interpretation. Recently, we have extended the PEBO methodology from Euclidean space to marix Lie groups, which has been proven instrumental in solving several open problems in observer design for robotic systems <cit.>.
Although the approaches of preintegration and PEBO have been pursued in parallel in different communities, it is interesting and generally important to elucidate the connections between these two frameworks. By bridging these distinct bodies of research, this paper aims to unveil their relationship and present the following main contributions.
1) We revisit the preintegration theory and provide a nonlinear observer interpretation to it. Namely, the preintegrated signals are exactly the dynamic extended variables (i.e., fundamental matrices) in PEBO but implemented in a moving horizon. Under some mild assumptions, we establish the equivalence between the preintegration and PEBO approaches.
2) We show the practical utility of the resulting equivalence in addressing several practical challenges encountered in state estimation problems. In particular, it provides a novel solution to design sampled-data observers for continuous-time dynamical systems and enables the attainment of statistical optimality in PEBO in the presence of noisy measurements.
The remainder of the paper is organised as follows. In Section <ref> we consider the dynamical models in Euclidean space as an illustrative example to recall the preintegration and PEBO approaches. It is followed by some preliminary results about the connections between two approaches in Euclidean space. In Section <ref>, we present our main results on the manifold SO(3) ×^n, which is the state space considered in numerous robotic and navigation-related problems, and also the original motivation of IMU preintegration. Then, we discuss some applications of the main claim in Section <ref>. The paper is wrapped up by some concluding remarks in Section <ref>.
Notation: For a given variable or signal x, sometimes we may simply write x(t) as x_t, and the dependency of signals on t is omitted for brevity when clear. We use x(t_1^-) to denote the value of x just before t_1, i.e. x(t_1^-):= lim_s>0,s→ 0 x(t_1 - s). We use |x| to represent the standard Euclidean norm of a vector. SO(3) represents the special orthogonal group, which is defined as SO(3)={R∈^3×3|R^⊤ R = I_3, (R) =1}. The operator (·)_× is defined such that a_× b := a× b for two vectors a, b ∈^n. For a variable y, we use y̅ to represent its noisy measurement from sensors. λ_ max{A} denotes the largest eigenvalue of a symmetric matrix A∈^n× n.
§ PRELIMINARY RESULTS IN EUCLIDEAN SPACE
We start with the deterministic systems with states living in Euclidean space to introduce our basic idea. Its extension to the systems on manifolds, which is tailored for pose estimation of rigid bodies, will be presented in the next section.
§.§ Problem Setting
In many engineering problems, there is a need to estimate the unknown internal state x ∈𝒳⊂^n for the linear time-varying (LTV) dynamical system
ẋ = A_t x + B_t u
y = C_t x + D_t u
with input u ∈^m and the output y ∈^p, and we usually consider the state space as ^n. Since sensor noise is unavoidable in practice, the measured signals of u and y satisfy
u̅ = u + ϵ_u, y̅ = y + ϵ_y,
in which ϵ_u ∈^m and ϵ_y∈^p represent measurement noise, usually modelled as zero-mean white-noise processes. This estimation problem has been well addressed by the Kalman filter and the full-information estimation approach (a.k.a. batch optimisation).
In some applications, despite admitting continuous-time models, we are concerned with estimation of the state x at some discrete instants {t_k}_k ∈ℕ. This is because multiple sensors provide information with different rates – sometimes even having obvious time-scale separation. For example, in the problem of visual inertial navigation (VIN) for robotics, the IMU provides data at a very high rate, and it is reasonable to roughly view inertial measurements as some continuous-time signals. In contrast, it is well-known that image processing is relatively computationally heavy, and thus the camera provides data at a low rate. As a result, if the estimation algorithm is being processed at the same rate as the IMU, then it is usually not tractable on-line.
In this paper, we make the following assumption. This scenario exists in many practical problems, particularly in robotic systems.
The input u̅ is available as a continuous-time signal, and the output y̅ is measured at some discrete instances {t_k}_k∈ℕ.
The main results can be extended straightforwardly to discrete-time systems with multi-rate sampled data (i.e. high-frequency input u̅ and low-rate output y̅), and we do not discuss it in this paper.
§.§ Preintegration in Euclidean Space
To address the state estimation of x under Assumption <ref>, Lupton and Sukkarieh proposed in <cit.> the preintegration approach to generate pseudo-measurements to improve on-line efficiency. Let us recall its basic idea with the LTV model (<ref>) as follows.
<cit.> Consider the LTV system (<ref>). Given two instants t_k< t_k+1, there exist a matrix F_k and a vector v_k such that the state satisfies
x(t_k+1) = F_k x(t_k) + v_k.
for all x(t_k) ∈^n.
Its proof is available in <cit.>. We underscore that the matrix F_k and the vector v_k are independent of the state x; they are accessible signals uniquely determined by the measurable signals A_t, B_t and u_t. Hence, we refer to F_k and v_k as the preintegration, and they can be calculated as
F_k = F(t_k+1^-), v_k = v(t_k+1^-),
which is generated by the dynamics
Ḟ = A_t F, F(t_k^+) = I_n
v̇ = A_t v + B_t u, v(t_k^+) = 0_n
Note that when implementing preintegration we only have the measurable signal y̅ rather than the perfect output y, and thus the second preintegration is implemented as
v̇̅̇ = A_t v̅ + B_tu̅ , v̅(t_k^+) =0_n,
where v̅ may be viewed as the noisy signal of v. They can be written as the Picard integral for t ∈ ( t_k, t_k+1 )
F_t = I_n + ∫_t_k^t A_s F_s ds, v̅_t = ∫_t_k^t ( A_s v̅_s + B_s u̅_s ) ds,
and implemented numerically via discretization.
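For concreteness, the sketch below (an illustration only, not the authors' implementation) computes the pseudo-measurements by forward-Euler discretisation; the step size dt and the callables A, B and u_bar are assumptions of this example.

import numpy as np

def preintegrate(A, B, u_bar, t_k, t_k1, dt=1e-3):
    """Integrate the reset dynamics over (t_k, t_{k+1}): F(t_k^+) = I_n, v(t_k^+) = 0_n."""
    n = A(t_k).shape[0]
    F = np.eye(n)                                     # F(t_k^+) = I_n
    v = np.zeros(n)                                   # v_bar(t_k^+) = 0_n
    t = t_k
    while t < t_k1 - 1e-12:
        F = F + dt * (A(t) @ F)                       # F_dot = A_t F
        v = v + dt * (A(t) @ v + B(t) @ u_bar(t))     # v_dot = A_t v + B_t u_bar
        t += dt
    return F, v                                       # pseudo-measurements (F_k, v_bar_k)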
Now, using the preintegration we obtain the equation (<ref>) that is a new LTV discrete-time dynamical model with known F_k and v_k, and the (nominal) output function
y(t_k) = C(t_k) x(t_k) + D(t_k) u(t_k).
Suppose the current moment is t_N. In the full-information estimation (FIE) approach we need to estimate {x(t_k)}_k∈ℓ with ℓ:= {0,…, N}. The simplest case is to consider solving the optimisation
(X̂, Ŵ) = arg min_X, W  J_x(X) + J_w(W)
s.t. x̂(t_k+1) - F_k x̂(t_k) - v̅_k = w_k
with the cost functions[We assume that y is measured from t_0 without loss of generality.]
J_x(X) = ∑_k = 0^N-1γ_k |y̅(t_k) - C(t_k) x(t_k) - D(t_k) u̅(t_k) |^2
J_w(W) = ∑_k = 0^N-1γ_k'|w_k|^2
and the definitions
X := ( x_0, …, x_N), X̂ := ( x̂_0, …, x̂_N), W := (w_0,…, w_N).
The coefficients γ_k, γ_k'>0 may be involved to weight different instances, and two widely-used selections are:
(i) using the norm inverse of some covariance for the consideration of noise; and
(ii) selecting γ_k = λ^N-k with λ∈ (0,1) to represent forgetting factors in on-line deterministic estimators.
The above summary of preintegration is presented as a high-level framework, which may be implemented in different ways. For instance, the optimisation problem can be solved for each instance (a.k.a. full-information estimation, FIE), in a moving-horizon, or incrementally as done in LTV Kalman-Bucy filters at the discrete instants {t_k} in a lower sampling rate; the optimisation may also be replaced by computing the optimal maximum a posteriori (MAP) estimate, and combined with factor graphs.
To summarise, the basic idea is to use the preintegration (<ref>) to transform the continuous-time model (<ref>) into the discrete model (<ref>) with low-rate measurements, and then complete the estimation task.[To distinguish it from the other estimates in the remainder of the paper, we write the estimate from the preintegration approach as X̂_PI.] Note that a salient feature of (<ref>) is the separation between the preintegrated signals F, v and the initial condition x(t_k), which significantly reduces the on-line computational burden in the nonlinear context.
State Estimation via Preintegration:
- preintegration: (<ref>)
- estimate: X̂_PI
- optimisation: (<ref>)-(<ref>)
The computational burden of estimation of the original continuous-time system (<ref>) is not prohibitive, due to linearity in the model. However, when considering the visual navigation problem on manifolds, high nonlinearity and non-convexity limit the performance and complicate the analysis of both full-information estimation and filtering approaches.
§.§ Parameter Estimation-Based Observer
Recently, a new constructive nonlinear observer technique, namely PEBO, has been developed for a class of state-affine systems <cit.>. Its basic idea is translating state estimation into the one of some constant variables and then identifying them online. This provides an efficient way to simplify observer design.
Instead of introducing the approach comprehensively, we limit ourselves to the LTV system (<ref>) to show the basic idea of PEBO. Following <cit.>, the first step is to design the dynamic extension
ξ̇= A_t ξ + B_t u, ξ(t_0)= ξ_0,
with ξ∈^n, in which the initial condition ξ_0 is selected by the user and is thus known. We underline here that the PEBO approach is developed for deterministic systems with perfect measurement of u, and robustness to various uncertainties can be addressed via standard Lyapunov analysis. In this subsection, we consider the case with access to the perfect u; its extension to the noisy measurement u̅ will be discussed in Section <ref>.
If we define the error e:= x -ξ, it yields the error dynamics
ė = A_t e.
As shown in linear systems theory <cit.>, the solution of e is given by e(t) = Φ(t,0) e(0), in which Φ(t,s) is the state transition matrix of A_t from s to t. Though it is generally impossible to write down the function Φ(t,s) analytically, it can be calculated by implementing the dynamics of fundamental matrix Ω on-line
Ω̇ = A_t Ω, Ω(t_0)= I_n
Φ(t,s) = Ω(t) Ω(s)^-1.
Then, we have the new parameterisation to the state x as
x_t = ξ_t - Ω_t ξ_0 + Ω_t θ
with the unknown vector θ := x(0). It means that once the parameter θ have been determined as θ̂, one has the state estimation as
x̂_t = ξ_t - Ω_t ξ_0 + Ω_t θ̂.
By plugging the new parameterization of x into (<ref>), we have the linear regression model with respect to θ as follows
Y_t = C_tΩ_t θ
with the variable
Y_t := y_t - C_t ξ_t + C_tΩ_t ξ_0 - D_t u̅_t.
Its noisy “measurement” is defined accordingly as
Y̅_t := y̅_t - C_t ξ_t + C_tΩ_t ξ_0 - D_t u̅_t.
It remains to estimate θ from the regression model (<ref>) on-line. With measurements collected at {t_k}_k ∈ℕ, the simplest choice at the moment t_N is to solve the optimisation
θ̂ := arg min_θ∈^n ∑_k=0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k)θ|^2,
with some coefficients γ_k >0.
Hence, the PEBO approach can be summarised below.
Parameter Estimation-Based Observer:
- dynamics: (<ref>), (<ref>)
- estimate (observer output): X̂_PEBO from (<ref>)
- optimisation: (<ref>)
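A minimal numerical sketch of this summary is given below, assuming noise-free u, zero initial condition ξ_0 = 0_n, unit weights γ_k = 1 and a forward-Euler integrator; all function names are illustrative.

import numpy as np

def pebo_estimate(A, B, C, D, u, y_samples, t_samples, dt=1e-3):
    """Return theta_hat, the estimate of x(t_0), from outputs sampled at the t_k."""
    t0 = t_samples[0]
    n = A(t0).shape[0]
    xi = np.zeros(n)                                  # dynamic extension, xi(t_0) = 0_n
    Om = np.eye(n)                                    # fundamental matrix, Omega(t_0) = I_n
    G_rows, Y_rows = [], []
    t, k = t0, 0
    while k < len(t_samples):
        if t >= t_samples[k] - dt / 2:                # output available at t_k
            Y_rows.append(y_samples[k] - C(t) @ xi - D(t) @ u(t))   # regressand Y_bar(t_k)
            G_rows.append(C(t) @ Om)                                # regressor C(t_k) Omega(t_k)
            k += 1
        xi = xi + dt * (A(t) @ xi + B(t) @ u(t))      # xi_dot = A_t xi + B_t u
        Om = Om + dt * (A(t) @ Om)                    # Omega_dot = A_t Omega
        t += dt
    G, Y = np.vstack(G_rows), np.hstack(Y_rows)
    theta_hat, *_ = np.linalg.lstsq(G, Y, rcond=None) # batch least squares for theta = x(0)
    return theta_hat                                  # then x_hat(t) = xi(t) + Omega(t) theta_hat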
For batch optimisation or filtering approaches, it is necessary to impose some “informative” excitation or observability assumptions on the model (<ref>) along the trajectory. There are some observer design tools requiring observability/detectability uniformly along all feasible solutions, e.g., <cit.>; however, this is not the case in various robotic localisation and navigation problems. It means that the optimisation (<ref>) may have multiple or even infinite solutions under an insufficiently excited trajectory.
§.§ The PEBO Viewpoint to Preintegration
In this section, we provide our new interpretation to the preintegration approach from a nonlinear observer perspective. For states living in Euclidean space with perfect or noise-free measurement of u, we summarise our findings as follows.
Consider the LTV system (<ref>) with ϵ_u =0. State estimation from the preintegration approach using (<ref>)-(<ref>) exactly coincides with that from the PEBO (<ref>)-(<ref>) using the zero initial condition ξ_0 = 0_n, in the following senses.
a) The preintegration signal F and the fundamental matrix Ω satisfy
Ω(t_k) = ∏_i=0^k-1 F_i := F_k-1… F_0 , ∀ k∈ℕ
Ω(t) = F(t)Ω(t_k) , t ∈ (t_k, t_k+1).
b) The preintegration signal v and the dynamic extension variable ξ verify
v_t = ξ_t - Ω_t Ω(t_k)^-1ξ(t_k) , t ∈ (t_k, t_k+1).
c) If the cost function J_x + J_w in (<ref>) admits a unique global minimum, the PEBO estimate equals the one from preintegration, i.e., X̂_PEBO = X̂_PI.
First, we note that the fundamental matrix Ω shares the same dynamics as the one of the matrix F in preintegration. The only difference is that the latter resets its initial values in instances {t_k^+}_k ∈ℕ. From the semigroup property of the state transition matrix Φ(t,s), as well as the resetting
lim_s→ t_k^-F(s) = I_n,
we have
Φ(t,t_k) = F(t), t∈ (t_k, t_k+1).
On the other hand, for t∈ (t_k, t_k+1) we have
Ω(t) = Φ(t,t_0) Ω_0
= Φ(t,t_k)Φ(t_k,t_k-1) ⋯Φ(t_1,t_0) I_n
= F(t) ∏_i=0^k-1 F_i,
which verifies the first claim.
For the case ϵ_u =0, we have v(t) = v̅(t) for all t≥ 0. By comparing the dynamics of ξ and v, we have
d/dt ( v - ξ ) = A_t (v-ξ),
thus
v_t - ξ_t = Φ(t,s) (v_s -ξ_s), ∀ t_k+1>t≥ s > t_k.
Selecting s= t_k, and resetting as done in preintegration
lim_s→ t_k^- v(s) = 0_n,
then for t∈ (t_k,t_k+1) we have
v_t = ξ_t - Φ(t,t_k)ξ(t_k),
which verifies the item b).
At the end, let us show the equivalence between the optimisation problems (<ref>) and (<ref>). For the case of ϵ_u = 0 (with perfect measurement of u), we have x(t_k+1) - F_k x(t_k) - v_k =0, and thus
J_x(X) = J_x(X) + J_w(0) ≤ J_x(X) + J_w(W).
Since we have assumed the unique minimum of the cost function, the optimisation in the preintegration approach becomes
X̂ = arg min_X J_x(X)
with the hard constraint
x̂(t_k+1) - F_k x̂(t_k) -v_k =0, k =0,…, N-1.
Invoking the properties a)-b), the above optimisation can be written as
X̂ = arg min_X ∈^(N+1)n ∑_k =0^N-1γ_k |y̅(t_k) - C(t_k) x(t_k) - D(t_k) u̅(t_k) |^2
= arg min_X ∈^(N+1)n ∑_k =0^N-1γ_k |y̅_t_k - C_t_k(ξ_t_k + Ω_t_k( x_0 - ξ_0)) - D_t_k u̅_t_k|^2
= arg min_X ∈^(N+1)n ∑_k =0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k) x_0 |^2,
where in the last equation we have used the hard constraints (<ref>). Let us recursively solve (<ref>) – combining the properties a) and b) – we have the new constraint
x̂(t_k) = ξ(t_k) + Ω(t_k) x̂_0,
which has been plugged into the second equation in (<ref>).
It is clear that the cost function in (<ref>) only contains the decision variable x̂_0, i.e. the first n elements of X̂, and the solution to the optimisation (<ref>) is thus given by
x̂_0 = arg min_x_0 ∈^n ∑_k =0^N-1γ_k | Y̅(t_k) - C(t_k)Ω(t_k) x_0 |^2
together with (<ref>), and note that Ω and ξ are available signals (i.e. the dynamic extension variables in the PEBO). Obviously, this exactly coincides with the solution _ PEBO for the case with zero initial condition ξ_0 for the dynamic extension. We complete the proof for the term c).
The above result establishes the connection between preintegration and PEBO for the LTV dynamical model (<ref>) with the ideal measurements of u.
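As a quick numerical illustration, item a) can be checked on a scalar toy example (the choice A_t = -1 + 0.5 sin t and the step size are purely for demonstration): the PEBO fundamental matrix, never reset, equals the accumulated product of the preintegrated matrices F_i.

import numpy as np

dt = 1e-4
A = lambda t: np.array([[-1.0 + 0.5 * np.sin(t)]])
t_keys = [0.0, 1.0, 2.0, 3.0]                 # sampling instants t_k

Om = np.eye(1)                                # PEBO fundamental matrix Omega (never reset)
prod_F = np.eye(1)                            # running product F_{k-1} ... F_0
for k in range(len(t_keys) - 1):
    F = np.eye(1)                             # preintegration, reset to I at t_k^+
    t = t_keys[k]
    while t < t_keys[k + 1] - 1e-12:
        F = F + dt * A(t) @ F
        Om = Om + dt * A(t) @ Om
        t += dt
    prod_F = F @ prod_F
    print(np.allclose(Om, prod_F))            # item a): Omega(t_{k+1}) = F_k ... F_0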
§ IMU PREINTEGRATION AND PEBO ON MANIFOLDS
In this section, we extend the results in Section <ref> to the extended pose estimation problem on the manifold SO(3) ×^n, which was the original motivation to study the preintegration approach.
§.§ IMU Preintegration
Let us recall the approach of IMU preintegration, which was proposed in <cit.> and elaborated on the manifold in <cit.>.
The motion of a rigid body can be characterised by the kinematic model
Ṙ = R ω_×
v̇ = a +g
ṗ = v
with the attitude R∈ SO(3), the sensor velocity v ∈^3, the “apparent” acceleration a ∈^3 in the inertial frame {I}, and the rigid-body position p ∈^3, which is briefly written as p. The gravity vector is given by g = [0,0,9.8]^⊤ m/s^2. See <cit.> for a concise representation using the matrix group SE_2(3). The IMU provides discrete-time samples of the biased acceleration and rotational velocity in the body-fixed frame {B}, i.e.,
a̅ = a + b_a + ϵ_a
ω̅ = ω + b_ω + ϵ_ω,
in which b_a and b_ω represent the sensor biases[They are slowly time-varying, but can be modelled as constants.], and ϵ_a and ϵ_ω are measurement noise.
§.§.§ Standard inertial integration
If the “initial” condition at t_1 is given, then the states (R, v, q) can be uniquely obtained (for the noise-free case) as the Picard integral
R(t_2) = R(t_1) + ∫_t_1^t_2 R(s) [ω̅(s)- b_ω]_× ds
v(t_2) = v(t_1) + ∫_t_1^t_2 R(s) ( a̅(s) - b_a )ds + Δ_t g
p(t_2) = p(t_1) + Δ_t v (t_1) + 1/2 Δ_t^2 g + ∬_t_1^t_2 R(s) ( a̅(s) - b_a ) d s^2
with
Δ_t:= t_2 - t_1.
If Δ_t is sufficiently small, then the first integral equation in (<ref>) can be approximated by <cit.>
R(t_2) ≈ R(t_1) Exp( ∫_t_1^t_2 (ω̅(s) - b_ω) ds ).
Note that for a relatively large Δ t, this does not hold.
As shown in <cit.>, the above standard inertial integration equations exhibit strong nonlinearity and non-convexity with respect to the unknown initial conditions, mainly stemming from the attitude state R. Between any two key frames, the above “standard” integration has to be repeated, which yields a heavy computational burden for real-time implementation.
§.§.§ Inertial preintegration
It is well known that IMUs are sampled at a much higher rate than the other sensors used for navigation or localisation. In <cit.>, it is suggested to integrate the inertial observations between required poses in the body-fixed frame of the previous pose, so that the inertial observations may be viewed as a single observation in the filter. To be precise, we may define a rotation matrix Δ R_t_1^t relative to the attitude at t_1, i.e.
R(t) = R(t_1) Δ R_t_1^t,
with the state at t_1 being Δ R_t_1^t_1 = I_3. In general, the function Δ R_t_1^t does not have an analytic form, but the relative rotation matrix Δ R_t_1^t can be approximated by
Δ R_t_1^t ≈ Exp( ∫_t_1^t ( ω̅(s) - b_ω) ds )
for |t-t_1| sufficiently small. The inertial integration (<ref>) can be equivalently written as
R(t_k+1) = R(t_k) Δ R_t_k^t_k+1
v(t_k+1) = v(t_k) + R(t_k) Δ v_t_k^t_k+1 + Δ_t g
p(t_k+1) = p(t_k) + Δ_t v (t_k) + 1/2 Δ_t^2 g
+ R(t_k)Δ p_t_k^t_k+1
with the functions for t≥ t_k
Δ v_t_k^t = ∫_t_k^tΔ R_t_k^s ( a̅(s) - b_a ) ds
Δ p_t_k^t = ∬_t_k^tΔ R_t_k^s( a̅(s) - b_a ) d s^2.
Note that the terms Δ v_t_k^t_k+1 and Δ p_t_k^t_k+1 are defined in the body-fixed frame, which can be calculated perfectly – by preintegrating IMU measurements – without the access to the initial conditions (R_t_1, v_t_1, p_t_1). This is the original motivation to study IMU preintegration.
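The sketch below illustrates how the body-frame quantities Δ R_t_k^t_k+1, Δ v_t_k^t_k+1 and Δ p_t_k^t_k+1 can be accumulated from raw IMU samples (an illustration only: it assumes piecewise-constant samples over sub-intervals of length dt, known biases, and uses the SO(3) exponential via Rodrigues' formula).

import numpy as np

def hat(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def Exp(w):
    """SO(3) exponential map (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + hat(w)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def preintegrate_imu(acc_samples, gyr_samples, dt, b_a, b_w):
    """Accumulate Delta R, Delta v, Delta p over one key-frame interval (t_k, t_{k+1})."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a_m, w_m in zip(acc_samples, gyr_samples):
        a = a_m - b_a                                   # bias-corrected specific force
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2
        dv = dv + (dR @ a) * dt
        dR = dR @ Exp((w_m - b_w) * dt)
    return dR, dv, dp       # independent of R(t_k), v(t_k), p(t_k) and of gravity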
§.§.§ Estimation via IMU preintegration
The IMU preintegration has been widely used in many robotic applications, e.g., visual inertial SLAM and navigation. In these problems, there are numerous feature points, whose coordinates p_i ∈^3 (i=1,…, n_p) are constant and unknown, i.e.,
ṗ_i =0 , i=1,…, n_p.
Each feature is captured by the camera, thus satisfying some algebraic equations
y = h(x) + ϵ_y
with y = ^n_y and the noise ϵ_y, which is the output function (a.k.a. observation models) in the observer theory. We have defined the extended state variable as[We assume that sensors have been well calibrated to simplify the presentation. In more general cases, we may take all biases into the variable x and estimate them on-line simultaneously.]
x = (R, v, p, p_1, …, p_n_p) ∈𝒳
with the manifold 𝒳 := SO(3) ×^3(2+n_p).
At the instant t_N, we would like to estimate the state
X(t_N):= ( x(t_0), x(t_1), …, x(t_N)).
Similar to the case in Euclidean space, we may formulate it as the batch optimisation to estimate the state
X̂ = arg min_X J_I(X)
with
J_I := ∑_k =0^N-1 [ ℒ_y(k) + ℒ_R(k) + ℒ_v(k) + ℒ_p(k) ]
and
ℒ_y(k) = | y(t_k) - h(x(t_k)) |^2_Σ_y^-1(k)
ℒ_R(k) = |R(t_k+1) - R(t_k)Δ R_t_k^t_k+1|_Σ_1^-1(k)^2
ℒ_v(k) = | v (t_k) + R (t_k) Δ v_t_k^t_k+1 + Δ_t g - v (t_k+1) |_Σ_2^-1(k)^2
ℒ_p(k) = | p_t_k + Δ_t v_t_k + 1/2 Δ_t^2 g + R_t_kΔ p_t_k^t_k+1 - p_t_k+1|_Σ_3^-1^2
and Σ_i ≻ 0 (i=1,2,3) are some covariances to characterise the uncertainty in the model (<ref>). If the stochastic properties of ϵ_a and ϵ_w are known in advance, we may use some on-line propagation to approximate Σ_i(k). See <cit.> for example, and we omit its details.
Estimation via IMU Preintegration on Manifolds:
- preintegration: (<ref>), (<ref>)
- estimate: X̂_PI
- optimisation: (<ref>)-(<ref>)
§.§ Parameter Estimation-Based Observer on Manifolds
In this section, we briefly summarise the main results in our previous papers <cit.> about the PEBO design on manifolds.
Consider the kinematics (<ref>) with the measurable output in (<ref>). In <cit.>, the observer design is conducted in the body-fixed frame with the dynamics given by
Ṙ = Rω_×
v̇ = -ω_× v + a̅ - b_a + R^⊤ g
ṗ = -ω_× p - v,
where p is defined as the origin coordinate of {I} in the body-fixed frame, i.e.
p := R^⊤ p.
In the PEBO approach, we design the dynamic extension
Q̇ = Q ω_×
ξ̇ = A(ω, Q)ξ + B(a̅ , b_a)
Ω̇ = A(ω, Q) Ω
Ω(t_0) = I,
with
A(ω,Q) := [ -ω_×  0  Q^⊤ ; -I  -ω_×  0 ; 0  0  0 ],
B(a̅, b_a) := [ a̅ - b_a ; 0 ; 0 ].
The key observation in <cit.> is that the system state can be linearly parameterised as
R_t = Q_c Q^⊤_t, [ v; p; g_c ] = ξ_t - Ω_tξ_0 + Ω_t θ
with the unknown constant matrix Q_c ∈ SO(3), and the vector
θ:= ( v(0), p(0), g_c).
Similar to the case in Euclidean space, we only need to determine (Q_c, θ) and 𝒫 := (p_1, …, p_n_p), whose estimates are written as (Q̂_c, θ̂, 𝒫̂). Then, the estimate of x∈𝒳 is given by
x̂_t = (R̂, R̂v̂, R̂p̂, 𝒫̂),
with
R̂ = Q̂_c Q_t^⊤, [ v̂; p̂; ĝ_c ] = ξ_t - Ω_t ξ_0 + Ω_t θ̂.
For the measurements collected at the instants {t_k}, the unknowns (Q̂_c, θ̂, 𝒫̂) can be obtained from the following optimisation:
(Q̂_c, θ̂, 𝒫̂) = arg min_Q_c ∈ SO(3), θ∈^9, 𝒫∈^3n_p ∑_k=0^N-1 ℒ_y(k)
s.t. ĝ_c = Q̂_c^⊤ g
with ℒ_y defined in (<ref>). The main result of PEBO on manifolds is summarised as follows.
PEBO on manifolds:
- dynamics: (<ref>)
- estimate (observer output): X̂_PEBO from (<ref>)-(<ref>)
- optimisation: (<ref>)
§.§ The PEBO Viewpoint to IMU Preintegration
We are in the position to present the main result of the paper. Similarly to the case in Euclidean space, we establish the connection between IMU preintegration and PEBO on manifolds as follows.
Consider the kinematics (<ref>) with constant p_i (i=1,…, n_p). The estimation of the state of the IMU preintegration (<ref>)-(<ref>) converges to the estimate of the PEBO (<ref>)-(<ref>) as min_j=1,2,3(λ_ max{Σ_j}) → 0, in the following sense.
a) The preintegration of Δ R_s^t and the extended state Q satisfy
Q(t_0)^⊤ Q(t_k) = ∏_i=0^k-1Δ R_t_i^t_i+1 := Δ R_t_0^t_1⋯Δ R_t_k-1^t_k
for all k∈ℕ.
b) If the cost function has a global minimum, then the estimates from the PEBO and the IMU preintegration satisfy
X̂_PI → X̂_PEBO as λ_max{Σ_j}→ 0 (j=1,2,3).
The property a) is straightforward to verify because
Δ R_t_0^t_1…Δ R_t_k-1^t_k = Δ R_t_0^t_k
and
d dt(RQ^⊤) = 0.
When the largest eigenvalue of Σ_j converges to zero, the last three terms in (<ref>) make (<ref>) as the hard constraints. For the fact b), we need to show that the constraint (<ref>) together with
ĝ_c = Q̂_c^⊤ g
in PEBO yields the constraint (<ref>) in IMU preintegration. To see this, for a fixed (constant) estimate θ̂ and defining
η := (v̂, p̂, ĝ_c)
we have
η̇ = ξ̇- Ω̇ξ_0 + Ω̇θ̂
= A(ω, Q) ξ + B(a̅,b_a) - A(ω, Q) Ω(ξ_0 + θ̂)
= A(ω, Q) η + B(a̅, b_a).
Now, consider the coordinate transformation
η↦ z= [ z_1; z_2; z_3 ]
:= [ R̂v̂; R̂p̂; Q̂_c ĝ_c ] .
In the transformed coordinate, the dynamics verifies
ż_1 = R( a- b_a) + g
ż_2 = z_1
ż_3 = 0.
Considering the constraint in (<ref>), we may equivalently select the decision variables as (R̂, z_1, z_2, 𝒫̂), and this change of decision variables does not affect the minimum of the cost function J_I.
In the new coordinate, z_1 and z_2 satisfy
z_1(t_k+1) = z_1(t_k) + R(t_k) Δ v_t_k^t_k+1 + Δ_t g
z_2(t_k+1) = z_2(t_k) + Δ_t v (t_k) + 1/2 Δ_t^2 g
+ R(t_k)Δ p_t_k^t_k+1.
It exactly coincides with (<ref>). Hence, following the same arguments in the proof of Proposition <ref>, we can show that the estimates from these two approaches are exactly the same.
§ DISCUSSION AND APPLICATIONS
§.§ Discussions
In this section, we present some further remarks and applications following from the connections between pre-integration and PEBO.
First, let us make some comparisons between two frameworks of PEBO and preintegration.
The preintegration approach may be roughly viewed as the implementation of PEBOs in a moving horizon, i.e., the “initial moment” is recursively defined as {t_k}_k∈ℕ and then the task is to estimate the state x(t_k). In PEBO, we only need to estimate the initial condition at t_0. For the ideal case with perfect models and measurements, these two frameworks exactly coincide with each other, as illustrated in Proposition <ref>.
In the pose estimation-related problems, the IMU preintegration utilises the body-fixed frame for accelerations and velocities; in contrast, the PEBO in our previous works <cit.> adopts the inertial frame.
In IMU preintegration, it is possible to write the state transition matrix analytically for the ( v, p)-subsystem; see (<ref>). In PEBO, we need to calculate the state transition matrix for the ( v, p)-subsystem numerically, but it brings two benefits:
B1: The sensor bias b_a appears in the dynamics (<ref>) in a linear way. As shown in <cit.>, we are able to construct a linear regression model on the unknown bias b_a using the PEBO methodology.
B2: In some applications, we do not need the estimation of attitude R. By applying PEBO in the body-fixed frame, we are able to estimate ( v, p, ) directly without the information of attitude.
In the generalised PEBO approach <cit.>, there is a need to calculate the fundamental matrix Ω(t) over time in (<ref>). Though its dynamics is forward complete, the variable Ω is unbounded when the matrix A_t is unstable. Since Ω is part of the internal state in the observer, at some finite time the observer would become dramatically ill-conditioned and impossible to represent accurately in memory. As a result, it may bring some numerical issues and make the observer very sensitive to sorts of perturbations. For this consideration, it is reasonable to implement a PEBO in “moving horizons” like preintegration in order to improve robustness.
When considering the uncertainty from the input-output measurements, the estimates from the PEBO and preintegration approaches would be different. In PEBO, we only need to solve the optimisation problem with the decision variable θ (equivalently x_0) at a single instance; in contrast, the hard constraint (<ref>) does not hold in the preintegration approach, and there are additional decision variables {x_k, w_k}_k∈ℕ. For this case, their relation resembles the single and multiple shootings in the direct methods for optimal control.
State estimation via recursive algorithms under Assumption <ref> is known as the problem of sampled-data (or digital) observers <cit.>. Even for linear time-invariant (LTI) systems, there are still several open problems in designing a sampled-data observer <cit.>. A useful application of the proposed equivalence between preintegration and PEBO is that it provides a novel method to design sampled-data observers. We present constructive details in the next subsection.
§.§ Application I: Sampled-data Observer via Preintegration
In this section, we show that the proposed equivalence provides a new method to design a hybrid sampled-data observer for the LTV system (<ref>). We summarise the results as follows. To simplify the presentation, as well as to obtain asymptotic stability claims, we consider the ideal measurements (u,y) in the following proposition.
Consider an observable LTV system (<ref>). Assume the sampled instances {t_k}_k ∈ℕ are selected such that
P1: The pair (Φ(t_k+1, t_k), C(t_k)) is (discrete-time) uniformly completely observable, where Φ(·,·) is the continuous-time state transition matrix of A_t defined in (<ref>).
P2: There exists a constant k_2 ∈ℕ_+ such that
W_q := ∑_i= k^k+k_2Ψ(i,k) Q Ψ^⊤(i,k) ≻δ_q I_n
for some Q ≻ 0, δ_q>0 and ∀ k∈ℕ with Ψ(i,k) the discrete-time state transition matrix of z_k+1 = Φ(t_k+1,t_k) z_k.
Then, the hybrid sampled-data observer
ℋ_1: Ḟ = A_t F, F(t_k^+) = I_n
v̇ = A_t v + B_t u, v(t_k^+) = 0_n
F_k = F(t_k+1^-), v_k = v(t_k+1^-)
ℋ_2: x̂_k+1 = F_k x̂_k + v_k + K_k+1 e_k+1
e_k+1 = y_t_k+1 - C_t_k+1 (F_k x̂_k + v_k) - D_t_k+1 u_t_k+1
K_k = P̂_k C_k^⊤ [C_k P̂_k C_k^⊤ + R]^-1
P̂_k+1 = F_k P_k F_k^⊤ + Q
P_k = P̂_k - K_k C_k P̂_k
with some positive definite matrices Q and R, provides a globally asymptotically convergent estimate x̂, i.e.
lim_k→∞ |x̂_k - x(t_k)| =0.
According to Propositions <ref>-<ref>, the system state x at the instants {t_k}_k∈ℕ exactly satisfies the discrete dynamical model
x(t_k+1) = F_k x(t_k) + v_k
y(t_k) = C(t_k) x(t_k) + D(t_k) u(t_k),
with the preintegration signals F_k and v_k generated from the system _1. Invoking the first equation in (<ref>), we have
F_k = Ω(t_k+1) Ω^-1(t_k) = Φ(t_k+1, t_k).
As a consequence, the discrete-time uniform complete observability (UCO) of the pair (Φ(t_k+1,t_k), C(t_k)) implies the UCO of the LTV system (<ref>). Note that the system ℋ_2 is the standard Kalman filter for the discrete-time LTV system (<ref>). Together with the condition (<ref>), we conclude the global asymptotic convergence (<ref>) by invoking <cit.>.
In the condition P1, it is equivalent to impose the UCO of the discrete-time LTV system (<ref>). It is relatively straightforward to verify the UCO of the continuous-time system (<ref>) is a necessary condition to P1, but it is not sufficient. Consider the constant observable pair (A_0, C_0), and let A = A_0, C(t) = C_0 for t∈ [2k, 2k+1) and C(t)= 0 for t∈ [2k+1, 2k+2) with k ∈ℕ. The resulting pair (A_t,C_t) guarantees the UCO of (<ref>) but not for the system (<ref>) if the sampled data are collected in [2k+1, 2k+2). On the other hand, the condition P1 is unnecessary to design a sampled-data observer. If the observability Gramian is positive definite only in some interval but not uniform over time, it is still possible to design globally convergent state observer by using MHE or some state-of-the-art recursive designs <cit.>.
In <cit.>, nonlinear sampled-data observers are classified into two categories: i) design via approximate discrete-time models of the plant; and ii) emulation: discretisation of continuous-time observers. Clearly, the proposed observer belongs to the first class, but we utilise an exact discrete-time model rather than its approximation because of its linearity. Indeed, the proposed design is also applicable to nonlinear systems which can be transformed into the affine form.
The proof of Proposition <ref> does not rely on the assumption of periodic sampling. That is, the proposed sampled-data observer is also immediately applicable to the case with asynchronous measurements, which was studied for the linear time-invariant (LTI) systems <cit.>. We provide a much simpler solution to this specific problem for LTV systems.
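A compact sketch of the hybrid observer ℋ_1–ℋ_2 is shown below (illustrative only; it reuses the preintegrate() helper from the earlier sketch and assumes ideal measurements and user-chosen tuning matrices Q and R).

import numpy as np

def hybrid_sampled_data_observer(A, B, C, D, u, y_samples, t_samples,
                                 x0_hat, P0, Q, R, dt=1e-3):
    """Flow: preintegrate (F_k, v_k) between samples. Jump: Kalman-type update at t_{k+1}."""
    x_hat, P = x0_hat.copy(), P0.copy()
    history = [x_hat.copy()]
    for k in range(len(t_samples) - 1):
        F_k, v_k = preintegrate(A, B, u, t_samples[k], t_samples[k + 1], dt)   # H_1
        x_pred = F_k @ x_hat + v_k                                             # H_2 below
        P_pred = F_k @ P @ F_k.T + Q
        tk1 = t_samples[k + 1]
        Ck, Dk = C(tk1), D(tk1)
        e = y_samples[k + 1] - Ck @ x_pred - Dk @ u(tk1)                       # innovation
        K = P_pred @ Ck.T @ np.linalg.inv(Ck @ P_pred @ Ck.T + R)              # gain
        x_hat = x_pred + K @ e
        P = P_pred - K @ Ck @ P_pred
        history.append(x_hat.copy())
    return history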
§.§ Application II: Statistical Optimality in PEBO
In this subsection, we will show that the proposed equivalence in Sections <ref>-<ref> leads to an intuitive way to improve the performance of PEBO in the presence of noisy input u.
We assume that the initial condition x_0 is deterministic but unknown, and model the measurement noise in (<ref>) as additive zero-mean white-noise processes ϵ_u ∈^m and ϵ_y∈^p, namely[Here, the white processes are not rigorously defined due to the δ-covariances, with δ the delta function. A rigorous definition is based on stochastic differential equations <cit.>.]
𝔼[ϵ_u,tϵ_u,s^⊤] = Σ_u δ(t-s)
𝔼[ϵ_y,tϵ_y,s^⊤] = Σ_y δ(t-s).
The variables {ϵ_u}, {ϵ_y} and x_0 are uncorrelated. Then, the error e= x-ξ in PEBO for the LTV system (<ref>) satisfies
ė = A_t e + B_t ϵ_u.
According to the state covariance propagation for LTV systems <cit.>, we have
x_t - ξ_t = Ω_t(θ - ξ_0) + ϵ_e
with the white-noise process ϵ_e, i.e.
𝔼[ϵ_e(t) ϵ_e(s)^⊤] = Π_t δ(t-s)
and Π_t satisfies
Π̇_t = A_t Π_t + Π_t A_t^⊤ + B_t Σ_u B_t^⊤, Π(0)= 0_n× n,
where the initial condition of Π is due to the deterministic assumption of x_0. Noting that the uncertainties from u̅ and y̅ in (<ref>), we have
Y̅ = C_tΩ_t θ + ϵ_ Y
with
ϵ_ Y(t) := ϵ_y - D_tϵ_u + C_tϵ_e.
Unfortunately, the variables ϵ_e and ϵ_u are not independent, since ϵ_e(t) is indeed filtered from ϵ_u. However, for the LTV system (<ref>) without the feedforward term, i.e. D_t=0, the variable ϵ_ Y is a white-noise process
𝔼[ϵ_ Y(t)ϵ_ Y(s)^⊤] = (Σ_y + C_tΠ_t C_t^⊤) δ(t-s).
Hence, we may reformulate the optimisation (<ref>) as
θ̂ := arg min_θ∈^n ∑_k=0^N-1 | Y̅(t_k) - C(t_k)Ω(t_k)θ|^2_(Σ_y + C(t_k)Π(t_k)C(t_k)^⊤)^-1
to obtain statistical optimality, where Π_t is generated from (<ref>).
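The sketch below illustrates this weighted reformulation (illustrative only; it assumes D_t = 0, a forward-Euler propagation of Π_t, and that the regressors C(t_k)Ω(t_k) and regressands Y̅(t_k) have already been built as in the earlier PEBO sketch).

import numpy as np

def propagate_Pi(A, B, Sigma_u, t_grid):
    """Euler integration of Pi_dot = A Pi + Pi A^T + B Sigma_u B^T, Pi(0) = 0."""
    n = A(t_grid[0]).shape[0]
    Pi = np.zeros((n, n))
    out = [Pi.copy()]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        Pi = Pi + (t1 - t0) * (A(t0) @ Pi + Pi @ A(t0).T + B(t0) @ Sigma_u @ B(t0).T)
        out.append(Pi.copy())
    return out

def weighted_pebo_theta(G_list, Y_list, C_list, Pi_list, Sigma_y):
    """Weighted least squares with weights W_k = (Sigma_y + C_k Pi_k C_k^T)^{-1}."""
    n = G_list[0].shape[1]
    H, g = np.zeros((n, n)), np.zeros(n)
    for G, Y, Ck, Pik in zip(G_list, Y_list, C_list, Pi_list):
        W = np.linalg.inv(Sigma_y + Ck @ Pik @ Ck.T)
        H += G.T @ W @ G
        g += G.T @ W @ Y
    return np.linalg.solve(H, g)       # statistically weighted estimate of theta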
In <cit.>, the PEBO approach is applicable to nonlinear systems in the form of
ẋ = f(x,u), y= h(x,u),
for which a coordinate transformation x↦ z:=ϕ(x) exists such that the lifted dynamics is given by
ż = A(u,y)z + B(u,y) , y = C(u,y)z + D(u,y).
It is generally difficult to calculate covariance propagation for nonlinear systems, but there are many works discussing how to empirically approximate it in the literature on preintegration <cit.>. The proposed connection between two approaches, together with the state-of-the-art development of preintegration, provides a promising way to develop nonlinear stochastic PEBO method.
§ CONCLUDING REMARKS
In this paper, we have presented a novel observer interpretation to the IMU preintegration approach. Our findings reveal an exact correspondence between the preintegrated signals and the dynamic extended variables in PEBO that is implemented in a moving horizon. Furthermore, we have identified the precise conditions under which these two approaches yield identical estimates. These results were developed in both the Euclidean space and matrix Lie groups. Finally, we have utilised the proposed equivalence to design a novel sampled-data observer for LTV systems, and to improve the performance of PEBO in the presence of measurement noise.
These connections suggest some interesting avenues for future research, including:
- In the preintegration and PEBO approaches, we require that the system dynamics is in (or can be transformed into) a state-affine form (<ref>). It would be interesting to integrate them with contraction analysis <cit.>, for which the so-called differential dynamics is exactly an LTV system.
- In Section <ref>, we show that different coordinates are used in the IMU preintegration and PEBO. For the latter, we adopt the body-fixed coordinate ( v, p), and it is interesting to observe the benefit of the linear parameterisation of bias b_a. This is notable by its absence in the inertial coordinate for preintegration <cit.>. Hence, it would be of practical interest to implement IMU preintegration in the body-fixed coordinate towards real-time bias estimation.
- It is theoretically interesting to elaborate the results in Section <ref> using Itô integrals toward a more rigorous formulation.
§ APPENDIX
§ SOME DEFINITIONS
A pair (A_k,C_k) of discrete-time systems is uniformly completely observable if the observability Gramian
W_O[k,k_1] ≽δ_o I
for some δ_o>0, k_1 ∈ℕ_+ and all k ≥ 0, with
W_O[k,k_1]:= ∑_i=k^k+ k_1Ψ^⊤(i,k) C_k^⊤ C_k Ψ(i,k)
in which Ψ(i,k) is the state transition matrix from k to i of the system z_k+1 = A_k z_k.
§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT
Bowen Yi: Conceptualization, Methodology (propositions), Writing - original draft. Ian R. Manchester: Methodology, Writing - review and edit, Project administration.
§ DECLARATION OF COMPETING INTEREST
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
§ ACKNOWLEDGEMENT
This paper is supported by the Australian Research Council. The first author would like to thank Dr. Chi Jin for bringing IMU preintegration to his attention.
|
http://arxiv.org/abs/2307.03980v1 | 20230708140837 | Building and Road Segmentation Using EffUNet and Transfer Learning Approach | [
"Sahil Gangurde"
] | cs.CV | [
"cs.CV",
"cs.LG",
"eess.IV"
] |
Building and Road Segmentation Using EffUNet and Transfer Learning Approach
Sahil Gangurde
ABV-Indian Institute of Information Technology & Management, Gwalior, India
[email protected]
===========================================================================================================================
In a city, information about urban objects such as water supply, railway lines, power lines, buildings, and roads is necessary for city planning. In particular, policymakers need information about the spread, location, and capacity of these objects to make impactful decisions. This thesis aims to segment buildings and roads from aerial images captured by satellites and UAVs. Many architectures have been proposed for the semantic segmentation task, UNet being one of them. In this thesis, we propose a novel architecture based on Google's newly proposed EfficientNetV2 as an encoder for feature extraction, with a UNet decoder for constructing the segmentation map. Using this approach, we achieved benchmark scores on the Massachusetts Building and Road datasets, with an mIOU of 0.8365 and 0.9153, respectively.
segmentation,urban planning, state-of-the-art, mask, road, building
§ INTRODUCTION
With increasing population, city areas will grow, and road and building networks will become congested and intertwined. It will be difficult for humans to inspect aerial views of a scene and create proper layouts of the roads and buildings. Land cover segmentation has been studied for a long time. Unmanned aerial vehicles (UAVs) have seen significant growth in attention in recent years, particularly in research and industry. As UAVs become more commercially successful, aerial photographs provide a new and intriguing research avenue. Integrating drones with computer vision is a challenging idea that allows UAVs to understand the overflown region. Aerial image interpretation entails inspecting aerial images with the express goal of detecting distinguishing qualities of the objects of interest. Several stages are required to acquire complete scene comprehension from an aerial photograph. Given a picture, a segmentation phase separates the scene into regions of specific categories (such as residential areas, flood, woodland, roads, and so on), essentially treating the entire environment as a fully connected space in which all categories interact with each other.
Semantic segmentation is the process of partitioning an image into predefined classes. It helps identify the different labels present in the image and produce a pixel-accurate map of each. Various problems related to medical imagery, satellite imagery, and urban planning can be solved by automating the detection and segmentation of the objects associated with the corresponding domain. The ability to recognise objects such as railway lines, water bodies, and forests from UAV images could be beneficial in multiple applications, including creating and maintaining city maps, improving urban planning, monitoring environmental changes, and disaster relief. Our study focuses on creating effective ways of recognising buildings from top-down aerial photos and establishing an efficient automatic system capable of identifying individual structures. In this paper, segmentation of aerial images is performed to extract building masks; road segmentation is then explored, and the approach can be further extended to other classes.
§ RELATED WORK
The authors of <cit.> performed road network segmentation from SAR images using FCNs. They evaluated three models: FCN-8s, VGG19 with UNet, and DeepLabv3+. The work relies on relatively weak backbone models along with UNet <cit.>, which is a major drawback, and it achieved low accuracy on the custom dataset.
The authors of <cit.> proposed stacking two UNets to generate the output mask. The input image is first divided into 224x224-pixel patches on which the two UNets are trained; the patch predictions are then stitched back into the full segmentation mask. Although this gave promising results on the Massachusetts Building dataset and Inria's Aerial dataset, dividing the image into patches and then reconstructing it is computationally expensive.
The authors of <cit.> proposed EU-Net for building segmentation. The network uses a dense spatial pyramid pooling (DSPP) structure after the encoder to improve multi-scale feature extraction, and the decoder follows the UNet architecture. The DSPP block achieved better results than the standard UNet when segmenting buildings of different sizes.
The authors of <cit.> proposed using an attention mechanism in the encoder to extract features. The network produces the segmentation mask and passes it to an edge detection block, which extracts road edges. This hybrid encoder mechanism provided very high accuracy for the road segmentation task.
The authors of <cit.> also used an attention mechanism in the encoder to extract features. The proposed model employs a hybrid encoder split into two parts: the first harvests full-resolution features, while the second creates high-resolution feature encodings and uses max-pooling layers to expand the network's overall receptive field, providing sufficient context information. Before the features from both parts are combined, a 2D activation map is constructed for each part, letting the network choose how much attention to devote to the features from each encoder stage. This helped the segmentation of large roadways and the creation of fine-edged segmentation masks.
The authors of <cit.> used a novel vision transformer network for building segmentation. The transformer simultaneously captures global and spatial detail contexts using a dual-path structure for accurate building segmentation. The disadvantage of this approach is that capturing the global context requires a large search window, which demands substantial computational resources.
§ PROPOSED WORK
The problem statement can be formulated as follows:
* Develop models to segment buildings and roads from the urban environment and generate the mask for the same.
* Evaluate the models for segmentation metrics and choose the best one.
In this paper, we combine state-of-the-art CNN architectures such as ResNet50 <cit.> and variants of EfficientNet <cit.> as encoders for the UNet architecture and train them on the Massachusetts datasets <cit.>. The models generate masks of the roads and buildings; hence, the two can be identified from the original image. Figure <ref> shows the complete pipeline of steps involved in achieving our goal.
§ DATASET
The datasets used in this project are the Massachusetts Roads dataset and the Massachusetts Buildings dataset <cit.>. Both datasets contain aerial views of the Boston area and the corresponding segmentation masks of roads and buildings.
§.§ Building Dataset
The dataset used is the Massachusetts Buildings dataset. It includes a total of 151 images shot from UAVs in the Boston region. Each image has a dimension of 1500 x 1500 pixels and covers an approximate area of 2.25 sq km of land; the whole dataset spans a region of 340 sq km. The dataset has been split into three parts:
* Training Data: 137 images
* Validation Data: 4 images
* Test Data: 10 images
The segmentation masks are created using the building footprints of the OpenStreetMap project. The dataset covers urban and suburban parts of Boston, and the building labels include houses, commercial buildings, and garages, all of various sizes. The images are made available by the Massachusetts government. The automatically generated segmentation masks were further hand-corrected to improve training accuracy. Figure <ref> shows sample images and their masks from the building dataset.
§.§ Road Dataset
The Massachusetts Roads dataset contains 1171 images captured from UAVs. Each image has a resolution of 1500x1500 pixels and covers an area of 2.25 sq km. The images were randomly divided into three sets:
* Training Data: 1108 images
* Validation Data: 14 images
* Test Data: 49 images
The dataset spans over 2600 square kilometres and includes many urban, suburban, and rural areas; the test set alone spans 110 square kilometres. The segmentation masks are created using the road centerline footprints of the OpenStreetMap project, and each centerline label is then given a thickness of 7 pixels. Each image is rescaled to a resolution of 1 pixel per square metre. Figure <ref> shows sample images and their masks from the road dataset.
§ MECHANISM/ALGORITHMS
We train the following models as encoders for the UNet on the above datasets. The encoder's general role is to extract the features present in the image using the mask labels; the decoder then reconstructs the mask for the input from these features.
§.§ Encoder-Decoder Architecture
The encoder is a CNN that extracts features from the image. It downsamples the image and reduces the feature-map resolution so that it captures the high-level details of the original image, a practice followed by many state-of-the-art models such as ResNet <cit.>. Because of this reduced size, it is challenging to create a segmentation map directly from the encoder's final feature map. A decoder network therefore consists of a set of layers that upsample the feature maps extracted from the encoder to recover the spatial information. Figure <ref> illustrates this encoder-decoder structure.
§.§ UNet
The authors of <cit.> created UNet for biomedical image segmentation. UNet has two parts: an encoder and a decoder. The encoder extracts the features from the input image, and the decoder achieves precise localisation using transposed convolutions. The encoder consists only of convolutional and max-pooling layers. Although UNet was mainly developed for medical image segmentation, for our task we use it with different encoders to produce the segmentation mask.
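As an illustration of how such an encoder-decoder can be assembled with a pre-trained backbone, the sketch below uses the open-source segmentation_models_pytorch package; the package choice and the encoder string are assumptions of this example rather than the exact training code used here.

import torch
import segmentation_models_pytorch as smp

# UNet decoder with an ImageNet-pretrained EfficientNet encoder (transfer learning).
model = smp.Unet(
    encoder_name="efficientnet-b7",   # illustrative; swap in the desired backbone
    encoder_weights="imagenet",       # pre-trained weights for the encoder
    in_channels=3,                    # RGB aerial images
    classes=2,                        # e.g. background + building (or road)
)

# 1500x1500 inputs are padded to 1536x1536 so the spatial size is divisible by 32.
x = torch.randn(1, 3, 1536, 1536)
with torch.no_grad():
    mask_logits = model(x)            # shape: (1, 2, 1536, 1536)
print(mask_logits.shape)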
§.§ EfficientNet
The authors of <cit.> introduced EfficientNet with a compound scaling strategy for convolutional networks. Figure <ref> displays the architecture of EfficientNet.
The ResNet <cit.> architecture showed that accuracy increases as network depth increases. However, at some point the accuracy cannot be increased further due to the vanishing-gradient problem. To address this, scaling must be performed in all dimensions, i.e., depth, width, and resolution. EfficientNet introduced a new method called 'compound scaling', through which each of these parameters is scaled by a factor ϕ. The scaling relations are given in <ref>.
depth(d) = α^ϕ
width(w) = β^ϕ
resolution(r) = γ^ϕ
such that, (αβ^2γ^2)^ϕ≈2
where α≥1, β≥1and γ≥1
With ϕ = 1, a grid search gave the values α=1.2, β=1.1 and γ=1.15. Keeping these values constant, the factor ϕ can then be varied to obtain the scaled models from EfficientNetB1 up to EfficientNetB7.
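As a small worked example (the baseline depth, width, and resolution values are illustrative assumptions), the compound-scaling rule can be evaluated for different ϕ as follows:

# Compound scaling: depth, width and resolution grow with a single factor phi.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15      # from the grid search at phi = 1

def compound_scale(phi, base_depth=1.0, base_width=1.0, base_resolution=224):
    depth = base_depth * ALPHA ** phi         # depth(d) = alpha^phi
    width = base_width * BETA ** phi          # width(w) = beta^phi
    resolution = int(round(base_resolution * GAMMA ** phi))   # resolution(r) = gamma^phi
    return depth, width, resolution

for phi in range(1, 8):                   # roughly EfficientNetB1 ... B7
    print(phi, compound_scale(phi))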
§.§ EfficientNetV2
Compound scaling in EfficientNet scales all dimensions of the model by the single factor ϕ, which gives less control over the individual model parameters. Moreover, as the image size increases, the batch size must be decreased, and larger images need more time to compute features. EfficientNet also uses the MBConv layer, which relies on depthwise convolution, an expensive operation. The motivation behind EfficientNetV2 <cit.> was to create a CNN that increases accuracy (A) while decreasing the training step time (S) and the number of parameters (P): essentially max(A) while min(S^w, P^v), where w and v are experimentally determined.
To obtain a model with fewer parameters and less training time, neural architecture search (NAS) was used with the above objective function. To reduce the cost of depthwise convolution, the Fused-MBConv block was proposed: instead of performing a depthwise convolution, a single regular convolution with a 3x3 filter is used. By removing the depthwise convolution, which performs multiplications over all channels, the computation cost is reduced and faster models are obtained. Figure <ref> shows the MBConv operation.
§ TECHNOLOGIES USED FOR IMPLEMENTATION
Solving the segmentation problem requires various deep learning, matrix manipulation, image processing, and plotting libraries. Table <ref> shows the different libraries, frameworks, and other technologies used in this project. Most of the experiments were run in the Kaggle environment, which is backed by Google Cloud and provides free computation power for ML tasks.
§ DATA PROCESSING
Following are the steps involved in data processing:
* One-hot encoding: Perform one-hot encoding for all the masks. One-hot encoding converts each pixel value into a vector indexed by the class to which the pixel belongs. Figure <ref> shows the original image, the real mask, and the constructed one-hot encoded mask.
* Augmentation: Perform random horizontal flip, vertical flip and 90 degree rotation on the images and their corresponding masks.
* Padding: The encoder models are implemented such that inputs of arbitrary size are padded to match the input size expected by the various encoders.
* Dataset Loader: Create a data loader that feeds the model the image as input and the one-hot encoded mask as the label, as sketched after this list.
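A minimal sketch of these steps is given below (illustrative only; it assumes aligned image/mask file lists, binary masks stored with pixel values 0/255, and applies the flips and rotations jointly to image and mask).

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from PIL import Image

class AerialSegDataset(Dataset):
    def __init__(self, image_paths, mask_paths, n_classes=2):
        self.image_paths, self.mask_paths, self.n_classes = image_paths, mask_paths, n_classes

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        img = np.array(Image.open(self.image_paths[idx]).convert("RGB")) / 255.0
        cls = (np.array(Image.open(self.mask_paths[idx]).convert("L")) > 127).astype(np.int64)
        mask = np.eye(self.n_classes)[cls]              # one-hot encoding: H x W x n_classes
        if np.random.rand() < 0.5:                      # random horizontal flip
            img, mask = img[:, ::-1], mask[:, ::-1]
        if np.random.rand() < 0.5:                      # random vertical flip
            img, mask = img[::-1], mask[::-1]
        k = np.random.randint(4)                        # random 90-degree rotation
        img, mask = np.rot90(img, k), np.rot90(mask, k)
        img = torch.from_numpy(img.copy()).permute(2, 0, 1).float()
        mask = torch.from_numpy(mask.copy()).permute(2, 0, 1).float()
        return img, mask

# loader = DataLoader(AerialSegDataset(image_files, mask_files), batch_size=2, shuffle=True)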
§ RESULTS AND DISCUSSION
§.§ System Configuration
All the models are trained on Kaggle with Google Cloud backbone. Table <ref> shows the system parameters of the environment under which the models are trained.
§ EVALUATION METRICS
The models are evaluated mainly on two metrics - Intersection over Union (IOU) and F1 score - along with accuracy, precision, and recall.
* Intersection Over Union (IOU) - Intersection over Union, also known as the Jaccard index, is used to calculate the percentage of overlap between the true mask and the predicted output mask.
IOU = y ∩ y^'/y ∪ y^'
The intersection consists of the pixels present in both the true mask and the predicted mask, while the union consists of the pixels present in either the true mask or the predicted mask. Equation <ref> shows the formula for the IOU calculation.
* F1 Score - The F1 score, or Dice coefficient, measures the overlap of two masks. Its values lie between 0 and 1 inclusive, where 1 denotes perfect overlap and 0 represents no overlap. Equation <ref> shows the formula for the Dice coefficient (an implementation sketch follows this list).
Dice Coefficient = 2 * |y ∩ y^'|/(|y| + |y^'|)
The loss function for the neural network to minimise is given by 'Dice Loss' and is shown in <ref>.
Dice Loss = 1 - 2∑_pixels^yy^'/∑_pixelsy^2 + ∑_pixelsy^'2
* Accuracy - Accuracy is defined as the ratio of correctly classified pixels - those correctly identified as belonging to the segmented class plus those correctly identified as not belonging to it - to all the pixels present. In terms of true positives/negatives and false positives/negatives at the pixel level, accuracy is defined as given in equation <ref>
Accuracy = TP + TN/TP + TN + FP + FN
* Precision - Precision shows the purity of the positive detections relative to the ground truth: a TP is a predicted mask with an IOU above the threshold, while an FP is a predicted mask with an IOU below the threshold.
Precision = TP/TP + FP
* Recall - Recall determines the completeness of positive prediction with respect to ground truth label.
Equation <ref> represents the formula to calculate the recall.
Recall = TP/TP + FN
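The sketch below shows straightforward implementations of the overlap metrics and the Dice loss used for training (illustrative only; predictions are assumed to be thresholded to {0,1} for the metrics and kept as soft probabilities for the loss).

import torch

def iou_score(pred, target, smooth=1e-6):
    """Intersection over Union (Jaccard index) on binary masks."""
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    return (inter + smooth) / (union + smooth)

def dice_coefficient(pred, target, smooth=1e-6):
    """F1 / Dice score on binary masks."""
    inter = (pred * target).sum()
    return (2 * inter + smooth) / (pred.sum() + target.sum() + smooth)

def dice_loss(prob, target, smooth=1e-6):
    """Dice loss on soft predictions: 1 - 2*sum(y y') / (sum y^2 + sum y'^2)."""
    inter = (prob * target).sum()
    return 1 - (2 * inter + smooth) / ((prob ** 2).sum() + (target ** 2).sum() + smooth)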
§.§ Building Segmentation
The goal of this experiment is to detect the building mask from the input aerial image. Five different encoders are tested with UNet; the experiments are carried out by training the models on the dataset and measuring the IOU score and Dice loss. The parameters are the same for all models; Table <ref> shows their detailed configuration.
§.§ Road Segmentation
The goal of this experiment is to detect the road mask from the input aerial image. Similar to building segmentation, five different encoders are tested with UNet, and the experiments are carried out by training the models on the dataset and measuring the IOU score and Dice loss. The parameters for the road segmentation task, common to all models in this experiment, are given in Table <ref>.
§.§ Results and discussion
The models are tested on the test data, and the results obtained are shown in Tables <ref> and <ref> for building and road segmentation, respectively. The scores written in bold represent the best score achieved for a particular metric.
§.§ Benchmarks
The results derived from the experiments outperform the benchmark scores for both datasets. Table <ref> compares recent work on the building dataset with the best accuracy achieved in this paper, and Table <ref> compares existing models with respect to mIOU and mDice on the road dataset. The models presented in this paper set new benchmark scores for the Massachusetts datasets.
§ LIMITATIONS AND FUTURE SCOPE
The size of the input image is a major problem for UAV-based segmentation: very high GPU memory is required to process high-resolution images together with the model weights. Standard images consist of 3 channels, but satellite images can contain more than three channels; in that case, the UNet architecture must be adapted to handle the extra channels.
In this paper, only roads and buildings are segmented as part of urban object segmentation. Aerial images from different cities can be collected, and masks for additional classes such as manholes, power lines, and railway tracks can be created to expand the set of segmented urban objects. Attention mechanisms should also be explored on the EfficientNet+UNet architecture to improve the accuracy further.
§ CONCLUSION
Based on the experiments, we conclude that a UNet architecture with a pre-trained encoder is the best choice for building and road segmentation. Transfer learning reduces the training time and GPU cost while keeping model accuracy very high. The research gaps discussed regarding transfer learning are addressed by using models pre-trained on the ImageNet dataset. The thesis presents new benchmark scores for the Massachusetts Building and Road datasets: for the building segmentation task, EfficientNetV2L+UNet achieved an IOU of 0.8365, and for the road segmentation task, EfficientNetB7+UNet achieved an IOU of 0.9153.
|
http://arxiv.org/abs/2307.04501v1 | 20230710114528 | A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets | [
"Kamil Erdayandi",
"Lucas C. Cordeiro",
"Mustafa A. Mustafa"
] | cs.CR | [
"cs.CR",
"cs.CE"
] |
A Privacy-Preserving and Accountable Billing Protocol for Peer-to-Peer Energy Trading Markets
This work was supported by EPSRC through EnnCore [EP/T026995/1] and by the Flemish Government
through FWO-SBO SNIPPET project [S007619]. K.E is funded by The Ministry of National Education, Republic of Turkey.
Kamil Erdayandi1, Lucas C. Cordeiro1 and
Mustafa A. Mustafa12
1Department of Computer Science, The University of Manchester, UK
2imec-COSIC, KU Leuven, Belgium
Email: {kamil.erdayandi, lucas.cordeiro, mustafa.mustafa}@manchester.ac.uk
=======================================================================================================================================================================================================================================================================================================================
This paper proposes a privacy-preserving and accountable billing (PA-Bill) protocol for trading in peer-to-peer energy markets, addressing situations where there may be discrepancies between the volume of energy committed and delivered.
Such discrepancies can lead to challenges in providing both privacy and accountability while maintaining accurate billing. To overcome these challenges, a universal cost splitting mechanism is proposed that prioritises privacy and accountability. It leverages a homomorphic encryption cryptosystem to provide privacy and employs blockchain technology to establish accountability.
A dispute resolution mechanism is also introduced to minimise the occurrence of erroneous bill calculations while ensuring accountability and non-repudiation throughout the billing process. Our evaluation demonstrates that PA-Bill offers an effective billing mechanism that maintains privacy and accountability in peer-to-peer energy markets utilising a semi-decentralised approach.
Billing, Privacy, Accountability, Peer-to-peer Energy Market, Homomorphic Encryption, Blockchain
§ NOMENCLATURE
tocsectionNomenclature
c_i, p_j, u_k i-th consumer , j-th prosumer, k-th user
N_C , N_P, N_U Number of consumers, prosumers, users
V^P2P P2P market's traded electricity volume array
V^Real Real electricity consumption array
π_P2P, π_FiT, π_RT P2P, FiT, Retail price
Stat Array of the statements of the users
Bal_sup Balances of the supplier
inDev Array of the individual deviations of the users
Dev^Tot Total deviations of the users
KGen_pe(k) Paillier key generation method
PK_sup , SK_sup Public, Private (Secret) key pair of Supplier
{.}_ℰ Data homomorphically encrypted with PK_sup.
H(.) Hash Function
§ INTRODUCTION
§.§ Motivation and Background
Peer-to-peer (P2P) energy trading
enables users to obtain clean energy at more reasonable prices than traditional suppliers, making it accessible to a wider society <cit.>. It facilitates direct energy exchange between households that harness renewable energy sources (RES) <cit.>. This approach empowers individuals to become active participants in the energy system <cit.>, allowing RES owners to optimise their profits and reduce their bills through trading with other users <cit.>.
Although P2P energy trading markets offer various benefits, some challenges hinder their widespread adoption. Firstly, the vast amount of data exchanged can reveal sensitive information about users <cit.>, such as their energy usage habits and lifestyle patterns. Access to this data poses significant privacy risks <cit.> and could potentially violate privacy protection regulations, e.g., GDPR <cit.>. Thus, it is crucial to ensure privacy-preserving data processing and protect data from unauthorised access <cit.>. Secondly, such markets require secure and accountable solutions.
However, it is challenging to audit transactions without a tamper-proof system <cit.>. To ensure fair and accurate energy trading, it is also essential to guarantee integrity and verifiability of any data used. Thirdly, often what users commit at P2P markets deviates from what they deliver due to intermittent RES output. Hence, any billing models will need mechanisms to deal with such deviations.
§.§ Relevant Literature
Within P2P energy trading, two crucial phases are market clearance and billing & settlement <cit.>. Since privacy-preserving market clearing mechanisms have already been explored <cit.>, this paper focuses on the billing phase.
Madhusudan et al. <cit.> propose four billing models for P2P energy markets which account for deviations in energy volumes from the users' bids and incorporate individual, social, or universal cost-sharing mechanisms to ensure cost-effectiveness for both consumers and prosumers. Nonetheless, they do not explore user privacy.
A privacy-preserving billing protocol that incorporates an individual cost-sharing mechanism has been proposed in <cit.>.
However, it relies on a remote server for bill calculations, which poses a risk of a single point of failure.
Singh et al. <cit.> propose a method that uses blockchain and homomorphic schemes to protect the confidentiality of user data while enabling efficient data analysis. They do not explore any billing mechanisms. Gür et al. <cit.> propose a system based on blockchain technology and IoT devices to facilitate billing. To ensure data confidentiality, the system employs session keys and stores the encrypted data on the blockchain. However, this is still vulnerable to breaches as unauthorised parties can gain access to these keys, enabling them to access sensitive data.
In summary, no prior study on P2P market billing fully satisfies the three essential requirements: protecting user privacy, maintaining strong system accountability, and accommodating variations in user consumption. Neglecting any of these elements undermines the market trust, transparency and fairness, which are essential to their success and sustainability. Furthermore, integrating these three features within a single platform efficiently poses considerable challenges.
§.§ Contributions and Organization
To address the issues raised in the existing literature, we propose a novel privacy-preserving and accountable billing (PA-Bill) protocol, which effectively mitigates the challenges surrounding security, privacy, accountability, and user consumption variations prevalent in current studies.
PA-Bill utilises a universal cost-splitting billing model that mitigates the risk of sensitive information leakage due to individual deviations. It also avoids a single point of failure by performing most calculations locally in a semi-decentralised manner. To preserve privacy, the mechanism employs homomorphic encryption in bill calculations. Moreover, PA-Bill utilises blockchain technology to integrate accountability mechanisms that address possible conflicts during the billing calculation process. To minimise privacy leakage, only the hashed version of the data is stored on the blockchain. Finally, PA-Bill can support large communities of 500 households.
Unlike other solutions, PA-Bill integrates privacy protection, accountability, and accommodating user consumption variations into a single solution in an efficient way. To the best of our knowledge, no previous work has successfully implemented an efficient billing model that simultaneously preserves privacy, ensures accountability, and effectively handles discrepancies between committed and delivered volume.
The rest of the paper is structured as follows: Section <ref> outlines the preliminaries. The proposed PA-Bill is presented in Section <ref>. The security analysis of PA-Bill is presented in Section <ref>, while its performance is evaluated in Section <ref>. Finally, Section <ref> concludes the paper.
§ PRELIMINARIES
§.§ System Model
Our proposed billing protocol, illustrated in Fig. <ref>, involves prosumers, consumers, a trading platform (TP), a distributed ledger/Blockchain (DLT), a referee, and a supplier.
Prosumers generate energy through renewables, consume the volume they require, and sell any surplus energy. Consumers solely consume energy. Households have home energy management systems (HEMs) and smart meters (SMs) that measure electricity flows, provide real-time measurements, and facilitate P2P trading for the user.
Prosumers and consumers can trade electricity through a P2P market using a trading platform (TP). If necessary, they can also buy or sell electricity from/to a supplier as a backup option. However, P2P trading is more beneficial than relying on the supplier due to pricing considerations <cit.>. Financial reconciliation occurs during settlement cycles (SCs) for users involved in trading. Within each SC, data regarding the actual electricity usage of households and their commitments to trade in the market are stored on DLT. Households calculate their bills locally in a decentralised manner. If a dispute arises, a referee intervenes to resolve it by requesting data from households and retrieving it from DLT.
§.§ Threat Model and Assumptions
Our threat model comprises untrustworthy and semi-honest entities. Prosumers and consumers who may attempt to violate the protocol specifications and obtain sensitive data of other users are considered to be untrustworthy. Prosumers may try to maximise their revenue, while consumers may aim to minimise their expenses. Semi-honest entities include the TP, referee, and supplier. They adhere to the protocol specifications, but they may still be curious to learn sensitive data of users.
SMs are tamper-proof and sealed. No one, including their users, can tamper with them without being detected.
Users act rationally by seeking the most cost-effective electricity to buy or sell <cit.>.
We assume that the entities communicate over secure and authentic communication channels.
§.§ Design Requirements
* No single point of failure (SPF):
To avoid SPF, calculations and data storage should be distributed <cit.>.
* Privacy:
Confidentiality of individual users' volumes of energy traded and consumed as well as individual deviation and deviation sign should be provided.
* Accountability: Disputes arising from erroneous bill calculations must be addressed in an accountable way to prevent any party from denying responsibility.
* Fair deviation cost distribution: cost of P2P market deviation should be split fairly among market participants.
§.§ Building Blocks
Homomorphic encryption (HE) enables computations to be performed on encrypted data, resulting in encrypted outputs that produce the same results as if the operations were conducted on unencrypted data <cit.>. Specifically, we deploy the Paillier cryptosystem which supports homomorphic addition and scalar multiplication on ciphertexts <cit.>. Our solution ensures the privacy of households by encrypting sensitive information such as energy consumption data per SC. Billing calculations are performed on this encrypted data, thereby preserving the confidentiality of the information.
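As a concrete illustration, the snippet below shows the Paillier operations PA-Bill relies on, using the Python phe library mentioned in the evaluation section; the key length matches the evaluation setup, while the energy values and variable names are illustrative assumptions of ours.

```python
from phe import paillier

# Supplier generates the monthly HE key pair (2048-bit keys, as in the evaluation).
pk_sup, sk_sup = paillier.generate_paillier_keypair(n_length=2048)

# A household encrypts its committed and measured volumes (values in Wh, chosen for illustration).
v_p2p = pk_sup.encrypt(1000)    # volume committed in the P2P market
v_real = pk_sup.encrypt(1250)   # volume measured by the smart meter

# Homomorphic operations used by PA-Bill:
in_dev = v_real - v_p2p         # addition/subtraction of two ciphertexts
cost = in_dev * 30              # scalar multiplication, e.g. by a plaintext price

# Only the supplier, holding SK_sup, can decrypt.
assert sk_sup.decrypt(in_dev) == 250
assert sk_sup.decrypt(cost) == 7500
```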
We use blockchain technology to provide accountability by ensuring that transactions are permanently recorded in a decentralised and immutable system with append-only storage. Transactions recorded on a blockchain cannot be altered by design, ensuring that they are accurate and trustworthy <cit.>.
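To keep plaintexts and ciphertexts off-chain, only digests are recorded; a minimal sketch of such a hash commitment is shown below, assuming the SHA3-256 primitive used in the evaluation. The store_on_dlt helper is a placeholder of ours for the smart-contract write, whose interface is not specified here.

```python
import hashlib
from phe import paillier

def hash_ciphertext(enc):
    """Digest of a Paillier ciphertext (plus its exponent) for an on-chain commitment."""
    payload = f"{enc.ciphertext(be_secure=False)}:{enc.exponent}".encode()
    return hashlib.sha3_256(payload).hexdigest()

def store_on_dlt(user_id, digest, timestamp):
    # Placeholder for the DLT write (in PA-Bill this would be a smart-contract call).
    return {"id": user_id, "hash": digest, "ts": timestamp}

pk, _ = paillier.generate_paillier_keypair(n_length=2048)
store_on_dlt("u_42", hash_ciphertext(pk.encrypt(1000)), timestamp=1688981700)
```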
§ PRIVACY PRESERVING AND ACCOUNTABLE BILLING (PA-BILL) PROTOCOL
In this section, we propose a privacy-preserving and accountable billing protocol for P2P energy market where users' actual energy consumption may differ from the volumes they committed. It protects sensitive household information and enables system entities to verify accurate billing calculations.
§.§ PA-Bill Overview
The process of the PA-Bill protocol is illustrated in Fig. <ref>, which includes the interactions between the entities. The system utilises the public-private key pair of the supplier for all homomorphically encrypted calculations. A distinct pair of HE keys, namely PK_sup and SK_sup, is generated for each billing month. Additionally, each month the consumers and prosumers are paired together to perform accountable calculations.
In the energy trading model, users send homomorphically encrypted bid-offer data to the TP, which calculates the final trading price π_P2P and the amount of energy V^P2P[u_k] that each user u_k will trade via the P2P market, as in <cit.>.
During each SC, π_P2P is publicly released. V^P2P[u_k] is shared with related paired users for future calculations, and its hash is stored on the DLT for future verification. SMs measure their users' actual imported/exported electricity and transmit the encrypted version (V^Real[u_k]) to relevant users. The hash of this encrypted version is also stored on the DLT.
After sending and storing the related data for billing, the calculation of bills among prosumers and consumers is performed in three stages in a privacy-preserving way. Firstly, the individual deviations of users are calculated. Consumers calculate the individual deviations of prosumers and vice versa. Secondly, the total deviations of consumers and prosumers are calculated by six users selected from the consumers and prosumers. Thirdly, the statements (bills/revenues) of users are calculated.
To protect sensitive data such as the energy consumed/traded and the individual energy deviations of households, our work utilises an HE scheme to process data while preserving privacy. However, it is crucial to design the billing algorithm in such a way that it avoids any indirect leakage of private information despite the use of encryption. Traditional billing methods <cit.> have the potential to expose confidential information by using individual deviations between actual and committed energy volumes to determine the "conditions" in calculating bills. This enables inferences to be made about whether the actual electricity consumption volume is lower or higher than the committed volume. To address this issue, we propose a privacy-preserving and accountable cost-splitting billing model that uses the total deviations of consumers and prosumers, rather than individual deviations, to determine billing conditions.
In the event of a dispute, the referee requests the necessary data from households, as well as it retrieves the hash of the previously stored data from DLT (to ensure the accuracy of the data requested from households) to settle the dispute. In this case, the referee corrects erroneous computations of the pair of customer and prosumer whose calculations do not match each other and identifies the responsible party in the pair. The responsible party is penalised, incentivising them to act truthfully, which would otherwise result in penalties. Besides, the referee can directly calculate the supplier's balance since the calculations do not involve any confidential information.
Finally, at the end of the month, final bills and revenues, and the balance of the supplier are released with the help of the referee and the private homomorphic key of the supplier.
§.§ Technical Details of PA-Bill
At the start of each billing period (e.g., a month), the following two steps (1-2) are carried out.
§.§.§ Generation of Keys
The supplier generates a public-private HE (Paillier) key pair: KGen_pe(k) PK_sup, SK_sup.
§.§.§ Matching customers and prosumers
The referee conducts a random matching process in which each consumer is paired with a list of prosumers and vice versa.
The number of users in the lists may exceed one or be zero in cases where N_C > N_P or N_C < N_P, while the lists contain only one user if N_C = N_P. Here, N_C and N_P denote the respective number of customers and prosumers. The function M(u_k) returns the list of users that have been matched to the user u_k.
At each SC, the following six steps (3–8) are carried out.
§.§.§ Transfer and Storage of P2P Traded Data
TP makes the P2P trading price public by storing it at DLT in plaintext. For each u_k, TP transmits homomorphically encrypted value of traded volume V^P2P[u_k] to user u_k and to users in M(u_k). The privacy-preserving calculation of the encrypted traded values by user u_k (V^P2P[u_k]) can be performed after the transmission of bids-offers in a homomorphically encrypted format.
It is assumed the TP has already calculated V^P2P[u_k]. Once the data has been transmitted to relevant parties, the TP also hashes the homomorphically encrypted traded volume of user u_k, i.e., H(V^P2P[u_k]), and stores the result at the DLT, together with a timestamp and ID of u_k.
§.§.§ Collection, Transfer and Storage of SM Data
At the end of each SC, each SM measures the real volume of energy imported from (or exported to) the grid by their user, i.e., V^Real[u_k], encrypts it with PK_sup and hashes it, i.e., H(V^Real[u_k]). It then stores the hash value to DLT with timestamp and ID of u_k. The user SM also stores V^Real[u_k] as well as sends it to the users in M(u_k).
§.§.§ Calculation of Individual Deviations
In this step, each user u_k calculates the individual deviations (inDev) from the committed volume of energy, both for themselves and for their matched users in M(u_k) (see Alg. <ref>). To calculate inDev, each user u_k subtracts the committed volume from the volume measured by the SM, for themselves (u_k) and for the users m_l in M(u_k). The calculations are carried out in homomorphically encrypted form.
The respective encrypted results inDev and inDev_M are sent to the referee.
After the referee receives the encrypted individual deviations from users, it checks whether the computations have been done correctly. For each user and its matched user, the referee receives four encrypted results. The user u_k provides its own encrypted result, inDev[u_k], as well as that of its matched user. For the matched consumer c_i and prosumer p_j, the referee checks if the calculated values are the same. To achieve this, the referee subtracts the two calculated values from each other in homomorphically encrypted form. The result of this subtraction is then sent to the supplier, who holds the private key required to perform homomorphic decryption. The supplier decrypts the result of the subtraction and sends it back to the referee. The referee checks whether the value received from the supplier is zero or not. If it is zero, it considers the calculations to be accurate and proceeds to store the hash of the resulting computation of user u_k (not that of the matched user) in the DLT along with the corresponding ID and timestamp of u_k, to facilitate future verification. Otherwise (if the received result is not zero), the referee intervenes to correct any erroneous calculations and identify the responsible party. To do so, the referee requests V^Real and V^P2P from the users, and checks their correctness by hashing them and comparing them with the hashes previously stored in the blockchain by the TP and the SMs. If the encrypted data received from the users is accurate, the referee recalculates inDev in encrypted form for c_i and p_j, whose results were incorrect. Next, the referee follows the same process of subtracting the calculated values and having the result decrypted by the supplier to compare the recalculated outcome with the values obtained from c_i and p_j. The referee then identifies the party that is accountable for the mismatch.
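A compact sketch of this pairwise cross-check is given below. It assumes the phe library from the earlier sketch; the function names and the zero-test shortcut (the referee learning only whether the decrypted difference is zero) are our simplification of the interaction between the referee and the supplier.

```python
from phe import paillier

pk_sup, sk_sup = paillier.generate_paillier_keypair()

def individual_deviation(v_real_enc, v_p2p_enc):
    # inDev[u_k] = V^Real[u_k] - V^P2P[u_k], evaluated directly on ciphertexts.
    return v_real_enc - v_p2p_enc

def results_match(dev_a, dev_b, supplier_sk):
    # The referee forwards the encrypted difference; only the supplier decrypts it.
    return supplier_sk.decrypt(dev_a - dev_b) == 0

# Both members of a pair compute the same deviation from the same encrypted inputs.
v_real, v_p2p = pk_sup.encrypt(900), pk_sup.encrypt(1000)
dev_by_consumer = individual_deviation(v_real, v_p2p)
dev_by_prosumer = individual_deviation(v_real, v_p2p)
assert results_match(dev_by_consumer, dev_by_prosumer, sk_sup)
```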
§.§.§ Calculation of Total Deviations
To calculate total demand and supply deviations, the referee selects three consumers and three prosumers. Each consumer c_i sends their respective inDev[c_i] to the selected prosumers and vice versa.
The selected prosumers and consumers verify the received encrypted deviations by hashing them and comparing them with the hashes stored in the DLT. Then, the selected prosumers sum up inDev[c_i] over all c_i to calculate Dev_C^Tot (eq. <ref>), and the selected consumers do the same over all p_j to calculate Dev_P^Tot (eq. <ref>).
Dev_C^Tot = ∑_i=0^N_C-1 inDev_C[c_i]
Dev_P^Tot = ∑_j=0^N_P-1 inDev_P[p_j]
After calculating Dev_C^Tot and Dev_P^Tot, selected prosumers and consumers send them to a referee for verification. If the results match, the referee sends them to the supplier.
The supplier then decrypts the results and makes them publicly available by storing Dev_C^Tot and Dev_P^Tot into DLT. If the results do not match, the referee corrects any erroneous calculations and identifies the responsible party. This is done by recalculating (eq. <ref>) and (eq. <ref>) in encrypted format after requesting and verifying the necessary data via DLT.
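Computationally, the total deviation is just a sum of ciphertexts; the helper below is a sketch with our own naming, and it assumes the individual deviations are phe EncryptedNumber objects under the supplier's public key.

```python
def total_deviation(individual_devs_enc):
    # Dev^Tot = sum_k inDev[u_k], computed without decrypting any individual term.
    total = individual_devs_enc[0]
    for dev in individual_devs_enc[1:]:
        total = total + dev
    return total

# Each of the three selected prosumers would run this over all consumers' inDev_C
# values (and the selected consumers over all prosumers' inDev_P values), then
# send the encrypted totals to the referee for comparison.
```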
§.§.§ Calculation of Bills and Rewards
We present our proposed privacy-preserving and accountable universal cost-splitting billing model, which employs total deviations instead of individual deviations to establish billing conditions. The proposed billing model is presented in Alg. <ref>. The algorithm takes as input V^P2P, V^Real, π_P2P, π_RT and π_FiT and calculates the bills/revenues of consumers/prosumers. The algorithm outputs the statements Stat[u_k] and Stat_M[u_k] for user u_k and its matched users in M(u_k), respectively. Stat[u_k] indicates the bill of u_k when u_k is a consumer and the revenue of u_k when u_k is a prosumer. We have devised universal formulas, such as Stat[u_k], which are applicable to both consumers and prosumers.
The algorithm works in three modes based on the difference between total deviations of consumers and prosumers, and proceeds as follows.
If Dev_P^Tot = Dev_C^Tot, prosumers have generated enough electricity to meet the demand of customers, resulting in a balanced P2P market. In this case, individuals can purchase the required energy from other households and sell their excess energy to other households at π_P2P in addition to their commitments in the P2P market rather than relying on suppliers. Energy sharing between households to compensate for deviations is advantageous for both consumers and prosumers, as they can exchange energy at a price of π_P2P, which is higher than π_FiT and lower than π_RT, compared to relying on suppliers to buy electricity at π_RT and sell electricity at π_FiT. The statements for each user u_k and for paired users in M(u_k) are calculated between ln. 3-6 in the algorithm.
If Dev_P^Tot < Dev_C^Tot, there is a shortage of electricity in the P2P market as prosumers have not generated enough electricity to meet customer demand. If there is a shortage of electricity that cannot be compensated by other users, the only option is to purchase it from the supplier at π_RT. Users with a shortage of electricity can buy it at this price, while households with a surplus can sell it at π_RT instead of selling it to the supplier for π_FiT, which is advantageous for prosumers. In accordance with this, the statements for each user u_k and for paired users in M(u_k) are calculated between ln. 9-11 in the algorithm.
If Dev_P^Tot > Dev_C^Tot, there is excess electricity in the P2P market as prosumers have generated more electricity than is needed to meet customer demand. In this case, consumers can purchase energy from prosumers at π_P2P to compensate for their energy shortage due to deviation. The total revenue of the prosumers is distributed among them in proportion to the excess energy they provided. To calculate this, the total revenue generated by prosumers due to excess energy is first determined. Some of the excess energy is sold to consumers with a shortage of electricity at π_P2P, while the remainder is sold to the supplier at π_FiT. Therefore, the total revenue of prosumers, TotRev_P, can be calculated as
TotRev_P =(Dev_C^Tot·π_P2P + (Dev_P^Tot - Dev_C^Tot) ·π_FiT)
The total revenue TotRev_P is distributed among the prosumers in proportion to inDev_P[u_k] /Dev_P^Tot. In accordance with this, Alg. <ref> calculates statements for each user u_k and for paired users in M(u_k) between ln. 16-19, if u_k is a consumer. Otherwise, the statements are calculated between ln. 21-24.
At the end of the algorithm, statements are accumulated on stat^Tot in encrypted form for u_k and the users in M(u_k), assuming that stat^Tot was set to zero before the first SC.
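Algorithm <ref> itself is not reproduced in the text, so the sketch below only captures our plaintext-style reading of the three pricing modes for the deviation part of a statement; the committed-volume charges, the encrypted-domain arithmetic, and the exact sign conventions of the algorithm are simplified or omitted.

```python
def deviation_statement(in_dev, is_prosumer, dev_c_tot, dev_p_tot,
                        pi_p2p, pi_rt, pi_fit):
    """Simplified reading of the three billing modes for the deviation part of a
    user's statement; the charge for the committed P2P volume at pi_p2p is
    settled separately and is not shown here."""
    if dev_p_tot == dev_c_tot:
        # Balanced market: all deviations are exchanged among households at pi_p2p.
        return in_dev * pi_p2p
    if dev_p_tot < dev_c_tot:
        # Shortage: missing energy comes from the supplier at pi_rt, and
        # households with a surplus are also settled at pi_rt.
        return in_dev * pi_rt
    # Excess: consumers cover their shortage at pi_p2p; prosumers share the
    # total revenue in proportion to their contribution to the excess.
    if not is_prosumer:
        return in_dev * pi_p2p
    tot_rev_p = dev_c_tot * pi_p2p + (dev_p_tot - dev_c_tot) * pi_fit
    return tot_rev_p * (in_dev / dev_p_tot)
```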
After each pair calculates their statements bilaterally, they send the results to the referee for verification. If the results do not match, the referee intervenes to correct any erroneous calculations and identify the responsible party. This is done by running Alg. <ref> for the unmatched pairs after requesting and verifying the required data for computation via DLT.
§.§.§ Calculating the Balance of the Supplier
The referee calculates the supplier's balance using only public information, and does so in a non-encrypted format.
In the case where Dev_P^Tot = Dev_C^Tot, Bal_sup is set to zero (Bal_sup = 0), since there is no excess or shortage of electricity in the P2P market to compensate via the supplier.
If (Dev_P^Tot > Dev_C^Tot), there is excess energy in the P2P market and the supplier purchases it at the FiT price π_FiT, resulting in a negative balance that the supplier has to pay. Bal_sup is calculated as the negative product of the total excess energy (Dev_P^Tot - Dev_C^Tot) and π_FiT, i.e.
Bal_sup = -(Dev_P^Tot - Dev_C^Tot)·π_FiT
If (Dev_P^Tot < Dev_C^Tot), there is a shortage of energy in the P2P market that needs to be compensated by the supplier at the retail price π_RT. Bal_sup is calculated as the product of the supplied energy (Dev_C^Tot - Dev_P^Tot) and π_RT, i.e.
Bal_sup = (Dev_C^Tot - Dev_P^Tot)·π_RT.
At each SC, the resulting Bal_sup is accumulated into the total supplier balance Bal^Tot_sup, except at the first SC (SC equal to zero), where Bal^Tot_sup is initialised to Bal_sup.
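Since only public totals and prices are involved, the referee's computation is plain arithmetic; the function below is a direct transcription of the three cases above, with variable names of our choosing.

```python
def supplier_balance(dev_c_tot, dev_p_tot, pi_rt, pi_fit):
    # Balanced market: nothing to settle with the supplier.
    if dev_p_tot == dev_c_tot:
        return 0.0
    # Excess in the P2P market: the supplier buys it at the FiT price (negative balance).
    if dev_p_tot > dev_c_tot:
        return -(dev_p_tot - dev_c_tot) * pi_fit
    # Shortage: the supplier provides the missing energy at the retail price.
    return (dev_c_tot - dev_p_tot) * pi_rt
```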
The next step is carried out at the end of each billing period.
§.§.§ Transfer and Announcement of Bills, Revenues and Supplier Balance
Since the final accumulated monthly statements of households cannot be hidden from the supplier, as payments must be made, the referee sends the encrypted statements consisting of bills and revenues to the supplier. The supplier then decrypts these statements using its HE private key, hashes the decrypted version, and stores the hash on the DLT for future verification during the payment process. The supplier's balance is also hashed and stored on the DLT.
§ SECURITY, PRIVACY AND ACCOUNTABILITY ANALYSIS
The PA-Bill protocol addresses the security concern of avoiding SPF by distributing the majority of calculations and data storage locally.
It addresses privacy concerns by utilising HE to encrypt sensitive user data such as V^Real and V^P2P, ensuring that sensitive information remains confidential during billing computations. In addition, the PA-Bill protocol employs a cost-splitting mechanism that utilises the total deviations of users rather than individual deviations to calculate billing modes. This method avoids indirect privacy leakage of individual deviations.
It employs Blockchain technology to create an unalterable record of the hashes of essential data necessary for billing computations. This ensures the verification and integrity of critical data, thereby enabling all parties to be held accountable for their actions during the billing process.
§ PERFORMANCE EVALUATION
In this section, we demonstrate that PA-Bill achieves computational efficiency without compromising privacy, accountability, or the ability to accommodate user consumption variations. PA-Bill effectively addresses these critical aspects while maintaining a level of computational efficiency. We prove our claims through both theoretical analysis and experiments.
§.§ Theoretical Analysis
The time complexity of the method is mainly determined by the input parameters of Alg. <ref> and Alg. <ref>, which include the number of users (N_U). The time required to run the algorithms grows with the input size. Specifically, the nested double loops in Alg. <ref> and Alg. <ref> lead to a quadratic time complexity of O(n^2) in cases where N_C > N_P or N_C < N_P; the complexity is reduced to O(n) when N_C = N_P, since each user has only one matched user and the inner loop performs a single iteration. The time complexity of the calculations in eq. <ref> and eq. <ref> is O(n), where n depends on the inputs N_C and N_P, respectively.
§.§ Experimental Results
We evaluate the performance of PA-Bill by running simulations on a PC with an Intel Core i5 CPU @ 2 GHz and 16 GB of RAM to demonstrate its efficiency. We utilise the SHA3-256 algorithm for hashing and the Paillier cryptosystem for homomorphic encryption with 2048-bit keys. These operations were implemented using the Python libraries hashlib and phe, respectively. We utilised the Ethereum network to prototype the blockchain platform.
To deploy and test Ethereum for our project, we used Ganache[https://www.trufflesuite.com/ganache], wrote smart contracts in Solidity[https://solidity.readthedocs.io/en/v0.8.7/], and compiled them on Remix[https://remix.ethereum.org/]. To connect our project with the Ethereum network, we utilised the Python Web3[https://web3py.readthedocs.io/en/stable/] library. As we utilised existing tools to design the blockchain platform, we did not conduct a separate performance assessment of the platform itself. Our previous work <cit.> is deployed as electricity trading platform, so we do not reevaluate it in this context either. Instead, our primary focus lies in evaluating the performance of the privacy and accountable billing model.
The billing model simulations were conducted on a sample of 500 users, consisting of 250 consumers and 250 prosumers. We measured PA-Bill's execution time (ET) for computationally intensive components in two scenarios: worst-case (every household makes an incorrect bill calculation (unintentionally or maliciously), thus requiring an intervention from the referee) and best-case (all households make correct calculations, hence no referee intervention is deployed).
The SC is set to one hour. Table <ref> reports the average execution time per SC for the PA-Bill components, computed over a one-month billing period comprising 720 SCs (24 SCs per day). The execution times, which are on the order of milliseconds for both the worst-case and best-case scenarios tested with a large group of 500 users, indicate that our proposed protocol offers a computationally efficient billing solution.
§ CONCLUSION
In this work, we proposed PA-Bill, a privacy-preserving and accountable billing protocol that addresses security, privacy, and accountability issues in P2P markets at the billing and settlements stage. PA-Bill utilises a universal cost-splitting billing model, local semi-decentralised calculation, and Homomorphic Encryption for privacy protection. Blockchain technology is deployed for accountability mechanisms that resolve conflicts during billing calculation. PA-Bill is evaluated on a community of 500 households. In our future work, we plan to investigate network constraints.
|
http://arxiv.org/abs/2307.04706v1 | 20230710170555 | Cosmological Information in Perturbative Forward Modeling | [
"Giovanni Cabass",
"Marko Simonović",
"Matias Zaldarriaga"
] | astro-ph.CO | [
"astro-ph.CO"
] |
|
http://arxiv.org/abs/2307.04869v1 | 20230710193253 | Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning | [
"Gaurav Bagwe",
"Xiaoyong Yuan",
"Miao Pan",
"Lan Zhang"
] | cs.LG | [
"cs.LG",
"cs.AI",
"cs.CV"
] |
Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning

Gaurav Bagwe1, Xiaoyong Yuan2, Miao Pan3, Lan Zhang1
1Department of ECE, Michigan Technological University, Houghton, MI, USA
2College of Computing, Michigan Technological University, Houghton, MI, USA
3Department of ECE, University of Houston, Houston, TX, USA
Correspondence: Gaurav Bagwe ([email protected])

Keywords: Federated Continual Learning, Federated Learning, Rehearsal-free Continual Learning, Prompt Learning
Federated continual learning (FCL) learns incremental tasks over time from confidential datasets distributed across clients. This paper focuses on rehearsal-free FCL, which has severe forgetting issues when learning new tasks due to the lack of access to historical task data. To address this issue, we propose Fed-CPrompt based on prompt learning techniques to obtain task-specific prompts in a communication-efficient way. Fed-CPrompt introduces two key components, asynchronous prompt learning, and contrastive continual loss, to handle asynchronous task arrival and heterogeneous data distributions in FCL, respectively. Extensive experiments demonstrate the effectiveness of Fed-CPrompt in achieving SOTA rehearsal-free FCL performance.
§ INTRODUCTION
Federated learning (FL) has been a popular collaborative machine learning paradigm enabling multiple clients to learn a shared model without exposing private client data <cit.>. While successful, existing FL algorithms are mainly designed for a single task with fixed datasets on clients <cit.>, which becomes ineffective in handling non-stationary data distribution over time. Therefore, recent efforts have been put into federated continual learning (FCL) to learn tasks that are presented sequentially. Since the model in continual learning (CL) may overfit data from the current task and suffer from catastrophic forgetting <cit.>, the mainstream research to address the forgetting issue can be roughly divided into two categories: rehearsal-based and rehearsal-free FCL.
Although rehearsal-based approaches achieve state-of-the-art (SOTA) performance by using the rehearsal buffer to store and retrain data from previous tasks, the buffer size needs to be large enough to effectively mitigate forgetting <cit.>, leading to scalability and data storage constraints in FL. Moreover, many applications do not allow this buffer due to privacy concerns <cit.>, further restricting their adoption in practice. Hence, this work focuses on rehearsal-free FCL. Existing efforts along this line regularize the global model with knowledge from previous tasks when learning a new task <cit.>. Unfortunately, they have substantially deteriorated performance compared to rehearsal-based approaches <cit.>. Moreover, existing research requires continuously exchanging the entire model to learn incremental tasks in FCL, leading to significant communication overhead. In view of these, it is critical to developing innovative rehearsal-free FCL in a communication-efficient way to address the forgetting issue while maintaining the model plasticity for new tasks.
Enlightened by the recent advance of prompting techniques <cit.>, in this work, we leverage prompt learning to achieve the above goal. As one promising transfer learning approach, prompt learning uses insertable embeddings called prompts to condition a pre-trained model for downstream tasks. Recent research enables prompt-based CL by using key-query mechanisms, which achieves SOTA rehearsal-free performance, even outperforming rehearsal-based CL <cit.>. Due to the small size of prompt parameters, the communication efficiency of FCL is expected to be improved significantly. However, existing prompt-based CL is designed for centralized datasets, which becomes ineffective in FL with distributed and confidential datasets. The main limitation is due to the inherent heterogeneity of distributed clients. On the one hand, clients may observe heterogeneous data for the same task, leading to biased learning performance and slow convergence. On the other hand, incremental tasks may arrive asynchronously on clients, further deteriorating the overall learning performance. Therefore, to unleash the potential of prompting for rehearsal-free FCL, we propose Fed-CPrompt to facilitate inter-task and inter-client prompt-based knowledge transfer while addressing the heterogeneity concerns of data distribution and task arrival over clients.
Our key contributions are summarized below:
* We propose Fed-CPrompt, an innovative rehearsal-free FCL framework based on prompting techniques. Fed-CPrompt achieves SOTA FCL performance to handle the stability-plasticity dilemma under heterogeneous FL environments in a communication-efficient way.
* We introduce two key components to Fed-CPrompt: asynchronous prompt learning takes advantage of task asynchronicity to strengthen the task-specific prompts; C2L loss alleviates inter-task forgetting and inter-client data heterogeneity via a contrastive and continual loss.
* We conduct extensive experiments to demonstrate the effectiveness of Fed-CPrompt in various challenging FCL settings, such as heterogeneous data distribution and asynchronous task arrival.
§ PROPOSED METHOD
§.§ Problem Statement
In a standard FCL setting, a central server coordinates a set of distributed clients 𝒞 to learn incremental tasks 𝒯_1, …, 𝒯_n over time.
The training data for each task is distributed to clients and cannot be shared. FCL aims to obtain a global model parameterized by 𝐰 to perform all existing tasks. In this work, we consider a challenging CL problem, class-incremental CL, where the task labels are unknown during inference <cit.>. Our design can be easily extended to the task- or domain-incremental FCL problems. The optimization objective can be written as
min_𝐰∑_i∈{1, …,n}∑_c∈𝒞n_c^𝒯_i/n^𝒯_iℒ(𝒟_c^𝒯_i;𝐰),
where n_c^𝒯_i and n^𝒯_i represent the number of training samples from client c and all clients for task 𝒯_i, respectively. 𝒟_c^𝒯_i is the training dataset of 𝒯_i on client c.
This objective function uses data from all existing tasks, making it a rehearsal-based FCL problem.
This work focuses on rehearsal-free FCL. Specifically, each client can only observe the training data of the current task, i.e., when training on task 𝒯_n, training data of all previous tasks are unseen.
However, due to the unavailability of historical task data, training the current task can overwrite previous task information of the model 𝐰 in (<ref>), deteriorating the forgetting issues in CL. Thus, existing rehearsal-free FCL approaches cannot achieve comparable performance to rehearsal-based approaches <cit.>.
§.§ Design Principle
In this work, we aim to accommodate the forgetting issue for rehearsal-free FCL. Inspired by the success of the prompt-based rehearsal-free CL that achieves SOTA performance, we intend to implement prompting techniques in our design. Existing prompt-based CL <cit.> use insertable embeddings, called prompts p, to condition a frozen pre-trained model θ to perform incremental tasks. Due to the small size of prompt parameters, a task-specific prompt is created and stored for each task to avoid overwriting previous knowledge. Here, we refer readers to Appendix <ref> for more details. While successful, the above prompt-based CL research is designed for centralized datasets, which becomes ineffective in FL settings.
The main challenge of implementing prompting techniques in FCL is the inherent heterogeneity of distributed clients. On the one hand, the data heterogeneity among clients leads to biased local updates and slow convergence. On the other hand, the sequential tasks may appear asynchronously over clients, further delaying convergence. Due to the small size of learnable parameters in prompt learning, it is essential to improve their learning capacity by facilitating knowledge transfer between tasks and clients. Therefore, we propose Fed-CPrompt, an innovative prompt-based rehearsal-free FCL framework. As shown in Figure <ref>, Fed-CPrompt introduces two key components, asynchronous prompt learning and contrastive and continual loss, to address the aforementioned task arrival and data heterogeneity concerns. In the following, we first introduce these two components and then present the overall training of Fed-CPrompt.
§.§ Asynchronous Prompt Learning
We adopt the existing prompt-based CL approach (CODA-P <cit.>) on clients to learn incremental tasks based on their local data. In CODA-P, the prompt for the current task is re-weighted based on previous task information to refine task-specific representation via attention mechanisms (see Appendix <ref>). In Fed-CPrompt, when client c∈𝒞 learns task 𝒯_m, p^𝒯_m_c=∑_i∈[1,m-1]α^𝒯_i_s P^𝒯_i_s + α^𝒯_m_c P^𝒯_m_c, where α^𝒯_i_b and P^𝒯_i_b are the 𝒯_i-specific attention and prompt at the server (b=s) and client c (b=c), respectively. The updated client-side p^𝒯_m_c will be uploaded to the server and aggregated based on classical FL <cit.> to obtain server-side prompt p^𝒯_m_s. However, such naive aggregation becomes inefficient in asynchronous task arrival. When client c is training task 𝒯_m, the latest task observed by other clients might be task 𝒯_n (m<n), and 𝒯_n will be observed by client c later. Hence, to handle this condition, Fed-CPrompt introduces asynchronous prompt learning.
Instead of waiting for updated prompts of the current task 𝒯_n from all clients before aggregation, we allow task-specific prompt aggregation in parallel. In this way, the previously learned prompt at the server p_s^𝒯_m can be refined by p^𝒯_m_c. Moreover, taking advantage of the task arrival heterogeneity, the training of p^𝒯_m_c becomes
p_c^𝒯_m=∑_i= 1^m-1α^𝒯_i_s P^𝒯_i_s + α_c^𝒯_m P_c^𝒯_m + ∑_j= m+1^nα^𝒯_j_s P^𝒯_j_s ,
where the first and the third terms are task knowledge from the server, which are frozen when training task 𝒯_m. It should be mentioned that although the newest task for client c is 𝒯_m, the asynchronous task arrival in FCL allows client c to leverage unseen task knowledge to navigate the local training. By incorporating past and future task representations, we increase the capacity of prompts to learn task-specific instructions.
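In code, the composition above is a weighted sum in which only the current-task components carry gradients; the PyTorch-style sketch below is ours (tensor shapes, list layout, and the assumption that the attention weights are scalar tensors are illustrative, not taken from a released implementation).

```python
import torch

def compose_client_prompt(P_server, alpha_server, P_local, alpha_local, m):
    """p_c^{T_m}: frozen server prompts/weights for tasks i != m (past and, when
    available, future tasks) plus the learnable local component for task m.
    P_server: list of [L_p, D] tensors; alpha_server: list of scalar tensors."""
    parts = [alpha_local * P_local]                    # trainable, carries gradients
    for i, (a_i, P_i) in enumerate(zip(alpha_server, P_server)):
        if i != m:
            parts.append(a_i.detach() * P_i.detach())  # frozen server knowledge
    return torch.stack(parts).sum(dim=0)
```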
§.§ C2Loss: Contrastive and Continual Loss
To address the data heterogeneity issue while alleviating forgetting in FCL, we introduce a new loss function, contrastive and continual loss (C2Loss), to regularize local training on clients. The goal of C2Loss is mainly twofold. First, C2Loss accommodates disagreements between clients due to biased local training with heterogeneous data distribution. Second, C2Loss enforces distinct task-specific prompts construction, which facilitates CL to avoid the forgetting effect.
Specifically, when learning task 𝒯_m at communication round r, we have the C2Loss on client c∈𝒞 given by
ℒ_C2L (P_c^𝒯_m(r))=max(|| P_c^𝒯_m(r) - P^𝒯_m_s(r-1)||_2
- γ min{|| P_c^𝒯_m(r) - P^𝒯_i_s ||_2 , i∈[1,n], i≠ m } + α, 0),
where the first term within the max() calculates the change of the current prompt compared to that in the previous round. By restricting this change, C2Loss smooths the local update to achieve the first goal. The second term within the max() finds the most similar prompt to the current prompt based on the distance between the current and all previous prompts. By increasing this distance, C2Loss enforces the discrimination between task-specific prompts to achieve the second goal. Besides, γ>0 is the hyperparameter to balance the impact between the first two terms. α∈ [0,1] represents a margin value that encourages a separation between the first two terms <cit.>.
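A direct PyTorch transcription of C2Loss is sketched below; treating each prompt as a single tensor, taking L2 norms over full prompt tensors, and the default values of gamma and the margin are our assumptions.

```python
import torch

def c2_loss(P_curr, P_prev_round, P_other_tasks, gamma=1.0, margin=0.5):
    """Contrastive and continual loss for the current task's prompt.

    P_curr:        learnable prompt at round r
    P_prev_round:  aggregated prompt from round r-1 (frozen)
    P_other_tasks: list of frozen prompts of all other tasks
    margin:        the alpha term in the paper's formulation
    """
    drift = torch.norm(P_curr - P_prev_round, p=2)             # smooth local updates
    separation = torch.stack(
        [torch.norm(P_curr - P_i, p=2) for P_i in P_other_tasks]
    ).min()                                                     # closest other-task prompt
    return torch.clamp(drift - gamma * separation + margin, min=0.0)
```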
§.§ Overall Training
In Fed-CPrompt, client c∈𝒞 conducts local training with dataset 𝒟_c^𝒯_m for the current task 𝒯_m. As discussed in (<ref>), a prompt is constructed based on attention mechanisms, and thus the learnable prompt parameter for client c is defined by 𝐰_c^𝒯_m={P_c^𝒯_m, K_c^𝒯_m, A_c^𝒯_m} (K and A composite the α in (<ref>), detailed in Appendix <ref>). By incorporating the C2Loss to the cross-entropy loss, we have the local optimization function of client c by
min_𝐰_c^𝒯_m,ϕ_c^𝒯_mℒ_CE(f_ϕ_c(x; θ, 𝐰_c),y) + λ ℒ_C2L(𝐰_c^𝒯_m),
where ϕ_c^𝒯_m represents the classifier parameter for task 𝒯_m. Note that 𝐰_c and ϕ_c concatenate both the frozen previous task parameter and the current task learnable parameter as discussed in (<ref>). Besides, θ is the frozen pretrained model parameters; (x,y)∈𝒟_c^𝒯_m; λ∈ [0,1] is the hyperparameter balancing losses.
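One local update step then simply adds the two terms; the sketch below assumes the c2_loss helper above, and the model(x, prompt_params, classifier) call together with the prompt_params dictionary layout is a stand-in of ours for the frozen pre-trained ViT conditioned on the prompts.

```python
import torch
import torch.nn.functional as F

def local_step(model, prompt_params, classifier, batch, P_prev_round,
               P_other_tasks, optimizer, lam=0.1, gamma=1.0, margin=0.5):
    x, y = batch
    logits = model(x, prompt_params, classifier)   # frozen backbone, prompt-conditioned
    loss = F.cross_entropy(logits, y)
    loss = loss + lam * c2_loss(prompt_params["P"], P_prev_round,
                                P_other_tasks, gamma, margin)
    optimizer.zero_grad()
    loss.backward()        # gradients flow only to prompt and classifier parameters
    optimizer.step()
    return loss.item()
```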
Both prompt parameters 𝐰_c^𝒯_m and classifier parameters ϕ_c^𝒯_m will be uploaded to the server. The server handles asynchronous task arrival by conducting parallel aggregation following classical FL <cit.>.
The overall training is illustrated in Algorithm <ref> of Appendix <ref>.
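On the server side, prompt (and classifier) parameters are aggregated per task with a FedAvg-style weighted average, which allows clients working on different tasks to be aggregated in parallel; the sketch below uses our own data layout.

```python
def aggregate_task_prompts(client_updates):
    """client_updates: dict task_id -> list of (prompt_tensor, n_samples).
    Returns a FedAvg-style weighted average per task."""
    aggregated = {}
    for task_id, updates in client_updates.items():
        total = sum(n for _, n in updates)
        aggregated[task_id] = sum((n / total) * p for p, n in updates)
    return aggregated
```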
§ EXPERIMENTS
§.§ Experimental Setup
We evaluate the proposed Fed-CPrompt based on the CIFAR-100 dataset <cit.>, a widely used dataset in continual learning for classification tasks. We consider a total of 10 clients in FCL. The server-side knowledge aggregation is based on FedAvg <cit.>. The evaluation metrics include average accuracy and average forgetting, which are standard metrics used in previous CL research <cit.>. To comprehensively evaluate Fed-CPrompt, we consider baseline approaches, including rehearsal-free FL approaches (i.e., Fed-EWC and Fed-LWF) and recent prompt-based CL approaches (i.e., Fed-CODAP, Fed-DualP, Fed-L2P). Further details on the dataset setup, FL settings, evaluation metrics, and baseline approaches can be found in Appendix <ref>.
§.§ Experimental Results
Effectiveness of Fed-CPrompt.
We evaluate the effectiveness of Fed-CPrompt under iid and non-iid FL settings. We report the average test accuracy and forgetting over all ten tasks. As illustrated in Table <ref>, Fed-CPrompt gains a significant performance improvement over all rehearsal-free FCL methods under iid settings. Compared with the best of existing works, Fed-CPrompt achieves around a 2% increase in Top-1 accuracy and around a 2% drop in Forgetting.
It should be mentioned that non-prompt-based methods (Fed-EWC and Fed-LwF) optimize about 86 million parameters, while Fed-CPrompt optimizes only 4 million (≈ 4.18%) to achieve better performance. Moreover, Fed-CPrompt has better convergence, which significantly reduces the communication cost for FCL.
Besides, we further compare the Fed-CPrompt with other prompt-based FCL baselines under non-iid settings. In the experiments following <cit.>, we consider two non-iid settings: label skew and quantity skew. As illustrated in Table <ref>, Fed-CPrompt outperforms the existing prompt-based methods under non-iid settings. In particular, under a challenging label-skew setting, Fed-CPrompt achieves a significant performance improvement by 10.65%.
Impact of Asynchronous Continual Learning Tasks. We demonstrate the effectiveness of Fed-CPrompt under asynchronous task arrival, where the clients train the models on different tasks at the same time.
As illustrated in Table <ref>, the average test accuracy of Fed-CPrompt significantly outperforms the existing methods by 2.07% and 11.79% under iid and non-iid settings, respectively.
Our findings suggest jointly considering past and future task information can improve the training efficiency of FCL.
It should be noted that the forgetting of Fed-CPrompt is comparable to or higher than that of the existing works; however, this is due to the high accuracy gained by Fed-CPrompt on the first task. Beyond this effect, we can still observe the substantial advantage of Fed-CPrompt in mitigating catastrophic forgetting, as the average accuracy on all ten tasks achieved by Fed-CPrompt is much higher than that of the existing works.
Impact of C2Loss. We further perform ablation studies to evaluate the effectiveness of the proposed C2Loss. We compare the performance between FedProx and Fed-CPrompt with and without C2Loss. As shown in Table <ref>, Fed-CPrompt with C2Loss achieves the highest accuracy and lowest forgetting compared with the rest two methods.
This is mainly due to that C2Loss handles inter-task and inter-client knowledge transfer, thereby leading to better task discrimination and improved accuracy.
§ CONCLUSION
This paper proposed Fed-CPrompt, an innovative rehearsal-free FCL framework to alleviate catastrophic forgetting over incremental tasks and facilitate knowledge transfer among distributed and heterogeneous clients. Fed-CPrompt introduces two key components: asynchronous prompt learning to handle asynchronous arrival, and a simple yet effective contrastive continual loss that optimizes prompt parameters while providing additional supervision for learning distinct task-specific prompts. Extensive experiments demonstrate the effectiveness of our proposal.
§ PRELIMINARIES FOR PROMPT-BASED CONTINUAL LEARNING
In this work, we build upon the technical foundations of prompt-based methods from the prior centralized continual learning research <cit.> to introduce prompts, which can collaboratively learn in heterogeneous federated settings.
As done in CODA-P, prompt parameters are attached to several multi-head self-attention (MSA) layers in a pre-trained ViT. Define a task-specific prompt parameter for task 𝒯_m as P^𝒯_m∈ℝ^L_P × D ×𝒯_m, where L_P, D, and 𝒯_m are the prompt length, embedding dimension, and the number of prompts for each task, respectively. We consider prefix-tuning to attach prompts to the keys and values of an MSA layer with input h∈ℝ^L × D and the query, key, and value as h_Q, h_K, and h_V. A prompt p is split into {P_K, P_V}∈ℝ^L_p/2× D, which are respectively attached to the key and the value of this layer, i.e., MSA(h_Q, [P_K; h_K], [P_V; h_V]), where [·;·] is a concatenation operation. Since CODA-P achieves SOTA centralized continual learning performance, we adopt the weighted prompt for local training, and the prompt for task 𝒯_m can be calculated by
p^𝒯_m=∑_i∈[1,m]α^𝒯_i P^𝒯_i,
where P^𝒯_m is the learnable prompt for the current task 𝒯_m, and α^𝒯_m=γ(q(x)⊙ A^𝒯_m, K^𝒯_m) measures the cosine similarity γ between the attended query and the key, where the attended query is defined by the element-wise product ⊙ between the query and the learnable attention parameter. The query is produced as q(x)=f(x; θ)∈ℝ^D, where f(·;θ) is the encoder of the pre-trained ViT[We refer the reader to sections 4.1 and 4.2 of the CODA-P <cit.> paper for more details.]. For training task 𝒯_m, the learnable parameters include P^𝒯_m, K^𝒯_m, A^𝒯_m, and the classification head ϕ^𝒯_m, whereas (α^𝒯_i, P^𝒯_i) ∀ i ∈[1, m-1] are frozen but contribute to the training as in Equation (<ref>). In addition, the classification heads of the previous tasks 𝒯_1,⋯, 𝒯_m-1 are frozen.
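The prefix-tuning attachment described above can be sketched as follows; this is a simplified single-head attention of ours (the actual method attaches prompts to several MSA layers of the pre-trained ViT and uses the full multi-head formulation).

```python
import torch
import torch.nn.functional as F

def prefix_attention(h_q, h_k, h_v, prompt):
    """Prefix-tuning: split the prompt into key/value halves and prepend them."""
    P_k, P_v = prompt.chunk(2, dim=0)                 # each [L_p/2, D]
    k = torch.cat([P_k, h_k], dim=0)                  # [L_p/2 + L, D]
    v = torch.cat([P_v, h_v], dim=0)
    attn = F.softmax(h_q @ k.T / (h_q.shape[-1] ** 0.5), dim=-1)
    return attn @ v                                   # [L, D]
```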
§ RELATED WORK
Federated Continual Learning (FCL).
FCL performs addresses catastrophic forgetting across multiple clients trained on their private sequential tasks, where a global model is obtained by exchanging task-specific knowledge via a global server. The mainstream FCL research can be roughly divided into two categories: rehearsal-based and rehearsal-free FCL.
The rehearsal-based research stores and replays information from previous tasks to mitigate the global model's forgetting over time <cit.>. For example, Huang et al. proposed FCCL to address the heterogeneity and catastrophic forgetting in federated learning based on buffered data for intra- and inter-domain knowledge distillation <cit.>. Similarly, Zizzo et al. and Wang et al. leveraged replay buffers and novel data-sharing approaches based on differential privacy to mitigate forgetting <cit.>. To tackle the global model's forgetting brought by heterogeneous clients, Dong et al. introduced a proxy server to store and select the best old models to assist clients' local training <cit.>. While successful, the above rehearsal-based FCL research requires large storage space and complex data-sharing strategies to replay past information, making it challenging to scale over time.
Another category of FCL research is rehearsal-free approaches without storing past information.
One group of rehearsal-free continual learning (CL) expands the model architecture when encountering new tasks <cit.>. However, most architecture-based approaches require task identity to condition the network during inference, leading to their ineffectiveness for class-incremental or task-agnostic CL scenarios, i.e., the task identity is unknown. In this work, we focus on the practical but more challenging class-incremental FCL in a rehearsal-free manner.
Existing FCL research along this line proposed regularizing the model with respect to the previous task knowledge when training a new task. For example, Shoham et al. and Yoon et al. leveraged the weight consolidation method to restrict the updates of the important parameters regarding previous tasks while improving the training performance for the new task <cit.>. Similarly, several recent works implemented knowledge distillation methods to transfer knowledge from the model for the old task to that for the current task <cit.>.
In addition, <cit.> investigates the asynchronous-task FCL while using representation loss and a modified aggregation strategy to address the forgetting across multiple clients asynchronously learning respective tasks.
While the aforementioned research enables class-incremental FCL without the rehearsal buffer, they rely on optimizing the entire model on the client side, leading to heavy communication overhead when iteratively exchanging distributed client knowledge in FL, especially for CL scenarios. To address the limitations of existing research, in this work, we propose a novel rehearsal-free FCL approach for class-incremental learning problems based on prompt learning techniques.
Prompt Learning. Prompt learning has been a popular transfer learning approach that modifies the input sample with input embedding called prompts, aiming to provide additional information to condition the model to perform downstream tasks <cit.>. However, designing the prompt function for various downstream tasks is challenging. Recent research has introduced "soft prompts" to automatically train the learnable prompt parameters to replace the heuristic manual selection, such as the prompt tuning, p-tuning, and prefix tuning <cit.>. Prompt learning has shown great potential for parameter-efficient transfer learning with a small set of prompt parameters. Taking advantage of the small parameter size, Zhao et al. <cit.> and Guo et al. <cit.> adopted prompt learning to improve federated learning efficiency.
Some recent works have implemented prompt learning techniques in CL. Wang et al. proposed L2P by using the key-query-based similarity method to select prompts from a prompt pool to instruct different tasks in CL <cit.>. Later, DualPrompt was introduced as the follow-up to L2P with better CL performance, which learns two sets of disjoint prompt spaces to encode task-specific and task-invariant instructions, respectively <cit.>. More recently, CODA-Prompt was proposed using an attention-based end-to-end key-query method, which produces the input-conditioned prompts to further improve CL performance <cit.>. Nevertheless, the above prompt-based CL approaches designed for centralized datasets cannot be directly used for federated learning scenarios, as they ignore the unique challenges raised by distributed nature of clients, such as the heterogeneous data distribution and asynchronous task arrival over clients. To the best of our knowledge, none of the existing prompt learning research has been done for FCL.
§ ALGORITHM
The overall training process includes four main steps (a-d) as shown in Algorithm <ref>.
(a) The server distributes the prompts and model to each new participating device. (b) Each user first freezes the previous prompt parameters. (c) Each user optimizes the local prompt parameters and classifier head following CE loss and Equation (<ref>). (d) The clients return the locally trained model to the server. Further, the server aggregates the model following classical FedAvg <cit.>. Algorithm <ref> follows steps (a) - (d) until convergence.
§ IMPLEMENTATION DETAILS
In this section, we conduct extensive experiments to evaluate the proposed Fed-CPrompt. We first introduce the experimental setup, followed by the experimental results.
Additionally, the same random seed is used to conduct all experiments for reproducibility.
§.§ Dataset Setup.
The CIFAR-100 dataset consists of 100 classes with 600 samples per class. In the experiments, we divide the dataset into 10 disjoint tasks with 10 classes per task (5,000 training samples per task). We divide the samples on each task among clients following a uniform distribution for the iid settings in federated learning. We implement label-based and quantity-based distribution skew (i.e., label skew and quantity skew) for non-iid settings with non-iid degree β = 0.5 following <cit.>. The test dataset consists of 10,000 samples, with 1,000 samples per task.
§.§ Federated Learning Settings.
The learning rate is set to lr = 0.0001. We also deploy an early-stopping mechanism in each task using a validation set.
We consider 𝒞 = 10 clients, R=40 communication rounds, and local epochs l_epochs=5. The network parameters are optimized using Adam optimizer and a batch size of 128 images. We split the CIFAR100 dataset into 10 tasks, each with 10 classes. This is distributed among the 10 clients following <ref>.
§.§ Asynchronous Tasks.
We consider an asynchronous scenario where different clients learn from different tasks at the same time. Specifically, we select a random set of 5 clients to participate in the following task 𝒯_n = 𝒯_m+1, while the remaining 5 clients remain on task 𝒯_m.
§.§ Baseline Approaches.
We compare our proposed Fed-CPrompt with CODA-Prompt <cit.>, DualPrompt <cit.>, and L2P <cit.> applied to federated settings. These prompt-based methods represent parameter-efficient state-of-the-art solutions in continual learning.
Additionally, we consider conventional non-prompt-based rehearsal-free methods to demonstrate the advantages of prompt-based methods. Specifically, Fed-EWC <cit.> and Fed-LWF <cit.> provide a fair representation of conventional non-prompt-based rehearsal-free methods in continual learning <cit.>. This comparison allows us to show the potential of a prompt-based approach relative to other rehearsal-free methods.
The same set of hyper-parameters, such as the learning rate, batch size, and number of rounds is adopted in the baselines and our proposed Fed-CPrompt.
§.§ Prompt parameters.
We use prefix-tuning <cit.> to attach the prompts to layers (1-5) of the pretrained ViT network <cit.>. The total prompt size is n = 100, and 10 prompts per task. Each prompt is set with length L_p = 8 and embedding dimension D = 768.
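A rough sketch of the resulting prompt pool dimensions is shown below; the tensor organization and the PromptPool class are our own illustration, and the actual wiring of the key/value prefixes into the attention layers is omitted.

import torch
import torch.nn as nn

N_TASKS, PROMPTS_PER_TASK = 10, 10          # total pool size n = 100
L_P, D = 8, 768                              # prompt length and embedding dimension
PREFIX_LAYERS = range(1, 6)                  # ViT layers 1-5

class PromptPool(nn.Module):
    def __init__(self):
        super().__init__()
        # one learnable prompt tensor per layer, indexed by (task, prompt);
        # prefix-tuning later splits each prompt into key/value prefixes
        self.prompts = nn.ParameterDict({
            f"layer{l}": nn.Parameter(
                torch.randn(N_TASKS, PROMPTS_PER_TASK, L_P, D) * 0.02)
            for l in PREFIX_LAYERS
        })

    def for_task(self, layer, task_id):
        # prompts of previous tasks are kept frozen during training (step (b))
        return self.prompts[f"layer{layer}"][task_id]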
§.§ Evaluation metrics.
We evaluate our model on the standard continual learning metrics, including average accuracy and average forgetting, which are widely used in previous works <cit.>. We follow the standard definition of accuracy and forgetting mentioned in <cit.>.
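For reference, the two metrics can be computed from the accuracy matrix acc[i][j] (accuracy on task j after training up to task i) as in the sketch below; this is the standard formulation rather than code from our implementation.

import numpy as np

def average_accuracy(acc):
    T = acc.shape[0]
    # mean accuracy over all tasks after training on the last task
    return float(np.mean(acc[T - 1, :T]))

def average_forgetting(acc):
    T = acc.shape[0]
    # for each old task: best accuracy ever observed minus the final accuracy
    drops = [np.max(acc[: T - 1, j]) - acc[T - 1, j] for j in range(T - 1)]
    return float(np.mean(drops))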
§ ADDITIONAL RESULTS
Training Efficiency.
Figure <ref> and Figure <ref> demonstrate the effect of catastrophic forgetting when training new incremental tasks. Overall, we observe that Fed-CPrompt retains knowledge from previous tasks, mitigating catastrophic forgetting. Additionally, the accuracy per task is higher due to the increased capacity of the prompts compared to those used in DualPrompt and L2P.
Overall, our findings suggest that prompt-based algorithms, especially Fed-CPrompt, can effectively mitigate the problem of catastrophic forgetting and improve the training efficiency of lifelong learning systems.
Impact of Asynchronous Continual Learning Tasks. To investigate the impact of client pacing on the training efficiency of our lifelong learning system, we conduct experiments with varying degrees of client pacing. Specifically, we compare the system's performance when all clients move to the next task simultaneously versus when some clients move to the next task while others are still at the current task. Our results show that when some clients move to the next task, the knowledge of the next task can benefit the current task prompt by providing additional context and improving the convergence speed (Figure <ref>). The model can leverage the knowledge learned from the next task to understand the current task better, leading to faster convergence and improved accuracy.
Moreover, we also explore leveraging the prompts from other tasks: in our design, the clients on task 𝒯_m+1 leverage prompts from task 𝒯_m to improve the convergence speed of the current task. In addition, clients on task 𝒯_m have an increased capacity due to the additional prompt from task 𝒯_m+1. Our experiments show that incorporating prompts from previous tasks into the current task prompt can significantly improve the convergence speed and reduce the training time, as shown in Figure <ref>. This is because the model can reuse the knowledge learned from previous tasks and incorporate it into the current task prompt to improve its understanding of the new task.
|
http://arxiv.org/abs/2307.05709v1 | 20230711182612 | Structural and magnetic properties of Fe-Co-C alloys with tetragonal deformation: a first-principle study | [
"Wojciech Marciniak",
"Mirosław Werwiński"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci"
] |
[email: ][email protected]
Institute of Molecular Physics, Polish Academy of Sciences, M. Smoluchowskiego 17, 60-179 Poznań, Poland
Institute of Physics, Poznan University of Technology, Piotrowo 3, 60-965 Poznań, Poland
[email: ][email protected]
Institute of Molecular Physics, Polish Academy of Sciences, M. Smoluchowskiego 17, 60-179 Poznań, Poland
Fe-Co alloys with induced tetragonal strain are promising materials for rare-earth-free permanent magnets.
However, as ultrathin-film studies have shown, tetragonal Fe-Co structures tend to a rapid relaxation toward a cubic structure as the thickness of the deposited film increases.
One of the main methods of inducing the stable strain in the bulk material is interstitial doping with small atoms, like B, C, or N.
In this work, we present a full configuration space analysis in density functional theory approach for supercells with a single C impurity in one of the octahedral interstitial positions and for the full range of Co concentrations x.
We discuss all assumptions and considerations leading to calculated lattice parameters, mixing enthalpies, magnetic moments, and averaged magnetocrystalline anisotropy energies (MAE).
We present a comprehensive qualitative analysis of the structural and magnetic properties' dependence on short- and long-range ordering parameters.
We analyzed all unique Fe/Co atom occupations at all stoichiometric concentrations possible in a 2 × 2 × 2 supercell based on a 2-atom tetragonal representation.
We rely on the thermodynamic averaging method and large sample count to obtain accurate MAE values.
We reevaluate several chemical disorder approximation methods, including effective medium methods (virtual crystal approximation and coherent potential approximation) and special quasirandom structures method applied to Fe-Co-based alloys.
We observe a structural phase transition from the body-centered tetragonal to a face-centered cubic structure above 70% Co concentration and confirm the structural stability of Fe-Co-C alloys in the tetragonal range.
We show the presence of a broad MAE maximum around about 50% Co concentration and notably high MAE values for Co content x as low as 25%.
In addition, we show the presence of a positive correlation between MAE and mixing enthalpy.
Structural and magnetic properties of Fe-Co-C alloys with tetragonal deformation: a first-principle study
Wojciech Marciniak
Mirosław Werwiński
August 12, 2023
=========================================================================================================
§ INTRODUCTION
Permanent magnets are an indispensable part of modern technology.
Among their main characteristic parameters are the energy product (BH)_ max and coercive field H_ C.
(BH)_ max determines the efficiency of a permanent magnet and mainly depends on the saturation magnetization M_ S and coercive field.
Most of the current high-end magnets, with outstanding performance, contain rare-earth elements, such as samarium in SmCo_5 and neodymium in Nd_2Fe_14B.
However, rare-earth-based magnets have limitations, such as the relatively low Curie temperature of neodymium magnets, which is insufficient for many applications.
Moreover, concerns have arisen recently about the fragility of the rare-earth market, which manifested in the so-called rare-earth crisis in 2011 <cit.>.
Hence, intense research for rare-earth-free permanent magnets has been conducted in the following years.
Many potential candidates have been discovered, including MnBi, MnAl, and FeNi magnets <cit.>.
Currently, rare-earths prices tend towards levels similar to those during the crisis period, encouraging further efforts towards developing efficient rare-earth-free permanent magnets.
One promising material among many transition-metal-based (TM-based) candidates is the Fe-Co alloy.
Burkert showed, using density functional theory (DFT) calculations, that in uniaxially strained body-centered tetragonal (bct) disordered iron-cobalt (Fe-Co) alloys, a giant magnetocrystalline anisotropy energy (MAE) of about 800 μeV atom^-1 (over 10 MJ m^-3) can be achieved for Co concentration x close to 0.6 and lattice parameter ratio c/a close to 1.22 <cit.>.
Such MAE value is comparable to properties observed for SmCo_5, Nd_2Fe_14B, and FePt, while at the same time, the saturation magnetization of Fe-Co significantly exceeds the values observed for aforementioned materials.
Afterward, many systems have been synthesized following the epitaxial Bain path <cit.>, including Fe-Co/Pt multilayers <cit.> and deposition of Fe-Co on Pd (001) <cit.>, Ir (001) <cit.>, and Rh (001) buffers <cit.>.
However, the thin-film experiments showed MAE values lower than those predicted by Burkert.
Neise <cit.> showed that the discrepancies between the theoretically predicted MAE and the measured values could be attributed to the virtual crystal approximation (VCA) utilized by Burkert.
Using a 2 × 2 × 2 supercell approach with atom arrangements modeled according to the most random nearest-neighbor patterns, they showed that ordered phases of Fe-Co have larger MAE than disordered ones, which was confirmed later by Turek <cit.>.
They also proposed the preparation of the epitaxial films along the Bain path <cit.>, which has since been realized by Reichel <cit.> on the Au_xCu_1-x buffer, offering a possibility to tailor the lattice parameter in a wide range <cit.>.
Turek further improved the theoretical prediction, ascribing the difference between calculated and experimental MAE (a factor of 3 – 4) to the VCA.
Utilizing a more sophisticated treatment of the chemical disorder, namely the coherent potential approximation (CPA) <cit.>, they obtained much lower MAE values, with a less sharp maximum of 183 spanning a wider range of Co concentrations between about 0.5 and 0.65 for c/a ≈ 1.22 <cit.>.
They also showed that ordering of the alloys towards the L1_0 phase (derived from the B2 CsCl structure elongated along the z-axis) could significantly increase the MAE (by a factor between 2 and 3) to 450 for Fe_0.4Co_0.6 and 580 for Fe_0.5Co_0.5, corresponding well with the theoretical value of 520 from Ref. <cit.>.
Experiments and further calculations have shown that bct thin films are prone to a rapid relaxation towards the body-centered cubic (bcc) structure above the critical thickness of about 15 monolayers (about 2 nm) <cit.>.
Additions of small interstitial atoms such as B, C, and N were proposed to stabilize the necessary tetragonal distortion by the formation of martensite phase.
Using special quasirandom structures (SQS) method <cit.> in supercells, multiple authors obtained a bct structure with c/a lattice parameters ratio as high as 1.12 – 1.17 <cit.>.
Several experimentally obtained systems have confirmed these predictions <cit.>, although there is still plenty of room for further improvements.
Two above-mentioned MAE enhancement methods, namely (i) strain induced by a lattice mismatch between two epitaxially grown layers and (ii) spontaneous lattice distortion due to impurities, are summarized in the recent review by Hasegawa <cit.>.
Steiner performed an Fe-Co case study by averaging over completely random structures in a 2 × 2 × 2 supercell <cit.>.
They suggested that proper caution has to be placed on the averaging method since CPA and VCA are effective medium methods that do not describe local structure relaxation and reduced symmetry.
Despite their concerns, they obtained MAE values similar to the CPA results reported previously by Turek <cit.>.
Since then, many articles have focused on a supercell approach applied to selected cases of doped with boron <cit.>, carbon <cit.>, and nitrogen <cit.>, mostly regarding (i) the phase derived from B2 (CsCl) structure strained along the z-axis, or (ii) the Fe_0.4Co_0.6 disordered alloy.
For (Fe-Co)_2B, Däne performed a sampling of the full configuration space of the 12-atom supercell, again using the argument that VCA and CPA do not correctly describe the distribution of possible MAE values and the influence of the chemical neighborhood and local geometry optimization.
They observed a significant spread of the MAE values with an overall average in good agreement with the experiment.
They argue that treating a ”true” disorder is certainly beneficial.
They also noted that it is necessary to average over sufficiently large supercells, as the supercell size can significantly affect the MAE values obtained <cit.>.
The discussion about configuration space analysis is connected with symmetry and ordering in the supercell.
Given the vast data set regarding multiple structures in a single crystal system, analysis of ordering towards specific structures is straightforward to implement; it provides more insight into physical phenomena occurring.
Works on the energy states of closely related structures date back to the 1930s–1960s, including contributions from Bethe, Bragg, Williams, Warren, and Cowley to the short-range and long-range order analysis methods of that period <cit.>.
Recently, a notable example of ordering effects analysis closely related to our work includes research on the FeNi ordering towards the L1_0 phase performed by Izardar, Ederer, and Si <cit.>.
Here, we present a complete analysis of all stoichiometric compositions modeled in a 2 × 2 × 2 supercell.
We consider all possible symmetrically inequivalent arrangements of Fe and Co atoms.
The aim of the study is to predict the phase stability and intrinsic magnetic properties for the full range of Co concentrations of the system and place it in the frame of works on Fe-Co, Fe-Co-B, Fe-Co-N, and Fe-Co-C alloys.
To achieve it, we study the full configuration space of the 17-atom representation of the Fe-Co-C system and explore this approach to crystallize the most effective method of similar analyses for future applications.
§ CALCULATIONS' DETAILS
§.§ System preparation
We used the full-potential local-orbital (FPLO18.00) code <cit.> with the generalized gradient approximation (GGA) exchange-correlation functional in the Perdew, Burke, and Ernzerhof (PBE) <cit.> parametrization for all calculations.
The use of FPLO was dictated by, inter alia, the inherent implementation of the full-potential approach (i.e., omitting the crystalline potential shape approximation), and the expansion of the extended states in terms of localized atomic-like numerical orbitals basis <cit.>.
The full-potential approach is particularly essential for accurately determining a subtle quantity such as MAE.
Another important factor in choosing FPLO is the very high performance of the code, at the expense of the lack of multithreading.
In our approach, scaling multiple single-thread calculations up in an embarrassingly parallel manner is the optimal solution.
Initially, we built a 2 × 2 × 2 supercell of the 2-atom body-centered system representation in the P4/mmm space group (s.g. 123).
The result is a computational cell containing a total of 16 Fe/Co atoms.
Initial atomic positions were assumed to be perfect (0, 0, 0) and (1/2, 1/2, 1/2) in each unit cell, and a single C atom was introduced as an octahedral interstitial dopant on the (0, 0, 1/4) site in the supercell.
The resultant structure is shown in Fig. <ref>(a).
Structures visualizations were prepared in VESTA software <cit.>.
The carbon concentration in the prepared models is about 6 at% and 1.25 wt% (1 C atom per 16 TM atoms).
Initial atomic positions were optimized for Co concentrations equivalent to all stoichiometric cases in the 17-atom supercell (Fe_16C, Fe_15CoC, Fe_14Co_2C, ..., Co_16C).
At this stage, we used VCA for the disorder treatment, a 6 × 6 × 6 k-point mesh, 10^-5 density and 10^-7 Ha (∼2.72 × 10^-6 eV) energy convergence criteria, and a 10^-3 eV Å^-1 force tolerance for the initial optimization.
Cell volume and c/a optimization were performed based on a third-order surface fit to the energy versus computational cell volume in the 160 – 208 Å^3 range, incremented by 1 Å^3, and c/a ratios in the 1.05 – 1.16 range, incremented by 0.01.
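A simplified sketch of this equilibrium-geometry search is given below; the least-squares polynomial fit and the use of SciPy are our own illustrative choices and are not part of FPLO.

import numpy as np
from scipy.optimize import minimize

def fit_energy_surface(volumes, ratios, energies):
    # volumes, ratios, energies: 1D arrays with one entry per (V, c/a) DFT run
    volumes, ratios, energies = map(np.asarray, (volumes, ratios, energies))

    def design(v, r):
        # all monomials v^i * r^j with total degree <= 3
        return np.column_stack([v**i * r**j
                                for i in range(4) for j in range(4) if i + j <= 3])

    coeff, *_ = np.linalg.lstsq(design(volumes, ratios), energies, rcond=None)

    def surface(x):
        return (design(np.array([x[0]]), np.array([x[1]])) @ coeff).item()

    res = minimize(surface, x0=[np.mean(volumes), np.mean(ratios)],
                   bounds=[(volumes.min(), volumes.max()),
                           (ratios.min(), ratios.max())])
    return res.x  # equilibrium (V, c/a) within the sampled ranges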
Uniaxial elongation of the cell was assumed after Reichel <cit.>.
The preparation of the VCA system ended with a full optimization of atomic positions for the minimum of the mentioned fit.
We used a scalar-relativistic approach with the same parameters as before.
An exemplary resultant structure for the Fe_8Co_8C system is shown in Fig. <ref>(b).
In the final step of structures' preparation, atomic sites were populated with all possible discrete, stoichiometric, geometrically inequivalent Fe/Co occupations.
The equivalency was determined based on the initial, perfect body-centered tetragonal geometry.
4 195 unique combinations were obtained out of 65 534 total combinations without repetitions, including 748 unique combinations out of 12 870 for the Fe_8Co_8C case alone.
The criterion of identity between the combinations was the equality of all interatomic distances between all atom types, i.e., Fe–Fe, Co–Co, Fe–Co, Fe–C, and Co–C, in the initial, perfect supercell.
This criterion can be proven to be unambiguous, and it directly couples each combination with the distribution of minority atoms in the supercell, for example through the short-range order parameters described later.
This approach provided us with a relatively simple method for preliminary analysis.
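A simplified sketch of this enumeration is given below; the distance matrices dist_tm (16 × 16 TM–TM distances) and dist_c (16 TM–C distances) are assumed to be precomputed for the ideal supercell, and the function names are our own.

from itertools import combinations

def fingerprint(co_sites, dist_tm, dist_c):
    # sorted lists of Fe-Fe, Co-Co, Fe-Co, Fe-C, and Co-C distances as a hashable key
    co = set(co_sites)
    pairs = {"FeFe": [], "CoCo": [], "FeCo": [], "FeC": [], "CoC": []}
    for i, j in combinations(range(16), 2):
        if i in co and j in co:
            kind = "CoCo"
        elif i not in co and j not in co:
            kind = "FeFe"
        else:
            kind = "FeCo"
        pairs[kind].append(round(float(dist_tm[i][j]), 4))
    for i in range(16):
        pairs["CoC" if i in co else "FeC"].append(round(float(dist_c[i]), 4))
    return tuple(tuple(sorted(v)) for v in pairs.values())

def unique_configurations(n_co, dist_tm, dist_c):
    seen = {}              # fingerprint -> multiplicity n_nu (used later for averaging)
    representatives = {}
    for co_sites in combinations(range(16), n_co):
        fp = fingerprint(co_sites, dist_tm, dist_c)
        if fp not in seen:
            seen[fp] = 0
            representatives[fp] = co_sites
        seen[fp] += 1
    return list(representatives.values()), list(seen.values())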
Electron density was then converged in the scalar-relativistic mode, using 9 × 9 × 9 k-points over the entire Brillouin zone, following five additional force optimization steps for every structure to prevent numerical artifacts.
For this step of the calculations, convergence criteria were set at 10^-6 density and 10^-8 Ha (∼2.72 × 10^-7 eV).
One of the final Fe_8Co_8C structures is presented in Fig. <ref>(c).
Relevant magnetic parameters were derived based on the converged electron density and systems' energies, as described later.
Those include magnetocrystalline anisotropy energies (MAE), mixing enthalpies (Δ H_ mix), magnetic hardness parameter (κ), Bethe short-range order parameter (σ), Warren-Cowley short-range order parameter (α^XY) for first coordination shell, and long-range ordering parameter towards B2 phase (S).
Specific equations and methods relevant to detailed parts of the presented work are introduced further alongside the results.
§.§ Assumptions and ensemble averaging methods
We estimate our MAE results for each data point to be within 15% relative error due to relatively low k-point mesh.
Obviously, obtaining accuracy within 1% for each considered structure would be highly valuable.
However, raising the accuracy would greatly increase the computational cost beyond current capabilities.
Obtained system energies and the mixing enthalpies are much more accurate.
Bound by this limitation, we focus on qualitative trends and averages in more subtle values, such as MAE.
We assume the error imposed by the low k-point mesh for each data point is random and non-cumulative.
We utilize thermal averaging after Däne <cit.> to include influence of non-optimal ground level energy states:
MAE(T) ≡∑_ν[ MAE_ν· exp(-E_ν/ k_BT) · n_ν]/∑_ν[ exp(-E_ν/ k_BT) · n_ν]
where E_ν denotes the total energy of a unique atomic arrangement combination ν, MAE_ν represents its magnetocrystalline anisotropy energy, and n_ν is the number of geometrically equivalent configurations.
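As an illustration, Eq. <ref> can be evaluated directly as in the following sketch, with total energies in eV and the multiplicities n_ν passed explicitly; the function name and unit choices are our own.

import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def thermal_average_mae(mae, energies, mult, temperature=300.0):
    mae, energies, mult = map(np.asarray, (mae, energies, mult))
    # subtract the minimum energy for numerical stability; weights are unaffected
    w = mult * np.exp(-(energies - energies.min()) / (K_B * temperature))
    return float(np.sum(w * mae) / np.sum(w))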
An important part of the discussion is whether the averaging assumed in Eq. <ref> is proper.
Foremost, we acknowledge the fact that at room temperature, a vast part of the system does not occupy the ground state, which is calculated in plain DFT.
It results, e.g., in the real magnetic moments being lower than predicted.
More importantly for us, Eq. <ref> does not account for factors such as the height of the energy barriers between atom arrangements in the cell.
In fact, if the energy barrier is high enough, simple arithmetic averaging should be more appropriate.
The height of the energy barrier between the conformations could be obtained by, for example, the nudged elastic band (NEB) method <cit.>.
However, it is not yet computationally feasible to obtain the heights of all possible transition barriers <cit.>.
Obtaining at least a few values of the barriers in the near future could be beneficial.
The solution is, however, not compatible with our methods.
Less accurate but less costly linear scaling DFT methods could be utilized to obtain rough values of the barriers.
Moreover, this thermodynamic approach results in the configurations' statistical distribution corresponding to slow cooling.
We can further assume that, although the obtained results do not rely solely on the most optimal atomic arrangements, the lowest-energy structures contribute the most to the overall MAE.
Overall, Eq. <ref> certainly works for situations corresponding to slow cooling of the alloy.
Hence, it is another assumption in our work that applies to thermal averages.
Apart from the assumptions, an important factor to note is the notation we use to describe various C impurity nearest-neighbor patterns.
Those designations (Fe-C-Fe, Co-C-Co, and especially Fe-C-Co) should not be mistaken with the common Fe-Co-C system designation, which we also utilize in this work.
§ RESULTS AND DISCUSSION
§.§ Structural properties
We will first discuss the structural parameters of the alloys under consideration.
During the VCA geometry optimization, we observed a structural phase transition from body-centered tetragonal (bct) to face-centered cubic (fcc) structure, which occurs between 11 and 12 Co atoms in the supercell (between 69 and 75% Co concentration), see Fig. <ref>.
It corresponds to the well-known phase transition towards the hexagonal close-packed structure for high Co concentration in Fe-Co.
The fcc structure is the closest to the hcp structure we can obtain under the assumed constraints.
Although unstable at the standard conditions, the fcc structure for pure Co has been obtained in the high-pressure regime by Yoo <cit.>.
Unit cell volume decreases monotonically with Co concentration after a weak peak for a single Co atom in the supercell, with a significant drop with the transition from bct to fcc structure.
A distinct maximum in unit cell volume has been argued by Pauling and other authors, as brought up recently by Díaz-Ortiz, to be of the same nature as the peak in magnetization (Slater-Pauling curve) <cit.>.
The weak maximum we obtained contradicts the expected Slater-Pauling-like shape of the curve brought to attention by Prinz <cit.> and successfully reproduced in calculations, e.g., by Díaz-Ortiz <cit.> and Steiner <cit.>, with a distinct maximum at around 20–30% Co in Fe-Co.
We ascribe this discrepancy to the presence of the dopant atom in the unit cell.
Nevertheless, a noticeable positive deviation from Vegard's law is apparent.
A similar influence of a small interstitial dopant on the structural (and magnetic) parameters has been observed by Chandran for the N-doped system <cit.>.
The exact lattice parameters obtained using the VCA in the bct regime are a ranging from 2.75 Å for Fe_5Co_11C to 2.82 Å for Fe_15Co_1C, and c ranging from 3.01 Å for Fe_5Co_11C to 3.07 Å for Fe_12Co_4C.
Resultant optimized volume of the bct systems ranges from 185.8 Å^3 for Fe_5Co_11C to 192.3 Å^3 for Fe_15Co_1C.
Consistency with Fe_16C supercell volume obtained by Delczeg-Czirjak <cit.> in VASP code (about 196 Å^3) is good, as well as comparison to experimental value (about 183 Å^3) obtained by Reichel for (Fe_0.4Co_0.6)_0.98C_0.02 <cit.>.
The result for equiatomic (Fe_0.5Co_0.5)_16C (188 Å^3) is close to values obtained by Khan and Hong in equiatomic (Fe_0.5Co_0.5)_32C (about 187 Å^3) <cit.> and (Fe_0.5Co_0.5)_32N (about 188 Å^3) <cit.>.
It is also close to the result by Odkhuu and Hong for (Fe_0.5Co_0.5)_16N (about 190 Å^3) <cit.>.
Similar values have also been presented for B-doped alloys by Reichel <cit.>.
This slight overestimation of the transition metal alloy lattice parameter is an expected behavior of the applied PBE exchange-correlation functional.
Díaz-Ortiz provided an excellent review of structural parameters, magnetic moments, and stabilities of Fe-Co alloys calculated from first principles.
They listed several other results of unit cell volume for Fe-Co, ranging from 180 to 190 Å^3 per 16-atom cell <cit.>.
Most importantly, Delczeg-Czirjak showed that lattice parameters do not exhibit any significant dependency on the atomic configuration exemplified by the C impurity nearest neighbors <cit.>.
We followed the assumption of not optimizing lattice parameters for every configuration, as it would be too computationally demanding.
Derived lattice parameters lead to the c/a ratio in the bct regime rising from 1.07 in the case of Fe_16C to 1.12 for Fe_5Co_11C.
It is in agreement with the initial assumption of Burkert <cit.> and following theoretical estimations of uniaxial strain induction by interstitial impurities <cit.>.
Reichel presented an experimental c/a value of 1.05 for B-doped Fe_0.38Co_0.62 and c/a values of 1.03–1.04 for (Fe_0.4Co_0.6)_16C, which are lower than the value of approximately 1.10 obtained in earlier calculations reported in the literature and also predicted by us.
They provided several possible reasons for the observed difference in their work <cit.>.
A phase transition from bct to fcc has also been previously reproduced computationally by Delczeg-Czirjak for a Co concentration around 65 at% <cit.>.
Uniaxial strain in the order of a few percent has been numerously shown to lead to reasonable MAE values <cit.>, which can be further improved, e.g., by buffer-induced effects in thin-film applications <cit.>.
§.§ Mixing enthalpy and basic magnetic properties versus Co concentration
A basic parameter describing the system is the mixing enthalpy.
It provides information about the tendency towards the formation of respective structures instead of separation into their constituent phases (in this case, pure Fe- and Co-based phases).
For each structure, we calculated the mixing enthalpy Δ H_mix between bct Fe_16C and fcc Co_16C using an equation analogous to the one used by Díaz-Ortiz, for convenient comparison with their results <cit.>:
Δ H_mix(x) = E_(Fe_1-xCo_x)_16C - xE_Co_16C - (1 - x)E_Fe_16C,
as it, in fact, is the same quantity they calculated for ordered structures in 2 × 2 × 2 supercells.
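For clarity, the quantity of Eq. <ref> reduces to the following short function (total energies of the three supercells in the same units); a per-atom value, as quoted below, would follow from dividing by the number of atoms in the computational cell.

def mixing_enthalpy(e_alloy, x_co, e_fe16c, e_co16c):
    # e_alloy: total energy of (Fe_{1-x}Co_x)_16C; e_fe16c, e_co16c: end-member energies
    return e_alloy - x_co * e_co16c - (1.0 - x_co) * e_fe16c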
The results, presented in Fig. <ref>(a), correspond well with the aforementioned data for Fe-Co.
The absolute values of Δ H_mix (up to 8 mRy atom^-1) are only slightly lower than the values of up to 9 mRy atom^-1 calculated for Fe-Co by Díaz-Ortiz <cit.>.
It indicates the stability of both disordered and ordered alloys with a minor structure destabilization by the dopant.
Overall, the magnitude of mixing enthalpies suggests good mixing potential, comparable to both TM alloys and steels.
Moreover, the shape of the curve suggests the stability of each of the structures relative to neighboring ones, up to 11 Co atoms in the system, or up to the calculated bct-fcc transition.
Furthermore, a slight asymmetry in the dependence of mixing enthalpy on x can be observed.
On average, the systems closer to the Co-side have lower energies, especially for Co-C-Co systems.
However, the absolute minimum for Co-C-Co systems occurs for Fe_8Co_8C.
For Fe-C-Co, and especially Fe-C-Fe systems, the minimum is moved to the left.
The effect of ordering on the mixing enthalpy will be discussed in the following sections.
On average, for the region around the equiatomic Fe_8Co_8C, the energy of the systems with C impurity neighbored by two Co atoms is lower compared to systems with the C atom adjacent to two Fe atoms or one Fe and one Co atom.
This is consistent with the observation by Delczeg-Czirjak <cit.> that the energy of Fe-Co-C systems depends mainly on the direct chemical neighborhood of the impurity atom, with a preference towards Co-C-Co nearest neighbors sequence.
A similar effect has been calculated by Chandran for N-doped Fe and Fe-Co <cit.>.
Such behavior contradicts the neglect of the direct chemical neighborhood of the impurity atom in the earlier works of Khan and Hong <cit.>.
However, we will try to show that despite notable influence on exact quantitative results, neglecting the direct C neighborhood does not alter the qualitative trends in the system and possibly in other interstitially doped systems.
Surprisingly, we can observe a tendency towards the energetic preference of systems containing Fe-C-Fe nearest neighbors sequence for low Co concentrations.
The rapid increase in mixing enthalpy for Co-rich systems is consistent with mixing enthalpies calculated by Díaz-Ortiz and instability of Co-rich bct alloys observed in experiments <cit.>.
In Fig. <ref>(b), we see a decrease in average spin magnetic moments per TM atom with increasing Co concentration.
The average magnetic moment on an Fe atom in Fe_16C is 2.38 μ_B, and the average magnetic moment on a Co atom in Co_16C is 1.53 μ_B.
There is a positive deviation from a linear change with x, similar to the Slater-Pauling-like characteristics of unit cell volume versus x dependency.
As seen in partial Fe and Co contributions to the average spin magnetic moment, this deviation from a linear trend stems from the Fe contribution.
The partial contribution from Co magnetic moments increases linearly.
However, as opposed to the pure Fe-Co results reported by Bardos <cit.>, we do not observe a characteristic, sharp maximum related to the Slater-Pauling behavior.
There is a considerably low deviation in average Fe, Co, and total TM magnetic moments across different configurations.
The structural phase transition, between 11 and 12 Co atoms, affects magnetic moments on both Fe and Co atoms, but the change is minimal.
Giannopoulos found the magnetization in thin films of (Fe_0.45Co_0.55)-C with 20 at% C to be in the range of 1600 emu cc^-1 <cit.>, which translates to about 2.05 .
In the literature review performed by Díaz-Ortiz, as well as in their own results, we can find average magnetic moments in bcc Fe and bcc Co ranging from 2.13 to 2.35 μ_B on Fe atoms and from 1.53 to 1.77 μ_B on Co atoms.
Their MBPP/PBE (mixed-basis pseudopotential code) calculations for ordered Fe-Co phases yield a total magnetic moment of 2.36 μ_B for the Fe_3Co DO_3 phase, 2.29 μ_B for the Fe-Co B2 phase, and 2.00 μ_B for the FeCo_3 DO_3 phase <cit.>.
Similarly, Chandran reported from VASP/GGA that bcc Fe has a magnetic moment of 2.22 μ_B and bcc Co has a magnetic moment of 1.59 μ_B, not counting the orbital moment contribution, which for both systems should be around 0.10–0.15 μ_B <cit.>.
For C-doped systems, Delczeg-Czirjak found in SPR-KKR/PBE (spin-polarized relativistic Korringa-Kohn-Rostoker) with CPA that the average magnetic moment drops from 2.2 μ_B in systems with compositions close to Fe_0.4Co_0.6 to around 1.8 μ_B in systems with compositions close to (Fe_0.4Co_0.6)_16C <cit.>.
Possible giant MAE values are the property that initially brought attention to the Fe-Co system.
Hence, MAE is among the first characteristics of the system to consider.
We calculated MAE according to the formula:
MAE = E_100 - E_001,
where E_100 and E_001 denote the system's energies in the [1 0 0] and [0 0 1] magnetization axis directions (hard and easy axis in the bct structure, respectively).
More precisely, we performed a single-step of fully-relativistic calculations in two orthogonal directions, [1 0 0] and [0 0 1], over a charge density self-consistently converged in the scalar-relativistic approach <cit.>, a method proven previously to be both accurate and effective <cit.>.
Figure <ref>(c) presents MAE versus x for all configurations, as well as thermodynamical averages according to Eq. <ref> and assuming T = 300 K for each Co concentration.
To provide an approximate scale in MJ m^-3, we assume a uniform, average cell volume of 186 Å^3 across all Co concentrations and TM atom configurations.
Vertical histograms are scaled to fit the width between points and represent the data spread.
There is an apparently unimodal distribution of all MAE results over the whole x range among all configurations.
A closer look at the lowest-energy configurations reveals a bimodal distribution, with MAE values being either very high or near zero.
We can observe that MAE varies hugely between configurations, with the absolute maximum for 7 Co atoms in the 16 TM-atom supercell.
With more than 11 Co atoms in the system, we observe a rapid decrease and change in the sign of MAE, associated with the phase transition.
The large differences in MAE between individual configurations are consistent with similar results for ordering towards the L1_0 phase in equiatomic FeNi obtained by Izardar, Ederer, and Si.
While we focus on qualitative trends with loose convergence criteria for each data point, they conducted fully converged calculations for several dozen structures <cit.>.
Focusing on qualitative trends, in Fig. <ref>(c), we see a broad maximum for x ≃ 0.25 – 0.75.
According to Eq. <ref>, we obtained an average MAE of 0.75 MJ m^-3 for Fe_8Co_8C.
MAE decreases by around 20% between x = 0.5 and x ≃ 0.3.
It is in contrast to a rapid drop in MAE for low Co concentrations reported by Delczeg-Czirjak (65% drop between x = 0.6 and x = 0.3).
We obtained nearly the same MAE values for x ≃ 0.6 and x ≃ 0.3.
Intriguingly, we observe several configurations with relatively high MAE values for such low Co concentration as 0.25.
Our findings of notable, positive MAE for low Co concentrations contradict earlier results obtained with effective medium methods.
VCA and CPA yielded negative MAE for the Fe-Co alloy at low Co concentrations, as seen in the MAE versus c/a versus x maps by Burkert and Turek <cit.>.
On the other hand, it is consistent with the findings of Steiner, who, from supercells, reported positive MAE in a much wider Co concentration range, and Wu, who reported high MAE for Fe_12Co_4C and Fe_11Co_5C <cit.>.
Moreover, we can observe a few high-MAE configurations among the 5% most preferable ones, see green histograms in Fig. <ref>(c).
The thermodynamically averaged MAE values over 5% of the lowest energy configurations overestimate averages of all symmetrically non-equivalent configurations.
It suggests the non-negligible influence of high-energy (and hence low probability) structures stemming from their quantity.
Our quantitative MAE results can be placed in the context of numerous works describing selected atomic configurations in pure Fe-Co, as well as B-, C-, and N-doped systems, realized both experimentally and by DFT calculations to date.
Giannopoulos experimentally found the MAE of C-doped Fe_0.45Co_0.55 thin films to be on the order of 0.8 <cit.>, the exact same value as obtained by Reichel for (Fe_0.4Co_0.6)_0.98C_0.02 thin films <cit.>.
Reichel also showed, from a combined DFT and experimental analysis, that the (Fe_0.4Co_0.6)_32C system possesses a slightly lower MAE of the order of 0.5 and much higher stability for relatively thick films <cit.>.
They also reported that B-doped alloys behave similarly, with a slightly higher MAE than the C-doped system <cit.>.
Odkhuu and Hong provide similar results for the N-doped system <cit.>.
Delczeg-Czirjak found the MAE for Fe_6Co_10C to be on the order of 51 μeV atom^-1, or 0.75 MJ m^-3, as calculated in WIEN2k/SQS, higher than their SPR-KKR/CPA result (41.6 μeV atom^-1) <cit.>.
For B2 Fe-Co-C and Fe-Co-N systems, Khan and Hong reported MAE values of 0.65 and 0.58 , respectively <cit.>.
Overall, our results agree well with previous calculations and experiments wherever direct comparison is possible.
Qualitative trends among major magnetic properties are similar and quantitative results lie close to previous DFT data.
However, the dataset we provide is vastly greater than anything currently available in the literature.
§.§ Magnetocrystalline anisotropy energy and magnetic hardness in relation to the mixing enthalpy
To systematize the dataset, we first analyze the dependency of MAE on the mixing enthalpy.
This dependency for all configurations is shown in Fig. <ref>(a).
We see an increase of MAE with lowering the system enthalpy, indicating the preference towards high-MAE structures.
There is a significant scatter of values for separate systems around the average.
Systems with the dopant atom neighbored by two Co atoms have noticeably larger MAE and lower mixing enthalpy relative to the systems with Fe-C-Fe and Fe-C-Co nearest neighbors (NN) sequence.
To further explore the usability of investigated structures, we calculate magnetic hardness.
It is a parameter describing the system resistance towards spontaneous self-demagnetization and can be defined as <cit.>:
κ = √(K_1/μ_0M_S^2),
where K_1 is the magnetic anisotropy constant, M_S is the saturation magnetization, and μ_0 is the vacuum permeability.
A simple empirical rule is that a permanent magnet candidate needs κ greater than 1 to resist self-demagnetization.
κ is a useful technical value, as plenty of magnets with relatively low MAE values are manufactured widely due to their high magnetic hardness and low materials cost.
In the case of the Fe-Co-C system, numerous experimental realizations showed a possibility of further amendment of the system to at least double its MAE by tuning the c/a ratio, where interstitial doping can be combined with growth on specifically tailored substrates <cit.>.
We also previously showed the positive effect of 5d doping of a similar system <cit.>.
Hence, we are interested in promising compositions showing at least semi-hard magnetic properties due to C-doping alone.
Skomski and Coey described systems with κ around 0.5 as semi-hard <cit.>.
We mark the κ = 0.5 value in Fig. <ref>(b).
In our estimation, we assume K_1 equals MAE, as defined before.
Saturation magnetization is derived from the calculated total magnetic moment and cell volume.
Thus, we can expand Eq. <ref> to the form:
κ = √(E_100 - E_001/μ_0 [ ∑_i M_i/V]^2),
where i is the atomic site in the computational cell, M_i is the total magnetic moment of the atom occupying site i, and V is the computational cell volume.
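A sketch of this estimate in SI units is given below; the constants and unit handling are our own choices, with the MAE passed per computational cell in eV and the cell volume in Å^3.

import math

MU_B = 9.2740100783e-24   # Bohr magneton, J/T
MU_0 = 4.0e-7 * math.pi   # vacuum permeability, T m/A
EV = 1.602176634e-19      # 1 eV in J

def magnetic_hardness(mae_ev, total_moment_mu_b, volume_ang3):
    volume_m3 = volume_ang3 * 1e-30
    k1 = mae_ev * EV / volume_m3                    # anisotropy constant, J/m^3
    m_s = total_moment_mu_b * MU_B / volume_m3      # saturation magnetization, A/m
    return math.sqrt(k1 / (MU_0 * m_s**2))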
Figure <ref>(b) presents the resultant magnetic hardness versus mixing enthalpy relation.
It is similar to the MAE dependency on the mixing enthalpy, presented in Fig. <ref>(a).
The magnetic hardness of many configurations exceeds the conventional limit of 0.5 for semi-hard magnetic materials but does not exceed 0.9, remaining below the limit for hard magnetic materials.
Odkhuu and Hong reported similar values of κ, ranging from 0.5 to 1, for the Fe-Co-N system <cit.>.
From Eq. <ref>, we can see that there are two main ways to improve the magnetic hardness of the sample.
We can either improve MAE or reduce magnetic moment.
For permanent magnet applications, we are at the same time interested in as high saturation magnetization as possible.
It implies that improving magnetic anisotropy while maintaining relatively high magnetic moments is of interest.
Alternatively, achieving high magnetic hardness at the cost of magnetic moment can be beneficial in case of sufficient economic advantage.
Relatively negligible changes in total magnetic moment across configurations with the same Co content suggest that, in our case, the MAE changes are a decisive factor in the magnetic hardness variations for different configurations.
Either way, both pathways for MAE improvements are feasible in the Fe-Co-C system.
§.§ Magnetic moments
Looking into the dataset, we focus on average magnetic moments per TM atom in the system, along with the spread of the values in different atomic configurations.
Figure <ref> summarizes results for exemplary Co concentrations x, 25%, 50%, and 75%.
Presented trends in average Fe, Co, and total spin magnetic moments – dependencies on mixing enthalpy and short-range ordering, and their distribution – are representative.
Similar results in the literature are scarce, in contrast to analyses of TM magnetic moments on different impurity atom coordination shells, performed by, e.g., Delczeg-Czirjak and Khan <cit.>.
As presented in Fig. <ref>(a), for low Co concentration, particularly the low enthalpy configurations are associated with high average magnetic moment on Co atoms.
It can be explained by the preferred Fe-C-Fe neighborhood, as the dopant atom tends to lower magnetic moments on neighboring atoms.
Delczeg-Czirjak showed that TM atoms adjacent to the C impurity in the Fe-Co-C system have significantly reduced magnetic moments <cit.>.
For intermediate Co content (exemplified by the Fe_8Co_8C system), there is no significant correlation between the average magnetic moment and the mixing enthalpy, either on Fe or on Co atoms, in the bct range.
For x = 0.75 (in the fcc range), a preference towards higher Fe and lower Co magnetic moments emerges.
We can observe that despite the average total spin magnetic moment on Fe and Co atoms varying considerably between configurations, the average total spin magnetic moment per atom remains almost constant.
The spin magnetic moment on Co atoms remains close to 1.5 μ_B, as expected from the linear partial Co contribution to the total average spin magnetic moment in the supercell.
The trend can be seen more clearly in Fig. <ref>(d–f) where we present histograms of the average Fe, Co, and total magnetic moments in the structures.
In general, the magnetic moment on Fe atoms depends much more on their chemical neighborhood than the magnetic moment on Co atoms.
In Fig. <ref>(d), we see that on the Fe-rich side of the concentration range, for Fe_12Co_4C, the total magnetic moment in the system, 2.233 μ_B, remains almost constant across all configurations, with a triple standard deviation of 0.03 μ_B.
A similar trend can be observed for the average Fe magnetic moment (2.48±0.08 μ_B).
However, for the average Co magnetic moments (1.57 μ_B), we can see that the triple standard deviation is relatively high and equals 0.31 μ_B.
On the Co-rich side, for the Fe_4Co_12C alloy, see Fig. <ref>(f), we notice that the total magnetic moment in the system also remains almost constant (1.70±0.02 μ_B).
Still, we observe a noticeable variation of 0.16 μ_B around the average value of the Co magnetic moments (1.45 μ_B).
However, the average magnetic moments on Fe atoms, 2.48 μ_B, vary considerably across different configurations, in the range of ±0.42 μ_B, which yields almost 34% relative variability between the lowest and highest Fe magnetic moment values.
In Fig. <ref>(e), presenting results for Fe_8Co_8C, we observe a moderate variation in the average magnetic moments on both Fe and Co atoms, in the range of 2.53±0.20 μ_B on Fe and 1.56±0.22 μ_B on Co, and a total magnetic moment in the system of 2.03±0.05 μ_B.
Again, a major driving factor in the spread of magnetic moments across all structures can be the magnetic moment lowering by the neighboring C impurity, which is most prominent on Co atoms, as presented for numerous Fe-Co-based systems by Khan <cit.>.
Moreover, a similar result for N-doped B2 Fe-Co was obtained by Chandran.
They obtained magnetic moments reduced from 2.78 to 2.09 μ_B between next-nearest and nearest neighbors of the dopant for Fe atoms, and from 1.76 to 1.12 μ_B for Co atoms, with the magnetic moment fluctuations propagating into the next-nearest neighbors <cit.>.
To explore other factors influencing magnetic moments in the system, we can use a local neighborhood-based order parameter σ of Bethe, which can be defined for a binary alloy as <cit.>:
σ = p_AB - (p_AA + p_BB) = 2p_AB - 1,
where p_XY denotes the probability of finding an XY nearest neighbor pair.
Though developed for equiatomic systems, σ derived from Eq. <ref> also provides useful information for non-equiatomic binary systems, as it depicts changes in the system with increasing content of NN pairs of non-similar atoms.
In that case, σ generally takes values between -1 and 1, with positive values indicating a preference towards unlike (in our case, Fe–Co) atomic pairs in the structure and negative values indicating a preference towards like-atom pairs (Fe–Fe and Co–Co).
However, both the minimum and maximum achievable σ change with the system composition and supercell size, with σ_min lying in the [-1, 0] range (like-atom pair affinity) and σ_max in the [0, 1] range (unlike-atom pair affinity).
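For a given configuration, σ can be obtained directly from the list of first-shell TM–TM pairs, as in the sketch below; nn_pairs is an assumed precomputed list of nearest-neighbor site-index pairs of the ideal supercell.

def bethe_sigma(co_sites, nn_pairs):
    # co_sites: indices of Co atoms; nn_pairs: list of (i, j) first-shell TM-TM bonds
    co = set(co_sites)
    n_unlike = sum(1 for i, j in nn_pairs if (i in co) != (j in co))
    p_ab = n_unlike / len(nn_pairs)
    return 2.0 * p_ab - 1.0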
Considering different atomic configurations for particular Co concentrations makes it possible to determine the effect of the former on the values of magnetic moments on individual atoms.
Díaz-Ortiz showed for Fe-Co that the average magnetic moment does not change significantly with ordering <cit.>.
Similarly, for special quasirandom structures (SQS) comparing the local neighborhood of the C impurity, Delczeg-Czirjak did not find any relevant change of the total magnetic moment in the Fe-Co-C system.
The magnetic moment for (Fe_0.5Co_0.5)_4C in their work remained at around 1.8 μ_B <cit.>.
Indeed, for Fe-Co-C, we do not see any significant change in the average spin magnetic moment with the local chemical neighborhood, as shown in Fig. <ref>(g–i).
Only a slight increase in the average spin magnetic moment with short-range ordering can be observed for the Fe_8Co_8C system, presented in Fig. <ref>(h).
This validates effective medium approaches, such as VCA and CPA, for disordered Fe-Co-C, similarly to Fe-Co, as pointed out for the latter by Díaz-Ortiz <cit.>.
As for the average Fe and Co magnetic moments, we can see the variation across different structures drops with short-range ordering, indicating a strong contribution from Fe-Co NN interaction.
It is consistent with a known strong Fe-Co d orbital hybridization and exchange interaction <cit.>.
For any specific minority atoms concentration in our computational cell, the σ range is restricted due to limitations induced by the composition and system size, as described above.
§.§ Ordering and its influence on magnetic parameters
Apart from average magnetic moment dependence on the short-range ordering, we can explore the ordering effect on other important system properties, including mixing enthalpy, magnetocrystalline anisotropy energy, and magnetic hardness.
Figure <ref> presents aggregated results for Co content between 3 and 11 atoms in the system, in the bct region.
We do not present results for lower Co concentrations because they cover only a small number of configurations and do not have reasonable statistics.
Figure <ref>(a) shows mixing enthalpy decrease with the increase of short-range Fe–Co ordering, i.e., the fraction of Fe–Co pairs among all NN pairs.
It might indicate system stabilization by Fe–Co nearest neighbor and Co–Co or Fe–Fe next-nearest neighbor interaction.
As previous studies have shown for the N-doped B2 phase, the nearest-neighbor Fe–Co exchange integral and the next-nearest-neighbor Co–Co integral calculated by Odkhuu and Hong contribute the most to the magnetic ordering <cit.>.
Hence, we ascribe the system stabilization to the same interactions.
Figure <ref>(b) shows the distribution of MAE in structures with different atomic configurations.
Both the highest and lowest MAE for single configuration can be observed for σ equal to 0.
For the highest σ values, MAE converges to around 85 for the Co-C-Co NN sequence and to around 10 for the Fe-C-Fe NN sequence.
For negative σ values, which can be associated with low Co concentrations, MAE drops to 0 .
It can be deduced that the NN ordering influences the MAE by strong Fe-Co interplay.
Nevertheless, the factor that contributes most to the overall behavior of the MAE relative to order is the direct immediate chemical neighborhood of the impurity atom.
In Fig. <ref>(c), we present thermodynamic averages, according to Eq. <ref>.
Bars represent the range of MAE values obtained in calculations.
We observe no significant correlation between average MAE and atoms distribution for σ > 0.
The most probable MAE for σ equal to 0 is quite high regardless of the dopant neighborhood.
The changes in MAE described above are clear, though the scatter of MAE values for various individual structures is substantial.
Taking all the above into account, the configuration space of Fe-Co-C alloys can be somewhat effectively reduced to random nearest-neighbor patterns.
Still, it should be done cautiously and can lead to substantial errors, though any anomalies should be evident in the results.
Along with the low average magnetic moment dependence discussed above, the lack of strong MAE dependence on the short-range ordering implies that Fe-Co-C retains the properties of a random alloy, similarly to pure Fe-Co.
Thus, methods relying on configuration-space reduction by neighbor-pattern analysis, such as SQS, yield a non-negligible error, similar to effective medium methods, as noted before by Díaz-Ortiz <cit.>.
In future studies, it should be decided on a case-by-case basis whether the trade-off between the significant reduction in computation time in approximate (SQS-type) methods and the accuracy and ability to obtain a complete picture of the system in methods that allow order-dependence analysis is justified.
Figure <ref>(d) presents a similar picture for magnetic hardness.
We can see that usable magnetic hardness can be obtained for systems around and above σ = 0.
For highly ordered systems, the first coordination shell of the dopant plays a key part.
Above σ = 0.4, only Co-C-Co and part of Fe-C-Co systems retain magnetic hardness in the semi-hard region.
The interesting part is the negative-σ side of Fig. <ref>(b–d).
We observe that where Fe–Fe and Co–Co interactions dominate, MAE and hence the magnetic hardness drops.
Although σ is a convenient and effective parameter for analyzing the aggregated results, especially in showing the linear decrease of mixing enthalpy with increasing unlike-atom pair content in the supercells, it lacks one property necessary for a complete analysis.
It conveys a strict order parameter definition only for an equiatomic binary alloy.
Namely, its expected value, equal to the value for a completely disordered alloy, is not always zero and depends on the minority atom concentration c_m as 4(c_m - c_m^2) - 1.
For equiatomic alloy, σ equals 0 for completely disordered alloy and takes values up to 1 (or -1) for completely ordered alloys.
To investigate the properties of disordered alloys in a broad concentration range, we use Warren-Cowley short-range order parameter α <cit.>, which for the first coordination shell (α^Fe,Co_I – shortened further to α) can be simplified as:
α^AB_I = 1 - p_AB/2 c_A c_B,
where c_A denotes the concentration of atom type A, and p_AB/(2 c_A) = P_AB equals the conditional probability of finding an atom of type B in the first coordination shell of a randomly selected atom of type A, which, when substituted, gives the exact Warren-Cowley formulation.
Structures with all α parameters (for different coordination shells) equal to 0 are disordered, and structures with α_i equal to 1 (or -1) are perfectly ordered on coordination shell i.
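A corresponding sketch for the first-shell Warren-Cowley parameter, using the same assumed nearest-neighbor list as for the Bethe parameter above, is given below.

def warren_cowley_alpha(co_sites, nn_pairs, n_sites=16):
    # co_sites: indices of Co atoms; nn_pairs: list of (i, j) first-shell TM-TM bonds
    co = set(co_sites)
    c_co = len(co) / n_sites
    c_fe = 1.0 - c_co
    p_ab = sum(1 for i, j in nn_pairs if (i in co) != (j in co)) / len(nn_pairs)
    return 1.0 - p_ab / (2.0 * c_fe * c_co)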
For simplicity, in Fig. <ref>, we present only MAE versus α dependency.
Generally, in an infinite crystal, α takes values between (2 c_A c_B - 1)/(2 c_A c_B) and 1 <cit.>.
We get only zero to negative α values due to the small computational cell size.
Overall, the plot is similar to the positive σ part of Fig. <ref>(b) and (c) taking into account that preferred dislike atom type coordination is associated with positive σ, but negative α.
The most probable MAE value is proportional to the ordering for Co-C-Co systems and, to some extent, for others.
Apart from that, we want to highlight three main observations.
Firstly, there is a considerable spread in values for random alloys (for α = 0).
It is a further indicator that certain methods of configuration-space reduction, like SQS, are inherently prone to fail in proper MAE predictions for Fe-Co-based alloys, and the uncertainty of such results can be, in fact, substantial.
Secondly, the same as for σ and similarly to the order parameters in recent works by Izardar and Ederer for L1_0 FeNi <cit.>, the MAE converges towards a reasonably high value for perfectly ordered systems.
Lastly, in all (Fe-C-Fe, Fe-C-Co, and Co-C-Co) systems, there is a group of configurations that possess high MAE, increasing with ordering.
We remind here that α = -1 structures are ordered.
For Fe-C-Fe and Fe-C-Co systems, the average MAE value diverges and eventually suddenly drops for high-order structures – a behavior described above for Bethe σ dependencies.
From the comparison of high-order structures calculations to the random alloy, Díaz-Ortiz deduced that ordered structures are stable, with B2 phase among them <cit.>.
Structures predicted by them, namely DO_3, L6_0, and B2, as well as similar phases such as L1_2 exhibit a high degree of short-range σ and α ordering, as calculated according to Eqs. <ref> and <ref>.
Wu similarly reported the stability of Fe-rich DO_3 and equiatomic B2 phases <cit.>, and Odkhuu and Hong postulated B2 Fe-Co to be a good matrix for low-energy, high-MAE N-doped phases <cit.>.
One of the very first works on the strained Fe-Co system treated with the CPA effective medium approximation, by Turek, investigated the influence of ordering on MAE <cit.>.
The L1_0 and B2 phases differ only by the lattice parameter c/a ratio, where L1_0 is an fcc-like structure and B2 is close to bcc.
As such, we also checked specifically the B2 ordering in the low c/a regime for the C-doped alloy.
For this purpose, we use the long-range order parameter S of a binary alloy, which is defined in relation to a specific structure, in our case – B2-like Fe_8Co_8C.
Ordering towards B2 and its equivalent phase has been studied in VCA and CPA approaches in several works to date, including one by Turek <cit.>.
The parameter S value equal to 1 is associated with a perfect ordering towards the chosen structure (in our case – an ideal crystal in the B2 type), and S equal to 0 represents an absolute lack of the ordering of the given type.
Though, a system without ordering towards one structure can be perfectly ordered towards another structure, such as L1_2 structure having a zero S towards L1_0, both being highly-ordered fcc-like structures and having a high degree of nearest-neighbor ordering.
Long-range ordering parameter S can be represented in general as follows <cit.>:
S = p - p(S=0)/p(S=1) - p(S=0),
where p denotes the probability of finding an atom of a given type on the expected atomic site.
For two-atom type 2 × 2 × 2 supercell and B2 ordering we expand it as:
S = |N_ I - N_ II|/N,
where N_ I denotes the number of minority atoms close to z = 0 or z = 0.5c plane, N_ II denotes the number of minority atoms close to z = 0.25c or z = 0.75c plane, and N is the sum of minority atoms in the system.
The sites are visualized in Fig. <ref>.
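A sketch of this counting for our supercell is given below; z_coords holds the ideal fractional z coordinates of the TM sites, and the tolerance is an arbitrary numerical choice of ours.

import numpy as np

def b2_order_parameter(minority_sites, z_coords, tol=0.05):
    # minority atoms counted on the two B2 sublattices: z near 0, 1/2 versus z near 1/4, 3/4
    z = np.asarray(z_coords)[list(minority_sites)] % 0.5
    on_type_i = np.sum((z < tol) | (z > 0.5 - tol))       # planes z = 0 and z = 1/2
    on_type_ii = len(minority_sites) - on_type_i           # planes z = 1/4 and z = 3/4
    return abs(on_type_i - on_type_ii) / len(minority_sites)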
An effectively similar approach has been used recently by Izardar studying equiatomic FeNi L1_0 binary phase <cit.>.
Parameter S provides a linear scale, similar to one applied by Turek <cit.>.
In Fig. <ref>, we show ordering towards B2 structure dependencies analogous to Fig. <ref>, presenting results for short-range ordering parameter σ.
As the S parameter towards B2 considers only equiatomic systems, the results aggregated are for Fe_8Co_8C only.
Similarly to σ-dependency, Fig. <ref>(a) presents a monotonic decrease in mixing enthalpy with B2 ordering in Fe_8Co_8C.
The energy of configurations with the Co-C-Co NN sequence is, on average, significantly lower than the energy of configurations with the Fe-C-Co NN sequence, which is, in turn, lower than the energy of Fe-C-Fe systems.
This fact is independent of the ordering.
Perfectly ordered B2 structure with C dopant between two Co atoms possesses the lowest energy.
In Fig. <ref>(b), we see multiple atomic configurations deviating vastly from the average.
In fact, the single highest MAE value, which is twice the average, can be observed for S = 0.5.
The associated structure is presented in Fig. <ref>(c).
The qualitative agreement of MAE averages, presented in Fig. <ref>(c), with the work of Turek is good.
We can see that MAE does not follow any specific trend with B2 ordering.
For low ordering towards the B2 phase, we can see both very high and very low MAE values.
MAE value converges towards a reasonably high 85 for perfect B2 order and Co-C-Co configuration.
Conversely, for C impurity in the Co plane (neighbored by two Fe atoms), MAE converges towards a low value of approximately 10 .
These are exactly the same MAE values as for most positive sigma and most negative alpha parameters, see Figs. 6 and 7.
It is, in fact, the same structure, visualized further in Fig. <ref>.
The magnetic hardness versus B2 ordering, shown in Fig. <ref>(d), has to be similar to the MAE dependence, since the system magnetization has been shown above not to depend on the ordering.
The main conclusion is that for higher ordering, only systems with Co-C-Co and Fe-C-Co NN sequences possess usable magnetic hardness.
Similarly to σ, for low B2 ordering, we can still observe many individual atomic arrangements with hardness above 0.5.
It might be tempting to dive more deeply into the evaluated atomic occupation configurations individually, with particular emphasis on the high-symmetry structures.
However, such analysis is beyond the scope of this work, as we rely on error cancellation due to the high sample count.
A detailed look at the specific structures would require a much finer k-point mesh and fine atomic positions optimization of such atomic arrangements.
Nevertheless, to emphasize possible further paths of Fe-Co-C system investigation, we present in Fig. <ref> four selected low-energy, high-MAE structures (a, b, c, and e), as well as a high-energy, low-MAE, perfectly ordered B2 structure (d).
We found that highly-ordered structures with Co concentrations as low as 25% can exhibit usable magnetic properties.
Interestingly, the lowest energy structure for Fe_12Co_4C is the Co interlayer in the plane farthest away from the C impurity.
Since the price of Fe is negligible in the overall price of an Fe-Co alloy, those are promising candidates for future permanent magnets.
As for qualitative trends, we observe the L1_2 structure among the lowest energy systems for high Co concentrations in the fcc regime.
Despite the structure changes towards bct with lowering of the Co content, the atomic occupations for low-energy Fe_12Co_4C remain the same as in the high-Co L1_2 phase, presented in Fig. <ref>(e).
§ SUMMARY AND CONCLUSIONS
We conducted a full configuration space analysis for 2 × 2 × 2 supercell based on a 2-atom body-centered tetragonal unit cell, with a single C impurity at one of the octahedral interstitial positions in the supercell.
The calculations were performed using density functional theory (DFT) with the generalized gradient approximation (GGA) using the full-potential local-orbit scheme (FPLO18).
In our tetragonal supercells, we observe a structural phase transition from a body-centered tetragonal (bct) to a face-centered cubic (fcc) structure at a Co concentration of about 70 at.%.
The lattice parameter c/a ratio in the bct region ranges from 1.07 to 1.12.
We calculated relevant magnetic properties for all non-equivalent Fe/Co atoms arrangements in the computational cell.
Since DFT calculations are, by definition, performed for a temperature of 0 K (for the ground state), we used thermodynamic averaging with an assumed temperature of 300 K in determining the average magnetocrystalline anisotropy energy (MAE) values.
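As an illustration of the thermodynamic averaging mentioned above, a minimal Boltzmann-weighted average over the non-equivalent configurations can be written as below. The energies, degeneracies and example numbers are placeholders, and the exact weighting scheme adopted in this work may differ in detail.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def thermodynamic_average(energies_eV, values, multiplicities=None, T=300.0):
    """Boltzmann-weighted average of a per-configuration quantity (e.g. MAE).

    energies_eV    : total energies of the non-equivalent configurations
    values         : quantity to average, in the same order as the energies
    multiplicities : configurational degeneracies (defaults to 1 for each)
    """
    E = np.asarray(energies_eV, dtype=float)
    v = np.asarray(values, dtype=float)
    g = np.ones_like(E) if multiplicities is None else np.asarray(multiplicities, dtype=float)
    # subtract the minimum energy for numerical stability of the exponentials
    w = g * np.exp(-(E - E.min()) / (K_B * T))
    return np.sum(w * v) / np.sum(w)

# toy example with three configurations (energies in eV, arbitrary MAE units)
print(thermodynamic_average([0.00, 0.05, 0.30], [85.0, 40.0, 10.0], [1, 4, 2]))
```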
Although, as previous experiments have shown, the structure expected above the critical Co concentration (x ≃ 0.7) is hexagonal, the assumed tetragonal geometry of the supercell does not allow this and leads to an fcc structure.
One of the basic features of the supercell geometry we analyzed is the first coordination shell of the C dopant atom.
The C atom has two nearest neighboring sites which can be occupied by two Fe atoms, two Co atoms, or one Fe and one Co atom.
We found that for low Co concentrations, structures with impurities adjacent to two Fe atoms become more stable.
Our results also confirm the expected stabilization of the (Fe_0.5Co_0.5)_XC alloys by the Co-C-Co nearest-neighbor sequence for medium to high Co concentrations.
Although we observe a rather large scattering of magnetic moments for different configurations on both Fe and Co atoms, the total magnetic moment in the supercell remains more or less constant.
Average (spin) magnetic moments decrease with increasing Co content, without a clear maximum for intermediate concentrations.
Positive MAE values in the bct region indicate a uniaxial magnetocrystalline anisotropy and show a broad maximum around medium Co concentration (x ≃ 0.5).
The calculated course of MAE as a function of Co concentration is in very good quantitative agreement with experimental data, which is a noteworthy improvement over effective medium methods.
The magnetic hardness of many configurations exceeds the conventional limit of 0.5 for magnetically semi-hard materials but does not exceed 0.9, remaining below the limit for hard magnetic materials.
In addition, for relatively low Co concentrations, on the order of 25%, we have identified a number of energetically stable structures with high MAE values and potential economic significance.
The calculated mixing enthalpy of considered Fe-Co-C alloys is the lowest at around 50% Co concentration.
Moreover, the general trends indicate that higher values of MAE (and magnetic hardness) correlate with more negative values of mixing enthalpy, which indicates better structural stability of the high-MAE atomic configurations.
A significant part of the discussion is devoted to determining the effect of ordering on the magnetic properties of the compositions under consideration.
We focus on the Bethe and Warren-Cowley short-range ordering parameters and the ordering parameter towards the arbitrarily chosen B2 (CsCl) structure.
Over most of the range of the Bethe short-range ordering parameter, its increase correlates with an increase in MAE, while for the highest values of the parameter (above 0.2) we no longer observe a correlation.
Furthermore, we observe no significant correlation between MAE and the value of the Warren-Cowley short-range ordering parameter and the ordering parameter towards the B2 structure.
The direct neighborhood of the impurity dominates MAE value dependencies.
On the contrary, we see a clear decrease in the value of the enthalpy of mixing (higher stability) as short-range and long-range ordering parameters increase.
In summary, we present a relatively simple and effective method for averaging multiple configurations to predict accurate MAE values for the Fe-Co-C system.
We show that the method can be made even more efficient by averaging a few percent of the most energetically favorable structures, with little loss in accuracy.
In addition, the Fe-Co-C system is a good matrix for further modifications (e.g., induction of additional stresses) stabilized by the Fe-Co nearest neighbor interactions.
Considering that B-, C-, and N-doped Fe-Co alloys possess similar structural and magnetic properties, further research of Fe/Co ordering in interstitially-doped Fe-Co can provide much-needed insight towards efficient, rare-earth-free permanent magnet development.
§ ACKNOWLEDGEMENTS
We acknowledge the financial support of the National Science Centre Poland under the decision DEC-2018/30/E/ST3/00267.
The work on the conformation-picking scheme development has been financed by the Polish Ministry of Science and Higher Education under the grant DI2017/007947.
Calculations were made in the Poznan Supercomputing and Networking Centre (PSNC/PCSS).
We acknowledge Paweł Leśniak for his help in compiling and maintaining the computational codes utilized.
We thank Joanna Marciniak and Ján Rusz for their valuable discussion and suggestions.
We also want to thank Karolina Olszewska, Justyna Rychły-Gruszecka, Justyn Snarski-Adamski, Maciej Szary, and Jan Raczyński for the valuable feedback on the manuscript.
|
http://arxiv.org/abs/2307.04120v1 | 20230709082441 | Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates | ["Linghua Xie", "Nicola R. Napolitano", "Xiaotong Guo", "Crescenzo Tortora", "Haicheng Feng", "Antonios Katsianis", "Rui Li", "Sirui Wu", "Mario Radovich", "Leslie K. Hunt", "Yang Wang", "Lin Tang", "Baitian Tang", "Zhiqi Huang"] | astro-ph.GA | ["astro-ph.GA"] |
Linghua Xie^1,2, Nicola R. Napolitano^1,2 ([email protected]), Xiaotong Guo^3 ([email protected]), Crescenzo Tortora^4, Haicheng Feng^5, Antonios Katsianis^1, Rui Li^6,7, Sirui Wu^1,2, Mario Radovich^8, Leslie K. Hunt^9, Yang Wang^2,10, Lin Tang^2,11, Baitian Tang^1, Zhiqi Huang^1,2
[1]School of Physics and Astronomy, Sun Yat-sen University, Zhuhai Campus, 2 Daxue Road, Xiangzhou District, Zhuhai, P. R. China;
[2]CSST Science Center for Guangdong-Hong Kong-Macau Great Bay Area, Zhuhai, China, 519082
[3]Institute of Astronomy and Astrophysics, Anqing Normal University, Anqing, Anhui 246133, China
[4]INAF – Osservatorio Astronomico di Capodimonte, Salita Moiariello 16, 80131 - Napoli, Italy;
[5]Yunnan Observatories, Chinese Academy of Sciences, Kunming, 650011, Yunnan, People's Republic of China
[6]School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, China;
[7]National Astronomical Observatories, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing 100012, China
[8]INAF - Osservatorio Astronomico di Padova, via dell'Osservatorio 5, 35122 Padova, Italy
[9]INAF - Osservatorio Astronomico di Arcetri, Largo Enrico Fermi 5, 50125, Firenze, Italy
[10]Peng Cheng Laboratory, No.2, Xingke 1st Street, Shenzhen, 518000, P. R. China
[11]School of Physics and Astronomy, China West Normal University, ShiDa Road 1, 637002, Nanchong, China
The Kilo Degree Survey (KiDS) is currently the only sky survey providing optical (ugri) plus near-infrared (NIR, ZYHJK_S) seeing matched photometry over an area larger than 1000 deg^2. This is obtained by incorporating the NIR data from the VISTA Kilo Degree Infrared Galaxy (VIKING) survey, covering the same KiDS footprint. As such, the KiDS multi-wavelength photometry represents a unique dataset to test the ability of stellar population models to return robust photometric stellar mass (M_*) and star-formation rate (SFR) estimates. Here we use a spectroscopic sample of galaxies for which we possess u g r i Z Y J H K_s “gaussianized” magnitudes from KiDS data release 4. We fit the spectral energy distribution from the 9-band photometry using: 1) three different popular libraries of stellar population templates, 2) single burst, simple and delayed exponential star-formation history models, and 3) a wide range of priors on age and metallicity.
As template fitting codes we use two popular softwares: LePhare and CIGALE.
We investigate the variance of the stellar masses and the star-formation rates from the different combinations of templates, star formation recipes and codes to assess the stability of these estimates
and define some “robust” median quantities to be included
in the upcoming KiDS data releases.
As a science validation test, we derive the mass function, the star formation rate function, and the SFR-M_* relation for a low-redshift (z<0.5) sample of galaxies, which turn out to be in excellent agreement with previous literature data. The final catalog, containing ∼290 000 galaxies with redshift 0.01<z<0.9, is made publicly available.
98.62.Lv,98.62.Ai,98.62.Ck
Toward a stellar population catalog in the Kilo Degree Survey: the impact of stellar recipes on stellar masses and star formation rates
August 12, 2023
=======================================================================================================================================
§ INTRODUCTION
The spectral energy distribution (SED) of galaxies provides crucial information on the properties of their stellar populations at the different cosmic epochs. In particular, the stellar mass content and the star formation history of galaxies are of major importance to understand the mechanisms of their formation, including the impact of the environment on their properties <cit.>. For instance, the study of the stellar mass function as a function of the redshift is a crucial probe of the stellar mass assembly of galaxies <cit.>, and combined with the halo mass function of simulations, can be used as a cosmological probe, e.g. in abundance matching studies (e.g. <cit.>, <cit.>, <cit.>). Similarly, the star formation rate function can measure the growth of the stellar content of galaxies across the cosmic time (e.g. <cit.>). A relevant example of scaling relation is the
star formation versus stellar mass, also known as the galaxy main sequence (<cit.>). This is crucial to understand the formation mechanisms of galaxies, in particular the relation between the star formation activity across time (<cit.>), and the gas consumption during galaxy formation (<cit.>).
The measurement of the galaxy stellar masses and star formation rates mainly relies on details of stellar population analyses (<cit.>, <cit.>), and their ability to constrain the stellar mass-to-light ratios (e.g. <cit.>) and specific star formation history (e.g. <cit.>). This is a notoriously complex problem (<cit.>), due to the existence of degeneracies among some of the parameters, in particular dust, age and metallicity (e.g. <cit.>, <cit.>, <cit.>, <cit.>).
Furthermore, in order to convert the stellar population parameters into “galaxy” properties, one needs to account for the galaxy intrinsic luminosity, which carries other uncertainties, e.g. galaxy distances, or redshifts. This step is generally incorporated in the stellar population codes that can model the SED using the redshift as a free parameter (e.g. <cit.>, <cit.>, <cit.>, <cit.>) or as an input from spectra or photo-z codes (e.g. <cit.>, <cit.>, <cit.>).
Despite these difficulties,
spectroscopical data (<cit.>, <cit.>, <cit.>, <cit.>)
or multi-band photometry (e.g., <cit.>, <cit.>) have been routinely used to derive stellar masses, age, metallicity using simple stellar population (SSP, e.g. <cit.>) or more complex stellar population models with a parametrized star formation history (SFH, e.g. delayed exponential: <cit.>, log-normal: <cit.>, double power law: <cit.>, Γ: <cit.>) or non-parametric SFHs <cit.>.
Optical broadband photometry alone cannot break the dust-age–metallicity degeneracies (e.g. <cit.>), while
extending the wavelength range in the near-infrared (NIR) can provide additional constraints that can alleviate them
(<cit.>, <cit.>).
The combination of optical and NIR photometry is also effective for photometric redshifts from SED fitting techniques, which are an important ingredient in stellar population analyses. These consist of finding a model galaxy spectrum, given by a linear combination of representative stellar or galaxy templates, which best fits the observed galaxy SED (<cit.>). Here, the wide baseline can alleviate the degeneracy between various galaxy spectra as a function of galaxy redshifts (<cit.>).
In this paper, we want to test the outcomes of different stellar population codes, namely LePhare (<cit.>) and CIGALE <cit.>, and different stellar population templates and star formation histories, using a multi-band, seeing matched catalog of galaxies collected in the fourth data release (DR4) of the Kilo Degree Survey (KiDS, <cit.>, K+19 hereafter). The catalog includes sources
for which we possess 1) optical photometry in ugri bands and NIR photometry in ZYHJK_s bands from the VISTA Kilo Degree Infrared Galaxy (VIKING, <cit.>), 2) spectroscopic redshifts (spec-zs, hereafter) from different surveys, and 3) deep learning photometric redshifts. It collects about 290 000 sources, a subsample of which
has already been used in KiDS to calibrate photometric redshifts (e.g., <cit.>). The advantage of spectroscopic redshifts is that they
alleviate the degeneracies between colors and redshifts, which further impact the accuracy of the stellar parameters. The addition of photometric redshifts will also allow us to assess the impact of their larger uncertainties on the same stellar parameters.
In fact, the final goal of this work is to evaluate the variance of the stellar population quantities from different SED fitting recipes, popular stellar population templates, as well as the uncertainties on redshifts. We will determine what are the most stable parameters and define robust quantities suitable for science applications. This is a first step to define a strategy to produce a robust stellar population catalog for the upcoming KiDS data release 5 (KiDS-DR5, Wright et al. 2023).
The main parameters we are interested in are the stellar mass and the star formation rate, but we will also provide the catalog of ages and metallicities of the galaxy stellar populations from a large set of priors. Since for this spectroscopic sample we also possess very accurate morphotometric redshifts from deep learning (i.e. GaZNet, <cit.>), we can finally test the impact of
redshifts derived from
pure multi-band photometric catalogs combining optical and NIR, like the ones expected to be collected from future large sky surveys like Euclid mission (<cit.>), Vera Rubin Legacy Survey in Space and Time (VR/LSST; <cit.>), China Space Station Telescope (CSST; <cit.>).
There have been previous works including stellar population analyses of KiDS galaxy catalogs, either determining stellar mass only, for weak lensing studies (<cit.>) or estimating galaxy properties, including photometric redshifts and stellar masses, for bright galaxies (i.e. r<21, <cit.>), or estimating structural parameters and stellar mass to select ultra-compact and massive galaxies (<cit.>) and for central dark matter studies (<cit.>).
However, none of these has investigated the impact on the stellar masses of the combination of fitting procedure and stellar templates. A similar analysis has been provided for the CANDELS survey (<cit.>), where they used optical plus NIR photometry and tested the impact on stellar masses of different stellar population codes, stellar templates and star formation histories.
As a science validation test, we will conclude our analysis by using stellar mass and star formation rate estimates to derive the stellar mass function, the star formation rate function, and the mass vs. star formation rate relation of the galaxies from the KiDS spectroscopic sample, using both spectroscopic and deep learning redshifts and compare them with literature data at redshift z<1.
The paper is organised as follow. In Sect. <ref> we introduce the data and the set-up of the stellar population analysis; in Sect. <ref> we present the stellar population inferences, assess their accuracy and precision using a series of statistical estimators, and define a robust definition of the stellar mass and star formation estimates; in Sect. <ref> we discuss the dependence of the accuracy and scatter on galaxy properties and finally show the galaxy mass function, the star formation rate function, and the stellar mass-star formation rate relation as a science validation test; in Sect. <ref> we draw some conclusions and perspectives for future analyses.
Throughout the paper, we will adopt the following cosmological parameters: Ω_m = 0.3, Ω_Λ = 0.7, H_0 = 70 km s^-1 Mpc^-1.
§ DATA AND METHODS
The spectroscopic sample which we use in this paper consists of 9-band photometry from the 1000 deg^2 area of KiDS data release 4 (KiDS-DR4 hereafter, see K+19), plus spectroscopic redshifts collected from the Galaxy And Mass Assembly <cit.> survey and the Sloan Digital Sky Survey/Baryon Oscillation Spectroscopic Survey <cit.>, overlapping with the KiDS footprint. We also add further machine learning redshifts from the GaZNet convolutional network presented in <cit.>, as these have been demonstrated to provide very accurate redshifts up to z∼3 for galaxy samples with magnitude r ≲ 22.5. In the following we describe in more detail the content of the dataset and the different stellar population model set-ups used to analyze it.
§.§ Photometry and spectroscopic redshifts
The photometric data of the spectroscopic sample are collected from the KiDS and the VIKING surveys. These are two sister surveys covering a total area of 1350 deg^2 of the sky, in ugri and ZYJHK_s bands, respectively.
The KiDS survey has been carried out at the VST/Omegacam telescope in Cerro Paranal (<cit.>; <cit.>).
It has been optimized for weak lensing in the r-band, which provides best seeing imaging (average FWHM∼0.7”), and mean limiting AB magnitude (5σ in a 2” aperture) of 25.02±0.13. The other bands have been observed with poorer seeing and reached mean limiting AB magnitudes of 24.23±0.12, 25.12±0.14, 23.68±0.27 for u, g and i, respectively (see K+19).
VIKING has been carried out at the VISTA/VIRCAM (<cit.>) and
complemented KiDS observations with five NIR bands (Z, Y, J, H and Ks). The median value of the seeing is ∼ 0.9” (<cit.>), and the AB magnitude depths are 23.1, 22.3, 22.1, 21.5 and 21.2 in the five bands (<cit.>), respectively.
The 9-band fluxes have been measured via the Gaussian Aperture and PSF (GAaP) photometry method (<cit.>), which gives colours that are corrected for PSF differences. Hence, GAaP photometry naturally provides seeing matched fluxes for each source in the catalog, by definition.
However, sources more extended than the aperture function result in underestimated total fluxes. In order to correct this systematic effect, a total aperture correction needs to be applied to derive the “total” galaxy properties (see Sect. <ref>).
As discussed in K+19
the GAaP photometry is Galactic extinction corrected using
the <cit.> maps with the <cit.> coefficients.
As a spectroscopic database, we have collected redshifts from: 1) GAMA data release 4 (<cit.>), and 2) SDSS data release 17 (<cit.>, SDSS-DR17 hereafter). Previous compilations of spectroscopic data overlapping with the KiDS area did not include SDSS-DR17, but included other high redshift datasets (see e.g. <cit.> and reference therein). However, the statistics of galaxies matching the KiDS-DR4 catalog at redshift larger than z∼1 is rather sparse.
On the other hand, for the analysis we are interested in performing in this paper, SDSS-DR17 and GAMA provide a quite abundant sample of galaxies at z ≲ 1.
In particular, GAMA is the most complete sample,
reaching ∼95.5% completeness for r-band magnitude r<19.8 (<cit.>).
To match the redshift distributions of the two catalogs, we exclude sources at z>0.9, where the overall catalog drops to a constant number of a few tens of galaxies per redshift bin, mainly from SDSS-DR17. We also notice that a large portion of sources at
z<0.005 are classified as “stars” from their parent surveys. Hence, to avoid the contamination from other misclassified stars, we decide to
use a conservative cut and select only sources with z>0.01. Equally, we exclude all sources classified as quasars (QSO), as their SED might be dominated by the nuclear emission rather than the stellar population light. These criteria together produce a final catalog of
242678 GAMA and 77859 SDSS-DR17 galaxies, which includes 31728 repeated sources. For these duplicates, we adopt the SDSS-DR17 redshifts, which have errors, finally ending up with a total of 288 809 objects.
In the following, we consider these sources to be “galaxies”, although we might still expect some minor contamination from unclassified QSO (or active galactic nuclei, AGN).
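The selection described above can be sketched as follows. This is only an illustrative snippet: the file name and all column names ("z_spec", "source_class", "survey", "object_id") are hypothetical placeholders for whatever the matched KiDS x (GAMA + SDSS-DR17) table actually provides.

```python
import pandas as pd

# hypothetical matched catalog of KiDS-DR4 photometry and spectroscopic redshifts
cat = pd.read_csv("kids_dr4_specz_matched.csv")

# redshift cuts and removal of sources classified as stars or QSO
sel = cat[(cat["z_spec"] > 0.01) & (cat["z_spec"] < 0.9)]
sel = sel[~sel["source_class"].isin(["STAR", "QSO"])].copy()

# for duplicates between GAMA and SDSS-DR17, keep the SDSS-DR17 entry
sel["is_sdss"] = sel["survey"].eq("SDSS-DR17")
sel = (sel.sort_values("is_sdss", ascending=False)
          .drop_duplicates(subset="object_id", keep="first"))

print(len(sel))  # ~288 809 galaxies in the final catalog
```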
The distributions of the redshift and the r-band Kron-like magnitude, MAG_AUTO (r-mag for short), obtained by SExtractor <cit.> for these galaxies are finally reported in Fig. <ref>, where we have split the sample into the two original spectroscopic surveys, for clarity. From the r-mag distribution we can see the different completeness magnitudes of the two samples, with SDSS-DR17 showing a peak at r∼17.8 and GAMA at r∼19.8.
The sample (in)completeness is not expected to impact the main goal of our analysis, which is to study the response of the 9-band optical+NIR photometry to the different stellar population recipes, however we will need to consider this when the stellar parameters will be used for the science validation test (see Sect. <ref>).
§.§ Statistical estimators
Here we introduce some statistical estimators we will use throughout the paper: 1) the relative bias, 2) the median absolute error and 3) the outlier fraction.
1) The relative bias is defined as
Δ p = p_i - r_i,
where p_i and r_i are the estimated (log) parameters and the reference value for any i galaxy of the sample.
In the case of redshifts, this becomes
μ = (p_i - z_i)/(1 + z_i),
where p_i are the predicted photometric redshifts and z_i are the spectroscopic redshifts (see <cit.>).
2) The Normalized median absolute deviation (NMAD) is then defined as:
NMAD = 1.4826 × median (|BIAS - median (BIAS)|).
where we identify by BIAS either the Δ p or the μ defined above. This gives a measure of the overall scatter of the predicted values with respect to the 1-to-1 relation, i.e. the precision of the method.
3) Fraction of outliers.
It is often useful to define the fraction of catastrophic estimates, which significantly deviate from the mean values, as a measure of the robustness of an estimator. In the case of redshifts, this is defined as the fraction of discrepant estimates,
with the condition |μ|>0.15 (see, e.g., <cit.>). For the stellar population parameters we decided to use a 2σ level in the log-normal distribution of the estimated values, which allows us to spot strong deviations from Gaussianity.
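The three estimators above translate directly into code. The minimal functions below implement the definitions as written, with the outlier threshold passed explicitly (0.15 for the photo-z bias μ, and twice the measured scatter of the log parameters otherwise):

```python
import numpy as np

def relative_bias(p, r):
    """Delta p = p_i - r_i for the (log) parameters."""
    return np.asarray(p, dtype=float) - np.asarray(r, dtype=float)

def photoz_bias(z_phot, z_spec):
    """mu = (z_phot - z_spec) / (1 + z_spec)."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    return (z_phot - z_spec) / (1.0 + z_spec)

def nmad(bias):
    """Normalized median absolute deviation of the bias distribution."""
    bias = np.asarray(bias, dtype=float)
    return 1.4826 * np.median(np.abs(bias - np.median(bias)))

def outlier_fraction(bias, threshold):
    """Fraction of |bias| values above the chosen threshold."""
    bias = np.asarray(bias, dtype=float)
    return np.mean(np.abs(bias) > threshold)
```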
§.§ Deep Learning morphoto-metric redshifts from GaZNet
As mentioned in Sect. <ref>, in this paper we want to test the robustness of the derived quantities from a full photometric samples. To do that, besides the spec-z as in Sect. <ref>, we use the morphoto-metric redshifts obtained by combining KiDS r-band images and the 9-band catalog using the Galaxy morphoto-Z Network (GaZNet, <cit.>, Li+22 hereafter).
GaZNet has been previously tested on a KiDS galaxy sample (see Li+22 for details) and demonstrated to achieve very high precision in normalized median absolute deviation (NMAD=0.014 for z ≲ 1 and NMAD=0.041 for z ≳ 1 galaxies) and a low outlier fraction (0.4% for lower and 1.27% for higher redshift galaxies, respectively), down to r∼22. These performances are better than the ones obtained by standard Bayesian methods in KiDS for “point” estimates (e.g. BPZ, see <cit.>) and other machine learning methods based on photometry-only data applied previously to KiDS datasets (e.g. <cit.>, <cit.>).
The level of accuracy reached by the deep learning estimates is shown in Fig. <ref>, where we compare the GaZNet estimated redshifts vs. the spec-z catalog described above.
In this figure we show the GaZNet estimates also for the SDSS-DR17 sample, which was not part of the deep learning training/testing in Li+22. As such, the SDSS-DR17 sample, added in this paper, represents a totally independent galaxy test sample with a rather different distribution in redshift and luminosity than the original training sample (see Fig. <ref>). This gives us a more realistic sense of the scatter we can expect from the full photometric samples from KiDS, covering similar redshift/magnitude ranges. For the predictions in Fig. <ref>, we obtain a relative bias μ=0.005, a NMAD=0.017 and an outlier fraction of 0.4%, which are perfectly in line with the results found in <cit.>, hence confirming the very good performance of the deep learning morphoto-z provided by the GaZNet. We just notice a tail of outliers at z ≲ 0.05, which are overestimated by the GaZNet and might thus produce some systematics in the stellar population parameters.
§.§ LePhare stellar population: set-up and templates
LePhare (<cit.>) is a template-fitting code, which performs a simple χ^2 minimization between the stellar population synthesis (SPS) theoretical models and the data, in a standard cosmology (see <ref>).
In our analysis we adopt a <cit.> Initial Mass Function[In LePhare, there is no real option to set the IMF, but this is implemented in the stellar libraries. For the <cit.> libraries the IMF closer to Chabrier is the <cit.> IMF. To account for these IMF difference we will simply adopt the standard -0.05 dex correction to transform Kroupa-based into Chabrier-based masses. ] (IMF), the <cit.> dust-extinction law. We also include the contribution of nebular emission, e.g. from low-mass starforming galaxies (see Sect. <ref>): LePhare uses a simple recipe based on the Kennicutt relations <cit.> between the SFR and UV luminosity, Hα and [OII] lines.
Regarding the stellar templates, we test three different libraries: 1) the standard <cit.>, 2) the <cit.> and 3) the <cit.> stellar population synthesis (SPS) models. We have also adopted three different models for the star formation history (SFH), ψ(t): 1) a single burst (SB, hereafter), i.e. ψ(t)=δ(t_0), where t_0 is the age of the galaxy, 2) the exponentially declining law (ExD, hereafter), ψ(t)∝ exp(-t/τ), and finally 3) a combination of both (SB+ExD), which is directly allowed by, e.g., the M05 stellar libraries.
We remark here that the choice of the exponentially declining SFH is due to the limited choice offered by LePhare, even though the ExD is flexible enough to embrace a variety of realistic SFHs.
CIGALE (see below) will give us the chance to make a different choice, although a more general approach with a larger variety of SFHs will be considered in future analyses.
The full LePhare set-up is summarized in Table <ref>.
As anticipated in Sect. <ref>, we use the redshift, both spec-z and morphoto-z, as input in LePhare.
The stellar population parameters we use to perform the best fit to the GAaP 9-band magnitudes, described in Sect. <ref>, are: age, metallicity, and star formation parameters (either δ(t_0) or τ), which are assumed to vary as in Table <ref>. Consistently with previous literature (e.g. <cit.>, <cit.>), we use the best-fit parameters as a reference for this analysis.
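To make the template-fitting step explicit, the sketch below shows a generic χ^2 minimization over a grid of model SEDs, with the template normalization (which carries the stellar mass) solved analytically at each grid point. It is a schematic illustration only and does not reproduce the actual LePhare (or CIGALE) implementation, priors, or dust treatment.

```python
import numpy as np

def fit_sed(fluxes, errors, template_grid):
    """Minimal chi-square template fit for a single galaxy (schematic).

    fluxes, errors : observed 9-band fluxes and their uncertainties
    template_grid  : iterable of (params, model_fluxes) pairs, where the
                     model fluxes correspond to a unit normalization
    Returns (best_params, best_scaling, best_chi2); the scaling is the
    mass-like normalization of the best-fitting template.
    """
    f = np.asarray(fluxes, dtype=float)
    e = np.asarray(errors, dtype=float)
    best_params, best_scale, best_chi2 = None, None, np.inf
    for params, model in template_grid:
        m = np.asarray(model, dtype=float)
        # analytic minimization of chi2 with respect to the scaling A
        A = np.sum(f * m / e**2) / np.sum(m**2 / e**2)
        chi2 = np.sum(((f - A * m) / e) ** 2)
        if chi2 < best_chi2:
            best_params, best_scale, best_chi2 = params, A, chi2
    return best_params, best_scale, best_chi2
```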
§.§ CIGALE stellar population: set-up and templates
We also adopt the Code Investigating GALaxy Emission (CIGALE, <cit.>, v2020.0), which can construct the FUV to the radio SEDs of galaxies and provide star
formation rate, attenuation, dust luminosity, stellar mass, and many other physical quantities, using composite stellar populations from simple stellar populations combined with highly flexible star formation histories.
For our analysis, we make use of BC03 and M05 stellar templates. Differently from LePhare, CIGALE does not have a pure ExD law among the SFH choices, hence we decide to adopt a delayed exponential law (DelEx, hereafter), ψ(t)∝ t/τ^2 exp(-t/τ), which is smoother than the exponential declining SFH from LePhare.
Consistently with LePhare, we have adopted a <cit.> Initial Mass Function (IMF), <cit.> dust-extinction law and both the inclusion or not of nebular continuum and emission lines for the BC03 only. In CIGALE the nebular templates adopted are based on <cit.>. The full set-up parameters, including the range of the stellar parameters adopted, are summarized in Table <ref>. As for LePhare, we use the best-fit parameters from CIGALE in the following analysis.
§ RESULTS
In this section, we discuss the outcome of the different models summarized in Table <ref>. These have, in some cases, very strong differences in the recipe of the star formation history (SFH), as we have adopted a single burst as well as both an exponentially declining and a delayed exponential SFH, with a wide range of τ (see Sects. <ref> and <ref>, and Table <ref>).
This choice is made to explore the impact of different SFHs on the stellar masses
and SFR estimates.
The SFH models above have been effectively used to reproduce the properties of local galaxies <cit.> and the cosmic SFR density and stellar mass density at redshifts z < 2 <cit.>.
As anticipated, we also include the effect of emission lines that, although they are generally important in massive galaxies at high redshift (e.g.<cit.>, <cit.>, but see also <cit.>), can also be relevant for local low-mass starforming galaxies (e.g. <cit.>).
Overall,
the model combinations in Table <ref> include a fair variety of libraries and SFHs, which we expect to provide a realistic assessment of systematic effects.
Moreover, as we are preparing the methods to be applied to the full KiDS photometric dataset, we will perform the same analysis using morphoto-zs as input, which will be available down to deeper limiting magnitudes than the ones offered by the spectroscopic “galaxy” sample (e.g. down to r∼22.5 as seen in Li+22). This will allow us to evaluate the existence or not of systematics on the stellar population parameters, and the impact on the precision of the estimates, due to the usage of the more scattered photometric redshifts.
Once we have collected all the estimates from all the configurations in Table
<ref>, we will 1) check the overall consistency among the different stellar parameters; 2) discuss the scatter of the parameters and possibly define some robust estimator for them. As mentioned in the Sect. <ref>, in this first paper we concentrate on the stellar masses and the star formation rates, as the most physical meaningful parameters one can extract from large multi-band photometric samples of galaxies, to study their evolution across the cosmic time.
We use the estimates from BC03 templates and ExD star formation recipe in LePhare (LP/BC03/ExD in Table <ref>) as reference model for mass and star formation estimates, if not otherwise specified. This is for uniformity with previous analyses in KiDS (e.g. <cit.>).
To statistically assess the difference among the stellar mass and the SFR estimates among the different configurations, we will use the following estimators: 1) the relative bias, 2) the median absolute error and 3) the outlier fraction, defined in Sect. <ref>.
§.§ Stellar masses
In this section we show the results for the stellar masses for the cases where we fix the redshifts of the sample galaxies to the spectroscopic and morphoto-metric redshifts, introduced in Sect. <ref> and shown in Fig. <ref>. By stellar masses, we aim at determining the total mass in stars, while we have seen in Sect. <ref> that the seeing-matched GAaP photometry adopted in KiDS does not correspond to a “total aperture”. Hence, when using these fractional fluxes, the stellar masses calculated by the stellar population codes are the mass of stars required to produce the input galaxy SED, resulting in an aperture bias.
Therefore, in order to recover a fair estimate of the total galaxy stellar mass, the observed SED must be representative of the total light emitted from the galaxy.
In order to correct this systematic effect, we opt to use the quasi-total SExtractor, MAG_AUTO, using the equation:
M_*, corr = M_*, out + 0.4 × (GAAP_r - MAG_AUTO)
where M_*,out is the stellar mass estimated by the stellar population tools, GAAP_r is the r-band GAaP magnitude from the KiDS catalog, and the M_ *, corr is the corrected “total” mass, under the assumption of constant mass-to-light ratios. In the following we will first show the results of the stellar population analysis using the spectroscopic redshift, then we compare these latters with the results of the morphoto-z to estimate the impact of the larger uncertainties on these latter on determining galaxy distances (see Sect. <ref>). Finally, we discuss the impact of the inclusion of the nebular emissions in the models.
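In practice, the correction above is a one-line operation. The helper below is a minimal sketch, assuming (as implied by the 0.4 dex per magnitude factor) that the masses are expressed in log10 units and the magnitudes in the AB system.

```python
def total_log_mass(log_mass_out, gaap_r, mag_auto_r):
    """Aperture correction of the GAaP-based stellar mass to a quasi-total one,
    assuming a constant mass-to-light ratio inside and outside the aperture."""
    return log_mass_out + 0.4 * (gaap_r - mag_auto_r)

# e.g. a galaxy 0.3 mag brighter in MAG_AUTO than in the GAaP r band
# gains 0.12 dex in stellar mass
print(total_log_mass(10.50, 19.8, 19.5))  # -> 10.62
```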
§.§.§ Using Spectroscopic redshifts
We start by showing the results obtained using the spectroscopic redshift as a fixed parameter in the stellar population tools.
In Fig. <ref>, we compare the stellar mass estimates from LePhare and CIGALE, using different libraries and SFHs and spectroscopic redshifts. All other parameters, in Table <ref>, are kept varying in the model grid to be estimated via the SED fitting procedure.
The range of masses is quite large and spans almost 6 orders of magnitude, from log M_*/M_⊙∼6 to log M_*/M_⊙∼12, although stellar masses of log M_*/M_⊙ ≲ 7-7.5 are compatible with globular-cluster-sized systems rather than galaxies. We cannot exclude the contamination from such compact stellar systems, but we decide to retain all sources in the catalog without making any mass-based selection. Nonetheless, we will keep this cautionary note on the very low mass end in mind throughout the paper.
Overall, the stellar masses all align along the 1-to-1 relation with residuals (bottom panels), defined as Δlog M=log M_y-log M_x, computed in different mass bins, that are generally distributed around zero, but with the LP models systematically smaller and the CI models rather aligned to the reference model, LP/BC03/ExD. All residuals, except LP/M05/SB, are consistent with zero within 1σ scatter, defined as the standard deviation of the Δlog M, σ(Δlog M), at least for masses larger than log M/M_⊙∼9. In the same bottom panels, we report the mean scatter for the mass bins at log M/M_⊙<9 and >9, showing generally a slightly larger values at lower masses (mean 0.22 dex) than larger masses (mean 0.20 dex), with the CI models also showing a systematically smaller σ(Δlog M) than LP ones.
The bias, NMAD and outlier fraction of each configuration are summarized in Table <ref>.
Similarly to Fig. <ref>, the bias is indeed consistent with zero for all configurations within the NMAD, except for LP/M05/SB for which the bias is statistically
significant. CIGALE shows both a negligible bias and small NMAD, whether or not the same stellar libraries of the reference model from LePhare (BC03) are adopted, meaning that the code and the SFH can have an impact on the scatter but not on the accuracy of the stellar mass inferences.
On the other hand, the large bias found for LP/M05/SB shows that the combination of template and SFH has a large impact on the bias, for a fixed fitting method. If we also fix the template (see e.g. M05), we can see that the bias can show rather large variations (from -0.423 for LP/M05/SB to -0.178 for LP/M05/ExD), possibly due to the impact of the different SFH choices, which exacerbate the difference in the treatment of the thermally pulsating asymptotic giant branch (TP-AGB) phase by M05 (see e.g. <cit.>). Moreover, we notice a double sequence, at stellar masses log M_*/M_⊙ ≳ 10.8, in the models including the exponential SFH, separating star-forming from quiescent galaxies. The same sequence is not evident in the SB model, which tends to assign younger ages and lower mass-to-light ratios to star-forming galaxies, ultimately ending in an overall strong underestimate of the stellar masses (see the negative biases in Table <ref>). The CIGALE model using M05 and a delayed exponential (CI/M05/DelEx) shows a tighter distribution, with no sign of the double sequence. This confirms that the M05 models are more sensitive than others to the SFH, although there might be a residual component from the fitting (code) procedure, with the CI models having a ∼30% smaller scatter than the LP ones, on average. The NMAD generally mirrors these behaviors,
with M05 configurations being larger than the corresponding set-ups from other templates (see e.g. LP/M05/SB vs LP/CB07/SB or CI/M05/ExD vs CI/BC03/ExD).
All in all, from Fig. <ref> we see that, using the spec-z as input, the scatter of the different combinations is well confined within ∼0.2 dex and
the outlier fraction is always very small (∼ 4-5%), consistent with a log-normal distribution of the uncertainties with no pathological cases across the models.
Considering the whole statistical estimators, we can conclude that stellar masses from spec-z are a rather robust quantity with no signs of significant systematics, except for the LP/M05/SB model.
This is consistent with findings from previous analyses also using optical + NIR photometry (e.g. Lee et al. 2010, <cit.>), although there are analyses reaching different conclusions (<cit.>).
§.§.§ Using morphoto-metric redshifts
We now show the results obtained using the GaZNet redshifts as fixed input in the stellar population tools. This is a critical test to check the impact of the use of noisier redshifts on the statistical estimators discussed in Sect. <ref>, and the overall variation of accuracy and precision of the estimates we might expect when applying this analysis to pure photometric datasets as the full KiDS photometric galaxy sample (see K+19 and future releases).
In Fig. <ref> we show the same correlations as in Fig. <ref>, but using the GaZNet redshifts, while in Table <ref> we report the corresponding statistical estimators. In this case, we also use the LP/BC03/ExD model from the spec-z as a reference to check the impact of the GaZNet redshifts in terms of accuracy and scatter. Basically, the results show that, for the same correlations seen in Fig. <ref>, the relative bias of the different configurations is not worsened, meaning that the accuracy of the mass estimates is not affected by the use of the morphoto-z. This is likely a consequence of the good accuracy of the latter, as seen in Fig. <ref>. On the other hand, we register an evident increase of the NMAD
as a consequence of the morphoto-z intrinsic statistical errors and outlier fractions,
which is also mirrored by the scatter of the residual, at the bottom of the 1-to-1 relations, which is now of the order of 0.23 dex, for log M_*/M_⊙>9, and 0.49 dex for log M_*/M_⊙<9, on average.
This larger scatter at low stellar masses is mainly caused by the trend we see below log M_*/M_⊙=8.5, where stellar masses are systematically overestimated compared to those obtained with the spec-z. This is not an effect that comes from the particular set-up of the fitting procedure, as shown by the comparison of the LP/BC03/ExD/morphoto-z against the same set-up with spec-z (bottom/left plot in Fig. <ref>). Even in this latter case, we see that below log M_*/M_⊙=8.5 the positive bias is similar to the one of all other configurations. We trace the origin of this systematic to a bias of the GaZNet redshifts for a group of objects at very low redshifts (z<0.05, see Fig. <ref>), which also turn out to have low masses. This can be due to some residual contamination from stars, not picked up in the spectral classification, or just a failure of the GaZNet predictions at very low z, which clearly impacts the mass predictions. We will come back to this in Sect. <ref>.
However, still looking at the LP/BC03/ExD/morphoto-z vs. spec-z comparison, above log M_*/M_⊙=8.5 the bias is almost absent and the only relevant effect is the GaZNet redshift scatter which, from the NMAD, is quantified as 0.09.
This is confirmed by noticing that the general increase of the NMAD from the spectroscopic sample to the morphoto-metric sample, in Table <ref>, is compatible with the sum in quadrature of the NMAD of the former with the 0.09 coming from the latter, consistent with pseudo-Gaussian distributions. This is also consistent with a log-normal distribution of the uncertainties of the stellar masses, which is confirmed by the outlier fractions, all of the order of 5-6% above 2σ of the log M_* scatter. A more detailed discussion of the variation of the statistical estimators as a function of the sample properties is presented in Sect. <ref>.
§.§.§ The impact of the nebular emissions on stellar masses
As anticipated at the beginning of Sect. <ref>, we intend to check the impact of the inclusion of nebular emission on our models. Generally speaking, starforming galaxies can have their spectra heavily contaminated by nebular emissions. The most prominent ones are Lyα @λ1216Å [OII] @λ3727Å, Hβ @λ4861Å, [OIII] @λλ 4959Å and 5007Å,
Hα λ6563Å. These emissions are all sparsely distributed in the optical and NIR wavelength at redshift z<1, but they are generally fainter than the continuum collected by the broad bands in this redshift range, except for strong starburst, low-mass galaxies. Here, we have the chance to estimate the impact of the presence of these emissions on the stellar masses, while we will discuss the impact on the star formation rate estimates in Sect. <ref>. We consider the options offered by the LePhare and CIGALE (see details in Sects. <ref> and <ref>) to implement the NE in the models as in Table <ref>. The results of the statistical estimators are reported in Table <ref>, between brackets, for all models considered. Here, we do not find any significant variation of the indicators of all models, which lets us to conclude that the stellar masses are poorly sensitive to the inclusion of the NE, regardless the stellar template, the SFH and the code adopted. We will keep the record of these models in the catalog and we will consider the for the discussion on the variance of the models in the discussion (Sect. <ref>).
§.§ Star Formation Rates
In this section, we present the results on the star formation rates. These measurements represent the current amount of stellar mass formed per unit time, corresponding to the best-fit parameters of the assumed SFH model fitting the SED.
As, by definition, the single burst models do not provide any such estimate, they will be discarded in the following analysis. For the same reason, the mixed model allowed by the M05 libraries (SB+ExD) is almost equivalent to the ExD, as it returns the same SFR estimates for the galaxies best fitted with an exponential SFH (ExD). Hence, only the latter will be listed in the result tables and figures for the LePhare models, together with the DelEx of CIGALE.
We remind the reader of the set of τ and ages adopted for the models in Table <ref>. As seen, we have used a rather large range of both parameters to check their impact on our inferences, even though some extreme values can be either slightly un-physical or too optimistic. For instance, the fitting procedure might have little sensitivity to effectively distinguish between τ=15 Gyr and 30 Gyr, both producing a rather flat SFH, hence leaving large leverage for the model to converge on either value with similar confidence. On the other hand, the stellar models can be rather insensitive to an age of 0.5 Gyr, since the broad-band photometry is unable to catch the typical features of young stars, also given the very shallow limiting magnitude of the u-band, which would provide most of the UV rest-frame emission of galaxies up to z=0.9. However, for this test we decided to maintain a broad range of priors for the parameter space to learn their impact and confidently optimize their choice for future analyses.
As far as the output of both stellar population codes is concerned, similarly to the stellar masses in Sect. <ref>, the star formation rates should also be corrected to total fluxes. This is needed to ensure that the specific star formation rate, sSFR=SFR/M_*, of a galaxy is conserved. Hence, in the following, we will correct the SFRs by the same amount as the stellar masses, i.e.
log SFR_corr=log SFR_out + (log M_*,corr-log M_*,out),
where M_*,out and SFR_out are the output of the SED fitting codes and M_*,corr is given by Eq. <ref>.
Finally, as we want to select a star-forming sample, we will adopt a canonical cut in specific star formation rate (sSFR) to separate passive from active galaxies and use log sSFR/ yr^-1 =-11 as a threshold (see e.g., <cit.>). SSFRs lower than this value should not be taken, in principle, at face value as these correspond to a physically negligible SFR. For this reason, we do not use them in our analysis, although we report them in our catalog with the warning to use them with caution.
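The corresponding SFR rescaling and the sSFR-based selection can be sketched as follows; this is an illustrative snippet only, assuming log10 quantities with SFRs in M_sun/yr and masses in M_sun.

```python
import numpy as np

def correct_log_sfr(log_sfr_out, log_mass_corr, log_mass_out):
    """Rescale the SFR by the same aperture factor as the stellar mass,
    so that the specific SFR (sSFR = SFR/M_*) is conserved."""
    return log_sfr_out + (log_mass_corr - log_mass_out)

def star_forming_mask(log_sfr_corr, log_mass_corr, log_ssfr_cut=-11.0):
    """Select star-forming galaxies with log sSFR/yr^-1 > -11."""
    log_ssfr = np.asarray(log_sfr_corr) - np.asarray(log_mass_corr)
    return log_ssfr > log_ssfr_cut
```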
§.§.§ Using Spectroscopic redshifts
As for the stellar masses in Sect. <ref>, we first discuss the SFR results obtained using the spectroscopic redshift as fixed parameter in
LePhare and CIGALE.
In Fig. <ref>, we show the SFRs computed using the different libraries and SFHs as in Table <ref>. Overall the SFRs look all aligned along the 1-to-1 relation, although both LePhare and CIGALE estimates using M05 show some negative offset (more pronounced in CIGALE), as seen by the residuals shown at the bottom of each panel.
Furthermore, at log SFR/M_⊙ Gyr^-1 ≲ 8, the correlations show a tilt toward a positive bias, more pronounced for CIGALE, which only for CI/BC03/DelEx partially compensates the negative bias at higher SFRs.
On the other hand, at log SFR/M_⊙ Gyr^-1 ≳ 8, the CI/BC03/DelEx estimates are nicely consistent with the LePhare estimates of LP/BC03/ExD.
Overall the two tools show a substantial agreement if they use the same libraries, while they do not seem to show a strong dependence on the SFH.
This is seen from Table <ref>, showing the statistical estimators for the different experiments. Here we find, indeed,
that LP/M05/ExD and CI/M05/DelEx have similar Bias, NMAD, and outlier fraction.
Looking at the whole statistical estimators
we can confirm that, broadly speaking, the relative bias of the SFR estimates is
barely consistent with zero within the NMAD for M05, while it is well consistent with zero for BC03. From Fig. <ref> (bottom panels), we also see that the overall scatter of the residuals is of the order of 0.3 dex, slightly larger than the one of the stellar masses.
Moreover, the outlier fraction, even in this case, is consistent with a log-normal distribution across the whole SFR range. This broad result suggests that the SFR in star-forming galaxies is a rather stable parameter in the redshift range we have considered. The degree of accuracy and scatter among different model and library configurations is almost comparable to that of the stellar masses derived from spec-z. We will now check if this works similarly for the morphoto-metric redshifts, while we will check the impact of the NE in Sect. <ref>.
§.§.§ Using morphoto-metric redshifts
In Sect. <ref> we have discussed the impact of the morphoto-z from GaZNet on the stellar mass estimates and shown that the net effect of the morphoto-metric redshifts is to increase the scatter and the outlier fraction of the final estimates. We have also seen that the overall impact of the GaZNet redshifts can be quantified by comparing two sets of estimates with the same tool, stellar population library and SFH, changing only the input redshifts (e.g. using the LP/BC03/ExD from morphoto-z vs. spec-z). The same trend is seen for the SFR estimates with the GaZNet redshifts with respect to the spec-z, as shown in Fig. <ref>. Compared to the spec-z estimates in Fig. <ref>, we see that the scatter and the number of “large” outliers (see below) of the morphoto-z based estimates are increased with respect to the LP/BC03/ExD from spec-z. This is seen from the bottom panels with residuals in Δlog SFR (see caption), where we report an average scatter of 0.35 dex for log SFR/M_⊙ Gyr^-1>9 and 0.52 dex for log SFR/M_⊙ Gyr^-1<9.
This is also quantified in Table <ref>, where we again measure an increased NMAD for all configurations. As noticed for the masses, this is compatible with a pseudo-Gaussian increase of the NMAD values of the morphoto-z estimates, as the NMAD of the LP/BC03/ExD/morphoto-z (0.163) provides a measure of the overall impact of the morphoto-z errors.
The “gaussianity” of the log SFR distribution obtained from the morphoto-z is confirmed by the outlier fraction above the 2σ(log SFR), of the order of 5%.
In the same Fig. <ref>, we also see that the bias is generally compatible with zero, except for the CI/M05/DelEx (morphoto-z). In
general, a trend of the bias with the SFR is evident, due to a positive bias for the lower star formation rates (log SFR/M_⊙ Gyr^-1 ≲ 8.5). Here the effect of the morphoto-z is to exacerbate the weak trend shown by the spec-z estimates, which is partially absorbed by the scatter of the residuals.
Due to the well known correlation between the SFR and the stellar mass (see Sect. <ref>), we conclude that this has the same origin as the bias found for stellar masses at log M_*/M_⊙<8.5, as discussed in Sect. <ref>. We also notice a cloud of outliers at log SFR/M_⊙ Gyr^-1 ≳ 10 from the GaZNet-based estimates. These come from a series of morphoto-z outliers, overestimating the intrinsic redshift of the galaxy. Indeed, the higher fictitious redshifts force the SED fitting procedure to interpret the rest-frame photometry of the galaxy as bluer and, hence, more star-forming than what one obtains from the spec-z.
To conclude the analysis of the SFRs, we can say that, as for the stellar masses, these are also rather stable quantities with respect to the fitting tool, stellar libraries and SFHs, as they do not show significant systematics, except for small SFRs, although we register a tendency of the M05 models to underestimate the SFRs with respect to the BC03.
§.§.§ The impact of the nebular emissions on star formation rates
We can finally check the impact of the nebular emissions on the predictions of the star formation rates from the different stellar population models considered. As done for the stellar masses in Sect. <ref>, we report the results of the main statistical indicators in Table <ref>, face-to-face with the same indicators from the no-emission models, using the GaZNet redshifts as input. As for the masses we do not see any significant change on the overall relative bias, NMAD and outlier fraction, meaning that the inclusion of the nebular emissions does not produce any relevant effect for any of the model, given the mass (log M_*/M_⊙>8.5) and redshift range (z<1) considered here.
§.§ Median mass and SFR estimates
A relevant result of this paper is that both the stellar mass and the star formation rate are two quantities that can robustly be constrained with seeing matched photometry covering a wide range of wavelengths, from optical to NIR (see e.g. <cit.>, <cit.>).
For completeness, in <ref> we briefly test the case where only optical bands are available and compare this with results obtained in Sect. <ref> to briefly illustrate the advantage of adding the NIR to the optical bands, in terms of accuracy and precision of the stellar population estimates.
By robust constraints, here we mean that the M_* and the SFR estimates do not show statistically significant “relative” bias if compared to the estimates from other tools, libraries and star formation histories. As seen in Sects. <ref> and <ref>, this is generally true for all models considered except LP/M05/SB, as this shows a relative bias of the stellar masses which is systematically larger than the scatter of the overall mass estimates (see Table <ref>, and Fig. <ref>). This makes this model an outlier with respect to all other models (see Sect. <ref>) and we decide to exclude it in the following analysis.
As reference estimates we have arbitrarily chosen the LP/BC03/ExD model, but this cannot be taken as ground truth. If we assume that the true values of M_* and the SFR have to be found within the interval covered by the adopted models, then we can define the “median” value as a reasonable estimator of the ground truth of each of them. To deal with the low number of measurements
available to compute the median, we follow the approach of <cit.> and adopt the Hodges-Lehmann
estimator, defined as the median value of the means in the linear space of all the possible pairs of estimates in the sample:
M_*^MED = median( (M_*,i + M_*,j)/2 ),
where the i and j indexes vary over the different models in Table <ref>. For a dataset with n measurements, the set of all possible two-element subsets, for which the median is computed, has n(n - 1)/2 elements.
Similarly we will define a median star formation rate
SFR^MED = median( (SFR_i + SFR_j)/2 ).
Assuming these quantities to be unbiased estimators of the ground truth, we will use them for a science validation test as in the Sect. <ref>.
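A minimal implementation of the Hodges-Lehmann median defined above could look as follows; the pair means are taken in linear space, as described in the text, and the example input values are purely illustrative.

```python
import numpy as np
from itertools import combinations

def hodges_lehmann_log(log_estimates):
    """Hodges-Lehmann estimator applied to log10 quantities.

    The means of the n(n-1)/2 pairs are computed in linear space and the
    median of these pair means is returned, converted back to log10.
    """
    x = 10.0 ** np.asarray(log_estimates, dtype=float)
    pair_means = [0.5 * (xi + xj) for xi, xj in combinations(x, 2)]
    return np.log10(np.median(pair_means))

# e.g. median over the stellar masses returned by the different model set-ups (log Msun)
print(hodges_lehmann_log([10.61, 10.55, 10.48, 10.66, 10.52, 10.58]))
```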
As a sanity check for our median estimates, as well as for individual model results, in Appendix <ref> we show a direct comparison of the M_* and SFRs versus some external catalogs overlapping with the KiDS area. In particular, we use the stellar masses from <cit.>, which makes use of ugriZ photometry, and the SFR estimates of the SDSS-DR7 galaxy sample from the MPA-JHU group[https://wwwmpa.mpa-garching.mpg.de/SDSS/DR7/sfrs.html ], based on spectroscopy data as discussed in <cit.>. The main conclusion from these comparisons is that the “median” stellar masses and the star formation rates derived from the 9-band SED fitting are generally consistent with independent estimates based on different data and techniques. This is particularly true for M_* estimates, while for SFRs we can expect some offset due to intrinsic systematics of the different proxies adopted (see also Sect. <ref>). However, in all cases the relative bias between different datasets is confined in the typical scatter of the data.
§ DISCUSSION
In the previous sections we have assessed the accuracy and scatter of the different configurations using the relative bias, NMAD and outlier fraction as statistical estimators, and concluded that both stellar masses and SFRs are rather robust quantities. In this section we want to examine the accuracy and scatter in more details, as a function of the intrinsic properties of galaxies, like redshift, signal-to-noise ratio, and stellar mass. This will allow us to check the presence of “trends” in the systematics that might affect the stellar population parameters in different volumes of the parameter space defined by these quantities. This is fundamental if one wants to study the evolution of the mass function of scaling relations like the “main sequence” of the star-formation galaxies from the M_*- SFR relation. We also briefly discuss other sources of systematics, and finally compare the galaxy mass function and the M_*- SFR derived from our “median” parameters with previous literature, as a science validation of our inferences.
§.§ Relative bias, NMAD and outliers as a function of redshift, SNR and stellar mass
In Fig. <ref> we plot the bias, NMAD and outlier fraction as a function of the redshift, r-band signal-to-noise ratio (SNR) and stellar mass, for the stellar masses (left) and star formation rates (right) derived fixing the redshifts to the morphoto-z. Here, we decide to show only the GaZNet-based estimates because, as seen in Tables <ref> and <ref>, these are the estimates that, by incorporating the uncertainties of the morphoto-metric redshifts, provide the upper limits for both scatter and outlier fractions.
We also show the dependence on the r-band SNR, as a lower limit on the photometric uncertainties (all other bands being generally shallower than the r-band), which should also affect the precision of the stellar population parameters. We finally remark that we limit our comparison to log M_*/M_⊙>8.5, since, as seen in Sect. <ref>, below this limiting mass the estimates are dominated by the morphoto-z biases.
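A minimal sketch of the three statistical estimators used throughout this section, evaluated in bins of a chosen quantity (redshift, SNR or mass), is given below. This is not the paper's pipeline: here dp is assumed to be the log-space difference between a given set-up and the reference estimate, and the 0.3 dex outlier threshold is an illustrative choice rather than the definition adopted in the paper.

```python
import numpy as np

def estimators(dp):
    """Relative bias, NMAD and outlier fraction of an array of residuals dp."""
    bias = np.median(dp)
    nmad = 1.4826 * np.median(np.abs(dp - np.median(dp)))
    out_frac = np.mean(np.abs(dp) > 0.3)        # assumed threshold (dex)
    return bias, nmad, out_frac

def binned_estimators(x, dp, edges):
    """Evaluate (bias, NMAD, outlier fraction) of dp in bins of x (z, SNR or logM)."""
    results = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (x >= lo) & (x < hi)
        results.append(estimators(dp[sel]) if sel.any() else (np.nan,) * 3)
    return results
```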
The first comment is that both stellar masses and star formation rates show similar features in the statistical estimators as a function of the different quantities, suggesting that the sources of the biases and scatter are the same for both quantities. For the stellar masses, going from top to bottom, the outlier fraction usually stays within more than acceptable values over all ranges, although toward low redshift (z ≲ 0.05) and low masses (log M_*/M_⊙ ≲ 9) the outlier fraction and NMAD show a systematic increase. This has been anticipated in Sects. <ref> and <ref> and tracked to an excess of outliers in the GaZNet redshifts.
A similar degradation of the estimators is observed at z ≳ 0.7 for the M_* estimates, mainly for the poorer statistical samples, which also have degraded redshift estimates. Overall we see that all the statistical estimators remain contained within a reasonable bias (|Δ p|<0.2), NMAD (<0.3) and outlier fraction (<10%), especially
having excluded the LP/M05/SB from the mass set-ups. For star formation rates, notice
the bimodal behavior of the |Δ p| between the M05 and the BC03 models discussed in Sect. <ref>, with the
CI/M05/DelEx model showing the largest deviation from all other models. This possibly suggests that the M05 stellar libraries need more complex SFHs than the ones adopted here. However, since the overall indicators of all models stay within the limits of the NMAD, we have kept all of them in the SFR^MED estimates, in order to average out possible systematics.
For all the other estimators (NMAD and outlier fraction), we see little difference among the adopted fitting configurations, and confirm no major impact of the NE models either.
To conclude, we expect we can use the “median” estimates in the full range of masses log M_*/M_⊙>8.5 and at all redshifts z ≲ 1 in future applications, although, for the SFRs, it remains to be seen whether the SFR^MED is totally bias-free. In the next sections, and in Appendix <ref>, we will show some evidence that this might be the case. We have also checked the statistical estimators as a function of r-mag (not shown) and we can confirm that the outlier fraction and the bias become almost out of control at r-mag ≳ 23, which sets a safe limit for future applications based on the use of current GaZNet redshifts.
§.§ Some considerations about other sources of systematics
Before moving to some science applications, we need to stress that providing a full insight into all possible systematics that might affect the stellar population analysis is beyond the scope of this paper.
We have already introduced the problem of the wavelength coverage in Sect. <ref> and addressed it in Appendix <ref>.
Another source of bias one should consider is the input redshifts used in the stellar population tools. As discussed in Sect. <ref>, we are motivated to keep the redshifts fixed because, by leaving the stellar population tools free to constrain the redshift and the stellar population properties at the same time, we expect the degeneracies between redshifts and galaxy colours to strongly affect the stellar populations. This is also briefly discussed in Appendix <ref>, where we show that the results in terms of photo-z and stellar masses are much more scattered and prone to biases than when fixing the redshifts.
On the other hand, we have seen in the previous sections that, in the case of unbiased morphoto-metric redshifts, moving from spectroscopic to photometry-based redshifts does not affect the accuracy, while the scatter and the outlier fraction increase by an acceptable amount.
Finally, a comment on the stellar templates. In this paper we have used a variety of libraries that could be directly incorporated in the two reference tools adopted (see Table <ref>). However, this list is neither complete, nor optimal to fully account for the current state-of-the-art stellar population models. We expect to expand our analysis to other stellar libraries (see, e.g., MILES, <cit.>) in future analyses. In this respect, we can consider this analysis as a first step of a more general program to apply a larger variety of models to ground-based multi-band datasets.
§.§ Galaxy stellar mass function, star formation rate function and SFR-M_* relation
We want to conclude this paper with a science validation test for the quantities we have focused on in this analysis: the “median” values, M_*^MED and SFR^MED. In Sect. <ref> we have seen that these quantities can be considered robust estimates of the stellar mass, M_*, and the star formation rate, SFR, respectively. A way to test this is to derive the galaxy stellar mass function (GSMF), i.e. the number of galaxies in a given mass bin per unit volume, Φ(M), and the corresponding star formation rate function (SFRF), i.e. the number of galaxies in a given SFR bin per unit volume, Φ(SFR). The latter, in particular, will give us the chance to compare the SFRs derived from different indicators (UV, Hα, IR luminosities) with our estimates obtained from the KiDS 9-band photometry. We finally derive the SFR-M_* relation and compare it with independent observations to check for broad consistency of our inferences with previous literature. This will allow us to qualify the dataset based on the process presented in Sect. <ref> for future catalog compilations and science applications.
Both the GSMF and the M_*-SFR relation have a crucial role in the understanding of the assembly and formation of galaxies (see discussion in Sect. <ref>) and there has been enormous progress in tracing these quantities back to the early phases of galaxy formation (see e.g., GSMF: <cit.>, <cit.>, <cit.>, <cit.>, <cit.>; SFR-M_*: <cit.>). The SFRF is less constrained (especially for high star-forming galaxies) as it is highly dependent on the assumed methodology to obtain the galaxy SFRs (see e.g. <cit.>).
For this test, we are interested in checking the consistency of our derivations with previous literature in a statistical sense, while we leave the physical interpretation of these relations to a dedicated analysis using the full KiDS photometric galaxy catalog. To avoid corrections due to the different completeness mass of the GAMA and SDSS-DR17 data in our spectroscopic sample (see Sect. <ref>), we will consider, here below, only the GAMA sub-sample.
§.§.§ Galaxy Stellar Mass Function
In Fig. <ref> we start by showing the stellar mass vs. redshift diagram of the GAMA galaxies in our sample. We also overplot the contour of the completeness mass, obtained from the turn-over points in the number counts in a given (narrow) redshift bin (see e.g. <cit.> for more details on this method). As we can see, the completeness mass becomes almost constant at ∼10^11 M_⊙ at z ≳ 0.4, leaving only a small statistical sample to compare with the literature. We then decide to limit our analysis to z<0.4, where we have different reference works to compare our data with.
For the comparison of the GSMF, we use observations derived for the GAMA galaxies at z<0.1 (<cit.>, <cit.>) and 0.2<z<0.4 (<cit.>).
In Fig. <ref> we show the GSMF from the M_*^MED estimates, derived in the redshift bins z=0.02-0.1 and z=0.2-0.4, against literature GSMFs at similar redshifts, for a homogeneous comparison. In the same figure, we also show
the completeness mass, defined as in Fig. <ref>.
In Fig. <ref>, we do not compute the volume occupied by the complete sample of galaxies in the GAMA area, V_max, as this would require knowledge of the GAMA survey selection function, which is beyond the scope of this comparison. We rather normalize the counts to match the literature GSMFs. As we can see, both the estimates derived with spec-z and with morphoto-z nicely follow the GSMFs of previous literature in the two redshift bins.
In particular, at z<0.1 (left panel) our estimates are almost indistinguishable from previous GAMA inferences from <cit.> and the recent compilation from <cit.> for masses above the limiting mass of our spectroscopic sample, although the match becomes less secure at very high masses, where both the exact volume adopted and the different selections can cause noisy statistics.
A similar behaviour is also seen in the other redshift bin adopted (0.2<z<0.4, right panel).
Here, the consistency of our GSMF with the dataset from <cit.> is again very good over the full range of masses above the completeness limit.
Overall, this good match with independent GSMFs brings us to the conclusion that the stellar masses we have produced are of sufficiently high fidelity to be used in further analyses.
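For illustration, the GSMF-shape comparison described above can be sketched as follows; this is an assumption about the procedure, not the authors' pipeline, since the exact GAMA selection function (and hence V_max) is not modelled here and only the shape is compared after a global rescaling.

```python
import numpy as np

def gsmf_counts(logm, bins):
    """Galaxy counts per dex of stellar mass (arbitrary normalisation)."""
    counts, edges = np.histogram(logm, bins=bins)
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / np.diff(edges)

def rescale_to_literature(phi_ours, phi_lit):
    """Single multiplicative factor matching our counts to a reference GSMF."""
    good = (phi_ours > 0) & np.isfinite(phi_lit)
    return phi_ours * np.median(phi_lit[good] / phi_ours[good])
```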
§.§.§ Galaxy Star Formation Rate Function
Unlike the GSMF, the star formation rate function (SFRF) is not a standard proxy for galaxy evolution, although it can provide relevant insight into galaxy formation (e.g. <cit.>). One reason is that SFRs are more sensitive than stellar masses to the assumed methodology. For this reason, more focus is usually given to the observed UV, Hα or infrared (IR) luminosity functions as probes of the SFRs in galaxies (<cit.>). Despite these difficulties, there have been attempts to quantify the SFRF at different redshifts (e.g. <cit.>).
At low redshifts (z<0.5) the UV and Hα data are significantly affected by dust attenuation effects (e.g. <cit.>, <cit.>). This limitation impacts the derived UV/Hα star formation rate functions, which are usually incomplete at the high star-forming end (log SFR/(M_⊙ Gyr^-1) ≳ 10).
Thus, especially for these high SFR ranges,
IR SFRs are considered more robust and give a more accurate estimate of the SFRFs, at least at low redshifts (e.g. <cit.> and reference therein). Taking all this into account, in Fig. <ref> we show the SFRFs based on the “median” values derived in Sect. <ref>. We compare these SFRFs in three redshift bins consistent with other observations from <cit.>, which reports a collection of SFRFs based on UV, Hα and IR, and from <cit.>, which presents SFRs from SED fitting of a local sample of SDSS galaxies.
In the figure, we can see the co-existence of SFRs based on different proxies and appreciate the large scatter introduced by the different methods. Broadly speaking, the UV- and Hα-based SFRFs are consistent with each other and generally discrepant from the IR-based ones. Our SED estimates are fully consistent with the IR SFRFs, down to the “limiting SFR”, marked as vertical dashed lines in the different redshift bins[This has been obtained following the same procedure as for the stellar masses, i.e. as the peak of the SFRF. Here, though, we do not interpolate in SFRF vs. z but show the peak in each particular bin.].
Finally, we remark on the almost perfect agreement with the SDSS SED estimates from <cit.>, especially for our spec-z estimates. Hence, we conclude that the SFR^MED estimates allow us to build SFRFs which are in good agreement with previous literature based on IR luminosity functions and SED fitting, while the differences with respect to UV- and Hα-based estimates can be ascribed to the different calibrations of the methods (see e.g. <cit.>). This does not impact the fidelity of our estimates, as they show no systematics with respect to similar (photometric) probes. As we will see in Appendix <ref>, this conclusion is corroborated by the direct comparison of the SFR^MED estimates with spectroscopic SFRs, showing a statistically insignificant bias for the morphoto-z and no bias for the spec-z based estimates.
To conclude, the consistency of both the GSMFs and SFRFs with literature further support the assumption that the “median” estimates represent a realistic proxy of the true M_* and SFRs, either using spectroscopic or morphoto-metric redshifts. In particular, the accuracy of the GMSFs and SFRFs based on morphoto-metric redshifts demonstrate that the method can be successfully extended to larger photometric KiDS galaxy collections.
§.§.§ M_*-SFR relation
For the M_*-SFR relation, in Fig. <ref> we also plot the results of the lower-redshift bins, where the mass completeness allows us to have
a sufficient sample for a consistency check. We use, as comparison,
a series of mean relations of star-forming galaxies from other literature studies in different redshift bins: namely, 1) Tortora et al. (2023, in preparation), including <cit.> based on a hybrid method using far-ultraviolet (FUV)+total infrared luminosity, 2) <cit.>, performing SED fitting using multi-band FUV-FIR, 3) <cit.>, based on a collection of homogenized literature[They calibrate to a Kroupa IMF, and the SFR estimates to the Kennicutt & Evans calibration <cit.>. Note that the choice of IMF does not impact the M_*-SFR relation as it equally affects the stellar mass and the SFR estimates.].
We also add the predictions from the Illustris-TNG (<cit.>) and EAGLE (<cit.>) simulations to illustrate the potential of deriving SFRs from larger KiDS galaxy samples to be checked against the outcome of state-of-the-art hydrodynamical simulations, to gain insight into the galaxy formation scenario[We did not compare the inferred GSMF in Sect. <ref> with the same simulations because the latter are tuned, by construction, to fit the observed stellar mass functions.].
In Fig. <ref> we show the M_*-SFR relation for the median quantities obtained using only the GaZNet redshifts as input. This is because we have seen, in Sect. <ref>, that these represent the worst-case scenario, where the measurements are more scattered and show systematic effects only at very low masses (log M_*/M_⊙<8.5) – these are below the completeness mass we can use as a lower limit for the science analysis.
From Fig. <ref> we find that the M_*-SFR relation of the KiDS galaxies (black points with errorbars) nicely follows the majority of the literature data, both from observations and simulations, down to the completeness mass, despite the different methods adopted in the literature and the different definitions of star-forming systems. At masses below the limiting mass, our M_*-SFR relation shows a significant departure from the other relations.
We will check whether this is indicative of the presence of systematics when we use the full KiDS photometric sample, for which we expect to push the mass completeness to lower levels in all redshift bins.
We are convinced that this consistency check, of both the M_*-SFR relation and the GSMF, which is just qualitative at this stage, confirms the validity of the procedure and the data produced in this analysis.
§ CONCLUSIONS AND PERSPECTIVES
In this paper we have used a spectroscopic galaxy catalog including 9-band (u g r i Z Y J H K_s) photometry from the 4th data release of the Kilo-Degree Survey (KiDS) to derive robust stellar masses and star formation rates. We have performed a full template fitting analysis using two popular stellar population codes, LePhare and CIGALE, and a combination of stellar population libraries (<cit.>, <cit.>, <cit.>) and star formation histories (i.e. a single burst, an exponential decline, and a delayed exponential). Besides the spectroscopic redshifts, taken from GAMA data releases 2 and 3 and SDSS data release 17, we have considered as input of the SED fitting process the morphoto-metric redshifts obtained from the deep learning tool GaZNet (<cit.>). In this latter case, we can perform a controlled test of the variance one would introduce in large datasets, where only photometric redshifts are available for the galaxy catalogs.
In fact, the main goal of this analysis has been to assess the relative accuracy and the variance of the stellar population parameters under a variety of combinations of fitting tools, stellar templates and star formation histories. We summarize here below the main results of this analysis:
1) the stellar mass and the star formation rate show limited scatter and a relative bias which is within the scatter, when comparing the estimates for each galaxy across the different methods. As such, these quantities are rather stable against the stellar template fitting set-ups;
2) the relative bias, NMAD and outlier fraction vary with the stellar mass and SNR, not with redshift;
3) due to the overall resilience of the parameters to the different variables in play, we can reasonably adopt a median definition as an unbiased estimator of the “ground truth” values for the parameters. Following <cit.>, we have used a Hodges-Lehmann median for this robust parameter estimate and used them for a science validation;
4) we have evaluated the scatter of the individual fitting set-ups with respect to the Hodges-Lehmann median (Fig. <ref>) and found that, depending on the combination of templates and star formation histories, stellar masses and star formation rates can deviate by ∼0.1 dex, for high mass systems, to ∼0.2 dex, for low mass systems;
5) as a science validation test, we have derived the stellar mass function and the star formation rate function, as well as the M_*-SFR relation, and compared them with previous literature in different redshift bins, finding a very good match with a wide range of literature results;
6) we provide the catalog of the galaxy parameters, including stellar masses, star formation rates, age, metallicity, extinction and the τ of the exponential decaying models, for ∼290 000 galaxies with spectroscopic redshifts, 0.01<z<0.9, from GAMA and SDSS-DR17. The catalog is available at this URL[link] and contains also the 9-band GAaP photometry, the r-band MAG_AUTO, and the spectroscopic redshift from the parent spectroscopic surveys.
In the future we plan to expand this test, including more stellar formation tools (e.g. FAST : <cit.>, SED3FIT: <cit.>, Prospector: <cit.>, P12 <cit.>), star formation histories (e.g.
log-normal <cit.>, Γ <cit.>) and stellar libraries (e.g. <cit.>).
This will allow us to investigate an even larger variety of models and use the “median” of their outcomes (see Sect. <ref>) as an unbiased stellar population parameter estimator
for the full KiDS “galaxy” photometric sample, and finally provide a general-purpose catalog to be used for a variety of galaxy studies. Similar datasets have been previously used in KiDS to study the size-mass relation of galaxies (<cit.>), the number density evolution of ultra-compact massive galaxies (<cit.>), the mass function of galaxies at different redshifts (<cit.>), the clustering of red-sequence galaxies (<cit.>), the dark matter halo masses of elliptical galaxies as a function of observational quantities (<cit.>), and the dark matter assembly in massive galaxies (<cit.>).
§ ACKNOWLEDGEMENTS
NRN acknowledges financial support from the Research Fund for International Scholars of the National Science Foundation of China, grant n. 12150710511.
RL acknowledges the support of the National Nature Science Foundation of China (No. 12022306) and the science research grants from the China Manned Space Project (CMS-CSST-2021-A01).
AK acknowledges financial support from the One hundred top talent program of the Sun Yat-sen University.
HF acknowledges the financial
support of the National Natural Science Foundation of China (grant No. 12203096).
LX thanks Dr. O. Ilbert for the useful suggestions about LePhare and Fucheng Zhong for useful discussions.
§ DATA AVAILABILITY
The data that support the findings of this study are available at the URLs provided in the text.
§
§.§ The impact of missing NIR photometry
In this Appendix we want to check the impact of the wavelength range on the analysis we have performed, and quantify, in particular, the advantage of including the NIR bands to produce reliable stellar population parameters. It is well known that a wide wavelength baseline is a necessary prerequisite for accurate photometric redshifts (see e.g. <cit.>). As we will see in Appendix <ref>, accurate redshifts have themselves a large impact on the stellar population parameters.
Here we want to show that, even assuming the redshift of a galaxy is known exactly, the wavelength baseline is crucial to provide stellar masses and SFRs with minimal bias and scatter. For the sake of space, we consider only the extreme case of fully discarding the NIR bands, to show the maximum error one would commit by applying the same set-ups as in Table <ref>. For the same reason, we show the results for four LePhare models: LP/BC03/ExD, LP/M05/SB, LP/M05/ExD, LP/CB07/SB. In Table <ref> we report the main statistical estimators for the different configurations, for both mass and SFR estimates, either assuming the spec-z or the morphoto-z as input. These can be compared to Tables <ref> and <ref>.
The most evident effect is the large increase of the scatter of the estimates, as measured from the NMAD.
For stellar masses we find that the NMAD increases by 30-40% (e.g. LP/M05/ExD/spec-z) to about 100% (LP/BC03/ExD/morphoto-z). On the other hand, all SB models show only a small increase in NMAD (∼10%) and smaller relative biases, indicating that these are almost insensitive to the wider wavelength baseline.
For the SFRs we find a similar degradation of the precision of the estimates, with the NMAD in Table <ref> increased by 30% to 90% with respect to Table <ref>, and minimal variation in the relative bias.
§.§ Comparison of M_* and SFR estimates against external catalogs
As anticipated in Sect. <ref>, here we want to perform a direct comparison of our M_* and SFR estimates against external catalogs.
For the stellar masses, we have mentioned the existence of stellar mass catalogs based on similar KiDS data (e.g. <cit.>, <cit.>); however, here we decide to compare our stellar masses with a catalog based on different photometric data from <cit.>. The catalog of their stellar masses is available on the GAMA website[Catalog link: http://www.gama-survey.org/dr2/schema/table.php?id=179]. This is based on the ugri optical imaging from SDSS (DR7) and (according to the catalog description) Z-band from UKIDSS (see T+11 and references therein). Similarly to us, they use BC03 templates, a Chabrier IMF, and a Calzetti extinction law, with an exponentially declining star formation history, but they use a customized code for their stellar population models. Hence we can expect some differences in the estimates due to the code adopted and the data (different observations, photometric accuracy and errors etc.), while they use the GAMA spectroscopic redshift information as input of their model. We have found a match of 64 771 galaxies with our catalog, which are plotted in Fig. <ref> against the M_*^MED estimates from Sect. <ref>, considering both the spectroscopic and the GaZNet redshifts as input. Since our LP/BC03/ExD is the closest model to their set-up, we also add it for comparison in the same figure. Overall, we see that all the estimates (except the M_*^MED/spec-z) are consistent within the errors, shown at the bottom of each panel, with a scatter that is always contained within ∼0.2 dex for the spectroscopic redshifts and ∼0.25 dex for the morphoto-metric redshifts. We also clearly observe that the LP/BC03/ExD has almost no bias, meaning that the different codes and also the different data have a minimal impact on the final mass estimates. The offset with the M_*^MED (of the order of 0.15 dex) is due to the relative bias of the different models entering the “median” quantities: in Sect. <ref> this is quantified to be ∼0.10 dex for the LP/BC03/ExD (see blue line in the 2nd row from the top of Fig. <ref> – left panel), hence consistent with the 0.15 dex offset above, considering the scatter of ∼0.2 dex in the top/left panel of Fig. <ref>. The bias with the LP/BC03/ExD model becomes even smaller if we use the same baseline as T+11, i.e. the five ugriZ bands, as shown by the orange residuals at the bottom of each panel. This also indicates that the effect of the NIR bands mainly impacts the massive galaxies, where the difference in the masses can be as large as 0.2 dex. We still see, in all cases, the systematic deviation of the sample based on GaZNet redshifts at log M_*/M_⊙<9 discussed extensively before, depending on the redshift systematics and not on the stellar population analysis.
For the SFRs we make use of the SDSS-DR7 star formation rate catalog (see footnote <ref>) based on the analysis discussed in <cit.>, but see also <cit.>. Here, the star formation rates are computed by directly fitting the emission lines (e.g., Hα, Hβ, [OIII]@λ5007, [NII]@λ6584, [OII]@λ3727, and [SII]@λ6716). This offers us the opportunity to check for biases in our “median” results against spectroscopic inferences, hence based on a more robust method, especially considering the bimodal bias from M05 and BC03 discussed in Sect. <ref>. The comparison of our 9-band no-NE estimates with the SDSS-DR7 SFRs is shown in Fig. <ref>. We use the no-NE estimates to confirm the small impact of the emission lines on the SED-based SFRs, as discussed in Sect. <ref>. In Fig. <ref>, we see that the SFR^MED estimates are in very good agreement with the SDSS spectroscopic inferences, with a bias which is well within the scatter of the data-points. For the morphoto-z sample we see, as usual, the positive bias at low star formation rates induced by the systematics in the morphoto-metric redshifts at low-SFR values, as discussed in Sect. <ref>, although here the offset starts to become significant at log SFR/(M_⊙ Gyr^-1) ≲ 9, suggesting that the different methods (e.g. emission lines vs. SED fitting) can introduce some biases (see also Sect. <ref>). The scatter always remains confined within ∼0.4 dex (see values in the figure insets), in line with the results also discussed in the same Sect. <ref>, at least for higher SFRs (log SFR/(M_⊙ Gyr^-1) > 9).
We believe the evidence collected in this appendix, for both stellar masses and SFRs, supports all the main conclusions of the paper about the robustness of the stellar population quantities from the different methods/set-ups and the use of the “median” values as an unbiased estimator of the true quantities for our galaxy sample, given the range of redshifts adopted.
§.§ LePhare results with redshift as free parameter
Both SED fitting tools, LePhare and CIGALE, can use the redshift as a free parameter during the fitting procedure. This gives us the chance to directly visualize the degeneracies in the final results introduced by the lack of accurate galaxy redshifts. For this particular test we use LePhare to show the impact on the stellar mass estimates. We use the reference set-up, i.e. the LP/BC03/ExD, which becomes LP/BC03/ExD/specz for the case with spec-z fixed and LP/BC03/ExD/zfree in the variation with the redshift as a free parameter. In Fig. <ref> we show: 1) on the left panel, the spec-z vs. the photometric redshift inferred by LePhare, photo-z_LP, and 2) on the right, the corresponding stellar masses.
From this figure, we can clearly see the impact of missing the information on redshift in the stellar population analysis, in comparison with the equivalent quantities obtained for the GaZNet morphoto-z (Fig. <ref> and Fig. <ref>, bottom left). This is also quantified in the residual plots at the bottom of Fig. <ref>, where we plot the relative bias and scatter both for the photo-z_LP and the GaZNet redshift inferences. In particular, the stellar masses in the former case show a bias and scatter that is fully driven by the larger variance of the photometric redshifts: see, e.g., the cloud of galaxies with masses almost parallel to the 1-to-1 relation with a large positive offset at the top of the figure, which is absent in the GaZNet-based estimates. This is confirmed by the global statistical estimators: for the redshifts we have μ=0.005, NMAD=0.039 and Out. frac.=2.3%, i.e. much larger than the same quantities derived for the GaZNet redshifts in Fig. <ref> (μ=0.005, NMAD=0.017 and Out. frac.=0.4%, respectively). This is mirrored by a similar worsening of the same estimators for the masses, which in the z_LP case are Δ p=-0.077, NMAD=0.197 and Out. frac.=4.7%, i.e. up to twice as large as the equivalent values found for the GaZNet morphoto-z in Table <ref> (Δ p=-0.033, NMAD=0.093 and Out. frac.=3.8%). This quantifies the advantage of having accurate photo-z in the stellar population analysis.
|
http://arxiv.org/abs/2307.04989v1 | 20230711025828 | Composition constraints of the TRAPPIST-1 planets from their formation | [
"Anna C. Childs",
"Cody Shakespeare",
"David R. Rice",
"Chao-Chin Yang",
"Jason H. Steffen"
] | astro-ph.EP | [
"astro-ph.EP"
] |
Composition constraints of the TRAPPIST-1 planets from their formation
Anna C. Childs, Cody Shakespeare, David R. Rice, Chao-Chin Yang, Jason H. Steffen
=================================================================================================================================================================
We study the formation of the TRAPPIST-1 (T1) planets starting shortly after Moon-sized bodies form just exterior to the ice line. Our model includes mass growth from pebble accretion and mergers, fragmentation, type-I migration, and eccentricity and inclination damping from gas drag. We follow the composition evolution of the planets fed by a dust condensation code that tracks how various dust species condense out of the disc as it cools. We use the final planet compositions to calculate the resulting radii of the planets using a new planet interior structure code and explore various interior structure models. Our model reproduces the broader architecture of the T1 system and constrains the initial water mass fraction of the early embryos and the final relative abundances of the major refractory elements. We find that the inner two planets likely experienced giant impacts and fragments from collisions between planetary embryos often seed the small planets that subsequently grow through pebble accretion. Using our composition constraints we find solutions for a two-layer model, a planet comprised of only a core and mantle, that match observed bulk densities for the two inner planets b and c. This, along with the high number of giant impacts the inner planets experienced, is consistent with recent observations that these planets are likely desiccated. However, two-layer models seem unlikely for most of the remaining outer planets which suggests that these planets have a primordial hydrosphere. Our composition constraints also indicate that no planets are consistent with a core-free interior structure.
planets and satellites: composition – planets and satellites: formation – physical evolution – terrestrial planets
§ INTRODUCTION
The TRAPPIST-1 (T1) system is a late-M dwarf star that hosts seven tightly packed, terrestrial planets <cit.>. The unique and tightly constrained planet masses and orbital distribution of this system have implications for its formation history and, as a result, this system has been widely studied. Observations show that the two innermost planets have the largest masses in the system while the mass of the outer planets increases with their orbital distance <cit.>. This mass trend has been referred to as a reversed mass ranking and is difficult to explain with current planet formation theories <cit.>. The planets are in a complex resonance chain where the outer four planets are in first-order mean-motion resonances with each adjacent planet and the inner three planets are in higher-order resonances (8:5 and 5:3). Three-body Laplace resonances also exist throughout the system <cit.>.
Terrestrial planets can primarily grow their mass though core accretion <cit.> or through pebble accretion <cit.>. <cit.> numerically modeled both mechanisms around a T1-like star to understand if planetary systems around low-mass stars preferentially form through either planetesimal or pebble accretion. They found that while both planetesimal and pebble accretion form similar planetary systems, planets that formed through planetesimal accretion had a much larger water content than the planet analogues that formed through pebble accretion. Thus, constraints on the water mass fraction (WMF) and bulk densities of the T1 planets can provide insight into their formation history.
Measurements from transit-timing variations and dynamical modeling helped constrain the bulk densities of the planets <cit.>. These studies showed that the planets have similar densities and are consistent with rocky worlds with water mass fractions (WMF) <20%, suggesting that all the planets formed in a similar manner and that their primary growth mode was via pebble accretion.
<cit.> proposed that the formation of the T1 system first took place at the ice line where planetesimals formed by the streaming instability. The streaming instability is when solid particles concentrate into dense filaments and undergo gravitational collapse into numerous bound objects. Planetesimals up to ∼100 km may form in this way <cit.>. After the initial planetesimals form, they continue to grow by pebble accretion <cit.>. Each planet then undergoes inward type-I migration and accretes silicate pebbles once it is inside the ice line. In this manner, the planets form sequentially at the ice line. The innermost planets stall near the inner edge of the gas disc, which is set by the star's magnetosphere.
The <cit.> formation theory has been the most widely accepted formation theory for the T1 system and, as a result, it has been extensively tested. <cit.> analytically tested this theory by evolving protoplanets in a gas disc that begin at the ice line and grow their mass via pebble accretion while migrating inwards. Protoplanets sequentially appear at the ice line every protoplanetary appearance timescale of ∼ 10^5 orbital periods. This approach was able to reproduce the multiplicity and resonance structure of the T1 system. However, analytical approaches neglect important effects such as two-body interactions, and the study was not able to reproduce the mass distribution of the T1 system.
<cit.> tested the plausibility of the T1 planets forming sequentially at the ice line by numerically modeling the inward migration of the fully formed T1 planets from the ice line, using the n-body code rebound <cit.> and reboundX <cit.> to model the effects of an evolving gas disc on the planets. <cit.> demonstrated that if the T1 planets were sequentially produced and migrated inwards, the planets naturally converged into a chain of first-order resonances. Modeling a migration barrier in the inner gas-free cavity, where planets were pushed further inwards by an outer Lindblad torque from the gas disc, tidal forces, and orbital damping from gas drag, they were able to reproduce the observed two-body and three-body resonances of the system. However, this mechanism does not explain the composition of the planets, since beginning with fully formed planets at the ice line may imply a water content much larger than what is observed.
<cit.> numerically modeled the formation of the T1 system starting with results from a Lagrangian dust evolution code that modeled the formation of planetesimals via the streaming instability <cit.>. They tracked the growth of planetesimals into planets via pebble accretion and planetesimal accretion using a modified version of the n-body code mercury <cit.> which included pebble accretion and the gas effects of type-I migration and aerodynamic drag. Once the planets migrated to shorter orbits, they modeled the final stage of mass evolution semi-analytically in order to reduce computation time. This approach reproduced the general mass distribution of the T1 planets, but not the orbital architecture as the last stages were not modeled numerically.
<cit.> also successfully reproduced the masses of the T1 planets by numerically modeling the growth of embryos in a gas disc that loses mass from photoevaporation and disc winds in addition to gas accretion onto the central star. The more complex temporal evolution of this gas disc results in an initially fast and then later, when the surface density profile flattens out, slow migration of the embryos. They found that this fast-then-slow migration resulted in systems that display the reversed mass-ranking trend found in T1. However, their simulations began with relatively large embryos that had already reached the pebble isolation mass, distributed between 0.015–0.2 au, after the disc had evolved for 1 Kyr.
Although previous n-body studies have reproduced the broader features of the T1 system, it has proven difficult to reproduce the planet densities and masses, and the orbital architecture of the system when starting from an early stage in planet formation. In this paper, we use a suite of numerical tools to constrain the bulk compositions of the planets by following their formation process starting just after Moon-sized bodies form and up until the gas disc dissipates, while reproducing the observed planet densities. Detailed modeling of this period allows us to follow the composition of the planets throughout their formation process. While cometary impactors can build or destroy the atmospheres of the T1 planets at a later time <cit.>, and other late-stage mechanisms can alter the surface properties of the planets, we provide constraints on the bulk compositions of the T1 planets from their formation.
We improve upon previous work by using a disc model that changes in time and has not yet been used to study the T1 system, resolving solid body collisions in a more realistic manner (i.e. fragmentation), and by using the most up-to-date prescriptions for pebble accretion. Furthermore, we provide the abundances of the refractory elements of the T1 planets for the first time and use these results to probe the interior structure of the planets using a new planetary interior structure code.
We present a new module for reboundX that tracks pebble accretion growth, type-I migration, and eccentricity and inclination damping from gas drag. Our simulations also model the fragmentation of solid bodies involved in collisions <cit.>. We place tight constraints on the composition of the bodies by simultaneously modeling the composition of the accreted dust pebbles as a function of location and time. The composition of the dust is determined by a dust condensation code by <cit.> that tracks how dust condenses out of an evolving protoplanetary disc as it cools in time. We use our final constraints on bulk composition to calculate the planet radius and density using the planetary interior structure code Magrathea <cit.>.
In Section <ref> we describe our evolving gas disc and our prescriptions for pebble accretion, type-I migration, and eccentricity and inclination damping. We also describe the model we use to track the composition evolution of the bodies. In Section <ref> we describe our n-body setup. In Section <ref> and Section <ref> we lay out our results and in Section <ref> we discuss the implications of these results and caveats of our models. Lastly, we summarise our results in Section <ref>.
§ MODELS
In this section we discuss our models for disc evolution, mass growth via pebble accretion, and the effects of gas on the dynamics of the solid bodies. We include gas effects throughout the duration of the simulation because the T1 planets are thought to have formed in less than a few Myr, a timescale shorter than the disc lifetime <cit.>. However, the effects from the gas disc (i.e. pebble accretion, type-I migration, eccentricity and inclination damping) are turned off after a body moves inwards past 0.02 au because the disc is thought to be truncated by the magnetosphere of the star <cit.>. We implement these prescriptions into a new module for reboundX <cit.> that works in tandem with the n-body integrator rebound <cit.>. We also describe how we track the evolution of the composition of the planets throughout the planet formation process, which is done as a post-processing step.
§.§ Disc Evolution
Our disc evolution model is based on the accretion and evolution of T Tauri discs from <cit.>. We adopt parameter values that are most fitting for the T1 system. The surface density for a gas disc with mass M_ d follows
Σ(r,t_1) = M_d/(2 π R_1^2) × 1/[(r/R_1) t_1^3/2] × e^[-(r/R_1)/t_1]
with orbital distance r from the star and a radius scale of R_1. R_1 is the radius within which ∼0.6 of the disc mass resides initially. We set R_1=500 au and the initial disc mass M_ d=0.03 M_⋆. Observations of CO line emissions suggests gas discs are between 100-1000 au in size (see Figure 4 of <cit.>). We choose an intermediate value that results in reasonable migration timescales of our starting solid bodies. t_1 is a dimensionless time defined as
t_1=t/t_ s+1
at time t. The viscous timescale for the gas disc is
t_s = R_1^2/(3 ν_1),
where ν_1 = ν(R_1) is the kinematic viscosity at r = R_1.
Equation (<ref>) assumes that the disc is vertically isothermal and has a radial temperature profile of T(r) ∝ r^-1/2. As in the minimum mass solar nebula <cit.>, we adopt the temperature profile of
T(r)=280(r/ au)^-1/2(L_⋆/L_⊙)^1/4 K
where L_⋆ is the luminosity of the star. For consistency with previous studies of the T1 system, we place the ice line at r_ ice=0.1 au. Assuming the ice line corresponds to where the temperature is 170 K, Equation (<ref>) leads to a stellar luminosity of L_⋆≈ 1.5 × 10^-3 L_⊙. This luminosity is three times larger than the current luminosity of the T1 star, and therefore our model considers a much earlier time in the life of the star. While the location of the ice line will change in time we opt to use a fixed location of the ice line for simplicity.
The viscosity of the gas disc is prescribed by ν=α c_ s H, where α is a dimensionless constant, c_s the speed of sound, and H the vertical scale height of the gas <cit.>.
The speed of sound is given by
c_ s=√(k_ B T(r)/μ m_ H)
where k_ B is the Boltzmann constant, μ = 2.34 is the mean molecular weight of the gas, and m_H is the hydrogen mass. In turn, the vertical scale height of the gas is given by
H = c_ s/Ω,
where Ω=√(G M_⋆/r^3) is the Keplerian angular frequency. We adopt α=1 × 10^-4, and along with R_1, Equation (<ref>) sets the timescale for the viscous evolution of the gas disc.
Finally, we assume that the column density of the pebbles is determined by a constant dust-to-gas ratio, that is,
Σ_ p(r)=0.01f_ pΣ(r)
where f_ p is a constant scale factor. We adopt a dust-to-gas ratio of 1% and hence f_p = 1.
§.§ Pebble accretion
Mass growth of a planetesimal via pebble accretion is separated into either the Bondi regime or the Hill regime, depending on the size of the pebbles and the mass of the central planetesimal <cit.>. Pebbles move from the Bondi regime to the Hill regime once the radius of the central planetesimal reaches the transition radius,
R_ t(r)=1160 km ( r/5 au)^1/2 ( Δ v/30 m s^-1 ) ( ρ_ pl/2×10^3 kg m^-3 )^-1/3
where ρ_ pl is the bulk density of the planetesimal and Δ v can be approximated by the sub-Keplerian speed of the gas when the pebbles are at least marginally coupled to the gas <cit.>. In our models, Δ v is then
Δ v = -(1/2) (H/r) (∂ln P/∂ln r) c_s ≈ (11/8) c_s^2/v_K = 29.9 m s^-1,
where P is the pressure in the mid-plane of the disc and v_ K= √(GM_⋆/r) is the Keplerian velocity, and we have used in the last two steps the profiles in Section <ref>.
In the 3D-Bondi branch, mass growth from pebble accretion proceeds as
Ṁ = 8.4 × 10^-3 M_⊕ Myr^-1 f_p (m_pl/10^-4 M_⊕)^2 (Δ v/30 m s^-1)^-3 [(H_p/H)/0.1]^-1 [(H/r)/0.05]^-1 (r/5 au)^-2
Once the radius of the planetesimal reaches the transition radius, mass growth proceeds in the 2D-Hill branch as
Ṁ = 2 R_acc Σ_p (Δ v + Ω R_acc),
where R_ acc is the radius from which the planetesimal accretes pebbles.
The accretion radius can be approximated by
R_acc = (Ω τ_f/0.1)^1/3 R_H exp[-0.4 (τ_f/t_p)^0.65],
where R_H = (m_pl/3M_⋆)^1/3 r is the Hill radius, t_p = G m_pl/(Δ v + Ω R_H)^3 is the characteristic timescale for pebbles to pass by the planetesimal, and τ_f is the friction time <cit.>.
Mass growth continues on the 2D-Hill branch until the planetesimal reaches the isolation mass when the planetesimal is large enough to induce a pressure bump exterior to its orbit which halts the incoming flux of pebbles. We adopt the pebble isolation mass (PIM) from <cit.>,
PIM = (H/r)^3 √(37.3 α + 0.01) × [1 + 0.2 (√(α)/(H/r) √(1/τ_s^2 + 4))^0.7] M_⋆,
where τ_s≡Ωτ_f is the dimensionless stopping time. The dimensionless stopping time τ_s of the pebbles depends on their location and composition <cit.>. We use τ_ s=0.1 at or exterior to the ice line, approximating the radial drift barrier, and τ_ s=0.001 interior to the ice line which roughly corresponds to the fragmentation barrier <cit.>. We do not model a change in the pebble surface density profile at the ice line. The assumption of a constant pebble flux is not trivial as planetesimals that form near the ice line will reduce the pebble flux in the inner disk <cit.>.
Once a given body reaches the PIM, all other bodies interior to it also stop growing by pebble accretion. When the bodies accrete pebbles we do not decrease the pebble density in the local region since the pebble accretion rate does not appear to be comparable, let alone exceed, the radial pebble flux often needed in modeling <cit.>. On the other hand, we do track the total pebble mass. All bodies stop growing by pebble accretion once the total dust mass has been reached.
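The growth rates and isolation mass above can be sketched as follows; this is my own reading of the expressions quoted in this section, not the authors' reboundX code, and H_p/H = 0.1 is assumed for the pebble scale-height ratio.

```python
import numpy as np

G = 6.674e-11  # SI

def mdot_bondi(m_pl_earth, r_au, H_over_r, dv=29.9, f_p=1.0, Hp_over_H=0.1):
    """3D-Bondi branch: m_pl in Earth masses, returns Earth masses per Myr."""
    return (8.4e-3 * f_p * (m_pl_earth / 1e-4) ** 2 * (dv / 30.0) ** -3
            * (Hp_over_H / 0.1) ** -1 * (H_over_r / 0.05) ** -1
            * (r_au / 5.0) ** -2)

def accretion_radius(m_pl, Mstar, r, Omega, dv, tau_f):
    """2D-Hill branch accretion radius (all quantities in SI units)."""
    R_H = (m_pl / (3.0 * Mstar)) ** (1.0 / 3.0) * r
    t_p = G * m_pl / (dv + Omega * R_H) ** 3
    return (Omega * tau_f / 0.1) ** (1.0 / 3.0) * R_H * np.exp(-0.4 * (tau_f / t_p) ** 0.65)

def mdot_hill(R_acc, sigma_p, dv, Omega):
    """2D-Hill branch growth rate (SI): 2 R_acc Sigma_p (dv + Omega R_acc)."""
    return 2.0 * R_acc * sigma_p * (dv + Omega * R_acc)

def pebble_isolation_mass(H_over_r, Mstar, alpha=1e-4, tau_s=0.001):
    """Pebble isolation mass; tau_s = 0.001 interior to the ice line, 0.1 outside."""
    return (H_over_r ** 3 * np.sqrt(37.3 * alpha + 0.01)
            * (1.0 + 0.2 * (np.sqrt(alpha) / H_over_r
                            * np.sqrt(1.0 / tau_s ** 2 + 4.0)) ** 0.7) * Mstar)
```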
§.§ Type-I migration and gas drag
Angular momentum exchange via spiral density waves cause the planetesimals to migrate inwards via type-I migration, and gas drag dampens the eccentricity and inclination of a planetesimal. <cit.> empirically derived expressions for the accelerations a body experiences in a gas disc, which can be implemented into n-body code. The accelerations a body experiences from type-I migration, eccentricity damping, and inclination damping are
a_ m=-v/t_ m,
a_e = -2 (v·r) r/(r^2 t_e),
a_i = -(v_z/t_i) k,
respectively. k is the unit vector in the z-direction and v and r are the velocity and position vectors of the body. The timescales associated with each of these accelerations are scaled by the damping timescale
t_wave = (M_⋆/m_pl) [M_⋆/(Σ(r) r^2)] (H/r)^4 Ω^-1,
from <cit.>. The eccentricity damping time is
t_e = (t_wave/0.780) × [1 - 0.14 (e/(H/r))^2 + 0.06 (e/(H/r))^3 + 0.18 (e/(H/r)) (i/(H/r))^2],
and the inclination damping time is
t_i = (t_wave/0.544) × [1 - 0.30 (i/(H/r))^2 + 0.24 (i/(H/r))^3 + 0.14 (i/(H/r)) (e/(H/r))^2].
The type-I migration timescale is
t_m = [2 t_wave/(2.7 + 1.1 β)] (H/r)^-2
× ( P(e) + [P(e)/|P(e)|] [0.07 (i/(H/r)) + 0.085 (i/(H/r))^4 - 0.08 (e/(H/r)) (i/(H/r))^2] )
where
P(e) = {1 + (e/[2.25(H/r)])^1.2 + (e/[2.84(H/r)])^6} / {1 - (e/[2.02(H/r)])^4},
and Σ(r) ∝ r^-β. Following our surface density profile adopted in Section <ref>, we set β=1.
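The damping and migration timescales above translate directly into code; the sketch below follows the expressions as quoted in this section (with β = 1) and is not the actual reboundX module used in the simulations.

```python
import numpy as np

def t_wave(Mstar, m_pl, sigma, r, h, Omega):
    """Damping timescale; h = H/r, sigma = gas surface density at r (SI)."""
    return (Mstar / m_pl) * (Mstar / (sigma * r**2)) * h**4 / Omega

def t_ecc(tw, e, i, h):
    eh, ih = e / h, i / h
    return (tw / 0.780) * (1 - 0.14 * eh**2 + 0.06 * eh**3 + 0.18 * eh * ih**2)

def t_inc(tw, e, i, h):
    eh, ih = e / h, i / h
    return (tw / 0.544) * (1 - 0.30 * ih**2 + 0.24 * ih**3 + 0.14 * ih * eh**2)

def P_e(e, h):
    return (1 + (e / (2.25 * h))**1.2 + (e / (2.84 * h))**6) / (1 - (e / (2.02 * h))**4)

def t_mig(tw, e, i, h, beta=1.0):
    eh, ih = e / h, i / h
    P = P_e(e, h)
    corr = P + (P / abs(P)) * (0.07 * ih + 0.085 * ih**4 - 0.08 * eh * ih**2)
    return 2.0 * tw / (2.7 + 1.1 * beta) / h**2 * corr
```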
Dynamical studies have shown that a rapid migration of the fully formed planets is needed to break out of various three-body mean motion resonances (MMRs) before arriving in their current resonant chain. Because these fast migration rates are needed to reproduce the resonant structures, an efficient stalling mechanism may have been present in the inner region of the disc to prevent the planets from falling into the star. Rapid migration timescales of the fully formed T1 planets can naturally explain first-order MMRs in the system <cit.>, but the inner two planets are observed to be in higher-order MMRs, which indicates divergent migration in the inner disc. <cit.> demonstrated that divergent migration can happen close to the star from magnetospheric rebound. <cit.> were able to reproduce the T1 resonant structure by modeling a strong negative torque in the inner cavity, although this divergent torque was not physically motivated. These studies considered the dynamics of the fully formed T1 planets, when less gas is present. Migration timescales throughout the formation process are likely much shorter when more gas is present, but scattering and resonances may help reduce the migration rates throughout the formation process.
To reproduce the proposed stalling mechanism thought to exist in the T1 system, we use the “inner_disc_edge” module in reboundX by Kajtazi et al. (in prep.). This module applies an inner disc edge that functions as a planet trap by applying an opposite and roughly equal-magnitude torque on a migrating body that enters the planet trap. We do not allow bodies to migrate past the orbit of the innermost T1 planet, ∼0.01 au, by setting the inner disc edge to 0.01 au and the width of the planet trap to 0.01 au. The region in which this planet trap is employed is between 0.01-0.02 au. All our parameter choices for our fiducial disc evolution model are listed in Table <ref>.
§.§ Composition evolution
We use the dust condensation code by <cit.>, which models how dust condenses out of an evolving protoplanetary disc as the disc cools. The dust condensation code gives the initial elemental and mineral distributions of the protoplanetary disc that determines the composition of embryos that form at different orbital distances. We then follow the composition evolution of the embryos as they collide with one another and grow via pebble accretion to form planets. The formation location and collision history of the embryos determines the resulting composition of the planets. The final Fe/Si molar ratio is used in Magrathea <cit.> to determine the mass fraction of the planet's iron core and the planet's radius.
The dust condensation model is run independently of the evolution models discussed in previous sections. We use the dust condensation model for a solar-type star as presented in <cit.> as this is the system the code was developed for. We encountered difficulties converting the surface density profile to the one shown in Equation <ref> and using disc parameters more compatible with an M-dwarf system. We use the solar abundances in the condensation code since the measured metallicity of T1 is similar to the Sun's <cit.>. For these reasons, we use the dust composition profile from a Sun-like star at a single epoch and re-scale the results to fit our T1 disc. Successfully modeling dust condensation in an M-dwarf disc and comparing results with the re-scaled Sun-like disc, used here, will be the subject of future work.
Following the same solar model of <cit.>, the surface density profile for the dust condensation evolution is
Σ(r,t) = [M_disc/(π r_0^2(t))] exp{-[r/r_0(t)]^2},
where M_disc = 0.21 M_⊙ and r_0(t) is the characteristic disc radius. This disc mass corresponds to a disc around a Sun-like star immediately after formation. High accretion rates onto the star quickly deplete the disc mass and, after one evolution timescale (∼26 Kyr), the disc mass is less than 20% of its initial value (see <cit.> Figure 2).
The temperature profile is
T^4 = 3 G τ M_* Ṁ/(64 π σ_SB r^3),
where Ṁ is the accretion rate of gas, σ_SB is the Stefan-Boltzmann constant, the optical depth is τ=κΣ/2, and κ=4 cm^2 g^-1 is the opacity for a solar nebula, the same used in <cit.>. This temperature profile is for a disc dominated by viscous heating <cit.>. The disc is not in a steady state and the mass accretion rate changes in time (see <cit.> Equation 8).
Finally, we re-scale the abundance distribution from the solar model to fit the size of our T1 disc by normalizing the locations of the ice line between the two discs. The re-scaled dust distributions for 12 elements are shown in Figure <ref>. For more details on how the dust condensation results are re-scaled and the validity of extrapolating results from the solar system, see Appendix <ref>.
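The re-scaling step can be sketched as below; this is an assumption about how the radial grid is mapped (a simple linear rescaling that matches the two ice lines), not the actual code, and the solar-model ice-line location used in the usage note is an illustrative value.

```python
import numpy as np

def rescale_radii(r_solar, r_ice_solar, r_ice_t1=0.1):
    """Map solar-disc radii (au) onto the T1 disc by normalising the ice lines."""
    return np.asarray(r_solar) * (r_ice_t1 / r_ice_solar)

# Usage: dust abundances tabulated at radii r_solar are re-assigned to
# rescale_radii(r_solar, 2.7) if the solar-model ice line sits at ~2.7 au.
```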
The chemical equilibrium of condensing dust is modeled with GRAINS <cit.> which includes 33 different elements that form 520 condensed and 242 gaseous species. The combined disc evolution and chemical equilibrium code returns the relative abundance of the elements and condensed species as a function of orbital distance in the disc, at different times. Further details on the dust evolution model can be found in <cit.>.
Using this code, we determine the dust composition of our disc at any orbital distance to track the composition evolution of the solid bodies. Our composition tracking code for the solid bodies is based on the composition tracking code of <cit.> and includes composition changes from pebble accretion. The bodies all begin just exterior to the ice-line and we experiment with different refractory element:water-ice ratios. The refractory elements for a given embryo are determined by the dust composition at the embryo's orbital distance, as determined by the condensation code.
When two bodies collide, the target is the more massive one involved in the collision, with mass M_ t and initial composition
X = (x_1,x_2,...,x_n)
where x_i is the relative abundance of the i^ th species such that
∑_i=1^n x_i =1.
The projectile is the less massive involved in the collision, with mass M_ p and initial composition
Y = (y_1,y_2,...,y_n)
where y_i is the relative abundance of the i^ th species such that
∑_i=1^n y_i =1.
If the collision results in an elastic bounce with no mass exchange, then the composition of each body remains the same. If the collision results in a merger or partial accretion of the projectile, the composition of the target becomes
𝐗' = (M_t 𝐗 + M_p' 𝐘)/(M_t + M_p').
where M_ p' is the mass of the projectile that is accreted by the target. If any fragments are produced from the projectile, they are assigned the composition of the projectile.
If the target becomes eroded, then the composition of the target remains the same but it has a smaller mass M_lr. The composition of the new fragment(s) becomes
𝐗' = (M_diff 𝐗 + M_p 𝐘)/(M_tot - M_lr).
where M_ tot=M_ t + M_ p and M_ diff=M_ t -M_ lr.
We chronologically resolve all the collisions, using the prescription described above, and update the compositions of all the bodies according to the amount and location of the pebbles the body accumulated over the last 100 years. The amount of pebbles a body has accumulated over 100 years, M_ peb, is the mass difference between the body at time t and time t+100 years, after all collisions have been accounted for. The relative abundances of the pebbles is given by,
𝐘_ peb = (y_ peb, 1,y_ peb, 2,...,y_ peb, n),
where y_ peb, i is the relative abundance of the i^th species for the pebble and
∑_i=1^n y_ peb,i =1.
𝐘_ 𝐩𝐞𝐛 is determined by the radial location of the pebble at time t and the output from the dust condensation code. The new composition of a body, which we refer to as the target, from pebble accretion is set by
𝐗' = (M_t 𝐗 + M_peb 𝐘_peb)/(M_t + M_peb).
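The mass-weighted mixing rules above reduce to a single operation; the sketch below is based on the equations in this section and is not the authors' composition-tracking code.

```python
import numpy as np

def mix(m1, x1, m2, x2):
    """Mass-weighted mixture of two composition vectors (each sums to 1)."""
    x_new = (m1 * np.asarray(x1) + m2 * np.asarray(x2)) / (m1 + m2)
    return x_new / x_new.sum()          # guard against round-off

def accrete(M_t, X, M_p_acc, Y):
    """Target (M_t, X) accretes a mass M_p_acc of projectile material with composition Y."""
    return M_t + M_p_acc, mix(M_t, X, M_p_acc, Y)

def add_pebbles(M_t, X, M_peb, Y_peb):
    """Update after 100 yr of pebble accretion with local pebble composition Y_peb."""
    return M_t + M_peb, mix(M_t, X, M_peb, Y_peb)
```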
§ N-BODY SETUP
In our n-body simulations we begin with 30 embryos of 0.01 M_⊕ each, exterior to the ice line. The number of embryos is chosen so that the total initial embryo mass is 5% of the mass of the T1 planetary system. We distribute the embryos in an annulus just exterior to the ice line, between 0.1-0.15 au. The embryos follow a different surface density profile than the gas because they preferentially form at the ice line and, once formed, these large solid bodies decouple from the gas disc to some degree. As a result, we choose for the embryos a surface density profile of Σ_pl ∝ r^-3/2, which is a commonly used profile for the starting bodies in studies of the solar system <cit.>.
We adopt a density of 1.5 g cm^-3 for our embryos, which is consistent with 50% ice and 50% rock. We also experiment with multiple initial WMFs to find values that result in planet radii matching the observations. We tested initial WMFs of 15%, 20%, and 25% and find that an initial WMF of 20% for the planetesimals results in planet radii that are in better agreement with the observed T1 planet radii (see Section <ref>). As a result, we report the results for two different initial compositions: the starting composition of an embryo is either 50% water-ice and 50% of the dust composition found at the embryo's initial radial location or, alternatively, 20% water-ice and 80% of the dust composition found at the embryo's initial radial location.
All of the orbital elements for each body are chosen randomly and follow a uniform distribution. The eccentricities (e) are distributed between 0.0-0.1 and the inclinations (i) between 0.0^∘-0.5^∘. Because our model assumes planetesimals form in a narrow annulus just exterior to the ice line, motivated by the results of <cit.> and <cit.>, we use a relatively large value for the eccentricity (see Section <ref> for more details on this point). The longitude of ascending node (Ω), argument of pericenter (ω), and mean anomaly (f) are all distributed between 0^∘-360^∘.
Using the n-body code rebound and the reboundX module described above, we integrate 100 runs with the mercurius hybrid integrator. We change the random seed in each run to vary the orbital elements of the particle disc. We integrate each run for 3 Myr.
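A minimal rebound/reboundx setup along these lines is sketched below. It reproduces only the initial conditions and integrator choice described in this section; the pebble accretion, migration, damping, and fragmentation physics live in our custom reboundX module and are omitted, and the stellar mass and time step used here are illustrative assumptions.

```python
import numpy as np
import rebound
import reboundx

M_EARTH = 3.003e-6                       # Earth mass in solar masses

def make_sim(seed, n_emb=30, m_emb=0.01 * M_EARTH, a_in=0.1, a_out=0.15):
    """One run: 30 embryos of 0.01 M_earth in an annulus just exterior to
    the ice line, following Sigma_pl ~ r^-3/2."""
    rng = np.random.default_rng(seed)
    sim = rebound.Simulation()
    sim.units = ('yr', 'AU', 'Msun')
    sim.add(m=0.09)                      # TRAPPIST-1-like star (assumed mass)

    # Sigma ~ r^-3/2  =>  dN/dr ~ 2*pi*r*Sigma ~ r^-1/2; sample a by inverting the CDF.
    u = rng.random(n_emb)
    a = (np.sqrt(a_in) + u * (np.sqrt(a_out) - np.sqrt(a_in)))**2
    for ai in a:
        sim.add(m=m_emb, a=ai,
                e=rng.uniform(0.0, 0.1),
                inc=np.radians(rng.uniform(0.0, 0.5)),
                Omega=rng.uniform(0.0, 2 * np.pi),
                omega=rng.uniform(0.0, 2 * np.pi),
                f=rng.uniform(0.0, 2 * np.pi))

    sim.move_to_com()
    sim.integrator = "mercurius"
    sim.dt = 0.05 * np.sqrt(a_in**3 / 0.09)   # ~5% of the innermost orbital period
    rebx = reboundx.Extras(sim)               # custom forces would be attached here
    return sim, rebx

sim, rebx = make_sim(seed=3)
sim.integrate(3e6)                            # 3 Myr
```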
Unless they are inside the inner cavity, which extends out to 0.01 au, all bodies experience growth via pebble accretion (until they reach the PIM), type-I migration, and eccentricity and inclination damping at all times.
Solid bodies are also free to interact with one another, and collisions are resolved with fragmentation. We set the minimum fragment mass to 0.01 M_⊕. The fragmentation model we implement is detailed in <cit.>. A smaller minimum fragment mass would be more realistic; however, this value is chosen because of computational limitations.
§ SYSTEM ARCHITECTURE AND FORMATION HISTORY
Of the 100 runs we conducted, we first focus our analysis on the runs that returned systems with at least six planets, since we are interested in systems similar to T1. We define a planet as a body having a mass greater than or equal to 0.2 M_⊕. We find 24 runs that meet this criterion. Of these 24 runs, nine had six planets, two had seven planets, eight had eight planets, four had nine planets, and one had 11 planets. Interestingly, our model is four times more likely to produce eight planets than seven. The existence of an outer eighth planet in the T1 system has been predicted from an analysis of the three-body resonance angles throughout the system <cit.>. While <cit.> did not find evidence of TTVs from an eighth planet at a limited range of exterior orbital radii, our models produce systems with an eighth planet that has a mass similar to T1-h and is usually found exterior to the ice line.
Table <ref> lists the planet properties for each planet that formed in these 24 runs. We list the simulation run number, the planet multiplicity (No.), and the mass (M_ p), semi-major axis (a_ p), eccentricity (e), inclination (i), the water mass fraction (WMF), and the iron (Fe), magnesium (Mg), silicon (Si), and oxygen (O) mass fractions (along with eight other elements) for each planet, built from bodies that begin with either a 20% or a 50% WMF. Lastly, we list the fraction of the final planet mass that came from pebble accretion (Peb), fragments (Frag), and embryo accretion (Em).
§.§ Mass distributions
Figure <ref> shows the mass (M_ p) and semi-major axis (a_ p) of the planets that formed in the 24 runs. The T1 planets are shown in orange and all the remaining bodies at t=3 Myr in a given run are shown in blue. The size of the dots is proportional to the mass of the planet. The PIM is marked by a black line and the ice line by a vertical blue line for reference. The average total planetary mass of our simulated systems is 5.64 M_⊕, while the total mass of the T1 system is 6.45 M_⊕. On average, each run grows its total initial embryo mass of 0.3 M_⊕ by almost 19 times (to ∼ 5.64 M_⊕) through pebble accretion.
In the T1 system, the inner two planets are the most massive, and then a reversed mass ranking is found for planets d-g, where planet mass increases with semi-major axis. In 13 of the 24 runs, the innermost planets are the most massive planets in the system. The bodies that first undergo runaway accretion and accrete the most embryos at the start of the simulation are the first to migrate inwards. As the mass of a body increases, so does its migration rate, and the body migrates inwards until it either reaches the inner cavity or is trapped by resonances with inner planets. Since there are no resonances to halt the inward migration of the first protoplanets, this results in a build up of material at the inner edge. This build up of material leads to collisions and accretion that eventually build the more massive inner planets.
The innermost planets typically accrete more embryos, which allows them to grow more massive than the PIM in the inner disc region. This finding is in agreement with <cit.>, although <cit.> explored a formation pathway more akin to in-situ planet formation, whereas we start with smaller bodies only exterior to the ice line. The bodies that avoid mergers at the start of the simulation migrate inwards at a later time, when there are fewer planetesimals and embryos available to accrete, and thus grow the majority of their mass via pebble accretion. These subsequent planets grow quickly to the PIM once they cross the ice line. As a result, most of the outer planet masses are near the PIM, which increases with distance up to the ice line and can explain the reversed mass ranking of the T1 d-g planets.
The small size of the outermost planet is achieved when the planet forms exterior to the ice line, where the PIM is much lower due to a larger value for τ_ s. While this formation mechanism may explain the small size of T1-h, it is not clear if a planet that grows most of its mass exterior to the ice line is consistent with the observed density of T1-h. In 10 of the 24 runs, the outermost planet is the smallest (or close to the smallest) but is found exterior to the ice line. This places our T1-h analogues at larger semi-major axes and results in planets with larger amounts of water than what is expected from the observed bulk density of T1-h (see Section <ref>).
§.§ Period distributions
Figure <ref> shows the period ratios between adjacent planets in each of our 24 runs, along with the period ratios of the T1 planets. The first-order 3:2 MMR is found in all 24 runs, 13 of the runs contain the first-order 4:3 MMR, and nine of the runs contain the second-order 5:3 MMR. While the stronger resonances of the T1 system may be found in most of our runs, we do not find the 8:5 MMR of the innermost planets observed in the T1 system in any of our 24 runs. We attribute this to the simplified treatment of the inner disc cavity and the lack of tidal effects. However, in place of an 8:5 MMR, a 2:1 MMR is found in all but four of the runs, and some planets may also be found in 5:4 and 6:5 MMRs.
<cit.> demonstrated that once a more accurate treatment of the inner disc region is modeled, by incorporating the effects of an expanding gas-free cavity and the dynamics of the planets in this cavity, the fully formed inner two planets may break out of first order resonances and migrate into the observed 8:5 and 5:3 resonances. Similarly, <cit.> better recovered the observed resonances of the T1 system due to their use of a more complex disc evolution which resulted in more dynamic migration rates of the planets.
Three-body Laplace resonances may also be found throughout the T1 system. These resonances contribute to the long term stability of the system <cit.>. The generalized three-body Laplace relation (GLR) angle is given by,
ϕ_i,i+1,i+2=p λ_i - (p+q) λ_i+1+q λ_i+2,
where λ_i is the mean longitude of the ith planet, and p and q are integers. The GLR is considered stable if the angle ϕ librates about 180^∘. We consider the five main GLR angles observed in the T1 system (see <cit.> for a review of these angles) in all of our runs that contain at least seven planets. We do not find any of these five GLR resonances over the last 1 Myr of simulation time in any of the runs. Again, this could be attributed to the assumed evolution of the gas disc <cit.>, the assumed evolution of the disc's inner cavity <cit.>, and/or the lack of tidal effects <cit.>.
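As an illustration of how this check can be performed on simulation output, the sketch below evaluates a generalized three-body Laplace angle from the mean longitudes of three adjacent planets and applies a simple libration test about 180^∘; the amplitude cut and function names are our own choices rather than the analysis code used for this work.

```python
import numpy as np

def glr_angle(lam1, lam2, lam3, p, q):
    """phi = p*lam1 - (p+q)*lam2 + q*lam3, with mean longitudes in degrees,
    wrapped onto [0, 360)."""
    return (p * lam1 - (p + q) * lam2 + q * lam3) % 360.0

def librates_about_180(phi, max_amplitude=150.0):
    """Crude test: phi never strays more than max_amplitude degrees from 180.
    A circulating angle eventually visits all values and fails this cut."""
    return np.max(np.abs(phi - 180.0)) < max_amplitude

# lam1, lam2, lam3: mean longitudes (deg) of three adjacent planets sampled
# over the last 1 Myr of a run, e.g. read from a rebound SimulationArchive.
lam1, lam2, lam3 = np.random.uniform(0.0, 360.0, size=(3, 2000))  # placeholder data
phi = glr_angle(lam1, lam2, lam3, p=2, q=3)
print(librates_about_180(phi))
```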
§.§ Eccentricity and inclination distributions
Pebble accretion efficiency increases with the eccentricity of the accreting body until the body moves faster than the pebbles <cit.>. Thus, the eccentricity evolution of the bodies can have significant effects on the final planetary system. Figure <ref> shows the e and i evolution for all the bodies in the 24 runs. The black dashed lines mark the initial values for the starting bodies of e=0.1 and i=0.5^∘. The color corresponds to the mass of the body. While our starting eccentricity is larger than what is commonly used in n-body models, it is worth noting that the commonly adopted smaller value of e=0.01 comes from studies of the solar system, where bodies are distributed across the whole range of the starting disc <cit.>. In contrast, in our formation channel all bodies start in a narrow annulus just exterior to the ice line, where the number density of bodies is relatively large. <cit.> modeled planetesimal formation just exterior to the ice line and found that, shortly after formation, the bodies experience body-body scattering, which increases the eccentricities of the bodies exterior to the ice line <cit.>. Similar excitation of the eccentricity was also found when planetesimals form in narrow axisymmetric dust filaments driven by the streaming instability <cit.>. Motivated by these findings, we choose to initialize our starting bodies with a larger e than what is commonly used.
From Figure <ref> we can see that the bodies in our simulations also experience a high degree of scattering at the start of the simulation, which increases both the eccentricity and inclination of the bodies. At 50 Kyr the average eccentricity and inclination are e=0.15 and i=6.1^∘, both larger than our initial values. Later, the orbits become damped as the bodies grow in mass through pebble accretion and the larger bodies interact with the gas. At 500 Kyr, the average eccentricity and inclination are e=0.03 and i=0.6^∘. Once bodies begin to reach planet size, interactions with the planets can re-excite orbits. The bodies at the end of our simulations have average values of e=0.07 and i=0.04^∘, and the more massive bodies have less excited orbits than the smaller bodies (see Table <ref> for the eccentricities and inclinations of the final planets).
§.§ Collision history and formation timescales
The collision history is a result of the stochastic behavior of the n-body system. Because we randomise the spatial distribution of the starting planetesimals in each run, we also expect each run to have a different collision history. We consider the time of the last collision as a proxy for the planet formation timescale. The right panel of Figure <ref> is a histogram of the times of every collision that took place in our 24 runs. The earliest time of the last collision is t∼ 1.5 Kyr in Run38, while Run12 had its last collision at t ∼ 2.5 Myr. The pileup of collisions at t ∼ 2.5 Myr is entirely from Run12. When considering all 100 runs, Run38 still has the earliest time for the last collision, but Run22 had a collision just before t∼ 3 Myr. In all 100 runs, pebble accretion continues after the last collision. In three runs, pebble accretion is still happening just before 3 Myr; the rest of the runs stop pebble accretion before this time, as all the bodies have either reached the PIM or are found interior to a body that has reached the PIM.
Run38 reached its final configuration the earliest, experiencing its last collision and ceasing pebble accretion before 1 Myr. While we did not extend our runs into the gas-free phase, it is possible that these systems undergo further evolution during and after gas disc dissipation. This is expected in systems that do not have stable resonances throughout the system prior to gas disc dissipation. However, the orbital architecture in each of our 24 runs is dominated by strong first-order MMRs.
The left panel of Figure <ref> shows a histogram of the total number of fragments produced in each of our 24 systems most similar to T1. Run54 experiences the most fragmentation and produces a total of 371 fragments. This run experiences a super-catastrophic collision around 1.1 Myr, a relatively late time when the colliding bodies have grown more massive, which results in 339 fragments. But we view this collision history as atypical and statistically insignificant. Ten of the runs produce 15 or less fragments. Run55 and Run100 experience the least fragmentation and produce only five fragments. Relative to the fragmentation that may occur in the solar system, our simulations experience little fragmentation <cit.>. However, as discussed below, we find that fragmentation has a significant effect on the multiplicity of the planetary system when fragments are able to grow their mass through pebble accretion.
Fragmentation not only affects the final system architecture and the physical properties of each individual planet, but it can also affect the multiplicity of the system. In all but one of the 24 runs, at least one planet was seeded by a fragment, that is, a fragment produced from a collision grew its mass by pebble accretion and accreted smaller bodies to become a planet. On average, three planets in each run are seeded by a fragment. In Table <ref> the last three columns show the percentage of the final planet mass that came from pebble accretion, the accretion of fragments, and the accretion of other planetesimals. We note that in all but three runs the outermost planet is seeded by fragments and grew most of its mass by pebble accretion.
We further test the effects of fragmentation by performing 100 identical runs where we only resolve collisions with perfect merging. We find this results in systems with lower multiplicities. Five out of the 100 runs with perfect merging returned six planets, and the rest of the runs resulted in fewer than six planets. The mass distribution of the planets with and without fragmentation are similar. This finding suggests that collisional fragmentation is an important process for systems with high terrestrial planet multiplicities.
§ COMPOSITION AND PLANETARY INTERIOR STRUCTURE
The T1 planets have observed bulk densities that are consistent with rocky worlds and water mass fractions (WMFs) of less than 20% <cit.>. There are degeneracies between the assumed interior structure and the observed bulk density that result in different WMF predictions for different interior models. <cit.> model the T1 planets with a coupled atmosphere-interior model and find WMFs for the outer four planets of 9-12%. Their predicted WMFs at a given CMF are 1σ higher than those reported in <cit.>, as they use a less compressible water equation of state. They also find that T1-d could have a condensed water layer rather than the water vapor atmosphere assumed in <cit.>.
In this section, we discuss the WMFs along with the elemental compositions of the T1 analogs found in our planet formation models and their implications for the structure of the T1 planets.
Figure <ref> shows the output of our condensation code for the 12 main elements we focus our analysis on: O, Fe, Si, Mg, Al, Ca, Ni, Na, Cr, Mn, Co, and P. We choose these elements so that we may make a direct comparison to Earth's compositions as deduced by <cit.>. These data are taken from our dust condensation code at five times the characteristic evolution timescale used in <cit.>, a total of t∼ 130 Kyr, and used to initialize the composition of the bodies at the start of our n-body simulations. The total time of t∼ 130 Kyr corresponds to the dust condensation simulations that best match the relative elemental abundances of the solar system's terrestrial planets and the CM, CO, and CV chondrites <cit.>. Dust condensation is thought to be complete before planetesimal formation, so this stage is assumed to take place prior to our n-body simulations. We use these data, along with the assumption that a fraction of the solid material at and exterior to the ice line is water-ice, to set the starting compositions of our bodies and for tracking composition change from pebble accretion. The dust condensation code does not follow the evolution of the ices, so we must make assumptions about the initial water-ice fraction of the starting bodies.
Table <ref> lists the final WMF and elemental abundances for each planet in our 24 runs after running our composition tracking code. We report values for bodies initialised with both a 20% and a 50% WMF. We find large variability in all of the elements from planet to planet and run to run. Such variation highlights the sensitivity of the final planet composition to the formation process. To make a more direct comparison of our simulated planets to the T1 planets, all of the simulated planets are binned into seven semi-major axis bins with the same number of planets in each bin. Figure <ref> shows the average and ± 1 σ of the final wt% of water and elements for our seven binned T1 analogs with both initial WMFs. We show Earth's values in black <cit.>.
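The binning is a simple equal-count split in semi-major axis; a short sketch is given below, where the planet arrays and the quantity being averaged are placeholders.

```python
import numpy as np

def bin_equal_counts(a, values, n_bins=7):
    """Sort planets by semi-major axis, split them into n_bins groups with
    (nearly) equal numbers of planets, and return the mean and standard
    deviation of `values` in each group."""
    order = np.argsort(a)
    groups = np.array_split(order, n_bins)
    means = np.array([values[g].mean() for g in groups])
    sigmas = np.array([values[g].std() for g in groups])
    return means, sigmas

# Placeholder example: semi-major axes (au) and WMFs of all simulated planets.
a = np.random.uniform(0.01, 0.16, 170)
wmf = np.random.uniform(0.0, 0.5, 170)
wmf_mean, wmf_sigma = bin_equal_counts(a, wmf)
```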
Compared to the Earth, these planets tend to have a reduced fraction of oxygen, iron, and magnesium, but an enhanced WMF. We find that both the inner and the outer planets have higher WMFs, in contrast to the trend of increasing WMF with orbital distance found in <cit.>.
Among them, the outermost planet, T1-h, has the largest WMF, as it resides in the vicinity of, or exterior to, the ice line.
T1-h shows a wide range of WMF and O abundance because the outermost planets can accrete pebbles from either interior or exterior to the ice line, where water and condensed oxygen are either depleted or abundant (see Figure <ref>).
Across all runs, we find large variability in the final WMFs of our simulated planets ranging from less than 1% up to 50 % (see Table <ref>). If a planet is seeded by a fragment and grows via pebble accretion interior to the ice line, it has a low WMF and hence becomes a dry planet. On the other hand, if a planet grows via pebble accretion exterior to the ice line, it has a high WMF and is a water world.
All of the average WMFs of our T1 analogs are in excess of Earth's. The WMF_50 values are not consistent with terrestrial worlds, but with water worlds. Significant volatile loss is thought to take place throughout the planet formation process from impacts, irradiation, and greenhouse effects <cit.>. If we were to model volatile loss, the final WMFs would be lower than what we report here. However, it is unclear how much devolatilisation the T1 planets underwent throughout the formation process. We examine this more in the following section.
We use our average binned values for elemental compositions of T1 analogs to inform planet interior models with the planet structure code Magrathea <cit.>[Magrathea can be accessed at <https://github.com/Huang-CL/Magrathea>]. magrathea assumes a fully differentiated planet with an iron core, silicate mantle, and water liquid or ice hydrosphere. After specifying the mass of each layer, the code solves the equations of hydrostatic equilibrium and returns the radius for the planet. We feed magrathea the seven average binned values for the iron mass fraction for the core, the average binned value of the WMF for the hydrosphere, and place all remaining elements into the mantle. We use the observed masses of the T1 planets, the default equations of states and phase diagrams in magrathea, and the null-albedo equilibrium temperatures as the start of the adiabatic temperature gradient. We present solutions for a three-layer model (core, mantle, and hydrosphere) here and solutions for a two-layer model (core and mantle) in Section <ref>. We use a 300 K surface temperature for planets b and c and do not model the water-vapor atmosphere required by their high equilibrium temperatures if they do indeed have a water surface layer (see <cit.> for an atmosphere model of T1-b and T1-c).
Table <ref> shows the average binned values for the core mass fraction (CMF), mantle mass fraction (MMF), WMF, and the planet radius as determined by magrathea for a three-layer model, for the runs that begin with a 50% and a 20% WMF. We also include the observed radii (R_ O) from <cit.> for comparison. The radii calculated for bodies that begin with a WMF of 50% are much larger than the observed radii of the T1 planets. The radii of the T1 analogs that begin with a WMF of 20% agree much better with observations. As a result, we suggest that either the starting Moon-sized bodies have an initial WMF closer to 20%, or extreme volatile loss takes place throughout the planet formation process (see Section <ref> for a discussion of this).
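For intuition about why a larger WMF inflates the radius, the sketch below estimates a planet radius from a three-layer model with constant layer densities. This is a zeroth-order toy, not Magrathea (which solves hydrostatic equilibrium with realistic equations of state), and the layer densities are rough assumed values.

```python
import numpy as np

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def toy_radius(mass_me, cmf, mmf, wmf,
               rho_core=8000.0, rho_mantle=4000.0, rho_water=1300.0):
    """Radius (in Earth radii) of a planet of mass `mass_me` (Earth masses)
    with core/mantle/water mass fractions, assuming constant layer densities
    in kg m^-3. Neglecting compression overestimates the absolute radii."""
    m = mass_me * M_EARTH
    volume = m * (cmf / rho_core + mmf / rho_mantle + wmf / rho_water)
    return (3.0 * volume / (4.0 * np.pi))**(1.0 / 3.0) / R_EARTH

# A 1 Earth-mass planet: dry (20% core, 80% mantle) versus 20% water by mass.
print(toy_radius(1.0, 0.20, 0.80, 0.00))   # ~1.07 R_earth
print(toy_radius(1.0, 0.16, 0.64, 0.20))   # ~1.22 R_earth
```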
Next, we use the average binned values of iron and silicate from the initial 50% WMF runs to find the WMF needed to match the observed masses and radii of the T1 planets. We use 5000 draws of the correlated masses and radii for each planet found by the <cit.> pipeline[Masses and radii obtained through <https://github.com/ericagol/TRAPPIST1_Spitzer>] rather than fitting distributions to the reported medians and standard errors. Table <ref> shows the Fe/Si molar ratios (the same abundances are used to find the CMF and MMF in Table <ref>) and the WMF results for the outer five planets. We do not include planets b and c in this table as we do not find three-layer solutions that match observations. The Fe/Si molar ratio is similar across all seven planets. The value is higher than, but within 1σ of, the 0.76±0.12 used in <cit.>, which was derived from observed abundances in stars similar to T1.
This shows that the WMFs need to be reduced to about 6%, 4%, 6%, 8%, and 9% for planets d-h, respectively, to obtain the observed radii of the T1 planets. These reductions imply that planets forming from planetesimals with a 50% WMF must lose approximately 75% of their water throughout the formation process in order to match the observed planet densities. The planet analogs that begin with a 20% WMF have Fe/Si molar ratios within 0.1% of those of the initial 50% WMF runs, but have final WMFs mostly within 2σ of the observationally inferred WMFs in Table <ref>.
§.§ Volatile loss
There are multiple mechanisms that can deplete the planets of their volatile inventory. Volatile loss can take place prior to the planet formation process in the nebula as the result of chemical interactions between gas and dust and the formation of chondrules <cit.>. Small pebbles may lose some of their volatile reserves through ablation <cit.>. Later, small planetesimals can lose volatiles as they accrete smaller bodies, which heats the body and leads to differentiation <cit.>. As the bodies grow into larger embryos that exceed the PIM, planetary growth proceeds via core accretion. The larger bodies collide with one another in giant impacts, which leads to even further volatile loss <cit.>. Lastly, after the final terrestrial planets have formed, the planets may lose their atmospheres through photoevaporation from the host star <cit.>, through core-powered mass-loss, where heat from the planet's core is thermally transferred to the planet surface and evaporates the atmosphere <cit.>, or through runaway greenhouse effects <cit.>.
Instead of tracking the multiple ways in which volatiles may be lost throughout (and after) the formation process, we artificially adjust the WMF, while keeping the refractory abundances constant, such that the final WMFs are similar to those inferred from observations, as described above. However, we consider the collision history to get a sense of the frequency of giant impacts the T1 planets experienced.
The planets built from bodies that begin with a 20% WMF have radii and densities similar to the observed T1 planets. Our simulated planets b, c, and e are slightly larger than the observed T1 planets, and so these planets would need to undergo slight volatile loss to reproduce the observed radii. Planets b and c are the largest planets in the system, and they do experience collisions throughout the formation process, which may be a source of volatile loss. Because a 20% WMF reproduces planets with radii similar to the T1 planets without requiring appreciable volatile loss, the starting bodies that formed the T1 planets likely had no less than a 20% WMF on average. However, as noted previously, T1-b and T1-c may have water vapor atmospheres, which would require an even lower water mass fraction <cit.>.
Since our simulations begin after the formation of planetesimals, giant impacts are the main mechanism for removing volatiles at this stage in the planet formation process. <cit.> showed that when the specific energy, Q_ s, of a collision between two bodies exceeds 10^8 J/kg, it can strip an entire ocean of water from a planet that has an atmosphere-to-ocean ratio of 1:300. The same planet can have its entire atmosphere stripped if it is involved in a collision with Q_ s≥ 10^7 J/kg. We track the collisions that the simulated planets were involved in to better understand the extent of volatile loss these planets may have experienced. The top panel of Figure <ref> shows Q_ s versus time for all of the collisions the final planets experienced, and the bottom panel shows Q_ s versus planet orbital radius. The blue horizontal line marks the specific energy needed to remove the atmosphere of a planet that has a 1:300 atmosphere-to-ocean ratio, and the red line marks the specific energy needed to remove an entire ocean from a similar planet.
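A sketch of how each collision can be classified against these thresholds is shown below. It assumes the commonly used reduced-mass specific impact energy, Q_s = mu v_imp^2 / (2 M_tot), which may differ in detail from the definition adopted in the cited work; the example masses and impact speed are illustrative.

```python
OCEAN_EVAPORATING = 1e8        # J/kg, removes an entire ocean (1:300 atmosphere:ocean)
ATMOSPHERE_STRIPPING = 1e7     # J/kg, removes the entire atmosphere

M_EARTH = 5.972e24             # kg

def specific_impact_energy(m_target, m_projectile, v_impact):
    """Reduced-mass specific impact energy in J/kg (masses in kg, speed in m/s)."""
    mu = m_target * m_projectile / (m_target + m_projectile)
    return 0.5 * mu * v_impact**2 / (m_target + m_projectile)

def classify(q_s):
    if q_s >= OCEAN_EVAPORATING:
        return "ocean evaporating"
    if q_s >= ATMOSPHERE_STRIPPING:
        return "atmosphere stripping"
    return "modest volatile loss"

# Example: a 0.5 M_earth target hit by a 0.1 M_earth projectile at 15 km/s.
q_s = specific_impact_energy(0.5 * M_EARTH, 0.1 * M_EARTH, 1.5e4)
print(f"{q_s:.2e} J/kg -> {classify(q_s)}")
```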
Two-thirds of our 24 runs had a final planet that experienced at least one ocean evaporating impact. Of these 16 runs, 14 had fewer than 10 such impacts, Run37 experienced 17, and Run54 experienced 42. Run54 is also the system that experienced the most fragmentation, which created a chain reaction of giant impacts. This can be seen in Figure <ref> near ∼ 1 Myr.
All 24 runs experienced atmosphere stripping collisions. The runs experienced anywhere from 12 up to 83 atmosphere stripping impacts, with an average of 36 such collisions. Run37, which experienced 17 ocean evaporating collisions, had the most atmosphere stripping collisions. In this run, we would expect such excessive volatile loss to result in dry planets. We observe that the ocean evaporating impacts take place in the inner regions of the disc, where orbital speeds are higher and more massive planets are found. As a result, the innermost planets are most susceptible to ocean evaporating impacts, which might indicate a lower volatile content for these planets. However, eight of our simulated runs experience no ocean stripping events and no more than 12 atmosphere stripping events, which suggests the planets may experience little volatile loss from giant impacts. The atmosphere stripping impacts can be found throughout the disc but are most commonly found around the ice line. Collisions with smaller impact energies may not result in significant volatile loss and may even be a vehicle for volatile transport <cit.>. More detailed modeling of volatile loss/gain in collisional processes in planet formation scenarios is needed in order to better constrain final volatile budgets. From Table <ref>, however, we see that many of the smallest planets in a run form almost all of their mass through pebble accretion. These planets are not involved in any collisions, and so we do not expect the smaller planets to lose any volatiles through collisional processes.
§.§ Desiccated Planets
While the slight under-density of the T1 planets compared to Earth suggests a volatile layer, and our formation models lead to planets with high WMFs, the volatile loss mechanisms discussed above could produce desiccated planets devoid of volatiles. Recently, JWST observed thermal emission from secondary eclipses of T1-b <cit.>. The measured temperature of T1-b supports the planet having no atmosphere, counter to the water-vapor atmosphere that would result from a water-rich surface. We discuss here small-core interior solutions to the observed densities of the T1 planets that match our compositions.
As described above, we find core and mantle mass fractions with two-layer models in Magrathea that match 5000 draws of the observed masses and radii of the T1 planets from <cit.>. A number of the mass and radius draws do not have solutions assuming only two layers and require a volatile layer. The desiccated CMFs using our simulated Fe/Si ratios (CMF_ D), the CMFs for two-layer planets that match observations (CMF_ O), and the percentage of draws with solutions are shown for the seven planets in Table <ref>.
CMF_ D is found from the CMFs in Table <ref>, where CMF_ D=CMF/(1-WMF), and is nearly identical for all planets except T1-h. For T1-b, T1-c, and T1-e, the 2σ uncertainties of CMF_ O and the CMF_ D of our T1 analogs overlap. However, for the remaining four planets, the desiccated CMFs of our T1 analogs are 30-33%, which is significantly larger than the CMFs inferred from observations for these planets. T1-h needs the smallest core to match observations, with a 5% CMF_ O. Our desiccated T1-h analog has the largest CMF_ D, 33%, indicating that all of the iron differentiates into a pure iron core.
To match our iron weight percentages from formation, the outer T1 planets would need a large amount of iron in the mantle, which we previously assumed to be pure magnesium silicate. However, adding iron to the mantle would increase the density of the mantle <cit.>. If we hold the total mass constant, the CMF would need to decrease further with iron in the mantle to match the observed radius. <cit.> found that a mass-radius line passes through all seven T1 planets for a core-free composition. For this model they used an abundance ratio for Fe/Si/Mg/O of 29.2/17.3/15.3/38.2 wt%. The composition of our T1 analogs after the removal of water is on average 30.0/18.1/16.1/21.2 wt% of Fe/Si/Mg/O. While the Fe/Si/Mg we find coincides well with the core-free composition, a core-free planet requires a high oxygen abundance not seen in our formation models. All of the oxygen weight percentages are lower than the 38% needed to oxidize the iron in the mantle.
Other interior models may also fit our compositions and the observed densities of the T1 planets. The CMF can be increased by assuming a liquid core or by putting lighter elements in the core. However, a liquid core only increases the inferred CMF of T1-f from 14% to 15%. In addition, a promising area of research not investigated here is the incorporation of water into the mantles of the T1 planets. A hydrated mantle could link our formation mechanisms with the observed lack of atmosphere on T1-b. However, <cit.> found that the mantles of 1 M_⊕ planets can store 1-2% of their mass in water, which changes the radius by approximately 0.5% relative to a dry model. In comparison, a 1% change in CMF changes the radius of the planet by 0.2%; a change in the assumed CMF is therefore a much larger effect overall. To match both our formation models and the observed densities, T1-b and T1-c could be desiccated, while the outer planets most likely need a significant volatile layer.
§ DISCUSSION
<cit.> modeled the stellar evolution of T1 and found that the current luminosity of T1 is ∼ 5 × 10^-4 L_⊙ and that the luminosity of the star at 10 Myr was ∼ 0.01 L_⊙. However, the stellar luminosity of M-dwarfs in their pre-main-sequence stage should be orders of magnitude larger. <cit.> modeled the luminosity of a pre-main-sequence M8 star and found that when the star was ∼1 Myr old its luminosity was ∼0.05 L_⊙, from which point it continually dimmed over the course of ∼1 Gyr until it reached the main sequence and its current luminosity.
The PIM is proportional to the temperature of the disc, which in turn depends on the luminosity of the star and the viscosity of the disc. Figure <ref> shows the PIM for four different values of the luminosity using the <cit.> temperature profile for the MMSN, Eq. (<ref>), which assumes disc heating is dominated by stellar irradiation. We also show the PIM for the temperature profile used in <cit.>, which accounts for stellar irradiation and viscous heating. We plot the T1 planets in orange. We see that the ice line and the luminosity change the PIM, which should affect the subsequent evolution of the system. As the luminosity decreases in time, the PIM also decreases. The evolution of the stellar luminosity, and thus the PIM, is important to capture when modeling the formation of the T1 planets. As the PIM decreases with stellar luminosity and time, this may indicate that planets T1-d and T1-h formed at a later time. Since fragmentation extends the planet formation process, and we find that fragments are capable of seeding an entire planet, a fragment produced at a later time that grows by pebble accretion in a relatively depleted gas disc could explain the current masses of T1-d and T1-h.
On the other hand, we note that the other T1 planets appear to follow the PIM from an MMSN profile with a specific luminosity.
Nevertheless, accurate temperature profiles are necessary for determining the location of the ice line, which also strongly affects the PIM. We adopt a constant, intermediate value for the luminosity in our temperature profile and assume a constant location of the ice line, which results in planet masses smaller than the observed masses of planets e-g. We assume that the ice line is the location in the disc where the temperature is 170 K. However, <cit.> found that in the denser protoplanetary discs that exist around M-dwarfs, the ice line is more likely to exist in the region of the disc where the temperature is 212 K. Future work that implements more accurate, time-dependent temperature profiles is necessary to better understand the formation of the T1 system, particularly the role of pebble accretion in the formation process, and may more accurately reproduce the observed masses of the T1 planets. However, a cooling disc alone cannot explain the reversed mass distribution, as this would imply that planets d-g formed from the outside in. Even though the ice line would move inwards in a cooling disc, it seems unlikely that it would move as far in as the current orbits of planets d, e, f, or perhaps even g. An additional explanation for the low mass of T1-h, though, would be if the ice line moved interior to T1-h near the time T1-h reached its current mass, thus ceasing pebble accretion and halting the continued growth of the planet.
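To illustrate the scaling, the sketch below evaluates an irradiation-dominated, MMSN-like temperature profile and a generic pebble isolation mass that goes as the cube of the disc aspect ratio. The prefactors used here (T = 280 K (r/au)^{-1/2} (L/L_⊙)^{1/4} and M_iso ≈ 20 M_⊕ (h/0.05)^3 (M_*/M_⊙)) are common literature scalings and are not necessarily the expressions adopted in this work.

```python
import numpy as np

K_B = 1.380649e-23      # J/K
M_H = 1.6735575e-27     # kg
G = 6.674e-11           # m^3 kg^-1 s^-2
M_SUN = 1.989e30        # kg
AU = 1.496e11           # m

def aspect_ratio(r_au, L_lsun, m_star=0.09, mu=2.34):
    """H/r for an irradiation-dominated disc with T = 280 K (r/au)^-1/2 (L/Lsun)^1/4."""
    T = 280.0 * r_au**-0.5 * L_lsun**0.25
    c_s = np.sqrt(K_B * T / (mu * M_H))
    v_k = np.sqrt(G * m_star * M_SUN / (r_au * AU))
    return c_s / v_k

def pebble_isolation_mass(r_au, L_lsun, m_star=0.09):
    """Generic scaling M_iso ~ 20 M_earth (h/0.05)^3 (M_*/M_sun)."""
    h = aspect_ratio(r_au, L_lsun, m_star)
    return 20.0 * (h / 0.05)**3 * m_star

# PIM at 0.05 au as a pre-main-sequence star dims: ~1 Myr, 10 Myr, and today.
for L in [5e-2, 1e-2, 5e-4]:
    print(L, pebble_isolation_mass(0.05, L))
```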
The PIM significantly affects the evolution of the system, particularly the mass distribution and thus, migration rates of the planets. <cit.> derived a PIM expression from 3D hydrodynamical simulations while our adopted expression from <cit.> is derived from 2D hydrodynamical results. <cit.> compared their results to <cit.> and found overall good agreement however, their PIM values are a factor of 1.5-2 times smaller than those of <cit.>. This may be attributed to the 3D nature of the <cit.> simulations where gap carving is more difficult to achieve than in the 2D case. Exploring how the 3D PIM expression of <cit.> affects our results is another area of interest for future work.
Our disc evolution only considers mass loss from accretion onto the central star. This leads to relatively low rates of mass loss in the disc, which may contribute to the inability of our model to reproduce the higher-order resonances in which the inner planets are found. <cit.> showed that mass loss from photoevaporation and disc winds rapidly depletes the disc mass in the T1 system, which results in fast then slow migration of the planets, as well as a late expansion of the orbits due to the clearance of the inner cavity. The parameters we chose for our disc model result in relatively modest migration rates. This produces the observed first-order MMRs but traps the planets in a three-body resonance not found in the T1 system and does not produce the MMRs of the two innermost planet pairs.
A faster disc evolution model that permits planetesimals to quickly reach the PIM and that produces short migration timescales may help reproduce the observed planet orbits. Fast disc evolution can also help explain the masses of T1-d and T1-h if they were seeded by fragments and reached their current masses when the disc dissipated, which would prevent them from growing to the PIM. In addition to a disc model that more accurately describes mass loss, a more accurate treatment of the physics in the inner disc cavity may also be needed to produce the observed MMRs between the inner T1 planets <cit.>.
While we find that fragmentation is an important mechanism for producing planets, as fragments seed planets that grow primarily from pebble accretion, our simulations assume a relatively large minimum fragment mass due to computational limitations. A smaller minimum fragment mass is likely to affect the planet formation process, as more fragments would be produced and each fragment is able to grow its mass through pebble accretion. We note, though, that the pebble accretion rate depends sensitively on the accreting mass (Eqs. <ref>-<ref>). Additionally, the fragmentation model we use in this study assumes that the fragments from a collision are all equally sized and have the same composition. To better understand the role of fragmentation in the formation process, higher-resolution models for the fragmentation of differentiated bodies are needed. In addition to more accurately modeling planet formation, this should help place tighter constraints on the planet compositions.
Volatile loss and gain must be considered in future models in order to reproduce the observed bulk densities of the T1 planets. Our model, which neglects all volatile loss and any atmospheric accretion, over-estimates water mass fractions when starting with reasonable initial WMFs. Detailed modeling of the various giant impacts that could have produced the Earth's moon indicates that some vaporization of the Earth's mantle took place for a wide range of impact energies <cit.>. Accurate handling of volatile and mantle loss from giant impacts <cit.>, irradiation <cit.>, and greenhouse effects <cit.> should help constrain the composition of the final planetary system. However, planet encounters with water-rich bodies may also increase a planet's water content. Whether an encounter with a water-rich body results in net water loss or water gain depends on the specifics of the collision, as demonstrated by previous SPH and n-body simulations <cit.>. Furthermore, terrestrial planets may directly accrete their atmospheres from the surrounding gas disc, and these atmospheres may later be reduced by UV and X-ray radiation from the young host star <cit.>. Detailed modeling of atmospheric gain and loss throughout the formation process is also necessary for placing tighter constraints on the bulk composition and interior structure of the planets.
§ CONCLUSIONS
In this study, we presented a disc evolution and pebble accretion model. We incorporated this model into reboundX and used our newly developed module to study the formation of the TRAPPIST-1 (T1) planets. Our model allows for type-I migration and eccentricity and inclination damping from gas drag in a gas disc. In our simulations, 0.01 M_⊕ bodies began just exterior to the ice line and grew by pebble accretion until the pebble isolation mass (PIM) was reached. We also modeled collisional accretion and fragmentation of the bodies. We used results from a dust condensation code to track the composition evolution of the planets. Using the final compositions and assuming various interior structures, we used the planetary interior structure code Magrathea to obtain radii for our simulated planets.
We reproduced planetary systems that are similar in mass, orbital radius, and multiplicity to the T1 system by numerically modeling planet formation. We found that Moon-sized bodies quickly grow to the pebble isolation mass exterior to the ice line and migrate inwards at rates that commonly result in first-order MMRs between planetary pairs. Our model indicates that the largest planets in the inner system likely grew from a combination of embryo, pebble, and fragment accretion and experienced giant impacts, while the smaller planets in the outer system grew mainly by pebble accretion. We also found that fragmentation between larger bodies plays an important role in seeding the smaller planets as the resulting fragments subsequently grow into planets via pebble accretion.
Tracking the formation process of the planets allowed us to place constraints on the initial water content of the bodies at the start of our simulations. We did not account for any volatile loss but found the inner, larger planets experienced ocean stripping collisions and most planets experienced a few atmosphere stripping collisions. Assigning the initial bodies a WMF of 50% resulted in planets with larger radii and lower densities than those observed in the T1 system. We found that starting bodies with a WMF of 20% resulted in radii and densities similar to those of the T1 planets.
Using our composition constraints and the planet interior structure code, we found solutions for a two-layer model for planets b and c. This, along with the high number of giant impacts the inner planets experienced throughout their formation, is in line with recent observations suggesting that these planets are likely devoid of an atmosphere. However, two-layer models seem unlikely for most of the remaining outer planets, which suggests that these planets have primordial hydrospheres: an atmosphere and/or a water surface layer. Our composition constraints also indicate that no planets are consistent with a core-free interior structure.
§ ACKNOWLEDGEMENTS
We thank Shichun Huang, Rebecca G. Martin and Zhaohuan Zhu for useful conversations. Computer support was provided by UNLV’s National Supercomputing Center. AC acknowledges support from the NSF through grant NSF AST-2107738.
CCY is grateful for the support from NASA via the Astrophysics Theory Program (grant number 80NSSC21K0141), NASA via the Emerging Worlds program (grant number 80NSSC20K0347), and NASA via the Theoretical and Computational Astrophysics Networks program (grant number 80NSSC21K0497).
§ DATA AVAILABILITY
Simulations in this paper made use of the rebound code (Astrophysics Source Code Library identifier ascl.net/1110.016) and reboundX (Astrophysics Source Code Library identifier ascl.net/2011.020) which can be downloaded freely at <http://github.com/hannorein/rebound> and <https://github.com/dtamayo/reboundx>, respectively. The fragmentation code and bulk composition tracking code for rebound (Astrophysics Source Code Library identifier ascl:2204.010) may be found at <https://github.com/annacrnn/rebound_fragmentation>. magrathea, the planet interior solver, may be downloaded freely at <https://github.com/Huang-CL/Magrathea>. The data underlying this article will be shared on reasonable request to the corresponding author.
§ PLANET PROPERTIES
In Table <ref>, we list the properties of the planets in each of our 24 runs (out of 100) that produced a system of at least six planets, which most closely resemble the T1 system.
The second column lists the planet multiplicity (No.) of each system.
The next columns are the properties of each planet, including: mass (M_ p), semi-major axis (a_ p), eccentricity (e), inclination (i), and relative abundances (by weight) for water (WMF), O, Fe, Si, Mg, Al, Ca, Ni, Na, Cr, Mn, Co, and P.
The slashes (/) separate the results for initial embryos with a starting WMF of either 20% or 50%.
The three rightmost columns are the percentages of the planet mass that came from pebble accretion (Peb), fragments (Frag), and embryos (Em), respectively.
Final properties of the planets in each of our runs that produced six or more planets.
Run No. M_ p a_ p e i WMF O Fe Si Mg Al Ca Ni Na Cr Mn Co P Peb Frag Em
(M_⊕) (au) (deg) %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 %_20/50 % % %
Run3 (9 planets)
1.5 0.012 0.03 0.0 14.0/34.9 13.0/11.5 27.6/20.3 16.5/12.2 14.6/10.8 1.38/1.03 1.51/1.13 1.64/1.21 0.76/0.55 0.39/0.29 0.29/0.21 0.08/0.06 0.15/0.11 54.4 15.8 29.8
1.1 0.016 0.08 0.0 9.4/23.6 17.5/16.8 28.0/22.9 16.7/13.7 14.8/12.2 1.42/1.18 1.55/1.28 1.66/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.15/0.13 41.0 16.8 42.3
1.2 0.021 0.05 0.0 14.0/35.0 9.7/9.3 28.9/21.1 17.3/12.7 15.3/11.2 1.41/1.03 1.56/1.15 1.73/1.26 0.8/0.58 0.41/0.3 0.31/0.22 0.08/0.06 0.16/0.12 35.6 17.9 46.5
0.5 0.034 0.04 0.0 10.5/26.3 14.2/14.2 28.8/22.9 17.3/13.7 15.3/12.2 1.47/1.19 1.61/1.29 1.71/1.35 0.75/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.054 0.02 0.0 9.6/23.9 15.7/15.7 28.9/23.5 17.3/14.1 15.3/12.4 1.38/1.12 1.54/1.25 1.73/1.41 0.73/0.58 0.41/0.34 0.3/0.25 0.08/0.07 0.16/0.13 98.4 1.6 0.0
0.9 0.086 0.03 0.0 5.6/13.9 19.5/19.5 27.9/24.7 16.7/14.9 14.7/13.1 1.34/1.19 1.49/1.32 1.68/1.49 0.81/0.72 0.4/0.35 0.3/0.27 0.08/0.07 0.15/0.14 98.8 1.2 0.0
1.0 0.097 0.03 0.0 7.5/18.7 15.7/15.7 28.6/24.4 17.2/14.6 15.1/12.9 1.38/1.18 1.53/1.31 1.72/1.47 0.83/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 97.2 2.8 0.0
0.4 0.155 0.01 0.0 20.0/50.0 0.1/0.1 29.9/18.7 18.0/11.2 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 88.5 11.5 0.0
0.3 0.246 0.0 0.0 20.0/50.0 0.0/0.0 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.62/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.9 4.1 0.0
Run12 (6 planets)
2.0 0.012 0.01 0.0 10.3/25.8 16.3/15.6 28.0/22.5 16.8/13.5 14.9/12.0 1.43/1.17 1.56/1.27 1.66/1.33 0.73/0.57 0.4/0.32 0.29/0.23 0.08/0.07 0.16/0.12 8.5 21.0 70.6
1.3 0.016 0.02 0.0 6.3/15.8 23.1/22.2 27.6/24.4 16.5/14.6 14.7/13.0 1.33/1.18 1.49/1.32 1.61/1.41 0.64/0.55 0.39/0.34 0.27/0.24 0.08/0.07 0.15/0.14 87.8 4.3 7.9
0.7 0.021 0.07 0.0 5.8/14.6 21.2/21.2 28.3/25.0 17.0/15.0 15.0/13.2 1.35/1.19 1.5/1.33 1.71/1.51 0.78/0.69 0.41/0.36 0.31/0.28 0.08/0.07 0.16/0.14 86.6 0.0 13.4
0.8 0.028 0.02 0.0 2.6/6.4 22.0/22.0 28.0/26.6 16.8/15.9 14.8/14.1 1.35/1.28 1.5/1.42 1.68/1.6 0.82/0.78 0.4/0.38 0.3/0.29 0.08/0.08 0.16/0.15 98.6 1.4 0.0
0.4 0.044 0.02 0.0 5.0/12.5 22.5/22.5 28.3/25.5 16.9/15.2 15.0/13.5 1.41/1.28 1.56/1.41 1.68/1.51 0.7/0.62 0.4/0.36 0.29/0.26 0.08/0.07 0.16/0.14 97.4 2.6 0.0
0.7 0.051 0.01 0.0 8.2/20.6 17.4/17.4 28.6/24.0 17.2/14.4 15.2/12.7 1.37/1.14 1.52/1.27 1.71/1.44 0.78/0.65 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.3 1.7 0.0
Run15 (6 planets)
0.6 0.011 0.03 0.0 13.9/34.6 17.5/14.3 25.9/19.4 15.6/11.6 13.7/10.3 1.27/0.96 1.41/1.06 1.55/1.16 0.71/0.52 0.37/0.28 0.28/0.21 0.08/0.06 0.14/0.11 17.5 1.8 80.7
1.0 0.015 0.06 0.0 10.7/26.9 15.5/14.8 28.1/22.3 16.8/13.4 14.9/11.8 1.39/1.11 1.53/1.23 1.67/1.33 0.76/0.59 0.4/0.32 0.3/0.23 0.08/0.07 0.16/0.12 27.3 1.3 71.4
1.1 0.02 0.05 0.0 12.2/30.5 12.9/12.4 28.4/21.8 17.0/13.0 15.1/11.6 1.42/1.1 1.56/1.2 1.69/1.29 0.77/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 22.5 11.8 65.7
0.5 0.028 0.02 0.0 8.4/21.1 17.1/17.1 28.6/23.8 17.1/14.3 15.2/12.7 1.47/1.24 1.6/1.34 1.69/1.41 0.74/0.6 0.41/0.34 0.3/0.24 0.08/0.07 0.16/0.13 80.1 0.0 19.9
0.6 0.045 0.01 0.0 7.2/18.1 19.4/19.4 28.5/24.5 17.0/14.6 15.2/13.0 1.48/1.29 1.65/1.44 1.66/1.42 0.68/0.56 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.14 98.1 1.9 0.0
0.7 0.059 0.01 0.0 6.0/15.1 21.1/21.0 28.4/25.0 17.0/15.0 15.0/13.2 1.35/1.18 1.5/1.32 1.71/1.51 0.74/0.64 0.41/0.36 0.31/0.27 0.08/0.07 0.16/0.14 98.4 1.6 0.0
Run26 (9 planets)
2.0 0.012 0.12 0.0 10.8/26.9 16.6/15.4 27.7/22.1 16.6/13.2 14.7/11.7 1.39/1.12 1.52/1.22 1.64/1.31 0.74/0.58 0.39/0.31 0.29/0.23 0.08/0.06 0.15/0.12 27.6 5.1 67.2
1.1 0.019 0.15 0.0 5.8/14.5 21.7/21.6 28.2/25.0 16.8/14.9 15.0/13.3 1.59/1.43 1.66/1.49 1.64/1.45 0.66/0.57 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 86.6 13.4 0.0
0.6 0.025 0.07 0.0 6.8/17.0 20.2/20.1 28.4/24.6 17.0/14.7 15.1/13.1 1.53/1.34 1.67/1.47 1.65/1.42 0.67/0.56 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 83.7 0.0 16.3
0.7 0.033 0.05 0.0 7.2/18.1 19.3/19.3 28.5/24.5 17.1/14.7 15.1/13.0 1.35/1.16 1.51/1.3 1.72/1.48 0.72/0.61 0.41/0.35 0.31/0.26 0.08/0.07 0.16/0.14 82.6 0.0 17.4
0.3 0.044 0.05 0.0 0.0/0.0 30.6/30.6 27.7/27.7 16.5/16.5 14.8/14.8 1.54/1.54 1.67/1.67 1.59/1.59 0.58/0.58 0.39/0.39 0.26/0.26 0.08/0.08 0.15/0.15 96.8 3.2 0.0
0.7 0.057 0.04 0.0 7.4/18.5 18.4/18.4 28.4/24.3 17.1/14.6 15.1/12.8 1.36/1.16 1.51/1.29 1.71/1.46 0.76/0.64 0.41/0.35 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.8 0.075 0.04 0.0 1.1/2.8 26.8/26.8 27.2/26.5 16.3/15.9 14.4/14.1 1.31/1.28 1.45/1.42 1.64/1.6 0.8/0.78 0.39/0.38 0.3/0.29 0.08/0.08 0.15/0.15 98.8 1.2 0.0
0.9 0.091 0.03 0.0 0.2/0.6 26.4/26.4 27.3/27.1 16.4/16.3 14.4/14.4 1.31/1.31 1.46/1.45 1.64/1.63 0.79/0.79 0.39/0.39 0.3/0.3 0.08/0.08 0.15/0.15 98.9 1.1 0.0
0.3 0.103 0.03 0.0 19.6/49.0 1.1/0.9 29.7/18.8 17.8/11.3 15.7/10.0 1.44/0.91 1.6/1.01 1.78/1.13 0.84/0.53 0.42/0.27 0.32/0.2 0.09/0.05 0.16/0.1 95.3 4.7 0.0
Run29 (8 planets)
0.6 0.012 0.18 0.0 11.7/29.2 19.6/16.9 26.1/20.6 15.7/12.4 13.9/11.0 1.31/1.04 1.44/1.14 1.55/1.22 0.7/0.54 0.37/0.29 0.27/0.22 0.08/0.06 0.14/0.11 21.0 16.1 62.8
1.5 0.016 0.15 0.0 11.2/28.0 14.5/13.9 28.2/22.2 16.9/13.3 15.0/11.8 1.4/1.11 1.55/1.22 1.68/1.32 0.76/0.59 0.4/0.32 0.3/0.23 0.08/0.07 0.16/0.12 37.0 9.1 53.9
0.5 0.021 0.18 0.0 8.7/21.6 17.2/17.2 28.6/23.8 17.1/14.2 15.2/12.6 1.52/1.28 1.63/1.37 1.68/1.39 0.71/0.57 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.5 0.026 0.06 0.0 9.5/23.9 15.7/15.6 28.5/23.3 17.1/13.9 15.1/12.3 1.45/1.2 1.59/1.3 1.69/1.37 0.75/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 78.5 2.5 18.9
0.6 0.034 0.09 0.0 10.5/26.2 16.0/15.4 28.1/22.5 16.8/13.5 14.9/11.9 1.45/1.18 1.57/1.27 1.66/1.32 0.73/0.57 0.4/0.32 0.29/0.23 0.08/0.07 0.16/0.12 50.8 4.6 44.6
0.7 0.054 0.01 0.0 14.9/37.4 7.4/7.4 29.3/20.9 17.6/12.5 15.5/11.1 1.41/1.01 1.57/1.12 1.76/1.26 0.84/0.6 0.42/0.3 0.32/0.23 0.09/0.06 0.16/0.12 61.4 38.6 0.0
0.9 0.086 0.01 0.0 5.4/13.4 19.7/19.7 27.8/24.8 16.7/14.9 14.7/13.1 1.34/1.19 1.49/1.33 1.67/1.49 0.81/0.73 0.4/0.36 0.3/0.27 0.08/0.07 0.15/0.14 98.8 1.2 0.0
0.4 0.137 0.01 0.0 19.9/49.8 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.13 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.9 3.1 0.0
Run37 (6 planets)
1.5 0.01 0.12 0.0 11.5/28.8 16.0/14.7 27.6/21.6 16.5/13.0 14.6/11.5 1.4/1.11 1.53/1.21 1.64/1.28 0.73/0.56 0.39/0.31 0.29/0.22 0.08/0.06 0.15/0.12 28.1 44.3 27.6
1.9 0.015 0.17 0.0 12.2/30.5 13.1/12.6 28.5/21.9 17.1/13.1 15.1/11.6 1.56/1.24 1.65/1.29 1.68/1.28 0.73/0.54 0.4/0.31 0.29/0.22 0.08/0.06 0.16/0.12 8.7 28.5 62.8
0.5 0.023 0.08 0.0 6.6/16.6 20.0/20.0 28.4/24.7 17.0/14.7 15.1/13.1 1.52/1.34 1.63/1.43 1.66/1.44 0.69/0.59 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.14 97.7 2.3 0.0
0.5 0.03 0.05 0.0 8.2/20.4 17.8/17.8 28.6/24.0 17.1/14.4 15.2/12.8 1.52/1.3 1.63/1.39 1.68/1.4 0.7/0.57 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.6 0.04 0.04 0.0 9.6/24.1 15.6/15.6 28.8/23.4 17.2/14.0 15.3/12.4 1.46/1.2 1.6/1.31 1.7/1.38 0.73/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.063 0.0 0.0 6.8/16.9 19.7/19.7 28.4/24.6 17.0/14.8 15.0/13.0 1.35/1.17 1.51/1.31 1.71/1.48 0.82/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
Run38 (6 planets)
0.5 0.013 0.19 0.0 7.8/19.5 23.4/21.5 26.6/22.9 15.9/13.7 14.1/12.2 1.4/1.22 1.51/1.31 1.56/1.34 0.67/0.56 0.38/0.32 0.27/0.23 0.08/0.07 0.15/0.13 40.8 12.3 46.9
0.5 0.016 0.14 0.0 9.9/24.6 15.9/15.6 28.3/22.9 16.9/13.7 15.0/12.2 1.43/1.17 1.56/1.27 1.68/1.36 0.75/0.6 0.4/0.33 0.3/0.24 0.08/0.07 0.16/0.13 58.1 10.4 31.5
0.6 0.02 0.15 0.0 13.6/34.0 14.1/12.3 27.3/20.3 16.4/12.2 14.4/10.8 1.33/0.99 1.47/1.1 1.63/1.21 0.76/0.56 0.39/0.29 0.29/0.22 0.08/0.06 0.15/0.11 42.1 7.6 50.3
0.5 0.032 0.05 0.0 7.2/18.1 19.4/19.3 28.3/24.3 16.9/14.5 15.0/12.9 1.52/1.32 1.62/1.41 1.66/1.42 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 80.3 0.0 19.7
0.4 0.05 0.07 0.0 5.8/14.5 21.5/21.5 28.5/25.2 17.0/15.0 15.2/13.4 1.38/1.22 1.54/1.37 1.65/1.46 0.66/0.57 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 97.0 3.0 0.0
0.7 0.058 0.03 0.0 10.2/25.4 15.2/15.0 28.5/22.9 17.1/13.8 15.1/12.1 1.36/1.09 1.52/1.22 1.72/1.38 0.78/0.62 0.41/0.33 0.31/0.25 0.08/0.07 0.16/0.13 62.1 0.0 37.9
Run39 (11 planets)
1.3 0.011 0.05 0.0 14.1/35.3 12.1/10.8 27.8/20.4 16.7/12.2 14.7/10.8 1.35/1.0 1.5/1.1 1.66/1.22 0.78/0.56 0.4/0.29 0.3/0.22 0.08/0.06 0.15/0.11 32.7 25.4 42.0
1.0 0.016 0.14 0.0 7.1/17.6 19.8/19.6 28.2/24.4 16.9/14.6 15.0/12.9 1.5/1.32 1.61/1.41 1.66/1.42 0.7/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 41.3 3.2 55.6
0.9 0.021 0.09 0.0 7.9/19.8 18.5/18.4 28.4/24.0 17.0/14.4 15.1/12.8 1.53/1.32 1.63/1.4 1.66/1.4 0.7/0.57 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 60.0 28.4 11.6
0.6 0.027 0.04 0.0 8.9/22.2 17.4/17.1 28.3/23.4 16.9/14.0 15.0/12.4 1.43/1.2 1.57/1.31 1.68/1.39 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 67.1 15.3 17.6
0.4 0.033 0.05 0.0 0.2/0.6 31.5/31.5 28.0/27.8 16.5/16.4 15.0/14.9 2.08/2.08 1.98/1.98 1.51/1.51 0.4/0.4 0.38/0.38 0.23/0.22 0.08/0.08 0.15/0.15 97.4 2.6 0.0
0.3 0.04 0.03 0.0 0.2/0.6 31.5/31.5 27.9/27.8 16.5/16.4 15.0/14.9 2.01/2.0 1.95/1.94 1.52/1.51 0.42/0.42 0.38/0.38 0.23/0.23 0.08/0.08 0.15/0.15 96.5 3.5 0.0
0.4 0.046 0.03 0.0 0.5/1.2 30.6/30.5 28.0/27.7 16.6/16.5 14.9/14.8 1.55/1.54 1.73/1.72 1.57/1.56 0.52/0.51 0.39/0.39 0.25/0.25 0.08/0.08 0.15/0.15 90.7 9.3 0.0
0.5 0.054 0.03 0.0 3.1/7.7 26.6/26.3 27.8/26.2 16.7/15.8 14.7/13.9 1.31/1.23 1.47/1.38 1.66/1.57 0.65/0.6 0.4/0.38 0.29/0.27 0.08/0.08 0.15/0.15 98.0 2.0 0.0
0.8 0.063 0.03 0.0 6.6/16.6 20.1/20.0 28.3/24.6 17.0/14.8 15.0/13.0 1.35/1.17 1.5/1.31 1.7/1.48 0.81/0.7 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 83.6 3.2 13.2
0.9 0.082 0.02 0.0 2.4/6.0 24.2/24.2 27.3/25.9 16.4/15.6 14.4/13.7 1.31/1.24 1.46/1.38 1.64/1.56 0.8/0.76 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 98.9 1.1 0.0
1.0 0.1 0.01 0.0 2.5/6.4 24.1/23.3 27.2/26.1 16.3/15.7 14.4/13.8 1.31/1.25 1.45/1.39 1.64/1.57 0.79/0.76 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 98.9 1.1 0.0
Run41 (8 planets)
1.4 0.012 0.17 0.0 9.2/23.0 19.7/18.3 27.3/22.6 16.3/13.5 14.5/12.0 1.4/1.18 1.53/1.28 1.61/1.33 0.7/0.57 0.39/0.32 0.28/0.23 0.08/0.07 0.15/0.13 14.8 4.1 81.1
1.5 0.019 0.1 0.0 8.3/20.9 19.2/18.5 27.8/23.4 16.6/14.0 14.8/12.4 1.45/1.23 1.57/1.33 1.64/1.38 0.71/0.58 0.39/0.33 0.29/0.24 0.08/0.07 0.15/0.13 50.0 12.0 37.9
0.5 0.025 0.1 0.0 9.5/23.8 16.7/16.2 28.2/23.1 16.9/13.8 15.0/12.3 1.45/1.2 1.58/1.3 1.67/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 77.3 0.0 22.7
0.5 0.033 0.07 0.0 8.0/20.0 18.0/18.0 28.6/24.1 17.1/14.4 15.2/12.8 1.45/1.23 1.58/1.34 1.69/1.42 0.71/0.59 0.41/0.34 0.29/0.25 0.08/0.07 0.16/0.13 98.0 2.0 0.0
0.6 0.044 0.04 0.0 7.9/19.8 18.4/18.3 28.6/24.1 17.1/14.4 15.2/12.8 1.51/1.3 1.65/1.41 1.67/1.4 0.69/0.56 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.069 0.03 0.0 5.0/12.6 21.9/21.9 28.0/25.2 16.8/15.1 14.8/13.3 1.34/1.2 1.49/1.34 1.69/1.52 0.82/0.74 0.4/0.36 0.31/0.28 0.08/0.07 0.16/0.14 98.6 1.4 0.0
0.9 0.084 0.03 0.0 3.4/8.4 22.7/22.7 27.5/25.6 16.5/15.4 14.5/13.5 1.32/1.23 1.47/1.36 1.65/1.54 0.81/0.75 0.39/0.37 0.3/0.28 0.08/0.07 0.15/0.14 98.8 1.2 0.0
0.3 0.102 0.02 0.0 20.0/49.9 0.2/0.1 29.9/18.7 17.9/11.2 15.8/9.9 1.45/0.91 1.61/1.01 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.5 3.5 0.0
Run46 (6 planets)
1.3 0.013 0.08 0.0 11.6/28.9 16.2/14.7 27.4/21.5 16.4/12.9 14.5/11.4 1.35/1.07 1.49/1.18 1.63/1.28 0.75/0.57 0.39/0.31 0.29/0.23 0.08/0.06 0.15/0.12 21.1 3.8 75.1
0.8 0.016 0.11 0.0 14.9/37.3 9.6/8.7 28.3/20.3 17.0/12.2 15.0/10.8 1.36/0.98 1.52/1.09 1.7/1.22 0.81/0.58 0.41/0.29 0.31/0.22 0.08/0.06 0.16/0.11 54.7 14.9 30.4
0.5 0.021 0.07 0.0 7.5/18.7 19.2/19.0 28.3/24.2 16.9/14.4 15.0/12.8 1.5/1.3 1.61/1.39 1.66/1.41 0.7/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 79.0 0.0 21.0
0.6 0.026 0.05 0.0 10.9/27.4 13.4/13.4 28.9/22.7 17.3/13.6 15.3/12.1 1.45/1.15 1.59/1.26 1.71/1.34 0.76/0.59 0.41/0.32 0.3/0.24 0.08/0.07 0.16/0.13 79.9 20.1 0.0
0.6 0.041 0.02 0.0 5.9/14.9 21.4/21.4 28.4/25.0 16.9/14.9 15.1/13.3 1.59/1.43 1.68/1.5 1.64/1.44 0.65/0.56 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 98.1 1.9 0.0
0.6 0.047 0.02 0.0 8.0/20.1 18.1/18.1 28.7/24.2 17.1/14.4 15.2/12.9 1.43/1.21 1.6/1.36 1.67/1.4 0.69/0.56 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
Run47 (8 planets)
1.8 0.012 0.09 0.0 12.5/31.2 13.4/12.5 28.1/21.4 16.8/12.8 14.9/11.4 1.38/1.06 1.53/1.17 1.67/1.28 0.77/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 18.6 5.8 75.5
1.3 0.017 0.21 0.0 8.1/20.3 19.2/18.7 28.0/23.7 16.8/14.2 14.9/12.6 1.36/1.15 1.52/1.29 1.65/1.39 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 30.9 2.1 67.0
0.6 0.022 0.1 0.0 8.2/20.5 17.8/17.8 28.7/24.1 17.2/14.5 15.2/12.8 1.37/1.15 1.53/1.28 1.71/1.44 0.71/0.58 0.41/0.34 0.3/0.25 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.6 0.029 0.09 0.0 9.6/24.0 15.3/15.3 28.8/23.4 17.2/14.0 15.3/12.4 1.4/1.14 1.56/1.27 1.7/1.38 0.74/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.3 0.038 0.1 0.0 3.8/9.5 24.6/24.6 28.2/26.0 16.8/15.5 15.0/13.9 1.48/1.38 1.66/1.54 1.63/1.5 0.63/0.57 0.4/0.37 0.27/0.25 0.08/0.08 0.16/0.14 95.4 4.6 0.0
0.7 0.046 0.04 0.0 8.9/22.3 16.7/16.6 28.8/23.8 17.2/14.2 15.3/12.7 1.39/1.15 1.55/1.28 1.69/1.39 0.71/0.57 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.073 0.03 0.0 6.4/16.1 19.5/19.5 28.1/24.5 16.9/14.7 14.9/13.0 1.35/1.17 1.5/1.31 1.69/1.48 0.82/0.72 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.7 1.3 0.0
0.9 0.089 0.03 0.0 7.5/18.8 16.5/16.5 28.3/24.1 17.0/14.5 15.0/12.7 1.36/1.16 1.52/1.29 1.7/1.45 0.82/0.7 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.8 1.2 0.0
7*Run51 7*7
0.6 0.011 0.16 2.56 13.5/33.8 17.8/14.7 25.9/19.5 15.5/11.7 13.7/10.3 1.25/0.95 1.4/1.05 1.55/1.17 0.72/0.54 0.37/0.28 0.28/0.21 0.08/0.06 0.14/0.11 46.5 22.0 31.6
1.0 0.015 0.21 0.67 9.0/22.5 17.5/17.1 28.2/23.3 16.9/13.9 15.0/12.4 1.47/1.23 1.59/1.33 1.66/1.37 0.72/0.58 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 26.7 1.4 71.9
0.5 0.019 0.18 3.77 7.4/18.5 18.9/18.8 28.4/24.3 17.0/14.5 15.1/12.9 1.49/1.29 1.61/1.39 1.67/1.42 0.71/0.6 0.4/0.34 0.29/0.25 0.08/0.07 0.16/0.13 78.6 0.0 21.4
0.5 0.025 0.17 0.3 6.7/16.8 20.1/20.0 28.3/24.6 16.9/14.7 15.0/13.1 1.54/1.36 1.64/1.44 1.65/1.43 0.69/0.58 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 80.5 0.0 19.5
0.6 0.04 0.05 0.15 7.1/17.7 20.1/19.9 28.2/24.3 16.9/14.5 15.0/13.0 1.56/1.37 1.65/1.44 1.65/1.41 0.68/0.56 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.13 80.7 0.0 19.3
0.8 0.064 0.01 0.03 5.6/14.1 22.5/22.1 27.8/24.8 16.7/14.9 14.7/13.1 1.32/1.18 1.47/1.31 1.68/1.5 0.81/0.72 0.4/0.36 0.31/0.27 0.08/0.07 0.15/0.14 81.3 0.0 18.7
0.2 0.102 0.01 0.01 19.7/49.3 0.7/0.6 29.8/18.7 17.9/11.3 15.8/9.9 1.44/0.91 1.6/1.01 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.16/0.1 94.0 6.0 0.0
8*Run54 8*8
1.1 0.011 0.13 0.0 9.0/22.6 19.1/18.1 27.6/23.0 16.5/13.7 14.7/12.2 1.45/1.22 1.57/1.32 1.62/1.34 0.69/0.56 0.39/0.32 0.28/0.23 0.08/0.07 0.15/0.13 6.6 93.4 0.0
1.9 0.016 0.15 0.0 9.3/23.3 17.3/16.7 28.1/23.1 16.8/13.8 14.9/12.3 1.43/1.19 1.57/1.3 1.66/1.36 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.16/0.13 14.7 35.2 50.1
0.6 0.021 0.14 0.0 7.8/19.5 19.6/19.1 28.0/23.8 16.7/14.2 14.8/12.6 1.46/1.26 1.58/1.35 1.65/1.4 0.72/0.6 0.4/0.34 0.29/0.24 0.08/0.07 0.15/0.13 78.0 0.0 22.0
0.7 0.033 0.06 0.0 7.1/17.7 19.0/19.0 28.4/24.4 17.0/14.7 15.0/12.9 1.36/1.16 1.51/1.3 1.71/1.47 0.82/0.7 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
0.6 0.043 0.06 0.0 8.0/20.1 17.2/17.2 28.4/23.8 17.0/14.3 15.0/12.6 1.36/1.14 1.52/1.27 1.7/1.43 0.8/0.67 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.057 0.05 0.0 8.2/20.5 16.6/16.6 28.5/23.9 17.1/14.3 15.1/12.6 1.37/1.14 1.52/1.27 1.72/1.44 0.78/0.65 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.4 1.6 0.0
0.8 0.069 0.04 0.0 13.4/33.6 9.6/9.6 29.1/21.5 17.5/12.9 15.4/11.4 1.4/1.04 1.56/1.15 1.75/1.3 0.84/0.62 0.42/0.31 0.32/0.23 0.08/0.06 0.16/0.12 69.3 30.7 0.0
0.9 0.09 0.01 0.0 6.9/17.3 17.3/17.2 28.2/24.3 16.9/14.6 14.9/12.9 1.36/1.17 1.51/1.3 1.7/1.46 0.82/0.71 0.4/0.35 0.31/0.26 0.08/0.07 0.16/0.13 98.9 1.1 0.0
7*Run55 7*7
0.8 0.012 0.08 0.0 13.3/33.2 17.1/14.4 26.3/19.9 15.8/11.9 13.9/10.5 1.27/0.96 1.41/1.07 1.57/1.19 0.73/0.55 0.38/0.28 0.28/0.21 0.08/0.06 0.15/0.11 14.5 1.5 84.0
1.0 0.016 0.1 0.0 7.3/18.2 19.4/19.2 28.2/24.2 16.9/14.5 15.0/12.9 1.49/1.3 1.6/1.39 1.66/1.42 0.71/0.6 0.4/0.34 0.29/0.25 0.08/0.07 0.16/0.13 40.3 0.0 59.7
0.5 0.025 0.07 0.0 5.4/13.4 24.2/23.5 27.6/24.9 16.5/14.8 14.7/13.3 1.61/1.48 1.67/1.52 1.59/1.43 0.61/0.53 0.39/0.35 0.27/0.24 0.08/0.07 0.15/0.14 80.7 0.0 19.3
0.6 0.028 0.05 0.0 7.2/18.1 20.0/19.8 28.2/24.3 16.9/14.5 15.0/12.9 1.58/1.39 1.66/1.45 1.64/1.4 0.66/0.55 0.4/0.34 0.28/0.24 0.08/0.07 0.16/0.13 82.5 0.0 17.5
0.6 0.032 0.05 0.0 13.2/32.9 10.2/10.1 29.0/21.7 17.4/13.0 15.4/11.5 1.41/1.05 1.57/1.17 1.73/1.29 0.8/0.59 0.41/0.31 0.31/0.23 0.08/0.06 0.16/0.12 52.9 0.0 47.1
0.7 0.052 0.02 0.0 6.1/15.2 21.0/21.0 28.5/25.1 17.1/15.0 15.1/13.3 1.35/1.19 1.51/1.33 1.68/1.48 0.68/0.59 0.41/0.36 0.29/0.25 0.08/0.07 0.16/0.14 98.3 1.7 0.0
0.8 0.068 0.01 0.0 5.7/14.3 21.1/21.1 28.2/25.0 16.9/15.0 14.9/13.2 1.34/1.19 1.5/1.33 1.7/1.51 0.83/0.74 0.4/0.36 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
8*Run60 8*8
0.4 0.012 0.18 0.0 6.8/17.0 25.3/23.3 26.4/23.3 15.8/13.9 14.0/12.4 1.46/1.31 1.54/1.38 1.54/1.35 0.63/0.54 0.37/0.33 0.26/0.23 0.08/0.07 0.15/0.13 41.9 0.0 58.1
1.2 0.016 0.17 0.0 11.3/28.3 17.3/15.6 27.1/21.5 16.3/12.9 14.4/11.4 1.35/1.07 1.49/1.18 1.61/1.27 0.73/0.56 0.39/0.31 0.29/0.22 0.08/0.06 0.15/0.12 17.0 4.9 78.1
0.5 0.025 0.18 0.0 5.5/13.8 21.8/21.8 28.2/25.2 16.9/15.0 15.0/13.4 1.55/1.4 1.65/1.48 1.65/1.46 0.67/0.58 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 97.8 2.2 0.0
0.5 0.032 0.08 0.0 7.6/18.9 18.7/18.7 28.5/24.2 17.0/14.5 15.1/12.9 1.51/1.3 1.63/1.4 1.67/1.42 0.7/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.9 2.1 0.0
0.7 0.052 0.05 0.0 5.9/14.7 21.5/21.5 28.5/25.2 17.0/15.1 15.1/13.4 1.35/1.19 1.51/1.33 1.67/1.48 0.66/0.56 0.4/0.36 0.28/0.25 0.08/0.07 0.16/0.14 85.2 0.0 14.8
0.9 0.082 0.02 0.0 5.0/12.4 20.6/20.6 27.7/24.9 16.6/14.9 14.6/13.2 1.33/1.19 1.48/1.33 1.67/1.5 0.81/0.73 0.4/0.36 0.3/0.27 0.08/0.07 0.15/0.14 88.7 0.0 11.3
0.2 0.107 0.04 0.0 20.0/50.0 0.2/0.1 29.8/18.6 17.9/11.2 15.8/9.9 1.44/0.9 1.6/1.0 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.2 4.8 0.0
0.2 0.13 0.01 0.0 20.0/50.0 0.2/0.1 29.8/18.6 17.9/11.2 15.8/9.9 1.44/0.9 1.6/1.0 1.79/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 95.6 4.4 0.0
8*Run68 8*8
1.0 0.011 0.11 0.0 12.2/30.6 17.3/15.0 26.5/20.6 15.9/12.3 14.0/10.9 1.28/1.0 1.43/1.11 1.59/1.23 0.75/0.58 0.38/0.29 0.29/0.22 0.08/0.06 0.15/0.11 27.7 7.3 65.0
1.0 0.015 0.13 0.0 6.1/15.2 21.5/21.2 28.0/24.7 16.7/14.8 14.9/13.1 1.51/1.35 1.61/1.44 1.64/1.44 0.69/0.59 0.4/0.35 0.28/0.25 0.08/0.07 0.15/0.14 40.6 1.2 58.2
0.5 0.019 0.11 0.0 7.6/19.1 18.5/18.5 28.5/24.2 17.0/14.5 15.1/12.9 1.5/1.29 1.63/1.4 1.67/1.42 0.71/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 80.7 0.0 19.3
0.6 0.025 0.05 0.0 11.3/28.2 13.1/13.1 28.8/22.5 17.2/13.5 15.3/11.9 1.46/1.16 1.6/1.26 1.71/1.33 0.76/0.58 0.41/0.32 0.3/0.23 0.08/0.07 0.16/0.12 59.7 0.0 40.3
0.5 0.033 0.05 0.0 9.1/22.8 16.7/16.5 28.6/23.5 17.1/14.1 15.2/12.5 1.48/1.23 1.61/1.33 1.68/1.38 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 81.7 0.0 18.3
0.6 0.044 0.05 0.0 8.3/20.9 17.7/17.6 28.7/24.0 17.2/14.4 15.2/12.7 1.38/1.15 1.54/1.29 1.7/1.42 0.71/0.58 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.7 0.057 0.04 0.0 8.3/20.9 17.6/17.6 28.6/23.9 17.2/14.4 15.1/12.7 1.36/1.14 1.52/1.27 1.72/1.44 0.75/0.62 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.8 0.075 0.01 0.0 7.0/17.5 18.8/18.5 27.8/24.0 16.7/14.4 14.7/12.7 1.34/1.15 1.49/1.28 1.67/1.45 0.81/0.7 0.4/0.34 0.3/0.26 0.08/0.07 0.15/0.13 88.2 0.0 11.8
6*Run73 6*6
1.0 0.011 0.06 0.0 12.3/30.7 16.7/14.7 26.8/20.8 16.1/12.5 14.2/11.0 1.3/1.01 1.45/1.12 1.61/1.24 0.74/0.57 0.38/0.3 0.29/0.22 0.08/0.06 0.15/0.12 66.5 15.3 18.2
0.7 0.015 0.12 0.0 10.2/25.4 16.6/15.8 27.9/22.5 16.7/13.5 14.8/12.0 1.39/1.13 1.53/1.24 1.66/1.34 0.75/0.59 0.4/0.32 0.29/0.24 0.08/0.07 0.15/0.12 34.3 21.4 44.3
0.7 0.019 0.05 0.0 5.8/14.5 21.5/21.4 28.2/25.0 16.8/14.9 15.0/13.3 1.52/1.37 1.63/1.46 1.65/1.45 0.68/0.59 0.4/0.35 0.28/0.25 0.08/0.07 0.16/0.14 52.7 33.6 13.7
0.6 0.025 0.05 0.0 7.2/18.1 19.0/19.0 28.4/24.4 17.0/14.6 15.1/13.0 1.46/1.26 1.6/1.38 1.67/1.43 0.71/0.6 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.13 82.2 0.0 17.8
0.5 0.033 0.04 0.0 7.8/19.5 18.3/18.3 28.5/24.2 17.1/14.4 15.2/12.8 1.5/1.29 1.63/1.4 1.67/1.41 0.71/0.58 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 81.5 0.0 18.5
0.7 0.044 0.02 0.0 7.0/17.4 19.8/19.7 28.5/24.6 17.1/14.8 15.1/13.1 1.36/1.17 1.51/1.3 1.7/1.47 0.7/0.59 0.41/0.35 0.29/0.25 0.08/0.07 0.16/0.14 98.5 1.5 0.0
6*Run74 6*6
1.5 0.013 0.1 0.0 10.0/25.0 19.4/17.7 27.0/22.1 16.2/13.2 14.3/11.7 1.37/1.14 1.5/1.24 1.6/1.3 0.7/0.56 0.38/0.31 0.28/0.23 0.08/0.06 0.15/0.12 13.2 3.5 83.3
0.7 0.016 0.18 0.0 4.6/11.6 22.9/22.9 28.1/25.5 16.8/15.2 14.9/13.6 1.49/1.37 1.6/1.46 1.65/1.49 0.69/0.61 0.4/0.36 0.28/0.26 0.08/0.08 0.16/0.14 54.5 32.0 13.5
0.5 0.02 0.17 0.0 6.5/16.3 22.6/21.7 27.5/24.2 16.4/14.4 14.6/12.9 1.46/1.3 1.58/1.4 1.6/1.41 0.66/0.57 0.39/0.34 0.28/0.24 0.08/0.07 0.15/0.13 76.7 3.9 19.4
0.6 0.026 0.08 0.0 7.1/17.8 19.5/19.4 28.5/24.5 17.0/14.7 15.1/13.0 1.42/1.22 1.56/1.35 1.68/1.44 0.69/0.58 0.4/0.35 0.29/0.24 0.08/0.07 0.16/0.14 98.2 1.8 0.0
0.6 0.041 0.04 0.0 6.8/17.0 19.8/19.8 28.5/24.7 17.1/14.8 15.1/13.1 1.41/1.23 1.56/1.35 1.69/1.46 0.7/0.59 0.41/0.35 0.29/0.25 0.08/0.07 0.16/0.14 81.4 0.0 18.6
0.8 0.054 0.02 0.0 6.0/14.9 21.1/20.9 27.9/24.7 16.8/14.8 14.8/13.1 1.34/1.18 1.49/1.31 1.69/1.49 0.82/0.73 0.4/0.35 0.31/0.27 0.08/0.07 0.15/0.14 87.2 0.0 12.8
8*Run81 8*8
0.9 0.011 0.18 0.0 9.0/22.6 20.6/19.0 27.1/22.6 16.2/13.5 14.4/12.0 1.42/1.2 1.53/1.29 1.59/1.33 0.69/0.56 0.38/0.32 0.28/0.23 0.08/0.07 0.15/0.12 80.6 6.5 12.9
1.6 0.018 0.06 0.0 11.7/29.1 14.0/13.4 28.3/22.0 16.9/13.2 15.0/11.7 1.42/1.11 1.56/1.22 1.68/1.3 0.76/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 43.5 15.2 41.3
0.5 0.023 0.14 0.0 8.0/19.9 17.9/17.9 28.5/24.1 17.1/14.4 15.1/12.8 1.49/1.28 1.61/1.37 1.68/1.41 0.72/0.59 0.4/0.34 0.29/0.24 0.08/0.07 0.16/0.13 97.1 2.9 0.0
0.6 0.031 0.05 0.0 9.8/24.5 15.4/15.4 28.7/23.2 17.2/13.9 15.2/12.3 1.5/1.23 1.62/1.32 1.69/1.36 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 98.1 1.9 0.0
0.6 0.04 0.05 0.0 10.9/27.3 13.7/13.6 28.9/22.8 17.3/13.6 15.3/12.1 1.44/1.14 1.59/1.26 1.71/1.34 0.75/0.58 0.41/0.32 0.3/0.23 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.8 0.064 0.01 0.0 6.8/17.0 19.8/19.7 28.3/24.5 17.0/14.7 15.0/13.0 1.35/1.17 1.51/1.3 1.71/1.48 0.82/0.71 0.41/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.6 1.4 0.0
0.3 0.102 0.02 0.0 20.0/49.9 0.5/0.3 29.8/18.7 17.9/11.2 15.8/9.9 1.45/0.9 1.61/1.0 1.79/1.12 0.84/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.4 3.6 0.0
0.4 0.133 0.02 0.0 20.0/50.0 0.1/0.0 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.2 2.8 0.0
9*Run93 9*9
1.8 0.012 0.1 0.0 13.1/32.8 12.9/11.8 27.8/20.9 16.7/12.6 14.7/11.1 1.35/1.01 1.49/1.12 1.67/1.26 0.79/0.59 0.4/0.3 0.3/0.23 0.08/0.06 0.15/0.12 49.7 39.0 11.3
1.1 0.017 0.2 0.0 10.1/25.3 15.4/15.1 28.5/22.9 17.1/13.7 15.1/12.2 1.44/1.17 1.58/1.28 1.69/1.35 0.74/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 32.1 40.9 27.0
0.5 0.023 0.07 0.0 9.4/23.5 16.0/15.9 28.6/23.4 17.1/14.0 15.2/12.4 1.48/1.23 1.61/1.33 1.69/1.37 0.73/0.58 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 80.8 0.0 19.2
0.3 0.03 0.12 0.0 1.4/3.4 29.5/29.5 28.0/27.3 16.6/16.1 15.0/14.6 2.01/1.97 1.94/1.9 1.54/1.49 0.45/0.43 0.39/0.37 0.24/0.23 0.08/0.08 0.15/0.15 75.8 24.2 0.0
0.6 0.036 0.09 0.0 9.5/23.8 15.7/15.6 28.7/23.4 17.2/14.0 15.2/12.4 1.45/1.19 1.59/1.3 1.7/1.38 0.74/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 82.1 0.0 17.9
0.6 0.048 0.04 0.0 9.1/22.8 16.5/16.5 28.8/23.7 17.2/14.1 15.3/12.6 1.43/1.19 1.6/1.33 1.68/1.38 0.71/0.56 0.41/0.33 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.8 0.076 0.01 0.0 7.2/18.0 18.2/18.2 28.1/24.1 16.9/14.5 14.9/12.7 1.35/1.15 1.5/1.28 1.69/1.45 0.82/0.71 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.7 1.3 0.0
0.3 0.12 0.01 0.0 19.9/49.7 0.6/0.4 29.8/18.7 17.9/11.2 15.8/9.9 1.45/0.91 1.61/1.01 1.79/1.12 0.84/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.3 3.7 0.0
0.3 0.191 0.0 0.0 19.7/49.2 0.6/0.5 29.9/18.9 18.0/11.3 15.9/10.0 1.45/0.91 1.61/1.02 1.79/1.13 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.06 0.17/0.1 96.2 3.8 0.0
6*Run94 6*6
1.7 0.012 0.03 0.0 10.8/27.0 17.4/16.0 27.4/21.9 16.4/13.1 14.6/11.6 1.42/1.16 1.54/1.24 1.62/1.29 0.71/0.55 0.39/0.31 0.28/0.22 0.08/0.06 0.15/0.12 21.3 2.5 76.2
1.0 0.017 0.1 0.0 9.9/24.8 15.0/15.0 28.7/23.2 17.2/13.9 15.2/12.3 1.47/1.2 1.6/1.3 1.7/1.37 0.75/0.59 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 45.4 28.3 26.3
0.6 0.023 0.06 0.0 13.3/33.4 11.3/10.7 28.4/21.2 17.1/12.7 15.1/11.2 1.37/1.02 1.53/1.14 1.7/1.27 0.79/0.58 0.41/0.3 0.3/0.23 0.08/0.06 0.16/0.12 64.5 2.1 33.4
0.7 0.03 0.03 0.0 6.7/16.8 19.4/19.4 28.2/24.5 17.0/14.7 15.0/13.0 1.35/1.17 1.5/1.3 1.7/1.47 0.82/0.71 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 97.5 2.5 0.0
0.5 0.042 0.03 0.0 3.0/7.6 24.0/23.9 27.4/25.8 16.5/15.5 14.5/13.6 1.31/1.23 1.46/1.37 1.65/1.55 0.8/0.75 0.39/0.37 0.3/0.28 0.08/0.08 0.15/0.14 97.7 2.3 0.0
0.8 0.048 0.02 0.0 6.8/17.1 17.7/17.6 28.2/24.4 16.9/14.6 14.9/12.9 1.36/1.17 1.51/1.3 1.69/1.47 0.82/0.71 0.4/0.35 0.31/0.27 0.08/0.07 0.16/0.14 98.5 1.5 0.0
8*Run95 8*8
0.5 0.013 0.19 0.0 7.6/19.1 24.8/22.5 26.2/22.8 15.7/13.6 13.9/12.1 1.36/1.2 1.48/1.3 1.54/1.34 0.65/0.55 0.37/0.32 0.27/0.23 0.08/0.07 0.15/0.13 41.7 16.2 42.2
1.2 0.016 0.12 0.0 13.1/32.8 11.7/11.0 28.3/21.2 17.0/12.7 15.0/11.2 1.37/1.02 1.52/1.14 1.7/1.27 0.8/0.6 0.4/0.3 0.31/0.23 0.08/0.06 0.16/0.12 31.6 11.0 57.4
0.9 0.02 0.14 0.0 12.1/30.3 13.5/12.8 28.2/21.7 16.9/13.0 14.9/11.5 1.38/1.06 1.53/1.18 1.68/1.29 0.77/0.59 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 58.5 12.6 28.9
0.5 0.032 0.08 0.0 10.1/25.3 15.9/15.4 28.2/22.8 16.9/13.6 15.0/12.1 1.43/1.17 1.57/1.27 1.67/1.35 0.75/0.59 0.4/0.32 0.3/0.24 0.08/0.07 0.16/0.13 39.7 41.9 18.4
0.8 0.041 0.05 0.0 17.9/44.7 4.4/3.8 29.0/19.2 17.4/11.5 15.4/10.2 1.4/0.93 1.56/1.03 1.74/1.15 0.83/0.55 0.41/0.27 0.31/0.21 0.08/0.06 0.16/0.11 40.2 2.9 56.9
0.8 0.066 0.04 0.0 5.6/14.0 21.9/21.6 27.9/24.9 16.8/14.9 14.8/13.2 1.33/1.18 1.48/1.32 1.68/1.5 0.82/0.73 0.4/0.36 0.31/0.27 0.08/0.07 0.15/0.14 85.9 0.0 14.1
0.9 0.076 0.02 0.0 7.8/19.6 17.3/17.3 28.2/23.8 16.9/14.3 14.9/12.6 1.36/1.14 1.51/1.27 1.7/1.43 0.82/0.7 0.4/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.6 1.4 0.0
0.3 0.121 0.0 0.0 20.0/50.0 0.0/0.0 30.0/18.7 18.0/11.2 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 96.4 3.6 0.0
9*Run98 9*9
0.7 0.01 0.11 0.0 14.8/37.1 13.4/11.2 26.9/19.5 16.2/11.7 14.3/10.3 1.3/0.94 1.44/1.04 1.62/1.17 0.77/0.55 0.39/0.28 0.29/0.21 0.08/0.06 0.15/0.11 25.9 18.9 55.2
1.6 0.016 0.04 0.0 13.5/33.7 10.8/10.3 28.6/21.2 17.1/12.7 15.1/11.2 1.39/1.03 1.54/1.15 1.71/1.27 0.79/0.58 0.41/0.3 0.31/0.23 0.08/0.06 0.16/0.12 22.0 33.0 45.0
0.7 0.021 0.09 0.0 9.0/22.5 18.0/17.3 28.0/23.2 16.7/13.9 14.8/12.3 1.41/1.18 1.55/1.29 1.66/1.37 0.73/0.59 0.4/0.33 0.29/0.24 0.08/0.07 0.15/0.13 52.0 25.3 22.7
0.6 0.028 0.04 0.0 8.7/21.7 17.3/17.2 28.7/23.8 17.1/14.2 15.2/12.7 1.47/1.23 1.64/1.38 1.68/1.39 0.7/0.56 0.41/0.34 0.29/0.24 0.08/0.07 0.16/0.13 98.3 1.7 0.0
0.6 0.037 0.03 0.0 9.9/24.8 15.1/15.1 28.8/23.2 17.3/13.9 15.3/12.3 1.45/1.18 1.61/1.31 1.7/1.37 0.74/0.58 0.41/0.33 0.3/0.24 0.08/0.07 0.16/0.13 98.2 1.8 0.0
0.7 0.048 0.03 0.0 9.9/24.7 15.2/15.2 28.8/23.2 17.3/13.9 15.3/12.3 1.39/1.12 1.55/1.25 1.72/1.39 0.76/0.6 0.41/0.33 0.31/0.25 0.08/0.07 0.16/0.13 97.8 2.2 0.0
0.8 0.064 0.01 0.0 8.8/21.9 16.3/16.3 28.5/23.6 17.1/14.2 15.1/12.5 1.37/1.13 1.52/1.26 1.72/1.42 0.82/0.68 0.41/0.34 0.31/0.26 0.08/0.07 0.16/0.13 98.7 1.3 0.0
0.3 0.101 0.02 0.0 20.0/49.9 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.0 3.0 0.0
0.4 0.132 0.02 0.0 20.0/49.9 0.1/0.1 30.0/18.8 18.0/11.3 15.9/9.9 1.45/0.91 1.61/1.01 1.8/1.12 0.85/0.53 0.43/0.27 0.32/0.2 0.09/0.05 0.17/0.1 97.2 2.8 0.0
6*Run100 6*6
0.4 0.013 0.11 0.0 9.9/24.7 19.6/17.8 26.9/22.0 16.1/13.2 14.2/11.7 1.33/1.09 1.47/1.21 1.6/1.31 0.72/0.58 0.38/0.31 0.28/0.23 0.08/0.06 0.15/0.12 29.4 6.2 64.4
1.2 0.016 0.1 0.0 12.0/30.1 14.1/13.2 28.0/21.6 16.8/13.0 14.8/11.5 1.4/1.09 1.54/1.19 1.67/1.28 0.76/0.58 0.4/0.31 0.3/0.23 0.08/0.06 0.16/0.12 23.7 1.4 74.9
0.4 0.022 0.14 0.0 3.2/8.1 25.1/25.1 28.0/26.2 16.7/15.6 14.9/13.9 1.53/1.45 1.63/1.53 1.63/1.52 0.66/0.61 0.4/0.37 0.28/0.26 0.08/0.08 0.15/0.14 97.0 3.0 0.0
0.5 0.026 0.05 0.0 7.3/18.2 18.8/18.8 28.4/24.4 17.0/14.6 15.1/12.9 1.48/1.28 1.6/1.38 1.68/1.43 0.72/0.6 0.4/0.35 0.29/0.25 0.08/0.07 0.16/0.13 80.6 0.0 19.4
0.5 0.034 0.05 0.0 7.0/17.5 19.7/19.6 28.4/24.5 17.0/14.6 15.1/13.0 1.54/1.35 1.64/1.43 1.66/1.42 0.68/0.57 0.4/0.35 0.28/0.24 0.08/0.07 0.16/0.14 97.6 2.4 0.0
0.7 0.054 0.0 0.0 6.1/15.2 21.1/21.1 28.5/25.1 17.1/15.1 15.1/13.3 1.35/1.18 1.51/1.32 1.7/1.5 0.68/0.58 0.41/0.36 0.29/0.26 0.08/0.07 0.16/0.14 98.5 1.5 0.0
§ RE-SCALING THE DUST CONDENSATES
We re-scale the dust condensation model around a Sun-like star from <cit.> to a model for the dust condensation around a T1-like star.
To this end, we re-scale the size of the disc by matching the location of the ice line. The ice line, the location where T=170 K, lies at 6.5 au around a Sun-like star at the epoch we adopt from the <cit.> model.
We multiply the radii of the Sun-like disc by 0.1/6.5 so that the new ice line resides at 0.1 au, which results in a disc spanning 0.01–0.3 au.
Our composition-tracking code then uses the unmodified relative abundances at a given radius of the re-scaled disc to initialize the bodies and to track the composition change from pebble accretion.
Differences in stellar evolution between G-type and M-dwarf stars will lead to different disc masses, lifetimes, mid-plane pressures, and temperature profiles, and thus different evolutionary tracks and timescales for dust condensation <cit.>. However, extrapolating the relative abundances of the dust from a disc around a Sun-like star at a single epoch and re-scaling for disc size should be representative of the dust in an M-dwarf system with a solar composition, such as TRAPPIST-1 <cit.>, at some earlier epoch. In the dust condensation code the dust condensates do not affect the subsequent evolution of the disc, so to first approximation the results reported here should be valid for the T1 system at some earlier time. While the dust condensation is determined locally by density, mid-plane pressure, and temperature, and differences in stellar evolution between G- and M-type stars can lead to differences in these parameters, a fast evolution of the condensation only occurs when the local disc temperature is higher than the condensation temperature. This can be seen in Fig. 7 of <cit.>, where a clear condensation front is visible over time. As soon as the temperature drops below the condensation temperature, the abundance evolution becomes slowly varying. Thus, once the disc cools sufficiently, the dust condensation is sensitive primarily to the initial disc composition.
Figure <ref> compares the density of the gas disc ρ, temperature T, and mid-plane pressure P profiles over 0.01–0.3 au of the re-scaled profile around a Sun-like star from <cit.> and the analytic profiles used in the n-body simulations at t=0, as described in Section <ref>.
To derive the analytic P profile we use
P=ρ c_ s^2
where ρ = Σ/(H√(2 π)) is the gas density, H is the gas scale height and c_ s is the sound speed.
The profiles used in <cit.> correspond to a disc shortly after disc formation, whereas our disc profiles describe a more evolved disc, so differences between the profiles are expected and may lead to differences in the dust evolution. However, the density and pressure are still within approximately an order of magnitude of one another, while the temperatures are below the condensation temperatures of the major elements; thus, the relative abundances in condensed dust could remain similar in these two discs, as both begin with a solar-like composition.
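For illustration, the analytic mid-plane pressure profile can be evaluated directly from surface density and temperature profiles. The short Python sketch below does this for placeholder power-law Σ(r) and T(r) profiles and an assumed stellar mass; the normalisations, exponents, and stellar mass are illustrative assumptions only and are not the values used in the simulations.

import numpy as np

# Physical constants (cgs)
k_B = 1.380649e-16        # erg/K
m_H = 1.6735575e-24       # g
G = 6.674e-8              # cm^3 g^-1 s^-2
au = 1.495978707e13       # cm

M_star = 0.09 * 1.989e33  # assumed T1-like stellar mass, g
mu = 2.34                 # assumed mean molecular weight

r = np.linspace(0.01, 0.3, 200) * au           # radial grid spanning 0.01-0.3 au
Sigma = 1.0e3 * (r / (0.1 * au))**-1.5         # illustrative surface density, g/cm^2
T = 170.0 * (r / (0.1 * au))**-0.5             # illustrative temperature, K, with the ice line at 0.1 au

c_s = np.sqrt(k_B * T / (mu * m_H))            # isothermal sound speed
Omega = np.sqrt(G * M_star / r**3)             # Keplerian angular frequency
H = c_s / Omega                                # gas scale height
rho = Sigma / (H * np.sqrt(2.0 * np.pi))       # mid-plane gas density
P = rho * c_s**2                               # mid-plane pressure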
|
http://arxiv.org/abs/2307.07326v1 | 20230714130357 | Distributed Planning for Rigid Robot Formations using Consensus on the Transformation of a Base Configuration | [
"Jeppe Heini Mikkelsen",
"Matteo Fumagalli"
] | cs.RO | [
"cs.RO",
"cs.SY",
"eess.SY"
] |
This paper presents a novel planning method that achieves navigation of multi-robot formations in cluttered environments, while maintaining the formation throughout the robots motion. The method utilises a decentralised approach to find feasible formation parameters that guarantees formation constraints for rigid formations. The method proves to be computationally efficient, making it relevant for reactive planning and control of multi-robot systems formation. The method has been tested in a simulation environment to prove feasibility and run-time efficiency.
Robot swarms, Robot formations, Multi-robot Systems, Motion planning, Consensus, Distributed systems.
§ INTRODUCTION
Multi-robot operations such as remote swarm operation by a single user, cooperative object transportation, or large-scale area surveying may require a multi-robot system to move in formation. When controlling formations, the robots should move in a reactive manner both to avoid obstacles while keeping formation and for ensuring communication between neighboring robots, while at the same time avoiding self-collisions. This paper presents a multi-robot formation planner that achieves formation control, collision avoidance and communication among the agents in real-time.
Formation planning and control approaches can be divided into centralised and decentralised approaches <cit.>. Centralised approaches compute the robot motions at a central location and subsequently transmit control references to the robots, while decentralised approaches allow all robots in the computation of the formation motion, thus making decentralised approaches generally more robust than centralised methods, due to them not having a single point of failure. Furthermore, robot formations can be either rigid or non-rigid. In rigid formations the robots move in a fixed geometric shape, while in non-rigid formations the formation is permitted to deform. Rigid formations can be less prone to breaking, due to them relying on driving the robots towards an a priori specified formation where relative distances can be guaranteed, but are less flexible in where they can navigate compared to non-rigid formations. Lastly, computational methods for formation planning and control can be optimal or feasible. Optimal methods aim at finding the motion of the robots ensuring that a cost function is minimised, such as distance, time, energy, etc. Feasible methods are used to find a set of motions that is only feasible for the robots to perform, thereby being substantially faster than optimal methods, making them more applicable for real multi-robot systems.
An early form of distributed feasible non-rigid formation motion control was proposed by Reynolds in 1984 <cit.>, where agents move in formation using local interaction rules. A common approach for this is to use artificial potential fields (APF) <cit.>, such as in <cit.>. However, APF methods are prone to local minima and can thereby break formation. In <cit.> the authors propose a distributed feasible rigid formation planning method where the rotation and translation of a base configuration is found by calculating the first principal component of the robot positions using a consensus algorithm, and an assignment of robots to the formation is found using a distributed negotiation algorithm. In <cit.> a similar, but optimal, approach for finding the formation and assignment that minimises the distance travelled by the robots to reach the formation was proposed. In <cit.> the authors find the optimal assignment, scaling, translation, and rotation for navigating a rigid formation of drones towards a goal in a cluttered environment, in a distributed manner, while combining individual robot collision avoidance and local planning algorithms to navigate the robots to the desired formation. However, the method does not ensure that robots remain in formation while in transit from one formation to another.
In this paper we present a novel distributed, feasible planner for rigid formations. Similarly to <cit.>, our method finds the scaling, rotation, and translation of a base configuration. However, we propose a continuous, fast, light-weight, distributed planner that allows robots to move in a rigid formation while obeying constraints on the formation parameters. We achieve this by mapping the desired velocities of the robots into the parameter space of a formation transformation, performing consensus and constraint steps, and mapping back to the velocity space of the robots. Furthermore, our method continuously keeps the robots in formation.
In this paper the following notation style is used: Lowercase italic symbols x are variables, bold lowercase italic symbols x are vectors, bold uppercase symbols 𝐗 are matrices, and calligraphic symbols 𝒳 are sets. _≥0 and _>0 denotes positive and strictly positive real numbers respectively.
The rest of this paper is organised as follows: In <ref> the problem of moving a swarm of robots through an environment while staying in formation is presented. In <ref> the transformation of a base configuration used to solve the aforementioned problem is presented. In <ref> the method for solving the problem is presented in four steps. In <ref> the resulting planning algorithm is presented.
§ PROBLEM DESCRIPTION AND APPROACH
Consider a swarm of N planar robots, 𝒱 = {1,…,N}, where the position of robot i is denoted by p_i∈^2. It is assumed that the robots are holonomic and that each robot has a local kinematic controller, ensuring that it is able to track a reference velocity. Therefore, the dynamics of each robot is represented using the following single-integrator model:
d/dtp_i = v_i, ∀ i ∈𝒱.
It is assumed that the robots exchange information with each other through wireless communication. The communication network can be modelled as an undirected dynamic graph 𝒢(t) = (𝒱,ℰ(t)), where ℰ(t) = {(i,j)∈𝒱×𝒱 | i ≠ j ∧ ||p_i - p_j||_2 ≤ r_c} are time-varying communication links between robot pairs, with r_c being the communication range. 𝒩_i(t) = {j ∈𝒱 | (i,j) ∈ℰ(t)} is the neighbour set of robot i, i.e., the robots with which robot i has a direct communication link. Furthermore, there is a distance r_d ≤ r_c within which communication performance is assumed to be perfect or near perfect, and beyond which it starts to degrade. The goal of this paper is to derive a distributed algorithm that finds the velocities v_i for each robot in the swarm, such that they each attempt to track a local desired velocity v_des,i while remaining in formation, maintaining communication and avoiding collisions. This is achieved by injecting an intermediary formation planner between the local planner and the controller on each robot, where the desired velocities are supplied by the local planners; see <ref>. The local planner can be any of a variety of planners and is not within the scope of this paper.
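For concreteness, the time-varying edge set and neighbour sets induced by this communication model can be computed from the robot positions as in the minimal Python sketch below; the positions and communication range used here are arbitrary example values.

import numpy as np

def communication_graph(positions, r_c):
    # Edge set E(t) and neighbour sets N_i(t) for communication range r_c.
    N = len(positions)
    edges = set()
    neighbors = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if np.linalg.norm(positions[i] - positions[j]) <= r_c:
                edges.add((i, j))
                neighbors[i].add(j)
                neighbors[j].add(i)
    return edges, neighbors

positions = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([5.0, 0.0])]
edges, neighbors = communication_graph(positions, r_c=2.0)
print(edges, neighbors)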
§ FORMATION TRANSFORMATION
The positions of the robots in the formation is parameterised through a transformation of a base configuration. Consider a base configuration ℬ={c_1,…,c_N}, where c_i∈^2 is the position in the base configuration associated with robot i. The position of robot i in the formation is then found according to the following scaling, rotation, and translation.
p_i = 𝐑𝐒c_i + t,
where 𝐒∈_>0^2×2 is a strictly positive diagonal scaling matrix, 𝐑∈ SO(2) is a rotation matrix, and t∈^2 is a translation vector; see <ref>
𝐑 =
[ cosφ -sinφ; sinφ cosφ ], 𝐒 =
[ s_x 0; 0 s_y ], t =
[ t_x; t_y ].
The parameter vector of the transformation is denoted as
η = (φ,s,t) ∈^5,
s = (s_x,s_y)∈^2, t = (t_x,t_y)∈^2.
The base configuration ℬ is determined a priori and can have any desired shape, e.g., grid, triangular, hexagonal, etc., see <ref>.
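As a concrete illustration, the following Python/NumPy sketch evaluates the transformation in (<ref>) for every robot; the unit-grid base configuration and the parameter values are arbitrary examples.

import numpy as np

def formation_positions(eta, base):
    # Map the base configuration to robot positions via p_i = R S c_i + t.
    phi, sx, sy, tx, ty = eta
    R = np.array([[np.cos(phi), -np.sin(phi)],
                  [np.sin(phi),  np.cos(phi)]])
    S = np.diag([sx, sy])
    t = np.array([tx, ty])
    return (R @ S @ base.T).T + t

# 3x3 unit-grid base configuration, c_i in (-1,0,1) x (-1,0,1)
base = np.array([[x, y] for x in (-1, 0, 1) for y in (-1, 0, 1)], dtype=float)
eta = np.array([np.pi / 4, 1.5, 1.5, 15.0, 0.0])  # (phi, s_x, s_y, t_x, t_y)
print(formation_positions(eta, base))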
§ METHOD
Having found a transformation that expresses the position of each robot in the formation, the motion that ensures that each robot stays in formation can be found through a four-step approach:
* tracking,
* consensus,
* constraint satisfaction,
* recovering velocity.
To ensure that the robots find their velocities in a distributed way, each robot i carries an instance of the transformation parameters, denoted as η_i, and performs the steps locally.
§.§ Step 1: Tracking
In the first step, the time derivative of the parameters, which ensures that each robot tracks its desired velocity, is found. Using the chain rule, the time derivative of the position of robot i in the swarm, with respect to the time derivative of its parameters, can be expressed as
d/dtp_i = 𝐉_η_𝐢d/dtη_i,
where 𝐉_η_𝐢 is the Jacobian of the transformation in (<ref>) with respect to η_i,
𝐉_η_i = [ -sinφ_i s_x,i c_x,i - cosφ_i s_y,i c_y,i cosφ_i c_x,i -sinφ_i c_y,i 1 0; cosφ_i s_x,i c_x,i - sinφ_i s_y,i c_y,i sinφ_i c_x,i cosφ_i c_y,i 0 1 ].
From this, the time derivative of the parameters for robot i can be found as
d/dtη_i = 𝐉^+_η_iv_des,i,
where (·)^+ denotes the right Moore-Penrose pseudo-inverse, and v_des,i is the desired velocity of robot i. Since the rows of the Jacobian are linearly independent, the pseudo-inverse can be computed as
𝐉_η_𝐢^+ = 𝐉_η_𝐢^⊤(𝐉_η_𝐢𝐉_η_𝐢^⊤)^-1.
However, since the Jacobian is wide, the system in (<ref>) is underdetermined, and since the individual desired velocities may not conform to a feasible formation motion, applying the update in (<ref>) does not result in a unique solution across the robots; discrepancies in the parameters therefore arise between robots.
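A minimal Python/NumPy sketch of this tracking step for a single robot is given below; it evaluates the Jacobian in (<ref>) and maps an example desired velocity into a parameter derivative through the pseudo-inverse. The numerical values are placeholders.

import numpy as np

def jacobian(eta, c):
    # Jacobian of p = R S c + t with respect to eta = (phi, s_x, s_y, t_x, t_y).
    phi, sx, sy, _, _ = eta
    cx, cy = c
    return np.array([
        [-np.sin(phi)*sx*cx - np.cos(phi)*sy*cy, np.cos(phi)*cx, -np.sin(phi)*cy, 1.0, 0.0],
        [ np.cos(phi)*sx*cx - np.sin(phi)*sy*cy, np.sin(phi)*cx,  np.cos(phi)*cy, 0.0, 1.0],
    ])

def tracking_rate(eta, c, v_des):
    # Step 1: parameter derivative that tracks the desired velocity.
    J = jacobian(eta, c)
    return np.linalg.pinv(J) @ v_des  # right pseudo-inverse of the 2x5 Jacobian

eta = np.array([0.1, 1.0, 1.0, 0.0, 0.0])
print(tracking_rate(eta, c=np.array([1.0, -1.0]), v_des=np.array([0.5, 0.2])))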
§.§ Step 2: Consensus
To ensure that the robots find the same solution, a consensus step is applied to (<ref>)
d/dtη_i = 𝐉^+_η_iv_des,i-λ_i∑_j∈𝒩_i (η_i - η_j)_consensus step, λ_i ∈_>0,
where λ_i is a strictly positive multiplier that determines how fast consensus is reached. Applying this step drives the solutions of the robots together, ensuring that they remain in formation.
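The consensus correction adds a single term to the tracking update. The sketch below assumes the pseudo-inverse of the Jacobian has already been computed (e.g., as in the previous sketch) and is passed in as a 5x2 array; the toy values in the usage lines are placeholders.

import numpy as np

def consensus_rate(J_pinv, v_des, eta_i, eta_neighbors, lam):
    # Step 2: tracking term plus consensus step for robot i.
    d_eta = J_pinv @ v_des
    for eta_j in eta_neighbors:
        d_eta = d_eta - lam * (eta_i - eta_j)
    return d_eta

# Toy usage: translation-only pseudo-inverse and two neighbours
J_pinv = np.zeros((5, 2)); J_pinv[3, 0] = 1.0; J_pinv[4, 1] = 1.0
eta_i = np.array([0.0, 1.0, 1.0, 0.0, 0.0])
neighbors = [eta_i + 0.1, eta_i - 0.05]
print(consensus_rate(J_pinv, np.array([0.3, 0.0]), eta_i, neighbors, lam=2.0))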
§.§ Step 3: Constraint Satisfaction
The two prior steps allow unbounded scaling, which can result in loss of communication or collisions. To remedy this, an additional step is applied to constrain the solution. Due to the communication model, there is a range within which the robots are assumed to have perfect, or near perfect, communication. Beyond that range, the communication attenuates until it ceases to work. Furthermore, there is a minimum distance that robots prefer to keep from each other and a minimum distance that they have to keep from each other to avoid collisions. Therefore, a soft constraint and a hard constraint on the transformation parameters are introduced. The set within which the hard constraint requires the parameters to lie is denoted 𝒞_h⊆^5, and the set within which the soft constraint prefers the parameters to lie is denoted 𝒞_s⊆^5, with 𝒞_s⊆𝒞_h.
Since it is only the scaling parameter that has an influence on the relative distances between the robots, as 𝐑,t∈ SE(2), the rotation and translation parameters, φ and t, are unconstrained, i.e., 𝒞_s,ϕ,𝒞_h,ϕ∈ and 𝒞_s,t,𝒞_h,t∈^2. The soft and hard constraint sets on the scaling parameter s, 𝒞_s,s,𝒞_h,s⊆^2_>0, both consist of a subset of a quarter circle in the positive quadrant; see <ref>. To help avoid collisions, it is preferred that the scaling in x and y is greater than the lower bound ε_s∈_>0 and it is required to be greater than the lower bound ε_h∈_>0, where ε_s ≥ε_h. To help ensure communication, the Euclidean norm of the scaling is preferred to be smaller than an upper bound r̅_s∈_>0 and is required to be smaller than an upper bound r̅_h∈_>0, where r̅_s ≤r̅_h.
§.§.§ Soft Constraint
The soft constraint step attempts to drive the parameters toward the soft constraint set 𝒞_s as
d/dtη_i = 𝐉^+_η_iv_des,i - λ_i ∑_j∈𝒩_i(η_i - η_j)…
- μ_i(η_i - proj(η_i,𝒞_s))_soft constraint step,
where μ_i ∈_≥0 is a positive penalty multiplier that determines how hard the soft constraint attempts to drive the parameters into 𝒞_s, and proj(η_i,𝒞_s) ∈^5 is a projection of the parameters onto 𝒞_s
proj(η_i,𝒞) = (φ_proj,s_proj,t_proj).
Since the rotation and translation is unconstrained, their projection is set to their current value,
φ_proj = φ, t_proj = t.
There are seven different cases for how the projection of the scaling parameter s is performed,
s_proj =
(ε_s,ε_s) if s_x < ε_s ∧ s_y < ε_s,
(s_x,ε_s) if s_x ≥ε_s ∧ s_x < δ_s ∧ s_y < ε_s,
(ε_s,s_y) if s_x < ε_s ∧ s_y ≥ε_s ∧ s_y < δ_s,
(δ_s,ε_s) if s_x ≥δ_s ∧ s_y < ε_s,
(ε_s,δ_s) if s_x < ε_s ∧ s_y ≥δ_s,
r̅_s s/||s||_2 if s_x ≥ε_s ∧ s_y ≥ε_s ∧ ||s||_2 > r̅_s,
s otherwise,
where δ_s = √(r̅_s^2 - ε_s^2), see <ref>.
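A direct Python transcription of these projection cases is sketched below; the rotation and translation components are passed through unchanged, and the bound values in the usage line are arbitrary examples.

import numpy as np

def project_scaling(s, eps_s, r_bar_s):
    # Projection of the scaling parameters onto the soft constraint set C_{s,s}.
    sx, sy = s
    delta_s = np.sqrt(r_bar_s**2 - eps_s**2)
    if sx < eps_s and sy < eps_s:
        return np.array([eps_s, eps_s])
    if eps_s <= sx < delta_s and sy < eps_s:
        return np.array([sx, eps_s])
    if sx < eps_s and eps_s <= sy < delta_s:
        return np.array([eps_s, sy])
    if sx >= delta_s and sy < eps_s:
        return np.array([delta_s, eps_s])
    if sx < eps_s and sy >= delta_s:
        return np.array([eps_s, delta_s])
    if sx >= eps_s and sy >= eps_s and np.linalg.norm(s) > r_bar_s:
        return r_bar_s * np.asarray(s, dtype=float) / np.linalg.norm(s)
    return np.asarray(s, dtype=float)

def project_parameters(eta, eps_s, r_bar_s):
    # proj(eta, C_s): rotation and translation are unconstrained.
    phi, sx, sy, tx, ty = eta
    s_proj = project_scaling(np.array([sx, sy]), eps_s, r_bar_s)
    return np.array([phi, s_proj[0], s_proj[1], tx, ty])

print(project_parameters(np.array([0.2, 2.0, 2.0, 1.0, 0.0]), eps_s=0.5, r_bar_s=2.0))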
§.§.§ Hard Constraint
Having applied the soft constraint, the hard constraint must now be enforced. This is done by scaling the parameter derivative by a matrix 𝐀_𝐢
d/dtη_i←𝐀_𝐢d/dtη_i = 𝐀_𝐢𝐉^+_η_iv_des,i…
- λ_i 𝐀_𝐢∑_j∈𝒩_i(η_i - η_j) - μ_i𝐀_𝐢(η_i - proj(η_i,𝒞_s))
where 𝐀_𝐢 is a diagonal matrix
𝐀_𝐢 = diag(a_φ,i,a_s,i1_2×1,a_t,i1_2×1).
Since the rotation and translation are unconstrained, their scaling parameter is one,
a_φ,i = 1, a_t,i = 1.
The scaling parameter derivative is scaled such that it always lies within the hard constraint set 𝒞_h,s, as
a_s,i = max α_s,
s.t. s_i + α_sd/dts_i∈𝒞_h,s,
α_s ∈ [0,1].
This ensures that the scaling parameters cannot exit the hard constraint set, as, if they try to exit, their derivative converges to zero as they approach the edge of the set; see <ref>. For the specific hard constraint set in <ref>, <ref> can be solved as
a_s,i = min_≥ 0(1,(ε_h - s_i)⊘d/dts_i,(-β±√(β^2 - 4αγ))/(2α)),
where min_≥ 0 denotes the smallest positive element, ⊘ denotes the Hadamard division operator, and
α = (d/dts_i^⊤)·(d/dts_i),
β = 2(d/dts_i^⊤)·s_i,
γ = s_i^⊤·s_i - r̅_h^2.
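The closed-form expression above can be implemented with a few guards for zero or outward-pointing derivative components; the Python sketch below returns the scaling factor for a given scaling state and candidate derivative, with arbitrary example values in the final line.

import numpy as np

def hard_constraint_scale(s, ds, eps_h, r_bar_h):
    # Largest alpha in [0, 1] such that s + alpha * ds stays inside C_{h,s}.
    candidates = [1.0]
    # Lower bounds s_x, s_y >= eps_h: only shrinking components can hit them.
    for k in range(2):
        if ds[k] < 0.0:
            candidates.append((eps_h - s[k]) / ds[k])
    # Upper bound ||s||_2 <= r_bar_h: positive roots of the quadratic in alpha.
    a = float(ds @ ds)
    b = 2.0 * float(ds @ s)
    g = float(s @ s) - r_bar_h**2
    if a > 0.0:
        disc = b**2 - 4.0 * a * g
        if disc >= 0.0:
            for root in ((-b + np.sqrt(disc)) / (2.0 * a), (-b - np.sqrt(disc)) / (2.0 * a)):
                if root >= 0.0:
                    candidates.append(root)
    return min(c for c in candidates if c >= 0.0)

print(hard_constraint_scale(np.array([1.0, 1.0]), np.array([2.0, 0.0]), eps_h=0.2, r_bar_h=2.0))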
§.§ Step 4: Recovering Velocity
Since the local controllers on the robots work in the velocity space, the parameter derivative must be transformed back to a velocity. This is achieved by pre-multiplying (<ref>) with the Jacobian in (<ref>). However, this does not ensure that the robots return to formation if it is broken due to unforeseen perturbations. Therefore, an additional term is added that drives the robots towards their current desired formation
v_i = 𝐉_η_𝐢d/dtη_i- K_i(p_i - (𝐑_𝐢𝐒_𝐢c_i + t_i))_perturbation rejection
= Γ_𝐢v_des,i - λ_i𝐉_η_𝐢 𝐀_𝐢∑_j∈𝒩_i(η_i - η_j) …
- μ_i 𝐉_η_𝐢 𝐀_𝐢(η_i - proj(η_i,𝒞_s)) …
- K_i(p_i - (𝐑_𝐢𝐒_𝐢c_i + t_i)),
where Γ_𝐢 = 𝐉_η_𝐢𝐀_𝐢𝐉^+_η_𝐢, and K_i∈_+ is a positive feedback gain that determines how fast the robot return to formation. The recovered velocity in <ref> consists of the original desired velocity v_des,i with the corrections ensuring that the robots move in formation, that the formation stays within constraints and that the robots return to formation in case of perturbations. λ_i can be interpreted as a stiffness coefficient determining how much the robots keep formation, and μ_i a stiffness coefficient determining how much the robots want to obey the soft constraint.
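Putting the pieces together for one robot, the final velocity can be recovered from the constrained parameter derivative as in the Python sketch below; the constrained derivative is assumed to have been produced by the preceding steps, and the numbers in the usage lines are placeholders.

import numpy as np

def recover_velocity(eta, d_eta, p_i, c_i, K):
    # Step 4: map the parameter derivative back to a velocity and add the perturbation-rejection term.
    phi, sx, sy, tx, ty = eta
    R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
    S = np.diag([sx, sy])
    t = np.array([tx, ty])
    cx, cy = c_i
    J = np.array([
        [-np.sin(phi)*sx*cx - np.cos(phi)*sy*cy, np.cos(phi)*cx, -np.sin(phi)*cy, 1.0, 0.0],
        [ np.cos(phi)*sx*cx - np.sin(phi)*sy*cy, np.sin(phi)*cx,  np.cos(phi)*cy, 0.0, 1.0],
    ])
    p_des = R @ S @ np.asarray(c_i) + t  # current desired position of robot i in the formation
    return J @ d_eta - K * (np.asarray(p_i) - p_des)

v_i = recover_velocity(np.array([0.0, 1.0, 1.0, 0.0, 0.0]),
                       d_eta=np.array([0.0, 0.0, 0.0, 0.5, 0.0]),
                       p_i=np.array([1.1, -1.0]), c_i=np.array([1.0, -1.0]), K=2.0)
print(v_i)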
§ ALGORITHM
The structure of the algorithm can be seen in <ref>. The algorithm runs locally on each robot at a fixed rate, and the robots exchange information over a communication network. The algorithm relies on the parameters λ_i, μ_i and K_i being chosen by the operator. In <ref> the effect of these parameters can be seen, which can serve as a basis for hand tuning. The algorithm consists of two parts, a parameter space part and a velocity space part.
§.§ Parameter Space
The parameter space part takes as input the desired velocity of the robot v_des,i, the parameter vectors of the communication neighbours η_𝒩_i, and the two multipliers λ_i and μ_i. It maps v_des,i into the parameter space and performs the consensus and constraint satisfaction steps. From this, it generates a parameter and multiplier derivative, which it uses to update the parameters and multiplier using numerical integration, e.g., Euler integration. It then outputs the current parameters and parameter derivatives to the velocity space part, and transmits the current parameters to its communication neighbours.
§.§ Velocity Space
The velocity space part takes as input the parameters, parameter derivatives and the feedback gain K_i. It maps the parameter derivative back into the velocity space and performs a disturbance rejection step ensuring that the robot tracks the desired position in the formation. From this, it generates a velocity reference which it then outputs to the local kinematic controller on the robot.
§ SIMULATION RESULTS
To test the formation planning algorithm, a local planner must be provided on each robot. In this paper, an artificial potential field planner is used, where the obstacle closest to each robot produces a repulsive force, the position of the robot in the desired formation produces an attractive force, and, since the robots themselves can be considered dynamic obstacles, the robot closest to each robot also produces a repulsive force. The desired velocity is set to the sum of these three forces <cit.>. To demonstrate the effect of the parameters λ, μ and K, the formation planner is tested in simulation where: the base configuration is a unit grid configuration with c_i∈ (-1,0,1)×(-1,0,1); the robots start at the configuration with transformation parameters η = (0,1,1,0,0); the goal configuration is the configuration with transformation parameters η = (5/4π,1.5,1.5,15,0); and there are two circular obstacles with radius 2 m and centres at (6,-2) m and (8.5,5) m.
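The gains of this local planner are not central to the method, so the following Python sketch only illustrates its structure: an attractive term towards the desired formation position and repulsive terms from the nearest obstacle and the nearest other robot. The gains, influence radius, and positions below are arbitrary placeholder values, not those used in the experiments.

import numpy as np

def apf_desired_velocity(p, p_des, obstacles, others, k_att=1.0, k_rep=1.0, d0=2.0):
    # Desired velocity as the sum of one attractive and two repulsive 'forces'.
    v = k_att * (p_des - p)  # attraction towards the desired formation position
    nearest = []
    for group in (obstacles, others):
        if group:
            nearest.append(min(group, key=lambda q: np.linalg.norm(p - q)))
    for q in nearest:
        d = np.linalg.norm(p - q)
        if 0.0 < d < d0:  # repulsion only inside the influence radius d0
            v = v + k_rep * (1.0 / d - 1.0 / d0) / d**2 * (p - q) / d
    return v

print(apf_desired_velocity(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                           obstacles=[np.array([1.0, 0.5])], others=[np.array([-1.0, 0.0])]))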
The formation planner is tested in three different settings where the parameters are varied, in turn, to demonstrate their effect on the outcome of the planner. The parameters are kept identical across all robots. The remaining simulation parameters can be seen in <ref>. The simulation is run in MATLAB R2022b on a Lenovo Thinkpad L15 with an AMD Ryzen 7 Pro 5850U CPU with a maximum frequency of 4505 MHz. The algorithm uses a sampling time of 1 ms.
§.§ Test 1: Varying λ
In the first test, the formation planner is simulated with λ={1,2,8,32}, see <ref>. The magnitude of λ has a strong effect on how tightly the robots keep formation: it is evident that as λ increases, the robots keep the formation tighter.
§.§ Test 2: Varying μ
In the second test, the formation planner is simulated with μ={0,10,20,100}, see <ref>. The magnitude of μ has a strong effect on the scaling parameters: as μ increases, the scaling parameters remain closer to the soft constraint set 𝒞_s,s.
§.§ Test 3: Varying K
In the third test, the formation planner is simulated with K={0,2}, see <ref>. Furthermore, in this test, the initial positions of the robots are perturbed with normal distributed random noise ω∼𝒩(0,0.5𝐈_2×2). When K=0, the robots cannot cancel the effect of the perturbation and return to formation, resulting in the robots being unable to reach the goal configuration. However, when K=2, the robots are driven towards their current desired formation and the effect of the random perturbation is cancelled, resulting in the robots terminating at the goal configuration.
§.§ Run-Time Efficiency
To evaluate the applicability of the formation planning algorithm, the run-time efficiency of the simulation, for each of the steps of the algorithm, is calculated, which depends on both the hardware and the implementation of the simulation. As can be seen in <ref>, the complete algorithm has a mean run time of less than 23 μs and a maximum run time of 142 μs, meaning that the algorithm can run at a rate of more than 7 kHz, making it highly likely that it can be deployed in real time. Furthermore, since the simulation is run in MATLAB, it is expected that it can be greatly improved by implementing it in a more efficient programming language, such as C/C++.
§.§ Dynamic Obstacles with Varying Robot and Obstacle Velocities
For testing the formation planners ability to handle dynamic environments, it is tested in simulation in the following scenario:
* The base configuration is a unit grid configuration with c_i∈ (-1,0,1)×(-1,0,1).
* The robots start at the configuration with transformation parameters η = (0,1,1,-10,0).
* The goal configuration is the configuration with transformation parameters η = (0,1,1,10,0).
* There are two lanes of circular obstacles with radius 0.5 m, with a distance of 5 m between obstacles in each lane and 5 m between the lanes, travelling symmetrically along the y-axis in opposite directions at a velocity of v_obst.
In the simulation, the robots must cross the two lanes of moving obstacles, while avoiding collisions, emulating a realistic traffic scenario.
§ CONCLUSIONS
This paper presented a distributed motion planning algorithm for rigid formations. The planner takes as input the local desired velocities of the robots and calculates the alterations to the velocities that ensure that they keep formation. The algorithm is able to handle hard constraints, and its efficacy and run-time efficiency have been evaluated through simulation, showing it to be a promising approach for deployment on robots with limited computational ability.
§.§ Simulation Parameters
|
http://arxiv.org/abs/2307.03946v1 | 20230708100056 | Superconducting Gap Structure of Filled Skutterudite LaOs$_4$As$_{12}$ Compound through $μ$SR Investigations | [
"A. Bhattacharyya",
"D. T. Adroja",
"A. D. Hillier",
"P. K. Biswas"
] | cond-mat.supr-con | [
"cond-mat.supr-con",
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
[email protected]
Department of Physics, Ramakrishna Mission Vivekananda Educational and Research Institute, Belur Math, Howrah 711202, West Bengal, India
[email protected]
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Highly Correlated Matter Research Group, Physics Department, University of Johannesburg, PO Box 524, Auckland Park 2006, South Africa
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Deceased
ISIS Facility, Rutherford Appleton Laboratory, Chilton, Didcot Oxon, OX11 0QX, United Kingdom
Filled skutterudite compounds have recently gained attention as innovative platforms for studying intriguing low-temperature superconducting properties. Regarding the symmetry of the superconducting gap, contradictory findings from several experiments have been reported for LaRu_4As_12 and its isoelectronic counterpart, LaOs_4As_12. In this vein, we report comprehensive bulk and microscopic results on LaOs_4As_12 utilizing specific heat analysis and muon-spin rotation/relaxation (μSR) measurements. Bulk superconductivity with T_C = 3.2 K was confirmed by heat capacity. The superconducting ground state of the filled-skutterudite LaOs_4As_12 compound is found to have two key characteristics: the superfluid density exhibits saturation-type behavior at low temperature, which points to fully gapped superconductivity with a gap value of 2Δ/k_BT_C = 3.26; additionally, the superconducting state does not show any sign of spontaneous magnetic fields, supporting the preservation of time-reversal symmetry. These results open the door for the development of La-based skutterudites as special probes for examining the interplay of single- and multiband superconductivity in classical electron–phonon systems.
Superconducting Gap Structure of Filled Skutterudite LaOs_4As_12 Compound through μSR Investigations
P. K. Biswas
August 12, 2023
====================================================================================================
§ INTRODUCTION
Due to their potential as thermoelectric materials for either refrigeration or power generation applications, many filled skutterudite compounds with RT_4X_12 stoichiometry (R = alkali metals, alkaline earth metals, lanthanides, or light actinides; T = Fe, Os, Ru; X = P, As, Sb) have lately been the focus of several investigations <cit.>. With two formula units RT_4X_12 per unit cell, these compounds form a body-centered cubic structure (space group Im3̅, No: 204). The structures consist of rigid covalently bonded cage-forming frameworks T_4X_12 that encapsulate various bonded guest atoms R. This leads to local anharmonic thermal vibrations (rattling modes), which would reduce phononic heat conduction and open the door to their potential as promising thermoelectric materials. Because of the significant hybridization between the 4f band manifold and electronic conduction states, as well as the degree of freedom provided by the R-f-derived multipole momenta of the cubically symmetric X_12 cages, these compounds can host a variety of distinct electronic and magnetic ground states. Examples include unconventional superconductivity <cit.>, the Kondo effect <cit.>, heavy fermions <cit.>, non-Fermi liquid behavior <cit.>, etc.
The majority of the Pr- and Ce-based filled skutterudite compounds are hybridized gap semiconductors or show magnetic transitions; however, PrOs_4Sb_12 <cit.>, PrRu_4Sb_12 <cit.> and PrRu_4As_12 <cit.> show superconducting transitions at 1.8 K, 0.97 K and 2.4 K, respectively. PrOs_4Sb_12 is highly intriguing for a variety of reasons <cit.>, including: (i) it is the first known example of a heavy-fermion superconductor containing Pr; (ii) it shows unconventional strong-coupling superconductivity that breaks time-reversal symmetry; and (iii) instead of magnetic fluctuations, electric quadrupole fluctuations may be involved in the superconducting pairing process. The unique band structure of these compounds and the hybridization effects between localized f electrons and conduction electrons appear to play a crucial role, in addition to the fact that the origin of the majority of these unconventional phenomenologies is unknown. It was recently revealed that the Fermi level of La compounds lies at a prominent peak arising from the T-d band manifold, which might contribute to electronic instability <cit.>. Several La-based compounds LaT_4X_12 are especially notable within the filled skutterudite class due to their remarkable superconducting properties. Examples include LaFe_4P_12 (T_C = 4.1 K) <cit.>, LaOs_4P_12 (T_C = 1.8 K) <cit.>, and LaRu_4Sb_12 (T_C = 3.6 K) <cit.>, with special attention to LaRu_4As_12 (T_C = 10.3 K, H_c2 = 10.2 T), which has the highest superconducting transition temperature among them <cit.>.
The ratio of the heat capacity jump Δ C to γT_C is ΔC/(γT_C)=1.75 for LaRu_4As_12, in comparison to the BCS value of 1.43 <cit.>. While the majority of La-based filled skutterudites are completely gapped superconductors, past research has shown numerous unique aspects of LaRu_4As_12, such as a positive curvature of H_c2, nonexponential behavior of the electronic heat capacity, and a square-root field dependence of the Sommerfeld coefficient (γ) <cit.>. We recently reported unambiguous evidence of multiband s+s-wave superconductivity in LaRu_4As_12 using muon-spin rotation measurements, with 2Δ_1/k_BT_C = 3.73 for the larger gap and 2Δ_2/k_BT_C = 0.144 for the smaller gap <cit.>. Furthermore, inelastic X-ray scattering experiments indicated essentially temperature-independent phonon modes between 300 K and 20 K, with the exception of 2 K, where a weak softening of specific phonon modes is detected <cit.>. All of these results demonstrate the relevance of the electron–phonon interaction in the superconductivity of LaRu_4As_12, and they accord well with the DFT-based phonon simulations <cit.>.
Another isostructural La-based filled skutterudite compound, LaOs_4As_12, has been reported by Shirotani et al. to exhibit superconductivity with T_C = 3.2 K <cit.>. LaOs_4As_12 has also shown some signs of multiband superconductivity, such as the upward curvature of the upper critical field around the transition temperature and unusual behavior in the electronic specific heat data <cit.>. A single-gap, s-wave superconducting ground state, however, is suggested by a recent study of the temperature dependence of the lower critical field <cit.>. Another study found that the high-amplitude lanthanum phonons dominate the vibrational eigenmodes at low energies, based on the phonon dispersion relation determined from inelastic neutron scattering experiments <cit.>.
We have thus performed systematic muon-spin rotation and relaxation (μSR) measurements to examine the superconducting pairing process in the LaOs_4As_12 compound. Contrary to prior experimental work asserting two-band superconductivity <cit.>, we demonstrate that the low-temperature behavior of the superfluid density points to a fully gapped superconducting Fermi surface. Furthermore, the preservation of time-reversal symmetry is confirmed by the lack of spontaneous magnetic fields in the superconducting state, ruling out unusual pairing processes. The transition from two-band to single-band superconductivity in LaRu_4As_12 to LaOs_4As_12 is caused by differences in interband coupling strength in the Fermi surface, as evidenced by the different degrees of hybridization and electronic properties observed in the Fermi surfaces of both compounds <cit.>. These results underline the significance of LaRu_4As_12 and LaOs_4As_12 compounds as an important platform for investigating filled skutterudites for the competition between single-band and multiband superconductivity in electron–phonon driven systems.
§ EXPERIMENTAL DETAILS
The high-temperature molten-metal-flux technique, described in <cit.>, was used to grow single crystals of LaOs_4As_12. In a quartz ampule, elements with purities higher than 99.9% and a molar ratio of La:Os:Cd:As → 1:4:12:48 were combined. The details on the single crystal growth can be found in <cit.>. The relaxation approach was used to measure the heat capacity in a Quantum Design physical properties measurement (PPMS) system. Temperatures as low as 0.38 K were attained utilizing a He-3 attachment to the PPMS <cit.>.
The μSR measurements were carried out using small, unaligned single crystals of LaOs_4As_12 (0.1 mm × 0.1 mm × 0.1 mm, total mass 1 g), which give a powder-averaged muon signal. The MuSR spectrometer at the Rutherford Appleton Laboratory, ISIS Neutron and Muon Source in the UK was used to perform the μSR measurements <cit.>. In a μSR experiment, the sample is injected with 100% spin-polarized muons. Each implanted muon thermalizes, at which point it decays (lifetime τ_μ = 2.2 μs) into a positron (and two neutrinos) which is preferentially released in the direction of the muon spin at the moment of decay. Utilizing detectors carefully placed around the sample, the decay positrons are detected and time-stamped. It is possible to calculate the asymmetry in the positron emission as a function of time, A(t), using the collected histograms from the forward (F) and backward (B) detectors, A(t)=[N_F(t)-α N_B(t)]/[N_F(t)+α N_B(t)], where α is a calibration factor for the instrument and N_F(t) and N_B(t) are the number of positrons counted in the forward and backward detectors, respectively. Detectors are placed longitudinally during ZF-μSR, and a correction coil is used to cancel out any stray magnetic fields up to 10^-4 mT. To investigate the time-reversal symmetry, ZF-μSR measurements were carried out <cit.>. In the vortex state, TF-μSR measurements were performed with applied fields of 20, 30, 40, 50, and 60 mT, which are greater than the lower critical field H_c1 (∼5 mT) and lower than the upper critical field H_c2 (∼1 T) <cit.>. The sample was covered with a thin silver foil after being mounted onto a high-purity (99.995%) silver sample holder using diluted GE varnish. The sample was cooled down to 300 mK using a dilution refrigerator. To generate the vortex lattice by trapping the applied TF, we applied the field above T_C and then cooled the sample in the field to the base temperature of 300 mK. We used the WiMDA <cit.> software to analyze the μSR data.
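As a minimal illustration of how the asymmetry is built from the two detector histograms, the following Python sketch implements the expression above; the counts and the value of α are placeholder numbers, not ISIS data.

```python
import numpy as np

def asymmetry(N_F, N_B, alpha=1.0):
    """Muon decay asymmetry A(t) from time-binned forward/backward positron counts."""
    N_F = np.asarray(N_F, dtype=float)
    N_B = np.asarray(N_B, dtype=float)
    return (N_F - alpha * N_B) / (N_F + alpha * N_B)

# Toy example with made-up counts and calibration factor
print(asymmetry([120, 110, 105], [100, 102, 101], alpha=0.98))
```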
§ RESULTS AND DISCUSSION
§.§ Crystal Structure & Physical Properties
LaOs_4As_12 crystallizes in a CoAs_3-type skutterudite structure packed with La atoms and has a body-centered cubic structure with the space group Im3̅ (No. 204), as shown in Figure <ref>. The large icosahedral cage made of As atoms is located around the electropositive La sites, which lack four-fold rotational symmetry. Between the cages, the transition metal ion Os forms a cubic sublattice. The low-temperature specific heat C_P measured as a function of temperature at zero magnetic field is shown in the inset of Figure <ref>a. Using the equation C_P = γ T + β T^3, the normal state heat capacity is fitted. From this we obtained the lattice contribution to the specific heat, β = 0.613 mJ/mol K^4, and the electronic parameter (Sommerfeld coefficient) γ = 90.47 mJ/mol K^2. The Debye temperature is determined using the Debye model as Θ_D = (12π^4nR/(5β))^1/3, where R is the universal gas constant (8.314 J/mol K) and n denotes the number of atoms in the compound (n = 17). The value of Θ_D is thus calculated to be approximately 377 K, which agrees with the previous measurement <cit.>. Figure <ref>a displays the low-T electronic specific heat C_e obtained after the phonon contribution was subtracted. The heat capacity jump at T_C (Δ C_e/γ T_C) is calculated to be 1.2, which is less than the value of 1.43 expected for weak-coupling BCS superconductivity. The fit to the exponential temperature dependence of C_e(T) yields Δ(0) = 0.40 meV, which is close to the 0.45 meV value obtained from the TF-μSR data analysis (discussed in Section B). This gives 2Δ(0)/k_BT_C = 2.9, which is less than the 3.53 anticipated for weak-coupling BCS superconductors. However, the linear fitting shown in Figure <ref>b shows that this material exhibits BCS behavior with a single isotropic gap.
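The Debye temperature quoted above can be reproduced directly from the fitted β; the short Python check below is a back-of-the-envelope verification using the values given in the text, not part of the original analysis.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1, universal gas constant
n = 17             # atoms per formula unit of LaOs4As12
beta = 0.613e-3    # J mol^-1 K^-4, lattice coefficient from the C_P fit

theta_D = (12 * np.pi**4 * n * R / (5 * beta)) ** (1.0 / 3.0)
print(f"Debye temperature: {theta_D:.1f} K")   # ~377.6 K, i.e. approximately 377 K
```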
§.§ Superconducting Gap Structure: TF-μSR
The pairing mechanism and superconducting gap structure of the LaOs_4As_12 were investigated by TF-μSR experiments down to 0.3 K. The TF-μSR asymmetry time spectra in the presence of 20 mT and 50 mT applied magnetic fields at above and below T_C are shown in Figures <ref>a–d. Because of the extra inhomogeneous field distribution of the vortex lattice generated inside the superconducting mixed state of LaOs_4As_12, the spectrum in Figure <ref>a,c in the superconducting state at 0.3 K demonstrate a greater relaxation. Using the Gaussian damped decay function, the asymmetry spectra were fitted <cit.> using the following equation,
A_TF(t) = A_sc exp(-σ_TF^2 t^2/2) cos(γ_μ B_sc t + ϕ) + A_bg cos(γ_μ B_bg t + ϕ).
The muon gyromagnetic ratio is γ_μ/2π = 135.53 MHz/T, and the initial asymmetries of muons stopping on the sample and on the silver holder are A_sc and A_bg, respectively (constant across the entire temperature range). The local fields B_sc and B_bg represent muons stopping on the sample and on the sample holder, respectively, whereas ϕ denotes the initial phase and σ_TF the Gaussian depolarization rate. We calculated the values A_sc = 76% and A_bg = 24% of the total asymmetry by fitting the 0.3 K data. When the data at other temperatures were analyzed, A_bg was kept constant and A_sc was found to be nearly temperature independent. The emergence of bulk superconductivity is indicated by an increase in the σ_TF rate as the system approaches the superconducting state. The superconducting contribution to the relaxation, σ_sc, was determined using σ_sc = √(σ_TF^2-σ_nm^2), where σ_nm is the nuclear magnetic dipolar contribution, which is derived from high-temperature fits and is temperature independent. Figure <ref>e depicts the temperature dependence of σ_sc in several applied TF fields. Due to the low H_c2 value, σ_sc depends on the applied field, as seen in Figure <ref>f. Brandt demonstrated that the London penetration depth λ_L(T) is linked to σ_sc for a superconductor with H_ext/H_c2 ≤ 0.25 <cit.>.
σ_sc[μs^-1] = 4.83 × 10^4 (1-H_ext/H_c2) × {1+1.21[1-√(H_ext/H_c2)]^3} λ_L^-2[nm^-2].
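A small Python helper that inverts Brandt's relation to extract λ_L from the measured depolarization rate is sketched below; the example numbers for σ_sc, H_ext and H_c2 are illustrative placeholders rather than the fitted values.

```python
import numpy as np

def lambda_L_from_sigma(sigma_sc, H_ext, H_c2):
    """London penetration depth in nm from sigma_sc (us^-1), valid for H_ext/H_c2 <~ 0.25."""
    h = H_ext / H_c2
    prefac = 4.83e4 * (1.0 - h) * (1.0 + 1.21 * (1.0 - np.sqrt(h)) ** 3)
    return np.sqrt(prefac / sigma_sc)

# Illustrative call: a 20 mT field, H_c2 ~ 1 T and sigma_sc ~ 3 us^-1 give lambda_L ~ 167 nm
print(lambda_L_from_sigma(sigma_sc=3.0, H_ext=0.02, H_c2=1.0))
```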
This relationship has been used to compute the temperature dependency of λ_L(T). As demonstrated in Figure <ref>f, isothermal cuts perpendicular to the temperature axis of σ_sc data sets were utilized to estimate the H-dependence of the depolarization rate σ_sc(H). The normalized λ_L^-2(T)/λ_L^-2(0) temperature variation, which is directly proportional to superfluid density, is shown in Figure <ref>a. The data were fitted using the following equation <cit.>:
σ_sc(T)/σ_sc(0) = λ_L^-2(T)/λ_L^-2(0)
= 1 + 1/π∫_0^2π∫_Δ(T)^∞ (∂f/∂E) E dE dϕ/√(E^2-Δ(T,ϕ)^2),
where f = [1+exp(E/k_BT)]^-1 is the Fermi function. We take Δ_k(T,ϕ) = Δ(T)g_k(ϕ), where we assume the universal temperature dependence Δ(T) = Δ_0 tanh[1.82{1.018(T_C/T-1)}^0.51]. Here Δ_0 is the magnitude of the gap at 0 K, and the function g_k describes the angular dependence of the gap, which is equal to 1 for a single isotropic s-wave gap, 1 for each of the two isotropic gaps of the s+s-wave model, and cos(2ϕ) for the d-wave gap, where ϕ is the azimuthal angle along the Fermi surface.
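For reference, a minimal numerical sketch of the single-gap s-wave version of this fit function is given below (g_k = 1, so the angular integral is trivial); the substitution E = Δ cosh x removes the integrable singularity at the gap edge. It is a simplified re-implementation, not the fitting code used for the analysis.

```python
import numpy as np
from scipy.integrate import quad

k_B = 8.617333e-5   # eV/K
Delta0 = 0.45e-3    # eV, gap magnitude at 0 K from the s-wave fit
T_C = 3.2           # K

def gap(T):
    """Delta(T) = Delta0 * tanh[1.82 (1.018 (T_C/T - 1))**0.51]."""
    return 0.0 if T >= T_C else Delta0 * np.tanh(1.82 * (1.018 * (T_C / T - 1.0)) ** 0.51)

def superfluid_density(T):
    """Normalized lambda_L^-2(T)/lambda_L^-2(0) for one isotropic s-wave gap."""
    d = gap(T)
    if d == 0.0:
        return 0.0
    def integrand(x):                     # E = d*cosh(x), so E dE/sqrt(E^2 - d^2) = E dx
        E = d * np.cosh(x)
        dfdE = -1.0 / (4.0 * k_B * T * np.cosh(E / (2.0 * k_B * T)) ** 2)
        return 2.0 * dfdE * E             # the 2*pi angular integral times the 1/pi prefactor
    val, _ = quad(integrand, 0.0, np.arccosh(50.0))
    return 1.0 + val

for T in (0.3, 1.6, 2.4, 3.0):
    print(f"T = {T:.1f} K  rho_s = {superfluid_density(T):.3f}")
```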
Figure <ref>a illustrates our comparison of three distinct gap models: a single isotropic s-wave gap, a multigap s+s-wave model, and a nodal d-wave gap. As seen in the figure, the superfluid density saturates at low temperatures, which is a characteristic of the s-wave model with a single gap. An isotropic single-band s-wave model with a gap value of 0.45 meV provides the best representation of the data, with a gap to T_C ratio 2Δ(0)/k_BT_C = 3.26, which is less than the BCS weak-coupling limit (=3.53). On the other hand, the substantial rise in the χ^2 value renders the d-wave model and the s+s-wave (multigap) model inappropriate for this system. A two-gap s+s-wave model of multiband superconductivity has been shown to be compatible with the temperature dependence of the magnetic penetration depth of LaRu_4As_12. The higher gap to T_C ratio computed in the s+s-wave scenario, 2Δ_1(0)/k_BT_C = 3.73, is fairly comparable to the value of 3.53 for a BCS superconductor in the case of LaRu_4As_12 <cit.>. For LaRu_4As_12, specific phonon modes at 2 K exhibit modest softening compared to 20 K, demonstrating that the electron–phonon interactions causing the superconductivity have an appreciable impact on the vibrational eigenstates <cit.>. Using McMillan's relation, it is also possible to determine the electron–phonon coupling constant (λ_e-ph) <cit.>:
λ_e-ph = [1.04+μ^*ln(Θ_D/1.45T_C)]/[(1-0.62μ^*)ln(Θ_D/1.45T_C)-1.04].
where μ^* is the repulsive screened Coulomb parameter, usually taken as μ^* = 0.13. The calculated value of λ_e-ph is 0.534. The London model is described by λ_L^2=m^*c^2/(4π n_s e^2). It connects the effective mass enhancement m^* [=(1+λ_e-ph)m_e], the superconducting carrier density n_s [=m^*c^2/(4π e^2λ_L(0)^2)], and the London penetration depth. By employing the s-wave model, we determined a London penetration depth of λ_L(0) = 168 nm. The effective mass enhancement is calculated to be m^* = 1.53 m_e, and the superconducting carrier density is predicted to be n_s = 1.53 × 10^27 carriers m^-3. References <cit.> include a detailed description of the computations. The corresponding values for LaRu_4As_12 are n_s = 8.6 × 10^27 carriers m^-3 and m^* = 1.749 m_e <cit.>. The fitted parameters for LaOs_4As_12 and LaRu_4As_12 (for comparison) are shown in Table <ref>. To explain the observed nature of the superconducting gap structures, it is important to comprehend the electronic structures of these compounds; such calculations have been carried out <cit.>, and the results suggest that the single-band order parameter in LaOs_4As_12 seems to be associated with the hybridized As-p and Os-d electronic character of the Fermi surface. On the other hand, the lack of hybridization for the disjoint Fermi surface of LaRu_4As_12 may explain its multiband superconducting nature.
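The two numbers quoted here follow directly from the formulas above; the short Python check below evaluates McMillan's relation and the SI form of the London expression (λ_L^2 = m*/(μ_0 n_s e^2), equivalent to the Gaussian form in the text) with the values given in this section.

```python
import numpy as np

# McMillan relation for the electron-phonon coupling constant
mu_star, theta_D, T_C = 0.13, 377.0, 3.2
L = np.log(theta_D / (1.45 * T_C))
lam = (1.04 + mu_star * L) / ((1.0 - 0.62 * mu_star) * L - 1.04)
print(f"lambda_e-ph ~ {lam:.3f}")                    # ~0.53

# London model in SI units: m* = (1 + lambda) m_e, n_s = m* / (mu_0 e^2 lambda_L(0)^2)
m_e, e, mu_0 = 9.109e-31, 1.602e-19, 4e-7 * np.pi
lambda_L0 = 168e-9                                    # m
m_star = (1.0 + lam) * m_e
n_s = m_star / (mu_0 * e**2 * lambda_L0**2)
print(f"m* ~ {m_star / m_e:.2f} m_e,  n_s ~ {n_s:.2e} m^-3")   # ~1.5e27 carriers per m^3
```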
§.§ Preserved Time Reversal Symmetry: ZF-μSR
In order to determine whether a spontaneous magnetic field is present in the superconducting ground state, we conducted the ZF-μSR experiment. Figure <ref>b shows the time evolution of the asymmetry spectra for T = 0.3 K < T_C and T = 3.5 K > T_C. The ZF-μSR spectra recorded in the normal and superconducting states overlap and show the same relaxation, indicating that the superconducting state does not show any spontaneous magnetic field or spin fluctuations. This result suggests that time-reversal symmetry is preserved in the LaOs_4As_12 superconducting state. The strong resemblance of the ZF-μSR spectra (above and below T_C) suggests that time-reversal symmetry is also retained in the superconducting state of LaRu_4As_12. In order to fit the ZF data, a Lorentzian function was used <cit.>,
G_ZF(t) = A_sc exp(-λ_ZF t) + A_bg,
where λ_ZF is the electronic relaxation rate, A_sc stands for the sample asymmetry, A_bg for the constant nondecaying background signal. The red line in Figure <ref>b indicates the fits to the ZF-μSR data. The ZF-μSR asymmetry data fitting parameters are λ_ZF = 0.754(4) μs^-1 at 0.3 K and λ_ZF = 0.744(5) μs^-1 at 3.5 K. No conclusive evidence of TRS breaking can be found since the relaxation rate change is within the error bar.
§ SUMMARY
We employed TF-μSR to determine the gap symmetry of the superconducting state of LaOs_4As_12. An isotropic BCS-type s-wave gap model explains the temperature dependence of the superfluid density. The gap to T_C ratio, which was determined from the s-wave gap fit to the superfluid density, is 3.26; nonetheless, this is smaller than 3.53 expected for conventional BCS systems. The ZF-μSR spectra at 0.3 K and 3.5 K are strikingly similar, indicating that the time-reversal symmetry is intact. These results open up the possibility of using the compounds LaRu_4As_12 and LaOs_4As_12 as special research platforms for investigating filled skutterudites for the interplay between single- and multiband superconducting order parameters in conventional systems.
§.§ Acknowledgements
We thank T. Cichorek and J. Juraszek for providing the LaOs_4As_12 sample and the ASCII heat capacity data. We would like to thank T. Cichorek, P. P. Ferreira, R. Lucrezi, J. Juraszek, C. Heil and L. T. F. Eleno for interesting discussions. AB expresses gratitude to the Science and Engineering Research Board for the CRG Research Grant (CRG/2020/000698 & CRG/2022/008528) and CRS Project Proposal at UGC-DAE CSR (CRS/2021-22/03/549). DTA appreciates the support provided by the Royal Society of London for the Newton Advanced Fellowship between the UK and China, the International Exchange between the UK and Japan, and EPSRC-UK (Grant number EP/W00562X/1). We thank the ISIS Facility for the beam time, RB1520431 <cit.>.
|
http://arxiv.org/abs/2307.05091v1 | 20230711074616 | Group theoretical and ab-initio description of color center candidates in fluorographene | [
"M. S. Tacca",
"M. B. Plenio"
] | cond-mat.mtrl-sci | [
"cond-mat.mtrl-sci",
"quant-ph"
] |
[][email protected]
Institut für Theoretische Physik and IQST, Albert-Einstein-Allee 11, Universität Ulm, D-89081 Ulm, Germany.
[][email protected]
Institut für Theoretische Physik and IQST, Albert-Einstein-Allee 11, Universität Ulm, D-89081 Ulm, Germany.
We present a group theoretical and ab-initio analysis of lattice point defects in fluorographene, with a focus on neutral and negative V_CF vacancies.
By using a combination of density functional theory calculations and group theory analysis, we investigate the many-body configurations of the defects and calculate the vertical absorption and zero-phonon line energies of the excited states and their dependence with strain.
The description of the defects is extended by computing their formation energy, as well as further relevant parameters such as the Jahn-Teller energy for the neutral V_CF and the zero-field splitting for the negative V_CF vacancies.
Based on our results, we discuss possible quantum applications of these color centers when coupled to mechanical oscillation modes of the hosting two-dimensional material.
The symmetry and active orbitals of the defects exhibit a parallelism with those of the extensively studied NV centers in diamond. In this context, the studied defects emerge as interesting candidates for the development of two-dimensional quantum devices based on fluorographene.
Group theoretical and ab-initio description of color center candidates in fluorographene
M. B. Plenio
August 12, 2023
========================================================================================
§ INTRODUCTION
Point defects are of increasing interest in the fields of quantum information and sensing due to their potential applications, among which are the promising NV center technologies <cit.>.
By coupling the localized states introduced by color centers with mechanical oscillation modes, hybrid quantum devices with long-range interactions mediated by phonons can be fabricated through appropriate design <cit.>.
The introduction of color centers in two-dimensional (2D) materials is particularly promising for the continuously accelerated development of quantum technologies.
Two-dimensional resonators can be mechanically coupled with cavities through opto-thermal, electromagnetic, or further interactions <cit.>.
The dynamics of 2D membranes and other micro- and nano-devices have been widely studied for their potential applications in quantum and mass sensors, quantum simulators, and nanophotonics <cit.>.
Because color centers in 2D structures lie naturally on the surface of the material, high sensitivity to the environment is expected <cit.>.
Various materials, including graphene <cit.>, MoS_2 <cit.>, hexagonal boron nitride (h-BN) <cit.> and others <cit.> have been studied as candidates for 2D systems.
In particular, h-BN, a wide-band-gap insulator that can host color centers <cit.>, has been proposed as a platform for quantum simulation and ultra-sensitive force detection <cit.>.
In this work, we explore the potential of defect-bearing fluorographene <cit.> as a platform for the realization of hybrid quantum devices.
Fluorographene (FG) is a stoichiometric 2D derivative of graphene, in which one fluorine atom is bonded to each carbon atom.
This material has been used for a variety of applications, including electrochemical sensors, batteries, and electrocatalysis, as well as electronic applications such as transistors and solar cells <cit.>.
A key characteristic of FG is that the carbon atoms exhibit sp^3 hybridization instead of the sp^2 one found in graphene. As a result, the electronic properties of FG are closer to those of diamond than to those of graphite.
In fact, the structure of FG is similar to the fluorine-terminated (111) diamond surface, which has been proposed as a suitable candidate for the implementation of a quantum simulator at room temperature <cit.>.
The application of polarized nuclear spins in quantum simulators is an active research field, in particular for the previously mentioned h-BN based systems <cit.>.
Although it has been well established that FG presents a large band gap, its precise value has been a longstanding issue that appears to have been clarified only recently <cit.>.
Initial measurements suggested a band gap larger than 3 eV <cit.>, and later measurements yielded a value of 3.8 eV, consistent with the first results <cit.>.
Additional photoluminescence emission peaks have been observed at 3.56 <cit.> and 3.65 eV <cit.>, with the latter being attributed to phonon-assisted radiative recombination.
On the theoretical side, the initial density functional theory (DFT) <cit.> calculations at the local density approximation (LDA) and generalized gradient approximation (GGA) levels of theory resulted in predicted band gap values close to 3 eV <cit.>, in excellent agreement with the experimental measurements.
However, more refined calculations including the exact exchange interaction through the hybrid screened functional (HSE) predicted a larger band gap of ≈5 eV <cit.>.
Additional calculations incorporating electron-electron interactions via Green's function methods (GW) on top of either LDA or GGA to further improve the description of the electronic structure, led to a predicted band gap of about 7.5 eV <cit.>.
The inclusion of electron–hole interactions through the Bethe–Salpeter equation (BSE-GW), one of the most advanced methods beyond DFT, partially cancels the electron–electron interactions and results in predicted band gap values between 5.4 <cit.> and 5.65 eV <cit.>.
It is worth noting that the latter values are in agreement with the results obtained via the HSE method, which is computationally less demanding.
The discrepancies between the measured and calculated values of the band gap have been tentatively linked to midgap states resulting from defects in the material <cit.>. A combined experimental and theoretical study has confirmed this hypothesis, showing that the band gap value is in agreement with previously reported BSE-GW results <cit.>.
The longstanding FG bandgap conundrum highlights the importance of characterizing defects in materials.
However, most theoretical works on FG have primarily focused on improving the accuracy of band gap predictions for the pristine material. Thus, the calculation of defects is often relegated to a secondary place <cit.>, or analyzed at the GGA level of the theory, which strongly underestimates the band gap <cit.>.
In this work, we investigate the electronic structure of two types of defects in FG: a F vacancy (V_F) and a double F and C vacancy (V_CF), for which different charge states were considered.
The paper is structured as follows.
We present the description of the theoretical method in <ref>.
Our approach involves using DFT to obtain the single-particle localized states and group theory to construct the many-body configurations.
In <ref> we discuss our results.
We start with a description of pristine fluorographene and the V_F defect in <ref>. Neutral and negative V_CF vacancies are presented in <ref>.
We examine the transitions between ground and excited states introduced by the defects and analyze their dependence on strain. In addition, we compute the Jahn-Teller energy for the neutral defect and the zero field splitting for the negatively charged one.
Given that the symmetry of the V_CF defect is equivalent to that of a NV center, a parallelism can be established between both systems. Based on previous NV studies, in <ref> we discuss possible applications of defective FG sheets as quantum hybrid resonators.
Our calculations of the formation energy of the defects are presented in <ref>.
The conclusions are presented in <ref>.
§ METHODS
The computational details of our work, based on previous studies of related 2D systems <cit.>, are as follows. We employed the DFT code Quantum Espresso <cit.> and used a supercell approach to study defects in FG. We used the HSE method with PBE functional <cit.>, adjusting the parameter α=0.35 to match the band gap of fluorographene obtained with the latest calculations and experimental data <cit.>.
In order to perform geometrical relaxations including HSE, we used norm conserving pseudopotentials. We used an energy cutoff of 100 Ry and, unless otherwise stated, we used a value of 0.01 eV/Å as criterion for the convergence of the atomic forces.
We considered a 15 Å vacuum spacing between fluorographene sheets.
For our calculations we considered 7×7 hexagonal supercells to avoid interaction between defects. For the calculations involving strain in x and y directions we used 7×8 orthogonal supercells. In both cases we considered only the Γ point in the reciprocal space and therefore a single q point in the Hartree-Fock calculation for the HSE method.
In our study, we employed the ΔSCF method <cit.> to calculate relevant transition energies, which involves computing the energy difference between the ground state and excited states with different electronic occupations.
We determined the vertical absorption energy (VAE) by keeping the ground state geometry fixed and imposing an excited electronic occupation for the calculation of the excited states. The zero-phonon line (ZPL) was obtained after performing a geometrical relaxation of the excited electronic configuration.
It should be noted that the ΔSCF method is applicable only to configurations corresponding to a single Slater determinant. To estimate the energy of multi-determinantal configurations, we used auxiliary single-determinant states <cit.>. It is worth stressing that this method provides only an estimation of the transition energies for such configurations <cit.>.
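In terms of total energies, the two quantities reduce to simple differences; the snippet below merely encodes these definitions with illustrative numbers (chosen to match the V_F results discussed later) and is not an implementation of the DFT workflow itself.

```python
def vertical_absorption_energy(E_excited_gs_geom, E_ground_gs_geom):
    """VAE: excited- minus ground-state total energy, both at the relaxed ground-state geometry."""
    return E_excited_gs_geom - E_ground_gs_geom

def zero_phonon_line(E_excited_relaxed, E_ground_gs_geom):
    """ZPL: excited state relaxed in its own geometry minus the relaxed ground state."""
    return E_excited_relaxed - E_ground_gs_geom

# Illustrative totals (eV) chosen so that VAE = 3.44 eV and ZPL = 3.00 eV, as for V_F below;
# the 0.44 eV difference is the relaxation (Stokes) energy of the excited state.
E_gs = -1000.00
print(vertical_absorption_energy(E_gs + 3.44, E_gs), zero_phonon_line(E_gs + 3.00, E_gs))
```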
§ RESULTS
§.§ Pristine fluorographene and V_F
We obtained a lattice parameter of 2.58 Å for pristine fluorographene, in good agreement with available theoretical <cit.> and experimental <cit.> data, and a band-gap of 5.65 eV.
We start our analysis of defects with the simple fluorine vacancy, V_F, which lowers the C_6v symmetry of pristine fluorographene to C_3v.
The V_F vacancy leaves a C atom with a dangling sp^3 bond, which corresponds directly to a molecular orbital (MO) with spatial symmetry A_1. We denoted this single-electron orbital a_1.
The geometry of the system and the a_1 orbital are illustrated in <ref>.
According to our spin-polarized DFT calculation, the a_1 orbital is half occupied in the ground state (GS), resulting in a magnetic moment of the defect of 1 and a ^2A_1 many-body configuration.
The molecular orbitals of the majority (up) and minority (down) spins are well localized, with the up state located within the valence band and the down state inside the band gap (see <ref>).
The first excited state (ES) can be constructed by promoting an electron from the highest occupied valence bands, which have E symmetry, to the unoccupied a_1 state. In this case, the well-localized a_1 orbital is doubly occupied, and there is a single hole in the E bands.
We calculated the VAE and ZPL following the methodology described in <ref>, and obtained values of 3.44 eV and 3.00 eV, respectively.
The ZPL value is consistent with absorption bands observed in less fluorinated fluorographene samples <cit.>. As suggested in Ref. <cit.>, it is likely that the optical transitions introduced by this midgap state were initially attributed to a much lower band-gap of fluorographene.
§.§ V_CF
A V_CF defect in fluorographene also lowers the symmetry of the system to C_3v. In this case, there are three sp^3 dangling bonds of the C atoms around the defect, and an in-depth group-theory analysis becomes relevant.
Using the projection operator method <cit.> we determined that the three localized orbitals that can be formed have symmetries A_1 and E. The single-particle orbitals a_1, e_x and e_y are given by
a_1 =1/√(3)(σ_1+σ_2+σ_3)
e_x =1/√(6)(2σ_1-σ_2-σ_3)
e_y =1/√(2)(σ_2-σ_3) ,
where σ_i corresponds to the dangling orbital of each C atom. The geometry of the system and the orbitals is presented in <ref>.
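The coefficients above can be collected into a small transformation matrix; the Python check below verifies that the three combinations are orthonormal under the simplifying assumption that the dangling-bond orbitals σ_i themselves are orthonormal (i.e. neglecting their mutual overlap).

```python
import numpy as np

# Rows: a1, ex, ey expressed in the (sigma1, sigma2, sigma3) dangling-bond basis
C = np.array([
    [1/np.sqrt(3),  1/np.sqrt(3),  1/np.sqrt(3)],   # a1
    [2/np.sqrt(6), -1/np.sqrt(6), -1/np.sqrt(6)],   # ex
    [0.0,           1/np.sqrt(2), -1/np.sqrt(2)],   # ey
])

print(np.allclose(C @ C.T, np.eye(3)))   # True: the combinations are orthonormal
```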
The most symmetric a_1 orbital lies lowest in energy.
There are three electrons to fill the orbitals, so that in the ground state two electrons are located in the a_1 orbital, and one in an e orbital.
The configuration is then a_1^2e^1, and the spatial symmetry of the many-body wave function in the C_3v symmetry induced by the defect is A_1 ⊗ A_1 ⊗ E = E. The S=1/2 spin of the ground state configuration gives a spin doublet, so that the total state corresponds to ^2E. As discussed below, this situation is analogous to the configuration of a neutral NV^0 center <cit.>.
A neutral NV^0 center has four molecular orbitals formed from the corresponding dangling bonds, two with a_1 symmetry and a double degenerated e orbital <cit.>. However, one a_1 orbital is located well below the valence band, and is not relevant for the transitions of interest. The remaining three orbitals are located within the band gap and accommodate three electrons, which is precisely the same configuration as the V_CF vacancy in fluorographene.
Then, the conclusions derived from group theory for NV centers apply also to V_CF. Note that they include the resulting many-body configurations but not necessary their energy order, which is beyond a group theoretical analysis.
The similarity motivates also the study of the negatively charged V_CF^- defect, which is analyzed in <ref>.
The many-body configurations corresponding to the ground and first excited states of the V_CF defect are presented in <ref>. The first excited states are obtained by promoting an electron to the E orbitals, that is, an a_1^1 e^2 configuration. The spatial symmetry of the resulting many-body states is given by A_1 ⊗ E ⊗ E=A_1 ⊕ A_2 ⊕ E. We constructed the electronic configurations given by the single-particle orbitals using the projection operator method.
Note that we obtained three doublets with different symmetry and in particular a ^2A_2 doublet which, as pointed out in Ref. <cit.>, has been misidentified in some works as ^2A_1 for NV^0.
These states can become mixed by different interactions such as spin-orbit, spin-spin, electric and magnetic fields, and strain, as analyzed for NV^0 in several works <cit.>.
Given that the many-body ground state presents spatial degeneracy, the system is Jahn-Teller unstable, giving rise to an adiabatic potential energy surface (APES) with the typical “Mexican hat” shape <cit.>.
Therefore, the geometrical configuration of the ground state will have a symmetry lower than C_3v, namely C_1h.
For simplicity, we keep the labels of the C_3v symmetry for the configurations in our notation.
In our analysis, we first relaxed the system while enforcing C_3v symmetry to obtain the high symmetry (HS) structure. We then lifted the symmetry restriction and obtained the C_1h lower symmetry structure with the lowest energy (LE), a method similar to the one presented in Ref. <cit.> for the study of a neutral NV^0 center, analogous to our system. For these calculations we used a stricter force convergence criterion of 1 meV/Å.
We found that the Jahn-Teller stabilization energy, which is the energy difference between the HS and LE structures, was E_JT=30 meV. This value is about one third of the value found for a neutral NV^0 center <cit.> and close to the one found for a negatively charged NV^- center <cit.>.
There are three equivalent LE points separated by warping barriers, with saddle points with an energy δ above the minimum <cit.>.
By computing the direct path between two equivalent minimum energy configurations located at different LE points, we obtained δ=20 meV.
In <ref> we present the single-particle levels for both the ground state ^2 E and the first excited state ^4 A_2.
The levels ℰ^0_±1/2 and 𝒜^1_±3/2 of each manifold can be described by using a single Slater determinant (see <ref>). Therefore, the transition energies can be obtained straightforwardly using the ΔSCF method, and are given by the difference between the energies of each configuration.
We estimated the transition energies for the remaining excited states using single Slater determinant configurations <cit.> (see <ref>).
While this method has been successfully used to compute transition energies between multi-determinantal configurations, it only provides a rough estimation of the energies <cit.>.
For example, the method does not account accurately for the geometrical relaxation energy (Stokes shift), given that the geometry of the actual configuration cannot be computed.
In our calculations of the ZPL for the higher excited states we considered the same geometry as the one obtained for the first excited state, given that all these excited states have the same a_1^1 e^2 electronic occupation <cit.>.
Note that the excited state ^2E' will also present a Jahn-Teller distortion; however, the accuracy of our method is not sufficient to estimate its E_JT.
In <ref> we present the many-body states and their corresponding VAE and ZPL transition energies for the V_CF defect. The ^2A_1 state lies at 7.8 eV and is omitted.
The values of the optical transitions from the ground state to the excited states ^2E' and ^2A_2, although approximated, are consistent with available experimental data that shows absorption features at around 2.9 eV and 4.8 eV in less fluorinated fluorographene, attributed to single V_F vacancies <cit.>.
Only non-radiative transitions are allowed between these states and the ^4A_2 state.
The latter state is split via spin-spin interaction into two double-degenerated states, with M_S=±1/2 and M_S=±3/2 <cit.>. Since the M_S=±3/2 states only couple via very weak non-axial spin-orbit interaction with the ground state, they are long-lived and have been proposed as qubit candidates for NV^0 centers <cit.>.
In <ref> we present the dependence of the ZPL transition energy between the ground state and the first excited state on strain.
Strain is defined as the ratio of the lattice deformation (Δ l_i) to its initial dimension (l_i), that is, ϵ=Δ l_i/l_i with i=x,y.
When strain is applied in the y direction, we obtain a variation of -8.5 eV/strain for the transition energy, whereas we obtain a lower value of -1.3 eV/strain when strain is applied in the x direction.
The value in the y direction is not far from the large 12 eV/strain shift obtained for V_NN_B defects in h-BN sheets <cit.>.
As shown in <ref>, strain in the x direction affects mainly the e_y single-particle orbital of the ground state ^2E, which is occupied by an electron in our DFT calculation.
This dependence is consistent with the geometry of the e_y orbital (see <ref>).
On the other hand, when strain is applied in the y direction in the ground state,
the occupied e_y orbital remains almost constant in energy.
Finally, in the excited state ^4A_2 both e_x and e_y orbitals are occupied, and the energy change when strain is applied in either direction is similar.
As a result, when computing the energy difference between the ground and first excited states, there is a larger variation in energy when strain is applied in the y direction. This is because the energy variation of each state with strain in the x direction partially compensates.
§.§ V_CF^-
As discussed before, the negatively charged V_CF^- defect possesses the same symmetry as a NV^- center.
In <ref> we present the many-body states corresponding to V_CF^-, which were obtained using the projection technique of group theory.
We adopt the hole picture for the description of this defect, which is more convenient given that the electronic occupation is larger than half-filled.
The interactions arising between states have been studied in previous works <cit.>.
Only the states 𝒜^0_±1 and ℰ^3_±1 correspond to single-determinant configurations and can be calculated with the ΔSCF method.
However, the convergence of the ℰ^3_±1 state could not be achieved with the HSE method used.
Note that the difficulty in convergence is expected for this case, where a hole occupies a degenerate E orbital (a^1 e_x^1 e_y^2 electron occupation) <cit.>.
Then, we used the a^1 e_x^1.5 e_y^1.5 configuration for the calculation of this state.
By comparing the results using the a^1 e_x^1.5 e_y^1.5 configuration with preliminary calculations using a^1 e_x^1 e_y^2 and a larger convergence threshold, we estimate a difference in the energy of ≈0.05 eV, in agreement with previous reports <cit.>.
As in <ref>, we estimated the transition energies of the remaining states by using auxiliary states (see <ref>).
The excited singlets ℰ^1_0 and 𝒜̃^2_0 have two holes in the orbitals with E symmetry, which results in the same electronic occupation as the ground state.
Then, we considered the ground state geometry in the estimation of the ZPL for these excited states, assuming their ZPL energies equal to their vertical excitation energies.
According to Hund's rules, the remaining singlet ℰ^4_0 lies higher in energy than the excited triplet ℰ^3_+1, and we therefore omit it.
In <ref> we present the single-particle levels for the ground state, which can be described with a single determinant.
The empty e orbitals of the ground state ^3A_2 are pushed up in energy into the conduction band when compared to the same levels of the ^4A_2 state of the neutral V_CF defect (<ref>).
However, our DFT calculations show that these states remain well localized, and the molecular orbitals are similar to those shown in <ref>.
In <ref> we show the VAE transitions for the excited states. As discussed before, the VAE provides an estimation of the ZPL for the singlet states.
For the excited triplet ^3E we obtained a ZPL energy of 2.3 eV.
Note that this value of ZPL for ^3E is lower than the absorption features experimentally observed <cit.>.
This indicates that the presence of the negatively charged defects is not energetically favored, which is consistent with the formation energy analysis presented in <ref>.
Consequently, the negatively charged state should be stabilized by applying a gate voltage.
A distinguishing feature of the NV^- center defect is that it allows for high fidelity preparation of the m=0 sublevel of its ground state, labeled 𝒜^0_0 in our system, due to a convenient intersystem crossing (ISC) between triplet and singlet states <cit.>.
Taking as reference the VAE of the many-body states of V_CF^- (<ref>), the ordering of the levels for our system would be the same as that of the NV center.
If that was indeed the case, symmetry considerations allow in principle the existence of a similar ISC, which could then be tested using available models <cit.>.
However, our rough estimations for the ZPL values suggest that the ^1A_2 singlet remains above the ^3E triplet.
In order to decide this question conclusively it is of considerable interest to extend this study using alternative ab-initio methods better suited for the calculation of multireference states <cit.>, since an accurate description of the states ordering is a first step to determine if an ISC similar to the one in NV^- centers is also present in V_CF^- defects in FG.
Spin-orbit and spin-spin interactions split the excited states ^3E into four sublevels, and the fine structure is further split into two branches (E_x, E_y) under the application of non-axial strain <cit.>.
<Ref> shows the dependence of the ZPL of the ^3E state with application of strain in x and y directions.
We obtained a value of approximately -8 eV/strain for both directions which, as in the case of the neutral defect, is comparable to the strain shift obtained for defects in h-BN sheets <cit.>.
Another parameter of interest in the description of the defect is the zero-field splitting (ZFS) tensor.
The ZFS is determined to first order by dipolar spin-spin interactions, and we calculated its value for the ground state from our DFT results <cit.>.
For the C_3v symmetry of the defect, only the axial ZFS parameter D is different from zero. We obtained D=2.97 GHz, which is close to the value for NV centers (D=2.88 GHz <cit.>).
In addition, we calculated the dependence of the ZFS for the ^3A_2 ground state on strain (<ref>). We obtained a shift of -10 GHz/strain for both directions.
The symmetry breaking induced by strain allows a non-zero value of the transversal component of the ZFS (E parameter).
Our calculations yield a value of E≈-20 MHz for ±1% strain, which is close to the numerical accuracy of the method used.
§.§ Applications to hybrid resonators
Strain induced by the mechanical motion of the material, for example, through the drum oscillatory modes of a FG membrane suspended from its edges, provides an intrinsic mean of coupling phonons with electronic degrees of freedom.
This method does not require the use of external components, resulting in a device that is less prone to noise and decoherence and is simpler to scale than devices relying on auxiliary components to provide the coupling <cit.>.
However, intrinsic strain coupling is typically relatively small, which led to several proposals aimed at increasing the interaction by using electric or magnetic fields <cit.>, or cavities <cit.> coupled to the resonator.
Depending on the system, qubits can be encoded in either the orbital or spin electronic degrees of freedom of color centers, which makes orbit-strain or spin-strain interactions relevant for phonon coupling <cit.>.
Typically, the spin-strain coupling strength is rather small, with values in the order of 10 GHz/strain for devices with implanted NV centers <cit.>.
On the other hand, orbit-strain coupling is much stronger, approximately 10^8 times larger than spin-strain coupling, given that the molecular orbitals are directly affected by the changes in the lattice induced by mechanical motion <cit.>.
Values for orbit-strain coupling are typically in the range of PHz/strain for different quantum hybrid devices using NV centers <cit.> and h-BN sheets with defects <cit.>.
The dynamics of a freestanding 2D material sheet can be described through the elasticity theory of membranes.
In the membrane limit in which the material has vanishing thickness, which is fulfilled by single- or few-layer sheets, the frequency of the fundamental mechanical mode ω_m^0 for membranes with simple geometries is approximated in terms of the pretension T, the surface mass density ρ_s, a geometrical form factor α given by the non-trivial zero of the mode profile, and a characteristic dimension of the system d <cit.>,
ω_m^0 = α/d√(T/ρ_s) .
For a circular membrane, α=2.4 and d is equal to the radius R <cit.>, while for a ribbon of length L clamped at both ends, α=π and d=L <cit.>.
The pretension value depends on the fabrication of the membrane <cit.>,
and is related to the strain ϵ and the in-plane Young's modulus of the material E_s by T=E_s ϵ. For graphene membranes of a few μm in radius, T was estimated to be ≈4×10^-2 N/m <cit.>.
For fluorographene, ρ_s=1.706 mg/m^2 <cit.> and E_s=100 N/m <cit.>.
Taking T ≈ 4×10^-2 N/m as reference, we obtain ω_m^0 ≈ 10 MHz for fluorographene membranes of d ≈ 1 μm, a value in agreement with the ones obtained for similar devices of h-BN <cit.> and graphene <cit.>.
It is worth mentioning that driven devices can achieve frequencies of the order of GHz, as was obtained for MoS_2 piezo-resonators <cit.>.
The membrane strain is related to its deflection, and for small deflections it can be approximately written in terms of the maximum vertical displacement ξ <cit.>,
ϵ = β(ξ/d)^2 ,
where β is a geometrical factor, which for a ribbon-shaped membrane corresponds to 8/3 <cit.>.
Static deflections in membrane devices can be tuned using a voltage gate, and typical values for membranes of a few μm in radius are on the order of 10 nm, which leads to static built-in strains of ≈10^-4 <cit.>.
Strain induced dynamically through time-dependent bias can achieve the same order of magnitude <cit.>.
The fundamental oscillation modes of micro-scale membranes around the equilibrium point have a vertical displacement of approximately 0.1 nm, which corresponds to an induced strain of ϵ≈ 10^-8. These reference values correspond to a h-BN ribbon <cit.>.
The quadratic dependence of the strain with the vertical displacement, which in turn depends on the membrane geometry and material through
ξ=√(ħ/ (2Mω_m^0)) (M is the effective mass) <cit.>, leads to a spread in the reference values, ranging from ξ≈10^-2 nm and ϵ≈10^-10 for a similar h-BN device <cit.> to ξ≈10 nm and ϵ≈10^-4 for the already mentioned driven resonators <cit.>.
For comparison, the strain of a three-dimensional (3D) diamond micro cantilever with implanted NV centers in the fundamental mode is ≈ 10^-12, and can be increased to ≈10^-6 through mechanical drive <cit.>.
A scaling-down of the latter device to the nanoscale was proposed to achieve a larger orbit-strain coupling (up to ≈10 MHz) in the fundamental mode <cit.>, through a larger induced strain. In this regard, 2D membranes arise as promising candidates, given their comparatively large achievable strain.
Our ab-initio calculations suggest a deformation potential <cit.> of Ξ≈ 1 PHz/strain for fluorographene membranes, a value similar to the one obtained for previously studied h-BN resonators <cit.>.
If we consider a fluorographene membrane of d ≈ 1 μm hosting a color center and oscillating in the fundamental mode with a vertical displacement of ξ≈ 0.1 nm, we obtain an orbit-strain coupling of g=10 MHz.
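An order-of-magnitude version of this estimate is sketched below; the ribbon value β = 8/3 is used as a stand-in for the actual mode shape, so the result should only be read as confirming the 10 MHz scale rather than as a precise number.

```python
beta = 8.0 / 3.0    # geometrical factor (doubly clamped ribbon, used here as an approximation)
xi = 0.1e-9         # m, vertical displacement of the fundamental mode
d = 1.0e-6          # m, characteristic membrane dimension
Xi = 1.0e15         # Hz/strain, deformation potential (~1 PHz/strain)

eps = beta * (xi / d) ** 2       # mode-induced strain, ~1e-8
g = Xi * eps                     # orbit-strain coupling, a few tens of MHz
print(f"strain ~ {eps:.1e}, g ~ {g / 1e6:.0f} MHz")
```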
The obtained coupling is about 10^3 times larger than the values obtained for 3D mechanical resonators with NV^- centers <cit.>.
For the latter devices, different cooling schemes were proposed <cit.>.
The “off-resonant” scheme uses the m_s = 0 sublevel 𝒜_0^0 of the ground state and the ℰ_0^3y level of ^3E as a two-level system, and converts the strain coupling to an effective transverse interaction using a laser detuned by ω_m^0 from the transition energy <cit.>.
The “resonant” scheme involves tuning the energy difference between the ℰ_0^3x and ℰ_0^3y levels of the ^3E state to be equal to ω_m^0, while driving the transition from the 𝒜_0^0 ground state to ℰ_0^3y with a laser. This allows for resonant excitation to the ℰ_0^3x state by removing a phonon from the mechanical mode <cit.>.
However, scaling down these devices from the microscale to the nanoscale is necessary to achieve ground-state cooling using these methods <cit.>.
The inherently larger coupling in our system would enable the implementation of these methods in a fluorographene membrane device, thereby extending the proposal for the NV^- center to the V_CF^- defect.
Another possible protocol uses the 𝒜_±1^0 levels in a Λ configuration with an excited state formed with the ℰ_±1^3(x,y) levels, which are mixed through spin-orbit interaction <cit.>. This scheme relies on stimulated Raman transitions to remove phonons from the resonator, and has the advantage of combining the stronger orbit-strain coupling with the larger coherence of spin states <cit.>.
§.§ Formation energy and stability
The formation energy for a defect with charge q is obtained from <cit.>
E_f^q(ϵ_F)=E_d^q-E_bulk-∑_i n_i μ_i +q ϵ_F + E_corr^q ,
where E_d^q is the total energy of the supercell with the defect, E_bulk is the energy of the pristine supercell, n_i are the number of atoms that have been added (n_i>0) or removed (n_i<0) to form the defect, with μ_i the corresponding chemical potentials.
For charged defects, the formation energy depends on the Fermi energy ϵ_F, measured from the top of the valence band.
The final term E_corr^q accounts for corrections such as finite 𝐤-point sampling and electrostatic interactions <cit.>. Here, we apply the Freysoldt–Neugebauer–Van de Walle (FNV) correction scheme <cit.>.
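The formation-energy expression translates directly into a one-line function; the sketch below merely encodes the equation above with the stated sign conventions (it is illustrative only, and the FNV correction enters simply as an additive term).

```python
def formation_energy(E_def_q, E_bulk, n_mu, q, eps_F, E_corr_q=0.0):
    """E_f^q(eps_F) = E_def^q - E_bulk - sum_i n_i*mu_i + q*eps_F + E_corr^q.

    n_mu     : iterable of (n_i, mu_i) pairs, n_i > 0 for added and n_i < 0 for removed atoms
    eps_F    : Fermi level measured from the valence-band maximum (eV)
    E_corr_q : finite-size / electrostatic correction (e.g. FNV) for charge state q
    """
    return E_def_q - E_bulk - sum(n * mu for n, mu in n_mu) + q * eps_F + E_corr_q

# For V_CF one C and one F atom are removed, so n_mu = [(-1, mu_C), (-1, mu_F)];
# the line E_f^q(eps_F) for a charged defect then has slope q across the band gap.
```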
In <ref> we present the formation energy for the V_F and V_CF defects as a function of the Fermi energy, which can be varied by applying a gate voltage.
We considered two different scenarios for the chemical potentials.
In the first scenario, the defective membrane is in equilibrium with F_2, which results in a fluorine-rich environment. For this case, we obtain μ_F=μ_F_2/2 from the energy of a F_2 molecule and μ_C=μ_FG-μ_F from the difference between the energy of the pristine fluorographene primitive cell (μ_FG) and the fluorine chemical potential.
In the second scenario, we considered a carbon-rich environment and calculated μ_C from a graphene primitive cell. We obtained the fluorine chemical potential from the difference with μ_FG, which gives μ_F=μ_FG-μ_C.
The formation energy of V_CF is independent of the environment, since the third term in <ref>, ∑_i n_i μ_i=n_Cμ_C +n_Fμ_F, equals μ_FG by definition for both environments. On the other hand, the formation energy for the V_F defect is higher in the F_2-rich environment, as expected.
The formation energy of V_F is higher than that of V_CF only in the special condition of F_2-rich environment and ϵ_F ≲ -0.5 eV. For the remaining conditions, the V_F defect is more stable than V_CF. However, molecular dynamics calculations suggest that the latter defect is also thermodynamically stable <cit.>.
§ CONCLUSIONS
In this study, we investigated the electronic properties of V_F, V_CF and V_CF^- defects in FG membranes.
We computed the many-body states from single-particle DFT results making use of group-theoretical considerations, obtained the transition energies between the states and analyzed their dependence with non-axial strain.
The obtained energy shift under strain for the studied defects was on the order of 1 PHz/strain, which is comparable to the one found for defects in h-BN sheets.
This value leads to an orbit-strain coupling of g≈10 MHz for membranes of ≈ 1 μm.
Due to the similarities of V_CF defects in FG with NV centers in diamond, some proposals for NV-center resonators can be mapped to 2D devices based on FG with V_CF^- defects, taking advantage of the larger strain achievable in 2D materials.
Furthermore, extending this study with alternative ab-initio methods would be useful to determine if an ISC similar to the one present in NV centers could also be expected in this system.
Our findings suggest that the V_CF defect in FG membranes can be a promising candidate for developing nanomechanical resonators with strong orbit-strain coupling and contribute to the understanding of defects in two-dimensional materials and their quantum applications.
This work was supported by the ERC Synergy grant
HyperQ (Grant No. 856432) and by the BMBF via the
project CoGeQ (grant No. 13N16101). The authors acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant No. INST 40/575-1 FUGG (JUSTUS 2 cluster).
M. S. T. thanks J. S. Pedernales for helpful discussions.
§ MULTI-CONFIGURATIONAL STATES
For V_CF, the ground and first excited states are directly described by a single determinant, while the remaining states are multi-configurational.
In principle, the ΔSCF method does not allow one to compute configurations composed of several determinants. However, it is possible to obtain a rough estimation of these multi-configurational states from single-determinant auxiliary configurations <cit.>.
To illustrate the method, consider the state 𝒜^3. We note that
√(2)𝒜_+1/2^3 + 𝒜_+1/2^1 =√(3)|a_1e_xe_y⟩ .
Considering that the energy of the states is independent of the spin projection, E( 𝒜_+1/2^1 )=E( 𝒜_+3/2^1 )=E( 𝒜^1 ), we obtain
E( 𝒜^3 )=1/2(3E( |a_1e_xe_y⟩ )-E( 𝒜^1 )).
Similarly, for the two remaining excited states of V_CF we obtain the following expressions,
E(ℰ^2) =1/3(6E(|a_1e_xe_y⟩)-2E( 𝒜^1 )-E(𝒜^3))
E(𝒜^4 ) =2E(|a_1e_xe_x⟩ )-E(ℰ^2) .
For V_CF^-, we obtain the following expressions for the transition energies of the multiconfigurational states
E(ℰ^1) =2E(|e_xe_y⟩)
E(𝒜^'2) =2 E(|e_xe_x⟩)-E(ℰ^1)
E(ℰ^4) =2E(|ae_x⟩) .
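For convenience, the relations above can be collected into two small Python helpers; the energies used in the example calls are made-up numbers for illustration only.

def vcf_multiconfig(E_A1, E_a1exey, E_a1exex):
    """Multi-configurational state energies of V_CF from single-determinant energies."""
    E_A3 = 0.5 * (3.0 * E_a1exey - E_A1)
    E_E2 = (6.0 * E_a1exey - 2.0 * E_A1 - E_A3) / 3.0
    E_A4 = 2.0 * E_a1exex - E_E2
    return E_A3, E_E2, E_A4

def vcf_minus_multiconfig(E_exey, E_exex, E_aex):
    """Multi-configurational transition energies of V_CF^- from single determinants."""
    E_E1  = 2.0 * E_exey
    E_Ap2 = 2.0 * E_exex - E_E1
    E_E4  = 2.0 * E_aex
    return E_E1, E_Ap2, E_E4

# Example with hypothetical energies (eV):
print(vcf_multiconfig(E_A1=0.0, E_a1exey=1.2, E_a1exex=1.8))
print(vcf_minus_multiconfig(E_exey=0.9, E_exex=1.4, E_aex=2.1))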
|
http://arxiv.org/abs/2307.05261v2 | 20230711135117 | Do All Fast Radio Bursts Repeat? Constraints from CHIME/FRB Far Side-Lobe FRBs | [
"Hsiu-Hsien Lin",
"Paul Scholz",
"Cherry Ng",
"Ue-Li Pen",
"Mohit Bhardwaj",
"Pragya Chawla",
"Alice P. Curtin",
"Ketan R. Sand",
"Shriharsh P. Tendulkar",
"Bridget Andersen",
"Kevin Bandura",
"Tomas Cassanelli",
"Amanda M. Cook",
"Matt Dobbs",
"Fengqiu Adam Dong",
"Gwendolyn Eadie",
"Emmanuel Fonseca",
"Bryan M. Gaensler",
"Utkarsh Giri",
"Antonio Herrera-Martin",
"Jane Kaczmarek",
"Joseph Kania",
"Victoria Kaspi",
"Kholoud Khairy",
"Adam E. Lanman",
"Calvin Leung",
"Dongzi Li",
"Kiyoshi W. Masui",
"Juan Mena-Parra",
"Bradley W. Meyers",
"Daniele Michilli",
"Nikola Milutinovic",
"Aaron B. Pearlman",
"Ziggy Pleunis",
"Masoud Rafiei-Ravandi",
"Mubdi Rahman",
"Pranav Sanghavi",
"Kaitlyn Shin",
"Kendrick Smith",
"Ingrid Stairs",
"David C. Stenning",
"Keith Vanderlinde",
"Dallas Wulf"
] | astro-ph.HE | [
"astro-ph.HE"
] |
Paul Scholz
[email protected]
0000-0001-7453-4273]Hsiu-Hsien Lin
Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan
Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8, Canada
0000-0002-7374-7119]Paul Scholz
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Department of Physics and Astronomy, York University, 4700 Keele Street, Toronto, ON MJ3 1P3, Canada
0000-0002-3616-5160]Cherry Ng
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Laboratoire de Physique et Chimie de l’Environnement et de l’Espace, Université d’Orléans / CNRS, 45071 Orléans Cedex 02, France
0000-0003-2155-9578]Ue-Li Pen
Institute of Astronomy and Astrophysics, Academia Sinica, Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Road, Taipei 10617, Taiwan
Canadian Institute for Theoretical Astrophysics, 60 St. George Street, Toronto, ON M5S 3H8, Canada
Canadian Institute for Advanced Research, MaRS Centre, West Tower, 661 University Avenue, Suite 505
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON N25 2YL, Canada
0000-0002-3615-3514]Mohit Bhardwaj
Department of Physics, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, 15213, PA, USA
0000-0002-3426-7606]Pragya Chawla
Anton Pannekoek Institute for Astronomy, University of Amsterdam, Science Park 904, 1098 XH Amsterdam, The Netherlands
0000-0002-8376-1563]Alice P. Curtin
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
0000-0003-3154-3676]Ketan R. Sand
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
0000-0003-2548-2926]Shriharsh P. Tendulkar
Department of Astronomy and Astrophysics, Tata Institute of Fundamental Research, Mumbai, 400005, India
National Centre for Radio Astrophysics, Post Bag 3, Ganeshkhind, Pune, 411007, India
0000-0001-5908-3152]Bridget Andersen
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-3772-2798]Kevin Bandura
Lane Department of Computer Science and Electrical Engineering, 1220 Evansdale Drive, PO Box 6109 Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0003-2047-5276]Tomas Cassanelli
Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago 8370451, Chile
0000-0001-6422-8125]Amanda M. Cook
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-7166-6422]Matt Dobbs
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-4098-5222]Fengqiu Adam Dong
Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 Canada
0000-0003-3734-8177]Gwendolyn Eadie
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
Department of Statistical Sciences, University of Toronto, Toronto, ON M5S 3G3, Canada
0000-0001-8384-5049]Emmanuel Fonseca
Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0002-3382-9558]B. M. Gaensler
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-5553-9167]Utkarsh Giri
Department of Physics, University of Wisconsin-Madison, 1150 University Ave, Madison, WI 53706, USA
0000-0002-3654-4662]Antonio Herrera-Martin
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0003-4810-7803]Jane Kaczmarek
Dominion Radio Astrophysical Observatory, Herzberg Research Centre for Astronomy and Astrophysics, National Research Council Canada, PO Box 248, Penticton, BC V2A 6J9, Canada
CSIRO Space & Astronomy, Parkes Observatory, P.O. Box 276, Parkes NSW 2870, Australia
0000-0002-3354-3859]Joseph Kania
Department of Physics and Astronomy, West Virginia University, P.O. Box 6315, Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0001-9345-0307]Victoria Kaspi
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0009-0005-7115-3447]Kholoud Khairy
Lane Department of Computer Science and Electrical Engineering, 1220 Evansdale Drive, PO Box 6109 Morgantown, WV 26506, USA
Center for Gravitational Waves and Cosmology, West Virginia University, Chestnut Ridge Research Building, Morgantown, WV 26505, USA
0000-0003-2116-3573]Adam E. Lanman
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
0000-0002-4209-7408]Calvin Leung
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
NHFP Einstein Fellow
0000-0001-7931-0607]Dongzi Li
Cahill Center for Astronomy and Astrophysics, California Institute of Technology, 1216 E California Boulevard, Pasadena, CA 91125, USA
0000-0002-4279-6946]Kiyoshi W. Masui
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
0000-0002-0772-9326]Juan Mena-Parra
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-8845-1225]Bradley W. Meyers
International Centre for Radio Astronomy Research (ICRAR), Curtin University, Bentley WA 6102 Australia
0000-0002-2551-7554]Daniele Michilli
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
0000-0001-8292-0051]Nikola Milutinovic
Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 Canada
0000-0002-8912-0732]Aaron B. Pearlman
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
Banting Fellow
McGill Space Institute Fellow
FRQNT Postdoctoral Fellow
0000-0002-4795-697X]Ziggy Pleunis
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-7694-6650]Masoud Rafiei-Ravandi
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
0000-0003-1842-6096]Mubdi Rahman
Sidrat Research, 124 Merton Street, Suite 507, Toronto, ON M4S 2Z2, Canada
0000-0001-5504-229X]Pranav Sanghavi
Department of Physics, Yale University, New Haven, CT 06520, USA
0000-0002-6823-2073]Kaitlyn Shin
MIT Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave, Cambridge, MA 02139, USA
0000-0002-2088-3125]Kendrick Smith
Perimeter Institute for Theoretical Physics, 31 Caroline Street N, Waterloo, ON N25 2YL, Canada
0000-0001-9784-8670]Ingrid Stairs
Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 Canada
0000-0002-9761-4353]David C Stenning
Department of Statistics & Actuarial Science, Simon Fraser University, Burnaby, BC, Canada
0000-0003-4535-9378]Keith Vanderlinde
Dunlap Institute for Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto, 50 St. George Street, Toronto, ON M5S 3H4, Canada
0000-0001-7314-9496]Dallas Wulf
Department of Physics, McGill University, 3600 rue University, Montréal, QC H3A 2T8, Canada
Trottier Space Institute, McGill University, 3550 rue University, Montréal, QC H3A 2A7, Canada
We report ten fast radio bursts (FRBs) detected in the far side-lobe region (i.e., ≥ 5^∘ off-meridian) of the Canadian Hydrogen Intensity Mapping Experiment (CHIME) from 2018 August 28 to 2021 August 31. We localize the bursts by fitting their spectra with a model of the CHIME/FRB synthesized beam response. CHIME/FRB did not observe repetition of similar brightness from the uniform sample of 10 side-lobe FRBs in a total exposure time of 35580 hours. Under the assumption of Poisson-distributed bursts, we infer that the mean repetition interval above the detecting threshold of the far side-lobe events is longer than 11880 hours, which is at least 2380 times larger than the interval from known CHIME/FRB detected repeating sources, with some caveats, notably that very narrow-band events could have been missed. Our results from these far side-lobe events suggest one of two scenarios: either (1) all FRBs repeat and the repetition intervals span a wide range, with high-rate repeaters being a rare subpopulation, or (2) non-repeating FRBs are a distinct population different from known repeaters.
§ INTRODUCTION
Fast radio bursts (FRBs) are bright radio transients of millisecond duration, cosmological origin, and unknown physical mechanism <cit.>. In the past decade, over 600 FRBs have been published <cit.>, of which fifty have been seen to repeat <cit.>. Nearly two dozen FRBs have been localized to their host galaxies using interferometry. With a galaxy identification, the redshift and host type can be determined, which are crucial for understanding the nature of FRBs <cit.>.
There is a diversity of physical models for FRBs <cit.> with many models allowing for repetition and many not. One possibility is that some FRBs do not repeat, which motivates cataclysmic scenarios such as a merger system for black holes or neutron stars <cit.>. Another possibility is that all FRBs will be seen to repeat as long as the observation time is long enough and with sufficient instrumental sensitivity. Two repeaters have been observed to show periodic activity windows <cit.>. For some repeaters, the bursts appear clustered, and the waiting times could be from a few hours to several months <cit.>. In addition, there is an observed dichotomy between the morphology of apparent non-repeaters and repeater bursts <cit.>.
To detect an FRB, either the telescope must have a high sensitivity that allows it to detect apparently faint, more common events, or a long exposure time, so that it can detect apparently bright, rare events. For a radio telescope, most of the sensitivity is directed towards a “main lobe”, where the telescope is pointed (along the meridian in the case of CHIME), but the telescope and its synthesized beams have lower sensitivity out to the horizon in “sidelobes”. With sufficient exposure time, events will be detected in the sidelobes. Since sidelobes are less sensitive than the main lobe, events detected in sidelobes will be rare and apparently bright. Assuming that all FRBs have the same luminosity function, apparently bright events are typically closer than faint events, because nearby sources tend to be brighter than farther sources <cit.>.
For instance, CHIME/FRB detected apparently bright bursts from the Galactic magnetar SGR 1935+2154 in the far side-lobe regions <cit.>. Moreover, the Parkes (Murriyang) telescope detected another apparently bright and low-DM FRB, FRB 20110214A, in the sidelobes of two of the beams of its multi-beam receiver <cit.>. With ∼60 hrs of follow-up observations, the Parkes telescope did not detect any repeating bursts <cit.>. The non-detection of repetition raises a key question in the FRB field: do all FRBs repeat? And if they do, how long an exposure time is needed to observe repetition from an apparent non-repeater, and what luminosity function do the bursts follow? A sample of side-lobe FRBs is helpful for answering these questions.
In this paper, we present 10 far side-lobe FRBs detected by CHIME/FRB at hour angles up to 2.81 hr (42.1 deg) over a time span of three years. The 10 far side-lobe FRBs are a good sample for addressing these science questions, as they were detected with the same telescope and the searches for repetition were performed using the same pipeline. In Section <ref>, we discuss the identification and the localization of the far side-lobe events. In Section <ref>, we discuss the constraints on the repetition interval of one-off FRB events by using the 10 far side-lobe events. Finally, we summarize and discuss future possibilities in Section <ref>.
§ FAR SIDE-LOBE FRBS
§.§ Identification
For CHIME/FRB, events are in far side-lobes if they are located at least several beam widths (1.3-2.6 deg from 800 to 400 MHz) away from the meridian. The signature of such a detection is a dynamic spectrum (i.e., waterfall plot, see Fig. <ref>) with more than two spikes in the spectral profile, as has been seen in the SGR 1935+2154 detection by CHIME/FRB <cit.> and FRB 20110214A by the Parkes telescope <cit.>. The detected spectrum is the product of the intrinsic spectral profile and the beam response. The beam response in the far side-lobe region shows spiky patterns across frequency channels <cit.>. When a source is detected in the far side-lobe region, the spectrum of the event therefore shows at least two spikes, which is different from narrow-band events detected in the main lobe and near side-lobes <cit.>.
As CHIME/FRB forms 1024 beams with four E-W columns and 256 N-S rows <cit.>, spectral spikes arising from side-lobe events must have similar amplitude across all four East-West (E-W) beams in the same North-South (N-S) row of CHIME/FRB's formed beams. The sensitivity in a side-lobe is much lower than in the main lobe <cit.>. Therefore, a far side-lobe event must be very bright to satisfy the triggering conditions.
We visually inspect dynamic spectra of CHIME/FRB events, for which the triggering criteria are described by <cit.>, and find 10 far side-lobe events from 2018 August 28 to 2021 August 31. Figure <ref> shows the dynamic spectrum of one of the far side-lobe FRBs, while Figure <ref> (see Appendices) shows the dynamic spectrum of all 10 far-side lobe FRBs reported in this paper. We also show far side-lobe events from pulsars PSR B0329+54 and PSR B0531+21 (the Crab pulsar) in Figure <ref>, which we use to validate the following localization analysis.
§.§ Localization of far side-lobe events – overview
In CHIME/FRB, there are three major localization pipelines: the header localization, the intensity localization, and the baseband localization, which provide precision on the order of degrees, sub-degree, and arcminute to sub-arcminute, respectively <cit.>. The baseband localization for the side-lobe events is under development. We will review the header localization in Section <ref> and discuss the intensity localization as applied to far side-lobe events in Section <ref>.
The main result in this paper is the non-detection of repetition from the far side-lobe samples, which we will discuss in Section <ref>. We use the fitted sky positions presented here (from intensity data) for a conservative search for repetition within ± 3 degrees (i.e., within the main beam) around the source positions.
§.§ Header Localization
The real-time CHIME/FRB system provides an initial localization, called the “header localization”, using only the band-summed signal-to-noise ratios from each beam in which a signal was detected by the real-time detection package (bonsai) <cit.>. The header localization provides a refinement of the estimated position of the event given the data available in the real-time pipeline, but the method assumes that the event occurred within ± 2.5 deg of the CHIME meridian. This is true for the vast majority of FRBs that CHIME/FRB detects, but is obviously an erroneous assumption for far side-lobe events, and so we must use a different method.
§.§ Intensity Localization
The CHIME/FRB intensity data consist of total-intensity dynamic spectra in 16384 frequency channels sampled at 0.98304 ms time resolution. A dynamic spectrum is saved for each beam that detected a signal in the real-time search plus all beams adjacent (in both the E-W and N-S directions) to those detection beams <cit.>. We have developed three independent methods using those data sets to localize far side-lobe events. The first method (hereafter Method 1) fits a detailed model of CHIME/FRB's synthesized beams and an underlying source spectrum to the spectra of the burst using a Markov Chain Monte Carlo (MCMC) method described below. Methods 2 and 3 attempt to simplify the localization procedure by fitting only for the spiky interference pattern in contrast to the more complex model in Method 1. These methods use the concept of diffraction and are presented in Sections <ref> and <ref> in the Appendices.
Since Method 1 shows smaller localization errors than the other two methods, we apply Method 1 for localizing far side-lobe FRBs. Here we describe Method 1. The spectra are first downsampled to 64 subbands extracted from a time window that has four times the measured boxcar width of the burst. These spectra are fitted with the product of the CHIME/FRB beam model[Available at: <https://github.com/chime-frb-open-data/chime-frb-beam-model/>] <cit.> and an underlying power-law burst spectrum which can be described by
A_400 MHz(ν/400 MHz)^α ,
where A_400 MHz is the amplitude in units of signal-to-noise at a reference frequency of 400 MHz, ν is the frequency, and α is the power-law index.
This results in four free parameters: α, A_400 MHz, plus the two parameters of the sky position. We use flat priors on the position of the event that span 80 deg to either side of the meridian E–W, and in the N–S direction the extent of the beams that detected the event. Note that, for these far side-lobe events, we use only the FFT-formed beam model, and do not consider the effect of the primary beam of the telescope (which is available in the CHIME/FRB beam model[<https://chime-frb-open-data.github.io/beam-model/>]). This, as well as any deviation from a power-law in the true spectrum of the FRB, leads to significant residuals in the fits. This choice was made as the localization precision is dominated by the rapidly varying spike pattern of the synthesized beam, rather than the much slower varying, as a function of frequency, primary beam response. Including the primary beam model necessitates a much wider range in amplitude to be searched due to the suppression from the primary beam far from meridian.
The models are fit to the spectra using the emcee package <cit.>.[<https://emcee.readthedocs.io>] The output MCMC samples often display multiple modes in RA, Dec parameter space.
However, for each of the 10 events in this paper, there is only a single mode for which the model visually matches the spiky signature in the data and that mode has the highest posterior density. We therefore filter out these extraneous samples as being sub-optimal fits of the data.
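The structure of such a fit can be sketched in a few lines of Python with emcee. The beam_response function below is a trivial stand-in for the CHIME/FRB synthesized-beam model linked above (whose actual interface differs), and the data, noise level, priors, and starting point are placeholder values, so this illustrates only the shape of the likelihood and sampling, not the production pipeline.

import numpy as np
import emcee

freqs = np.linspace(400.39, 800.39, 64)          # 64 subbanded channels (MHz)
ra_meridian = 215.0                               # placeholder meridian RA (deg)

def beam_response(ra, dec, freqs):
    # Stand-in for the synthesized-beam model: returns the beam gain at (ra, dec)
    # for each frequency; the real model produces the spiky far side-lobe pattern.
    return np.ones_like(freqs)

def model_spectrum(theta, freqs):
    ra, dec, amp400, alpha = theta
    return amp400 * (freqs / 400.0) ** alpha * beam_response(ra, dec, freqs)

def log_prob(theta, freqs, data, sigma):
    ra, dec, amp400, alpha = theta
    if not (-80.0 < ra - ra_meridian < 80.0) or amp400 <= 0:   # flat priors
        return -np.inf
    resid = data - model_spectrum(theta, freqs)
    return -0.5 * np.sum((resid / sigma) ** 2)

sigma = 0.5
data = model_spectrum([220.0, 49.6, 5.0, -1.0], freqs) + np.random.normal(0, sigma, freqs.size)

ndim, nwalkers = 4, 32
p0 = np.array([220.0, 49.6, 5.0, -1.0]) + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(freqs, data, sigma))
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)   # then keep only the dominant mode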
Figure <ref> shows the output, before and after filtering the samples, of such a fit for an example side-lobe event. Figure <ref> shows the spectra and the resulting fitted (derived from the posterior medians) model. Fitted models and MCMC output sample distributions for all of the side-lobe events are shown in Appendix Figures <ref>, <ref>, <ref>, and <ref>.
The giant pulses of the bright pulsars PSRs B0329+54 and B0531+21 are commonly detected in the far side-lobes of CHIME and trigger the CHIME/FRB search pipeline; we use these events to verify our far side-lobe localization methods. We collected intensity data of far side-lobe events from those two sources: 47 single pulses from PSR B0531+21 and 50 single pulses from PSR B0329+54 <cit.>, with hour angle (HA) larger than 15 deg and S/N larger than 9.0. Figure <ref> shows the similar distributions of S/N and HA for the far side-lobe events from PSRs B0329+54 and B0531+21. The pulsar events therefore broadly span the E-W beam response of CHIME/FRB, so they can be used to calibrate our localization methods, which primarily depend on that E-W response. Figures <ref> and <ref> show the offsets between the true position of the pulsar (PSRs B0329+54 and B0531+21, respectively) and the measured position from Method 1. We use these offsets to estimate a conservative systematic error for our Method 1 localizations. Namely, we use the offsets that encompass the 90% credible intervals for 90% of the pulsar events. This results in a systematic uncertainty of 0.07 deg in RA and 0.10 deg in Dec. In Table <ref> we present this error summed in quadrature with the statistical uncertainty derived from the posterior samples. We note that there is a slight systematic offset evident in HA for both pulsars. We do not yet understand its origin, and do not attempt to correct for it, as it is well encompassed by our conservative systematic error estimation.
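One possible reading of this procedure is sketched below in Python: find the smallest padding such that, for 90% of the pulsar events, the padded 90% credible interval contains the known pulsar position. The function and the example arrays are ours (placeholders), not part of the CHIME/FRB pipeline.

import numpy as np

def systematic_padding(lo90, hi90, true_pos, coverage=0.9):
    # offset by which each event's 90% interval misses the truth (0 if it already covers it)
    miss = np.maximum(np.maximum(lo90 - true_pos, true_pos - hi90), 0.0)
    return np.quantile(miss, coverage)

# hypothetical RA fits (deg) for a handful of pulsar detections, each with +/-0.03 deg intervals
fit_ra = np.array([53.30, 53.22, 53.28, 53.18, 53.25])
print(systematic_padding(fit_ra - 0.03, fit_ra + 0.03, true_pos=53.2475))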
Appendix Figures <ref> and <ref> show the far side-lobe localization results for PSRs B0531+21 and B0329+54 from the three localization methods.
§.§ Properties of 10 far side-lobe FRBs
We list the properties of the 10 far side-lobe FRBs in Table <ref>, including the position as measured by Method 1 and the S/N reported by bonsai. For the DM, we report the total value as well as the excess DM (DM_exc) obtained by subtracting the NE2001 and YMW16 model predictions at the modeled position <cit.> from the total DM.
From the position offset in Right Ascension (RA) between the modeled position and the header localization, which is by design within 2.5 deg of CHIME's meridian, we find that the far side-lobe events appear on both the western and the eastern sides of the sky. In the telescope frame, the apparent source trajectory traces a curved track toward lower Dec. The beams point at the meridian, and therefore the position offset in Dec is always positive. To illustrate this, the apparent curves are shown in Figure <ref> for PSRs B0531+21 and B0329+54; the true Dec of each source is always lower than its apparent Dec.
The properties of our sample of 10 far side-lobe FRBs.
FRB Name | MJD_400^a (MJD) | S/N_bonsai | RA (J2000, HH:MM:SS) | σ_RA^b (') | Dec (J2000, DD:MM:SS) | σ_Dec^c (') | HA^d (deg) | HA^d (hr) | Exposure^e (hr/day) | DM^f, total (pc cm^-3) | DM_exc^g, NE2001 (pc cm^-3) | DM_exc^g, YMW16 (pc cm^-3)
20190125B 58508.62754493 13.2 14:29:48 7 +49:37:54 6 12.3 0.82 2.5 177.9(1) 147.0 154.9
20190202B 58516.28548606 20.2 07:02:19 5 +31:57:30 7 8.6 0.57 1.4 464.8(1) 370.6 338.8
20190210D 58524.80198392 20.6 22:17:58 7 +52:53:20 6 -26.0 -1.73 5.7 359.3(1) 106.3 84.6
20191104B 58791.32137548 14.9 01:20:37 5 +26:42:05 7 18.1 1.20 2.7 192.2(1) 147.1 156.0
20191202A 58819.08389041 14.1 19:51:56 19 +70:49:07 8 42.1 2.81 17.1 117.9(1) 50.7 49.7
20191219E 58836.94944142 11.1 21:18:30 8 +55:50:48 7 -10.3 -0.69 2.5 736.7(1) 503.0 336.2
20201105A 59158.27994338 10.7 02:42:18 5 +14:23:13 7 -15.6 -1.04 2.1 262.4(1) 218.9 226.0
20201129A 59182.42296311 16.4 07:52:17 7 +53:16:55 6 -17.8 -1.19 4.0 274.6(1) 219.6 221.8
20210310B 59283.39254997 15.7 13:42:21 5 +35:33:46 6 -16.7 -1.12 2.7 135.5(2) 110.6 115.1
20210810A 59436.03512922 45.2 15:17:55 5 +32:09:24 6 -18.9 -1.26 3.0 246.9(1) 223.4 223.3
^a The topocentric time-of-arrival at the bottom of the band (i.e., 400.390625 MHz) reported by the beam header.
^b,c The 90%-confidence uncertainty from the Method 1 localization algorithm, including systematic uncertainty (see text), in units of minutes of arc on the sky.
^d The hour angle of the side-lobe event, relative to the meridian.
^e The daily exposure time of the side-lobe events.
^f The total DM of the side-lobe events, as reported by offline algorithms via maximization of the S/N of the burst <cit.>.
^g The last two columns show the excess DM after subtracting the NE2001 or YMW16 model <cit.>, respectively.
§.§ GRB and GW Counterparts search
Using our localizations, we check for possible high-energy counterparts to the 10 side-lobe FRBs. More specifically, we check for temporal (up to one week), and spatial (within 3σ of each other's localization regions) coincidence between our set of FRBs and all known gamma-ray bursts (GRBs) published in the Gamma-ray Coordination Network (GCN)[<www.gcn.gsfc.nasa.gov>] circulars. We limit the GRBs to those that are well localized (e.g., localization errors <1 deg in RA and DEC), as it is difficult to claim significant spatial coincidences for GRBs with either unknown or large uncertainty regions. We do not find any GRB-FRB pairs with the given criteria.
If we only search for spatial (rather than temporal and spatial coincidence), we similarly do not find any coincident GRB-FRB pairs <cit.>.
We similarly checked the LIGO GraceDB[<https://gracedb.ligo.org/superevents/public/O3/>] for gravitational wave (GW) events with temporal (within one week) and spatial coincidence. Since the FRB localization region is much smaller than the LIGO error region, we define a spatial coincidence to be when the FRB position is within the 90% localization region of the GW event. We also restrict the search to events that involve at least one neutron star since a pure binary black hole merger is not expected to create electromagnetic bursts <cit.>. We further restrict the false alarm rate (FAR, as mentioned in <cit.>, a measurement of how frequently a non-astrophysical event would be falsely reported from the GW data searches), to FAR < 10^-7 yr^-1 and select only vetted superevents (labelled ). Of our detections, no events were both spatially and temporally coincident with a GW event. FRB 20191219E was temporally coincident with a GW event – GW S191213g, which has a 77% chance of being a binary NS merger and 23% chance of being terrestrial noise. However, the two were not spatially coincident. Hence we consider that these two events are unlikely to be linked.
§.§ Host galaxy search
We also search for host galaxies of the 10 side-lobe events within the reported 90% confidence localization regions listed in Table <ref>. Prior to that, we check whether there are cataloged Galactic H II and/or star-forming regions <cit.> within the localization region of each side-lobe event that could contribute to the extragalactic DMs listed in Table <ref>. Only in the cases of FRBs 20190210D and 20191219E do we identify multiple nearby ionizing regions. This is unsurprising, as both side-lobe events are Galactic-plane sources with Galactic latitudes (b) of -3.3 and 4.5 degrees, respectively. The contribution of these ionizing regions to the FRB DM is hard to quantify due to the poor localization of FRBs 20190210D and 20191219E. This, along with considerable uncertainty in the predictions of Galactic DM models for low-latitude sources <cit.>, makes the host association quite challenging <cit.> unless the FRBs are localized to arcsecond precision <cit.>.
As discussed above, due to the large localization region (≈ 36 arcmin^2), we are unable to make a robust host association for any of our side-lobe FRBs. Although the standard formalism of chance association probability (P_ cc) described in <cit.> would give ≲ 10% probability only for galaxies of r-band apparent magnitude m_ r≤ 13 AB mag in the localization region of side-lobe events, this is not a sufficient condition to make a robust association <cit.>. Noting the prospects of some of the side-lobe events to be local Universe FRBs <cit.>, we check the Galaxy List for the Advanced Detector Era Version 2.3 (GLADE v2.3) catalog <cit.>, which contains all of the brightest galaxies up to a luminosity distance of 91 Mpc, to identify galaxies that satisfy the aforementioned r-band constraint. In all except FRBs 20190125B and 20210310B, we do not find a very nearby (< 100 Mpc) galaxy within the FRB 90% localization region. In the FRB 20190125B 90% confidence localization region, we find NGC 5660, a star-forming spiral galaxy at a distance of 38 Mpc <cit.>, as a promising host candidate (P_ cc < 1%), and in the case of FRB 20210310B, we find NGC 5273 <cit.> and NGC 5276 <cit.> as promising candidates. Note that if the FRB source is in a globular cluster, as is FRB 20200120E which has been localized to a globular cluster ∼15 arcmin away from the center of M81 <cit.>, such an association could be missed during the search.
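For reference, the chance-coincidence probability can be evaluated with a Bloom-et-al.-(2002)-style estimate as sketched below; the r-band galaxy number-count parameterization and the example region size are illustrative assumptions and may differ in detail from the counts behind the numbers quoted above.

import numpy as np

def p_chance(r_arcsec, m_r):
    # surface density of galaxies brighter than r-band magnitude m_r (per arcsec^2),
    # using a standard parameterization of optical galaxy number counts
    sigma = (1.0 / (0.33 * np.log(10.0))) * 10.0 ** (0.33 * (m_r - 24.0) - 2.44)
    return 1.0 - np.exp(-np.pi * r_arcsec ** 2 * sigma)

r_eff = np.sqrt(36.0 * 3600.0 / np.pi)      # effective radius (arcsec) of a ~36 arcmin^2 region
print(p_chance(r_eff, 13.0))                # of order 10% for m_r <= 13 AB mag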
§.§.§ FRB 20191202A
FRB 20191202A is the most interesting source among the 10 side-lobe events because of its very low DM_exc ≈ 50 pc cm^-3 (see Table <ref>). Using the technique described by <cit.>, we estimate the maximum redshift z_max of the FRB to be ≈ 0.04 (90% confidence upper limit). If FRB 20191202A is located at z_max, and if we assume it is in a faint star-forming dwarf galaxy similar to that of FRB 20121102A <cit.>, it would have an r-band magnitude of ≈19.9 AB mag. As the FRB field of view is imaged by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) release 1 (PS1) Survey <cit.> with an r-band depth for galaxies of ≈ 21.5 AB mag (5σ), we search for extended galaxy candidates in the Pan-STARRS1 Source Types and Redshifts with Machine learning (PS1-STRM) photometric redshift (z_ph) catalog <cit.> and find 66 sources. When we apply the cuts r-band Kron magnitude (rKmag) < 19.9 AB mag and z_ph - 3σ_z_ph < z_max = 0.04, we find four galaxies, which are listed in Table <ref>. If none of the four galaxies is the FRB host, it would mean that the FRB host is the faintest known to date.
Interestingly, we note that the FRB excess DM is similar to that of FRB 20200120E <cit.>, which is localized to a globular cluster of M81 at 3.6 Mpc <cit.>. Therefore, provided the FRB has an extragalactic origin, the absence of a very nearby host in the GLADE v2.3 catalog within its completeness limit of ≈ 100 Mpc suggests that either the predicted Milky Way DM contribution is significantly overestimated or the FRB host contribution is negligible. Within ∼ 3-4 degrees of the FRB location, we find several Galactic pulsars[Pulsars J1955+6708, J1953+67, and J2043+7045, identified using Pulsar Survey Scraper <cit.> (visited on 15/02/2023).] with DM ≈ 57 pc cm^-3. Moreover, from <cit.>, the maximum DM through the Milky Way's disk at the FRB Galactic latitude of ≈ 21 deg is 66 ± 7 pc cm^-3.
These measurements suggest that the NE2001 and YMW16 models do not significantly overestimate the Milky Way disk DM contribution, making the FRB particularly promising for constraining the Milky Way halo DM contribution <cit.>.
§ REPETITION AND EXPOSURE
In this Section, we describe the search for repetition candidates, the lower bound of the exposure time, and calculate the lower bound of the repetition interval.
§.§ The search for repetition
Since the side-lobe FRBs were sufficiently bright to be detected in the side-lobe region, any repeating bursts from their sources above the detecting threshold could potentially be detected either in the main lobe or the side-lobe. To search for repetition in the main lobe from 2018 August 28 to 2021 August 31 in the database, we apply the following conditions:
* the S/N of the trigger must be higher than 9,
* the position of the trigger must be less than 3 deg (i.e., within the main beam) away from the modeled position of the side-lobe event,
* the absolute difference in DM between the trigger and the side-lobe event must be less than 5 pc cm^-3, as the DM variation of known CHIME repeaters is less than 5 pc cm^-3 <cit.>.
The conditions for the initial triggers to which we apply these criteria are described by <cit.>. We do not find any associated event detected in the main lobe.
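The three cuts can be expressed as a short Python filter, sketched below; the trigger/FRB dictionary schema and the example values are hypothetical and serve only to illustrate the selection logic.

import numpy as np

def angular_sep_deg(ra1, dec1, ra2, dec2):
    """Great-circle separation in degrees (inputs in degrees)."""
    ra1, dec1, ra2, dec2 = map(np.radians, (ra1, dec1, ra2, dec2))
    cos_sep = (np.sin(dec1) * np.sin(dec2)
               + np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2))
    return np.degrees(np.arccos(np.clip(cos_sep, -1.0, 1.0)))

def repeat_candidates(triggers, frb, max_sep=3.0, max_ddm=5.0, min_snr=9.0):
    """triggers: iterable of dicts with 'snr', 'ra', 'dec', 'dm' (hypothetical schema)."""
    return [t for t in triggers
            if t["snr"] >= min_snr
            and angular_sep_deg(t["ra"], t["dec"], frb["ra"], frb["dec"]) < max_sep
            and abs(t["dm"] - frb["dm"]) < max_ddm]

frb = {"ra": 217.45, "dec": 49.63, "dm": 177.9}
triggers = [{"snr": 12.0, "ra": 218.0, "dec": 49.5, "dm": 178.5},
            {"snr": 10.0, "ra": 230.0, "dec": 40.0, "dm": 177.0}]
print(repeat_candidates(triggers, frb))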
To investigate whether there are repeating events detected in the side-lobes, we use the modeled position of each side-lobe event to construct its apparent track on the CHIME sky (the apparent curve), and we search for potentially associated events along this curve. Figure <ref> shows the apparent curves for PSRs B0531+21 and B0329+54 as a function of the detected local sidereal time (LST) at CHIME. The transit happens when the LST is equal to the RA of the source.
We search for repetitions of the 10 side-lobe FRBs among all CHIME/FRB events with the same conditions as above. If the associated event were again in the side-lobe region and on the apparent curve (i.e., with an hour angle several degrees away from the meridian), we would expect the trigger to show the spiky pattern in the waterfall plot. However, we did not find candidates that satisfy these conditions. One possible selection bias is that a repeating event whose spectral bandwidth is less than the separation of the spikes in the dynamic spectrum may be missed during the visual inspection. Table <ref> shows that the minimum absolute hour angle of the 10 far side-lobe events is 8.6 deg. The corresponding spectral separation of the spikes is ∼90.6 MHz (see Method 2 in the Appendix), and 5 of the 62 repeater events reported in the first CHIME/FRB catalog have a spectral bandwidth less than 90.6 MHz <cit.>. Comparing the spike separation at the minimum hour-angle offset with the spectral bandwidths of those 62 repeater events, the probability of missing a narrow-band repeating event in the side-lobes is 5/62 ∼ 8%.
Subject to the above caveats, we conclude that the CHIME telescope probably did not detect repetitions from any of the 10 side-lobe FRBs from 2018 August 28 to 2021 August 31.
§.§ The lower bound of the exposure time and the repetition interval
<cit.> shows that the sensitivity at the hour angle of a detection is lower than the sensitivity interior to that hour angle <cit.>. Hence, we take twice the hour angle of each detection as a lower bound on the daily exposure time for each source; i.e., we only count toward the exposure the region with higher sensitivity than that at the detected hour angle. <cit.> use solar data to show that CHIME's response is highest near the meridian <cit.>. Regardless of whether the source is in the main lobe or a side-lobe, the detection criterion is that the triggering S/N (i.e., the product of the beam response and the S/N at the center of the main beam) is 9 or above.
Note that the exposure time in the main lobe is defined as having the sky location in question within the full-width at half-maximum (FWHM) region of a synthesized beam at 600 MHz <cit.>, which does not include locations between the FWHM of beams. However in the far side-lobes, since the beam response is spiky, for sufficiently broadband FRBs (but see Section <ref>), there should be sufficient sensitivity at all locations.
To constrain the repetition interval we apply Poisson statistics[Note that the repetition distributions of many known repeaters are not Poissonian.]
<cit.>. For each of the far side-lobe events,
P_i(k=0;λ_i) = e^-λ_i, λ_i=r_it_i,
where P_i(k=0;λ_i) represents the individual Poisson probability of zero repetitions: k is the number of occurrences in the interval t_i (zero for non-repeating sources), and λ_i is the individual average number of events, equal to the individual repetition rate (r_i) times the individual observing duration (t_i).
Since there is no repetition from any of the far side-lobe events, if we assume that each of them has the same lower bound on the repetition interval (i.e., 1/r = 1/r_1 = ... = 1/r_10), the Poisson distribution for all far side-lobe events is
P_tot(k=0;λ_tot) = ∏_i=1^10 e^-r_it_i = e^-r(∑_i=1^10t_i)=e^-rT_exp, total,
where T_exp, total is the total exposure time. Hence, we can sum individual exposure times into a total exposure time for the far side-lobe samples.
Since CHIME only observes a strip of sky transiting directly overhead, the exposure varies significantly with declination. We account for this dependence by scaling the exposure with the cosine of the declination <cit.>. Hence, the daily exposure time for each side-lobe event, as shown in Table <ref>, is
T_exp, daily = 2×|HA| (deg)×1/cosθ_DEC×24 (hours)/360 (deg) ,
where the factor of 2 accounts for the two sides of the side-lobe response, and |HA| and θ_DEC correspond to the absolute hour angle and the modeled declination of each far side-lobe event in Table <ref>.
From 2018 August 28 to 2021 August 31, the CHIME/FRB system was operational for 845.59 out of 1099 days, and on average 988.6 of the 1024 online synthesized beams were running during this up time. This leads to a 74% operational up time[The relevant beams were operational for the average amount of time.]. Thus, we calculate the lower bound of the total exposure time for the 10 side-lobe events as
T_exp, total = 0.74 × ∑^10_i=1 T_exp, daily, i × 1099 (days) ≃ 0.74 × 43.75 (hours/day) × 1099 (days) = 35580 hrs.
With a total exposure time of 35580 hours[Note that the exposure time is dominated by FRB 20191202A, which has a high declination of ∼71 deg and a daily exposure time of 17.1 hours, as listed in Table <ref>.] for the 10 side-lobe events, CHIME/FRB did not detect repeat bursts from the 10 far side-lobe events listed in Table <ref>.
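The arithmetic behind these numbers can be checked in a few lines of Python using the |HA| (in hours) and Dec values from Table <ref> (Dec converted from sexagesimal to decimal degrees); the same exposure feeds the 95% Poisson bound discussed below via exp(−rT) = 1 − CL.

import numpy as np

# |HA| in hours and Dec in decimal degrees, transcribed from the table above
ha_hr = np.array([0.82, 0.57, 1.73, 1.20, 2.81, 0.69, 1.04, 1.19, 1.12, 1.26])
dec   = np.array([49.63, 31.96, 52.89, 26.70, 70.82, 55.85, 14.39, 53.28, 35.56, 32.16])

t_daily = 2.0 * ha_hr / np.cos(np.radians(dec))   # hours of exposure per day (equation above)
t_total = 0.74 * t_daily.sum() * 1099.0           # 74% up time over 1099 days

cl = 0.95
interval_bound = t_total / (-np.log(1.0 - cl))    # hours, from exp(-r T) = 1 - CL

print(f"daily exposures sum to ~{t_daily.sum():.1f} hr/day")   # ~43.7 (43.75 quoted)
print(f"total exposure       ~{t_total:.0f} hr")               # ~35,600 (35,580 quoted)
print(f"repetition interval  >{interval_bound:.0f} hr")        # ~11,900 (11,880 quoted)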
Note that by using twice the hour-angle for our exposures we are being quite conservative. The sensitivity of CHIME's primary beam far from meridian is fairly flat <cit.>, i.e., it does not drop rapidly as a function of hour angle. There is therefore significant sensitivity outside of our exposure window that is comparable to that within the window. There may also be unaccounted-for sources of incompleteness, such as our search for repetition missing narrowband bursts (Section <ref>).
Since the exposure time is dominated by the low-sensitivity side-lobes, with only ∼10% coming from the high-sensitivity main lobe, the non-detection of repeat bursts implies that follow-up observations of CHIME/FRB non-repeating sources with a high-sensitivity telescope have a significant chance of yielding non-detections. For instance, the sensitivity ratio[The side-lobe sensitivity is ∼1×10^-2–1×10^-3 of that of the main lobe <cit.>. Since there are four E-W beams, we take the side-lobe sensitivity to be at least ∼4×10^-2 of that of the main lobe.] between CHIME/FRB's side-lobe region and main lobe is ∼4×10^-2, which is approximately equal to the sensitivity ratio between CHIME/FRB's main lobe and FAST's main beam. Of the total exposure time of 35580 hours on the 10 far side-lobe FRBs, CHIME/FRB has ∼32000 hours and ∼3580 hours of exposure in the side-lobes and the main lobe, respectively. To match this monitoring effort, FAST would need ∼400 hours of exposure time, which would be prohibitively expensive to carry out on the large sample of non-repeating FRBs. On the other hand, the future detection of even one repeat burst from any of our 10 side-lobe events by any telescope, including FAST, would be interesting, and would strongly suggest universal repetition with a wide range of repetition times.
Using Equation <ref> with a confidence level (CL) of 95% and a total exposure time of 35580 hrs, the corresponding lower bound on the repetition interval (1/r) above CHIME/FRB's sensitivity limit is 11880 hours. This limit could explain the non-detection of repeating bursts in the tens to hundreds of hours of follow-up observations by the Parkes <cit.>, GBT <cit.>, and Arecibo <cit.> telescopes.
Whether all FRBs repeat or not is an open question in the FRB field. In the first CHIME/FRB catalog, the repeaters and the apparently one-off events show different morphological properties: in general the former are narrow-band and the latter are broadband <cit.>. Here we compare the repetition intervals of repeaters to those of the far side-lobe events. The average repetition rate of 44 CHIME/FRB repeaters is ∼0.2 hr^-1, which corresponds to a mean repetition interval of 5 hours[Note that the median repetition interval is ∼33 hours.] <cit.>. The lower limit on the repetition interval of CHIME/FRB Catalog 1 non-repeaters <cit.> is of the order of ∼10 hours.
Our measured mean repetition interval for the 10 side-lobe events, 11880 hours, is 2380 times longer than that for the CHIME/FRB repeater sample. Under our assumptions, this implies that these two samples come from vastly different regimes in repetition rate. These differing regimes may be due to a wide, or multi-modal, distribution of rates in a single physical population, or it may be due to separate physical populations, one of which may be cataclysmic. Future observations with longer exposure time on FRBs, such as CHIME/FRB <cit.> and BURSTT <cit.>, would be helpful to further constrain the repetition of the apparent non-repeating FRBs.
§ CONCLUSIONS
We report 10 far side-lobe FRBs detected by CHIME/FRB from 2018 August 28 to 2021 August 31. We use the intensity data for these sources to localize them with sub-degree precision. Over three years, we did not find any repeat bursts from any of the side-lobe sources in the CHIME/FRB database under the conditions of S/N ≥ 9.0, a distance between the modeled position of the far side-lobe event and the header position in the CHIME/FRB database of less than 3 deg, and a DM difference of less than 5 pc cm^-3.
With the long exposure time of 35580 hours on far side-lobe events, we find that the Poisson repetition interval for the one-off events is longer than 11880 hours, which is at least 2380 times longer than for CHIME/FRB repeaters. Longer exposure time on FRBs with future FRB-surveys would be helpful to understand whether all FRBs repeat or not. This study shows the advantage of considering events detected in the low-sensitivity sidelobes of telescopes to probe for rare, bright events with a long exposure time.
We acknowledge that CHIME is located on the traditional, ancestral, and unceded territory of the syilx/Okanagan people. We thank the Dominion Radio Astrophysical Observatory, operated by the National Research Council Canada, for gracious hospitality and expertise. CHIME is funded by a grant from the Canada Foundation for Innovation (CFI) 2012 Leading Edge Fund (Project 31170) and by contributions from the provinces of British Columbia, Quebec and Ontario. The CHIME/FRB Project is funded by a grant from the CFI 2015 Innovation Fund (Project 33213) and by contributions from the provinces of British Columbia and Quebec, and by the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto. Additional support was provided by the Canadian Institute for Advanced Research (CIFAR), McGill University and the McGill Space Institute via the Trottier Family Foundation, and the University of British Columbia. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research & Innovation. The National Radio Astronomy Observatory is a facility of the National Science Foundation (NSF) operated under cooperative agreement by Associated Universities, Inc. FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. FRB research at WVU is supported by an NSF grant (2006548, 2018490). Computations were performed on the Niagara and Cedar supercomputers at the SciNet HPC Consortium <cit.>. SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; the Ontario Research Fund - Research Excellence; and the University of Toronto.
P.S. is a Dunlap Fellow.
Ue-Li Pen receives support from Ontario Research Fund—research Excellence Program (ORF-RE), Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference number RGPIN-2019-067, CRD 523638-18, 555585-20], Canadian Institute for Advanced Research (CIFAR), Canadian Foundation for Innovation (CFI), the National Science Foundation of China (Grants No. 11929301), Thoth Technology Inc, Alexander von Humboldt Foundation, and the Ministry of Science and Technology(MOST) of Taiwan(110-2112-M-001-071-MY3). Computations were performed on the SOSCIP Consortium’s [Blue Gene/Q, Cloud Data Analytics, Agile and/or Large Memory System] computing platform(s). SOSCIP is funded by the Federal Economic Development Agency of Southern Ontario, the Province of Ontario, IBM Canada Ltd., Ontario Centres of Excellence, Mitacs and 15 Ontario academic member institutions.
MB is a McWilliams Fellow and an International Astronomical Association Gruber fellow.
A.P.C. is a Vanier Canada Graduate Scholar.
K.R.S acknowledges support from FRQNT Doctoral Research Award.
S.P.T. is a CIFAR Azrieli Global Scholar in the Gravity and Extreme Universe Program.
B. C. A. is supported by an FRQNT Doctoral Research Award.
A.M.C. was supported by the Government of Ontario through an Ontario Graduate Scholarship.
M.D. is supported by a CRC Chair, NSERC Discovery Grant, CIFAR, and by the FRQNT Centre de Recherche en Astrophysique du Québec (CRAQ).
F.A.D. is funded by the U.B.C Four Year Fellowship.
G.E. is supported by an NSERC Discovery Grant (RGPIN-2020-04554) and by a Canadian Statistical Sciences Institute (CANSSI) Collaborative Research Team Grant.
B.M.G. is supported by an NSERC Discovery Grant (RGPIN-2022-03163), and by the Canada Research Chairs (CRC) program.
V.M.K. holds the Lorne Trottier Chair in Astrophysics & Cosmology, a Distinguished James McGill Professorship, and receives support from an NSERC Discovery grant (RGPIN 228738-13), and from the FRQNT CRAQ.
C.L. was supported by the U.S. Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.
K.W.M. holds the Adam J. Burgasser Chair in Astrophysics and is supported by an NSF Grant (2008031).
A.B.P. is a Banting Fellow, McGill Space Institute (MSI) Fellow, and a Fonds de Recherche du Quebec – Nature et Technologies (FRQNT) postdoctoral fellow.
Z.P. is a Dunlap Fellow.
K.S. is supported by the NSF Graduate Research Fellowship Program.
FRB research at UBC is supported by an NSERC Discovery Grant and by the Canadian Institute for Advanced Research. The CHIME/FRB baseband system is funded in part by a CFI JELF award to I.H.S.
D.C.S is supported by NSERC grant number RGPIN/03985-2021.
Facilities: CHIME/FRB.
Software: Astropy <cit.>, Numpy <cit.>, Scipy <cit.>, Matplotlib <cit.>
§ OTHER TWO METHODS OF LOCALIZATION
§.§ Method 2
In the E-W direction, CHIME is an interferometer with 4 elements, corresponding to the focal lines of the 4 cylinders. The resulting beam response is thus an interference pattern of those 4 `slits' and can be thought of as analogous to double-slit interference. We therefore utilize the idea of double-slit interference[We adapted the formula from Section 14.7 of the http://web.mit.edu/8.02t/www/802TEAL3D/visualizations/coursenotes/modules/guide14.pdfMIT course notes, accessed on 22/Jun/2023.] to test the localization of the far side-lobe events,
W(f, θ) = ∑_b=0^3 I(f) cos^2( (π d f / c) sin(θ+ϕ_b) ),
where I(f) is the spectral profile over the 1024 frequency channels of the brightest time bin, after masking any radio-frequency interference (RFI), in the dedispersed dynamic spectrum of each beam; d is 22 m, the separation of the CHIME cylinder focal lines <cit.>; f is the frequency in MHz; c is the speed of light; θ is the position offset from the meridian, from -90 to +90 deg; and ϕ_b accounts for the beamforming offset between the four beam columns such that ϕ_0,1,2,3 = (-0.4, 0, 0.4, 0.8) deg, where the second beam is located on the meridian and the others are offset by 0.4 deg relative to the adjacent one. We assume that I(f) is a flat spectrum with a power-law index of zero.
Figure <ref> shows W(f,θ) at different θ angles. The separation of the spiky pattern across the frequency channels in one beam is related to the position offset, where the larger position offset yields a smaller separation of the spiky pattern. For instance, the spectral separation of spikes in a single beam with an offset of 5 and 20 deg is 156 and 40 MHz, respectively. The shift in the spiky pattern between beams in the 4 beam row, from East to West, tells us whether the source is in the Eastern or Western sky.
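As a quick sanity check on these separations, adjacent maxima of the cos^2 term in the equation above are spaced in frequency by c/(d sin θ); the short Python snippet below reproduces the quoted values and gives ≈91 MHz at the minimum 8.6 deg offset, close to the ∼90.6 MHz used in the repetition-search discussion of Section <ref>.

import numpy as np

c, d = 299_792_458.0, 22.0                        # speed of light (m/s), cylinder spacing (m)

def spike_separation_mhz(offset_deg):
    # adjacent maxima of cos^2(pi*d*f*sin(theta)/c) are spaced by c/(d*sin(theta))
    return c / (d * np.sin(np.radians(offset_deg))) / 1e6

for offset in (5.0, 8.6, 20.0):
    print(f"{offset:4.1f} deg -> {spike_separation_mhz(offset):6.1f} MHz")
# ~156 MHz at 5 deg and ~40 MHz at 20 deg, as quoted above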
We test the localization of the far side-lobe events of PSRs B0329+54 and B0531+21, whose waterfalls are shown in Figure <ref>, with the following procedure. First, we mask the dynamic spectrum and convert the 1024 frequency channels of the brightest time bin into the spectral profile I(f). Second, we sum the W(θ, f) of the four E-W beams and average over frequency to obtain I(θ). Third, we fit a Gaussian profile to I(θ) at its maximal peak and determine the best θ and the 68% confidence interval. Figure <ref> shows I(θ) and the Gaussian fit for the far side-lobe events of PSRs B0329+54 and B0531+21. Last, we assume the source is on the trajectory of the second western beam, which points to the meridian, and we find the best localization corresponding to θ. For this method, the localization offset is ∼1 deg for HA from 15 to 50 deg, as shown in Figure <ref>.
§.§ Method 3
A third complementary far side-lobe localization method takes advantage of the distinct, knotty spectrum of a far side-lobe event detected by interferometrically combining signals from multiple detectors. The separation of these spiky patches of the spectrum is correlated with the degree of offset from the meridian. We inject simulated Gaussian bursts at a wide range of meridian offsets to empirically fit for this correlation, and find that the relation between the knot separation (y) in MHz and the East-West offset from the meridian (x) in degrees can be represented by:
y = 772.943/x + 1.844 .
We can see that the smaller the separation, and hence the larger the number of knots in the spectrum, the further out in the side lobe the source is. On the other hand, sources within ∼3 deg of the meridian do not show any knotty spectrum, as they are within the main lobe.
Similar to what was described in Section <ref>, we can tell the direction (East or West of the meridian) of the source by comparing the spectrum across the four East-West beams of the same row. In principle, this method is less susceptible to Radio Frequency Interference (RFI) present in part of the spectrum, because we only need to measure one of the knot separations to model the corresponding E-W offset. In practice, the fact that the intensity data has limited spectral resolution introduces uncertainty in the fitted knot separation,
a feature we attempt to overcome by averaging as many knot separations observed for each burst. Systematic offsets can be seen in the coordinates determined by this method in Fig. <ref>. This is most likely because simulated signals were only generated for the meridian at zenith angle = 0 , leading to error in the empirical relationship in Equation <ref>.
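For convenience, the empirical relation can be inverted to map a measured knot separation back to an E-W offset, as in the small Python sketch below; the same caveats about the relation's validity away from zenith angle zero apply.

def offset_from_knot_separation(y_mhz):
    # inverse of y = 772.943/x + 1.844, with y in MHz and x in degrees
    return 772.943 / (y_mhz - 1.844)

for y in (40.0, 90.0, 156.0):
    print(f"{y:6.1f} MHz -> {offset_from_knot_separation(y):5.2f} deg E-W offset")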
§.§ Comparisons
Figure <ref> shows the localization comparisons for the three methods. Figure <ref> shows the localization comparison for Method 1 and Method 2. Of the three methods, we note that Method 1 has the smallest systematic error and Method 3 the largest. We do not yet understand the origin of the systematic errors. Further investigation with the voltage data of far side-lobe events from pulsars may help to better understand them.
§ WATERFALLS OF THE 10 FAR SIDE-LOBE EVENTS
Figure <ref> shows the dynamic spectra of all far side-lobe FRBs and an example of far side-lobe detections of B0329+54 and B0531+21. Figure <ref> shows the spectra and fitted model for the Method 1 intensity localizations of example pulses from PSRs B0329+51 and B0531+21 and the 10 side-lobe events. Figures <ref>–<ref> show the distributions of the posterior samples from the Method 1 intensity localizations for example pulses from PSRs B0329+51 and B0531+21 and the 10 side-lobe events.
|
http://arxiv.org/abs/2307.04502v1 | 20230710114651 | Modular Completely Dirichlet forms as Squares of Derivations | [
"Melchior Wirth"
] | math.OA | [
"math.OA",
"math-ph",
"math.FA",
"math.MP",
"quant-ph"
] |
We prove that certain closable derivations on the GNS Hilbert space associated with a non-tracial weight on a von Neumann algebra give rise to GNS-symmetric semigroups of contractive completely positive maps on the von Neumann algebra.
§ INTRODUCTION
The interplay between derivations and symmetric semigroups of unital (or contractive) completely positive maps has proven fruitful for applications in quantum information theory <cit.>, operator algebras <cit.> and beyond. Using the framework of completely Dirichlet forms, this connection is particularly well-understood in the case of tracially symmetric semigroups after the seminal work of Cipriani and Sauvageot <cit.>.
In many situations however one encounters non-tracial reference states or weights: In quantum statistical mechanics, the reference state is typically a Gibbs state, which is not a trace at finite temperature; in quantum probability in the study of Lévy processes on compact quantum groups, the natural reference state is the Haar state, which is only a trace for the class of compact quantum groups of Kac type; and in the structure theory of von Neumann algebras, one is faced with non-tracial states when the von Neumann algebra has a non-trivial type III summand.
In the non-tracial setting, the connection between derivations and symmetric semigroups of completely positive maps is much less understood. Recently, it was shown by the author that every GNS-symmetric semigroup of unital completely positive maps gives rise to a canonical derivation via its associated Dirichlet form <cit.>. This result was (partially) extended to KMS-symmetric semigroups by Vernooij and the author <cit.>.
There has also been work in the opposite direction – starting with a derivation to construct a completely Dirichlet form <cit.>. However, these results all rely on additional structural assumptions on the derivation, usually some form of (approximate) innerness. This means that natural examples like derivations arising from cocycles on non-unimodular groups or Voiculescu's derivation in non-tracial free probability could not be treated in this framework.
In this article, we prove in a general context that closable derivations give rise to GNS-symmetric semigroups of completely bounded maps. More precisely, our main result is the following.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation. Let ℰ be the closure of the quadratic form ℰ_0 given by (ℰ_0)=(δ) and ℰ_0(a)=‖δ(a)‖_^̋2. Then the strongly continuous semigroup associated with ℰ is the GNS implementation of a GNS-symmetric semigroup of contractive completely positive maps on the left von Neumann algebra generated by Å.
Here a normal Tomita bimodule is a bimodule over a Tomita algebra that additionally carries a complex one-parameter group (_z) and an involution satisfying some compatibility conditions, and a symmetric derivation δ: Å→$̋ is a map that intertwines the complex one-parameter groups and involutions on Å and $̋ and satisfies the product rule
δ(ab)=aδ(b)+δ(a)b.
These objects were introduced in <cit.> and appear to be the natural non-tracial analogs of the Hilbert bimodule and derivation occurring in the context of completely Dirichlet forms on tracial von Neumann algebras.
Combined with the results from <cit.>, we thus obtain a comprehensive picture of GNS-symmetric quantum Markov semigroups analogous to the result of Cipriani and Sauvageot for tracially symmetric semigroups. Among other potential applications, we hope that this result opens the gate for applications to non-tracial free probability and deformation/rigidity theory of type III von Neumann algebras similar to recent work in this direction in the tracial case.
One main difficulty when trying to prove that closable derivations generate completely Dirichlet forms (or semigroups of completely positive maps) is that the property defining derivations, the product rule, is an algebraic property, while Dirichlet forms are defined in terms of order properties, and the domain of a derivation is not necessarily closed under order operations. As such, the problem of properly dealing with domains is crucial. Note that it is unavoidable to allow for unbounded derivations as everywhere defined derivations yield norm continuous semigroups of completely positive maps, which is too restrictive for many applications.
In the tracial case, this difficulty can be overcome since order operations such as taking the positive part can be expressed in terms of functional calculus and as such can be approximated by polynomials. In the non-tracial case, the order operations can still be expressed in terms of functional calculus in the setup of Haagerup L^p spaces, but the product rule is formulated in terms of Hilbert algebra multiplication, which is different from the product of two operators in Haagerup L^2 (which is only in L^2 if it is zero). Therefore it is not clear how to connect the two.
Instead of trying to follow the proof in the tracial setting, our proof strategy relies on the Haagerup reduction method, which allows one to embed a von Neumann algebra as an expected subalgebra of a bigger von Neumann algebra that can be approximated by finite von Neumann algebras. As it turns out, this reduction method is well-suited to reduce the problem at hand to the known case of tracial von Neumann algebras. One key challenge is again a domain issue: for the Haagerup construction one has to extend the derivation to a domain on a crossed product that is sufficiently big, but such that the extension still satisfies the product rule. The essential new technical ingredient to overcome this kind of domain problem lies in the introduction of a new locally convex topology on the domain of a derivation that allows one to extend derivations to derivations on a completion.
As a final note, considering the results from <cit.>, it is a natural question whether the results from the present article can be extended to cover KMS-symmetric semigroups. For one, our methods crucially use commutation with the modular group, which fails for KMS-symmetric maps if they are not GNS-symmetric. But more severely, it seems like there are additional algebraic obstructions, already in finite dimensions: It is shown in <cit.> that if is a completely Dirichlet form on L^2(M_n(),ϕ), then there exist self-adjoint matrices v_j∈ M_n() such that
ℰ(ρ^1/4xρ^1/4)=∑_j tr(|ρ^1/4[v_j,x]ρ^1/4|^2),
where ρ is the density matrix inducing the state ϕ on M_n(). However, without further assumptions on the operators v_j, the quadratic form on the right side of the previous equation is not necessarily a completely Dirichlet form.
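To make this finite-dimensional expression concrete, the following is a minimal numerical sketch (our own illustration, not taken from the cited work); it only evaluates the right-hand side above for randomly chosen ρ and self-adjoint v_j, checking that the resulting quadratic expression is nonnegative and vanishes on elements commuting with all v_j. It does not address the Dirichlet property itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

# faithful density matrix rho and its fourth root via spectral calculus
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = a @ a.conj().T + 0.1 * np.eye(n)
rho = rho / np.trace(rho).real
w, u = np.linalg.eigh(rho)
rho14 = u @ np.diag(w ** 0.25) @ u.conj().T

vs = [random_hermitian(n) for _ in range(2)]

def form(x):
    """Evaluate sum_j tr |rho^{1/4} [v_j, x] rho^{1/4}|^2."""
    total = 0.0
    for v in vs:
        c = rho14 @ (v @ x - x @ v) @ rho14
        total += np.trace(c.conj().T @ c).real
    return total

x = random_hermitian(n)
print("nonnegative:", form(x) >= 0)
print("vanishes on the identity:", np.isclose(form(np.eye(n)), 0.0))
```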
§.§ Outline of the article
In Section <ref> we recall some basics regarding modular theory, completely Dirichlet forms on standard forms of von Neumann algebras, Tomita bimodules and derivations. In Section <ref> we introduce a topology on the domain of a derivation, the δ-topology, and show that derivations can be extended to derivations on the completion in the δ-topology. In Section <ref> we give a closability criterion for derivations in our setting. In Section <ref> we discuss how derivations can be extended to crossed products and how completely Dirichlet forms behave with respect to a change of the reference weight. Then we state and prove the main result of this article, Theorem <ref>, showing that the quadratic form associated with a closable derivation is a modular completely Dirichlet form. Finally, in Section <ref> we discuss several classes of examples, including inner derivations, derivations arising in non-tracial free probability and derivations induced by cocycles on (possibly non-unimodular) locally compact groups.
§.§ Acknowledgments
The author was funded by the Austrian Science Fund (FWF) under the Esprit Programme [ESP 156]. For the purpose of Open Access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
§ BASICS
In this section we briefly recap some material concerning modular theory and in particular Hilbert and Tomita algebras, completely Dirichlet forms, Tomita bimodules and derivations that is used in the later sections.
§.§ Modular theory
As our approach is formulated in the language of Hilbert and Tomita algebras, we summarize the relevant definitions here. Our treatment mostly follows <cit.>.
An algebra Å with involution ^♯ (resp. ^♭) and inner product ⟨ · ,· ⟩ is called left (resp. right) Hilbert algebra if
* for every a∈Å the map π_l(a)Å→Å, b↦ ab (resp. b↦ ba) is bounded,
* ⟨ ab,c⟩=⟨ b,a^♯ c⟩ (resp. ⟨ ab,c⟩=⟨ b,ca^♭⟩) for all a,b,c∈Å,
* the involution ^♯ (resp. ^♭) is closable,
* the linear span of all products ab with a,b∈Å is dense in Å.
Let M be a von Neumann algebra and ϕ a normal semi-finite faithful weight on M. We write _ϕ for the definition ideal {x∈ M|ϕ(x^∗ x)<∞} and (π_ϕ,L^2(M,ϕ),Λ_ϕ) for the associated semi-cyclic representation.
The prototypical example of a left Hilbert algebra is Å=Λ_ϕ(_ϕ∩_ϕ^∗) with the product Λ_ϕ(x)Λ_ϕ(y)=Λ_ϕ(xy), the involution Λ_ϕ(x)^♯=Λ_ϕ(x^∗) and the inner product inherited from L_2(M,ϕ), that is, ⟨Λ_ϕ(x),Λ_ϕ(y)⟩=ϕ(x^∗ y). In this case, π_l(Å)^''=π_ϕ(M). We write Å_ϕ for this left Hilbert algebra.
Conversely, every left Hilbert algebra Å gives rise to a von Neumann algebra π_l(Å)^'' acting on the completion of Å and a weight
ϕ: π_l(Å)^''_+→ [0,∞], ϕ(x)=‖ξ‖^2 if x^1/2=π_l(ξ),
∞ otherwise.
If Å is a full left Hilbert algebra <cit.>, then ϕ is a normal semi-finite faithful weight on π_l(Å)^'', and Å is canonically isomorphic to Å_ϕ.
Let H be the completion of the left Hilbert algebra Å. Since the involution ^♯ on Å is closable, its closure S on H exists and has a polar decomposition S=JΔ^1/2. The operator Δ is a non-singular positive self-adjoint operator, called the modular operator, and J is an anti-unitary involution, called the modular conjugation. If Å is the left Hilbert algebra associated with a weight ϕ, we write Δ_ϕ and J_ϕ for the associated modular operator and modular conjugation. We write Λ_ϕ^': _ϕ^∗→ L^2(M,ϕ) for the map x↦ J_ϕΛ_ϕ(x^∗).
If Å is full, the modular conjugation J gives rise to the positive self-dual cone P, the closure of {π_l(a)Ja| a∈Å}, and π_l(Å)^'' is in standard form <cit.>.
The modular operator Δ gives rise to a point weak^∗ continuous group of automorphisms x↦Δ^itxΔ^-it on π_l(Å)^''. If ϕ is a normal semi-finite faithful weight on M, the group σ^ϕ given by σ^ϕ_t(x)=π_ϕ^-1(Δ_ϕ^itπ_ϕ(x)Δ_ϕ^-it) is called the modular group associated with ϕ.
If (α_t)_t∈ is a point weak^∗ continuous group of ∗-automorphisms on M, then an element x∈ M is called entire analytic if the map t↦α_t(x) has an extension z↦α_z(x) to the complex plane such that z↦ω(α_z(x)) is analytic for every ω∈ M_∗. The entire analytic elements form a weak^∗ dense ∗-subalgebra of M.
A Tomita algebra is a left Hilbert algebra Å endowed with a complex one-parameter group (U_z)_z∈ of algebra automorphism such that
* z↦⟨ a,U_z b⟩ is analytic for all a,b∈Å,
* (U_z a)^♯=U_z̅(a^♯) for all a∈Å, z∈,
* ⟨ U_z a,b⟩=⟨ a,U_-z̅b⟩ for all a,b∈Å, z∈,
* ⟨ a^♯,b^♯⟩=⟨ U_-ib,a⟩ for all a,b∈Å.
Note that every Tomita algebra becomes a right Hilbert algebra when endowed with the involution
Å→Å,a↦ a^♭=U_-i(a^♯).
For a full left Hilbert algebra Å let
Å_0={ξ∈⋂_n∈D(Δ^n) | Δ^nξ∈Å for all n∈}.
For every ξ∈Å_0 the map t↦Δ^itξ has an entire analytic extension z↦ U_zξ with U_zξ∈Å_0 for all z∈. This makes Å_0 into a Tomita algebra such that π_l(Å_0)^''=π_l(Å)^''.
In particular,
(Å_ϕ)_0={Λ_ϕ(x)| x∈_ϕ∩_ϕ^∗, x entire analytic for σ^ϕ}.
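As a concrete finite-dimensional illustration (our own example, not part of the text): for M_n(ℂ) with a faithful state tr(ρ ·), the prescription ⟨ a,b⟩=tr(ρ a^∗ b), a^♯=a^∗ and U_z(a)=ρ^iz a ρ^-iz defines a Tomita algebra structure, with modular operator Δ(a)=ρ aρ^-1. The following sketch numerically checks two of the axioms above, namely (U_z a)^♯=U_z̅(a^♯) and ⟨ a^♯,b^♯⟩=⟨ U_-ib,a⟩.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def random_matrix(n):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

a, b = random_matrix(n), random_matrix(n)

# faithful density matrix rho
c = random_matrix(n)
rho = c @ c.conj().T + 0.1 * np.eye(n)
rho = rho / np.trace(rho).real
w, u = np.linalg.eigh(rho)

def rho_power(z):
    # rho^z for complex z via the spectral decomposition
    return u @ np.diag(w.astype(complex) ** z) @ u.conj().T

def U(z, x):
    # complex one-parameter group U_z(x) = rho^{iz} x rho^{-iz}
    return rho_power(1j * z) @ x @ rho_power(-1j * z)

def inner(x, y):
    # <x, y> = tr(rho x* y)
    return np.trace(rho @ x.conj().T @ y)

sharp = lambda x: x.conj().T

z = 0.3 - 0.7j
print("(U_z a)^# == U_{zbar}(a^#):",
      np.allclose(sharp(U(z, a)), U(np.conj(z), sharp(a))))
print("<a^#, b^#> == <U_{-i} b, a>:",
      np.isclose(inner(sharp(a), sharp(b)), inner(U(-1j, b), a)))
```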
§.§ Completely Dirichlet forms
Completely Dirichlet forms in the non-tracial setting were introduced by Goldstein and Lindsay <cit.> in the language of GNS Hilbert spaces of states (or weights) and by Cipriani <cit.> in the language of standard forms with a fixed cyclic vector. Our approach is somewhat different from both of these formulations in that we use left Hilbert algebras, but in view of the previous subsection it is equivalent to the formulation by Goldstein–Lindsay (and to that of Cipriani in case the left Hilbert algebra has a unit).
Let Å be a full left Hilbert algebra with completion H. Let C be the closure of {Δ^1/4a| a∈Å, 0≤π_l(a)≤ 1} and let P_C be the metric projection onto C. We say that a closed densely defined quadratic form ℰ on H is a Dirichlet form with respect to Å if ℰ∘ J=ℰ and ℰ(P_C(a))≤ℰ(a) for all a∈ H with Ja=a.
The Dirichlet form ℰ is called a completely Dirichlet form if for every n∈ℕ the quadratic form
ℰ^(n): H⊗ M_n()→ [0,∞], ℰ^(n)([ξ_ij])=∑_i,j=1^n ℰ(ξ_ij)
is a Dirichlet form with respect to Å⊙ M_n(). Here M_n() carries the normalized Hilbert–Schmidt inner product and the multiplication and involution on Å⊙ M_n() are given by [a_ij][b_ij]=[∑_k a_ikb_kj], [a_ij]^♯=[a_ji^♯].
A (completely) Dirichlet form ℰ with respect to Å is called modular (or GNS-symmetric) if ℰ∘ U_t=ℰ for all t∈ℝ.
Completely Dirichlet forms are of particular interest for their connection to semigroups of contractive completely positive maps on von Neumann algebras. Let us briefly sketch this correspondence. Proofs can be found in <cit.> for the wider class of KMS-symmetric semigroups. The result for GNS-symmetric semigroups follows from the fact that GNS symmetry is equivalent to KMS symmetry and commutation with the modular group (see <cit.> for example).
Let M be a von Neumann algebra. A quantum dynamical semigroup is a semigroup of normal contractive completely positive operators on M that is continuous in the point weak^∗ topology. If ϕ is a normal semi-finite faithful weight on M, a quantum dynamical semigroup (P_t) is called GNS-symmetric with respect to ϕ if ϕ∘ P_t≤ϕ for all t≥ 0 and
ϕ(P_t(x)^∗ y)=ϕ(x^∗ P_t(y))
for all x,y∈_ϕ and t≥ 0.
Every GNS-symmetric quantum dynamical semigroup gives rise to a strongly continuous semigroup (T_t) on L^2(M,ϕ), its GNS implementation, acting by T_tΛ_ϕ(x)=Λ_ϕ(P_t(x)) for x∈_ϕ, and the associated quadratic form is a modular completely Dirichlet form with respect to Å_ϕ. Vice versa, the strongly continuous semigroup associated with a modular completely Dirichlet form is the GNS implementation of a GNS-symmetric quantum dynamical semigroup.
We call a completely Dirichlet form a quantum Dirichlet form if the associated quantum dynamical semigroup consists of unital maps. A criterion in terms of the form itself is given in <cit.>.
§.§ Tomita bimodules and derivations
Tomita bimodules were introduced in <cit.> as codomains of the derivations associated with modular completely Dirichlet forms.
Let Å be a Tomita algebra. A Tomita bimodule over Å is an inner product space $̋ endowed with non-degenerate commuting left and right actions of Å, an anti-isometric involution 𝒥: $̋→$̋ and a complex one-parameter group (_z) of isometries such that
* ‖aξ b‖≤‖π_l(a)‖‖π_r(b)‖‖ξ‖ for a,b∈Å, ξ∈$̋,
* ⟨ aξ b,η⟩=⟨ξ, a^♯η b^♭⟩ for a,b∈Å, ξ,η∈$̋,
* _z(aξ b)=(U_z a)(_z ξ)(U_z b) for a,b∈Å, ξ∈$̋, z∈ℂ,
* 𝒥(aξ b)=(Jb)𝒥(ξ)(Ja) for a,b∈Å, ξ∈$̋,
* 𝒥_z =_z̅𝒥 for z∈ℂ.
Let ̋̅ be the completion of $̋. The first two bullet points imply that π_l(a)↦(ξ↦ aξ) extends to a non-degenerate ∗-homomorphism from π_l(Å) to B(̋̅). If this map can be extended to a normal ∗-homomorphism from π_l(Å)^'' to B(̋̅), then we say that $̋ is a normal Tomita bimodule. Requiring normality for the right action instead leads to the same notion of normal Tomita bimodule.
If Å is a Tomita algebra and $̋ a bimodule over Å, we call a linear map δ: Å→$̋ a derivation if it satisfies the product rule
δ(ab)=aδ(b)+δ(a)b
for a,b∈Å. If $̋ is a Tomita bimodule over Å, we say that a derivation δ: Å→$̋ is symmetric if δ∘ J=𝒥∘δ and δ∘ U_z=_z∘δ for all z∈ℂ.
If Å is a full left Hilbert algebra and ℰ a modular quantum Dirichlet form with respect to Å, it is shown in <cit.> that
Å_={a∈Å_0| U_z a∈() for all z∈}
is a Tomita subalgebra of Å_0 and a core for ℰ. Moreover, by <cit.> there exists a Tomita bimodule $̋ over Å and a symmetric derivation δ: Å_→$̋ such that
ℰ(a,b)=⟨δ(a),δ(b)⟩_$̋
for a,b∈Å_.
Under the minimality condition $̋=lin{δ(a)b| a,b∈Å_}, the pair (,̋δ) is uniquely determined by ℰ up to isometric isomorphism preserving the Tomita bimodule structure and intertwining the derivations <cit.>. By a slight abuse of notation, any such pair (,̋δ) is called the first-order differential structure associated with ℰ. If $̋ is a normal Tomita bimodule, the quantum Dirichlet form ℰ is called Γ-regular. A characterization in terms of the carré du champ is given in <cit.>.
§ Δ-TOPOLOGY AND COMPLETENESS
In this section we introduce a locally convex topology on the domain of a closable symmetric derivation, called the δ-topology. This topology is strong enough to ensure that the derivation extends to a derivation on the completion, which is a key technical ingredient in the proof of the main theorem later.
For the definition of the δ-topology recall that the Mackey topology τ(M,M_∗) on a von Neumann algebra M is the finest linear topology 𝒯 on M such that the topological dual of (M,𝒯) is M_∗. Equivalently, it is the finest locally convex topology on M that coincides with the strong^∗ topology on norm bounded sets <cit.>. It has the advantage over the other usual locally convex topologies on M of being complete, which is convenient for several of the following arguments.
Let Å be a Tomita algebra with completion H, let $̋ be a normal Tomita bimodule over Å and δ: Å→$̋ a symmetric derivation. We define the δ-topology 𝒯_δ on Å as the coarsest locally convex topology that makes the maps
Å→ H⊕̋̅, a↦(Δ^n a,δ(Δ^n a))
continuous with respect to the norm topology on H⊕̋̅ for all n∈ℤ and the maps
Å→ B(H), a↦π_l(Δ^n a)
continuous for the Mackey topology on B(H) for all n∈ℤ. Clearly, the δ-topology is stronger than the topology induced by the graph norm (‖·‖_H^2+‖δ(·)‖_^̋2)^1/2.
If is a Γ-regular modular quantum Dirichlet form and (,̋δ) the associated first-order differential structure, then Å_ is complete in the δ-topology.
Let (a_j) be a Cauchy net in Å_ with respect to the δ-topology. In particular, (Δ^n a_j,δ(Δ^n a_j))_j is Cauchy in ⊕̋̅ for all n∈. Since Δ^n and δ are closable on Å_, it follows that there exists a∈ such that (Δ^n a_j,δ(Δ^n a_j))→ (Δ^n a,δ̅(Δ^n a)) for all n∈. In particular, a∈⋂_n∈(Δ^n) and Δ^n a∈(δ̅)=() for all n∈.
Moreover, as the Mackey topology is complete, for n∈ there exists x_n∈ B() such that π_l(Δ^n a_j)→ x_n with respect to τ(B(),B()_∗). For b∈Å_ we have
x_n b=lim_j π_l(Δ^n a_j)b=lim_j π_r(b)Δ^n a_j=π_r(b)Δ^n a.
Since Å_ is dense in , it follows that Δ^n a∈Å_^'' and π_l(Δ^n a_j)→π_l(Δ^n a) for all n∈.
Altogether we conclude that a∈Å_ and a_j→ a in the δ-topology.
If Å is a Tomita algebra, $̋ a Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation, the inclusion of Å into its completion H extends to an injective map from the completion of Å in the δ-topology to H.
We have to show that if (a_j) is a Cauchy net in Å with respect to the δ-topology and a_j→ 0 in , then a_j→ 0 in the δ-topology. Since Δ^n, n∈, and δ are closable, we have (Δ^n a_j,δ(Δ^n a_j))→ 0 for all n∈. Furthermore, using the completeness of the Mackey topology and a similar argument as in the previous lemma, one sees that π_l(Δ^n a_j)→ 0 in τ(B(),B()_∗) for all n∈. Hence a_j→ 0 in the δ-topology.
Let Å^δ denote the set of all elements a∈ H for which there exists a net (a_j) in Å such that a_j→ a in H and (a_j) is Cauchy in the δ-topology. By the previous lemma, Å^δ is a completion of Å in the δ-topology, and we call it simply the δ-completion of Å. It is not hard to see that Å^δ is a Tomita subalgebra of (Å^'')_0 and contained in (δ̅).
Recall that if $̋ is a normal Tomita bimodule over Å, we can continuously extend the left and right actions of Å and the maps 𝒥 and _t, t∈ℝ, to the Hilbert completion ̋̅. This is usually not possible for _z, z∈ℂ∖ℝ. We define
^̋a={ξ∈̋̅| t↦_tξ has an entire extension}.
If it exists, this entire extension is unique and will be denoted by z↦_zξ. Clearly, $̋⊂^̋a and _z⊂_z for all z∈ℂ.
If we endow ^̋a with the coarsest locally convex topology that makes
^̋a→̋̅, ξ↦_inξ
continuous for all n∈, then ^̋a is complete.
If A is the unique non-singular positive self-adjoint operator in ̋̅ such that _t=A^it for t∈, then ^̋a=⋂_n∈(A^n) and ^̋a is the projective limit of the Banach spaces ((A^n)∩(A^-n),·_̋̅+A^n · _̋̅+A^-n · _̋̅) in the topology described in the lemma. In particular, ^̋a is complete.
Since$̋ is a normal Tomita bimodule over Å, the Hilbert completion ̋̅ has a canonical structure of a π_l(Å)^''-π_l(Å)^'' correspondence determined by
π_l(a)·ξ· Jπ_r(b)^∗ J=aξ b
for a,b∈Å and ξ∈$̋.
If a∈Å^δ, (a_j) is a net in Å such that a_j→ a in the δ-topology and ξ∈̋̅, then
_t(π_l(a)·ξ)=lim_j _t(π_l(a_j)ξ)=lim_j π_l(U_t a_j)_tξ=π_l(U_t a)·_tξ.
Thus, if ξ∈^̋a, then z↦π_l(U_z a)·_zξ is an entire continuation of t↦_t(π_l(a)·ξ), which implies π_l(a)ξ∈^̋a. Likewise, if b∈Å^δ, then ξ· Jπ_r(b)^∗ J∈^̋a. It is then routine to check that the bimodule structure given by aξ b=π_l(a)·ξ· Jπ_r(b)^∗ J, the group (_z)_z∈ℂ and the restriction of 𝒥 make ^̋a into a Tomita bimodule over Å^δ.
If Å is a Tomita algebra, $̋ is a Tomita bimodule over Å and δ: Å→$̋ is a closable symmetric derivation with closure δ̅, then δ̅(Å^δ)⊂^̋a and δ̅: Å^δ→^̋a is a symmetric derivation.
If a∈Å^δ and (a_j) is a net in Å such that a_j→ a in the δ-topology, then
_tδ̅(a)=lim_j _t δ(a_j)=δ(U_t a_j)=δ̅(U_t a).
It follows that t↦_tδ̅(a) has the entire continuation z↦δ̅(U_z a), which implies δ̅(a)∈̋̅^a. Again, routine computations show that the restriction of δ̅ to Å^δ is a symmetric derivation from Å^δ to ^̋a.
§ CLOSABILITY OF DERIVATIONS
In this section we give a simple criterion for the closability of derivations inspired by a well-known result (see <cit.> and <cit.> for the non-tracial case) on the closability of the derivation used in free probability.
If Å is a Tomita algebra and $̋ is a Tomita bimodule over Å, we say that ξ∈$̋ is a bounded vector if there exists C>0 such that ‖aξ b‖≤ C‖a‖‖b‖ for all a,b∈Å. In this case, the maps a↦ aξ and b↦ξ b extend to bounded linear operators from the completion of Å to $̋, which we denote by R(ξ) and L(ξ), respectively.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a derivation. If δ(Å) is contained in the space of bounded vectors, then (δ^∗) is a subbimodule of $̋ and
δ^∗(aξ b)=a^∗δ^∗(ξ)b-L(δ(a^∗))^∗(ξ b)-R(δ(b^∗))^∗ (aξ)
fora,b∈Åandξ∈(δ^∗).
Let a,b,c∈Å and ξ∈(δ^∗). By the product rule,
⟨ aξ b,δ(c)⟩ =⟨ξ,a^∗δ(c)b^∗⟩
=⟨ξ,δ(a^∗ c b^∗)-δ(a^∗)cb^∗-a^∗ cδ(b^∗)⟩
=⟨ aδ^∗(ξ)b-L(δ(a^∗))^∗(ξ b)-R(δ(b^∗))^∗ (aξ),c⟩.
Thus aξ b∈(δ^∗) and the claimed identity for δ^∗(aξ b) holds.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a derivation. If δ(Å) is contained in the space of bounded vectors and (δ^∗) is a cyclic subset, then δ is closable.
By the previous lemma, (δ^∗) is a subbimodule of $̋. Hence, if (δ^∗) is cyclic, then it is dense in $̋. Therefore, δ is closable.
§ COMPLETELY DIRICHLET FORMS ASSOCIATED WITH CLOSABLE DERIVATIONS
In this section we prove the main theorem of this article, Theorem <ref>, showing that the closure of the quadratic form associated with a closable symmetric derivation is a modular completely Dirichlet form.
As mentioned in the introduction, we rely on Haagerup's reduction method. To set the stage for its use, we first discuss crossed products of Tomita algebras and Tomita bimodules. To extend closable symmetric derivations to sufficiently large domains on the crossed product, we use the δ-completion technique developed in Section <ref>. Further, to reduce the problem to the tracial case, we need a “change of reference weight” argument and an analysis of approximation properties of completely Dirichlet forms. This will be dealt with in the following lemmas. Finally, in Proposition <ref> we discuss the relation between the derivation we started with and the first-order differential structure of the associated completely Dirichlet form.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation. Throughout this section we endow Å with the δ-topology and $̋ with the projective topology induced by the maps $̋→̋̅, ξ↦_inξ for n∈ℤ, and we assume that Å and $̋ are complete in these topologies. As discussed in Section <ref>, this can always be achieved by passing to the completions.
Let G be a countable subgroup of ℝ, viewed as a discrete group. The vector space C_c(G;Å)≅ C_c(G)⊙Å can be made into a Tomita algebra by the operations
(a∗ b)(g) =∑_h∈ GU_-ha(g-h)b(h),
a^♯(g) =U_-g(a(-g)^♯),
(U_z a)(g) =U_z a(g).
Moreover, the vector space C_c(G;)̋ becomes a normal Tomita bimodule over C_c(G;Å) with the operations
(aξ)(g) =∑_h∈ GU_-h a(g-h)ξ(h)
(ξ b)(g) =∑_h∈ G_-hξ(g-h)b(h)
(ξ)(g) =_-gξ(-g)
(_z ξ)(g) =_z ξ(g).
Furthermore, 1_C_c(G)⊙δ: C_c(G;Å)→ C_c(G;)̋ is a closable symmetric derivation, whose closure we denote by 1⊗δ̅.
We write Å̃ for the (1⊙δ)-completion of C_c(G;Å), ̋̃ for C_c(G;)̋^a and δ̃ for the restriction of 1⊗δ̅ to Å̃. By Lemma <ref> the map δ̃ is a (closable) symmetric derivation from Å̃ to ̋̃.
If x∈ L(G)⊗ 1_̋̅ and a∈Å̃, then x a, a x∈Å̃ and δ̃(x a)=xδ̃(a), δ̃(a x)=δ̃(a)x.
Let x=y⊗ 1 with y∈ L(G), let (y_i) be a bounded net in [G] such that y_i→ y in the strong^∗ topology and let x_i=y_i⊗ 1. Clearly, x_i→ x in the Mackey topology.
If a∈ C_c(G;Å), then x_i a∈ C_c(G;Å) and Δ^n(x_i a)=x_i Δ^n a, (1⊙δ)(x_i a)=x_i(1⊙δ)(a), π_l(Δ^n (x_i a))=x_iπ_l(Δ^n a). It follows that x_i a→ xa in ℓ^2(G;), the net (x_i a) is Cauchy in the δ̃-topology and (1⊙δ)(x_i a)→ x(1⊙δ)(a). Thus xa∈Å̃ and δ̃(xa)=xδ̃(a).
A similar argument shows that if (a_j) is a Cauchy net in C_c(G;Å) with respect to 𝒯_δ̃ and a_j→ a in ℓ^2(G;)̋, then (x a_j) is Cauchy with respect to 𝒯_δ̃ and xa_j→ xa in ℓ^2(G;)̋. Hence if a∈Å̃, then xa∈Å̃ and δ̃(xa)=xδ̃(a). The statement for ax can be proven analogously.
For the next lemma recall that Å_ϕ=Λ_ϕ(_ϕ∩_ϕ^∗) is the full left Hilbert algebra induced by the weight ϕ, the cone C_ϕ is the closure of {Δ_ϕ^1/4a| a∈Å_ϕ, 0≤π_l(a)≤ 1}, and M_ϕ denotes the centralizer of ϕ.
Let M be a von Neumann algebra, ϕ a normal semi-finite faithful weight on M and x∈ M_ϕ be positive and invertible. Let ψ=ϕ(x^1/2· x^1/2).
If ℰ is a modular (completely) Dirichlet form on L^2(M) with respect to Å_ϕ, x(ℰ)⊂(ℰ) and ℰ(xa,b)=ℰ(a,xb) for all a,b∈(ℰ), then ℰ is also a modular (completely) Dirichlet form with respect to Å_ψ.
Since x is invertible, the weight ψ is faithful and J_ψ=J_ϕ, and since x commutes with (Δ_ϕ^it), we have Δ_ψ^it=x^itΔ_ϕ^it(·)x^-it=Δ_ϕ^it(x^it· x^-it).
Let A be the positive self-adjoint operator associated with ℰ and T_t=e^-tA. The commutation relation x(ℰ)⊂(ℰ) and ℰ(xa,b)=ℰ(a,xb) for a,b∈(ℰ) implies that x commutes strongly with A^1/2. Hence T_t(xa)=x T_t(a) for all a∈ H and t≥ 0. Since T_t commutes with J_ϕ, we also have T_t(ax)=T_t(a)x for a∈ H, t≥ 0. In particular, (T_t) and (Δ_ψ^is) commute.
Moreover, C_ψ=x^1/4C_ϕ x^1/4. Indeed, a direct computation shows that _ψ=_ϕ and Λ_ψ(y)=Λ_ϕ(y)x^1/2 for y∈_ϕ. Hence, if y∈_ϕ with 0≤ y≤ 1, then
Δ_ψ^1/4Λ_ψ(y)=x^1/4Δ_ϕ^1/4Δ_ϕ(y)x^1/4∈ x^1/4C_ϕ x^1/4.
The converse inclusion follows by swapping the roles of ϕ and ψ.
Therefore, if a∈ C_ψ, then
T_t(a)=x^1/4T_t(x^-1/4a x^-1/4)x^1/4∈ C_ψ.
Thus ℰ is a Dirichlet form with respect to Å_ψ by <cit.>. The result for completely Dirichlet forms follows easily by applying the same argument to the forms ℰ^(n) on L^2(M⊗ M_n()).
Let M be a von Neumann algebra and ϕ a normal semi-finite faithful weight on M. Let (M_n) be an increasing sequence of von Neumann subalgebras with weak^∗ dense union and assume that M_n is the range of a ϕ-preserving conditional expectation E_n on M. Let H_n denote the closure of Λ_ϕ(_ϕ∩ M_n) and let P_n denote the orthogonal projection from H to H_n.
If ℰ is a closed densely defined quadratic form on H such that for every n∈ℕ the quadratic form ℰ|_H_n is a Dirichlet form with respect to Λ_ϕ(_ϕ∩_ϕ^∗∩ M_n) and ℰ∘ P_n≤ℰ, then ℰ is a Dirichlet form with respect to Å_ϕ.
Let (T_t) be the strongly continuous semigroup associated with ℰ. Since ℰ∘ P_n≤ℰ, we have T_t(H_n)⊂ H_n by Ouhabaz' theorem <cit.>. Thus T_t commutes with P_n, and it is easy to see that (T_t P_n) is the semigroup associated with ℰ|_H_n, viewed as a semigroup on H. In particular, T_t P_n J_ϕ=J_ϕ T_t P_n. In the limit we obtain T_t J_ϕ=J_ϕ T_t.
It remains to show that T_t(C_ϕ)⊂ C_ϕ for all t≥ 0. A direct computation shows that E_n and P_n are related by P_nΛ_ϕ(x)=Λ_ϕ(E_n(x)). Moreover, E_n is GNS-symmetric with respect to ϕ, which implies that P_n commutes with (Δ_ϕ^it). Thus P_n(C_ϕ) is the closure of {Δ_ϕ^1/4Λ_ϕ(x)| x∈_ϕ∩_ϕ^∗∩ M_n, 0≤ x≤ 1}. In particular, P_n(C_ϕ)⊂ C_ϕ.
Since _n is a Dirichlet form with respect to Λ_ϕ(_ϕ∩_ϕ^∗∩ M_n) and (T_t P_n) is the associated semigroup, we have T_t P_n(C_ϕ)⊂ P_n (C_ϕ). Moreover, since ⋃_n M_n is weak^∗ dense in M, we have P_n→ 1 strongly by Kaplansky's density theorem. Therefore, T_t(C_ϕ)⊂ C_ϕ.
To prove that the quadratic form associated with a closable symmetric derivation is a completely Dirichlet form, we will reduce the problem to the tracially symmetric case by means of Haagerup's reduction method. We only recall the necessary definitions here and refer to <cit.> for proofs in the case of states and to <cit.> for the extension to weights.
Let M=π_l(Å)^'', let ϕ be the weight induced by the full left Hilbert algebra Å^'' on M, let G=⋃_n∈ℕ2^-nℤ, let M̃=M⋊_σ^ϕG=π_l(Å̃)^'' and let ϕ̃ be the dual weight of ϕ on M̃. Let (a_n) be a sequence of self-adjoint elements of L(G)⊗ 1⊂𝒵(M̃_ϕ̃), ϕ_n=ϕ̃ e^-a_n, M_n=M̃_ϕ_n and τ_n=ϕ_n|_M_n. Here N_ψ denotes the centralizer of the weight ψ on N and 𝒵(N) is the center of the von Neumann algebra N.
By <cit.> the sequence (a_n) can be chosen such that
* M_n is semi-finite with normal semi-finite faithful trace τ_n,
* for each n∈ℕ there exists a conditional expectation E_n from M̃ onto M_n such that ϕ̃∘ E_n=ϕ̃ and σ^ϕ̃_t∘ E_n=E_n∘σ^ϕ̃_t for all t∈ℝ,
* E_n(x)→ x strongly^∗ for every x∈M̃.
In the following we fix a sequence (a_n) with these properties. The concrete construction is irrelevant for our purposes.
Let Å be a Tomita algebra with completion H, let $̋ be a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation. The closure ℰ of the quadratic form
H→ [0,∞], a↦‖δ(a)‖_^̋2 if a∈Å,
∞ otherwise
is a modular completely Dirichlet form with respect to Å^''. If moreover Å is unital, then ℰ is a modular quantum Dirichlet form.
We continue to use the notation from the previous discussion. The derivation δ̃: Å̃→̋̃ is a restriction of 1⊗δ̅. Let ℰ̃ denote the closure of the quadratic form
ℓ^2(G;H)→ [0,∞], a↦‖δ̃(a)‖_̋̃^2 if a∈Å̃,
∞ otherwise.
It is clear that ℰ̃(a)=‖(1⊗δ̅)(a)‖_ℓ^2(G;̋̅)^2 for a∈(ℰ̃). Furthermore, (ℰ̃)=(1⊗δ̅) and the strongly continuous semigroups (T_t) and (T̃_t) associated with ℰ and ℰ̃, respectively, are related by T̃_t=𝕀_ℓ^2(G)⊗ T_t.
The map ι: H→ℓ^2(G;H), a↦δ_0⊗ a is an isometric embedding such that ι(C_Å^'')= C_Å̃^''∩ι(H). Thus, if ℰ̃ is a (completely) Dirichlet form with respect to Å̃^'', then ℰ is a (completely) Dirichlet form with respect to Å^''.
Since M is in standard form on and M̃ is in standard form on ℓ^2(G;), these spaces can be canonically identified with L^2(M,ϕ) and L^2(M̃,ϕ̃), respectively, and we will tacitly do so in the following. Under these identifications, Å^''=Å_ϕ, Å̃^''=Å_ϕ̃ and Δ_ϕ̃^it=𝕀_ℓ^2(G)⊗Δ_ϕ^it.
Let
𝒜_n={x∈_ϕ̃∩_ϕ̃^∗∩ M_n|Λ_ϕ̃(x e^-a_n/2)∈Å̃}.
Since e^a_n/2∈M̃_ϕ̃, if x∈𝒜_n, then
Λ_ϕ̃(x)=Λ_ϕ̃(x e^-a_n/2)e^a_n/2∈Å̃
by Lemma <ref>. Reversing the roles of e^-a_n/2 and e^a_n/2 we get
𝒜_n={x∈_ϕ̃∩_ϕ̃^∗∩ M_n|Λ_ϕ̃(x)∈Å̃}.
Since Å̃ is a Tomita algebra, it follows easily that 𝒜_n is a ∗-algebra. Define an 𝒜_n-𝒜_n-bimodule structure on ℓ^2(G;̋̅) by
xξ y=Λ_ϕ̃(x)·ξ·Λ_ϕ̃^'(y).
Using that ̋̃ is a Tomita bimodule over Å̃, it is not hard to see that this left and right action are contractive (anti-) ∗-homomorphisms. Moreover, extends to an anti-unitary involution on ℓ^2(G;̋̅) intertwining the left and right action. We still denote this extension by .
Let
∂_n𝒜_n→ L^2(M̃,ϕ̃), ∂_n(x)=δ̃(Λ_ϕ̃(xe^-a_n/2)).
Since e^-a_n/2∈M̃_ϕ̃ and x∈M̃_ϕ_n, we have
∂_n(x^∗) =δ̃(Λ_ϕ̃(x^∗ e^-a_n/2))
=δ̃(Λ_ϕ̃^'( e^-a_n/2x^∗))
=δ̃(J̃Λ_ϕ̃(xe^-a_n/2))
=δ̃(Λ_ϕ̃(xe^-a_n/2))
=∂_n(x).
Moreover, it follows from Lemma <ref> combined with e^-a_n/2∈M̃_ϕ̃ and x,y∈M̃_ϕ_n that
∂_n(xy) =δ̃(Λ_ϕ̃(xye^-a_n/2))
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(x))·Λ_ϕ̃(y e^-a_n/2)
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃(e^a_n/2 y e^-a_n/2)
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃^'(σ^ϕ_n_-i/2(y))
=Λ_ϕ̃(x)·δ̃(Λ_ϕ̃(ye^-a_n/2))+δ̃(Λ_ϕ̃(xe^-a_n/2))·Λ_ϕ̃^'(y)
=x∂_n(y)+∂_n(x)y.
The operator ∂_n is closable when viewed as operator in L^2(M_n,τ_n) since δ̃ is closable and the map Λ_τ_n(x)↦Λ_ϕ̃(xe^-a_n/2) extends to an isometry ι_n from L^2(M_n,τ_n) to L^2(M̃,ϕ̃).
Since τ_n is a trace, <cit.> implies that the closure Q_n of the quadratic form
L^2(M_n,τ_n)→ [0,∞], a↦∂_n(x)^2 if a=Λ_τ_n(x), x∈𝒜_n,
∞ otherwise
is a completely Dirichlet form.
Let H_n=Λ_ϕ̃(_ϕ̃∩_ϕ̃^∗∩ M_n) and let _n be the closure of the quadratic form
H_n→ [0,∞], a↦δ̃(a)^2 if a∈Å̃,
∞ otherwise.
In other words, _n=Q_n∘ι_n^-1.
Note that ι_n maps {Λ_τ_n(x)| x∈_ϕ̃∩_ϕ̃^∗∩ M_n, 0≤ x≤ 1} onto {Λ_ϕ̃(x e^-a_n/2)| x∈_ϕ̃∩_ϕ̃^∗∩ M_n, 0≤ x≤ 1}. Since ϕ_n is a trace on M_n, the latter set coincides with {Λ_ϕ_n(x)| x∈Å_ϕ_n∩ H_n, 0≤ x≤ 1}. It follows that _n is a completely Dirichlet form with respect to Å_ϕ_n∩ H_n.
Moreover,
_n(Δ_ϕ_n^it a) =_n(e^-i a_n/2 t(Δ_ϕ̃^it a)e^ia_n/2 t)
=e^-ia_n/2 t(_tδ̃(a))e^ia_n/2 t^2
=δ̃(a)^2
=_n(a)
for a∈Å̃. This can easily be extended to the closure so that _n is a modular completely Dirichlet form.
By Lemma <ref> we have e^-a_n/2(_n)⊂(_n) and _n(e^a_n/2a,b)=_n(a,e^-a_n/2b) for a,b∈(_n). Furthermore, e^-a_n/2∈M̃_ϕ̃. Hence _n is also a modular completely Dirichlet form with respect to Å_ϕ̃∩ H_n=Λ_ϕ̃(_ϕ̃∩_ϕ̃^∗∩ M_n) by Lemma <ref>.
Let P_n denote the orthogonal projection from ℓ^2(G;) onto H_n. By definition, |_H_n=_n. To apply Lemma <ref>, we have to check that ∘ P_n≤.
Let (T_t) be the strongly continuous semigroup associated with . As discussed above, (𝕀_ℓ^2(G)⊗ T_t) is the strongly continuous semigroup associated with . The modular group of ϕ_n is given by Δ_ϕ_n^it=e^-ita_n(𝕀_ℓ^2(G)⊗Δ_ϕ^it)(·)e^ita_n. Since (T_t) commutes with (Δ_ϕ^it) and e^a_n∈ L(G)⊗ 1_, the semigroup (𝕀_ℓ^2(G)⊗ T_t) commutes with (Δ_ϕ_n^it).
Since M_n is the centralizer of ϕ_n, the subspace H_n is the fixed-point set of (Δ_ϕ̃_n^it). In particular, (𝕀_ℓ^2(G)⊗ T_t)(H_n)⊂ H_n. From Ouhabaz's theorem <cit.> we deduce ∘ P_n≤.
Now Lemma <ref> shows that is a modular completely Dirichlet form with respect to Å_ϕ̃.
If Å is unital with unit 1_Å, then the left and right action of Å on $̋ are unital since they are non-degenerate by definition. Thus
δ(1_Å)=1_Å·δ(1_Å)+δ(1_Å)· 1_Å-δ(1_Å)=0
and hence ℰ(1_Å)=0. Thus T_t(1_Å)=1_Å, which implies that ℰ is a quantum Dirichlet form.
In the situation of the previous theorem, we call ℰ the completely Dirichlet form associated with δ.
If Å is not unital, the completely Dirichlet form associated with a derivation is not necessarily a quantum Dirichlet form, even in the commutative case. For example, this is the case for the standard Dirichlet energy ℰ(f)=∫_Ω|∇ f|^2 with domain H^1_0(Ω)∩ L^∞(Ω) if Ω is a bounded Lipschitz domain.
If Å is a unital Tomita algebra, $̋ a normal Tomita bimodule over Å and δ: Å→$̋ a closable symmetric derivation with associated completely Dirichlet form ℰ, then the first-order differential calculus associated with ℰ is a corestriction of (^̋a,δ̅|_Å_). In particular,
δ̅(ab)=aδ̅(b)+δ̅(a)b
for a,b∈Å_.
Since Å is unital, ℰ is a modular quantum Dirichlet form by <ref>. Let (_̋,δ_) be a first-order differential calculus associated with ℰ. By definition, Å⊂Å_⊂Å^'' and the graph norm of δ̅ coincides with the graph norm of δ_ on Å_. Thus π_l(Å)^'' is strong^∗ dense in π_l(Å_)^'' and Å is a core for δ_. It follows that the linear hull of {δ_(a)b| a,b∈Å} is dense in _̋.
Let
Ulin{δ_(a)b| a,b∈Å}→,̋ U(δ_(a)b)=δ(a)b.
By <cit.> the map U is well-defined and extends to an isometric Å_-bimodule map from ̋̅_ to ̋̅ such that U(δ_(a))=δ(a) for a∈Å.
If a∈Å_, let (a_n) be a sequence in Å such that a_n-a_δ̅→ 0. As discussed above, this implies δ_(a_n)→δ_(a). Hence U(δ_(a))=δ̅(a). If a,b∈Å_, then
δ̅(ab) =U(δ_(ab))
=U(aδ_(b)+δ_(a)b)
=a U(δ_(b))+U(δ_(a))b
=aδ̅(b)+δ̅(a)b.
Moreover, δ J=δ can be extended by continuity to δ̅J=δ̅, and
→̋̅, z↦δ̅(U_z a)
is an entire continuation of t↦_t δ̅(a) for a∈Å_ by <cit.>.
Thus δ̅(Å_)⊂^̋a and δ̅ is a symmetric derivation on Å_. The statement now follows from the uniqueness of the first-order differential calculus associated with a modular completely Dirichlet form <cit.>.
The previous result holds more generally with the same proof if Å is not necessarily unital, but the completely Dirichlet form associated with δ is still a quantum Dirichlet form.
In the light of Lemma <ref>, one has Å^δ⊂Å_ in the situation of the previous proposition. It is an interesting question if one always has equality or if different derivations with δ-complete domains can have the same associated completely Dirichlet form.
§ EXAMPLES
In this section we present several classes of derivations that give rise to modular completely Dirichlet forms according to Theorem <ref>. The first three classes of examples concern inner derivations, before we treat derivations arising in non-tracial free probability in Example <ref> and derivations induced by cocycles on locally compact groups in Example <ref>.
Let Å be a Tomita algebra, $̋ a normal Tomita bimodule over Å and ξ∈$̋ a bounded vector. Assume that there exists ω∈ℝ such that _t ξ=e^iω tξ for all t∈ℝ.
The map
δ: Å→$̋⊕$̋, a↦ i(ξ a-aξ,(𝒥ξ)a-a(𝒥ξ))
is a bounded derivation, and it is symmetric when $̋⊕$̋ is endowed with the involution (η,ζ)↦ (𝒥ζ,𝒥η) and the complex one-parameter group (e^-iω z_z,e^iω z_z).
It follows that the closure of the quadratic form
Å→ [0,∞), a↦‖ξ a-aξ‖_^̋2+‖(𝒥ξ)a-a(𝒥ξ)‖_^̋2
is a (bounded) modular completely Dirichlet form with respect to Å^''. In the case $̋=Å, this was first proven by Cipriani <cit.>. See also <cit.> for arbitrary Tomita bimodules $̋ over Å.
The next example is a (partial) extension of the previous example allowing for vectors implementing the inner derivation that are not necessarily bounded.
Let Å be a Tomita algebra with Hilbert completion . For ξ∈(Δ^1/2) the operator
π_l^0(ξ)Å→, a↦ξ a
is closable since π_l^0(ξ^♯)⊂π_l^0(ξ)^∗. Likewise, if ξ∈(Δ^-1/2), then
π_r^0(ξ)Å→, a↦ aξ
is closable with π_r^0(ξ^♭)⊂π_r^0(ξ)^∗. Hence if ξ∈(Δ^-1/2)∩(Δ^1/2), then π_l^0(ξ)-π_r^0(ξ) is closable with π_l^0(ξ^♯)-π_r^0(ξ^♭)⊂ (π_l^0(ξ)-π_r^0(ξ))^∗.
Now assume that there exists ω∈ such that Δ^itξ=e^iω tξ for all t∈. This implies in particular ξ∈(Δ^-1/2)∩(Δ^1/2).
Similar to the last example, one can turn Å⊕Å into a Tomita bimodule over Å if one equips it with the usual bimodule structure, the involution (η,ζ)↦ (Jζ,Jη) and the complex one-parameter group (e^-iω zU_z,e^iω zU_z). Then the map
δÅ→Å⊕Å, a↦ i(ξ a-aξ,(Jξ)a-a(Jξ))
is a closable symmetric derivation.
Thus the closure of the quadratic form
Å→ [0,∞), a↦ξ a-aξ_^2+(Jξ)a-a(Jξ)_^2
is a modular completely Dirichlet form with respect to Å^''. This result was first obtained by Cipriani and Zegarlinski <cit.>.
The previous examples require eigenvectors of the modular group to construct a symmetric derivation, which may be hard to find. In the following examples we show that in certain situations one can start with an arbitrary element if one “averages” the action of the modular group to ensure modularity.
Let M be a von Neumann algebra with separable predual. A normal semi-finite faithful weight ϕ on M is called integrable <cit.> if
_ϕ={x∈ M: ∫_σ^ϕ_t(x^∗ x) dt exists in the σ-strong topology}
is weak^∗ dense in M.
If ϕ is integrable, the set
Å={Λ_ϕ(x)| x∈ M analytic for σ^ϕ, σ^ϕ_z(x)∈_ϕ∩_ϕ^∗∩_ϕ∩_ϕ^∗ for all z∈}
is a Tomita subalgebra of (Å_ϕ)_0 with Hilbert completion L^2(M) and π_l(Å)^''=M, as can be seen from <cit.> together with a standard mollifying argument.
Let (V_t) be the translation group on L^2(), that is, V_t f(s)=f(s+t), and let L^2()^a be the set of all entire analytic elements for (V_t). Endow L^2()^a⊙Å with the left and right action of Å given by a(f⊗ b)c=f⊗ abc, the complex one-parameter group (V_z⊙ U_z)_z∈ and the involution f⊗ a↦f̅⊗ Ja. It can be checked that this makes L^2()^a⊙Å into a normal Tomita bimodule, which we denote by $̋.
Let a∈(Δ_ϕ^1/2)∩(Δ_ϕ^-1/2) with Ja=a and define
δ: Å→ L^2(ℝ;L^2(M)), δ(b)(s)=(U_-sa)b-b(U_-sa).
We have
δ(U_t b)(s) =(U_-sa)(U_t b)-(U_t b)(U_-sa)
=U_t((U_-(s+t)a)b-b(U_-(s+t)a))
=U_t δ(b)(s+t).
Thus δ∘ U_t=(U_t⊗ V_t)∘δ. In particular, δ maps into ^̋a.
It is not hard to check that δ: Å→^̋a is a symmetric derivation. To show closability, first note that for every fixed s∈ℝ the map b↦δ(b)(s) is closable, as seen in the previous example. If b_n→ 0 and δ(b_n)→ξ, then there exists a subsequence such that δ(b_n_k)(s)→ξ(s) for a.e. s∈ℝ. Closability of the map b↦δ(b)(s) implies ξ(s)=0 for a.e. s∈ℝ, which proves the closability of δ.
Thus the closure of the quadratic form
Å→ [0,∞), b↦∫_ℝ‖(U_-sa)b-b(U_-sa)‖^2 ds
is a modular completely Dirichlet form.
If we drop the assumption Ja=a, a similar argument shows that the closure of the quadratic form
Å→ [0,∞), b↦∫_ℝ(‖(U_-sa)b-b(U_-sa)‖^2+‖(U_-sJa)b+b(U_-sJa)‖^2) ds
is a modular completely Dirichlet form with respect to Å^''.
A similar construction is possible if one starts with a weight with periodic modular group instead of an integrable weight and integrates over a period of the modular group.
The following class of examples of derivations was introduced by Nelson <cit.> in the context of non-tracial free probability.
Let M be a von Neumann algebra, ϕ a normal faithful state on M and B⊂ M a ∗-subalgebra. Let ∂ B→ M⊗ M be a linear map such that
∂(xy)=(x⊗ 1)·∂(y)+∂(x)· (1⊗ y)
for x,y∈ B. Note that Nelson works with M⊗ M^op instead, but under the identification x⊗ y↦ x⊗ y^op, the M-bimodules M⊗ M and M⊗ M^op (with the bimodule structure used in <cit.>) are isomorphic.
Let ω∈ and write M_∞ for the set of entire analytic elements for σ^ϕ. Nelson <cit.> calls the map ∂ an e^ω-modular derivation if B⊂ M_∞, B is invariant under σ^ϕ_z for all z∈, ∂(B)⊂ M_∞⊙ M_∞ and
∂(σ^ϕ_z(x))=e^iω z(σ^ϕ_z⊗σ^ϕ_z)(∂(x))
for all x∈ B and z∈.
One example given by Nelson is the free difference quotient from free probability (see <cit.> in the non-tracial case). Given a ∗-subalgebra B of M and an element a∈ M that is algebraically free from B (and a^∗ is algebraically free from a if a≠ a^∗), let
∂_a: B[a]→ B[a]⊙ B[a], ∂_a(a)=1⊗ 1, ∂_a|_B=0
(and ∂_a(a^∗)=0 if a^∗≠ a). If a is an eigenvector of Δ_ϕ to the eigenvalue e^ω, then ∂_a is an e^ω-modular derivation.
Let us see how an e^ω derivation gives rise to a symmetric derivation in our sense. For x,y∈ M let (x⊗ y)^†=y^∗⊗ x^∗. The conjugate derivation of ∂ is the map
∂̂ B→ M_∞⊙ M_∞, ∂̂(x)=∂(x^∗)^†.
Let Å=Λ_ϕ(B). Since B consists of entire analytic elements for σ^ϕ and is invariant under σ^ϕ_z for z∈ℂ, the set Å is a Tomita subalgebra of (Å_ϕ)_0=Λ_ϕ(M_∞).
Let =̋(Λ_ϕ(M_∞)⊙Λ_ϕ(M_∞))^⊕ 2 with left and right action of Å given by
a(ξ_1⊗η_1,ξ_2⊗η_2)b=(aξ_1 ⊗η_1 b,aξ_2 ⊗η_2 b),
involution given by (ξ_1⊗η_1,ξ_2⊗η_2)↦ (Jη_2⊗ Jξ_2,Jη_1⊗ Jξ_1), and complex one-parameter group (_z)=(e^iω zΔ_ϕ⊗ϕ^iz,e^-iω zΔ_ϕ⊗ϕ^iz). One can check that this makes $̋ into a normal Tomita bimodule over Å.
Let
δ: Å→$̋, δ(Λ_ϕ(x))=(Λ_ϕ⊗ϕ(∂(x)),Λ_ϕ⊗ϕ(∂^†(x))).
The product rules for ∂ and ∂̂ translate into the product rule for δ, the e^ω-modularity of ∂ ensures δ∘Δ_ϕ^iz=_z∘δ, and the definitions of ∂̂ and 𝒥 are tailored to guarantee δ∘ J_ϕ=𝒥∘δ.
All of these properties follow by routine calculations; let us just show the product rule (for the first component of) δ as an illustration. Let δ_1(Λ_ϕ(x))=Λ_ϕ⊗ϕ(∂(x)). By the product rule for ∂ we have
δ_1(Λ_ϕ(xy)) =Λ_ϕ⊗ϕ((x⊗ 1)∂(y)+∂(x)(1⊗ y))
=Λ_ϕ⊗ϕ((x⊗ 1)Λ_ϕ⊗ϕ(∂(y))+∂(x))(1⊗σ^ϕ_-i/2(y))
=(π_l(Λ_ϕ(x))⊗ 1)δ_1(Λ_ϕ(y))+(1⊗π_r(Λ_ϕ(y)))δ_1(Λ_ϕ(x)).
Thus, if δ is closable, the closure of the associated quadratic form is a completely Dirichlet form with respect to Å_ϕ on the GNS Hilbert space L^2(M,ϕ).
For comparison, Nelson showed <cit.> that one gets a completely Dirichlet form on the GNS Hilbert space L^2(M_ϕ,ϕ) of the centralizer M_ϕ of ϕ, which is of course a tracial von Neumann algebra.
Our methods allow us to extend this result to the “fully” non-tracial setting in that we obtain a modular completely Dirichlet form on the GNS Hilbert space of M itself, not just that of the centralizer. Note, however, that Nelson's definition of the map δ between L^2 spaces seems slightly different, owing to the use of M⊗ M^op instead of M⊗ M.
The last example concerns group von Neumann algebras. The case of discrete groups was treated in <cit.>, but to cover general locally compact groups, possibly non-unimodular, one needs the theory for non-tracial reference weights as developed here.
Let G be a locally compact group with left Haar measure μ and modular function Δ_G. As discussed in <cit.>, the space C_c(G) of compactly supported continuous functions on G with the L^2 inner product, the convolution product, the involution f^♯(g)=Δ_G(g)^-1f̅(g^-1) and the complex one-parameter group U_z f(g)=Δ_G(g)^izf(g) forms a Tomita algebra. We write λ and ρ for the associated left and right actions of C_c(G) on L^2(G) and Å_G for the associated full left Hilbert algebra.
Let π be a strongly continuous orthogonal representation of G on the real Hilbert space H. A continuous map b: G→ H is called a 1-cocycle if b(gh)=b(g)+π(g)b(h) for all g,h∈ G. We extend π to a unitary representation of G on the complexification H^ℂ of H and write ξ↦ξ̅ for the anti-unitary involution induced by H⊂ H^ℂ.
On C_c(G;H^) define a left and right action of C_c(G) by
(f∗ξ)(g) =∫_G f(h)π(h)ξ(h^-1g) dμ(h)
(ξ∗ f)(g) =∫_G f(h^-1g)ξ(h) dμ(h),
an anti-unitary involution by (𝒥ξ)(g)=-Δ_G(g)^-1/2π(g)ξ̅(g^-1) and a complex one-parameter group by _z ξ(g)=Δ_G(g)^izξ(g). One can check that C_c(G;H^ℂ) with these operations is a Tomita bimodule over C_c(G).
Let
δ: C_c(G)→ C_c(G;H^ℂ), δ(f)(g)=f(g)b(g).
Using the cocycle property of b, one gets
δ(f_1∗ f_2)(g) =∫_G f_1(h)f_2(h^-1g) dμ(h) b(g)
=∫_G f_1(h)f_2(h^-1g)(π(h)b(h^-1g)+b(h)) dμ(h)
=(f_1∗δ(f_2))(g)+(δ(f_1)∗ f_2)(g).
It is readily verified that δ also satisfies δ∘ J=𝒥∘δ and δ∘ U_z=_z∘δ for all z∈ℂ. Hence δ is a symmetric derivation. As a multiplication operator, it is clearly closable.
Therefore,
ℰ: L^2(G,μ)→ [0,∞], ℰ(f)=∫_G |f(g)|^2‖b(g)‖^2 dμ(g)
is a modular completely Dirichlet form with respect to Å_G. The associated quantum dynamical semigroup on L(G) is given by
P_t(∫_G x̂(g)λ(g) dμ(g))=∫_G e^-t‖b(g)‖^2x̂(g)λ(g) dμ(g).
In this case, complete positivity of P_t also follows directly from Schönberg's theorem, as g↦‖b(g)‖^2 is a conditionally negative definite function on G.
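To make the last remark concrete, here is a small numerical illustration (our own, for the simplest possible case): take G=ℤ with the trivial representation π on H=ℝ and the cocycle b(g)=g, so that g↦ b(g)^2=g^2 is conditionally negative definite. Schönberg's theorem then guarantees that g↦ e^-tg^2 is positive definite for every t≥ 0, which is what underlies the complete positivity of P_t; the sketch checks this by verifying that the corresponding Gram matrices are positive semidefinite.

```python
import numpy as np

def gram_matrix(f, points):
    # Gram matrix [f(g - h)]_{g,h} of a function f on the group Z
    return np.array([[f(g - h) for h in points] for g in points])

def is_positive_semidefinite(m, tol=1e-10):
    return np.min(np.linalg.eigvalsh(m)) > -tol

points = np.arange(-10, 11)
for t in (0.1, 1.0, 10.0):
    f = lambda g, t=t: np.exp(-t * g ** 2)
    print("t =", t, "positive definite:", is_positive_semidefinite(gram_matrix(f, points)))
```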
[article]citetitle#1[article]title#1 |
http://arxiv.org/abs/2307.06122v2 | 20230712122046 | The endpoint of partial deconfinement | [
"David Berenstein",
"Kai Yan"
] | hep-th | [
"hep-th"
] |
^† Department of Physics, University of California, Santa Barbara, CA 93106
^ Department of Physics,
The University of Chicago, 933 East 56th Street, Chicago, Illinois 60637, USA
We study the matrix quantum mechanics of two free hermitian N× N matrices subject to a singlet constraint in the microcanonical ensemble. This is the simplest example of a theory that at large N has a confinement/deconfinement transition. In the microcanonical ensemble, it also exhibits partial confinement with a Hagedorn density of states. We argue that the entropy of these configurations can be computed by a counting of states based on the fact that the relevant Young diagrams are dominated by those of the VKLS shape. When the shape reaches the maximal depth allowed for a Young diagram of SU(N), namely N, we argue that the system stops exhibiting the Hagedorn behavior. The number of boxes (energy) at the transition is N^2/4, independent of the charge of the state.
The endpoint of partial deconfinement
David Berenstein ^†, Kai Yan ^†,
=====================================
§ INTRODUCTION
The confinement/deconfinement transition plays an important role in the study of gauge theories.
Thanks to the AdS/CFT correspondence, the confined/deconfined phases can be associated with spacetimes without and with a black hole, respectively <cit.>. On the gravity side, this transition is the Hawking-Page first order phase transition <cit.>.
The physics in AdS tells an additional story. For low energies, there is a Hagedorn density of states (basically, we have a spectrum of strings propagating in an AdS spacetime).
The Hagedorn temperature and details of the phase transition were studied perturbatively in <cit.>. The Hagedorn temperature in N=4 SYM at large N has been computed more recently using methods of
integrability <cit.>.
Currently, this part of the behavior of the duality, at energies that are low with respect to N^2 but still large compared to the string scale, can be claimed to be well understood.
Usually, in the study of first order phase transitions, there is a Maxwell construction that lets one fix the temperature at the transition temperature and one can vary the energy by occupying different regions of space with different phases of the theory. This is a coexistence between two phases. This way, the temperature stays fixed when one varies the energy. In the Hagedorn setup, the exponential growth of states fixes the temperature by different means and usually occurs at a higher temperature than the first order Hawking-Page phase transition. However, as shown in <cit.> (see also <cit.>), at zero coupling
the two transitions are the same.
From the point of view of black hole physics, small black holes have negative specific heat, while large black holes have positive specific heat. The small black holes are thermodynamically unstable in the canonical ensemble. If one fixes the energy instead of the temperature, one can have negative specific heat. This just indicates a faster growth of entropy with the energy than one would naively imagine. Basically, one needs (∂^2_ES)>0 to get a negative specific heat. The Hagedorn behavior S∝ E sits exactly at infinite specific heat, and any perturbation can in principle turn the specific heat negative.
These arguments suggest that there should be a notion of a Maxwell construction between
two phases that describes the Hagedorn behavior at zero coupling, as the Hagedorn and the
confinement/deconfinement transition coincide. The thermodynamic limit in this setup is associated with phase transitions at large N, so the growth of states is produced by growing the size of the gauge group, not the volume of space.
A notion of a mixture of confinement and deconfinement should occur in the variables that are becoming a thermodynamic volume. In this case, the notion of volume is in the labels of the internal degrees of freedom of the matrices themselves. This idea was proposed as a way to understand the small black holes in AdS space <cit.>.
A notion of a subgroup being deconfined while the rest is confined is called partial deconfinement (see <cit.>
for a short review). A natural question is whether the process of going from partial deconfinement to full deconfinement is a crossover, or whether there is a phase transition that separates them.
In <cit.> it was argued that there is a phase transition closely related to the Gross-Witten-Wadia <cit.> transition separating partial deconfinement and complete deconfinement. Similar observations about phase transitions at large N related to deconfinement are found in <cit.> (see also <cit.>).
The main issue to be concerned about is that if one wants to understand the phase transition well, one needs to fix the energy, rather than the temperature.
Standard path integral methods in imaginary time work well if one fixes the temperature. Fixing the energy is not as simple. Counting states directly can be very hard. This is why it is important to have simple models where the behavior one wants to study can be understood in detail.
In this paper, we study such a simple model. The model we consider is the quantum mechanics of two free N× N hermitian matrices, subject to a singlet constraint. For simplicity, the angular frequency of the matrices is set equal to one, so that the energy and the occupation number are the same. This gauge theory is one of the simplest that exhibits Hagedorn behavior and where partial deconfinement has been argued to be valid <cit.>. It has also been argued that generic corrections can turn the specific heat negative <cit.>, as would be expected from a system that could in principle describe small AdS black holes. The theory also has a conserved charge, so one can study the model as a function of both energy and charge.
Large N counting suggests that we parametrize the information in terms of ϵ=E/N^2 and the ratio of charge to energy q=Q/E. The large N transition is studied by taking N→∞ keeping these quantities fixed.
In this short note, we study the counting of states combinatorially using techniques from representation theory and tensor products of said representations.
The goal is to better understand what fraction of the gauge group is deconfined as a function of the energy and charge, and to use this information to make predictions about the locus of the phase transition. The states are determined by triples of Young diagrams, and their degeneracy within this representation is given by squares of Littlewood-Richardson coefficients.
The numbers of boxes in the Young diagrams give the occupation numbers of each of the matrices and the total occupation number.
The large N limit requires large representations, and a lot of our results are related to the asymptotic growth of Littlewood-Richardson coefficients, following results in <cit.>. The most important piece of information is that the typical shape of the Young diagrams that realize these estimates is the VKLS shape, attributed to Vershik, Kerov, Logan, and Shepp <cit.>, and that these shapes dominate the entropy.
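As a quick illustration of this dominance (our own sketch, separate from the numerical analysis described below): the number of standard Young tableaux of a shape λ with k boxes is f^λ=k!/∏(hooks), and simply maximizing f^λ over all partitions of k already produces diagrams whose rescaled profile approaches the VKLS curve, with the longest row and column growing like √(k) (asymptotically 2√(k)).

```python
from math import factorial

def partitions(k, max_part=None):
    """Generate partitions of k as weakly decreasing tuples."""
    if max_part is None:
        max_part = k
    if k == 0:
        yield ()
        return
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first):
            yield (first,) + rest

def num_syt(shape):
    """Number of standard Young tableaux of `shape`, via the hook length formula."""
    k = sum(shape)
    col_lengths = [sum(1 for row in shape if row > j) for j in range(shape[0])]
    hook_product = 1
    for i, row in enumerate(shape):
        for j in range(row):
            hook_product *= (row - j) + (col_lengths[j] - i) - 1
    return factorial(k) // hook_product

k = 20
dominant = max(partitions(k), key=num_syt)
print("dominant shape:", dominant)
print("longest row, longest column:", dominant[0], len(dominant))
print("2*sqrt(k) =", round(2 * k ** 0.5, 2))
```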
The transition occurs when the typical shape, rescaled to the number of boxes, reaches the maximum depth allowed by SU(N) representations. This occurs at an energy E=N^2/4 (there are subleading corrections in N) regardless of the charge of the state.
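To spell out the arithmetic behind this number (a heuristic restatement of the argument, not an independent derivation): after the standard rescaling by √(k), the VKLS shape of a diagram with k boxes is supported on an interval of length 4, so the unrescaled diagram has depth (and width) of roughly 2√(k) boxes. Requiring this to fit within the at most N rows allowed for SU(N) gives
2√(k)≲ N ⟺ k≲ N^2/4,
and since the energy is the total number of boxes, the Hagedorn growth can persist only up to E≃ N^2/4, irrespective of how the boxes are distributed between the two matrices, i.e., of the charge.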
The paper is organized as follows. In section <ref>, we introduce the model we study and the method of counting states using representation theory and Young diagrams. We explain that the counting of states is computed by adding squares of Littlewood-Richardson coefficients and that these must become large. Basically, there are too few Young diagrams to give the correct counting of states, so most of the counting resides in the multiplicities of the representations. We then use results in combinatorics to show that at large energy, the state counting is dominated by a specific shape: the VKLS shape. If a transition from partial deconfinement to full deconfinement is to occur in the microcanonical ensemble, we conjecture that this must happen when the VKLS shape becomes disallowed at finite N (that is, when the rescaled shape would have depth greater than N for the given number of boxes in the Young diagram).
This predicts a specific energy for the transition at large N.
In section <ref> we address our conjecture numerically. We compute the Littlewood-Richardson coefficients and verify that the shapes that maximize them are VKLS: they are close to minimizing the product of hook lengths for a fixed number of boxes. We also observe that there are non-trivial critical exponents once the energy gets larger than the transition energy, verifying that the transition is weakly first order (it is somewhere between first and second order). We present numerical evidence for the main claim: that the transition happens at a fixed energy E=N^2/4 regardless of the charge of the state.
Finally, in section <ref> we conclude.
§ COUNTING STATES AND THE TYPICAL YOUNG TABLEAUX
The system we will be studying is a matrix quantum mechanics of two hermitian matrices X,Y. The Hamiltonian is given by
Ĥ= 1/2tr[ P_X^2+P_Y^2+ X^2+ Y^2]
and notice that each of the 2N^2 oscillators has angular frequency ω=1. Since the theory is free, the energy is identical to the occupation number plus the zero-point energy. For convenience, we set the energy of the ground state to zero. The occupation numbers of X and Y are also conserved separately; we call the difference of these occupation numbers the charge of the state. The system also has an O(2) symmetry that rotates X into Y, which is a different symmetry that we will not be concerned with directly.
The system has an SU(N) symmetry that acts by conjugation X→ U X U^-1, Y→ U Y U^-1.
We will restrict to the singlet sector under the SU(N) symmetry. Our goal is to analyze this system in the microcanonical ensemble at large N and at different values of the energy.
The scaling must be such that E= N^2 ϵ, where ϵ is the energy normalized by N^2, the growth of the number of degrees of freedom at large N. We will do similar rescalings with the entropy.
The first goal is to show the Hagedorn behavior of states for this system. There is a representation of the counting of states in terms of traces. However, it is more instructive to start with the partition function at infinite N for the singlet states. This has been computed in <cit.>. The generating function of states is given by
Z=∏_n=1^∞1/1 -x^n-y^n
The powers of x count how many X quanta are excited in total and the powers of y count how many Y quanta are excited; x and y should be taken to be positive real variables.
The partition function is convergent so long as x+y<1. Let us concentrate on the first factor,
Z_1= 1/1-x-y= ∑_k=0^∞ (x+y)^k
If we fix the energy E, we need to fix k=E. There are exactly 2^k states accounted for in this sum (fix k first and set x=y=1). These are all the possible words made of x,y that have a length of exactly k. Each letter can be chosen to be x or y at any position of the word.
If we include the other factors from the product, we have additional positive contributions.
This shows that the number of states grows at least exponentially with the energy, giving us a Hagedorn behavior. The entropy would be S≥ k log 2 = E log 2.
Using the thermodynamic relation T dS = dE, we find a temperature T= (log 2)^-1= β^-1.
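A quick way to see this growth explicitly (our own numerical sketch): setting x=y=t in the generating function above gives Z(t,t)=∏_n 1/(1-2t^n), and the coefficient of t^E counts all singlet states of energy E at infinite N. The snippet below expands this product and shows that the ratio of successive coefficients approaches 2, i.e. S(E)≈ E log 2.

```python
# Expand prod_{n>=1} 1/(1 - 2 t^n) up to order t^Emax and inspect coefficient growth.
Emax = 40
coeffs = [0] * (Emax + 1)
coeffs[0] = 1
for n in range(1, Emax + 1):
    # multiply the series by 1/(1 - 2 t^n), i.e. B_e = A_e + 2 B_{e-n}
    for e in range(n, Emax + 1):
        coeffs[e] += 2 * coeffs[e - n]
for E in (10, 20, 30, 40):
    print(E, coeffs[E], round(coeffs[E] / coeffs[E - 1], 4))
```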
We can now also fix the charge Q=(n_x-n_y)/2.
When we expand the term (x+y)^k, we get
(x+y)^k = ∑_n_1+n_2=k\binom{k}{n_1} x^n_1 y^n_2
This can also be interpreted probabilistically. The probability of getting an X is n_1/k= (1/2 +q) and the probability of getting a Y is n_2/k= (1/2-q), where we have introduced the average charge per letter
q= Q/E = 1/2 (n_1-n_2)/(n_1+n_2)
and -1/2≤ q≤1/2.
The (Shannon) entropy of such words is the number of letters times the entropy per letter
S= E(- p_x log(p_x) -p_y log (p_y))
The expression β_q =- p_x log(p_x) -p_y log (p_y) can be interpreted as an effective inverse temperature. In terms of q, it is given by
β_q= -(1/2 +q) log(1/2 +q) - (1/2 -q) log(1/2 -q),
and we notice that when we set q=0 we recover the original result for arbitrary words β= log(2).
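The per-letter entropy β_q is elementary to check numerically. The following snippet is a minimal sketch, not part of the original computation: it verifies that β_q reduces to log 2 at q=0 and that, for long words, the logarithm of the exact binomial count per letter approaches β_q up to a Stirling correction of order log(k)/k. The value q=1/6 is chosen only because it reappears in the charge-dependence analysis later.

```python
from math import log, comb

def beta_q(q):
    # Per-letter Shannon entropy of words with X-fraction 1/2 + q and Y-fraction 1/2 - q.
    return -(0.5 + q) * log(0.5 + q) - (0.5 - q) * log(0.5 - q)

print(abs(beta_q(0.0) - log(2)) < 1e-12)       # True: q = 0 recovers beta = log 2

# Cross-check against the exact number of length-k words at fixed charge:
# log C(k, n_1) / k -> beta_q(q) as k grows (up to a ~log(k)/k Stirling correction).
k, q = 2000, 1.0 / 6.0
n1 = round(k * (0.5 + q))                      # number of X letters
print(log(comb(k, n1)) / k, beta_q(q))         # the two numbers approach each other
```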
Now, let us consider finite N at large temperature. In this limit, a classical physics computation should be accurate. We have 2N^2 degrees of freedom and we have N^2 constraints. It is easy to argue, by a scaling argument (see <cit.> for example) that one should have exactly E= N^2 T in this classical limit.
The basic idea is that the Gibbs partition function is given by
Z∼∫ (d^2N^2 p) (d^2N^2 q) δ^N^2( p· q) exp(-β p^2/2-β q^2/2)
and one can then rescale p,q to eliminate β from the exponent. The quadratic constraints of the gauge transformations are schematically written as p· q, understanding that these are N× N matrices of constraints.
The measure scales like β^-2N^2 and each delta function constraint, which is quadratic in p,q, scales like β, giving a total of β^N^2 from the constraints. This leaves us with a total scaling of β^-N^2. This is the same as a partition function with N^2 harmonic oscillator degrees of freedom in phase space. If one adds a delta function of the energy, one needs to replace N^2 by N^2-1 above.
The entropy, by the thermodynamic relation T dS= dE then behaves as S≃ N^2 log(T) ∼ N^2 log (E/N^2). In this regime, the entropy only grows logarithmically with the energy as opposed to linearly in the energy.
Notice that we are not taking into account the integration constant for the entropy carefully, as is standard in a classical calculation.
Since the behaviors at low and high energies are very different, there must be either a crossover from the Hagedorn behavior described above, S∝ E, or an actual phase transition at large N that separates these two behaviors.
Both of these possibilities, by large N counting, should occur at an energy that scales with N^2. The idea of partial deconfinement versus full deconfinement is that this change of behavior is actually a continuous phase transition: that some quantities become discontinuous at some value of the energy with some non-trivial critical exponents. One needs to keep ϵ=E/N^2 finite when taking N→∞ to see the phase transition.
We're being very careful here to state that the transition occurs at a fixed energy per degree of freedom. The Hagedorn behavior makes the temperature stay constant at the Hagedorn temperature of the system β=log(2) for various values of the energy. In that sense, it is essentially a first order transition.
Because of this, we have to study the system in the microcanonical ensemble.
The exit of the Hagedorn part of the phase diagram requires the temperature to start increasing again at a specific value of the rescaled energy per degree of freedom ϵ= E/N^2.
We need to study when this happens.
If we also take into account the charge, a phase transition would indicate that there will be a curve in ϵ, q where some thermodynamic quantities are discontinuous. That phase transition curve denotes the transition from the partial deconfinement phase to the fully deconfined phase.
Our goal in this section is to argue precisely how that phase transition appears in the counting of states done more carefully at large but finite N.
§.§ State counting with Young tableaux
Let us again start with the problem of counting states in the model we have described.
The Hilbert space without constraints is described by the occupation numbers of the 2N^2 harmonic oscillators. We can call these (a_X^†)^i_j and (a_Y^†)^i_j. They have matrix indices, both an upper and a lower index. Generically, to build a state, one needs to contract the upper indices and lower indices. A naive counting of states is done in terms of traces that implement these contractions. However, the states created this way are not orthogonal states at finite N. At some point not only are the multi-traces not orthogonal, but one is also overcounting: there are relations. The traces are useful as algebraic generators of the states. Short traces are also simple observables that can be evaluated in a complicated state. In holography, these would represent excitations on top of a background.
Finding the orthogonal basis of states is not automatically easy. For a single matrix model, this is
done using characters <cit.> and the representation is in terms of Young diagrams.
For more than one matrix, one can choose a basis of
restricted Schur functions <cit.> (see also <cit.>), or one can also find a double coset ansatz for writing explicit states <cit.>.
To understand this, note that the system is free. This means one can actually do rotations on the upper and lower indices of the X and Y independently of each other.
We would then have a U(N)^2 symmetry acting on the upper and lower indices separately for X, and another such U(N)^2 for Y. Basically, the starting symmetry is larger than U(N), but only one U(N) is gauged. It is convenient to use the extra symmetry to construct states.
By symmetry here, we mean that the U act by unitary transformations on the Hilbert space. Therefore, states in different representations of the symmetry are orthogonal.
This is the idea behind the restricted Schur constructions. It is also a convenient way to analyze more general quiver theories ( see <cit.>).
It is convenient to classify states in the full Hilbert space, including non-singlet states by the representation content under the U(N)^4 symmetry.
The final U(N) symmetry that we gauge sits in a diagonal of this U(N)^4. It acts on upper indices as a fundamental, and on lower indices as an antifundamental.
Thus, the U(N)^4 content also keeps track of the U(N) gauge symmetry that we want to gauge in the end.
The idea to represent the states is that the X oscillators commute. To decompose into representations of U(N) one symmetrizes or antisymmetrizes in the upper indices according to a Young diagram. We do the same with the lower indices. Notice that a permutation of two X causes a permutation of both the upper and the lower indices that they carry. This is a commutative operation in the algebra of raising operators. A permutation of the upper indices can therefore be undone in the lower indices by this mechanism. This means that the Young diagram of the upper indices (the symmetry properties under permutations) is the same as the Young diagram of the lower indices [For fermions, the opposite is true <cit.>. See also <cit.> and references therein. Therefore the two Young diagrams are transposed between upper and lower indices.]. We can now do the same with the Y oscillators.
We now want to be more mindful of the four U(N) symmetries. These will be called U(N)_X,U,U(N)_X,L,
U(N)_Y,U, U(N)_Y,L, where we distinguish upper and lower indices by U,L.
We can organize the information we have collected so far by saying that
we have four Young diagrams Υ_X,U= Υ_X,L and Υ_Y,U= Υ_Y,L and these are paired (identical between upper and lower indices of X and Y respectively).
Each of these is associated with an irreducible representation of U(N). We now want to collect all the upper indices together. Because the U(N) we want to gauge acts the same on the upper indices of X and Y,
the main observation is that the upper indices transform as elements of a tensor product representation R(Υ_X,U)⊗ R(Υ_Y,U) with respect to this diagonal U(N).
We decompose these into irreducible representations of the diagonal action.
If we take two representations R_1, R_2, we have that
R_1⊗ R_2 = ⊕_R_3 c^R_3_R_1,R_2 R_3
where the c^R_3_R_1,R_2 are the multiplicities of the irreducible representation R_3 appearing in the product. These are known as Littlewood-Richardson coefficients.
Now we do the same with the lower indices. This results in a different representation appearing on the lower indices, which we call R̃_3, with multiplicity c^R̃_3_R_1,R_2.
The upper indices of X transform as the fundamental with respect to the diagonal group U(N) we are gauging and the lower indices transform in the conjugate representation. To make a singlet, R_3⊗R̅̃̅_3 needs to contain a singlet. This can only occur if the Young diagrams of R_3 and R̃_3 are the same, and then the multiplicity is one.
Now, the upper indices have an additional degeneracy of c^R_3_R_1,R_2 and the same is true for the lower indices. These need to be multiplied when we are counting states.
We find therefore that the partition function at fixed n_x, n_y requires us to choose a Young diagram for X with n_x boxes, a Young diagram for Y with n_y boxes and a Young diagram for the product representation, which must by necessity have n_x+n_y boxes.
The total number of states is then a sum over all the representation choices obtained this way and counted with degeneracies
N(n_x,n_y) = ∑_ν=Υ(n_x), μ=Υ(n_y), σ=Υ(n_x+n_y) (c^σ_μν)^2
This result also appears in this form in <cit.> (see also <cit.>). There are other ways of generating the states as well, for instance using the restricted Schur or double coset constructions mentioned above.
Two important observations are in order. First, if the Young diagram σ=R_3 has more
than N rows, then we do not count it, as it is not an allowed representation of U(N). In that case, we set the corresponding c^σ_μν to zero.
Second, the Littlewood-Richardson coefficients are otherwise independent of N.
This means that at finite N and infinite N the numbers c^σ_μν are the same if they are allowed. As a corollary, the counting of states at finite N and infinite N agree if the total number of boxes n_x+n_y≤ N. The partition function given by equation <ref>, interpreted combinatorially in terms of these sums of squares of Littlewood Richardson coefficients is also known in the mathematics literature, a result that is attributed to Harris and Willenbring <cit.>.
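As an illustration of this counting, the short sketch below tallies N(n_x,n_y) directly from the formula above. It assumes the lrcoef routine of the lrcalc bindings (the same library used for the numerics of the next section); any function returning c^σ_μν would work, and the partition generator enforces the finite-N cut of at most N rows. For n_x=n_y=2 and n_x+n_y ≤ N, the sum should reproduce the infinite-N coefficient of x^2 y^2 in the generating function, namely 10.

```python
# Assumes lrcoef(outer, inner1, inner2) -> c^{outer}_{inner1, inner2} from the lrcalc
# bindings; partitions are given as non-increasing tuples of row lengths.
from lrcalc import lrcoef

def partitions(n, max_rows):
    """All partitions of n with at most max_rows rows."""
    def rec(n, max_part, rows_left):
        if n == 0:
            yield ()
        elif rows_left > 0:
            for first in range(min(n, max_part), 0, -1):
                for rest in rec(n - first, first, rows_left - 1):
                    yield (first,) + rest
    return list(rec(n, n, max_rows))

def n_states(nx, ny, N):
    """Number of U(N) singlet states with nx X-quanta and ny Y-quanta."""
    total = 0
    for mu in partitions(nx, N):
        for nu in partitions(ny, N):
            for sigma in partitions(nx + ny, N):
                total += lrcoef(list(sigma), list(mu), list(nu)) ** 2
    return total

# With nx + ny <= N the finite-N and infinite-N countings agree; for nx = ny = 2
# the coefficient of x^2 y^2 in prod_n 1/(1 - x^n - y^n) is 10.
print(n_states(2, 2, N=8))
```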
§.§ The typical Young tableaux
We have two results concerning the counting of states. First, we have the infinite N counting and we also have the finite N counting, whose essential constraint is that all the Young tableaux Υ
must be allowable for U(N). If we combine both results, we get that when both countings are allowed then
N(n_x,n_y) = ∑_ν=Υ(n_x), μ=Υ(n_y), σ=Υ(n_x+n_y) (c^σ_μν)^2≃exp( β_q(n_x+n_y))
At this stage, we want to ask what Young diagrams dominate the sum and how large do the
c^σ_μν become. Basically, we want to ask if maximizing over c^σ_μν and effectively reducing the problem to one term is sufficiently representative of the entropy or not.
If the answer is yes (a statement that we will argue later), we can then study how the shape of the dominant Young diagrams behaves as we take n_x+n_y large. The main idea we want to advance is that if σ is the dominant shape and it is allowed for U(N), then for all intents and purposes the entropy at finite N and infinite N at energy E=n_x+n_y are the same. Their difference in entropy will be small and suppressed. If the shape is not allowed for U(N), then the state counting for U(N) and U(∞) is substantially different at energy E. The energy at which the dominant shape for E=n_x+n_y ceases to be allowed is then associated with a change of thermodynamic behavior away from the result at infinite N. This is the critical point in E that we are looking for.
§.§.§ Large Littlewood Richardson coefficients
So far, we have used group theory to argue that the counting of states can be done by summing over triples of Young diagrams, with n_x, n_y and n_x+n_y boxes. How many of these triples are there?
The number of Young diagrams with n_x boxes is given by the number of partitions of n_x. The same is true for n_y and n_x+n_y.
The asymptotic number of partitions at large n_x, n_y (without any constraints) is
P(n_x, n_y, n_x+n_y) ∼exp(π√(2n_x/3)+π√(2n_y/3)+π√(2(n_x+n_y)/3)) .
This means that the maximum possible entropy associated with this sum (if all terms are the same)
scales like
S_# terms∼π√(2n_x/3)+π√(2n_y/3)+π√(2(n_x+n_y)/3)≪β_q (n_x+n_y)
which is much smaller than the entropy of the system. After all, they scale like √(n), rather than n.
In essence, we find that the entropy is not concentrated on the number of partitions. Instead, we can find the following inequality
log(P(n_x, n_y, n_x+n_y) max_μ,ν,σ (c^σ_μ,ν)^2 )> S ∼β_q (n_x+n_y)
Also, if we reduce the sum to the one term that maximizes the Littlewood Richardson coefficient, we find that
log(max_μ,ν,σ (c^σ_μ,ν)^2 )< S
Combining these two, we find that
log(max_μ,ν,σ (c^σ_μ,ν)^2)∼β_q(n_x+n_y) - O(√(n))
We conclude that the term with the maximum Littlewood Richardson coefficient has an entropy associated with it that is roughly equal to the thermodynamic entropy of the system, up to subleading corrections that can be treated as a small perturbation. The basic claim we make is that the term with the maximum Littlewood-Richardson coefficient is sufficiently representative.
Our next problem is to find at large n_x, n_y, what is the shape of the Young diagram that maximizes the Littlewood-Richardson coefficient, if there is such a shape. This is a well-known problem in combinatorics.
We will here quote the main result of <cit.> on the asymptotic behavior of the shape associated with the maximum Littlewood Richardson coefficient.
The shape of the asymptotic Young diagram is known as the VKLS shape.
To understand what this shape does, let us recall the dimension of the
representation associated with a Young diagram. This is given by taking a product of labels associated with each box and dividing by the hook lengths. The labels of the boxes are as follows, shifted by N.
0 1 2 3 …
-1 0 1 …
-2 -1 0 …
⋮ ⋱ ⋱
They start at 0 in the (1,1) corner, add one when moving to the right, and subtract one when going vertically down. Basically, the label is i-j, where i is the horizontal label and j is the vertical label counting from the top.
Let us call the label of the (i,j) box L_i,j
The dimension of the representation is given by
d_ν= ∏_i,j(N+ L_i,j)/h_i,j
where h_i,j is the hook length of the (i,j) box.
When we consider the large N limit, we have that
d̃_ν= lim_N→∞d_ν/N^|ν| = ∏_i,j1/h_i,j
Roughly stated, the normalized size of the representation is the inverse product of the hooks of the Young diagram. The VKLS shape is the asymptotic
shape that maximizes the normalized dimension d̃_ν when we take large values, |ν|→∞.
Taking logarithms, we find that
log(d̃_ν)= -∑_i,jlog(h_i,j)
To maximize d_ν, we must minimize the sum F=∑_i,jlog(h_i,j)∝∫ dx dy log(h_x,y), which can be represented as an integral. Since the number of boxes is fixed, we can choose the area of the (x,y) plane to be fixed and be equal to one. The VKLS shape is the shape of the region that minimizes the functional F at fixed area, equal to one.
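For concreteness, the hook and content products entering these formulas can be evaluated with a few lines of Python; the sketch below is only illustrative, and checks for a handful of shapes with six boxes that the balanced shape has the smallest hook product and hence the largest normalized dimension.

```python
from math import prod

def hooks(shape):
    """Hook lengths h_{i,j} of a partition given as non-increasing row lengths."""
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [shape[i] - j + cols[j] - i - 1
            for i in range(len(shape)) for j in range(shape[i])]

def dim_UN(shape, N):
    """dim of the U(N) irrep: product over boxes of (N + j - i) / h_{i,j} (0-indexed)."""
    contents = [N + j - i for i in range(len(shape)) for j in range(shape[i])]
    return prod(contents) // prod(hooks(shape))

def norm_dim(shape):
    """Normalized dimension: 1 / prod h_{i,j}, the large-N limit of dim / N^{#boxes}."""
    return 1.0 / prod(hooks(shape))

print(dim_UN((2, 1), 3))                       # 8, the adjoint of SU(3)
for shape in [(6,), (3, 3), (2, 2, 2), (3, 2, 1)]:
    print(shape, prod(hooks(shape)))           # (3,2,1) has the smallest hook product
```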
The shape is described as follows. Consider the region in between the two curves
f(s) = 2 (√(2-s^2)+s sin ^-1(s/√(2)))/π
f̃(s) = |s|
in the interval s∈( -√(2),√(2)). If we think of the curve given by |s| as the labels of the rows and columns of the Young diagram, the curve f(s) rotated so that it lies in the lower right quadrant is the VKLS shape.
Importantly, the f(s) curve intersects the |s| curve at s=±√(2). The distance from the origin in geometric units is 2.
In the asymptotic calculation of <cit.>, all three shapes have the VKLS shape, properly scaled to the corresponding number of boxes.
The VKLS shape, as described above, is depicted in figure <ref>.
We need to convert the area to the correct number of boxes to restore units: the area is n_x+n_y rather than one.
The length of the legs must be scaled by √(n_x+n_y) to accomplish this.
Therefore, the depth of the VKLS shape Young diagram in proper units is 2√(n_x+n_y)≤ N, and it must be bounded by N as that specifies the maximum allowed depth of the Young diagram columns. We find that the VKLS shape is allowed only if E=n_x+n_y≤ N^2/4.
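A quick numerical sanity check of these statements (only a sketch, using numpy) confirms that the curve f(s) encloses unit area above |s| and that restoring units gives a maximal depth 2√(n_x+n_y), which saturates the bound of N rows exactly at E=N^2/4.

```python
import numpy as np

def vkls(s):
    """VKLS limit curve f(s) on |s| <= sqrt(2) (endpoints clipped for float safety)."""
    x = np.clip(s / np.sqrt(2.0), -1.0, 1.0)
    return 2.0 * (np.sqrt(2.0) * np.sqrt(1.0 - x**2) + s * np.arcsin(x)) / np.pi

s = np.linspace(-np.sqrt(2.0), np.sqrt(2.0), 200001)
print(np.trapz(vkls(s) - np.abs(s), s))   # ~1.0: the rescaled diagram has unit area

# The curve meets |s| at s = +-sqrt(2), a distance 2 from the origin.  Restoring units
# scales the legs by sqrt(n_x + n_y), so the depth is 2*sqrt(n_x + n_y) and the shape
# is an allowed U(N) diagram only if E = n_x + n_y <= N^2/4.
E_star = lambda N: N**2 / 4
print(E_star(12))                          # 36: the value quoted later for N = 12
```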
Our prediction for the transition from partial deconfinement to full deconfinement based on this argument is that it occurs exactly at energy E=N^2/4, regardless of the value of q.
The value N^2/4 is also reported in <cit.>, by using different means.
This is the asymptotic large N statement, so there can be corrections that are subleading in N that we can not account for from the arguments above.
To test this statement, we do numerical calculations to see if the change of behavior occurs at fixed energy per degree of freedom ϵ=E/N^2= 1/4.
From our perspective, the partially deconfined gauge group has size 2√(n_x+n_y) and the confined portion is SU(N-2√(n_x+n_y)). This identification is made by counting the rows (out of the N available) that are empty in the Young diagram. The definition is as in <cit.>. In this paper, we can actually quantify this property at large N. In this setup, we do not have access to the characterization of states in terms of the distribution of eigenvalues of the Polyakov loop, as in <cit.>, or in terms of the absolute value of the Polyakov loop.
§ NUMERICS AND THE PHASE TRANSITION
In this section, we are providing numerical evidence that the reasoning above is correct. The process is twofold. First, we wish to calculate the maximum Littlewood Richardson coefficients and compare them to the hook length formula. We wish to check that these coefficients are maximized sharply for the lower values of the hook length. Secondly, we need to compare different N in a meaningful way. The simplest way to do so is to notice that large N scaling requires that both E,S∼ N^2, so that we need to base our calculations on rescaled energy ϵ=E/N^2 and rescaled entropy s= S/N^2. Since S∝ E in the Hagedorn region, it is convenient to use the rescaled free energy F/N^2= E/N^2- T S/N^2 = ϵ - T s, which vanishes at large N for ϵ<ϵ^*, the energy of the phase transition. At least in principle, this provides a convenient parameter to distinguish the two phases F/N^2=0 and F/N^2≠ 0. This parameter changes continuously at the phase transition.
When doing calculations at finite N, there should be finite N corrections on top of these that we can not determine directly from the limit shape without extra input. Roughly stated, the VKLS curve is an approximation to the rugged edges of the Young diagrams. Because the
curve becomes tangent to |s| at the edge of the distribution, how to treat the edge can affect the size of the Young diagram versus the edge of the VKLS shape. This can be an effect that is much larger than order 1 but necessarily much less than √(n_1+n_2), the naive size of the shape. At this stage, this is a systematic error that affects how quickly the systems converge to the large N result at very moderate N≃ 4–7, where we will be doing our calculations. To estimate roughly, the Young diagram can cover completely the VKLS curve, or instead be completely covered by the VKLS curve. The difference in area between the two is of order the length of the edge of the diagram. This scales like √(n_1+n_2). Since n_1+n_2 ≃ O(N^2), this difference is of order N. We therefore expect that the transition occurs at n_1+n_2 = N^2/4+ O(N). The additional piece must be positive, as for N=4,5,6 the number N^2/4 is still small, especially compared to the maximal depth of the Young diagram, which has roughly the same size.
For the first part, we check numerically that the problem that gives rise to the VKLS shape is
sound in the regime of parameters we are analyzing. We compute for fixed E=n_x+n_y the distribution of the Littlewood-Richardson coefficients as a function of the hook length formula of the large Young diagram, at E=n_x+n_y boxes with both species equal to each other n_x=n_y.
This is done by using the lrcalc package in Sage. We generate all Young diagrams for n_x boxes and n_x+n_y boxes. We make sure that
the maximum depth of the Young Diagram is fixed at N=5,6,7 to compare different values of
N. We compute the distributions by iterating over these choices.
This is depicted in figure <ref>. We clearly see that the maximum Littlewood-Richardson coefficient is peaked at low values of the hook length formula.
We also point out that as we increase the energy, the value of N at which the coefficient distribution peaks and saturates grows and the hook length formula moves towards the left (decreases).
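The scan just described can be reproduced with a few lines of code. The sketch below is illustrative only, and again assumes the lrcoef routine of the lrcalc bindings; it caps the depth of all diagrams at N, records for each σ the hook-length sum Σ log h together with the largest Littlewood-Richardson coefficient it supports, and lists the shapes with the smallest hook sums, which should carry the largest coefficients.

```python
from math import log
from lrcalc import lrcoef   # assumed available, as in the Sage computation described above

def partitions(n, max_rows):
    def rec(n, max_part, rows_left):
        if n == 0:
            yield ()
        elif rows_left > 0:
            for first in range(min(n, max_part), 0, -1):
                for rest in rec(n - first, first, rows_left - 1):
                    yield (first,) + rest
    return list(rec(n, n, max_rows))

def hook_sum(shape):
    cols = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return sum(log(shape[i] - j + cols[j] - i - 1)
               for i in range(len(shape)) for j in range(shape[i]))

def scan(nx, N):
    """(hook-length sum of sigma, largest c^sigma_{mu nu}) over all sigma with 2*nx boxes."""
    mus = partitions(nx, N)
    return [(hook_sum(sigma),
             max(lrcoef(list(sigma), list(mu), list(nu)) for mu in mus for nu in mus))
            for sigma in partitions(2 * nx, N)]

for F, c in sorted(scan(5, N=6))[:5]:
    print(round(F, 2), c)    # small hook sums go hand in hand with large coefficients
```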
§.§ Free energy
The next step is to compute the free energy. We have argued
that in the absence of charge, before the transition the scaling of the entropy is given by:
S = Eln2
The effective temperature is then:
β_eff = ∂ S/∂ E = ln 2
It is easy to see that F=0 at this temperature. After the transition, however, both temperature and entropy scale as power laws in energy, and similarly for the free energy. We compute the free energy by summing over all allowed states, not just the one that maximizes the Littlewood-Richardson coefficient.
The temperature is computed in the microcanonical ensemble by finite differences T≡ (Δ S/Δ E)^-1, at fixed q=0. This results in some dispersion relative to the β_eff=log(2) from large N when we do it at finite N. We normalize both the energies and the free energy by dividing by N^2, to check convergence for large N. Since the Littlewood
Richardson coefficients are hard to compute, in practice we are restricted in energy E≤ 34. For N=12 (the maximum depth we can compute at), we have N^2/4=36, which is larger than the maximum energy where we did our computations. Therefore the data at this level is below the expected transition point.
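The thermodynamic post-processing is straightforward once the degeneracies are tabulated. The sketch below is a toy illustration only; the counts array is a placeholder for the output of the Littlewood-Richardson sums. It estimates T by finite differences and forms the rescaled free energy; a pure Hagedorn input S = E log 2 gives F = 0 identically, as expected below the transition.

```python
import numpy as np

def rescaled_free_energy(E, counts, N):
    """Microcanonical T from finite differences and F/N^2 = (E - T S)/N^2 at midpoints."""
    S = np.log(np.asarray(counts, dtype=float))
    T = np.diff(E) / np.diff(S)                 # T = (dS/dE)^{-1}
    E_mid, S_mid = 0.5 * (E[1:] + E[:-1]), 0.5 * (S[1:] + S[:-1])
    return E_mid / N**2, (E_mid - T * S_mid) / N**2

E = np.arange(2.0, 34.0, 2.0)
counts = 2.0 ** E                               # placeholder: pure Hagedorn, S = E log 2
eps, f = rescaled_free_energy(E, counts, N=12)
print(np.allclose(f, 0.0))                      # True: F vanishes in the Hagedorn regime
```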
Figure <ref> shows that the rescaled free energy F̃ versus the rescaled energy Ẽ collapses at large E/N^2 and that deviations start to appear close to E^*≃ 0.25 N^2. At larger N, the curve flattens to zero below E^* ≃ 1/4 N^2.
One can see that F̃ remains relatively flat and close to zero all the way up to Ẽ≈ 0.25 ≈Ẽ_*, as we conjectured earlier.
We also check to see if we have non-trivial critical exponents at the transition, assuming that E^*/N^2= ϵ^*=1/4 in figure <ref>.
In the figure we include two determinations of the free energy for ϵ^*<1/4. We compute the free energy with the temperature determined by finite differences, and compare it to the free energy assuming that β=log(2) is fixed. The energy relative to the conjectured transition point is ϵ̃= ϵ -1/4 = ϵ-ϵ^*.
The best value of the fit is
F̃∝ -ϵ̃^1.77∼- (ϵ-ϵ^*)^2-1/4
which seems to show a non-trivial critical exponent. Given our data, we are choosing a simple rational number as a stand-in for the exponent.
The seemingly strange straight lines in the figure are actually the same data point assuming different values of N: both the free energy and the energy are rescaled by the same amount 1/N^2. This becomes more obvious when the coloring of figure
<ref> is used to parse the data. The second plot zooms closer into the region ϵ≃ 0.25. We also see that F/N^2≃ 0 is well supported below ϵ=1/4.
Using the relations dϵ = T d s and f=F/N^2= ϵ - T s, we find that
df ≃ -dT s = -dT(s-s^*)- s^* dT. Since s=s^* at the transition, the term with (s-s^*) is suppressed. Instead, we find that dT/dϵ∝ (ϵ-ϵ^*)^0.77, so that we should have T= T^* + α (ϵ-ϵ^*)^1.77 near the transition. The relation between T-T^* and ϵ-ϵ^* shows a non-trivial critical exponent, where ϵ-ϵ^* ≃ (T-T^*)^0.56. Since the power law is less than one, the specific heat itself diverges with a non-trivial exponent, signaling a weak first order transition. This is very similar to the critical exponents found in <cit.>.
It should be interesting to derive these exponents directly from the change of shape of the Young diagram.
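For reference, the exponent quoted above comes from a straight-line fit in log-log variables; the snippet below sketches the procedure on synthetic data generated with the exponent 1.77, where the arrays only stand in for the measured rescaled energies and free energies.

```python
import numpy as np

eps_star = 0.25
eps = np.linspace(0.26, 0.60, 30)                 # placeholder for the measured E/N^2
f = -2.0 * (eps - eps_star) ** 1.77               # placeholder for the measured F/N^2

slope, _ = np.polyfit(np.log(eps - eps_star), np.log(-f), 1)
print(slope)                                      # recovers the critical exponent ~1.77
```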
§.§ Charge dependence
Our arguments in general require that the phase transition always occur at energy E=N^2/4.
We have found evidence in the case Q=0 that this is the case. We now want to do that at Q≠ 0. To get a proper limit, we keep q fixed (equivalently Q/E fixed). This ratio is
q=(n_1-n_2)/( 2(n_1+n_2)). If we want for example n_1=2 n_2, this corresponds to q= Q/E= 1/6, and n_1= 3 n_2 corresponds to q= 1/4, whereas 3 n_1= 5n_2 is q=1/8.
These must be done at energies that are multiples of 3,4,8 respectively. The number of data points we can actually compute is sparser with multiples of 4 and 8, so those cases are less reliable.
The same information as in <ref> can be plotted at different Q/E. We get the results in figure <ref>.
The figures for different Q/E all support the idea that the phase transition occurs exactly at E/N^2= 1/4 and the plots are qualitatively very similar. For Q/E= 1/4 the charge is getting large and closer to the maximum value Q/E=1/2.
Naive power fits with a shift do not show a universal behavior, other than the critical exponent being larger than 1. The data is also sparse. The best data point is at q=1/6 and at a relatively low N. This is shown in <ref>.
The χ^2 is best for q=1/6, but given the variance of all the answers, we need more data to make a more definite statement. The question of whether the fit is good at this stage has too many systematic errors to put a proper error bar on it. The main reason is that we do not know if the range is small enough for the power law fit to be dominated by the first non-trivial term.
Asymptotically, the temperature becomes linear in ϵ and the free energy must scale like -ϵlog(ϵ). The cutoff ϵ≃ 1 might be too large a cutoff for larger q. We also don't know if N is large enough for finite N corrections
to be unimportant or not. This requires much more data at high energy [The Littlewood Richardson coefficients are asymptotically hard to compute <cit.>, so in our computations, we have had to make a table of all possibilities. The limits we have here correspond to what could be reasonably computed with a laptop.].
§ CONCLUSION
In this paper, we have presented both theoretical and numerical evidence that the transition from partial deconfinement to full deconfinement can be understood simply in terms of counting of states for the free gauge matrix model based on Young diagrams. These have a typical shape, and when the typical shape, scaled to the number of boxes, reaches the maximum allowed depth of the Young diagrams, the transition takes place. Before the transition, the shape is independent of the charges. We presented numerical evidence that this occurs exactly where this counting suggests.
At the exit point, the large N free energy stops being zero. There are non-trivial critical exponents on the exit side of the Hagedorn region of the microcanonical phase diagram, which verifies with our methods that the transition is weakly first order.
The claim we are making is that the transition from partial deconfinement to deconfinement corresponds to a change in the typical shape of the Young diagram. To the extent that the shape of the Young diagram can be also considered as a geometric object, the transition as we describe above is stating that there is a geometric interpretation of the transition (a geometric order parameter), which is different from the description of the transition in terms of the absolute value of the Polyakov loop that has been used in other works. How to relate our observations with the VKLS shape to the Polyakov loop is beyond the scope of the present paper, but it should be an interesting avenue of exploration. Both of these approaches are very different in how one deals with the physical questions.
The problem of the shape of the Young diagram seems to be intimately related to counting states. If one replaces the problem of counting states with Young diagrams with the problem of counting states with traces, the transition occurs when the number of relations between traces competes with the number of states to the point that there are large cancelations and the entropy decreases substantially from what infinite N would dictate. Basically, traces are becoming very redundant.
If we equate entropy with information, we can say that this is a transition on the information content of the state. This is also suggestive of a closer connection with black holes as the entropy can be computed geometrically for black holes.
Notice that this description is an alternative point of view to the change in the expectation value of the Polyakov loop variables, which relates the problem to a change in the distribution of eigenvalues of the gauge field.
It is clear that our techniques work also in cases of more matrices or in systems with fermions instead of bosons.
One then needs to consider more Young diagrams or different combinations of them, but with our methods, the computations again require maximizing products of Littlewood-Richardson coefficients.
All of these will give rise to variations of the combinatorial problem that leads to the VKLS shape, as described in this paper. It is this effective shape that is controlling the transition in all these setups.
Also, given the information about the phase transition that can be learned from physics calculations, one should keep in mind that physics intuition may also bear some fruit in the study of the estimation of Littlewood-Richardson coefficients beyond the VKLS regime. This is an important combinatorial problem in its own right.
It is obviously interesting to ask how to translate combinatorial information about Young diagrams into computations of other observables in the matrix model.
As a case in point, for the one matrix model and because of its relations to half BPS states in N=4 SYM, a collection of such methods have been understood in <cit.> (see also <cit.>). It would be interesting to understand similar statements in this setup. At least in principle, since we know how to write the SU(N) generators for X,Y separately, information on the shape of the Young tableaux can be obtained by building the Casimir operators of the different SU(N) groups that are not gauged. Hopefully, this will lead to an improvement in the understanding of correlators for these states and how these are modified when changes occur in the typical Young diagram. That should lead to an interesting determination of the critical behavior near the partial deconfinement to deconfinement transition.
Ideally, because the VKLS states dominate the entropy in this case, the VKLS shape states could also dominate in cases where the theory is interacting with a non-trivial potential. In these cases, a microcanonical computation would be out of reach by direct methods.
These interacting models are closer to black holes in that one would expect to have chaotic dynamics and satisfy the eigenstate thermalization hypothesis. Maybe they could even have negative specific heat. We are currently looking into these ideas.
D.B. would like to thank D. O'Connor, S. Ramgoolam for discussions and correspondence. D.B. research was supported in part by the International Centre for Theoretical Sciences (ICTS) while participating in the program - ICTS Nonperturbative and Numerical Approaches to Quantum Gravity, String Theory and Holography (code: ICTS/numstrings-2022/9). Research supported in part by the Department of Energy under grant DE-SC 0011702.
|
http://arxiv.org/abs/2307.04366v1 | 20230710064648 | A New Wind Farm Active Power Control Strategy to Boost Tracking Margins in High-demand Scenarios | [
"Simone Tamaro",
"Carlo L. Bottasso"
] | physics.flu-dyn | [
"physics.flu-dyn",
"cs.CE",
"cs.SY",
"eess.SY"
] |
Explanation Needs in App Reviews: Taxonomy and Automated Detection
Max Unterbusch
University of Cologne
[email protected]
Mersedeh Sadeghi
University of Cologne
[email protected]
Jannik Fischbach
Netlight Consulting GmbH | fortiss GmbH
[email protected]
Martin Obaidi
Leibniz University Hannover, Software Engineering Group
[email protected]
Andreas Vogelsang
University of Cologne
[email protected]
August 12, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================
This paper presents a new active power control algorithm designed to maximize the power reserve of the individual turbines in a farm, in order to improve the tracking accuracy of a power reference signal. The control architecture is based on an open-loop optimal set-point scheduler combined with a feedback corrector, which actively regulate power by both wake steering and induction control. The methodology is compared with a state-of-the-art PI-based controller by means of high-fidelity LES simulations. The new wind farm controller reduces the occurrence of local saturation events, thereby improving the overall tracking accuracy, and limits fatigue loading in conditions of relatively high-power demand.
§ INTRODUCTION
The growth of wind energy penetration in the electricity mix requires new control algorithms to keep the electrical grid in balance <cit.>. When operating in active power control (APC) mode, a wind farm intentionally extracts less than the available power from the wind, in order to meet the demands of the transmission system operator (TSO). The application of APC to a wind farm is not trivial and introduces new challenges. In fact, the maximum available power depends on ambient conditions, which vary dynamically in uncertain ways <cit.>. Additionally, wind may suddenly drop, possibly leaving insufficient power reserves to track a given reference signal <cit.>. In a wind farm, the situation is further complicated by the presence of low-momentum turbulent wakes, which are responsible for power losses and fatigue loading of waked turbines <cit.>. Various solutions have been proposed to mitigate wake effects, such as induction and yaw control <cit.>. The latter consists of “steering” the wake away from downstream rotors, and its effectiveness for power boosting has been demonstrated numerically <cit.>, experimentally in the wind tunnel <cit.>, as well as in field trials <cit.>.
Different APC approaches have been presented in the literature. An open-loop APC strategy is discussed in <cit.>. The authors showed that the lack of feedback poses a limitation on the power tracking accuracy of the method, especially in conditions of strong waking. Furthermore, an equal dispatch of power sharing among the turbines proved to be suboptimal, due to the different local power reserves induced by the heterogeneity of the flow.
Recently, various authors have used model predictive control (MPC) for APC <cit.>. The main drawback of such methods lies with the need of a dynamic farm flow model, which can be computationally expensive.
Simpler control structures based on classical PI (proportional integral) loops have also been extensively investigated <cit.>. While lacking the sophistication of MPC, such methods do not need a wind farm flow model and can provide fast response times with simple implementations. The APC PI controller of ref. <cit.> operates on the tracking error and adjusts the power demands to follow a reference, sharing power in an arbitrary, static manner among the turbines. The method includes gain scheduling based on the fraction of saturated wind turbines, defined as the ones whose available power is smaller than the demanded one. This method was improved in ref. <cit.> by dynamically adjusting the set-points of the wind turbines, with the goal of equalizing their loading. The authors tested this methodology with an actuator disk model using large eddy simulations (LES). Later, this approach was also demonstrated with the more sophisticated actuator line method (ALM) in LES <cit.>. So far these PI-based methods have been applied only to induction control, and they are not necessarily optimal. Moreover, saturation conditions are problematic, due to the possible local lack of power reserves (margins), which are not explicitly accounted for nor monitored in the existing implementations.
In this paper, a new wind farm control architecture is presented to improve the power tracking accuracy in conditions of strong persistent wakes, when the wind farm power demand is close to the maximum available power. An improved tracking performance is obtained by explicitly maximizing the power margin, in order to hedge against wind lulls. This novel methodology combines wake steering with induction control. Wake steering is used because of its ability to increase power margins by mitigating wake effects <cit.>. Wake steering is implemented through an open-loop model-based set-point optimal scheduler, closely following the standard implementation that has recently become popular in power-boosting wind farm control <cit.>. Induction control is implemented through a fast closed-loop corrector to improve tracking accuracy. The new methodology is demonstrated in a partial wake impingement scenario of a cluster of turbines, using a TUM-modified version of NREL's ALM-LES Simulator fOr Wind Farm Applications (SOWFA) <cit.>.
The paper is structured as follows. First, the novel APC methodology is presented. Second, the simulation model is described and finally, results are discussed for steady-state and unsteady conditions.
§ METHODOLOGY
The core of the proposed wind farm control architecture is an open-loop model-based set-point optimal scheduler. This control element determines the yaw misalignment of each turbine and its contribution to the demanded value (i.e. power share), given the power demand required by the TSO and the ambient conditions. The latter can be obtained in real time from SCADA data or with wind sensing methods <cit.>. A feedback loop serves the main purpose of correcting tracking errors, which will inevitably arise from the open loop control element during operation. A sketch of the overall control architecture is shown in fig. <ref>. The closed and the open loops are executed at two distinct time rates, since their outputs involve physical phenomena characterized by different time scales. Specifically, the open loop updates the yaw-set points and the power shares at a slower rate, due to the time required by the wake to propagate downstream. On the other hand, the closed loop changes the turbine inductions at a faster pace, to reduce tracking errors.
§.§ Open-loop set-point optimal scheduler
The open-loop component of the algorithm provides the optimal set-points in terms of yaw misalignment and power share. These are computed by a gradient-based optimization that maximizes the smallest power reserve within the wind turbines of the farm, for a given overall power demand.
The power of the ith turbine is noted P_i = P_i (A_i,u_i), where A_i indicates the local ambient conditions (here assumed to include wind speed, wind direction and turbulence intensity), and u_i are the control inputs (namely, induction and yaw misalignment). Power is computed using a wind farm flow model, which here is based on the FLOw Redirection and Induction in Steady-state (FLORIS v2) tool <cit.>.
The maximum power that can be captured by turbine i by adjusting its control set-point u_i (while keeping the set-points of the other turbines fixed) is computed as
P_a,i = max_u_i P_i (A,u_i) = 1/2ρπR^2 C_p U^3 cos^P_p(γ),
where ρ is the air density, R is the wind turbine radius, U is the undisturbed free-stream velocity, and P_p is the cosine exponent relating the yaw misalignment angle γ to power. The algorithm looks for the combination of set-points that minimizes the largest power ratio P_i/P_a,i across all turbines in the farm (equivalently, maximizes the smallest margin), while satisfying the power demand of the TSO. This can be expressed as
min_u max_i ∈ [1,N]P_i/P_a,i
such that ∑_i=1^N P_i=P_ref.
In fact, the smaller the power ratio P_i/P_a,i, the larger the margin m_i = 1-P_i/P_a,i that is available to compensate against drops in the wind. Equation (<ref>) represents a constrained optimization problem, which is solved with the gradient-based Sequential Quadratic Programming (SQP) method <cit.>. The optimization does not need to be performed in real time during operation. Rather, it is executed offline for a set of ambient conditions and relative wind farm capacities. Results are collected in a look-up table, which is then interpolated at run-time, similarly to what is routinely done for power-boosting wind farm control <cit.>.
In the example shown later in this work, the open loop is executed every 30 seconds.
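A compact way to see the structure of this optimization is the epigraph form min t subject to P_i/P_a,i ≤ t and Σ P_i = P_ref, solved with an SQP routine. The Python sketch below illustrates this with scipy's SLSQP; the farm_powers function is a crude stand-in for the steady-state FLORIS model used here (its numbers are invented for illustration), so only the problem structure, not the resulting set-points, should be read from it.

```python
import numpy as np
from scipy.optimize import minimize

def farm_powers(yaws, shares, P_ref):
    """Stand-in farm model: produced powers follow the shares; available power falls with
    own misalignment (cosine loss) and rises downstream when the row upstream steers."""
    P = shares * P_ref
    wake = np.array([1.0,
                     0.75 + 0.004 * yaws[0]**2 / 30.0,
                     0.70 + 0.004 * yaws[1]**2 / 30.0])
    P_a = 3.4e6 * np.cos(np.radians(yaws)) ** 2.2 * wake
    return P, P_a

def schedule(P_ref, n=3):
    """Return yaw set-points (deg) and power shares maximizing the smallest margin."""
    obj = lambda z: z[-1]                                    # z = [yaws, shares, t]
    cons = [
        {"type": "ineq",                                     # t - P_i/P_a,i >= 0
         "fun": lambda z: z[-1] - np.divide(*farm_powers(z[:n], z[n:2*n], P_ref))},
        {"type": "eq", "fun": lambda z: np.sum(z[n:2*n]) - 1.0},
    ]
    z0 = np.concatenate([np.full(n, 5.0), np.full(n, 1.0 / n), [1.0]])
    bounds = [(-30.0, 30.0)] * n + [(0.0, 1.0)] * n + [(0.0, 2.0)]
    res = minimize(obj, z0, method="SLSQP", bounds=bounds, constraints=cons)
    return res.x[:n], res.x[n:2*n]

yaws, shares = schedule(P_ref=6.0e6)
print(np.round(yaws, 1), np.round(shares, 3))
```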
§.§ Closed-loop corrector
The closed-loop corrector is directly taken from the work of ref. <cit.>, and it is executed every 0.01 seconds. The corrector consists of a simple PI feedback loop that operates on the power tracking error, which arises from the open-loop component of the control structure. The tuned PI gains used in this work are K_P,APC=0.2 and K_I,APC=0.05^-1.
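A minimal discrete-time version of such a corrector, with the gains and update rate quoted above, could look as follows; how the corrected farm demand is redistributed to the turbines (here simply via the current shares) is an implementation choice, not a detail taken from the reference.

```python
import numpy as np

class PICorrector:
    """Discrete PI on the farm-level tracking error (gains and dt as quoted in the text)."""
    def __init__(self, kp=0.2, ki=0.05, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, p_ref, p_meas, shares):
        error = p_ref - p_meas                   # positive when the farm under-produces
        self.integral += error * self.dt
        corrected_demand = p_ref + self.kp * error + self.ki * self.integral
        return np.asarray(shares) * corrected_demand   # per-turbine power demands

pi = PICorrector()
print(pi.step(6.0e6, 5.8e6, shares=[0.4, 0.35, 0.25]))
```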
§.§ Identification of saturation conditions
On each turbine, the occurrence of saturations is determined by a condition that combines tracking error and pitch angle. In particular, a saturation is detected when the blade pitch is at its optimal value and the tracking error exceeds a given negative threshold, set to the value of 100 in this work. The magnitude of this threshold determines the aggressiveness of the wind farm controller. This method was chosen because it can be implemented based on standard information that is readily available on board wind turbines, and does not rely on uncertain and difficult-to-estimate parameters such as thrust coefficient or axial induction.
§ NUMERICAL MODEL
§.§ Steady-state model
The engineering farm flow model FLORIS v2 <cit.> is used here both to synthesize the open-loop part of the controller and to perform steady-state analyses, prior to testing in the dynamic higher-fidelity LES-ALM environment. The standard FLORIS implementation is extended with the option to derate the turbines by modifying the C_p and C_t tables, following a basic curtailment approach. Moreover, a linear dependency of the power loss exponent P_p with C_t is also included in the model <cit.>, so that
P_p=A C_t+B,
where A=-1.56 and B=3.16, based on experimental and numerical observations. This dependency between the power loss exponent and the thrust coefficient is particularly relevant when combining derating and yaw misalignment, since the wind turbines operate at a wide range of C_t values due to their dynamic curtailment.
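As a small illustration of why this coupling matters, the sketch below evaluates the yaw power loss cos^P_p(γ) with the linear P_p(C_t) above: a curtailed turbine operating at low C_t sees a larger exponent and therefore a larger relative loss for the same misalignment. The C_t values are arbitrary examples.

```python
import numpy as np

A, B = -1.56, 3.16                       # coefficients of P_p = A*C_t + B from the text

def yaw_loss(C_t, gamma_deg):
    """Relative power retained under a yaw misalignment gamma at thrust coefficient C_t."""
    return np.cos(np.radians(gamma_deg)) ** (A * C_t + B)

print(yaw_loss(0.8, 20.0))               # ~0.89 at high C_t (near greedy operation)
print(yaw_loss(0.3, 20.0))               # ~0.85 at low C_t (curtailed), a larger loss
```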
§.§ Unsteady simulations
LES-ALM simulations are used for testing the performance of the new APC formulation, because they are able to deal with the complex dynamics typical of wind turbine wakes and their interactions <cit.>.
The filtered ALM of refs. <cit.> is used to model the blades, by projecting forces computed along the lifting lines onto the LES mesh grid. Simulations are run with a turbulent wind obtained from a precursor generated in stable atmospheric conditions. The Cartesian mesh consists of approximately 13.5 million cells, and uses six refinement levels. The smallest cells measure 1, and are located in correspondence of the rotors. The computational domain, grids and turbine layout are shown in fig. <ref>.
§ RESULTS AND ANALYSIS
The scenario analyzed in this paper consists of a cluster of three IEA 3.4 wind turbines <cit.>, installed at a distance of 4 diameters and misaligned by half a diameter relatively to the incoming wind vector. The scenario is adapted from <cit.>, and it is chosen to mimic the typical operating conditions of an onshore wind plant with close spacings and partial wake overlaps. The inflow is characterised by a turbulence intensity of 6% at hub height, a shear of 0.2, and a mean wind speed of 9.5, equal to the rated speed of the turbines.
§.§ Steady-state conditions
First, the open-loop optimal scheduler is demonstrated in steady-state conditions. For each turbine, fig. <ref> reports the yaw set-points and power share percentage that maximize the smallest power margin.
The figure shows that the most upstream turbines are misaligned relatively to the wind, with the goal of increasing the power reserves of the downstream ones. Moreover, power share is not distributed equally, because of different local inflow conditions and wake effects.
These margin-optimal set-points (noted induction + yaw in the following) are compared to the ones of two alternative strategies in fig. <ref>. In the first of these strategies (noted induction), only induction is used to match the demand (i.e. the turbines are always aligned with the incoming wind vector). In the second (noted first yaw then induction), the turbines are first misaligned to maximize power capture, and then induction control is used to match the demand. In both cases, the power share is computed in order to maximize the smallest power margin in the wind farm.
The figure shows that —as expected— the margin drops to zero in correspondence of the maximum power of the plant, and increases as the power demand is lowered and the wind turbines are derated. Compared to the induction case, the methods featuring wake steering are able to significantly increase the power margin for a wide range of wind farm power demands. Furthermore, the first yaw then induction strategy generates similar margins to the induction + yaw case at relatively high TSO demands. However, its performance drops slightly as the power demand is lowered, because of the power losses caused by its larger persistent yaw misalignments. These losses are particularly enhanced by the low thrust coefficient at which the turbines operate, due to curtailment <cit.>. Because of its better ability to generate large margins, only the induction + yaw strategy is considered in the remainder of this work.
§.§ Unsteady simulations
Next, the methodology is tested with unsteady CFD simulations. Results are compared with the controller developed in ref. <cit.>, which is assumed here as the state-of-the-art benchmark.
A dynamic reference power signal typical of automatic generation control (AGC) is used as input signal. AGC is the secondary response regime of grid frequency control, and it consists in the modification of the power output of a plant depending on the dynamically changing requests by the transmission system operator <cit.>. A similar signal has been considered by other authors <cit.>.
Fig. <ref> presents the average velocity fields in the wind farm obtained with the benchmark control and with the proposed induction+yaw approach. The effect of yaw misalignment can be clearly observed, as the wakes of the upstream turbines appear to have been deflected in Fig. <ref>.
Fig. <ref> shows a comparison of the power tracking error obtained with the benchmark method and the newly proposed one.
The figure shows that the benchmark method presents frequent negative deviations from the reference signal. These deviations are due to the power saturation of the wind turbines operating in waked inflow conditions. On the other hand, the controller featuring wake steering is capable of reducing the frequency of occurrence of these phenomena, thereby improving the overall tracking accuracy. For the results of fig. <ref>, the new wind farm controller reduces the root-mean-square of the tracking error by 42.6% relatively to the benchmark. In the latter, the significant error occurring at t≈760 is due to a simultaneous saturation of all the wind turbines in the cluster.
In order to better understand how the local power margin is increased by the new method, the pitch angles commanded by the wind turbine controllers are plot in fig. <ref>.
For a standard curtailment derating strategy, larger power reserves are obtained for larger absolute differences between the commanded pitch angle and the optimal value. Figures <ref> and <ref> show that waked turbines display the highest margin increase compared to the benchmark case, due to the lowered impact of the impinging wakes. On the other hand, the most upstream wind turbine (see fig. <ref>) generally displays a lower margin with the new control strategy, because of its yaw misalignment. Nevertheless, for the benchmark controller, the frequent saturation of the downstream turbines number 2 and 3 forces the upstream turbine number 1 to compensate, and in these conditions its margin drops relatively to the new proposed formulation.
Finally, the effect of the new methodology on loads is briefly considered. Fig. <ref> shows the damage equivalent loads (DEL), computed by rainflow counting (<cit.>), for the tower base fore-aft bending moment of each turbine.
Results indicate that the new control strategy reduces fatigue compared to the benchmark one. These results can be explained by the fact that the benchmark controller is unable to maintain load balancing within the farm in high-power-demand conditions, due to the frequent saturation events. Conversely, the new controller reduces the extent of the saturation phenomena, thereby suppressing the abrupt controller actions that are responsible for high-amplitude fatigue cycles.
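For completeness, a possible post-processing route for such DELs is sketched below. It assumes the count_cycles routine of the rainflow Python package and generic choices for the Wöhler exponent m and the equivalent cycle number n_eq, neither of which is specified in the text; the synthetic signal only stands in for a tower-base moment time series.

```python
import numpy as np
import rainflow   # assumed available; count_cycles returns (range, count) pairs

def damage_equivalent_load(series, m=4.0, n_eq=1.0e6):
    """DEL of a load time series from rainflow cycle counting."""
    cycles = rainflow.count_cycles(series)
    damage = sum(count * rng ** m for rng, count in cycles)
    return (damage / n_eq) ** (1.0 / m)

t = np.linspace(0.0, 600.0, 60001)
signal = 5.0e3 + 8.0e2 * np.sin(0.2 * t) + 1.0e2 * np.random.default_rng(0).normal(size=t.size)
print(damage_equivalent_load(signal))
```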
§ CONCLUSIONS
A new wind farm control methodology for power tracking was presented. The methodology combines wake steering and induction control with the aim of maximizing the lowest power margin within a wind farm. The implementation is based on a slow-rate open-loop optimal set-point scheduler, combined with a fast feedback loop corrector. Compared to a state-of-the-art benchmark, the new methodology is capable of reducing the root-mean-square of the tracking error in conditions of power demand close to the maximum capacity of the plant. In such conditions, the fatigue of the individual wind turbines is also mitigated, because of less frequent saturation phenomena.
§ ACKNOWLEDGMENT
The authors acknowledge the support of the German Federal Ministry for Economic Affairs and Climate Action (BMWK) through the PowerTracker project. The authors express their appreciation to the Leibniz Supercomputing Centre (LRZ) for providing access and computing time on the SuperMUC Petascale System under Projekt-ID pr84be “Large-eddy Simulation for Wind Farm Control”.
|
http://arxiv.org/abs/2307.05954v1 | 20230712065659 | Ellipsoid Fitting Up to a Constant | [
"Jun-Ting Hsieh",
"Pravesh K. Kothari",
"Aaron Potechin",
"Jeff Xu"
] | math.PR | [
"math.PR",
"cs.CC",
"cs.DS"
] |
Ellipsoid Fitting Up to a Constant
Jun-Ting HsiehCarnegie Mellon University. . Supported by NSF CAREER Award #2047933. Pravesh K. KothariCarnegie Mellon University, . Supported by NSF CAREER Award #2047933, Alfred P. Sloan Fellowship and a Google Research Scholar Award.
Aaron PotechinThe University of Chicago, . Supported in part by NSF grant CCF-2008920.
Jeff XuCarnegie Mellon University, . Supported in part by NSF CAREER Award #2047933, and a Cylab Presidential Fellowship.
August 12, 2023
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In <cit.>, Saunderson, Parrilo and Willsky asked the following elegant geometric question: what is the largest m= m(d) such that there is an ellipsoid in ^d that passes through v_1, v_2, …, v_m with high probability when the v_is are chosen independently from the standard Gaussian distribution N(0,I_d). The existence of such an ellipsoid is equivalent to the existence of a positive semidefinite matrix X such that v_i^⊤X v_i =1 for every 1 ≤ i ≤ m — a natural example of a random semidefinite program. SPW conjectured that m= (1-o(1)) d^2/4 with high probability. Very recently, Potechin, Turner, Venkat and Wein <cit.> and Kane and Diakonikolas <cit.> proved that m ≥ d^2/log^O(1)(d) via certain explicit constructions.
In this work, we give a substantially tighter analysis of their construction to prove that m ≥ d^2/C for an absolute constant C>0. This resolves one direction of the SPW conjecture up to a constant. Our analysis proceeds via the method of Graphical Matrix Decomposition that has recently been used to analyze correlated random matrices arising in various areas <cit.>. Our key new technical tool is a refined method to prove singular value upper bounds on certain correlated random matrices that are tight up to absolute dimension-independent constants. In contrast, all previous methods that analyze such matrices lose logarithmic factors in the dimension.
§ INTRODUCTION
What's the largest m so that for m points v_1, v_2, …, v_m ∈^d sampled independently from the d-dimensional standard Gaussian distribution (0,I_d), there exists an ellipsoid that passes through each of the v_is with high probability? This latter condition is equivalent to asking for a positive semidefinite matrix Λ such that v_i^⊤Λ v_i = 1 for 1 ≤ i ≤ m and thus, equivalently, the question asks for the largest m such that the basic stochastic semidefinite program above remains feasible with high probability.
It is not hard to prove that for any m≤ d+1, an ellipsoid as above exists with high probability over the v_is <cit.>. On the other hand, since the dimension of the smallest linear subspace that contains the positive semidefinite cone of d × d matrices is \binom{d+1}{2}∼ d^2/2, it is easy to prove that for m ≫ d^2/2, there cannot be an ellipsoid passing through the v_is with high probability.
In 2013, Saunderson, Parrilo and Willsky <cit.> studied this basic geometric question and conjectured that there is a sharp phase transition for the problem (from feasibility/existence of an ellipsoid to non-existence of an ellipsoid) as m crosses d^2/4.
Let ε>0 be a constant and v_1,…, v_m ∼(0,I_d) be i.i.d. standard Gaussian vectors in ^d. Then,
* If m ≤ (1-ε) d^2/4, then v_1,…, v_m have the ellipsoid fitting property with probability 1-o_d(1).
* If m ≥ (1+ε) d^2/4, then v_1,…, v_m have the ellipsoid fitting property with probability o_d(1).
This bound is 1/2 of the dimension of the smallest linear subspace containing the positive semidefinite cone of d × d matrices. Said differently, the SCPW conjecture (developed in a sequence of works <cit.>) posits that the positive semidefiniteness constraint “forces" a drop of a factor 2 in the threshold m for infeasibility. The SCPW conjecture was motivated by results of numerical experiments (see also the experiments presented in <cit.>).
Early on, <cit.> established the existence of a feasible ellipsoid for any m ≤ O(d^6/5-ε) whp. Recently, there has been a new wave of progress on this bound. A recent result <cit.> on establishing Sum-of-Squares lower bounds for the Sherrington-Kirkpatrick model, as a corollary, yields an estimate of m ≤ O(d^3/2-ε). In fact, though not explicitly stated, their work already contains ideas that imply a significantly stronger bound of m ≤ d^2/log^O(1)(d). Very recently, two independent works <cit.> and <cit.> analyzed two slightly different explicit constructions for Λ to recover a similar bound of m ≤ d^2/log^O(1)(d). In their works <cit.>, the authors ask the question of analyzing their construction (or a different one) to obtain an improved and almost optimal estimate of m = d^2/C for some absolute constant C>0.
The main result of this work achieves this goal. Specifically, we prove:
For m ≤ c d^2 for some universal constant c>0 and v_1,…,v_m ∼(0, I_d) drawn independently, with probability at least 1-o_d(1), there exists an ellipsoid passing through each v_i.
We note that the failure probability is in fact 2^-d^ε for some small constant ε>0, due to the nature of our proof for matrix norm bounds.
We establish <Ref> by analyzing the construction of Kane and Diakonikolas <cit.> (which is a variant of the construction proposed in <cit.>). Our argument can be used to recover a bound of c ∼ 1/10^8. We have not tried to optimize this bound. Numerical experiments suggest that the <cit.> construction we analyze cannot approach c=1/4, so establishing the sharp constant in the SCPW conjecture will likely need new ideas. <Ref> shows a summary of our result compared to prior work.
Our key idea departs from the analysis technique of <cit.> and instead relies on the method of graphical matrix decomposition. This method decomposes a random matrix with correlated entries into a sum of structured random matrices called graph matrices. Graph matrices can be thought of as an analog of the Fourier basis in the analysis of functions over product spaces. This method was first employed in the works establishing tight sum-of-squares lower bound on the planted clique problem <cit.> and has since then been employed in several follow-up works on proving sum-of-squares lower bounds and more recently in analyzing well-conditionedness of linear algebraic algorithms for generalizations of tensor decomposition <cit.>).
The key technical work in the analysis then becomes understanding the smallest and the largest singular values of graph matrices. All prior works rely on arguments that establish bounds on the largest singular values that are accurate up to polylogarithmic factors in the underlying dimension of the matrices. The work of <cit.> recently showed how to use such bounds to also obtain estimates of the smallest singular values of graph matrices (which are otherwise significantly more challenging to prove). Our analysis builds on their conceptual framework but with significant technical upgrades. This is because the quantitative bounds proved in <cit.> do not allow us to directly obtain an improvement on the previous estimates <cit.>.
Our main technical contribution is a new method to establish bounds on the largest singular values of graph matrices that are tight up to dimension-independent absolute constants. This allows us to obtain substantially improved estimates for the SCPW conjecture. Given the host of previous applications of such bounds, we expect that our results will have many more applications down the line.
Concurrent Work We note that a concurrent work of Bandeira <cit.> also obtains a sharper analysis of <cit.> to establish a similar result as this work. They analyze the same construction of identity perturbation as us. In their work, <cit.> ask the question of obtaining estimates that hold with inverse exponential failure probability (as opposed to inverse polynomial failure probability that their work establishes). They also outline a proof strategy that could potentially achieve this goal. We note that our proof does indeed recover an inverse exponential failure probability naturally.
§.§ Technical overview
Following the convention of <cit.>, for the rest of the paper we will assume that v_1,…,v_m ∼ N(0, (1/d) I_d), so that each vector has expected squared norm 1.
Note that this does not change the problem as we can simply scale Λ.
Our construction of Λ is the “identity perturbation construction”, which is the same one analyzed in <cit.> and was proposed in <cit.>.
As an intuition, observe that Λ = I_d almost works: v_i^T I_d v_i = ‖v_i‖_2^2 ≈ 1.
Thus, the idea is to define Λ as a perturbation of I_d: Λ = I_d - ∑_i=1^m w_i v_i v_i^T, where w = (w_1,…,w_m) ∈^m.
To determine w, observe that the constraints v_i^T Λ v_i = 1 give m linear constraints on w, and this can be written as a linear system represented by a matrix M ∈ ℝ^{m×m} with entries M[i,j] = ⟨v_i, v_j⟩^2.
Thus, given that M is full rank, w is uniquely determined by w = M^-1η for some vector η (see <Ref>).
This construction satisfies v_i^T Λ v_i = 1 automatically, so the next thing is to prove that Λ≽ 0.
Therefore, we have two high-level goals:
* Prove that M is full rank and analyze M^-1.
* Prove that ∑_{i=1}^m w_i v_i v_i^T has spectral norm bounded by 1.
These two immediately imply that Λ is a valid construction.
To achieve the first goal, we decompose M into several components.
Roughly, we write M = A + B where A is a perturbed identity matrix A = I_m - T and B is a rank-2 matrix (see <Ref>).
We first show that ‖T‖ ≤ O(√(m)/d) < 0.5 when m ≤ c d^2 (<Ref>), hence A is well-conditioned.
Then, using the fact that B has rank 2, we can apply the Woodbury matrix identity (<Ref> and <ref>) — a statement on the inverse of low-rank corrections of matrices — to conclude that M is invertible and obtain an expression for M^-1 (<Ref>).
This is carried out in <Ref>.
Next, for the second goal, we need to further expand A^-1.
Since ‖T‖ < 1, we can apply the Neumann series and write A^{-1} = (I_m - T)^{-1} = ∑_{k=0}^∞ T^k.
For the analysis, we select certain thresholds to truncate this series such that the truncation error is small.
Then, we write M^-1 in terms of the truncated series plus a small error, which will be useful later for the analysis of R.
This is carried out in <Ref>.
Finally, given the expression of M^-1,
we are able to express R using the terms that show up for M^{-1}, and the bulk of our work culminates in bounding the spectral norm of R in <ref>.
Bounding ‖R‖ ≤ 1 implies that Λ ≽ 0, completing the proof.
Requiring tight norm bounds
Our main technical lemmas are the spectral norm bounds of T (<Ref>) and the matrices in the decomposition of R at <ref>.
Clearly, we need our norm bound ‖T‖ ≤ O(√(m)/d) to be tight without polylog factors so that m ≤ O(d^2) suffices, and similarly for the matrices from R.
The standard starting point is the trace moment method: for any symmetric matrix M ∈ ℝ^{n×n} and q ∈ ℕ (usually taking q = polylog(n) suffices),
‖M‖^{2q} ≤ tr(M^{2q}) = ∑_{i_1,i_2,…,i_{2q} ∈ [n]} M[i_1,i_2] M[i_2,i_3] ⋯ M[i_{2q}, i_1]
We view the summand as a closed walk i_1 → i_2 →⋯→ i_2q→ i_1 on n vertices.
For a random matrix, we study the expected trace E[tr(M^{2q})].
In the simple case when M is a Gaussian matrix (GOE), we see that after taking the expectation, the non-vanishing terms are closed walks where each edge (u,v) is traversed even number of times.
This is in fact true for any M as long as the odd moments are zero.
Thus, a precise upper bound on E[tr(M^{2q})] can be obtained by carefully counting such closed walks (see <cit.>).
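As a standard back-of-the-envelope illustration (not part of the argument here): for a d × d symmetric matrix M with independent mean-zero entries of variance 1/d, the dominant closed walks of length 2q are the tree-like ones that traverse q distinct edges twice each; up to lower-order terms, there are at most d^{q+1} choices for the vertices and Cat_q ≤ 4^q tree shapes, while the q doubled edges contribute a factor d^{-q}. Hence E[tr(M^{2q})] ≲ d · 4^q, and taking q = Θ(log^2 d) together with Markov's inequality recovers the familiar bound ‖M‖ ≤ 2 + o_d(1); this is the same GOE warm-up revisited with the block-value machinery later in the paper.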
Our matrices are more complicated; each entry is a mean-zero polynomial of Gaussian random variables.
To carry out the trace method, we represent the matrices as graphs, hence the term graph matrices.
The framework of graph matrices was first introduced by <cit.>, and
over the years, off-the-shelf norm bounds (e.g. <cit.>) for graph matrices have been developed and successfully used in several works <cit.>.
However, the currently known norm bounds are only tight up to polylog factors, hence not sufficient for us.
Therefore, the bulk of our paper is to prove norm bounds for these matrices that are tight up to constant factors. In fact, our bounds are even tight in the constant when the matrices are explicitly written down following the graph matrix language. That said, we do not pursue the tight constant-factor dependence in this work: we believe that an analysis of our candidate matrix following the current road-map but with norm bounds tight-to-constant would still fall short of reaching the conjectured threshold of d^2/4.
In the context of a fine-grained understanding for graph matrices, Potechin and Cai <cit.> determined the limiting distribution of the spectrum of the singular values of Z-shaped and multi-Z-shaped graph matrices. However, their results are only for these specific graph matrices, and their analysis does not technically give norm bounds as they do not rule out having a negligible proportion of larger singular values.
Key idea towards tight norm bounds
Here, we briefly discuss the high-level ideas for proving tight norm bounds.
To illustrate our techniques, in <Ref> we will give a full proof for a matrix that arises in our analysis as an example, and also discuss key ideas that allow us to analyze more complicated matrices.
The key to counting walks is to specify an encoding, which we view as information required for a walker to complete a walk.
If we can show that such an encoding uniquely identifies a walk, then we can simply bound the walks by bounding the number of possible encodings.
Thus, all we need to do is to come up with an (efficient) encoding scheme and prove that the walker is able to complete a walk.
Using standard encoding schemes, we quickly realize that the walker may be confused during the walk, i.e., the walker does not have enough information to perform the next step.
Thus, we need to pay for additional information in the encoding to resolve confusions.
So far, this is the same high-level strategy that was used in prior work <cit.>, and this extra pay is often the source of extra log factors in the norm bounds.
Our key innovation is to pay for the extra information during steps that require much less information than normal.
Roughly speaking, we label each step of the walk as either (1) visiting a new vertex, (2) visiting an old vertex via a new edge, (3) using an old edge but not the last time, (4) using an old edge the last time (see <Ref>).
The high level idea is that the dominating walks in the trace are the ones that use only the 1st and 4th types, while the 2nd and 3rd types require less information (which we call gaps).
The main observation is that the walker will be confused only when there are steps of the 2nd and 3rd type involved, but we can pay extra information during these steps to resolve potential (future) confusions.
This is illustrated in <Ref>.
§.§ Comparison to prior works
Our candidate matrix construction of Λ is essentially the same as that of <cit.>, while we adopt different techniques to bound the spectral norm of the non-identity component.
In particular, they use an elegant cover (or ε-net) argument which is significantly different from ours.
That said, a major obstacle is the norm bound needed for invertibility: their argument suffers an additional polylog gap from the ε-net argument, and this is partially why we adopt the seemingly more complicated proof strategy via graph matrix decomposition.
Closer to our analysis is the work of <cit.>. They study a construction of “least-square minimization” proposed by <cit.>, which is equivalent to projecting out the identity mass onto the subspace of matrices satisfying the constraints. In particular, their matrix analysis proceeding via Woodbury expansion and Neumann series using graph matrices serves as a road-map for our current work. In this work, we develop a more refined understanding of the structured random matrices that we believe would be useful in further and more fine-grained investigations of problems in average-case complexity.
In the context of Planted Affine Plane problem, <cit.> reaches the threshold of O(d^2 ) implicitly. They adopt the framework of pseudo-calibration <cit.> to obtain a candidate matrix, and follow a similar recipe as ours via graph matrix decompositions and spectral analysis.
That said, it is an interesting question whether solutions coming from a pseudo-calibration type of construction might give us some extra mileage in ultimately closing the constant gap.
A natural idea is to analyze the planted distribution pioneered in <cit.>: unfortunately, it can be easily verified that the low-degree polynomial hardness for the particular planted distribution actually falls apart even if we assume an arbitrary constant gap.
§ PROOF OF MAIN RESULT
Given v_1, v_2,…, v_m that are i.i.d. samples from N(0, (1/d) I_d), recall that we must construct a matrix Λ such that (1) v_i^T Λ v_i = 1 for any i∈[m], and (2) Λ ≽ 0.
In this section, we describe our candidate matrix (<Ref>).
To prove that it satisfies the two conditions above, we need to analyze certain random matrices (and their inverses) that arise in the construction, which involves decomposing the matrices into simpler components.
We will state our key spectral norm bounds (<Ref> and <Ref>) whose proofs are deferred to later sections,
and complete the proof of <Ref> in <Ref>.
§.§ Candidate construction
The following is our candidate matrix Λ, which is the same one as <cit.>.
Given v_1,…,v_m ∼ N(0, (1/d) I_d), we define the matrix Λ ∈ ℝ^{d × d} to be
Λ ≔ I_d - ∑_{i=1}^m w_i v_i v_i^T
where we take w = (w_1, w_2,…, w_m) to be the solution to the following linear system,
M w = η
for η ∈ ℝ^m given by
η_i ≔ ‖v_i‖_2^2 - 1, ∀ i∈ [m]
and M ∈ ℝ^{m×m} with entries given by
M[i,j] ≔ ⟨ v_i, v_j ⟩^2, ∀ i,j∈ [m]
We first make the following simple observation.
For any i∈ [m], the constraint v_i^T Λ v_i = 1 is satisfied.
For any i∈ [m],
v_i^T Λ v_i = v_i^T I_d v_i - ∑_{j∈ [m]} w_j ⟨ v_i, v_j⟩^2
= ‖v_i‖_2^2 - ⟨ M[i], w⟩
= ‖v_i‖_2^2 - η_i = 1
Here M[i] is the i-th row of M, and the last equality follows from M w = η and η_i = ‖v_i‖_2^2 - 1 from <Ref>.
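To make the construction concrete, the following is a minimal numerical sketch (our own illustration, not code accompanying the paper): it samples the v_i, solves the linear system M w = η above, and checks both the quadratic constraints and positive semidefiniteness of Λ. The sizes d, m and the random seed are arbitrary choices, with m taken comfortably below d^2/4.

```python
# Minimal sketch of the identity perturbation construction (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
d, m = 40, 200                                  # m is well below d^2/4 = 400
V = rng.normal(size=(m, d)) / np.sqrt(d)        # rows v_i ~ N(0, I_d / d)

M = (V @ V.T) ** 2                              # M[i, j] = <v_i, v_j>^2
eta = np.sum(V ** 2, axis=1) - 1.0              # eta_i = ||v_i||_2^2 - 1
w = np.linalg.solve(M, eta)                     # solve M w = eta

Lam = np.eye(d) - (V.T * w) @ V                 # Lambda = I_d - sum_i w_i v_i v_i^T

fits = np.einsum('ia,ab,ib->i', V, Lam, V)      # v_i^T Lambda v_i for every i
print("max |v_i^T Lam v_i - 1| :", np.abs(fits - 1).max())
print("min eigenvalue of Lam   :", np.linalg.eigvalsh(Lam).min())
```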
Structure of subsequent sections
For Λ to be well-defined, we require that M is full rank (hence invertible).
Note that it is easy to see that M is positive semidefinite, since M is a Gram matrix with M[i,j] = ⟨v_i^{⊗2}, v_j^{⊗2}⟩.
To analyze M, we will show a decomposition of M in <Ref> that allows us to more easily analyze its inverse.
In <Ref>, we will prove that M is in fact positive definite (<Ref>).
Next, to prove that Λ≽ 0,
we will write Λ = I_d - R where
R ≔ ∑_{i=1}^m w_i v_iv_i^T = ∑_{i=1}^m (M^{-1}η)[i] · v_iv_i^T
and prove that ‖R‖ is bounded by 1.
This is done in <Ref>.
Finally, combining the analyses, we finish the proof of <Ref> in <Ref>.
§.§ Decomposition of M
The proof of <Ref> requires careful analysis of the matrix M from <Ref> and its inverse.
To this end, we first decompose M as M = A + B such that, intuitively, A is a perturbation of a (scaled) identity matrix and B has rank 2.
We will later see how this decomposition allows us to analyze M^-1 more conveniently.
M = A + B, where
A ≔ M_α + M_β + M_D + (1+1/d) I_m,
B ≔ (1/d) J_m + (1/d)( 1_m ·η^T + η· 1_m^T),
J_m is the all-ones matrix, M_α, M_β ∈ ℝ^{m×m} are matrices with zeros on the diagonal, and M_D ∈ ℝ^{m×m} is a diagonal matrix, defined as follows:
* M_α[i,j] ≔ ∑_{a ≠ b ∈ [d]} v_i[a]· v_i[b] · v_j[a]· v_j[b] for i ≠ j ∈ [m],
* M_β[i,j] ≔ ∑_{a ∈ [d]}( v_i[a]^2 - 1/d)( v_j[a]^2 - 1/d) for i ≠ j ∈ [m],
* M_D[i,i] ≔ ‖v_i‖_2^4 - (2/d)‖v_i‖_2^2 - 1 for i∈[m].
For any off-diagonal entry i≠ j ∈ [m], on the right-hand side we have
M[i,j] = ⟨v_i, v_j⟩^2
= ( ∑_{a∈[d]} v_i[a] v_j[a] )^2
= ∑_{a≠ b∈[d]} v_i[a]· v_i[b]· v_j[a]· v_j[b]
+ ∑_{a∈[d]} v_i[a]^2 · v_j[a]^2
The first term is exactly M_α[i,j].
For the second term,
∑_{a∈[d]} v_i[a]^2 · v_j[a]^2
= ∑_{a∈[d]} ( v_i[a]^2 - 1/d )( v_j[a]^2 - 1/d ) + (1/d)( ‖v_i‖_2^2 + ‖v_j‖_2^2 ) - 1/d
= M_β[i,j] + (1/d)η_i + (1/d)η_j + 1/d,
using η_i = ‖v_i‖_2^2 - 1 and η_j = ‖v_j‖_2^2 - 1.
Thus, M[i,j] = M_α[i,j] + M_β[i,j] + 1/d + 1/d( 1_m ·η^T + η· 1_m^T)[i,j].
For the diagonal entries, the right-hand side of the (i,i) entry is
M_D[i,i] + (1+1/d) + 1/d + (2/d)η_i
= ‖v_i‖_2^4 - (2/d)‖v_i‖_2^2 - 1 + 1 + 2/d + (2/d)( ‖v_i‖_2^2 - 1 )
= ‖v_i‖_2^4 = M[i,i]
This completes the proof.
The intention behind this decomposition is that for v_i ∼ N(0, (1/d)I_d), the matrices M_α, M_β, M_D are all mean 0 (though not with the same variance), since E‖v_i‖_2^2 = 1 and E‖v_i‖_2^4 = 1 + 2/d.
Therefore, we expect ‖M_α + M_β + M_D‖ to be small, which implies that A is positive definite and well-conditioned.
Furthermore, observe that B has rank 2:
B = (1/d)J_m + (1/d)( 1_m ·η^T + η· 1_m^T)
= (1/d) [ 1_m  η ] · [ 1 1; 1 0 ] · [ 1_m  η ]^T
§.§ Inverse of M
The decomposition of M into A and a rank-2 matrix B (<Ref>) allows us to apply the Woodbury matrix identity about the inverse of low-rank corrections of invertible matrices.
[Matrix Invertibility]
Suppose A ∈^n_1 × n_1 and C ∈^n_2 × n_2 are both invertible matrices, and U∈^n_1 × n_2 and V ∈^n_2 × n_1 are arbitrary.
Then, A + U C V is invertible if and only if C^-1 + V A^-1 U is invertible.
[Woodbury matrix identity <cit.>]
Suppose A ∈^n_1 × n_1 and C ∈^n_2 × n_2 are both invertible matrices, and U∈^n_1 × n_2 and V ∈^n_2 × n_1 are arbitrary. Then
(A+ UCV)^-1 = A^-1 - A^-1U(C^-1+ VA^-1U)^-1VA^-1
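As a quick sanity check of how this identity is used here, the following sketch (ours; the size n and the random perturbation are arbitrary) verifies the Woodbury formula numerically for a rank-2 correction A + UCU^T with the same 2×2 matrix C that appears below.

```python
# Numerical sanity check of the Woodbury identity for a rank-2 correction.
import numpy as np

rng = np.random.default_rng(1)
n = 50
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # a well-conditioned perturbed identity
U = rng.normal(size=(n, 2))
C = np.array([[1.0, 1.0],
              [1.0, 0.0]])                      # the same 2x2 matrix C used in the text

A_inv = np.linalg.inv(A)
lhs = np.linalg.inv(A + U @ C @ U.T)
rhs = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + U.T @ A_inv @ U) @ U.T @ A_inv
print("max |LHS - RHS| :", np.abs(lhs - rhs).max())
```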
In light of <Ref>, we can write B in <Ref> as B = UCU^T where U = V^T = (1/√d) [ 1_m  η ] ∈ ℝ^{m × 2} and
C = [ 1 1; 1 0 ],
and M = A + UCU^T.
Note that C^-1 = [ 0 1; 1 -1 ], and we have
C^{-1} + U^T A^{-1}U = [ 1_m^T A^{-1} 1_m/d   1 + η^T A^{-1} 1_m/d ;  1 + η^T A^{-1} 1_m/d   -1 + η^T A^{-1}η/d ] ≕ [ r s; s u ]
We first need to show that A is invertible.
Recall from <Ref> that A = (1+1/d)I_m + M_α + M_β + M_D.
We will prove the following lemma, whose proof is deferred to <Ref>.
Suppose m ≤ cd^2 for a small enough constant c.
With probability 1-o_d(1), we have
* ‖M_α‖ ≤ 0.1,
* ‖M_β‖ ≤ 0.1,
* ‖M_D‖ ≤ O(√(log d/d)).
As an immediate consequence, we get the following:
With probability 1-o_d(1), the matrix A from <Ref> is positive definite (hence full rank), and
0.5 I_m ≼ A ≼ 1.5 I_m
Since A = (1+1/d)I_m + M_α + M_β + M_D, by <Ref> the eigenvalues of A must lie within 1 ± 0.2 ± O(√(log d/d)) ⊆ (0.5, 1.5) (we assume d is large).
Next, from <Ref>, we can prove that M is invertible (<Ref>) by showing that the 2× 2 matrix C^-1 + U^T A^-1U is invertible, which is in fact equivalent to ru - s^2 ≠ 0.
We first need the following bound on the norm of η, whose proof is deferred to <Ref>.
With probability at least 1-o_d(1),
‖η‖_2^2 ≤ (1+o_d(1)) · 2m/d
Suppose m ≤ cd^2 for a small enough constant c.
Let r,s,u ∈ be defined as in <Ref>.
With probability at least 1-o_d(1), we have
* r ∈m/d· [2/3, 2],
* |s| ≤ 1+o_d(1),
* u ∈ [-1, -1/2].
Thus, we have
s^2 - ru ≥ Ω(m/d)
As a consequence, M is invertible.
By <Ref>, we know that 2/3 I_m ≼ A^-1≼ 2I_m.
Thus, r = (1/d) 1_m^T A^{-1} 1_m ∈ (1/d)‖1_m‖_2^2 · [2/3,2], hence r ∈ (m/d)· [2/3, 2].
For u, we have
(1/d) η^T A^{-1}η ≤ (1/d) ‖A^{-1}‖ · ‖η‖_2^2 < (1+o_d(1))·4m/d^2 < 1/2
where the last inequality holds since m ≤ c d^2 for a small enough constant c.
Thus, u = -1 + η^T A^-1η/d∈ [-1, -1/2].
We defer the proof for s to <Ref> in the appendix. With the bounds on r, s and u, we immediately get s^2 - ru ≥ Ω(m/d).
To prove that M is invertible, let us first recall that we write M = A + UCU^T where A is defined in <Ref>, U = V^T = (1/√d) [ 1_m  η ] ∈ ℝ^{m × 2} and
C = [ 1 1; 1 0 ].
By <Ref>, A is invertible.
Then by <Ref>, we know that M is invertible if and only if C^{-1} + U^T A^{-1} U = [ r s; s u ] (see <Ref>) is invertible, which is equivalent to ru - s^2 ≠ 0.
Thus, s^2 - ru ≥Ω(m/d) suffices to conclude that M is invertible.
§.§ Finishing the proof of Theorem <ref>
The final piece of proving <Ref> is to show that R = ∑_{i=1}^m w_i v_i v_i^T has spectral norm bounded by 1, which immediately implies that the candidate matrix Λ = I_d - R ≽ 0.
There exists some absolute constant C_R such that for m ≤ d^2/C_R,
‖R‖ ≤ 1/2
The proof is deferred to <Ref>.
In particular, we will write an expanded expression of M^{-1} and obtain a decomposition of R (<Ref>).
Then, in <Ref>, we prove tight spectral norm bounds for matrices in the decomposition, which then completes the proof of <Ref>.
Combining <Ref> and <ref> we can finish the proof of <Ref>.
The matrix M (recall <Ref>) is invertible due to <Ref>, thus our candidate matrix Λ = I_d - R defined in <Ref> is well-defined.
Furthermore, by <Ref> we have that ‖R‖ < 1.
This proves that Λ≻ 0.
§ MACHINERY FOR TIGHT NORM BOUNDS OF GRAPH MATRICES
One of the main technical contributions of this paper is providing tight spectral norm bounds (up to constants per vertex/edge) for structured random matrices with correlated entries (a.k.a. graph matrices).
We note that prior to this work, most known norm bounds for such matrices are only tight up to some logarithmic factors <cit.>, while not much is known in terms of precise bounds without log factors except for several specific cases (see e.g. <cit.>).
§.§ Preliminaries
We first give a lightweight introduction to the theory of graph matrices.
For interested readers who seek a thorough introduction or a more formal treatment, we refer them to its origin in a sequence of works in Sum-of-Squares lower bounds <cit.>.
We will follow the notations used in <cit.>.
Throughout this section, we assume that there is an underlying (random) input matrix G and a Fourier basis {χ_t}_{t∈ℕ}.
We first define shapes, which are representations of structured matrices whose entries depend on G.
A shape τ is a tuple (V(τ), U_τ, V_τ, E(τ)) associated with a (multi) graph (V(τ), E(τ)).
Each vertex in V(τ) is associated with a vertex-type that indicates the range of the labels for the particular vertex.
Each edge e ∈ E(τ) is also associated with a Fourier index t(e) ∈ ℕ.
Moreover, we have U_τ, V_τ⊆ V(τ) as the left and right boundary of the shape.
We remind the reader that V_τ should be distinguished from V(τ), where V_τ is the right boundary set, while V(τ) is the set of all vertices in the graph.
<Ref> show the shapes for the matrices M_α and M_β defined in <Ref>.
For these shapes, there are two vertex-types (square and circle).
The two ovals in each shape indicate the left and right boundaries U_τ and V_τ.
We next describe how to associate a shape to a matrix (given the underlying matrix G).
Given a shape τ, we call a function σ that assigns a label to each vertex in V(τ) a mapping of the shape if
* σ assigns a label for each vertex according to its specified vertex-type;
* σ is an injective mapping for vertices of the same type.
Given a shape τ, we define its graphical matrix M_τ to be the matrix indexed by all possible boundary labelings S, T, and for each of its entries, we define
M_τ[S, T] = ∑_σ: V(τ) →
σ(U_τ) = S, σ(V_τ) = T ∏_e∈ E(τ)χ_t(e)(G[σ(e)])
Observe that for each entry M_τ[S,T], since σ must map U_τ and V_τ to S and T,
M_τ[S,T] is simply a sum over labelings of the “middle” vertices V(τ) ∖ (U_τ∪ V_τ).
Take <Ref> for example. Suppose G ∈ ℝ^{m × d} and square and circle vertices take labels in [m] and [d] respectively, then we can write out the entries of the matrix: for i ≠ j∈ [m],
M_α[i,j] = ∑_{a≠ b∈ [d]} χ_1(G[i,a]) ·χ_1(G[i,b]) ·χ_1(G[j,a]) ·χ_1(G[j,b])
M_β[i,j] = ∑_{a∈ [d]} χ_2(G[i,a]) ·χ_2(G[j,a])
Note also that since σ must be injective for vertices of the same type and U_τ≠ V_τ in both examples, there is no mapping such that σ(U_τ) = σ(V_τ).
Thus, by <Ref>, both matrices have zeros on the diagonal.
Adaptation to our setting The above is a general introduction for graph matrices.
In this work, we specialize to the following setting:
* G ∈ ℝ^{m×d} is a random Gaussian matrix whose rows are v_1,…, v_m ∼ N(0, (1/d) I_d).
* The Fourier characters {χ_t}_t∈ are the (scaled) Hermite polynomials.
* For all graph matrices that arise in our analysis,
* |S| = |T| = 1,
* There are two vertex-types: square vertices take labels in [m] and circle vertices take labels in [d].
For our technical analysis, we may also employ this machinery on a broader range of graph matrices, for shapes in which we relax the local injectivity condition within each block. That said, for illustration purposes, it suffices to consider the vanilla setting of graph matrices.
For each graph matrix τ considered in this work, let D_V be the size constraint such that |V(τ)| ≤ D_V.
For concreteness, we will take D_V = polylog(d) throughout this work.
Trace moment method
For all our norm bounds, we will use the trace moment method: for any graph matrix M_τ with underlying random matrix G and any q ∈ ℕ,
E‖M_τ‖^{2q} ≤ E[tr((M_τ M_τ^T)^q)]
= ∑_{S_1, T_1, S_2, T_2, …, S_q, T_q:
boundaries} E[ M_τ[S_1,T_1] M_τ^T[T_1,S_2] ⋯ M_τ^T[T_q, S_1] ]
where the expectation is taken over G.
Notice that the summation is over closed walks across the boundaries: S_1 → T_1 → S_2 → T_2 →…→ S_1, where S_1, T_1, … are boundary labelings of M_τ. In particular, the walk consists of 2q block-steps, with the (2t-1)-th step across a block described by M_τ and the (2t)-th step across a block described by M_τ^T.
The crucial observation is that after taking expectation, all closed walks must walk on each labeled edge (i.e., Fourier character) an even number of times, since all odd moments of the Fourier characters are zero. Therefore, bounding the matrix norm is reduced to bounding the contribution of all such walks.
E‖M_τ‖^{2q} ≤ ∑_{P: closed walk} ∏_{e∈ E(P)} E[ χ_{t(e)}(G[e])^{mult_P(e)} ]
where E(P) denotes the set of labeled edges used by the walk P, mult_P(e) denotes the number of times e appears in the walk, and t(e) denotes the Fourier index (with slight abuse of notation).
We remind the reader not to confuse vertices/edges in the walk with vertices/edges in the shape.
The vertices in a walk are “labeled” by elements in [m] or [d] (depending on the vertex-type).
Similarly, each edge e∈ E() in a walk is labeled by an element in [m] × [d].
We will use the terms “labeled vertex” and “labeled edge” unless it is clear from context.
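Before turning to the encoding, here is a small numerical illustration (ours, with arbitrary sizes) of why the trace moment method controls the norm: tr((M M^T)^q)^{1/(2q)} upper bounds ‖M‖ and tightens as q grows, which is why taking moderately large q suffices.

```python
# Illustration: the trace moment bound ||M|| <= tr((M M^T)^q)^(1/(2q)) tightens with q.
import numpy as np

rng = np.random.default_rng(2)
n = 300
W = rng.normal(size=(n, n)) / np.sqrt(n)
M = (W + W.T) / np.sqrt(2)                      # symmetric, entries ~ N(0, 1/n)

s = np.linalg.svd(M, compute_uv=False)          # singular values of M
print("||M|| =", s.max())
for q in (1, 2, 5, 10, 20):
    bound = np.sum(s ** (2 * q)) ** (1 / (2 * q))   # = tr((M M^T)^q)^(1/(2q))
    print(f"q = {q:2d}: trace-moment bound = {bound:.3f}")
```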
§.§ Encoding closed walks
To count closed walks, we will provide an encoding that uniquely identifies each closed walk.
There are 2q steps in total, where q steps walk from square to circle vertices (type 1) and q steps vice versa (type 2), and each edge must be traversed an even number of times (can be any type).
Note the difference between steps and edges.
We will encode the closed walks step by step.
It is convenient to view the encoder as a walker who can see the future, deciding which vertex to go to at each step while potentially leaving notes for future steps.
It is a valid encoding as long as a decoder can reconstruct the entire walk based on the labels and notes left by the encoder.
* Pick a starting vertex i_1∈ [m].
* At each step, assign a label from {F,R,N,H}. Suppose we're currently at vertex u:
* “F” (Fresh) step: the next vertex is a new vertex that we haven't seen before, and we mark the new edge as “open”.
Choose a label in [m] (resp. [d]) to identify the next vertex if this step walks toward a square (resp. circle) vertex.
* “N” (Non-innovative) step: this step (u,v) is a new edge but the next vertex v has been visited before.
We choose an old vertex v, and in addition, leave two extra notes to indicate two (future) neighbors of v for some future R steps.
This only requires a label in [q] × [q], as we simply need to indicate two edges among all q steps of the same type.
* “H” (High-multiplicity) step: this step uses an old edge but this edge is not the last time it's traversed.
Similar to an N step, we use a label in [q] to both indicate the next vertex as well as leave one note at v for a future R step.
* “R” (Return) step: this step is the last time we will use this edge, marking the edge as “closed”.
If there are multiple open edges incident to u, we look at a note left by the first N or H step at u to determine which edge to close, and remove that note.
See <Ref> for an example of a closed walk.
We note that the terminology “fresh”, “non-innovative”, “high-multiplicity” and “return” are adopted from <cit.>.
In our encoding, the specific content of the notes is not important; what's important is that a label in [q] within each note suffices for the decoder to determine its next destination.
Intuitively, each N or H step has a gap from F steps: each F step gets a factor m or d to indicate the next vertex, whereas N/H steps don't.
Thus, we are able to charge extra information to N/H steps.
See <Ref> for further discussions.
We will show that the encoding uniquely identifies a closed walk.
The F, N, and H steps are clear: the walker knows exactly where to go.
The tricky part is when the walker is confused at an R step, i.e. there are more than one open edge.
We also call this an unforced return, as opposed to a forced return where there is only one open edge (the walker is forced to traverse that edge).
We thus need to show that the notes left by N/H steps suffice to resolve any confusion:
[All R step confusions are resolved]
For any R step from vertex v,
#(total notes at v) - #(used notes at v) ≥#(open edges incident to v) - 1
Note that the minus 1 is because if there's only one open edge, then it's a forced return (we don't need any note).
§.§ Resolving unforced R steps
To prove <Ref>, we will define a potential function for each vertex at time t, which roughly states how many open edges are left and how many notes were used (and discarded).
At time t, for each vertex v, let Pur_t(v) be defined as
Pur_t(v) = #(closed edges closed from v) + #(open edges incident to v) - 1
Here closed edges closed from v mean edges closed by an R step leaving v.
Note that we have a -1 in the definition because for each vertex we may by default assume the return is using a particular edge, hence at each time we know there is an edge presumed-to-be forced.
With this definition, the following lemma immediately proves <Ref>, because in our encoding, every time we go to v via an N step (resp. H step) we leave 2 notes (resp. 1 note).
At any time t∈[q], suppose the walker is currently at a vertex v, then
Pur_t(v) ≤ 2· n_t(v) + h_t(v)
where we define
* n_t(v) is the number of N steps arriving at v by time t;
* h_t(v) is the number of H steps arriving at v by time t.
We prove by induction.
At the time when v is first created by an F edge, Pur_t(v) = 0 (1 open F edge minus 1) and n_t(v) = h_t(v) = 0.
At time t, suppose the last time v was visited was at time t' < t, and suppose that the inequality holds true for t'.
Note that at time t'+1, Pur_{t'+1}(v) = Pur_{t'}(v)+1 if a new edge was created by an F or N step leaving v, otherwise Pur_{t'+1}(v) = Pur_{t'}(v) (an R step adds 1 to the number of closed edges closed from v, but decreases the number of open edges by 1).
On the other hand, n_{t'}(v) and h_{t'}(v) remain the same (we don't count outgoing steps for n_t(v), h_t(v)).
When we reach v at time t, we case on the type of step:
* Arriving by an R step: the edge is now closed, but the R step was not from v. So Pur_t(v) = Pur_{t'+1}(v) - 1 ≤ Pur_{t'}(v), while n_t(v) = n_{t'}(v) and h_t(v) = h_{t'}(v).
* Arriving by an N step: the edge is new, so Pur_t(v) = Pur_{t'+1}(v)+1 ≤ Pur_{t'}(v) + 2, and we have n_t(v) = n_{t'}(v) +1.
* Arriving by an H step: Pur_t(v) = Pur_{t'+1}(v) ≤ Pur_{t'}(v) + 1, and h_t(v) = h_{t'}(v)+1.
In all three cases, assuming Pur_{t'}(v) ≤ 2 · n_{t'}(v) + h_{t'}(v), we have Pur_t(v) ≤ 2· n_t(v) + h_t(v), completing the induction.
§.§ Tight norm bound up to constants
Combining factors from our encoding (i.e., vertex factors) and the edge factors, we have
* F step: factor √(2)/d multiplied by m or d, which is at most √(2)/d m (since d < m);
* R step: factor √(2)/d;
* N step: factor √(2)/d· q^2 (requiring a label from [q] × [q]);
* H step: factor √(2) q^2/d· q = √(2)/d· q^3.
We denote the number of F, N, H, R steps as f,n,h,r.
Our closed walks have length 2q, thus f+n+h+r = 2q.
Moreover, each new edge created by an F or N step must be closed by an R step (every edge must be traversed an even number of times), thus f+n = r and h = 2q - 2r.
Then, the number of valid “signatures”
σ ∈ {F,N,H,R}^{2q} with f,n,h,r many F,N,H,R steps can be bounded by
(2q choose r)·(2q choose n)·(2q choose h) ≤ (2q choose r)·(2q)^{n+2q-2r}
Finally, we have m choices to choose our initial vertex i_1.
Thus, we can bound <Ref> by
E‖M_β‖^q ≤ m (√2/d)^{2q} ∑_{r=0}^q ∑_{n=0}^r (2q choose r) (2q)^{n+2q-2r} · m^{r-n} · q^{2n} · q^{3(2q-2r)}
≤ m (2/d^2)^q ∑_{r=0}^q (2q choose r) m^r (2q^4)^{2q-2r} · ∑_{n=0}^∞ (2q^3/m)^n
≤ m (2/d^2)^q ∑_{r=0}^q (2q choose r) m^r (2q^4)^{2q-2r} · (1+o_m(1))
as long as q^3 ≤ o(m).
It is clear that if q^8 ≤ o(m), then the summand is maximized at r=q.
Thus, we can further bound the above by
m (2/d^2)^q · q · 2^{2q} m^q (1+o_m(1)) = mq (8m/d^2)^q (1+o_m(1))
Finally, choosing q = log^2 m and δ = O(1/log m), by Markov's inequality we have
P[ ‖M_β‖ ≥ (8m/d^2)(1+δ) ] = P[ ‖M_β‖^q ≥ (8m/d^2)^q (1+δ)^q ] ≤ (1+δ)^{-q} · mq
≤ o_m(1)
We have finally proved that ‖M_β‖ ≤ (8m/d^2)(1+o(1)) with probability 1-o(1).
As we can see from the analysis, the dominating term is when n = 0 and r = q, i.e., when there are only F and R steps, each q times.
This is the expected behavior: ignoring the √2/d that every step gets, each F step gets m or d while each N or H step only gets poly(q) = polylog(d).
We call this a gap (between F and N/H steps).
Due to the gaps from N and H steps, we can charge extra information to those steps in the encoding.
In this example, we charge the information needed to resolve future R step confusions to N and H steps.
While this increases the vertex factor for N and H steps, the gaps are large enough such that the total trace is still dominated by walks that use as many F steps as possible.
Correct norm
The correct norm bound is in fact ‖M_β‖ ≈ 2m/d^2, so our current upper bound is already tight up to a constant factor, and already better than black-box norm bounds from (e.g.) <cit.>, which lose log factors.
However, the techniques presented so far are not always tight up to an absolute constant.
In more complicated cases that arise in our analysis, the norm bound can be off by a factor that is on the order of d and m.
Thus, in the subsequent sections, we introduce additional new ideas to prove tight norm bounds.
§.§ Global bounds via a local analysis
Observe that <Ref> is a weighted sum of closed walks of length 2q.
To obtain an upper bound, the standard approach is to specify an efficient encoding scheme that uniquely identifies each closed walk, and then upper bound the total number of such encodings.
We begin by defining a step-labeling — a categorization of each step in the closed walk.
For each step throughout the walk, we assign it the following label,
* F (a fresh step): it uses a new labeled edge making the first appearance and leads to a destination not seen before;
* S (a surprise step): it uses a new labeled edge to arrive at a vertex previously visited in the walk;
* H (a high-mul step): it uses a labeled edge that appears before, and the edge is making a middle appearance (i.e., it will appear again in the subsequent walk);
* R (a return step): it uses a labeled edge that appears before, and the edge is making its last appearance.
Analogously, for any shape τ, we call a map L_τ : E(τ) → {F,R,S,H} a step-labeling of the block. The subscript τ is ignored when it is clear.
We note that the terms “fresh”, “high-mul” and “return” are adopted from the GOE matrix analysis in <cit.>.
Next, to obtain a final bound for <Ref>, we consider two factors for each step (which depend on the step-label):
* Vertex factor: a combinatorial factor that specifies the destination of the step;
* Edge factor: an analytical factor from the edge which accounts for the [χ_t(e)(G[e])^(e)] term in <Ref>.
For example, a vertex factor for an F step to a circle vertex can be d, an upper bound on the number of possible destinations.
One can think of vertex factors as the information needed for a decoder to complete a closed walk.
Essentially, the step-labeling and appropriate vertex factors should uniquely identify a closed walk, and combined with edge factors, we can obtain an upper bound for <Ref>.
We note that the approach stated above is a global encoding scheme.
One may proceed via a global analysis — carefully bounding the number of step-labelings allowed (e.g., using the fact that the F and R steps must form a Dyck word <cit.>), and then combining all vertex and edge factors to obtain a final bound.
However, to get tight norm bounds for complicated graph matrices (like M_α), the global analysis becomes unwieldy.
Local analysis
One of our main insights is to use a local analysis.
We now give a high-level overview of our strategy while deferring the specific details of our vertex/edge factor assignment scheme to subsequent sections.
Recall that a closed walk consists of “block-steps” described by the shape τ.
Thus, we treat each walk as a “block walk” and bound the contributions of a walk block by block.
This prompts us to bound the contribution of the walk at a given block-step to the final trace in <ref> by
(vertex factor of the block-step) · (edge factor of the block-step) ≤ B_q(τ)
where B_q(τ) is some desired upper bound that depends on the vertex/edge factor assignment scheme.
We define it formally in the following.
Fix q ∈ ℕ and a shape τ. For any vertex/edge factor assignment scheme, we call B_q(τ) a valid block-value function for τ of the given scheme if
E[tr((M_τ M_τ^T)^q)] ≤ (matrix dimension) · B_q(τ)^{2q}
and for each block-step 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i throughout the walk,
(vertex factor of 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i) · (edge factor of 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i) ≤ B_q(τ) .
We point out that the block-value function B should be considered as a function of both the shape τ and the length of the walk q (we will drop the subscript when it is clear throughout this work), and it also depends on the assignment scheme.
Thus, our task is to find a vertex/edge factor assignment scheme such that B_q(τ) is as small as possible.
Moreover, the matrix dimension, which is at most poly(d) in our case, is the factor that comes up at the start of the walk to specify the starting vertex, and can be ignored as it is ultimately a 1+o(1) factor once we take a long enough walk.
Given <Ref>, the norm bound follows immediately from Markov's inequality.
Let M_τ be a graph matrix with dimension poly(d), and let q ≥ Ω(log^2 d).
Suppose B_q(τ) is a valid block-value function,
Then, with probability 1-2^-q/log d,
‖M_τ‖ ≤ (1+o_d(1)) · B_q(τ)
We apply Markov's inequality: for any ε > 0,
P[ ‖M_τ‖ > (1+ε) B_q(τ) ] ≤ P[ tr((M_τ M_τ^T)^q) > (1+ε)^{2q} B_q(τ)^{2q} ]
≤ (1+ε)^{-2q} poly(d) ≤ e^{-ε q} poly(d)
since E[tr((M_τM_τ^T)^q)] ≤ poly(d) · B_q(τ)^{2q} by <Ref>.
Setting ε = 1/log d and q ≥ Ω(log^2 d), we have that the probability is at most 2^{-q/log d}.
Thus, we can conclude that ‖M_τ‖ ≤ (1+o_d(1))· B_q(τ) with probability 1 - 2^{-q/log d}.
The next proposition shows that we can easily obtain a valid B_q(τ) once we have an appropriate factor assignment scheme.
For any graph matrix M_τ and any valid factor assignment scheme,
B_q(τ) = ∑_{L: step-labelings for E(τ)} (vertex factor of L) · (edge factor of L)
is a valid block-value function for τ.
It is clear that the second requirement in <Ref> is satisfied.
For the first requirement, observe that the trace can be bounded by the matrix dimension (specifying the start of the walk) times
∑_{L_1,…, L_{2q}: step-labelings for E(τ)} ∏_{i=1}^{2q} (vertex factor of L_i) · (edge factor of L_i)
≤ ( ∑_{L: step-labelings for E(τ)} (vertex factor of L) · (edge factor of L) )^{2q}
With this set-up, the main task is then to find an appropriate vertex/edge factor assignment scheme and obtain a good upper bound on B_q(τ).
§.§ Vertex factor assignment scheme
We now proceed to bound the vertex factors for each step-label.
We note that in this section, “vertices” refer to “labeled vertices” in the walk (having labels in [m] or [d]; recall <Ref>).
First, we define the weight of a square (resp. circle) vertex to be m (resp. d), since we need an element in [m] (resp. [d]) to specify which vertex to go to in the walk.
We first show a “naive” vertex factor assignment scheme.
In the following scheme, we use a potential unforced return factor, denoted Pur, to specify the destination of any R step.
We will defer the specific details of Pur to <Ref>.
[frametitle = Vanilla vertex factor assignment scheme ]
* For each vertex i that first appears via an F step, a label in [weight(i)] is required;
* For each vertex i that appears beyond the first time:
* If it is arrived at via an R step, the destination may need to be specified, and this is captured by the Pur factor.
* If it is not arrived via an R step, then it must be an S or H step.
A vertex cost in 2q· D_V is sufficient to identify the destination, where we recall 2q is the length of our walk, and D_V the size upper bound of each block.
The first thing to check is that this scheme combined with an step-labeling uniquely identifies a closed walk (given the start of the walk).
This is immediate for F and R steps by definition.
For S and H steps, since the destination is visited before in the walk, 2q · D_V is sufficient as it is an upper bound on the number of vertices in the walk.
A potential complication with analyzing the above assignment scheme directly is that it exhibits a significant difference in the vertex factors.
For example, consider a vertex that appears only twice in the walk on a tree. Its first appearance requires a label in [n], while its subsequent appearance does not require any cost if it is reached using an R step because backtracking from a tree is fixed (since there is only one parent).
This disparity can result in a very loose upper bound for the trace when applying <Ref>; in fact, the norm bound for M_τ obtained in this manner is equivalent to using the naive row-sum bound.
Redistribution
One of our main technical insights is to split the factors such that both first and last appearance contributes a factor of comparable magnitude; we call this redistribution.
We first formally define “appearance” in a block-step to clarify our terminology,
Each labeled vertex appearance can be “first”, “middle” and “last”.
Moreover, each vertex on the block-step boundary (U_τ or V_τ) appears in both adjacent blocks.
For example, suppose a vertex first appears in the right-boundary of block i and last appears in the left-boundary of block j, then it will make middle appearances in the left-boundary of block i+1 and right-boundary of block j-1 as well.
We are now ready to introduce the following vertex-factor assignment scheme with redistribution that assigns vertex-factor to each vertex's appearance to handle the disparity.
[frametitle = Vertex factor assignment scheme with redistribution ]
* For each vertex i that makes its first appearance, assign a cost of √(weight(i));
* For any vertex's middle appearance, if it is not arrived at via an R step, assign a cost of 2q· D_V (where we recall 2q is the length of our walk, and D_V the size constraint of each block);
* For any vertex's middle appearance, if it is arrived at via an R step, its cost is captured by Pur;
* For each vertex i that makes its last appearance, assign a cost of √(weight(i)) that serves as a backpay.
Deducing vertex factor from local step-labeling
As presented, the vertex factor assignment scheme requires knowing which vertex is making a first/middle/last appearance. We further show that the vertex appearances, or more accurately, an upper bound on the vertex factors, can be deduced from a given step-labeling of the block. Fix the traversal direction from U to V:
[frametitle = Localized vertex factor assignment from step-labeling ]
* For any vertex v that is on the left-boundary U, it cannot be making the first appearance since it necessarily appears in the previous block;
* For any vertex v that is on the right-boundary V, it cannot be making the last appearance since it necessarily appears in the subsequent block;
* For any vertex v reached via some S/R/H step, it cannot be making its first appearance;
* For any vertex v that is incident to some F/S/H step, it cannot be making its last appearance since the edge necessarily appears again.
The first two points are due to <Ref>. The last point is because each labeled edge (i.e., Fourier character) must be traversed by an R step to close it.
§.§ Bounding edge-factors
To bound the contribution of the walks, we need to consider factors coming from the edges traversed by the walk.
Recall from <Ref> that each edge e in a closed walk P gets a factor E[χ_{t(e)}(G[e])^{mult_P(e)}], where t(e) is the Fourier index associated with the edge.
In our case, the Fourier characters are the scaled Hermite polynomials.
Recall that we assume that our vectors are sampled as v_i ∼ N(0, (1/d)I_d).
Thus, we define the polynomials {h_t}_{t∈ℕ} such that they are orthogonal and E_{x∼N(0,1/d)}[h_t(x)^2] = t! · d^{-t}.
Specifically,
* h_1(x) = x,
* h_2(x) = x^2 - 1/d.
We first state the following bound on the moments of h_t, which follows directly from standard bounds on the moments of Hermite polynomials:
[Moments of Hermite polynomials]
Let d ∈ ℕ. For any t ∈ ℕ and even k ∈ ℕ,
E_{x∼N(0,1/d)}[ h_t(x)^k ] ≤ (1/d^{kt/2}) (k-1)^{kt/2} (t!)^{k/2} ≤ (t!)^{k/2} (k/d)^{kt/2}
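A quick Monte Carlo sanity check (our own illustration; d and the sample size are arbitrary) of the scaled Hermite moments used above: E[h_1(x)^2] = 1/d, E[h_2(x)^2] = 2/d^2, and the fourth moment of h_2 sits well below the bound (t!)^{k/2}(k/d)^{kt/2}.

```python
# Monte Carlo check of the scaled Hermite moments for x ~ N(0, 1/d).
import numpy as np

rng = np.random.default_rng(3)
d, N = 50, 2_000_000
x = rng.normal(scale=1 / np.sqrt(d), size=N)

h1 = x
h2 = x ** 2 - 1 / d
print("E[h1^2]:", np.mean(h1 ** 2), "  expected 1/d     =", 1 / d)
print("E[h2^2]:", np.mean(h2 ** 2), "  expected 2/d^2   =", 2 / d ** 2)
print("E[h2^4]:", np.mean(h2 ** 4), "  bound (t=2, k=4) =", (2 ** 2) * (4 / d) ** 4)
```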
For now, we consider matrices that either contain only h_1 or only h_2 edges (the edge factors for graph matrices with “mixed” edges will be handled in <Ref>).
The following is our edge-factor assignment scheme to account for contributions from the Fourier characters.
[frametitle = Edge-factor assignment scheme ]
For an h_1 edge,
* F/S: assign a factor of 1/√(d) for its first appearance;
* H: assign a factor of 2q/√(d) for its middle appearance;
* R: assign a factor of 1/√(d) for its last appearance.
For an h_2 edge,
* F/S: assign a factor of √(2)/d for its first appearance (alternatively, we can view a single h_2 edge as two edge-copies of h_1 and assign each a factor of √(2)/√(d), which is a valid upper bound);
* H: assign a factor of 8q^2/d for its middle appearance;
* R: assign a factor of √(2)/d for its last appearance (alternatively, we can view a single h_2 edge as two edge-copies of h_1 and assign each a factor of √(2)/√(d) which is a valid upper bound).
The above scheme correctly accounts for the edge factors from h_1 and h_2 edges.
If an edge has multiplicity 2 then it must be traversed by one F/S step and one R step.
* If it is an h_1 edge, then the scheme assigns a factor 1/d, which equals E_{x∼N(0,1/d)}[h_1(x)^2].
* If it is an h_2 edge, then the scheme assigns a factor 2/d^2, which equals E_{x∼N(0,1/d)}[h_2(x)^2].
For an edge with multiplicity k > 2, it must be traversed by one F step (including S), one R step and k-2 H steps. Moreover, since k is even and 2q is the length of the walk, we have 4 ≤ k ≤ 2q.
* If it is an h_1 edge, then the scheme assigns a factor (1/d)· (2q/√d)^{k-2} ≥ d^{-k/2} (2q)^{k/2} ≥ (k/d)^{k/2}.
By <Ref>, it is an upper bound on E_{x∼N(0,1/d)}[h_1(x)^k].
* If it is an h_2 edge, then the scheme assigns a factor (2/d^2)· (8q^2/d)^{k-2} ≥ d^{-k} 2^{k/2} (2q)^k ≥ 2^{k/2} (k/d)^k.
By <Ref>, it is an upper bound on E_{x∼N(0,1/d)}[h_2(x)^k].
This shows that the edge factor assignment scheme above is correct.
§.§ Bounding return cost (Pur factors)
In our vertex factor assignment scheme described in <Ref>, we use a potential unforced return factor, denoted Pur, to specify the destination of any return (R) step.
Note that the term “unforced return” is adopted from <cit.> as well.
In this section, we complete the bound on the vertex factors by bounding the Pur factor.
For starters, we will define a potential function for each vertex at time t, which measures the number of returns R pushed out from the particular vertex by time t that may require a label in [2q · D_V]. Notice that a label in [2q · D_V] is sufficient for any destination vertex arrived at via an R step because the vertex has appeared before; however, this may be a loose bound.
We observe the following: a label in 2q· D_V may be spared if the vertex is incident to only one unclosed F/S edge; we call this a forced return.
Formally, we define a return step as unforced as follows.
We call a return (R) step an unforced return if the source vertex is incident to more than 1 (or 2 in the case of a square vertex) unclosed edge.
We now proceed to formalize the above two observations by introducing a potential function to help us bound the number of unforced returns from any given vertex throughout the walk. The number of unforced returns throughout the walk would then be immediately given once we sum over all vertices in the walk.
For any time t and vertex v, let Pur_t(v) be defined as the number of potential unforced returns from v throughout the walk until time t.
§.§.§ Pur bound for circle vertices
In our setting, each circle vertex pushes out at most 1 edge during the walk, analogous to the case of typical adjacency matrix. This serves as a starting point for our bound for circle vertices.
For any time t, suppose the walker is currently at a circle vertex v, then
Pur_t(v) ≤ #(R steps closed from v) + #(unclosed edges incident to v at time t) - 1
≤ 2· s_t(v) + h_t(v)
where we define the following counter functions:
* s_t(v) is the number of S steps arriving at v by time t;
* h_t(v) is the number of H steps arriving at v by time t.
We first prove the first inequality.
The R steps closed from v may all be unforced returns, and the unclosed edges incident to v may be closed by unforced returns in the future.
Note that we have a -1 in the above bound because for each vertex we may by default assume the return is using a particular edge, hence at each time we know there is an edge presumed-to-be forced.
We prove the second inequality by induction.
Define P_t(v) ≔ #(R steps closed from v) + #(unclosed edges incident to v at time t) - 1 for convenience.
At the time when v is first created by an F step, P_t(v) = 0 (1 open edge minus 1) and s_t(v) = h_t(v) = 0.
At time t, suppose the last time v was visited was at time t' < t, and suppose that the inequality holds true for t'.
Note that at time t'+1, P_{t'+1}(v) = P_{t'}(v)+1 if a new edge was created by an F or S step leaving v, otherwise P_{t'+1}(v) = P_{t'}(v) (an R step adds 1 to the number of R steps closed from v, but decreases the number of unclosed edges by 1).
On the other hand, s_t'(v) and h_t'(v) remain the same (we don't count out-going steps for s_t(v), h_t(v)).
When we reach v at time t, we case on the type of steps:
* Arriving by an R step: the edge is now closed, but the R step was not from v. So P_t(v) = P_t'+1(v) - 1 ≤ P_t'(v), while s_t(v) = s_t'(v) and h_t(v) = h_t'(v).
* Arriving by an S step: the edge is new, so P_t(v) = P_t'+1(v)+1 ≤ P_t'(v) + 2, and we have s_t(v) = s_t'(v) +1.
* Arriving by an H step: P_t(v) = P_t'+1(v) ≤ P_t'(v) + 1, and h_t(v) = h_t'(v)+1.
In all three cases, assuming P_t'(v) ≤ 2 · s_t'(v) + h_t'(v), we have P_t(v) ≤ 2· s_t(v) + h_t(v), completing the induction.
§.§.§ Pur bound for square vertices
The argument of <Ref> does not apply well for vertices incident to multiple edges in a single step. In particular, this may happen for square vertices in M_α, as each is arrived at via 2 edges and each pushes out 2 edges (recall <Ref>).
This is not an issue for M_β, but we will treat square vertices in M_β the same way to unify the analysis; in the context of Pur for square vertices, one may think of M_β as collapsing the two circle vertices in M_α.
To handle this issue, we observe that it suffices for us to pay an extra cost of [2] for each square vertex, which would allow us to further presume 2 edges being forced. We then generalize the prior argument to capture this change.
For any time t, suppose the walker is currently at a square vertex v, then
Pur_t(v) ≤ #(R steps closed from v) + #(unclosed edges incident to v at time t) - 2
≤ 2(s_t(v) + h_t(v))
where s_t(v) and h_t(v) are the number of S and H steps arriving at v by time t, respectively.
We prove this by induction. Note that this is immediate for the base case when v first appears since a square vertex is incident to 2 edges.
Define P_t(v) #(R steps closed from v) + #(unclosed edges incident to v at time t) - 2 for convenience.
Suppose the inequality is true at time t', and assume vertex v appears again at time t. The departure at time t'+1 from v may open up at most 2 edges, hence P_t'+1(v) ≤ P_t'(v) + 2.
When we reach v at time t (via 2 edges), we case on the type of steps:
* Arriving by two R steps: the two edges closed by the R steps are not closed from v.
So P_t(v) = P_{t'+1}(v) - 2 ≤ P_{t'}(v), while s_t(v) = s_{t'}(v) and h_t(v) = h_{t'}(v).
* Arriving by one S/H and one R step: in this case, P_t(v) = P_t'+1(v) ≤ P_t'(v)+2 and s_t(v) + h_t(v) = s_t'(v) + h_t'(v) + 1.
* Arriving by two S/H steps: in this case, P_t(v) = P_t'+1(v) + 2 ≤ P_t'(v) + 4, whereas s_t(v) + h_t(v) = s_t'(v) + h_t'(v) + 2.
In all three cases, we have P_t(v) ≤ 2(s_t(v) + h_t(v)), completing the induction.
For each surprise/high-mul step, it suffices for us to assign 2 Pur factors, which is a cost of (2q· D_V)^2, so that each Pur factor throughout the walk is assigned.
Moreover, for M_α, we pay a cost of 2 for any R step leaving a square vertex so that we can presume 2 edges being forced in <Ref>.
§.§ Wrapping up with examples
Recall from <Ref> that for a graph matrix of shape τ,
B_q(τ) = ∑_{L: step-labelings for E(τ)} (vertex factor of L) · (edge factor of L)
is a valid block-value function for τ (<Ref>).
Moreover, by <Ref>, we can take q ≥ d^ε and conclude that with probability 1- 2^{-d^ε},
‖M_τ‖ ≤ (1+o(1)) · B_q(τ)
For each given shape, it suffices for us to bound the block-value for each step-labeling. We demonstrate how this may be readily done given the above bounds.
§.§.§ Warm-up: tight bound for GOE
As a warm-up, we first see how the above framework allows us to readily deduce a tight norm bound for a GOE matrix G, i.e., a d×d symmetric matrix with each (off-diagonal) entry sampled from N(0,1/d).
It is well-known that the correct norm of G is 2+o_d(1) <cit.>.
<Ref> shows the shape τ associated with G, which simply consists of one edge.
We now proceed to bound <Ref>.
Edge factor According to our edge factor scheme described in <Ref> (for h_1 edges), an F/R/S step-label gets a factor of 1/√(d) while an H step-label gets 2q/√(d).
Pur factor
By <Ref>, there is no Pur factor for F/R, while S and H get 2 and 1 Pur factors respectively.
Vertex factor
The weight of a circle vertex is d, thus any vertex making a first or last appearance gets a factor of √(d).
We now case on the step-label and apply the vertex factor assignment scheme described in <Ref>.
* F: the vertex in U_τ must be making a middle appearance; it is not first due to <Ref>, and it is not last as otherwise the edge appears only once throughout the walk.
The vertex in V_τ is making a first appearance, so it gets a factor of √(d);
* R: the vertex in V_τ is making a middle appearance, since it is incident to an R edge (hence not first appearance), and it is on the boundary hence bound to appear again the next block.
The vertex in U_τ may be making its last appearance, so it gets a factor of √(d);
* S: the vertex in U_τ is making a middle appearance (same as F), and the vertex in V_τ is making a middle appearance since it cannot be first and must appear again.
In addition, it gets 2 Pur factors, which gives a bound of (2q· D_V)^2;
* H: analogous to the above, both vertices are making middle appearances, and it gets 1 Pur factor, giving a bound of 2q· D_V.
Combining the vertex and edge factors, we can bound <Ref>:
B_q(τ) =
√d·(1/√d) + √d·(1/√d) + (2q· D_V)^2·(1/√d) + (2q· D_V)·(2q/√d) ≤ 2 + o_d(1)
since q and D_V are both d^{o(1)}.
Therefore, by <Ref>, we can conclude that ‖G‖ ≤ 2 + o_d(1) with high probability, which is the correct bound.
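As a numerical counterpart to the warm-up (our own check; the chosen dimensions are arbitrary), one can verify that the spectral norm of such a matrix indeed concentrates around 2:

```python
# Empirical check: a d x d symmetric matrix with off-diagonal entries ~ N(0, 1/d)
# and zero diagonal has spectral norm close to 2.
import numpy as np

rng = np.random.default_rng(4)
for d in (100, 400, 1600):
    W = rng.normal(size=(d, d)) / np.sqrt(d)
    G = np.triu(W, 1) + np.triu(W, 1).T         # symmetric, zero diagonal
    norm = np.abs(np.linalg.eigvalsh(G)).max()
    print(f"d = {d:4d}: ||G|| = {norm:.3f}")
```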
§.§.§ Bound for M_β
We now prove a bound on ‖M_β‖.
<Ref> shows the associated shape.
Let m ≥ω(d) and q = Ω(log^2 d).
Then with probability 1- 2^-q/log d,
‖M_β‖ ≤ (1+o_d(1)) · 2m/d^2 .
Let β be the shape associated with the matrix M_β.
We bound B_q(β) via our vertex/edge factor assignment schemes combined with Pur factors.
Recall that each square vertex has weight m and each circle vertex has weight d.
We case on the step-labels of the two edges,
* F→ F: we have an F edge leading to a square vertex and circle vertex each.
The first square vertex must be making a middle appearance (<Ref>), while the circle and the other square vertex make first appearances, giving a vertex factor of √(m)·√(d).
Furthermore, there is no Pur factor incurred.
Finally, the edge factor is 2/d^2.
* R→ R: we have both R edges departing from a square and circle vertex.
There is no vertex making first appearance and the square vertex on the right must be making middle appearance.
The other two vertices may be making last appearances. Furthermore, there is no Pur factor incurred, while we assume each R edge from a square vertex can be identified at a cost of [2] modulo the ones with an assigned Pur factor, giving a total vertex factor of 2√m·√d.
Finally, the edge factor is 2/d^2.
* R→ F:
the circle vertex must be making a middle appearance, since the F edge must be closed later.
The square vertex on the left may be making a last appearance, and the square vertex on the right must be making a first appearance.
This gives a vertex factor of √(m)·√(m) = m.
There is no Pur factor, and the edge factor is 2/d^2.
* F→ R: this cannot happen.
The F step means that the circle vertex is making its first appearance, but the R step means that it must have appeared before.
* For any other step-labelings involving S and H step-labels, both vertices of the S/H edge must be making middle appearances.
Thus, the vertex factor is at most √(m).
By <Ref>, the Pur factor is at most (2q· D_V)^4.
Finally, the edge factor is at most O(q^4/d^2).
Summing over all possible step-labelings, by <Ref> we get
B_q(β) ≤ √(md)·(2/d^2) + 2√(md)·(2/d^2) + m·(2/d^2) + q^{O(1)}/d^2 ≤ (2m/d^2)(1 + o_d(1))
provided that m ≫ d.
Therefore, by <Ref>, we have that with probability 1- 2^{-q/log d}, ‖M_β‖ ≤ (1+o_d(1)) · 2m/d^2.
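The bound can also be eyeballed numerically. The sketch below (ours; the sizes are arbitrary, and the finite-size heuristic 2(√m+√d)^2/d^2 is a standard random-matrix estimate rather than a claim from this paper) builds M_β directly from its definition and compares its spectral norm with 2m/d^2.

```python
# Numerical check of ||M_beta|| against 2m/d^2 in the regime m >> d.
import numpy as np

rng = np.random.default_rng(5)
d, m = 30, 2700                                 # m = 3 d^2, and m >> d
V = rng.normal(size=(m, d)) / np.sqrt(d)        # rows v_i ~ N(0, I_d / d)

W = V ** 2 - 1.0 / d                            # W[i, a] = v_i[a]^2 - 1/d
M_beta = W @ W.T                                # off-diagonal entries match M_beta
np.fill_diagonal(M_beta, 0.0)                   # M_beta is zero on the diagonal

norm = np.abs(np.linalg.eigvalsh(M_beta)).max()
print("||M_beta||              =", norm)
print("asymptotic value 2m/d^2 =", 2 * m / d ** 2)
print("finite-size heuristic   =", 2 * (np.sqrt(m) + np.sqrt(d)) ** 2 / d ** 2)
```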
§.§.§ Bound for M_α
We now prove a bound on ‖M_α‖.
<Ref> shows the associated shape.
Let m ≥ω(d) and q = Ω(log^2 d).
Then with probability 1- 2^-q/log d,
‖M_α‖ ≤ (1+o_d(1)) · (1/d^2) (3d√m + 2m) .
Let α be the shape associated with the matrix M_α.
Similar to the proof of <Ref>, we bound B_q(α) by casing on the step-labelings.
There are two paths from U_α to V_α, 4 edges in total.
* In the case of all F or all R, one of the square vertices must be making a middle appearance, hence we get a vertex factor of √m· (√d)^2 = d√m. There is no Pur factor, and the edge factor is (1/√d)^4 = 1/d^2.
For the case of all R, by <Ref> we pick up an additional factor of [2], since we assume each R edge from a square vertex can be identified at a cost of [2] modulo those with an assigned Pur factor.
* If both paths are R → F, then both circle vertices are making middle appearances, hence we get a vertex factor of √(m)·√(m) = m.
There is no Pur factor, while we pick up a factor of [2] for the return from the square vertex. Finally, the edge factor is (1/√d)^4 = 1/d^2.
* Analogous to M_β, an F → R path cannot happen.
* For any other step-labelings involving S and H step-labels, there must be at least one square and one circle vertex making middle appearances, so the vertex factor is at most √(m)·√(d).
The Pur factor is (2q · D_V)^{O(1)} = d^{o(1)}, and the edge factor is (2q/√d)^4.
Summing over all possible step-labelings, by <Ref> we get
B_q(α) ≤ d√m·(1/d^2) + 2d√m·(1/d^2) + 2m·(1/d^2) + √(md)·q^{O(1)}/d^2 ≤ (1+ o_d(1)) · (1/d^2) (3d√m + 2m)
Therefore, by <Ref>, we have that with probability 1- 2^{-q/log d}, ‖M_α‖ ≤ (1+o_d(1)) · (1/d^2) (3d√m + 2m).
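A similar sanity check for M_α (again our own illustration with arbitrary sizes), using the identity M_α[i,j] = ⟨v_i,v_j⟩^2 - ∑_a v_i[a]^2 v_j[a]^2 for i ≠ j, compared against the bound from the lemma above:

```python
# Numerical check of ||M_alpha|| against the bound (3 d sqrt(m) + 2m)/d^2.
import numpy as np

rng = np.random.default_rng(6)
d, m = 30, 2700
V = rng.normal(size=(m, d)) / np.sqrt(d)

M_alpha = (V @ V.T) ** 2 - (V ** 2) @ (V ** 2).T    # = sum_{a != b} v_i[a]v_i[b]v_j[a]v_j[b]
np.fill_diagonal(M_alpha, 0.0)

norm = np.abs(np.linalg.eigvalsh(M_alpha)).max()
print("||M_alpha||                      =", norm)
print("lemma bound (3 d sqrt(m)+2m)/d^2 =", (3 * d * np.sqrt(m) + 2 * m) / d ** 2)
```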
Suppose we are traversing from left (U) to right (V); we can
build a set S_{U→V} as follows:
* Include the circle vertex from U into S if it is not making its last appearance;
* Include any vertex making a middle appearance into S;
* Include the circle vertex from V into S if it is not making its first appearance.
We observe that S by construction is a separator between U and V, as otherwise we would have some edge that appears only once. We defer the full proof to <ref>, and for now we summarize the vertex factors once the separator is given.
* Traversing from U_β to V_β, any edge traversed before we enter the separator must be appearing for the last time, i.e., an R edge, hence it does not come with any vertex factor; on the other hand, traversing from V_β to U_β, any such edge is appearing for the first time, and it comes with a full vertex factor for each vertex an F edge leads to;
* Traversing from U_β to V_β, any edge traversed after we exit the separator is an F edge that leads to a vertex appearing for the first time, hence a full vertex factor depending on its type; in the reverse traversal direction, any such edge is appearing for the last time as an R edge, hence no vertex factor;
* Therefore, for each vertex outside the separator, we pick up a factor of its weight in exactly one direction, while the other direction does not contribute an extra vertex factor. Taking the average gives us the √(wt(i)) for each vertex i outside the separator.
* Each edge outside the separator is either F or R, and each gets assigned a factor of √(2)/d in either traversal direction;
* For each vertex on the left boundary of the separator S_L, it is arrived at using an R edge when we traverse from U_β to V_β, hence no vertex factor is needed; for vertices in the interior of the separator S∖ S_L, a label in q is sufficient as we know each such vertex is making a middle appearance; moreover, since each vertex in our example is connected to every other, it suffices for us to bound |S∖ S_L| ≤ |E(S)|;
* For each vertex on the right boundary of the separator S_R, it is arrived at using an R edge when we traverse from V_β to U_β, hence no vertex factor is needed; for vertices in the interior of the separator S∖ S_R, a label in q is sufficient as we know each such vertex is making a middle appearance; moreover, since each vertex in our example is connected to every other, it suffices for us to bound |S∖ S_R| ≤ |E(S)|;
* Additionally, any edge inside the separator in E(S) may be an H-edge making a middle appearance, or a surprise visit making its first/last appearance; any such edge may contribute an additional vertex factor via the Pur factor accounted for earlier, and each such edge gets assigned an edge value of at most q/d via our distribution scheme.
Therefore, for any block B_i with separator S, we can bound its "assigned" contribution to the final trace via
B(B_i(S)) ≤ ∏_{i∈ V(B_i)∖ S} √(wt(i)) · q^{O(1)·|E(S)|}/d · (√(2)/d)^{|E(B_i)∖ E(S)|}
Summing over all possible separators gives us a bound for block B_i,
B(B_i) ≤ ∑_{S: separator} ∏_{i∈ V(B_i)∖ S} √(wt(i)) · q^{O(1)·|E(S)|} (√(2)/d)^{|E(B_i)∖ E(S)|}
where we point out that it is not hard to verify that the potentially dominant separators are obtained by simply taking one vertex in our example:
* Taking a square vertex: we have 2 choices of such separators (either the one on the left or the one on the right), and the value for the given separator is given by
(2/d^2) ·√(md)
where we emphasize that the dependence on q vanishes, as there is no separator edge;
* Taking a circle vertex: we have only one choice (the middle circle vertex), and the value is given by
(2/d^2) ·√(m^2)
* Any other separator can be obtained from the above by including additional vertices into the separator, and each such operation gives a multiplicative change of at most O(q^{O(1)}/√(d)); hence any such term has value at most o_d(1) times the above values;
* Summing over the above choices yields a block value of
B(B) = (1+o_d(1)) · (2·(2/d^2) ·√(md) + (2/d^2) ·√(m^2))
≤ (1+o_d(1)) · 2m/d^2
which can be shown to be tight up to o_d(1) dependence.
For starters, assign vertex factors for each fixed traversal direction as follows:
* Each F edge leads to a new vertex, and hence a label in [m] or in [d] is needed depending on the vertex-type;
* Each R edge has its destination fixed modulo the unforced returns settled by Pur factors;
* Each S/H edge leads to a visited vertex, and may incur O(1) Pur factors; settling the potentially assigned Pur confusion at the current block, it suffices for us to assign a cost of q^{O(1)}.
For the edge-value, we adopt the following edge-factor assignment scheme,
* Assign a factor of √(2)/d for F/R/S edge;
* Assign a factor of at most √(2) q^2/d for H edge.
We are now ready to bound the block-value function for M_β: since each F edge becomes an R edge and vice versa once we consider a reversed traversal, each edge labeled F/R receives a vertex factor in one traversal direction only and does not contribute a vertex cost in the other. That said, for a fixed traversal direction, each F edge to i contributes a factor √(wt(i)), and each R edge from j contributes a factor √(wt(j)) (since this would be an F edge leading to j in the reverse direction).
Therefore, we sum over all possible edge-labelings with the traversal direction fixed from left to right; each value above is obtained respectively as follows:
* F→ F, R→ R: for both labelings, in one of the two traversal directions we have an F edge leading into a square and a circle vertex, while in the other direction both vertices are arrived at using R edges. Moreover, neither traversal incurs a Pur factor, while the edge-value stays invariant, so we pick up 2/d^2 in either direction; therefore we get a factor of 2√(md)/d^2 after averaging over the two traversal directions for each labeling;
* F→ R: by local injectivity, we can deduce the F edge is a surprise visit, and it contributes a vertex cost of q^{O(1)} (counting its contribution to the Pur factor), and we pick up a value of q^{O(1)}/d^2 in either direction (when combined with the edge-value);
* R→ F: in each traversal direction, we have an F edge leading to a new square vertex, and a circle vertex reached using R edge, that is a factor of 2√(m^2)/d^2 after averaging;
* For labelings involving H, the block-value calculation is analogous to the F→ R labeling; each has value at most o_d(1) times the factor from R→ F, and note that there are at most O(1) such labelings.
* Summing over all possible labelings gives us a block-value of
B(M_β) ≤ (1+o_d(1)) 2√(m^2)/d^2 +2·2√(md)/d^2≤ (1+o_d(1)) 2√(m^2)/d^2
where we use m=Ω(d^2) for the second inequality.
§.§ Block value bound via vertex separators
For each block B through the walk, fixing the traversal direction from U_B to V_B, consider the following set S:
* We include a vertex into the separator S if it is making its middle appearance;
* We include a vertex from U_B into the separator if it is not making its last appearance;
* We include a vertex from V_B into the separator if it is not making its first appearance;
* We include any vertex from U_B ∩ V_B into the separator.
S is a separator for each block.
Take a path that is not blocked by S; its vertex in U_B must be making its last appearance, while its vertex in V_B is making its first appearance. Moreover, since any vertex in between is making either its first or its last appearance, there must be some edge with one endpoint making its first appearance and the other endpoint making its last appearance, i.e., this edge is new and does not appear in any future block. This violates the evenness of our walk.
For a given separator, we define the left (right) separator S_L (S_R) to be the set of vertices in S reachable from U_B (V_B) without passing through any other vertex in S.
Accounting block value
For each block B, we start by fixing the traversal direction from U to V; the vertex factors are accounted for as follows:
* For starters, we will presume any R edge has its destination fixed modulo the Pur cost throughout the walk: to handle the Pur factor, we assign the Pur factors to the blocks such that each block B_i is assigned a factor Pur(B_i), the Pur factor incurred by the walk on block B_i;
* Vertices reachable from U_B without passing through S (including those on the boundary) are either already identified from the previous block in U_B, or reached using an R edge, hence no vertex cost is needed;
* Vertices reachable from V_B without passing through S (and which are not in S themselves) are making their first appearance, hence a cost of [d] (or [m]) is sufficient;
* Floating components: for components not reachable from S and also not reachable from U_B ∪ V_B, all vertices are making their first or last appearances; in this case, a label in [n] per new vertex together with a label in 2|V(B)|q for one vertex is sufficient, and any other vertex can be reached using an R edge for free.
* Each vertex on the boundary of the separator S_L is reached using an R edge, hence no vertex factor; each vertex in the interior of the separator S∖ S_L is reached using an H or S edge, hence a cost of at most q^{O(1)} per edge is sufficient (assuming each H and S edge gets assigned O(1) Pur factors);
* Each component of the separator S not reachable from S_L ∪ S_R contributes a label in 2|V(B)|q.
On the other hand, if we traverse from V to U for the same block,
* Vertices reachable from V_B without passing through S are either already identified from the previous block in V_B, or reached using an R edge, hence no vertex cost is needed;
* Vertices reachable from U_B without passing through S (and which are not in S themselves) are making their first appearance, hence a cost of [d] (or [m]) is sufficient;
* For components not reachable from S and also not reachable from U_B ∪ V_B, all vertices are making their first or last appearances; in this case, a label in [n] per new vertex together with a label in 2|V(B)|q for one vertex is sufficient, and any other vertex can be reached using an R edge for free.
* Each vertex on the boundary of the separator S_R is reached using an R edge, and any vertex in the interior of the separator S∖ S_R is reached using an H or S edge, hence a cost of at most q^{O(1)} per edge is sufficient (assuming each H and S edge gets assigned O(1) Pur factors);
* Each component of the separator S not reachable from S_L ∪ S_R contributes a label in 2|V(B)|q.
We are now ready to summarize the block value.
* Each vertex outside the separator S contributes a factor of its weight (either d or m depending on its vertex-type) in exactly one traversal direction, hence the factor of √(wt(i)) for each vertex i∈ V(B)∖ S;
* Each vertex outside the separator that contributes a factor of its weight in both directions must be outside U_B ∪ V_B and make its appearance in exactly one block; that said, each incident edge must be of even multiplicity (including 0), and any such vertex contributes an extra factor of √(wt(i)) (we call any such vertex isolated);
* Each floating component not connected to S receives an extra floating overhead of 2|V(B)|q for the R return in one direction; a floating component connected to S receives a label in 2|V(B)|q in both directions, and it suffices for us to assign a 2|V(B)|q factor in both directions regardless of connectivity to S;
* Each edge outside the separator is an F/R edge and gets assigned a factor of val(F,R) depending on the edge-type, and each edge inside the separator is either an S or an H edge, for which it suffices to assign a factor of val(H) provided val(H) ≥ val(F,R).
Therefore, for each block B with a given separator S, we can assign vertex factors and edge factors for the block such that
B(B(S)) ≤ Pur(B) · ∏_{i∈ V(B)∖ S} √(wt(i))^{1+1_iso(i)} · ∏_{e∈ E(S)} q^{O(1)} · (2|V(B)|q)^{2·#(floating components)}
·∏_{e ∈ E(B)∖ E(S)} val(F,R) ·∏_{e∈ E(S)} val(H)
where Pur(B) is the potential-unforced-return factor incurred by the walk on this current block.
§ MATRIX DECOMPOSITION OF R
In this section, we work towards analyzing and proving <Ref>.
Recall our candidate matrix Λ = I_d - R ∈ ℝ^{d × d} from <Ref>, where
R = ∑_{i=1}^m w_i v_iv_i^T = ∑_{i=1}^m ( M^{-1}η)[i] · v_iv_i^T
See <Ref> for a reminder of the definitions of M ∈^m× m and η∈^m.
To analyze R, we begin by obtaining an explicit expression for M^{-1} using the Woodbury matrix identity (<Ref>), as discussed in <Ref>.
Recall that we write M = A + UCU^T where A is positive definite with high probability by <Ref>, and U = 1/√(d)[ 1_m η ]∈^m × 2 and C = [ 1 1; 1 0 ].
Restating <Ref>, the scalars r,s,u ∈ ℝ are defined as follows,
[ r s; s u ] ≜ C^{-1} + U^T A^{-1} U =
[ 1_m^T A^{-1} 1_m/d , 1 + η^T A^{-1} 1_m/d ; 1+ η^T A^{-1} 1_m/d , -1 + η^T A^{-1}η/d ]
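Since C = [ 1 1; 1 0 ] has determinant -1 and C^{-1} = [ 0 1; 1 -1 ], the constant shifts in the off-diagonal and bottom-right entries above come directly from C^{-1}. This can be spot-checked numerically; the sketch below is illustrative only, with a random positive definite stand-in for A and a random stand-in for η, and assumes numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, d = 30, 10
B = rng.standard_normal((m, m))
A = B @ B.T + m * np.eye(m)            # random positive definite stand-in for A
eta = rng.standard_normal(m)           # random stand-in for eta
ones = np.ones(m)
U = np.column_stack([ones, eta]) / np.sqrt(d)
C = np.array([[1.0, 1.0], [1.0, 0.0]])

Ainv = np.linalg.inv(A)
W = np.linalg.inv(C) + U.T @ Ainv @ U  # the 2x2 matrix C^{-1} + U^T A^{-1} U
r = ones @ Ainv @ ones / d
s = 1 + eta @ Ainv @ ones / d
u = -1 + eta @ Ainv @ eta / d
print(np.max(np.abs(W - np.array([[r, s], [s, u]]))))   # ~1e-15
\end{verbatim}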
Our next step is to show the following expansion of M^-1η.
Let r,s,u ∈ ℝ be defined as in <Ref>.
Then,
M^{-1}η = (r+s)/(s^2-ru) · A^{-1}η - (u+s)/(s^2-ru) · A^{-1} 1_m
The inverse of <Ref> is as follows,
(C^{-1} + U^T A^{-1} U)^{-1} = [ r s; s u ]^{-1}
= 1/(ru-s^2) · [ u -s; -s r ]
Then, applying the Woodbury matrix identity (<Ref>), we have
M^{-1} = A^{-1} - 1/(ru-s^2) · A^{-1} U [ u -s; -s r ] U^T A^{-1}
= A^{-1} + 1/(s^2-ru) · A^{-1}( u ·(1_m 1_m^T)/d - s ·(η 1_m^T + 1_m η^T)/d + r ·(ηη^T)/d) A^{-1}
Next, using the above, we have
M^{-1}η =
A^{-1}η + 1/(s^2-ru) · (
( u·(1_m^T A^{-1}η)/d - s ·(η^T A^{-1}η)/d ) · A^{-1} 1_m
+ ( -s ·(1_m^T A^{-1}η)/d + r ·(η^T A^{-1}η)/d ) · A^{-1}η )
Plugging in the definition of r,s,u in <Ref>, we get
M^{-1}η = A^{-1}η + 1/(s^2-ru) · (u (s-1)-s(u+1)) · A^{-1}1_m + 1/(s^2-ru) · (-s (s-1) + r(u+1)) · A^{-1}η
= A^{-1}η - (u+s)/(s^2-ru) · A^{-1}1_m + (-s^2 + ru + r+s)/(s^2-ru) · A^{-1}η
= (r+s)/(s^2-ru) · A^{-1}η - (u+s)/(s^2-ru) · A^{-1} 1_m
finishing the proof.
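The identity just derived is easy to spot-check numerically; the following sketch is illustrative only (A and η are random stand-ins, and numpy is assumed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
m, d = 40, 7
B = rng.standard_normal((m, m))
A = B @ B.T + m * np.eye(m)                    # positive definite stand-in for A
eta = rng.standard_normal(m)                   # stand-in for eta
ones = np.ones(m)
U = np.column_stack([ones, eta]) / np.sqrt(d)
C = np.array([[1.0, 1.0], [1.0, 0.0]])
M = A + U @ C @ U.T

Ainv = np.linalg.inv(A)
r = ones @ Ainv @ ones / d
s = 1 + eta @ Ainv @ ones / d
u = -1 + eta @ Ainv @ eta / d

lhs = np.linalg.solve(M, eta)                  # M^{-1} eta
rhs = ((r + s) * (Ainv @ eta) - (u + s) * (Ainv @ ones)) / (s**2 - r * u)
print(np.max(np.abs(lhs - rhs)))               # ~1e-15
\end{verbatim}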
§.§ Inverse of A: Neumann series and truncation
In light of <Ref>, we proceed to analyze A^-1.
Recall from <Ref> that A = I_m + M_α + M_β + M_D + 1/dI_m.
A useful tool for obtaining inverses is the Neumann series (a.k.a. the matrix Taylor expansion of 1/(1-x)), which allows us to write
(I-T)^{-1} = ∑_{k=0}^∞ T^k
for ‖T‖ < 1.
In our case, let T ≜ -(M_α + M_β + M_D + 1/d · I_m); then ‖T‖ < 1 is guaranteed by <Ref>.
Thus, we can write
A^-1 = ∑_k=0^∞ T^k = ∑_k=0^∞ (-1)^k ∑_(Q_1,…,Q_k) ∈{M_α, M_β, M_D, 1/dI_m}^k Q_1 Q_2 ⋯ Q_k
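As an illustrative aside (not part of the proof), the Neumann expansion and the effect of truncating it can be checked numerically on a random contraction T with ‖T‖ = 0.3; truncating after K terms leaves an error of at most ‖T‖^{K+1}/(1-‖T‖). A minimal sketch, assuming numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, K = 50, 24
T = rng.standard_normal((n, n))
T *= 0.3 / np.linalg.norm(T, 2)                # rescale so that ||T|| = 0.3 < 1

partial = np.zeros((n, n))
term = np.eye(n)
for _ in range(K + 1):                         # sum_{k=0}^{K} T^k
    partial += term
    term = term @ T
err = np.linalg.norm(np.linalg.inv(np.eye(n) - T) - partial, 2)
print(err, 0.3 ** (K + 1) / (1 - 0.3))         # error sits below the geometric tail
\end{verbatim}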
With the norm bounds from <Ref>, we can truncate the series, i.e., capping the number of occurrences of M_α, M_β, M_D and 1/dI_m by certain thresholds, such that the error is small.
We define thresholds τ_1 = τ_2 = O(log d), τ_3 = 3, and τ_4 = 1,
and define the truncation of A^-1 as
T_0 ≜ ∑_{k_1,k_2,k_3,k_4 ≥ 0 : k_i ≤ τ_i ∀ i} (-1)^{k_1+⋯+k_4} ∑_{(Q_1,…,Q_k) ∈ {M_α, M_β, M_D, 1/d I_m}^k : the i-th matrix occurs k_i times} Q_1 Q_2 ⋯ Q_k ,
where k = k_1 + k_2 + k_3 + k_4.
In the next lemma, we upper bound the truncation error.
Suppose m ≤ cd^2 for a small enough constant c.
Let T_0 be the truncated series defined in <Ref> with thresholds τ_1=τ_2 = O(log d), τ_3 = 3 and τ_4 = 1.
Then, with probability 1-o_d(1), the truncation error E = A^{-1} - T_0 satisfies ‖E‖ ≤ O(log d/d)^2.
From <Ref>, we know that ‖M_α‖, ‖M_β‖ ≤ 0.1, ‖M_D‖ ≤ O(√(log d/d)), and ‖1/d I_m‖ = 1/d with high probability.
We can bound the contribution of the terms with more than τ_1 occurrences of M_α in the truncation error by
∑_{i=τ_1+1}^∞ ∑_{j=0}^∞ \binom{i+j}{i} · ‖M_α‖^i · (‖M_β‖ + ‖M_D‖ + 1/d)^j
≤ ∑_{i=τ_1+1}^∞ ∑_{j=0}^∞ 2^{i+j} · ‖M_α‖^i · (0.1+ o(1))^j
≤ (2‖M_α‖)^{τ_1+1} · O(1)
Similarly for M_β, M_D and 1/dI_m.
Therefore, we can bound the total truncation error:
‖E‖ ≤ ( (2‖M_α‖)^{τ_1+1} + (2‖M_β‖)^{τ_2+1} + (2‖M_D‖)^{τ_3+1} + (2/d)^{τ_4+1} ) · O(1)
≤ O(log d/d)^2
by our choice of thresholds τ_1,τ_2,τ_3,τ_4.
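The geometric-tail estimate used above can be spot-checked numerically with stand-in values for the norms and the threshold; the factor 2 below stands in for the unspecified O(1) constant, and the infinite sums are truncated far beyond where the terms matter. A minimal sketch in Python:
\begin{verbatim}
from math import comb

a = b = 0.1      # stand-ins for ||M_alpha|| and ||M_beta|| + ||M_D|| + 1/d
tau = 6          # stand-in threshold
tail = sum(comb(i + j, i) * a**i * b**j
           for i in range(tau + 1, 80) for j in range(200))
print(tail, 2 * (2 * a) ** (tau + 1))   # the tail sits below the claimed bound
\end{verbatim}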
§.§ Decomposition of R via the truncated A^{-1}
We shift our attention back to R = ∑_{i=1}^m w_i v_i v_i^T.
Using the expansion of M^{-1}η in <Ref> (<Ref>) and the truncation of A^{-1}, we decompose R as follows.
Suppose m ≤ cd^2 for a small enough constant c.
Let T_0 be the truncated series of A^-1 defined in <Ref>. Then,
R = R_1 + R_2 + E_R
where
R_1 ≜ (r+s)/(s^2-ru) · ∑_{i∈ [m]} (T_0 η)[i] · v_i v_i^T
R_2 ≜ -(u+s)/(s^2-ru) · ∑_{i∈ [m]} (T_0 1_m)[i] · v_i v_i^T
and ‖E_R‖ ≤ o(1) with probability 1-o_d(1).
We first state the following claim which is needed for the error analysis.
With probability 1-e^{-d}, ‖∑_{i=1}^m v_iv_i^T‖ ≤ (1+o_d(1)) m/d.
We can write ∑_{i=1}^m v_i v_i^T = VV^T where V ∈ ℝ^{d× m} has i.i.d. N(0,1/d) entries, and note that ‖∑_{i=1}^m v_i v_i^T‖ = σ_max(V)^2.
Since we assume m ≥ω(d), by standard concentration of the largest singular value of rectangular Gaussian matrices <cit.>, we have that σ_max(V) ≤ (1+o_d(1)) √(m/d) with probability at least 1-e^-d.
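This claim is easy to illustrate numerically (a hedged sanity check with finite stand-in dimensions, assuming numpy): for m ≫ d, the top singular value of V is close to √(m/d).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
d, m = 50, 50000                                # stand-in sizes with m >> d
V = rng.standard_normal((d, m)) / np.sqrt(d)    # columns v_i ~ N(0, I_d / d)
top = np.linalg.norm(V, 2) ** 2                 # = || sum_i v_i v_i^T ||
print(top, m / d, (1 + np.sqrt(d / m)) ** 2 * m / d)   # top is close to m/d
\end{verbatim}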
Recall that R = ∑_{i∈[m]} (M^{-1}η)[i] · v_i v_i^T.
Unpacking the expression of M^{-1}η in <Ref> and plugging in A^{-1} = T_0 + E, we have R = R_1 + R_2 + E_R where
E_R = ∑_{i∈[m]} δ_i v_iv_i^T ,  where  δ_i ≜ (r+s)/(s^2-ru) · ⟨E[i],η⟩ - (u+s)/(s^2-ru) · ⟨E[i], 1_m⟩
We first upper bound |δ_i| for all i∈[m].
First, observe that |⟨E[i], η⟩| ≤ ‖E‖·‖η‖_2 and |⟨E[i], 1_m⟩| ≤ ‖E‖·‖1_m‖_2.
By <Ref>, we have ‖E‖ ≤ O(log d/d)^2 and ‖η‖_2 ≤ O(√(m/d)) with high probability.
Moreover, by <Ref>, we have that r = Θ(m/d), |s| ≤ O(√(d)) and -1 ≤ u ≤ -1/2, hence s^2 - ru ≥Ω(m/d).
Thus,
|δ_i| ≤ O(log d/d)^2 · (√(m/d) + d^{3/2}/√(m)) ≤ O(log^2 d/√(md)) ,
as we assume that m ≤ O(d^2) (the two summands contribute (m/d^2)·log^2 d/√(md) and log^2 d/√(md), respectively).
Finally, <Ref> states that ‖∑_{i=1}^m v_iv_i^T‖ ≤ O(m/d).
Since
-∑_{i∈[m]} |δ_i| v_i v_i^T ≼ E_R ≼ ∑_{i∈[m]} |δ_i| v_i v_i^T ,
we may conclude that ‖E_R‖ ≤ O(log^2 d/√(md)) · O(m/d) = O(log^2 d ·√(m)/d^{3/2}) ≤ O(log^2 d/√(d)), as m ≤ O(d^2).
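The last step, passing from the PSD sandwich to ‖E_R‖ ≤ max_i |δ_i| · ‖∑_i v_i v_i^T‖, can be illustrated numerically; the sketch below uses random stand-ins for the δ_i and v_i and assumes numpy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
d, m = 30, 200
V = rng.standard_normal((d, m)) / np.sqrt(d)    # columns are the v_i
delta = 1e-3 * rng.uniform(-1, 1, size=m)       # stand-in coefficients delta_i
E_R = (V * delta) @ V.T                         # sum_i delta_i v_i v_i^T
lhs = np.linalg.norm(E_R, 2)
rhs = np.max(np.abs(delta)) * np.linalg.norm(V @ V.T, 2)
print(lhs <= rhs + 1e-12, lhs, rhs)
\end{verbatim}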
§.§ Overview: each term is a dangling path of injective gadgets
By writing A^{-1} as a truncated series (<Ref>) and using the decomposition of R (<Ref>), we may view R_1 and R_2 as linear combinations of d × d matrices of the forms
∑_{i∈[m]} (Q_1 Q_2 ⋯ Q_k η)[i] · v_i v_i^T
and ∑_{i∈[m]} (Q_1 Q_2 ⋯ Q_k 1_m)[i] · v_i v_i^T
respectively, where Q_1,…,Q_k ∈{M_, M_β, M_D, 1/d I_m} are m × m matrices.
Using the machinery of graph matrices described in <Ref>, we can systematically represent these matrices as shapes (with underlying input {v_1,…,v_m}⊆^d).
For starters, the off-diagonal part of the matrix ∑_i=1^m v_i v_i^T is represented by the shape in <Ref>; it serves as the “base” for other matrices in <Ref>.
Each matrix in <Ref> can be represented by attaching “gadgets” (<Ref>) to the square vertex in the middle.
For the case of R_1, since η_j = ‖v_j‖_2^2 - 1 = ∑_{a=1}^d h_2(v_j[a]) for j∈ [m] (see <Ref>), each shape has an extra h_2 attached at the end.
<Ref> shows an example of such a shape.
In the following, we formally define such shapes which we call dangling shapes.
We call each shape in {M_α, M_β, M_D, 1/d I_m} an injective gadget in the dangling shape.
We further separate the matrices in the decomposition of R (<Ref>) into diagonal and off-diagonal terms:
* For the off-diagonal terms, we start with a path from U to V (each containing a circle vertex receiving labels in [d]) passing through a middle square vertex that receives a label in [m].
* For diagonal terms, we have a double edge (that shall be distinguished from an h_2 edge as the double edge here stands for v_i[a]^2) connected to a middle square vertex that receives a label in [m].
For each term, we specify a length-k≥ 0 dangling path that starts from the middle square vertex such that
* for any k>0, each step comes from one of the gadgets in {M_α, M_β, M_D, 1/d I_m};
* The dangling path is not necessarily injective, in that a vertex may appear at multiple locations along the path.
However, since it is a walk along the above gadgets, the path is locally injective within each gadget.
For matrices from R_1, there is an additional η factor.
Thus, we attach an h_2 gadget (<Ref>) to the end of the dangling path.
We call this the “final h_2 gadget”.
Let A,B be two gadgets along the dangling path.
We call it a gadget-incursion if there are unnecessary vertex intersections between V(A) and V(B) beyond the “necessary intersection”: when A,B are adjacent, V_A = U_B is the necessary boundary intersection, any intersection in V(A)∖ V_A and V(B)∖ U_B is a gadget incursion; similarly, when A, B are not adjacent, any intersection in V(A) and V(B) is a gadget incursion.
R = R_A + 1/(s^2-ru) · (u· R_u - s· R_s + r · R_r)
= τ(R_A) + ι(R_A) +
1/(s^2-ru) · (
u· (τ(R_u) + ι(R_u)) - s· (τ(R_s) + ι(R_s)) + r · (τ(R_r) + ι(R_r))
)
= τ(R_A) + τ(R_u+R_s+R_r) + ι(R)
where τ and ι split a given collection of graph matrices into its non-intersection and intersection terms; with some abuse of notation, we let τ(R) and ι(R) denote both the collections of terms without and with intersections and the corresponding sums, i.e.,
τ(R_A) ≜ ∑_{α∈τ(R_A)} R_α ,
τ(R_u+R_s+R_r) ≜ u/(s^2-ru) · ∑_{α∈τ(R_u)} R_α - s/(s^2-ru) · ∑_{α∈τ(R_s)} R_α + r/(s^2-ru) · ∑_{α∈τ(R_r)} R_α
and
ι(R) ≜ ∑_{α∈ι(R_A)} R_α + u/(s^2-ru) · ∑_{α∈ι(R_u)} R_α - s/(s^2-ru) · ∑_{α∈ι(R_s)} R_α + r/(s^2-ru) · ∑_{α∈ι(R_r)} R_α
With this decomposition, we are ready to introduce the lemmas that help us prove that the norm of R is bounded, so that we can charge it to the identity.
There exists some absolute constant c_R s.t. for m ≤ d^2/c_R,
‖τ(R_A)‖ ≤ 1/10
There exists some absolute constant c_R s.t. for m ≤ d^2/c_R,
‖τ(R_u + R_s + R_r)‖ ≤ o_d(1)
There exists some absolute constant c_R s.t. for m ≤ d^2/c_R,
‖ι(R)‖ ≤ 1/10
We defer the proofs of these lemmas for the matrices in the decomposition of R to the subsequent sections; for now, we observe that they immediately imply our desired norm bound for R.
The norm bound for R follows from our decomposition and a triangle inequality applied to the bounds from <ref> above.
Before we delve into the norm bounds, for ease of our technical analysis, we make a couple of definitions here.
First, we fix an order in which to traverse each shape that appears in R, as follows.
For a given shape from R, we consider a fixed order of traversing edges for encoding:
* First traverse the U-V path that passes through a square vertex, in particular, traverse both edges on the U-V path from U∪ V;
* Starting from the first square vertex, traverse edges from each A^-1 gadget;
* Traverse the edges (including the jump) from the filling layer;
* Traverse the edges in the second A^-1 gadget if available;
* Traverse the H_2 attachment gadget.
Throughout this work, we fix q = Θ(log^10 d) unless otherwise mentioned. This serves as a proxy for the factor incurred by the high-trace moment calculation.
Vertex and edge appearance
Recall from <Ref> that we bound the norm of a graph matrix by analyzing length-2q “block-walks” of the shape and bounding the vertex/edge factor of each “block-step”.
To this end, we need to consider both global and local appearances of a labeled vertex.
We remind the reader that a labeled square (resp. circle) vertex is an element in [m] (resp. [d]), and a labeled edge is an element in [m] × [d] (see <Ref>).
Given a block-walk, we call each labeled vertex's appearance within the given block a local appearance, and each vertex's appearance throughout the walk a global appearance.
Moreover, we say that a labeled vertex/edge is making its global first/last appearance if it is the first/last appearance of that labeled vertex throughout the walk.
Similarly, we say it is making its local first/last appearance if it is the first/last appearance within the given block in the walk.
We also need the following definition, which is a special case that we need to handle.
The term “reverse-charging” will be clear once we describe our edge charging scheme in <Ref>.
For a given walk, and a block in the walk, we call a step u→ v reverse-charging if
* the underlying edge is making its last global appearance throughout the walk;
* the underlying edge's first global appearance is also in the current block;
* (Reverse) the first appearance of the underlying edge goes from v to u.
§.§.§ Further set-up for step-labeling and edge factor scheme
Our argument for tight matrix norm bounds requires assigning each edge (or step) a step-label (<Ref>) that represents whether it is making first/middle/last appearance, and assigning edge factors based on the edge type (recall our edge factor scheme in <Ref>).
However, further care is warranted for dangling shapes when an edge appears with different Hermite indices in the walk (e.g., appears both as an h_1 and h_2 edge).
In this case, it is no longer true that an h_k edge needs to appear at least twice in the walk for the random variable to be non-vanishing.
For example, suppose an edge appears as h_1, h_1, and h_2 in the walk.
Then, even though h_2 only appears once, this term is non-vanishing under expectation:
𝔼_{x∼ N(0,1/d)}[h_1(x)^2 h_2(x)] =
𝔼[x^2 (x^2-1/d) ] = 𝔼[x^4] - 𝔼[x^2]/d = 2/d^2 .
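These Hermite moments are straightforward to confirm by Monte Carlo; the following check is illustrative only (d is a small stand-in value, and numpy is assumed):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, n = 5, 10**7
x = rng.standard_normal(n) / np.sqrt(d)   # x ~ N(0, 1/d)
h1, h2 = x, x**2 - 1.0 / d
print(np.mean(h1**2 * h2), 2 / d**2)      # E[h1^2 h2] = 2/d^2
print(np.mean(h2**2), 2 / d**2)           # E[h2^2]    = 2!/d^2
print(np.mean(h1**2), 1 / d)              # E[h1^2]    = 1!/d
\end{verbatim}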
The matrices that arise in our analysis may contain h_1, h_2, h_3, h_4 edges.
For i≤ 4, we treat an h_i edge as i edge-copies in our edge factor assignment scheme.
For each step regardless of the Hermite index, assign a step-label to all its edge-copies as follows,
* Assign an F step if it is making its first appearance;
* Assign an H step if it is making its middle appearance
* Assign an R step if it is making its last appearance.
We next describe our edge factor assignment scheme.
For any graph matrix of size at most D_V that contains h_1,h_2,h_3,h_4 edges and walks of length 2q ≥Ω(log d),
we can assign values to each edge-copy among the edge's appearance throughout the walk such that
* Each edge-copy of an h_1, h_2 edge with F/R step-label gets assigned a value 2^1/4/√(d).
* Each edge-copy of an edge with step-label H and any edge-copy of an h_3, h_4 edge gets assigned a value 32 q D_V/√(d).
* In the case of a random variable appearing as h_1, h_1, h_2 (in an arbitrary order), it gets assigned value of 2/d^2 in total, in particular, each edge-copy of the h_2 edge gets assigned a value √(2)/√(d).
We first note that 𝔼_{x∼ N(0,1/d)}[h_k(x)^2] = k!/d^k.
For k=1,2, if an h_k edge only appears twice and no other Hermite index occurs, then it must have 2k edge-copies with step-labels F and R, giving an edge factor of (2^1/4/√(d))^2k = 2^k/2/d^k, which is larger than k!/d^k for k = 1, 2.
Next, we consider the case when h_3, h_4 edges are involved or when an edge appears more than twice, i.e., some edge-copies are assigned step-label H.
Let a_k be the number of times h_k appears in the walk, and let t ∑_k≤ 4 k · a_k be the total number of edge-copies.
Applying Cauchy-Schwarz twice and <Ref>,
_x∼(0,1/d)∏_k≤ 4 h_k(x)^a_k ≤h_1(x)^2a_1 h_2(x)^2a_2^1/2h_3(x)^2a_3 h_4(x)^2a_4^1/2
≤∏_k≤ 4h_k(x)^4a_k^1/4≤∏_k≤ 44k a_k/d^k· a_k/2
≤4t/d^t/2
We next show that the edge factors assigned to the t edge-copies upper bound the above.
Let t_0 be the number of edge-copies that get assigned 2^1/4/√(d).
We must have 0 ≤ t_0 < t and t_0 ≤ 4.
Then, the assignment scheme gives
(2^{1/4}/√(d))^{t_0} · (32q D_V/√(d))^{t-t_0} ≥ d^{-t/2} · (32q D_V)^{t-t_0}
Since the length of the walk is 2q and the size of the graph matrix (shape) is ≤ D_V, we have t ≤ 8q D_V.
Thus, if t ≤ 8, then clearly (32q D_V)^{t-t_0} ≥ (4t)^{t/2};
otherwise, t-t_0 ≥ t/2 and (32q D_V)^{t-t_0} ≥ (4t)^{t/2}.
This shows that the edge factors correctly account for the values from the Hermite characters.
For the special case when an edge appears as h_1, h_1, h_2, the factor 2/d^2 follows from <Ref>.
This completes the proof.
§.§ Alternative Analysis Draft
Note: I think this alternative analysis should be put in an appendix as it works well for the main analysis but doesn't work as well for Pur factors (S edges which are local F edges are tough to handle with this charging scheme).
For each chain, we define local F, R, S, and H edges as follows. Here we traverse the chain from top to bottom.
* We say that an edge is a local F edge if it is appearing for the first time in the current chain and its destination is appearing for the first time in the current chain.
* We say that an edge is a local R edge if it is appearing for the last time in the current chain.
* We say that an edge is a local H edge if it appears again both before and afterwards in the current chain.
* We say that an edge is a local S edge if it is appearing for the first time but its destination has appeared before in the current chain.
For our local analysis, there are two circle vertices which we may need to be careful about.
Let v_left be the circle vertex which is at the top left, let v_right be the circle vertex which is at the top right (which may be equal to v_left), and let v_bottom be the circle vertex which is at the bottom.
We can always take the minimum weight vertex separator to be v_left, so we do not need to worry about v_left. However, as we will see, we have to be careful about v_right and v_bottom.
We say that an edge is locally vanishing if we have that after the local intersections, the product of it and the edges parallel to it has a nonzero constant term.
Some examples of locally vanishing edges are as follows.
* Two parallel edges with label 1 are locally vanishing as x^2 = √(2)·(x^2 - 1)/√(2) + 1
* Two parallel edges with label 2 are locally vanishing as
((x^2 - 1)/√(2))^2 = √(6)·(x^4 - 6x^2 + 3)/√(24) + 2√(2)·(x^2 - 1)/√(2) + 1
* More generally, two parallel edges are locally vanishing if they have the same label k and are not locally vanishing if they have different labels.
* Three parallel edges with labels 1, 1, and 2 are locally vanishing as
x^2·(x^2 - 1)/√(2) = 2√(3)·(x^4 - 6x^2 + 3)/√(24) + 5·(x^2 - 1)/√(2) + √(2) ,
as verified symbolically in the sketch following this list.
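The expansions above can be verified symbolically; the following sketch is illustrative only (it assumes sympy and writes out the normalized Hermite polynomials explicitly):
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
h2 = (x**2 - 1) / sp.sqrt(2)                    # normalized degree-2 Hermite
h4 = (x**4 - 6*x**2 + 3) / sp.sqrt(24)          # normalized degree-4 Hermite

# two parallel edges with label 1
print(sp.simplify(x**2 - (sp.sqrt(2)*h2 + 1)))                       # 0
# two parallel edges with label 2
print(sp.simplify(h2**2 - (sp.sqrt(6)*h4 + 2*sp.sqrt(2)*h2 + 1)))    # 0
# three parallel edges with labels 1, 1, 2
print(sp.simplify(x**2*h2 - (2*sp.sqrt(3)*h4 + 5*h2 + sp.sqrt(2))))  # 0
\end{verbatim}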
We say that a vertex v is locally isolated if v is not equal to the top left circle or the top right circle and all edges incident to v are locally vanishing.
For all edges except for two special edges e^*_r and e^*_b which we describe below, we assign them as follows.
* For each local F edge, we assign it to its destination. For the edge at the top between v_right and its neighboring square vertex, we consider its destination to be the square vertex.
* For each local R edge, we assign it to its origin unless it is a dangling edge with label 2. If it is a dangling edge with label 2, we split it between its endpoints.
* For each local S and H edge, we assign half of it as a bonus to its origin and half of it as a bonus to its destination.
All vertices except v_right and v_bottom have the required edge factors.
For each circle vertex u except for v_left (which does not need any edges), v_right, and v_bottom, consider the first and last time u appears in the current chain. The first time u appears, there must be a local F edge pointing to it which gives u one edge. If u only appears once, it cannot be locally isolated, so it only needs one edge. Otherwise, the last time u appears, it can only be locally isolated if all edges incident to it are R edges. In this case, u obtains a second edge.
Similarly, for each square vertex v, consider the first and last time v appears in the current chain. The first time v appears, there must be two local F edges with label 1 or one local F edge with label 2 pointing to it. The last time v appears, it can only be locally isolated if all edges incident to it are R edges, in which case v obtains two additional edges.
Note that v_right is an exception because the first time it appears, it does not have a local F edge pointing to it. Similarly, v_bottom is an exception because the last time it appears, it may be incident to an R edge with label 2 without gaining an additional edge.
When v_left≠ v_right and v_right is not equal to the final circle vertex, we find/define e^*_r through the following iterative process. We start with the edge e between v_right and its neighbor. Note that this is a local F edge going down from a vertex v = v_right which is not locally isolated and which is not the final circle vertex. We now have the following cases
* e does not appear again and the other endpoint of e is the final circle vertex. In this case, we take e^*_r = e and assign one edge factor from it to v_right. If it has label 2, we keep one edge factor in reserve in case e^*_b = e^*_r = e.
* e does not appear again and its other endpoint is not the final circle vertex. In this case, letting v' be the other endpoint of e, we take e' to be an edge going down from v'. If e' is not a local F edge, we take e^*_r = e' and assign it to v_right. We can do this because v' is not locally isolated so it already has all of the edge factors it needs. If e' is a local F edge, we again have a local F edge e' going down from a vertex which is not locally isolated and is not the final circle vertex, so we repeat this process.
* e appears again. In this case, let e' be the next time e appears. If e' is not a local R edge whose destination is v, we take e^*_r = e' and assign it to v_right. If e' is a local R edge whose destination is v, we let e” be an edge going down from v.
If e” is not a local F edge then we take e^*_r = e” and assign it to v_right. If e” is a local F edge then we are still in the situation where we have a local F edge going down from a vertex which is not locally isolated and is not equal to the final circle vertex, so we repeat this process.
We find/define e^*_b through the following iterative procedure. We start with the edge e with label 2 between v_bottom and its neighbor. Note that e must be a local R edge. We then do the following.
* If e is not a locally vanishing edge then we take e^*_b = e. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r.
* If e is a locally vanishing edge then e must appear above. Consider the previous time e appears and let e' be this copy of e. There are a few possibilities for e'.
* e' is an edge with label 2 going from a copy of v_bottom to the bottom square vertex of the current block which is not equal to the top vertex of the current block. If e' is not a local F edge then we take e^*_b = e'. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e' is a local F edge, let e” be the edge from the top square vertex of the current block to the copy of v_bottom. If e” is not a local R edge then we take e^*_b = e”. We assign half of e^*_b = e” to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e” is a local R edge then we are in the same situation as before so we repeat this process.
* e' is an edge with label 2 from the top square vertex in the current block which does not equal the bottom square vertex of the current block to a copy of v_bottom. In this case, we take e^*_b = e'. If e^*_b = e' is not a local F edge then we assign half of it to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e^*_b = e' is a local F edge then we assign it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r. The reason for this is that e^*_r can only be a local F edge in case 1, in which case it does not appear again (while e' appears again by definition).
* e' is an edge with label 2 which hangs off of a square vertex which is both the top and bottom square vertex for the current block. In this case, we take e^*_b = e' and assign all of it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r.
* e' is a local H edge with label 1. In this case, we take e^*_b = e' and assign it to v_bottom. Here we may have that e^*_b = e^*_r but if we do, one of the endpoints of e^*_b = e' is not locally isolated so we can take an edge factor from it and give the edge factor to v_right.
§.§ Attempted Simplification and Illustration
Our charging scheme is as follows.
* For all local F and S edges, we assign the edge to its destination.
* For all local H and R edges, we assign the edge to the destination of the corresponding local F edge in the block.
When U ≠ V, all vertices outside of U ∪ V have the required edge factors.
Observe that the first time a vertex outside of U ∪ V appears in a block, it receives the edges used to reach it (one for a circle vertex in M_α, two for a square vertex or a circle vertex in M_β or M_D).
This is sufficient unless the vertex only appears in this block. This can only happen if
* The edge(s) pointing to the vertex are F edges.
* The F edges appear for the last time in the current block as reverse charging edges.
In this case, the vertex gets edge factors from the reverse charging edges, which is sufficient.
When U ≠ V, we obtain two extra edge factors for the vertices u,v in U ∪ V as follows:
* We show that for all blocks except the first and last block, there is always a vertex (often u or v) which appears in both an earlier block and a later block and thus does not need any edge factors. One edge factor assigned to this vertex can then be given to u or v and any extras give additional decay which can be used to resolve Pur confusion.
* We show that there is a vertex (usually the final circle vertex) which has at least one more edge factor than it needs. One of these extra edge factors can be given to u or v and any extras give additional decay which can be used to resolve Pur confusion.
For each block except the first and last block, there is at least one vertex which appears in both an earlier block and a later block.
We use Lemma *** from PTVW which says that for each block, taking all intersections within the block into account, there is a path of edges with odd multiplicity from u to v.
Let u' be the last vertex on this path which appears in an earlier block. If u' = v then u' is the desired vertex. Otherwise, let v' be the vertex after u' on this path. v' does not appear in an earlier block, so the only way the edge {u',v'} can appear again is if u' appears in a later block, in which case u' is the desired vertex.
For 1/d1_d terms, the 1/d provides two extra edge factors, one of which can be given to u or v.
For w terms, we show that there is at least one vertex (usually the final circle vertex) which has more edge factors than it needs.
Add statement here.
We use a similar argument as Lemma *** in PTVW.
Consider the edge e to the final circle vertex.
If e is not a reverse charging edge pointing to the square vertex then there are a few cases, each of which gives us the needed edge factors
* This is the first appearance of the final circle vertex in the current block. In this case, the final circle vertex obtains two edge factors and only needs one, so we have an extra edge factor.
* The final circle vertex is equal to u or v. In this case, this vertex obtains at least 2 edge factors, which is more than is needed.
* The final circle vertex is not equal to u or v and has appeared before in the current block. In this case, the final circle vertex obtains at least 3 edge factors, two from e and one from the first time it appeared in the current block.
If e is a reverse charging edge pointing to the square vertex (in which case PTVW would call it a right-critical edge, see Definition *** of PTVW), let e' be the previous time that e appears in the current block (PTVW would call e' a left-critical edge). There are a few cases to consider
* e' is an F or S edge which is part of an M_β gadget. In this case, e' must be the bottom edge of the gadget as it points to the square vertex.
Letting e” be the top edge of the gadget, we can repeat the entire argument for e” instead of e.
* e' is an H edge which is part of an M_β gadget. In this case, the square vertex has a total of at least 6 edge factors (2 from the first time it appears, 2 from e' and 2 from e).
* e' is an H edge which is part of an M_α gadget. In this case, the square vertex has a total of at least 5 edge factors (2 from the first time it appears, 1 from e' and 2 from e).
* e' is an H edge which is part of an M_D gadget (if it was an F or S edge then it would point to the circle vertex instead). In this case, the square vertex has a total of at least 6 edge factors (2 from the first time it appears, 2 from e' and 2 from e).
Let SQ_1 be the square vertex in the top gadget.
When U = V, all vertices except SQ_1 have sufficient edge factors.
Note that u = v appears in both an earlier and a later block and does not need edge factors. For the other vertices, we use the same argument as we used for Proposition <ref>.
SQ_1 has two edge factors from the top gadget. If SQ_1 appears in another block then this is sufficient. If we are considering a 1/d1_n term then this gives two extra edge factors so even if SQ_1 only appears in the current block, it has the required edge factors.
We say that an edge is a local H edge if it appears both earlier and later in the current block. Similarly, we say that an edge is a local S edge if it has not yet appeared in the current block but its destination has already appeared in the current block.
If SQ_1 only appears in the current block then either SQ_1 has at least 2 edge factors or there is a neighbor of SQ_1 which has at least 4 edge factors.
We show that if SQ_1 only appears in the current block, one of the following cases must happen.
* There is an h_2 edge e going out from SQ_1 which is appearing for the first time in the current block. In this case, let v be the other endpoint of this edge (v will often be the final circle vertex but it does not have to be). Since SQ_1 only appears in the current block, e must appear later on, so v must obtain at least 4 edge factors, two of which can be given to SQ_1.
* There is at least one local H edge e incident to SQ_1. In this case, since SQ_1 only appears in the current block, this edge must appear with multiplicity at least 4. If e is assigned to SQ_1 then we are done. If e is assigned to its other endpoint v then we can split its edge factors between SQ_1 and v. (Up front, this may not be enough if all 4 edge-copies need to carry polynomial factors due to logs under naive splitting, but if the edge appears only 4 times then there is no log factor.)
* There is a local S edge e which goes to SQ_1. In this case, since SQ_1 only appears in the current block, e must appear with total multiplicity at least 2 so SQ_1 has the needed two edge factors.
We show that one of these cases must hold as follows. At the start, SQ_1 must either have an h_2 edge going out from it (which gives the first case) or it must have two single edges e_1 and e_2 going out from it as part of an M_α gadget. If so, consider the next time SQ_1 appears. Unless SQ_1 is reached via a local H or S edge (in which case we are in the second or third case), SQ_1 must be reached by closing e_1 and e_2. At this point, all edges which have appeared so far which are incident to SQ_1 do not appear again so we can repeat this argument.
[
mycircle/.style=
circle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum size=20pt,
font=,
mysquare/.style=
rectangle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum height=20pt,
minimum width=20pt,
font=,
myarrow/.style=-Stealth,
node distance=0.6cm and 1.2cm
]
(-5,2.5) ellipse (.4cm and .6cm);
(0,2.5) ellipse (.4cm and .6cm);
[orange] (-2.5,-6) .. controls (-6.5,-4) and (-6.5,-2) .. (-2.5,0);
[orange] (-4, -1.5) – (-4, -4.5);
[orange] (-1, -1.5) – (-1, -4.5);
(5,2.5) ellipse (.4cm and .6cm);
[orange] (2.5,-6) .. controls (6.5,-4) and (6.5,-2) .. (2.5,0);
[orange] (4, -1.5) – (4, -4.5);
[orange] (1, -1.5) – (1, -4.5);
[orange] (-5,2.5) .. controls (-1,4) and (1,4) .. (5,2.5);
[orange] (-2.5, 0) – (2.5, 0);
[orange] (-2.5, -8) – (2.5, -8);
[mycircle] at (-5, 2.5) (u) u;
[mycircle] at (0, 2.5) (v) v;
[mysquare] at (-2.5, 0) (a) a;
[mycircle] at (-4, -1.5) (b) b;
[mycircle] at (-1, -1.5) (c) c;
[mysquare] at (-2.5, -3) (d) d;
[mycircle] at (-4, -4.5) (e) b;
[mycircle] at (-1, -4.5) (f) c;
[mysquare] at (-2.5, -6) (g) a;
[mycircle] at (-2.5, -8) (t) t;
[mycircle] at (5, 2.5) (w) u;
[mysquare] at (2.5, 0) (a2) a;
[mycircle] at (1, -1.5) (b2) b';
[mycircle] at (4, -1.5) (c2) c';
[mysquare] at (2.5, -3) (d2) d';
[mycircle] at (1, -4.5) (e2) b';
[mycircle] at (4, -4.5) (f2) c';
[mysquare] at (2.5, -6) (g2) a;
[mycircle] at (2.5, -8) (t2) t;
ı/ȷ///in
u.315/a.135/ /above/0.5,
v.225/a.45/ /above/0.5,
a.225/b.45/ /above/0.5,
a.315/c.135/ /above/0.5,
b.315/d.135/ /above/0.5,
c.225/d.45/ /above/0.5
[cyan] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
e.45/d.225/ /above/0.5,
f.135/d.315/ /above/0.5,
g.135/e.315/ /above/0.5,
g.45/f.225/ /above/0.5
[red] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
g.270/t.90/2/right/0.5
[green] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
v.315/a2.135/ /above/0.5,
w.225/a2.45/ /above/0.5
[blue] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
a2.225/b2.45/ /above/0.5,
a2.315/c2.135/ /above/0.5,
b2.315/d2.135/ /above/0.5,
c2.225/d2.45/ /above/0.5
[cyan] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
e2.45/d2.225/ /above/0.5,
f2.135/d2.315/ /above/0.5,
g2.135/e2.315/ /above/0.5,
g2.45/f2.225/ /above/0.5
[red] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
g2.270/t2.90/2/right/0.5
[green] [myarrow] (ı) – node[font=,,pos=] (ȷ);
This figure illustrates the charging scheme. The light blue edges are F edges and we assign their factor to their destination. The dark blue edges are R edges whose first appearance is in a different block. We also assign these edges to their destination. The red edges are R edges whose first appearance is in the current block. We assign these edges to the destination of the corresponding F edge (which is generally the source for this edge).
Note that two edge factors are missing from v. We obtain these factors from the two green edges pointing towards t.
[
mycircle/.style=
circle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum size=20pt,
font=,
mysquare/.style=
rectangle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum height=20pt,
minimum width=20pt,
font=,
myarrow/.style=-Stealth,
node distance=0.6cm and 1.2cm
]
(0,2) ellipse (.4cm and .6cm);
(0,2) ellipse (.5cm and .75cm);
[orange] (0,-6) .. controls (-4,-4) and (-4,-2) .. (0,0);
[orange] (-1.5, -1.5) – (-1.5, -4.5);
[orange] (1.5, -1.5) – (1.5, -4.5);
[orange] (0, -8) – (1.5, -4.5);
[mycircle] at (0, 2) (u) u;
[mysquare] at (0, 0) (a) a;
[mycircle] at (-1.5, -1.5) (b) b;
[mycircle] at (1.5, -1.5) (c) c;
[mysquare] at (0, -3) (d) d;
[mycircle] at (-1.5, -4.5) (e) b;
[mycircle] at (1.5, -4.5) (f) c;
[mysquare] at (0, -6) (g) a;
[mycircle] at (0, -8) (t) c;
ı/ȷ///in
a.225/b.45/ /above/0.5,
a.315/c.135/ /above/0.5,
b.315/d.135/ /above/0.5,
c.225/d.45/ /above/0.5
[cyan] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
e.45/d.225/ /above/0.5,
f.135/d.315/ /above/0.5,
g.135/e.315/ /above/0.5
[red] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
g.45/f.225/ /above/0.5,
t.90/g.270/2/left/0.5
[brown] [myarrow] (ı) – node[font=,,pos=] (ȷ);
(4,2) ellipse (.4cm and .6cm);
(4,2) ellipse (.5cm and .75cm);
[orange] (4, -2) – (6, 0);
[mycircle] at (4, 2) (u2) u';
[mysquare] at (4, 0) (a2) a';
[mycircle] at (6, 0) (b2) b';
[mysquare] at (4, -2) (t2) b';
ı/ȷ///in
a2.00/b2.180/2/above/0.5
[cyan] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
t2.90/a2.270/2/right/0.5
[brown] [myarrow] (ı) – node[font=,,pos=] (ȷ);
[
mycircle/.style=
circle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum size=20pt,
font=,
mysquare/.style=
rectangle,
draw=black,
fill=white,
fill opacity = 1,
text opacity=1,
inner sep=0pt,
minimum height=20pt,
minimum width=20pt,
font=,
myarrow/.style=-Stealth,
node distance=0.6cm and 1.2cm
]
(-1.5,1.5) ellipse (.4cm and .6cm);
(1.5,1.5) ellipse (.4cm and .6cm);
[orange] (0,-4.5) .. controls (-2,-3.5) and (-2,-2.5) .. (0,-1.5);
[mycircle] at (-1.5, 1.5) (u) u;
[mycircle] at (1.5, 1.5) (v) v;
[mysquare] at (0, 0) (a) a;
[mycircle] at (0, -1.5) (b) b;
[mysquare] at (0, -3) (c) c;
[mycircle] at (0, -4.5) (t) b;
ı/ȷ///in
u.315/a.135/ /above/0.5,
v.225/a.45/ /above/0.5,
b.270/c.90/2/right/0.5
[cyan] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
t.90/c.270/2/right/0.5
[red] [myarrow] (ı) – node[font=,,pos=] (ȷ);
ı/ȷ///in
a.270/b.90/2/right/0.5
[green] [myarrow] (ı) – node[font=,,pos=] (ȷ);
§.§.§ Analyzing Pur Factors
Note: For this draft, I am making edges which have slack split their edge factor between their endpoints. This can be adjusted.
Note: These definitions need editing.
We define e^* to be the edge in the current block which gives an extra edge factor to u or v or gives extra edge factor(s) to SQ_1.
We define v^* to be the vertex identified by the parity argument which appears in both an earlier and a later block.
For each vertex w ≠ v^*, the following types of edges give an extra edge factor (which we can split between its endpoints).
* There is a local S or H edge e leading to w which is not equal to e^*.
* There is an S edge e leading to w which is a local F edge.
* There is an H edge e leading to w which is a local F or R edge.
We have the following cases.
* If e = (v,w) is a local S or H edge which is not equal to e^* then both v and w obtain sufficient edge factors even without e. To see this, note that if v appears in another block then the local F or local S edge(s) leading to v are sufficient. If v only appears in this block then the local F or local S edge(s) leading to v and the local R edge(s) closing these edges give sufficient edge factors. The same argument holds for w.
Note: This isn't quite correct. If an h_2 edge leads to w then it can be closed with a local H edge and then an R edge.
* If e = (v,w) is an S edge which is a local F edge then w obtains sufficient edge factors even without e. To see this, observe that since e is an S edge and a local F edge,
1. w must appear in an earlier block.
2. Either e is closed in this block or w must appear in a later block.
* If e = (v,w) is an H edge which is a local F edge then w must occur in a previous block. If w also appears in a later block then w does not need an edge factor for the current block. If w does not appear in a later block then e must be closed which gives an edge factor to w so w gains sufficient edge factors in the current block.
Note: The local R case is similar but needs to be added.
Whenever a vertex w ≠ v^* appears in a block and this is not the first time w appears in the block, one of the following cases holds:
* The edge(s) leading to w are local and global R edges.
* One of the edge(s) leading to w is e^*.
* w obtains at least half of an extra edge factor from one of the edges leading to it.
By Lemma <ref>, if an edge leading to w is a local or global H or S edge which is not equal to e^* then this edge gives w at least half an edge factor of slack. Thus, if none of the edge(s) leading to w are e^*, local H or S edges, or global H or S edges, then the edge(s) leading to w must be local and global R edges.
If a square vertex v is incident to k M_D gadgets whose edge is not e^* then v obtains at least (k-1)/2 extra edge factors from the M_D gadgets.
Consider an M_D gadget and let e be the h_2 edge between the square vertex and the circle vertex in the M_D gadget. There are a few cases:
* e = e^*
* e is not e^* and e gives its edge factors to the circle vertex. In this case, if e is not closed then the circle only needs one edge factor so e has an extra edge factor which can be split between its endpoints. If e is closed then the edge(s) which close e give the circle vertex at least two edge factors so both of e's edge factors can be split between its endpoints.
* e gives its edge factors to the square vertex. Note that if this happens more than once then the square vertex obtains two extra edge factors for each additional time this happens.
Note: The following argument will need some editing. I forgot that S edges add two Pur rather than 1.
The total amount of Pur gained by a vertex w ∉{u,v,v^*} in a block is at most 10+8x where x is the number of extra edge factors obtained by the vertex.
By Lemmas <ref>, each time a vertex appears in a block and the edge(s) leading to it are not e^*, it obtains at least half an edge factor of slack, and each such appearance increases its Pur (note: we may want to count Pur by edge multiplicity rather than number of edges) by at most 4.
By Lemma <ref>, if the vertex appears in k M_D gadgets whose edge is not equal to e^* then it obtains at least (k-1)/2 extra edge factors of slack. Each of these M_D gadgets increases its Pur by at most 2.
We now make the following observations:
* A vertex gains at most 4 Pur from its first appearance in a block
* A vertex gains at most 4 Pur from the gadget containing e^*
* A square vertex may gain 2 Pur from an M_D gadget (actually, I think this case can be removed as Pur is only gained if the edge is an F edge, S edge, or H edge in which case there is slack)
* Each extra 1/2 edge factor the vertex obtains allows the vertex to gain at most 4 Pur.
If a vertex w ∉{u,v,v^*} appears in both an earlier block and a later block, it gains at most 10x Pur from the block where x ≥ 1 is the number of excess edge factors w obtains from the block.
§.§ Handling Pur for v^*, u, and v
Note: It shouldn't be too much of a problem, but we'll likely need to discuss the case when v^* is the final circle vertex.
Our main remaining concern is that the vertex v^* (the vertex which appears in both a previous and a later block but gives an edge factor to U ∪ V) gains Pur from the block without gaining any slack. We also need to analyze u and v.
The intuition for why this does not happen is as follows. Recall that we identify v^* by finding a path from u to v consisting of edges of odd multiplicity, taking e_return to be the last edge on this path which appears in a previous block, and taking v^* to be the destination of e_return (if no edges on the path appear in a previous block then we take v^* = u). This means that unless v^* = u, v^* must be incident to an edge e_return which appears with odd multiplicity in the current block and also appears in a previous block. We can use e_return to return to v^* and reduce the Pur for v^*. We show that if v^* still gains Pur then it must also obtain some edge factors of slack.
However, making this argument rigorous is tricky. One weird possibility is that the edge e_return goes from v^* to another vertex w rather than from another vertex to v^*. We show that even in this case, we can go around v^* to reach w and use e_return to return to v^*.
If v^* is a square vertex then v^* has an extra edge factor.
If v^* is a square vertex then v^* obtains two edge factors from its first appearance in the block. Since v^* only needs to give one edge factor to u or v, this leaves v^* with an extra edge factor.
It's important that we can't have v^* = SQ_1 without having an edge factor of slack. If we could have v^* = SQ_1 with no slack then we would have a logarithmic factor in the final norm bound.
If w is a circle vertex which is incident to at least 3 edges of odd multiplicity then w obtains at least one edge factor of slack.
For each of the edges with odd multiplicity which go to w, there are two cases, each of which gives an edge factor to w.
* The edge assigns its edge factors to w.
* The edge does not assign its edge factors to w. In this case, the edge must have multiplicity at least 3 and its square endpoint must appear in another block. Since there must be a different local F edge which also leads to the square vertex, the square vertex obtains at least 2 extra edge factors, one of which can be given to w.
We now show that if w is incident to at least 3 edges of odd multiplicity then one of the following cases holds
* w ≠ u, w ≠ v, and at least two of these edges go to v^*.
* w = u or w = v and at least one of these edges goes to w.
* w obtains an extra edge factor of slack.
To see this, observe that one of the first two cases holds unless there is an edge e incident to w such that two copies of e appear in an M_α gadget and lead to w. If there is such an edge e then either e gives its edge factors to w or e has total multiplicity at least 3. If the total multiplicity of e is odd or is at least 6 then the other endpoint of e obtains at least two more edge factors than needed, so e can give an edge factor to w. If the total multiplicity of e is 4, we could have the case where e first appears as an h_2 edge in an M_β gadget from w to the square vertex and then appears two more times.
Sketch: In this case, we can repeat the e^* argument.
If v^* ∉ {u,v} then either v^* does not gain any Pur from the current block or v^* obtains at least one extra edge factor of slack.
Consider the first time v^* appears. If the first appearance of v^* is as part of an M_α gadget with an edge e_1 going to v^* and an edge e_2 going from v^* to another vertex, there are a few cases:
* Both e_1 and e_2 have odd multiplicity and either e_return = e_1 or e_return = e_2. In this case, the first appearance of v^* does not give v^* any Pur factors and later appearances of v^* can be analyzed as before.
* Both e_1 and e_2 have odd multiplicity and neither of these edges is e_return. In this case, v^* must be incident to a third edge with odd multiplicity which means that it must obtain at least one edge factor of slack.
* If e_1 or e_2 has even multiplicity, consider the next time v^* appears. If the next appearance of v^* is in an M_α gadget then let e' be the edge leading to v^*. There are a few cases:
* e' is not equal to e_1 or e_2. In this case v^* obtains an extra edge factor from e' as e' must be a local F edge.
* e' is equal to e_1 or e_2 and the total multiplicity of e' is odd. In this case, whichever vertex e' gives its edge factors to has at least two edge factors of slack which we can split between the endpoints of e'.
* e' is equal to e_1 or e_2 and the total multiplicity of e' is even. In this case, letting e'_1 be the edge in {e_1,e_2} which is not equal to e' and letting e'_2 be the other edge incident to v^* in this new appearance of v^*, we can repeat this argument with e'_1 and e'_2. Note that we can reach the bottom square vertex of this M_α gadget without passing through v^*.
If the next appearance of v^* is not in an M_α gadget then there is an h_2 edge e' leading to the next appearance of v^*. In this case, either
* e' is not equal to e_1 or e_2. In this case v^* obtains two extra edge factors from e' as e' must be a local F edge.
* e' is equal to e_1 or e_2 and has total multiplicity at least 4. In this case, the other endpoint of e' must obtain at least 6 edge factors so we can split two of the edge factors of e' between its endpoints.
Either u does not gain Pur in the current block or u obtains at least one edge factor of slack.
We observe that one of the following cases holds:
* There is an edge going out from u with multiplicity 1 (which is an F edge if v^* = u and an R edge if v^*≠ u) and all other edges incident to u have even multiplicity, are local F or R edges, and do not give their edge factors to u. In this case, u does not gain any Pur from this block as the F or R edge going out from u is expected and the local R edges going to u cancel out the other edges going out from u.
* u obtains at least one extra edge factor, either from being incident to at least 3 edges with odd multiplicity, being incident to an edge whose total multiplicity is odd and at least 3, being incident to an S or H edge, or obtaining edge factor(s) from an edge.
Either v does not gain Pur in the current block or v obtains at least one edge factor of slack.
If v^* = v then we can use the same argument as above except that instead of starting with two edges e_1 and e_2, we start with the edge e which is incident to v. If v^*≠ v then either this is the first block where v appears, in which case we reach v via an F edge and this is expected, or v appears in both an earlier block and a later block but is not equal to v^*, in which case v does not need any edge factors so the edge factor which is given to it is an extra edge factor of slack.
§.§ Local Analysis for R
For some constant ε>0, for any q < d^ε, the block-value function for R is bounded by
B_q(R) ≤ 1/10 .
We first state our vertex factor assignment scheme (with redistribution) which assigns vertex factors to labeled vertices according to their global appearances. It is the same one as described in <Ref>.
[frametitle= Vertex factor assignment scheme]
* For each labeled vertex i making its first or last global appearance, assign a factor of √(wt(i)) (that is, √(d) for a circle vertex and √(m) for a square vertex);
* For each labeled vertex i making its middle global appearance, assign a factor of 1 if it is reached via an R step, otherwise assign a factor 2q · D_V as well as its corresponding O(1) factors.
We next describe the scheme that assigns edge-copies to vertices.
In most cases, we charge each edge-copy (on the dangling path) to the vertex it leads to, unless it is a reverse-charging edge (<Ref>).
Recall that we define a step u→ v to be reverse-charging if it is making its last global appearance and if its first global appearance is also in the current block going from v → u.
[frametitle= "Top-down" edge-copy charging primitive]
* Assign both edges on the U-V path to the first square vertex in the middle;
* For any step u → v before the final h_2 gadget, assign the edge to v unless this is a reverse-charging edge (<Ref>), in which case we assign to u;
* For the final h_2 gadget (if any), we reserve its assignment from the current scheme.
See <Ref> for an illustration of the above scheme.
We next show the following invariant throughout the walk,
We can assign each edge-copy to at most one vertex,
* for a circle vertex, it is assigned 1 edge-copy if it is making first/last global appearance, and 2 edge-copies if both;
* for a square vertex on the dangling path, it is assigned 2 edge-copies if it is making first/last global appearance, and 4 edge-copies if both;
* for any square vertex's global middle appearance yet local first appearance in the block, it is assigned at least 1 edge-copy;
* no surprise/high-mul step is assigned to any vertex's global first/last appearance.
We start by giving the argument when U≠ V, and then show it can be modified into an argument for U=V.
Charging vertices outside U∪ V and the final h_2 gadget
The above scheme applies immediately to the vertices that appear in the current block and lie neither in U∪ V nor in the final h_2 gadget (that is, the last square vertex and the final circle vertex it connects to).
It follows by observing that any circle (resp. square) vertex that makes its first global appearance is the destination of 1 (resp. 2) edge-copies that are not reverse-charging, in which case the edge-copies are assigned to the particular vertex.
This holds analogously for vertices making the last global appearance but not the first global appearance in the block, since the edges are not reverse-charging.
We note that this is true for the first square vertex as well since we assign both edges on the U-V path to the first square vertex.
On the other hand, for vertices making both their first and last global appearance, consider the assignment up to the final h_2 gadget: the charging above gives 1 edge (for a circle vertex) or 2 edges (for a square vertex) when such a vertex first appears locally. Furthermore, notice that each of these edges needs to be closed, and the closing copies are assigned to the destination vertex, i.e., the particular vertex under inspection; this completes the proof of our assignment restricted to vertices outside U∪ V and the final h_2 gadget.
Charging the final gadget and finding "block-reserve"
The goal is to identify an edge-copy as block-reserve, and assign vertex factors to corresponding edges in the final h_2 gadget (if any). We first consider the case when there is no final h_2 attachment, i.e., the term from (u+s)/(s^2-ru) · A^-1 1_m.
Recalling our bounds from <Ref> that s^2-ru = Ω(m/d), s=O(√(d)), and |u|=O(1) (here the weaker bound on s suffices), we observe that
(u+s)/(s^2-ru) = O(√(d)/(m/d)) = O(d^1.5/m) = O(1/√(d))
and the normalizing constant can be regarded as an edge-copy. This is assigned as the block-reserve factor if there is no final h_2 gadget, and there is no vertex factor involved when there is no final h_2 gadget since this may be the last appearance of a square vertex while its factor has already been assigned to the edges that first lead to this vertex in the current block.
We now proceed to assign edges from the final h_2 gadget to vertex factors involved in the final h_2 gadget, moreover, we will identify one edge-copy of assigned factor at most √(2)/√(d) as "block-reserve" that allows us to further charge vertices in U∪ V.
For the vertices in the final h_2 gadget, we observe the following,
* in the final gadget, the last square vertex cannot be making its first global appearance yet it may be making its last appearance (since it is on the gadget boundary, we consider it appears in both the final h_2 attachment gadget and the gadget that precedes it);
* the last circle vertex may be making either its first or last global appearance but it cannot be both;
* the h_2 edge in-between has not yet been assigned, and we can split it into two edge-copies, each of weight 2^1/4/√(d) if F/R under the global step-labeling (see the check right below).
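As a quick check of this split (it only restates the variance of an h_2 edge used elsewhere in the analysis), the two copies indeed recover the h_2 edge-value:
(2^1/4/√(d))^2 = √(2)/d = √(2/d^2) ,
matching the edge-value of an h_2 edge, whose variance is 2/d^2.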
We now observe the following,
* If the final h_2 attachment is not a reverse-charging edge assigned to the source square vertex, then the square vertex has already been assigned 2 edge factors if it makes its first or last global appearance in the current block (and 4 if both) by the previous charging. In this case, note that we can reserve one of the two edge-copies, that is, a factor of at most 2^1/4/√(d).
* If the circle vertex is making its first/last appearance, we either have both edge-copies receiving an F/R labeling, or a mix of H,R labelings. In the first case, we have two assigned factors of 2^1/4/√(d): assign one to the circle vertex's first/last appearance, and the other to the block-reserve. In the second case, with a mix of H,R edges, the underlying random variable appears at least twice as an h_1 edge and at least once as an h_2 edge; by moving the Õ(1) polylog factors to the second local appearance of the underlying random variable, we again have two edge-copies of value 2^1/4/√(d), and the assignment follows as above.
* If the circle vertex is making a middle appearance, we may assign related factors and vertex-factor cost to one edge-copy such that one edge-copy is of value O(1/√(d)) while the other is still at most 2^1/4/√(d); assign the edge-copy with O(1/√(d)) to the middle appearance factor and assign the edge-copy with factor 2^1/4/√(d) for block-reserve.
* If the final attachment h_2 edge is a reverse-charging step that corresponds to an edge whose F copy
leads to the final square vertex, we assign both h_2 edges to the final square vertex as the square vertex may be making its last global appearance. Note that if the square vertex makes its first global appearance in the current block, it has already been assigned 2 edges.
This leaves the last circle vertex potentially uncharged as it may be making its last appearance as well. Furthermore, we need to find one more "reserve" edge-copy for the block which would then be used to charge U∪ V.
* Finding critical edge: we now give a procedure to identify the critical edge, that is, an h_2 edge assigned to a circle vertex in the above top-down charging process. The process maintains a current circle vertex and its current gadget along the top-down path. In particular, the gadget considered is an M_β gadget throughout. The process starts from vertex s, that is the circle vertex involved in the final h_2 attachment, and the gadget considered is the one in which s opens up the F step of the current edge e^*, the h_2 edge in the final gadget. Note that this is an M_β gadget. We now case on the step-labeling of the top half of the M_β gadget in inspection, in particular, the edges leading to the circle vertex s in that particular gadget,
* If this is an F, non-reverse-charging R, or H step assigned to s, then this is the critical edge we aim to find, as we have two edge-copies assigned to a circle vertex, and the process terminates.
* If this is a reverse-charging R edge: update the circle vertex s to be the top circle vertex of the current M_β gadget, update the edge e^* to be the h_2 edge in the top half of the current gadget, and repeat the above process. Note that the updated gadget must appear, and additionally, it lies above the current gadget in the dangling path.
We now observe that this process ultimately terminates, since each step moves up towards the top of the dangling path. Once the critical edge is found, note that it contains two edge-copies: assign one to the vertex's factor and the other to the block-reserve. In the case where H edges are involved, locally assign the edge-values as 1/√(d) and Õ(1/√(d)), and assign the copy with value 1/√(d) to the block-reserve and the other to the vertex's factor.
Reserving surprise/high-mul step from vertex-factor assignment outside U∪ V
We first observe that in the above assignment, the edges assigned to a vertex's first appearance cannot be surprise-visit or high-mul steps. That said, it suffices for us to consider the assignment for a vertex's last appearance factor (where the first appearance is not in the same block). Consider such a vertex's first local appearance: the corresponding edges are H/S edges if they are not R under the global edge-labeling. If so, since the vertex is making its last appearance in the current block, each such edge corresponds to an R-step in this block. Moreover, observe that any such R-step is intended for "reverse-charging" if the vertex in inspection is making both its first and last global appearance in the block, and thus it is not assigned to the vertex factor of the source vertex. That said, it suffices for us to swap the H/S step with the R step so that no surprise/high-mul step is assigned to the vertex's (polynomial) factor.
At this point, for a block with U≠ V, we have assigned
* 1 or 2 edges for each vertex's first/last appearance in the current block outside U ∪ V, depending on the vertex's type;
* 1 edge-copy of value O(1/√(d)) has been identified as the block-reserve.
* it is also straightforward to observe that any edge assigned for vertex's first/last appearance cannot be a surprise visit nor high-mul visit.
Charging circle vertices in U∪ V
There is always a U-V path such that each edge along the path is of odd multiplicity. Call this path P_safe.
It suffices for us to restrict our attention to paths of h_1 edges only. By our gadget property, each vertex copy outside U ∪ V is incident to an even number of h_1 edges (counted with multiplicity), while the vertex copies in U ∪ V are the only ones of odd degree. Suppose the path is broken, and consider the (maximal) component C_U of odd-multiplicity edges connected to U (but not V). We first observe that the total edge-multiplicity of the cut E(C_U, C_U^c) must be even, as otherwise some edge across the cut has odd multiplicity, and that edge should then have been included inside C_U as opposed to lying across the cut. That said,
∑_{v∈ C_U} deg(v) = 2 · mul(E(C_U)) + mul(E(C_U, C_U^c))
Observe that the LHS is odd, as any vertex copy in C_U except U has even degree and we have exactly 1 odd-degree vertex copy. On the other hand, each unit of edge-multiplicity inside E(C_U) contributes 2 to the total degree of vertices in C_U, while each unit of multiplicity across the cut contributes 1; since mul(E(C_U, C_U^c)) is even by the maximality argument above, the RHS is even, and we have a contradiction.
There is at least one vertex v^* making middle-block appearance, i.e. it is a vertex that appears in previous blocks, and is appearing again in future blocks.
We recall from our charging so far that we have one edge-copy reserved, while the two circle vertices in U∪ V remain uncharged. We now show that this single edge-copy is sufficient, i.e., at most one vertex factor is picked up among these two vertices.
Consider the path P_safe, and observe that it passes through both vertices in U∪ V. Take v^* to be the first vertex on the path that pushes out an F/H/S edge. In the case v^* = U, note that it suffices for us to assign the reserve factor for the vertex in V; similarly, assign the reserve factor for U for v^*= V. In the case v^*∉ U∪ V, note that the previous scheme assigns at least one edge for v^* if it is a circle, and 2 if it is a square when v^* first appears in the current block, and this is in fact not needed since v^* is making a middle appearance. That said, we have at least 2 factors, 1 from the reserve factor, and 1 from the edge-assignment for v^* in the previous scheme, we now assign these two factors for U ∪ V.
v^* is picked such that it is either reached by an R edge, or it is the boundary vertex U. In other words, no factor is needed for specifying v^* of the block.
This completes the proof of our proposition when U ≠ V.
Analysis for U=V
We now consider the case when U=V.
For each R term, call the first square vertex on top of the dangling path the 𝖲𝖰_1 vertex.
This vertex is an exception among the square vertices, as its first appearance might be the destination of two edge-copies that correspond to the same underlying random variable. This happens when an edge makes its first and last appearance at the same time in a U=V term.
For U=V, the charging for vertices outside U∪ V is identical except for the first square vertex 𝖲𝖰_1: the two edges assigned to it now correspond to the same underlying edge-copy, and may now receive an F, R-labeling. That said, both edges are now assigned to the first square vertex. If the square vertex is making either its first or last appearance, but not both, in the current block, the current assignment is sufficient. However, there are no reverse-charging edge-copies protecting 𝖲𝖰_1, and that warrants further analysis.
We follow the previous strategy in identifying the block-reserve: even though for U=V the vertex U=V is by definition not contributing any vertex-appearance factor, we now intend the block-reserve factor to pay for either the additional factor due to the dormant step stemming from U=V, or the last appearance factor of 𝖲𝖰_1, but not both.
* 𝖲𝖰_1 makes both first and last appearance in the current block: in this case, the edges assigned to 𝖲𝖰_1 in the beginning of the top-down charging process are assigned to the first appearance of 𝖲𝖰_1. That said, it remains for us to identify two extra edge-copies for the last appearance factor.
Charging for terms without final h_2 attachment: these terms come with a normalization of |(u+s)/(s^2-ru)| = O(1/d), and these are two edge-copies that we assign to the last appearance of 𝖲𝖰_1.
Charging for terms with final h_2 attachment:
we first note that any edge incident to 𝖲𝖰_1 must be closed, and we further case on whether there is any surprise visit arriving at 𝖲𝖰_1 throughout the dangling path.
* Suppose that there is no surprise visit arriving at 𝖲𝖰_1 throughout the dangling path.
Consider the final departure from 𝖲𝖰_1, and note that it pushes out at least two edge-copies.
* If this is an M_β gadget, let r be the circle vertex it connects to via an h_2 edge. Suppose the random variable corresponding to (𝖲𝖰_1,r) first appears as an h_2 edge; in this case, the circle vertex has been assigned two edge-copies the first time it appears, as it is reached using an h_2 edge. That said, we can reserve the other two edge-copies from the final attachment for the missing last appearance factor of 𝖲𝖰_1;
* Suppose instead that this is an M_β gadget, yet the random variable (𝖲𝖰_1, r) first appears as an h_1 edge in the current block. Note that there is at least one more edge-copy of this h_1 edge that is currently assigned an H-label; observe that we can relabel it as R, and assign both F/R copies to the vertex factor of the circle vertex. We now relabel the final h_2 step as H^*, note that these two edge-copies get assigned a value of at most 2/d in total, and assign them both to the missing last appearance factor of 𝖲𝖰_1.
* To see that H^* does not require extra D_V· q factor, we observe that this is a circle vertex traversed before in the block, and for each block, H^* is unique. That said, it suffices for us to use a special label in [2] when H^* first appears in the block to identify the edge. Moreover, note that swapping H^* with the R step does not add to factor of the destination circle vertex as this edge no longer appears;
* If this is an M_α gadget, let r_1, r_2 be the circle vertices the square vertex is connected to. Note that for each r_i, there must be at least two more edge-copies of the random variable (𝖲𝖰_1, r_i) that receive an H label, i.e., we have a factor of Õ(1/d). That said, we may reassign the factors and extract one full factor of 1/√(d) (with the remainder being Õ(1/√(d))); since we have two of them, they combine to a factor of 1/d, which we assign to the missing last appearance factor of 𝖲𝖰_1.
* There is a surprise visit arriving at 𝖲𝖰_1. We first observe that this must be either an h_2 edge or a pair of distinct h_1 edges. The surprise visit must already be closed, and its underlying R copies are intended for reverse-charging the last appearance of 𝖲𝖰_1 in the top-down charging scheme. In this case, the factor for the last appearance has already been assigned edges.
* Finding "block-reserve" for U=V:
If 𝖲𝖰_1 is not making both its first and last appearance in the current block, we first note that the departure from U∪ V is special, as this is the only vertex that can appear on the boundary again without being reached by any edge, and this is not a dormant gadget M_D. That said, the most recent departure may contribute a factor to the vertex in U∪ V, and we now identify an edge-copy for this factor.
* Charging for terms without final h_2 attachment: for A^-11_m term where the final h_2 gadget is missing, we pick up a normalization constant of O(1/d) and we designate this as the block-reserve.
* Charging for terms with final h_2 attachment: for terms with the final h_2 gadget, the analysis of finding the block-reserve from the case U≠ V applies identically for finding an h_2 edge that gets assigned to a circle vertex in the top-down traversal process. In particular, we may designate one edge-copy among the two copies of the identified h_2 edge as a block-reserve to charge the corresponding Õ(1) factors from Pur.
Note that the edge identified from the above process is in fact stronger, as it carries a factor of O(1/√(d)) as opposed to Õ(1/√(d)). That said, since for U≠ V terms the block-reserve is only assigned to Õ(1) factors from Pur, either is sufficient.
§.§ Illustration via diagrams
[Figure: an illustration of the charging scheme on two side-by-side blocks of the dangling path, with boundary circle vertices u and v, square vertices a, d (and their primed copies), circle vertices b, c (and primed copies), and a bottom circle vertex t; the edge colors encode the step-labels as described in the caption below.]
This figure illustrates the charging scheme. The light blue edges are F edges and we assign their factor to their destination. The dark blue edges are R edges whose first appearance is in a different block. We also assign these edges to their destination. The red edges are reverse-charging R edges whose first appearance is in the current block. We assign these edges to the destination of the corresponding F edge (which is generally the source for this edge).
Note that two edge factors are missing from v. We obtain these factors from the two green edges pointing towards t.
[Figure: further illustrations of the edge-charging scheme for blocks with U=V.]
[Figure: an additional illustration of the edge-charging scheme on a single block.]
§.§ Extension from R_A to sandwich shapes with jump
§.§.§ Analysis for R_u
The block-value function for (u/(ru-s^2))· R_u is bounded by
|u/(ru-s^2)| · B(R_u) ≤ 1/10 .
We first highlight the difference from R_A,
* We pick up an extra normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |u| ≤ 1;
* The J_m/d-jump comes with a factor of 1/d;
* The normalization factors combine to O(1/m).
Applying the "top-down" edge-copy charging scheme from earlier, we observe that proposition <ref> applies as stated except that there may not be any edge leading to the jump-square vertex, in which case, our prior charging demands 2 edge-copies to be assigned to the jump-square vertex as it may be making first/last appearance, and 4 if the vertex is making both first and last.
* Suppose the vertex makes both its first and last appearance in the current block, the entire factor of O(1/m) is assigned to the vertex;
* Suppose the vertex makes its first appearance in the current block, it suffices for us to assign a factor of 1/√(m) from the normalization to the vertex;
* Suppose the vertex makes its middle/last appearance in the current block, this is a vertex not reachable via R edge, hence a factor of q· D_V is needed; furthermore, it gets assigned a factor of (D_V· q) (in the case of contribution from H arrivals), or √(m) from the back-pay of the vertex factor (in the case of last appearance). That said, assigning the O(1/m) factor to the factor of q· D_V ·√(m) is sufficient.
We then complete the analysis for R_u by observing that the assignment of the last attachment edge, as well as that of the "block reserve", follows from the analysis of R_A.
§.§.§ Analysis for R_s
The block-value function for (s/(ru-s^2))· R_s is bounded by
|s/(ru-s^2)| · B(R_s) ≤ 1/10 .
We note the difference from R_A,
* We pick up a normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |s| = O(√(d));
* The (1_mη^T+η· 1_m^T)/d-jump gives a factor of 1/d;
* The normalization factors combine to O(√(d)/m).
We then complete the proof by an edge-assignment scheme depending on whether it is a 1_m·η^T-jump or η· 1_m^T-jump.
Charging for η· 1_m^T-jump
In this case, the dangling path connected to U ∪ V ends with an h_2 attachment, and the charging restricted to edges and vertices in that component is identical to R_A. In particular, the "block reserve" edge is already identified from the component connected to U∪ V. Thus, we focus on the floating component, and notice that we may traverse the floating component by starting from the jump-square vertex, that is, the destination of the η· 1_m^T jump, and then following a dangling A^-1 path from there.
Applying the "top-down" charging scheme on the floating component yields the following,
* Each vertex besides the jump-square vertex has its first, last appearance charged;
* There is an edge-copy from the floating component for "block reserve" that is of value 1/√(d);
* Similar to the R_u jump, the jump-square vertex may have its first and last appearance uncharged, and it suffices for us to assign a factor of 1/m to the first square vertex;
* Combining the normalization for R_s with the "block reserve" edge from the floating component gives us
O((√(d)/m)·(1/√(d))) = O(1/m). Assigning this to the jump-square vertex completes the charging.
Charging for 1_m·η^T-jump
In this case, the charging restricted to edges and vertices in that component is identical to R_A, except that the "block-reserve" edge is not yet identified. That said, we need to assign corresponding factors to each vertex in the floating component, as well as identify an edge for the "block-reserve".
For the floating component preceded by the 1_m·η^T-jump, traverse it from the jump-circle vertex, that is, the circle destination of 1_m·η^T. Applying the "top-down" charging to the floating component, except that we no longer stop before the final h_2 attachment edge (i.e., we apply the assignment rule to every edge in the floating component), charges each vertex's factor except the jump-circle vertex's. We then observe the following,
* The normalization factor of the block is O(√(d)/m);
* The jump-circle vertex, if making its first but not last appearance in this block, contributes a vertex factor of √(d); if making its middle or last, but not first, appearance in this block, it contributes a factor of O((D_V· q) ·√(d)); if making both its first and last appearance, it contributes a factor of d. That said, assigning a factor of 1/d to the jump-circle vertex is sufficient.
* That said, we may split the O(√(d)/m) factor as O((1/√(d))·(1/d)), assign the factor of O(1/√(d)) to the "block-reserve", and assigning the 1/d to the jump-circle vertex completes our charging.
§.§.§ Analysis for R_r
The block-value function for (r/(ru-s^2))· R_r is bounded by
|r/(ru-s^2)| · B(R_r) ≤ 1/10 .
Note the difference from R_A is given by the following,
* We pick up an extra normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |r| ≤m/d;
* The η·η^T/d-gadget gives a factor of 1/d;
* We have a total factor of 1/d by combining the above factors.
The charging for the component connected to U∪ V is identical to the case of the η· 1_m^T-jump; in particular, the "block reserve" is already identified from the component connected to U∪ V. That said, we focus on the floating component and apply the "top-down" charging scheme; notice that each vertex's factor, except for the jump-circle vertex's first/last appearance, is assigned to some edge.
* The jump-circle vertex may be making first but not last, or last but not first appearance in the current block, in which case, it contributes a factor of at most √(d)· q· D_V;
* It may be making both first and last appearances in the current block, in which case, it contributes a factor of d;
* Assigning the normalization factor of 1/d to the jump-circle vertex is sufficient.
§.§ Pur bound for square vertices in R
The prior Pur bound does not apply well for square vertices in R; in particular, it should be pointed out that even before non-trivial intersections within each block along the dangling path, the argument based on 1-in-1-out (or its slightly generalized version of 2-in-2-out) falls apart in the analysis for R.
In particular, one may consider a walk on R_s with the square vertex along the U-V path fixed throughout the walk. It is easy to verify that the fixed square vertex may have a growing number of unclosed F edges without any surprise visit/high-mul step. That said, it is not sufficient to use the slack from such factors to offset the potential confusion due to R edges.
In this section, we give a new argument to handle the Pur factor in the vanilla setting of R when there is no non-trivial intersection along the dangling gadget-path, and then extend it to the general cases of R where each block is not necessarily injective due to intersections across gadgets.
Gap from square middle appearance. For starters, we observe that a vertex may only push out unforced returns if it is making a middle appearance, given that it maintains a list of incident edges together with the additional information of which of them are closed. When it is making its last appearance, any currently unclosed edge needs to be closed and therefore shall be pushed out, giving us a fixed set of edges being pushed out (where we momentarily ignore the question of whether one needs to distinguish among the edges in this edge-set). This immediately renders the following bound on Pur,
For any vertex v, let 𝖬𝗂𝖽𝖠𝗉𝗉(v) be the number of middle appearances of v throughout the walk; we have
Pur(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉(v)
provided each square vertex pushes out at most 3 edges in each block throughout the walk, and each block is vertex-injective.
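For intuition (this only unpacks the definitions above), suppose a square vertex v appears in k ≥ 2 vertex-injective blocks, once per block. Then exactly k-2 of these appearances are middle appearances, so the bound reads
Pur(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉(v) = 3(k-2) .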
With the above bound, it would not be meaningful if we could not obtain a slack from 𝖬𝗂𝖽𝖠𝗉𝗉. Fortunately, this is indeed the case in our setting, and we first observe this in the vanilla setting where there is no gadget intersection within each block (in particular, this already applies immediately if the edges are injective within each block):
* We have assigned each square vertex at least 1 edge when it is making a middle appearance: to see this, the top-down charging scheme assigns each square vertex 2 edges when the square-vertex appears for the first time in the block; in the case of charging U∪ V, 1 edge may be re-routed from a square vertex making a global middle appearance. That said, at least 1 edge is assigned to each square vertex when it makes a middle appearance.
* Note that for a fixed vertex v in a given block, its local first appearance at the given block may be corresponding to a global middle appearance, and such mismatch is the source of our slack;
* Observe that each middle-appearance corresponds to a mismatch described above (provided each block is injective), and each such mismatch of local-global first appearance assigns a vertex making global middle appearance one edge-copy, that is a factor of O(1/√(d));
* However, since each vertex's middle appearance does not get assigned any vertex factor in our scheme, we may use the 1/√(d) gap to offset the 3 Pur factors corresponding to the particular global middle appearance, that is
O(1/√(d)) · (q· D_V)^3 = o_d(1) .
Extension to gadget intersections. To handle dangling paths with potentially intersecting gadgets, it should be pointed out that the bound goes through as stated, even though we do not necessarily have a gap from middle appearances. In particular, it is now possible that in a given block, a square vertex appears multiple times and only has 2 edges assigned to it for its first local appearance, as any of its subsequent appearances in the given block follow from closing some F edge that was opened up earlier in the walk, and is thus assigned to the destination as opposed to the given square vertex.
Towards generalizing the prior argument, we consider a specific subclass of global middle appearance of a vertex through the block walk,
For a given walk and a given block-step 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, we say a labeled vertex v∈ [m] makes a middle appearance in 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i if
* v makes appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i;
* v makes appearance at some 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_j for j<i, and at some 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_j' for j'>i.
With some abuse of notation, we continue to let 𝖬𝗂𝖽𝖠𝗉𝗉(v) denote the number of middle appearances of v. Note that this is the relevant notion when we are working with blocks that are not vertex-injective, while it is equivalent to the previous definition for vertex-injective blocks.
In particular, we emphasize that, following the above definition, if a labeled vertex first appears at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i and appears multiple times in various gadgets in the dangling path of 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, it is not considered as making a middle appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i; a concrete instance is given below.
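As a concrete (hypothetical) instance of the definition: suppose a labeled square vertex v appears in 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_2, 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_5, and 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_9, possibly several times within each of these blocks. Then
𝖬𝗂𝖽𝖠𝗉𝗉(v) = 1 ,
counting only the appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_5; the repeated local appearances within 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_2 or 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_9 do not count as middle appearances.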
For any square vertex v that does not push out dormant gadgets, at any time-t,
Pur_t(v) ≤ P_t(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v))
where we define
P_t(v) ≜ #(R steps closed from v by time t) + #(unclosed edges incident to v at time t) - 4
In other words, by assuming 4 possible return legs to be fixed each time a vertex is on the boundary, the number of unforced returns from a vertex by time t is at most 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v)).
The first inequality is definitional as we assume each vertex may have 4 edges being fixed, which incurs a cost of [4] for each vertex each time it pushes out an R step. Analogous to previous bounds, the base case is immediate when vertex first appears in the walk. In particular, we note that the above bound can be strengthened for the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i in which vertex v makes (globally) its first appearance.
For any square vertex v, for the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, at any time-t within the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i ,
P_t(v) ≤ 3(s_t(v) + h_t(v)) - 2
Notice this is immediate when the vertex v first appears, as it is incident to 2 unclosed edges and has closed no R step, so P_t(v) = -2 (see the display below). The invariant is maintained since any subsequent departure opens up at most 2 F edges, and any subsequent arrival either closes both of them, or the arrival is along a surprise/high-mul visit and gives a net gain of at most 3 in the number of unclosed F edges. This proves our claim.
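Unpacking the definition of P_t(v) at the first appearance of v (this only restates the base case above): no R step has been closed from v yet, and v is incident to exactly 2 unclosed edges, so
P_t(v) = 0 + 2 - 4 = -2 = 3(s_t(v) + h_t(v)) - 2 ,
since s_t(v) = h_t(v) = 0 at this point.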
It remains for us to consider the appearances of v in subsequent blocks; in particular, we start with the local first appearance in such a block. Let t be the time at which v makes its first local appearance in the current block. Notice that at time t, the vertex may be reached via a single edge (as opposed to 2 edges via the U-V path), and therefore it may push out at most 3 edges.
Appearance at the second block
Suppose this is the second block in which v appears: applying the claim to the first block, and observing that the most recent departure opens up at most 2 F edges while the current arrival closes 1 (unless it is an H or S arrival, in which case a net gain of 3 suffices), we have
P_t(v) ≤ 3(s_t(v)+h_t(v)) -2 + 2 -1 =3(s_t(v)+h_t(v)) - 1
where the +2 corresponds to the 2 F edges opened up at the most recent departure, and the -2 term corresponds to the term in the hypothesis on first-block, and the -1 comes from the current arrival closing at least one R edge.
For any subsequent appearance of v in the current block, if any, the following is immediate,
P_t'(v) - P_t(v) ≤ 3 (s_t'(v) - s_t(v)) + 3 (h_t'(v) - h_t(v)) + 1
as we observe that
* The departure from the first vertex may open up at most 3 edges instead of 2;
* Any subsequent departure and arrival closes 2 edges, and opens up at most 2 edges, hence the previous argument applies, giving a net-gain of +1 due to the extra opening in the first departure.
That said, for any appearance of v in the second block at time t', we have
P_t'(v) = P_t(v) + (P_t'(v) - P_t(v)) ≤ 3(s_t'(v)+h_t'(v))
Appearance at future blocks. For any block, let t_0 be the time of the local first appearance and t_1 the time of the local final appearance; applying the above argument gives
P_t_1(v) - P_t_0(v) ≤ 3 (s_t_1(v) - s_t_0(v)) + 3 (h_t_1(v) - h_t_0(v)) + 1 .
That said, it suffices for us to bound P_t_0(v). This is bounded by
P_t(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v))
Consider the base case when v appears at the third block, the most recent departure opens up at most 2 new F edges. To offset this, we use the gain in 𝖬𝗂𝖽𝖠𝗉𝗉_t(v), as the appearance of v in the second block is now counted as a middle appearance once v appears in the third block, which gives a +3 on the RHS. The bound extends to any subsequent block immediately.
This completes our proof of the Pur bound.
Extension to dormant gadgets
To capture the change due to dormant gadgets, we note that for each square vertex at each block, we can assume one dormant edge it pushes out to be fixed, while assigning a factor for any other dormant gadget it pushes out at that block. Notice this is a cost of at most 2 for each square vertex throughout the walk, as we may need to assume 1 fixed dormant edge for its first appearance and another for its last appearance. For any middle appearance, it suffices for us to assign a factor for every dormant edge it pushes out, as opposed to all-but-one in the case of the first/last appearance. This prompts us to define the following counter,
For each square vertex v, at any block 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, let D_i be the number of dormant-gadgets it pushes out at this block,
define D_i(v) ≜ D_i - 1[v makes its first/last appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i], and additionally, the counter function is defined for the whole walk by taking
D(v) = ∑_i:v appears in 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i D_i(v)
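For a concrete (hypothetical) instance of this counter: suppose a square vertex v appears in exactly three blocks, pushing out D_1, D_2, D_3 dormant gadgets respectively, with its first appearance in the first of these blocks and its last in the third. Then
D(v) = (D_1 - 1) + D_2 + (D_3 - 1) ,
i.e., one dormant edge is treated as fixed at the first appearance, another at the last appearance, and every dormant gadget pushed out at the middle appearance is counted.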
By using an extra additive constant of 2, each return using a dormant leg is either forced or accounted for in D(v).
Each dormant-excess gadget corresponds to a circle vertex reached using an h_2 edge, and gets assigned a factor of at most Õ(1/√(d)) in the combinatorial charging argument.
Note that in the combinatorial charging argument, we have assigned both h_2 edges to the circle vertex's vertex factor, unless the square vertex is potentially making its first and last appearance in the same block, which is not ruled out by the definition of dormant excess. That said, the circle vertex gets assigned at most a factor of √(d) in our scheme, while both edges assigned to it have value Õ(1/√(d)) each, and combining the above yields the desired bound.
By assuming at most 6 return legs to be forced, the number of unforced returns from v throughout the walk is at most
Pur(v) ≤ 3·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v)) + D(v) .
This allows us to effectively ignore the Pur factor in the block-value analysis, as each Pur factor can be distributed among surprise visits, high-mul visits, and middle appearances such that each is assigned at most 3 factors. Moreover, since each such visit or appearance comes with an O(1/√(d)) factor, combining it with the assigned factors contributes an o_d(1) term provided q^3/√(d) = o_d(1), which is sufficient for us as we set q = d^ε for a small enough constant ε.
§.§ Wrapping up
Given the edge-assignment scheme for vertex appearances, we now show why this immediately gives the desired bound on B(R): we have the following factors,
* Each circle vertex gives a factor of √(d) for their first/last appearance; the assignment gives each vertex one global F/R edge-copy each, that is a factor of √(2)/√(d), giving a bound of
√(d)·√(2)/√(d) = √(2) ;
* Each square vertex gives a factor of √(m) for their first/last appearance, and the assignment above gives global F/R each two edge-copies each, that is a factor of
√(m)·2/d≤2√( m)/d ;
* Each square vertex that makes a global middle appearance gives O(1) factors of Pur, while each such vertex appearance is assigned one edge, that is a factor of
(2q· D_V)^O(1)· O( √(2) q^2/√(d)) ≪ 1 .
For each vertex's appearance and edges assigned to it, we have a factor of at most
(1+o_d(1)) · 6 ·√(d)·√(2)/√(d) ≤ (1+o_d(1)) · 6√(2)
for a circle vertex, and
(1+o_d(1)) · 6 ·√(m)·2/d ≤ (1+o_d(1)) 12√(m)/d
for a square vertex where the factor 6 for each vertex comes from our bound that each vertex arrived using a forced R edge can be specified at a cost of [6].
By our edge-copy charging scheme and <Ref>, we observe the following,
* Treat the U-V path as a gadget, and we have 2 choices depending on whether U=V for the upcoming block;
* For each gadget-step, we sum over the edge-labelings;
* For each gadget along the dangling path, we pick up at most the following factor,
* M_α: it gives 2 circle vertices and 1 square vertex, which is a factor of at most
B_α ≜ (1+o_d(1)) · 3^4·(4/d^2)·√(md^2)· 6^3 ;
* M_β: it gives 1 circle vertex and 1 square vertex, which is a factor of at most
B_β ≜ (1+o_d(1)) · 3^4 ·(2/d^2)·√(md)· 6^2 ;
* M_D: it gives 1 circle vertex, which is a factor of at most (1+o_d(1)) · 3^2
if it is the first M_D gadget of the block, with the factor 3 counting the step-label of the edge-copies in the gadget, or a factor of at most B_D ≜ Õ(1/√(d)) for any subsequent M_D gadgets.
* For the U- V path, the square vertex SQ_1 gives a factor of at most
(1+o_d(1)) · 6 ·√(m/d^2)
and the circle vertex combined gives a factor of at most
(1+o_d(1)) · (√(2))^2 · 6
where the factor of 6 comes from at most one circle vertex being reached using R edge.
Therefore, combining the above bounds, we have
B(R) ≤ (1+o_d(1)) · 6 ·(√(m/d^2)· 6 ·(√(2))^2) · 3^2 ·∏_{≤ d gadgets} (B_α +B_β +B_D ) < 1/2
provided
B_α, B_β < 1/2
and
6 (√(m/d^2)· 6 ·√(2)^2) · 3^2 < 1/2
It can be verified that it suffices for us to take m < d^2/7000000, though we do not emphasize the particular constant, as we believe a more careful argument tracking the above bounds can render an improved constant without much work.
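As a quick sanity check of the last displayed condition (we only unfold the arithmetic here; the constant 7000000 above is chosen with room to spare and also needs to accommodate B_α, B_β < 1/2):
6 ·(√(m/d^2)· 6 ·(√(2))^2) · 3^2 = 648·√(m/d^2) < 1/2 ⟺ m < d^2/1296^2 = d^2/1679616 .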
With probability at least 1 - 2^{-d^ε} for some constant ε>0,
_ <1/2 .
This follows by setting q = d^ε for some constant ε>0 in <ref>, combined with our bound on the block-value function for a small enough constant.
§.§ Analysis for non-intersecting terms in R_A
Overview. We start with an overview of the norm-bound analysis that ignores constant factors. Let's start by focusing on R_A, i.e., the first term in the Woodbury expansion. For intuition, let's first ignore the terms coming from η and M^-1, i.e., we are simply looking at ∑_i∈ [m] v_iv_i^T, and note that this is a matrix with norm Θ(√(md)/d). To see this, observe that each edge gives us a 1/√(d) from normalization, and we have √(md) choices for each step with two edges, hence the bound Θ(√(md)/d).
Up front, this is too large to charge; however, it is useful to notice that we are "only" a 1/√(d) factor away from the desired bound, as we would then have
√(md)/d^1.5 < 1
for m=Θ(d^2) (see the display below). That said, we need to find a gap of 1/√(d) from the path dangling off the main branch, i.e., the component coming from M^-1η. On the other hand, notice that a 1/√(d) gap is indeed what we would expect from the typical entry of M^-1η, since M^-1 has a constant norm bound and η_i = ‖v_i‖_2^2 -1 has a typical magnitude of O(1/√(d)).
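Explicitly, combining the Θ(√(md)/d) bound with an extra 1/√(d) gap gives, for m ≤ d^2/c_R as in the lemma below,
(√(md)/d)·(1/√(d)) = √(md)/d^1.5 = √(m/d^2) ≤ 1/√(c_R) < 1 .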
We now proceed to a formal argument via graph matrices.
There is an absolute constant c_R such that for m≤d^2/c_R,
‖τ(R_A)‖ ≤ 1/10
We give a piece-wise analysis, and our high-level idea follows the prior observation each gadget (modulo the starting vertex of the dangling path) in M^(-1) gives us a contribution at most 1
(and in fact bounded away from 1 as T<1), and the optimal vertex separator is either the vertex in U or V since picking the middle vertex otherwise gives a loss in value of Θ(1/d).
Following our machinery in the trace-method calculation, we note that it suffices for us to bound the contribution of each block locally, i.e., to find a function B(τ(R_A)) that upper bounds the ”local” contribution of each block in τ(R_A).
To bound B(τ(R_A)), we split the contribution into three-components based on the shape of τ(R_A). At this point, we remind the reader that we adopt the vertex-factor distribution scheme such that
* A vertex gets half of its weight when it appears the first and the last time;
* Each middle appearance does not get assigned any weight.
U-V path: off-diagonal We first identify the contribution from vertices on the U-V path in τ(R_A) and holds for any τ∈τ(R_A).
* There is a U-V path that needs to be blocked, i.e., at least one of the three vertices (the two circle vertices on the boundary and the square vertex in the middle) needs to be making a middle appearance, and each vertex outside the separator gives the square root of its weight as it may be appearing for the first/last time;
* Each edge gives value 1/√(d) (in the dominant term), and we have 2 edges in total;
* That said, if this is an off-diagonal term, we pick up a factor of (2+o_d(1)) ·1/d√(md) from vertices and edges on the U-V path.
U-V path: diagonal
* We do not pick up any vertex factor from the circle vertices in U∩ V as it is by definition making a middle appearance;
* The square vertex is connected to U∩ V by a double edge. We first note that even though it is connected to U∩ V by a double edge, it is still incident to non-double edges from the A^-1 path attached to it (or the H_2 attachment in the case when we pick up an I from the A^-1). That said, it contributes a factor of at most √(m), and an additional factor of √(q), as the second appearance of this component may not be reached using any edge, i.e., it is floating and we need a label in [q] to identify where the vertex appeared before;
* A double edge (two copies of the same underlying H_1 edge) gives a value of 1/d in total;
* Therefore, for a diagonal term, we pick up a factor of (1/d)·√(m)·√(q) = o_d(√(md)/d) from the U-V path component, as we consider q = d^ε for a small constant ε>0.
Combining the above gives us a factor of at most
(2+o_d(1)) ·1/d√(md)
from the U-V path component.
Dangling A^-1 path We then have a dangling A^-1 path. In particular, it should be pointed out that the first square vertex (a.k.a. the dangling attachment vertex) already has its factor accounted in the previous component as a vertex on the U-V path.
* If the A^-1 path picks up Id, we have a factor of 1;
* Otherwise, each gadget contributes a factor of at most (1/d)·B(Id) + B(M_α) + B(M_β) + B(M_D), where we recall that M_τ = -( (1/d)·I + M_α +M_β + M_D );
* In total, we pick up a factor of at most
1 + ∑_{t=1}^∞ (1/d + (2√(md^2) + √(m^2) + 4√(md))/d^2 + o(1))^t .
H_2 attachment The final attachment component is accounted as the following, and recall that we aim for an O(1/√(d))-gap here to offset the blow up of √(md)/d from the U-V path component.
* the final circle vertex s∈ [d] gives us a vertex-cost of √(d) each block;
* the H_2 edge has edge-value √(2)/d as it has variance 2/d^2;
* We pick up a factor in total of
√(d)·√(2)/d = √(2)/√(d) .
Wrapping up for τ(R_A)
Following the above component-wise analysis, it suffices for us to set a block-function for τ(R_A) as
B(τ(R_A)) ≤ (2+o_d(1)) ·√(md)/d·(1 + ∑_{t=1}^∞ (1/d + (2√(md^2) + √(m^2) + 4√(md))/d^2 + o(1))^t) ·√(2)/√(d) .
Our bound then follows as we note that for some large enough constant c_M, the above is bounded by 1/10.
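To unfold this last step (a rough check, assuming the geometric series above sums to 1+o_d(1) once m/d^2 is a sufficiently small constant):
B(τ(R_A)) ≤ (2+o_d(1))·(√(md)/d)·(1+o_d(1))·(√(2)/√(d)) = (2√(2)+o_d(1))·√(m/d^2) ≤ 1/10
once m ≤ d^2/c_M for, say, c_M ≥ 1000.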
§.§ Analysis for non-intersection terms in sandwich R terms
We now extend the above strategy to the other non-intersecting terms in R, where the goal is the same: we would like to find an additional 1/√(d)-gap from the dangling path. Noticing that the charging is identical for the first A^-1-path, it suffices to focus on charging the top bun that ends at some vertex with label in [m], and in particular the jump-square vertex (that is, the square vertex reached by the jump), while the charging for any other vertex is identical to its counterpart in the A^-1 term.
With probability 1-o_d(1),
* |u/(ru-s^2)| ·‖τ(R_u)‖ ≤ O(√(log d/d) );
* |s/(ru-s^2)| ·‖τ(R_s)‖ ≤ O(log d/d );
* |r/(ru-s^2)| ·‖τ(R_r)‖ ≤ O(√(log d/d) ).
Recall from <Ref> that the following bounds hold w.h.p.,
* |1/(s^2-ru)| = O(d/m) ;
* |u| ≤ 1 ;
* |s| = O(√(d)) ;
* |r| = Θ(m/d).
Moreover, observe that each step across the jump-gadget J_m/d, 1_m η^T + η 1_m^T/d, η·η^T/d comes with an 1/d factor.
Next, let's account for the factor of the jump-square vertex and the vertices in the gadget before the next A^-1 path starts,
* R_u-jump: The jump-square vertex contributes a value of √(m log d), where the log d term comes from its second appearance as a (potential) floating component, and we can assign the normalizing constants as well as the 1/d normalization from the all-1 matrix to offset this vertex factor; this gives us
|u/(ru-s^2)|·(1/d)·√(m log^O(1) d) = O(1) ·(1/d^2)·√(m log^O(1) d) = Õ(1/d) ≪ 1
* R_s-jump:
* The jump-square vertex is again potentially floating, and contributes a factor of √(mlog d);
* The circle vertex contributes a factor of √(d) while its incident H_2 edge gives √(2/d^2) edge-value;
* Combining with the normalization constant gives us
|s/(s^2-ru)|·(1/d)·√(2/d^2)·√(m log^O(1) d)·√(d) = O(d/m)· O(√(d)) ·(√(2)/d^2)·√(md log^O(1) d) ≤ O(1/√(d)) .
* R_r-jump
* The jump gadget is an H_2 edge that leads to a circle from the end of the first A^-1 path, and then jump to another square vertex with an H_2 attachment, and there is an additional A^-1 path attached to the jump-square destination;
* Combining with the normalization constant, we have
|r/(s^2-ru)|·(1/d)·√(d)·(1/d^2)·√(md) = O(1/d)
The charging for the subsequent A^-1 path and H_2 attachment, including paying for the extra circle vertex in V, is identical to that in the non-sandwich terms.
Moreover, this completes the proof of <ref> by a triangle inequality.
Vertex and edge appearance
Recall from <Ref> that we bound the norm of a graph matrix by analyzing length-2q “block-walks” of the shape and bounding the vertex/edge factor of each “block-step”.
To this end, we need to consider both global and local appearances of a labeled vertex.
We remind the reader that a labeled square (resp. circle) vertex is an element in [m] (resp. [d]), and a labeled edge is an element in [m] × [d] (see <Ref>).
Given a block-walk, we call each labeled vertex's appearance within the given block a local appearance, and each vertex's appearance throughout the walk a global appearance.
Moreover, we say that a labeled vertex/edge is making its global first/last appearance if it is the first/last appearance of that labeled vertex throughout the walk.
Similarly, we say it is making its local first/last appearance if it is the first/last appearance within the given block in the walk.
We also need the following definition, which is a special case that we need to handle.
The term “reverse-charging” will be clear once we describe our edge charging scheme in <Ref>.
For a given walk, and a block in the walk, we call a step u→ v reverse-charging if
* the underlying edge is making its last global appearance throughout the walk;
* the underlying edge's first global appearance is also in the current block;
* (Reverse) the first appearance of the underlying edge goes from v to u.
§.§.§ Further set-up for step-labeling and edge factor scheme
Our argument for tight matrix norm bounds requires assigning each edge (or step) a step-label (<Ref>) that represents whether it is making first/middle/last appearance, and assigning edge factors based on the edge type (recall our edge factor scheme in <Ref>).
However, further care is warranted for dangling shapes when an edge appears with different Hermite indices in the walk (e.g., appears both as an h_1 and h_2 edge).
In this case, it is no longer true that an h_k edge needs to appear at least twice in the walk for the random variable to be non-vanishing.
For example, suppose an edge appears as h_1, h_1, and h_2 in the walk.
Then, even though h_2 only appears once, this term is non-vanishing under expectation:
𝔼_{x∼𝒩(0,1/d)}[h_1(x)^2 h_2(x)] =
𝔼[x^2 (x^2-1/d)] = 𝔼[x^4] - 𝔼[x^2]/d = 3/d^2 - 1/d^2 = 2/d^2 .
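Here we only use the standard Gaussian moments for x ∼𝒩(0,1/d), recorded for completeness:
𝔼[x^2] = 1/d ,  𝔼[x^4] = 3/d^2 .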
The matrices that arise in our analysis may contain h_1, h_2, h_3, h_4 edges.
For i≤ 4, we treat an h_i edge as i edge-copies in our edge factor assignment scheme.
For each step regardless of the Hermite index, assign a step-label to all its edge-copies as follows,
* Assign an F step if it is making its first appearance;
* Assign an H step if it is making its middle appearance
* Assign an R step if it is making its last appearance.
We next describe our edge factor assignment scheme.
For any graph matrix of size at most D_V that contains h_1,h_2,h_3,h_4 edges and walks of length 2q ≥Ω(log d),
we can assign values to each edge-copy among the edge's appearance throughout the walk such that
* Each edge-copy of an h_1, h_2 edge with F/R step-label gets assigned a value 2^1/4/√(d).
* Each edge-copy of an edge with step-label H and any edge-copy of an h_3, h_4 edge gets assigned a value 32 q D_V/√(d).
* In the case of a random variable appearing as h_1, h_1, h_2 (in an arbitrary order), it gets assigned value of 2/d^2 in total, in particular, each edge-copy of the h_2 edge gets assigned a value √(2)/√(d).
We first note that 𝔼_{x∼𝒩(0,1/d)}[h_k(x)^2] = k!/d^k.
For k=1,2, if an h_k edge only appears twice and no other Hermite index occurs, then it must have 2k edge-copies with step-labels F and R, giving an edge factor of (2^{1/4}/√(d))^{2k} = 2^{k/2}/d^k, which is at least k!/d^k for k = 1, 2.
Next, we consider the case when h_3, h_4 edges are involved or when an edge appears more than twice, i.e., some edge-copies are assigned step-label H.
Let a_k be the number of times h_k appears in the walk, and let t ∑_k≤ 4 k · a_k be the total number of edge-copies.
Applying Cauchy-Schwarz twice and <Ref>,
𝔼_{x∼𝒩(0,1/d)}[∏_{k≤ 4} h_k(x)^{a_k}] ≤𝔼[h_1(x)^{2a_1} h_2(x)^{2a_2}]^{1/2}·𝔼[h_3(x)^{2a_3} h_4(x)^{2a_4}]^{1/2}
≤∏_{k≤ 4}𝔼[h_k(x)^{4a_k}]^{1/4} ≤∏_{k≤ 4}(4k a_k/d^k)^{a_k/2}
≤ (4t)^{t/2}/d^{t/2}
We next show that the edge factors assigned to the t edge-copies upper bound the above.
Let t_0 be the number of edge-copies that get assigned 2^1/4/√(d).
We must have 0 ≤ t_0 < t and t_0 ≤ 4.
Then, the assignment scheme gives
(2^{1/4}/√(d))^{t_0}·(32q D_V/√(d))^{t-t_0} ≥ d^{-t/2}· (32q D_V)^{t-t_0}
Since the length of the walk is 2q and the size of the graph matrix (shape) is ≤ D_V, we have t ≤ 8q D_V.
Thus, if t ≤ 8, then clearly (32q D_V)^t-t_0≥ (4t)^t/2;
otherwise, t-t_0 ≥ t/2 and (32q D_V)^t-t_0≥ (4t)^t/2.
This shows that the edge factors correctly account for the values from the Hermite characters.
For the special case when an edge appears as h_1, h_1, h_2, the factor 2/d^2 follows from <Ref>.
This completes the proof.
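As an aside, the chain of inequalities above can be spot-checked numerically; the sketch below is an illustration only, using a concrete d = 10 and a handful of exponent patterns, and compares 𝔼[∏_k h_k(x)^{a_k}] against (4t/d)^{t/2}.

```python
# Minimal sketch (not part of the argument): numerically spot-check
# E[prod_k h_k(x)^{a_k}] <= (4t/d)^{t/2} for x ~ N(0, 1/d), t = sum_k k*a_k.
import sympy as sp
from sympy.stats import Normal, E

d = 10
x = Normal('x', 0, sp.sqrt(sp.Rational(1, d)))

# scaled Hermite polynomials with E[h_k^2] = k!/d^k, as used in the text
h = {1: x,
     2: x**2 - sp.Rational(1, d),
     3: x**3 - 3*x/d,
     4: x**4 - 6*x**2/d + sp.Rational(3, d**2)}

patterns = [(2, 1, 0, 0), (2, 0, 0, 0), (0, 2, 0, 0), (0, 0, 2, 0),
            (0, 0, 0, 2), (1, 1, 1, 1), (2, 2, 0, 0), (1, 0, 1, 0)]
for a in patterns:
    t = sum(k * ak for k, ak in zip((1, 2, 3, 4), a))
    moment = E(sp.prod([h[k]**ak for k, ak in zip((1, 2, 3, 4), a)]))
    bound = sp.Rational(4 * t, d)**sp.Rational(t, 2)
    assert sp.Abs(moment) <= bound, (a, moment, bound)
print("all sampled patterns satisfy the (4t/d)^{t/2} bound")
```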
§.§ Alternative Analysis Draft
Note: I think this alternative analysis should be put in an appendix as it works well for the main analysis but doesn't work as well for Pur factors (S edges which are local F edges are tough to handle with this charging scheme).
For each chain, we define local F, R, S, and H edges as follows; here we traverse the chain from top to bottom (a programmatic sketch follows the list).
* We say that an edge is a local F edge if it is appearing for the first time in the current chain and its destination is appearing for the first time in the current chain.
* We say that an edge is a local R edge if it is appearing for the last time in the current chain.
* We say that an edge is a local H edge if it appears again both before and afterwards in the current chain.
* We say that an edge is a local S edge if it is appearing for the first time but its destination has appeared before in the current chain.
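A minimal sketch of this local classification, under the same hypothetical walk representation as before (a chain given as a list of directed steps, traversed top to bottom); note that a single step may satisfy more than one of these conditions, e.g., an edge appearing exactly once in the chain is both a local F and a local R edge.

```python
# Minimal sketch: classify each step of a chain (a list of directed steps
# (u, v), traversed top to bottom) with the local labels defined above.
# A step may receive several labels.
from collections import defaultdict

def local_labels(chain):
    edge_occ = defaultdict(list)          # undirected edge -> step indices
    vertex_first = {}                     # vertex -> index of its first appearance
    for i, (u, v) in enumerate(chain):
        edge_occ[frozenset((u, v))].append(i)
        vertex_first.setdefault(u, i)
        vertex_first.setdefault(v, i)
    labels = []
    for i, (u, v) in enumerate(chain):
        occ = edge_occ[frozenset((u, v))]
        first_edge = (i == occ[0])
        lab = set()
        if first_edge and vertex_first[v] == i:
            lab.add('F')                  # edge new, destination new
        if first_edge and vertex_first[v] < i:
            lab.add('S')                  # edge new, destination already seen
        if i == occ[-1]:
            lab.add('R')                  # last appearance in the chain
        if occ[0] < i < occ[-1]:
            lab.add('H')                  # appears both before and after
        labels.append(lab)
    return labels

chain = [('u', 'a'), ('a', 'b'), ('b', 'a'), ('a', 'b')]
print(local_labels(chain))   # [{'F','R'}, {'F'}, {'H'}, {'R'}] (set ordering may vary)
```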
For our local analysis, there are two circle vertices which we may need to be careful about.
Let v_left be the circle vertex which is at the top left, let v_right be the circle vertex which is at the top right (which may be equal to v_left), and let v_bottom be the circle vertex which is at the bottom.
We can always take the minimum weight vertex separator to be v_left, so we do not need to worry about v_left. However, as we will see, we have to be careful about v_right and v_bottom.
We say that an edge is locally vanishing if, after the local intersections, the product of it and the edges parallel to it has a nonzero constant term.
Some examples of locally vanishing edges are as follows (the expansions are checked symbolically in the sketch after the list).
* Two parallel edges with label 1 are locally vanishing, as x^2 = √(2)·(x^2 - 1)/√(2) + 1.
* Two parallel edges with label 2 are locally vanishing, as
((x^2 - 1)/√(2))^2 = √(6)·(x^4 - 6x^2 + 3)/√(24) + 2√(2)·(x^2 - 1)/√(2) + 1.
* More generally, two parallel edges are locally vanishing if they have the same label k and are not locally vanishing if they have different labels.
* Three parallel edges with labels 1, 1, and 2 are locally vanishing, as
x^2·(x^2 - 1)/√(2) = 2√(3)·(x^4 - 6x^2 + 3)/√(24) + 5·(x^2 - 1)/√(2) + √(2).
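These expansions, and in particular the nonzero constant terms that make the edges locally vanishing, can be checked symbolically; the sketch below assumes the normalized Hermite polynomials for a standard Gaussian, e.g., ĥ_2(x) = (x^2-1)/√2, which is the normalization used in these examples.

```python
# Minimal sketch: check the constant terms of the products above in the
# normalized Hermite basis, for a standard Gaussian x ~ N(0, 1).
import sympy as sp
from sympy.stats import Normal, E

x = sp.symbols('x')
g = Normal('g', 0, 1)

def He(n):                     # probabilists' Hermite via the physicists' one
    return sp.expand(2**sp.Rational(-n, 2) * sp.hermite(n, x / sp.sqrt(2)))

def hhat(n):                   # normalized: E[hhat_j * hhat_k] = [j == k]
    return He(n) / sp.sqrt(sp.factorial(n))

def constant_term(expr):       # projection onto hhat_0 = 1, i.e. E[expr(g)]
    return sp.simplify(E(expr.subs(x, g)))

print(constant_term(x**2))               # 1       (two parallel label-1 edges)
print(constant_term(hhat(2)**2))         # 1       (two parallel label-2 edges)
print(constant_term(x**2 * hhat(2)))     # sqrt(2) (labels 1, 1, 2)
print(constant_term(hhat(1) * hhat(2)))  # 0       (different labels: not vanishing)
```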
We say that a vertex v is locally isolated if v is not equal to the top left circle or the top right circle and all edges incident to v are locally vanishing.
For all edges except for two special edges e^*_r and e^*_b which we describe below, we assign them as follows.
* For each local F edge, we assign it to its destination. For the edge at the top between v_right and its neighboring square vertex, we consider its destination to be the square vertex.
* For each local R edge, we assign it to its origin unless it is a dangling edge with label 2. If it is a dangling edge with label 2, we split it between its endpoints.
* For each local S and H edge, we assign half of it as a bonus to its origin and half of it as a bonus to its destination.
All vertices except v_right and v_bottom have the required edge factors.
For each circle vertex u except for v_left (which does not need any edge factors), v_right, and v_bottom, consider the first and last time u appears in the current chain. The first time u appears, there must be a local F edge pointing to it, which gives u one edge. If u only appears once, it cannot be locally isolated, so it only needs one edge. Otherwise, the last time u appears, it can only be locally isolated if all edges incident to it are R edges. In this case, u obtains a second edge.
Similarly, for each square vertex v, consider the first and last time v appears in the current chain. The first time v appears, there must be two local F edges with label 1 or one local F edge with label 2 pointing to it. The last time v appears, it can only be locally isolated if all edges incident to it are R edges, in which case v obtains two additional edges.
Note that v_right is an exception because the first time it appears, it does not have a local F edge pointing to it. Similarly, v_bottom is an exception because the last time it appears, it may be incident to an R edge with label 2 without gaining an additional edge.
When v_left≠ v_right and v_right is not equal to the final circle vertex, we find/define e^*_r through the following iterative process. We start with the edge e between v_right and its neighbor. Note that this is a local F edge going down from a vertex v = v_right which is not locally isolated and which is not the final circle vertex. We now have the following cases
* e does not appear again and the other endpoint of e is the final circle vertex. In this case, we take e^*_r = e and assign one edge factor from it to v_right. If it has label 2, we keep one edge factor in reserve in case e^*_b = e^*_r = e.
* e does not appear again and its other endpoint is not the final circle vertex. In this case, letting v' be the other endpoint of e, we take e' to be an edge going down from v'. If e' is not a local F edge, we take e^*_r = e' and assign it to v_right. We can do this because v' is not locally isolated so it already has all of the edge factors it needs. If e' is a local F edge, we again have a local F edge e' going down from a vertex which is not locally isolated and is not the final circle vertex, so we repeat this process.
* e appears again. In this case, let e' be the next time e appears. If e' is not a local R edge whose destination is v, we take e^*_r = e' and assign it to v_right. If e' is a local R edge whose destination is v, we let e” be an edge going down from v.
If e” is not a local F edge then we take e^*_r = e” and assign it to v_right. If e” is a local F edge then we are still in the situation where we have a local F edge going down from a vertex which is not locally isolated and is not equal to the final circle vertex, so we repeat this process.
We find/define e^*_b through the following iterative procedure. We start with the edge e with label 2 between v_bottom and its neighbor. Note that e must be a local R edge. We then do the following.
* If e is not a locally vanishing edge then we take e^*_b = e. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r.
* If e is a locally vanishing edge then e must appear above. Consider the previous time e appears and let e' be this copy of e. There are a few possibilities for e'.
* e' is an edge with label 2 going from a copy of v_bottom to the bottom square vertex of the current block which is not equal to the top vertex of the current block. If e' is not a local F edge then we take e^*_b = e'. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e' is a local F edge, let e” be the edge from the top square vertex of the current block to the copy of v_bottom. If e” is not a local R edge then we take e^*_b = e”. We assign half of e^*_b = e” to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e” is a local R edge then we are in the same situation as before so we repeat this process.
* e' is an edge with label 2 from the top square vertex in the current block which does not equal the bottom square vertex of the current block to a copy of v_bottom. In this case, we take e^*_b = e'. If e^*_b = e' is not a local F edge then we assign half of it to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e^*_b = e' is a local F edge then we assign it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r. The reason for this is that e^*_r can only be a local F edge in case 1, in which case it does not appear again (while e' appears again by definition).
* e' is an edge with label 2 which hangs off of a square vertex which is both the top and bottom square vertex for the current block. In this case, we take e^* = e' and assign all of it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r.
* e' is a local H edge with label 1. In this case, we take e^*_b = e' and assign it to v_bottom. Here we may have that e^*_b = e^*_r but if we do, one of the endpoints of e^*_b = e' is not locally isolated so we can take an edge factor from it and give the edge factor to v_right.
§.§ Attempted Simplification and Illustration
Our charging scheme is as follows (see the sketch after this list).
* For all local F and S edges, we assign the edge to its destination.
* For all local H and R edges, we assign the edge to the destination of the corresponding local F edge in the block.
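A minimal sketch of these two rules, again under a hypothetical representation of a block as a list of directed steps (the first local appearance of an edge is its local F or S copy; later local appearances are its local H or R copies):

```python
# Minimal sketch of the simplified charging scheme: local F/S edges are
# charged to their destination; local H/R edges are charged to the destination
# recorded at the edge's first local appearance in the block.
def charge_block(block):
    """block: list of directed steps (u, v). Returns {step index: charged vertex}."""
    first_dest = {}      # edge -> destination of its first local appearance
    charge = {}
    for i, (u, v) in enumerate(block):
        e = frozenset((u, v))
        if e not in first_dest:           # local F or S edge
            first_dest[e] = v
            charge[i] = v
        else:                             # local H or R edge
            charge[i] = first_dest[e]
    return charge

block = [('u', 'a'), ('v', 'a'), ('a', 'b'), ('b', 'a'), ('a', 'v')]
print(charge_block(block))   # {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'a'}
```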
When U ≠ V, all vertices outside of U ∪ V have the required edge factors.
Observe that the first time a vertex outside of U ∪ V appears in a block, it receives the edges used to reach it (one for a circle vertex in M_α, two for a square vertex or a circle vertex in M_β or M_D).
This is sufficient unless the vertex only appears in this block. This can only happen if
* The edge(s) pointing to the vertex are F edges.
* The F edges appear for the last time in the current block as reverse charging edges.
In this case, the vertex gets edge factors from the reverse charging edges, which is sufficient.
When U ≠ V, we obtain two extra edge factors for the vertices u,v in U ∪ V as follows:
* We show that for all blocks except the first and last block, there is always a vertex (often u or v) which appears in both an earlier block and a later block and thus does not need any edge factors. One edge factor assigned to this vertex can then be given to u or v and any extras give additional decay which can be used to resolve Pur confusion.
* We show that there is a vertex (usually the final circle vertex) which has at least one more edge factor than it needs. One of these extra edge factors can be given to u or v and any extras give additional decay which can be used to resolve Pur confusion.
For each block except the first and last block, there is at least one vertex which appears in both an earlier block and a later block.
We use Lemma *** from PTVW which says that for each block, taking all intersections within the block into account, there is a path of edges with odd multiplicity from u to v.
Let u' be the last vertex on this path which appears in an earlier block. If u' = v then u' is the desired vertex. Otherwise, let v' be the vertex after u' on this path. v' does not appear in an earlier block, so the only way the edge {u',v'} can appear again is if u' appears in a later block, in which case u' is the desired vertex.
For 1/d1_d terms, the 1/d provides two extra edge factors, one of which can be given to u or v.
For w terms, we show that there is at least one vertex (usually the final circle vertex) which has more edge factors than it needs.
Add statement here.
We use a similar argument as Lemma *** in PTVW.
Consider the edge e to the final circle vertex.
If e is not a reverse charging edge pointing to the square vertex then there are a few cases, each of which gives us the needed edge factors
* This is the first appearance of the final circle vertex in the current block. In this case, the final circle vertex obtains two edge factors and only needs one, so we have an extra edge factor.
* The final circle vertex is equal to u or v. In this case, this vertex obtains at least 2 edge factors, which is more than is needed.
* The final circle vertex is not equal to u or v and has appeared before in the current block. In this case, the final circle vertex obtains at least 3 edge factors, two from e and one from the first time it appeared in the current block.
If e is a reverse charging edge pointing to the square vertex (in which case PTVW would call it a right-critical edge, see Definition *** of PTVW), let e' be the previous time that e appears in the current block (PTVW would call e' a left-critical edge). There are a few cases to consider
* e' is an F or S edge which is part of an M_β gadget. In this case, e' must be the bottom edge of the gadget as it points to the square vertex.
Letting e” be the top edge of the gadget, we can repeat the entire argument for e” instead of e.
* e' is an H edge which is part of an M_β gadget. In this case, the square vertex has a total of at least 6 edge factors (2 from the first time it appears, 2 from e' and 2 from e).
* e' is an H edge which is part of an M_α gadget. In this case, the square vertex has a total of at least 5 edge factors (2 from the first time it appears, 1 from e' and 2 from e).
* e' is an H edge which is part of an M_D gadget (if it was an F or S edge then it would point to the circle vertex instead). In this case, the square vertex has a total of at least 6 edge factors (2 from the first time it appears, 2 from e' and 2 from e).
Let SQ_1 be the square vertex in the top gadget.
When U = V, all vertices except SQ_1 have sufficient edge factors.
Note that u = v appears in both an earlier and a later block and does not need edge factors. For the other vertices, we use the same argument as we used for Proposition <ref>.
SQ_1 has two edge factors from the top gadget. If SQ_1 appears in another block then this is sufficient. If we are considering a 1/d1_n term then this gives two extra edge factors so even if SQ_1 only appears in the current block, it has the required edge factors.
We say that an edge is a local H edge if it appears both earlier and later in the current block. Similarly, we say that an edge is a local S edge if it has not yet appeared in the current block but its destination has already appeared in the current block.
If SQ_1 only appears in the current block then either SQ_1 has at least 2 edge factors or there is a neighbor of SQ_1 which has at least 4 edge factors.
We show that if SQ_1 only appears in the current block, one of the following cases must happen.
* There is an h_2 edge e going out from SQ_1 which is appearing for the first time in the current block. In this case, let v be the other endpoint of this edge (v will often be the final circle vertex but it does not have to be). Since SQ_1 only appears in the current block, e must appear later on, so v must obtain at least 4 edge factors, two of which can be given to SQ_1.
* There is at least one local H edge e incident to SQ_1. In this case, since SQ_1 only appears in the current block, this edge must appear with multiplicity at least 4. If e is assigned to SQ_1 then we are done. If e is assigned to its other endpoint v then we can split its edge factors between SQ_1 and v. With naive splitting this may not be enough if all 4 edge-copies need to carry polynomial factors due to logarithmic losses, but if the edge appears exactly 4 times there is no logarithmic factor.
* There is a local S edge e which goes to SQ_1. In this case, since SQ_1 only appears in the current block, e must appear with total multiplicity at least 2 so SQ_1 has the needed two edge factors.
We show that one of these cases must hold as follows. At the start, SQ_1 must either have an h_2 edge going out from it (which gives the first case) or it must have two single edges e_1 and e_2 going out from it as part of an M_α gadget. If so, consider the next time SQ_1 appears. Unless SQ_1 is reached via a local H or S edge (in which case we are in the second or third case), SQ_1 must be reached by closing e_1 and e_2. At this point, all edges which have appeared so far which are incident to SQ_1 do not appear again so we can repeat this argument.
[Figure (TikZ source omitted): two example blocks drawn side by side, the left between boundary circle vertices u and v and the right between v and u, each a dangling path through square and circle vertices (a, b, c, d and a, b', c', d') that returns to a and ends with an h_2 edge to the circle vertex t.]
This figure illustrates the charging scheme. The light blue edges are F edges and we assign their factor to their destination. The dark blue edges are R edges whose first appearance is in a different block; we also assign these edges to their destination. The red edges are R edges whose first appearance is in the current block; we assign these edges to the destination of the corresponding F edge (which is generally the source of this edge).
Note that two edge factors are missing from v. We obtain these factors from the two green edges pointing towards t.
[Figure (TikZ source omitted): two U=V examples. Left: a block from the boundary vertex u through square vertices a, d and circle vertices b, c, returning to a and ending at a repeated copy of the circle vertex c. Right: a small block from u' through the square vertex a' to the circle vertex b', ending at a repeated copy of b'.]
[Figure (TikZ source omitted): a block between boundary vertices u and v passing through the square vertex a, the circle vertex b, and the square vertex c, ending with an h_2 edge to a repeated copy of b.]
§.§.§ Analyzing Pur Factors
Note: For this draft, I am making edges which have slack split their edge factor between their endpoints. This can be adjusted.
Note: These definitions need editing.
We define e^* to be the edge in the current block which gives an extra edge factor to u or v, or gives extra edge factor(s) to SQ_1.
We define v^* to be the vertex identified by the parity argument which appears in both an earlier and a later block.
For each vertex w ≠ v^*, the following types of edges give an extra edge factor (which we can split between its endpoints).
* There is a local S or H edge e leading to w which is not equal to e^*.
* There is an S edge e leading to w which is a local F edge.
* There is an H edge e leading to w which is a local F or R edge.
We have the following cases.
* If e = (v,w) is a local S or H edge which is not equal to e^* then both v and w obtain sufficient edge factors even without e. To see this, note that if v appears in another block then the local F or local S edge(s) leading to v are sufficient. If v only appears in this block then the local F or local S edge(s) leading to v and the local R edge(s) closing these edges give sufficient edge factors. The same argument holds for w.
Note: This isn't quite correct. If an h_2 edge leads to w then it can be closed with a local H edge and then an R edge.
* If e = (v,w) is an S edge which is a local F edge then w obtains sufficient edge factors even without e. To see this, observe that since e is an S edge and a local F edge,
1. w must appear in an earlier block.
2. Either e is closed in this block or w must appear in a later block.
* If e = (v,w) is an H edge which is a local F edge then w must occur in a previous block. If w also appears in a later block then w does not need an edge factor for the current block. If w does not appear in a later block then e must be closed which gives an edge factor to w so w gains sufficient edge factors in the current block.
Note: The local R case is similar but needs to be added.
Whenever a vertex w ≠ v^* appears in a block and this is not the first time w appears in the block, one of the following cases holds:
* The edge(s) leading to w are local and global R edges.
* One of the edge(s) leading to w is e^*.
* w obtains at least half of an extra edge factor from one of the edges leading to it.
By Lemma <ref>, if an edge leading to w is a local or global H or S edge which is not equal to e^*, then this edge gives w at least half an edge factor of slack. Thus, if none of the edge(s) leading to w are e^*, local H or S edges, or global H or S edges, then the edge(s) leading to w must be local and global R edges.
If a square vertex v is incident to k M_D gadgets whose edge is not e^*, then v obtains at least (k-1)/2 extra edge factors from the M_D gadgets.
Consider an M_D gadget and let e be the h_2 edge between the square vertex and the circle vertex in the M_D gadget. There are a few cases:
* e = e^*
* e is not e^* and e gives its edge factors to the circle vertex. In this case, if e is not closed then the circle only needs one edge factor so e has an extra edge factor which can be split between its endpoints. If e is closed then the edge(s) which close e give the circle vertex at least two edge factors so both of e's edge factors can be split between its endpoints.
* e gives its edge factors to the square vertex. Note that if this happens more than once then the square vertex obtains two extra edge factors for each additional time this happens.
Note: The following argument will need some editing. I forgot that S edges add two Pur rather than 1.
The total amount of Pur gained by a vertex w ∉{u,v,v^*} in a block is at most 10+8x where x is the number of extra edge factors obtained by the vertex.
By Lemma <ref>, each time a vertex appears in a block and the edge(s) leading to it are not e^*, it obtains at least half an edge factor of slack, and each such appearance increases its Pur (note: we may want to count Pur by edge multiplicity rather than number of edges) by at most 4.
By Lemma <ref>, if the vertex appears in k M_D gadgets whose edge is not equal to e^*, then it obtains at least (k-1)/2 extra edge factors of slack. Each of these M_D gadgets increases its Pur by at most 2.
We now make the following observations:
* A vertex gains at most 4 Pur from its first appearance in a block
* A vertex gains at most 4 Pur from the gadget containing e^*
* A square vertex may gain 2 Pur from an M_D gadget (actually, I think this case can be removed as Pur is only gained if the edge is an F edge, S edge, or H edge in which case there is slack)
* Each extra 1/2 edge factor the vertex obtains allows the vertex to gain at most 4 Pur.
If a vertex w ∉{u,v,v^*} appears in both an earlier block and a later block, it gains at most 10x Pur from the block where x ≥ 1 is the number of excess edge factors w obtains from the block.
§.§ Handling Pur for v^*, u, and v
Note: It shouldn't be too much of a problem, but we'll likely need to discuss the case when v^* is the final circle vertex.
Our main remaining concern is that the vertex v^* (the vertex which appears in both a previous and a later block but gives an edge factor to U ∪ V) gains Pur from the block without gaining any slack. We also need to analyze u and v.
The intuition for why this does not happen is as follows. Recall that we identify v^* by finding a path from u to v consisting of edges of odd multiplicity, taking e_return to be the last edge on this path which appears in a previous block, and taking v^* to be the destination of e_return (if no edges on the path appear in a previous block then we take v^* = u). This means that unless v^* = u, v^* must be incident to an edge e_return which appears with odd multiplicity in the current block and also appears in a previous block. We can use e_return to return to v^* and reduce the Pur for v^*. We show that if v^* still gains Pur then it must also obtain some edge factors of slack.
However, making this argument rigorous is tricky. One weird possibility is that the edge e_return goes from v^* to another vertex w rather than from another vertex to v^*. We show that even in this case, we can go around v^* to reach w and use e_return to return to v^*.
If v^* is a square vertex then v^* has an extra edge factor.
If v^* is a square vertex then v^* obtains two edge factors from its first appearance in the block. Since v^* only needs to give one edge factor to u or v, this leaves v^* with an extra edge factor.
It's important that we can't have v^* = SQ_1 without having an edge factor of slack. If we could have v^* = SQ_1 with no slack then we would have a logarithmic factor in the final norm bound.
If w is a circle vertex which is incident to at least 3 edges of odd multiplicity then w obtains at least one edge factor of slack.
For each of the edges with odd multiplicity which go to w, there are two cases, each of which gives an edge factor to w.
* The edge assigns its edge factors to w.
* The edge does not assign its edge factors to w. In this case, the edge must have multiplicity at least 3 and its square endpoint must appear in another block. Since there must be a different local F edge which also leads to the square vertex, the square vertex obtains at least 2 extra edge factors, one of which can be given to w.
We now show that if w is incident to at least 3 edges of odd multiplicity then one of the following cases holds
* w ≠ u, w ≠ v, and at least two of these edges go to v^*.
* w = u or w = v and at least one of these edges goes to w.
* w obtains an extra edge factor of slack.
To see this, observe that one of the first two cases holds unless there is an edge e incident to w such that two copies of e appear in an M_α gadget and lead to w. If there is such an edge e then either e gives its edge factors to w or e has total multiplicity at least 3. If the total multiplicity of e is odd or is at least 6 then the other endpoint of e obtains at least two more edge factors than needed, so e can give an edge factor to w. If the total multiplicity of e is 4, we could have the case where e first appears as an h_2 edge in an M_β gadget from w to the square vertex and then appears two more times.
Sketch: In this case, we can repeat the e^* argument.
If v^*∉{u,v} then either v^* does not gain any Pur from the current block or v^* obtains at least one extra edge factor of slack.
Consider the first time v^* appears. If the first appearance of v^* is as part of an M_α gadget with an edge e_1 going to v^* and an edge e_2 going from v^* to another vertex, there are a few cases:
* Both e_1 and e_2 have odd multiplicity and either e_return = e_1 or e_return = e_2. In this case, the first appearance of v^* does not give v^* any Pur factors and later appearances of v^* can be analyzed as before.
* Both e_1 and e_2 have odd multiplicity and neither of these edges is e_return. In this case, v^* must be incident to a third edge with odd multiplicity which means that it must obtain at least one edge factor of slack.
* If e_1 or e_2 has even multiplicity, consider the next time v^* appears. If the next appearance of v^* is in an M_α gadget then let e' be the edge leading to v^*. There are a few cases:
* e' is not equal to e_1 or e_2. In this case v^* obtains an extra edge factor from e' as e' must be a local F edge.
* e' is equal to e_1 or e_2 and the total multiplicity of e' is odd. In this case, whichever vertex e' gives its edge factors to has at least two edge factors of slack, which we can split between the endpoints of e'.
* e' is equal to e_1 or e_2 and the total multiplicity of e' is even. In this case, letting e'_1 be the edge in {e_1,e_2} which is not equal to e' and letting e'_2 be the other edge incident to v^* in this new appearance of v^*, we can repeat this argument with e'_1 and e'_2. Note that we can reach the bottom square vertex this M_α gadget without passing through v^*.
If the next appearance of v^* is not in an M_α gadget then there is an h_2 edge e' leading to the next appearance of v^*. In this case, either
* e' is not equal to e_1 or e_2. In this case v^* obtains two extra edge factors from e' as e' must be a local F edge.
* e' is equal to e_1 or e_2 and has total multiplicity at least 4. In this case, the other endpoint of e' must obtain at least 6 edge factors so we can split two of the edge factors of e' between its endpoints.
Either u does not gain Pur in the current block or u obtains at least one edge factor of slack.
We observe that one of the following cases holds:
* There is an edge going out from u with multiplicity 1 (which is an F edge if v^* = u and an R edge if v^*≠ u) and all other edges incident to u have even multiplicity, are local F or R edges, and do not give their edge factors to u. In this case, u does not gain any Pur from this block as the F or R edge going out from u is expected and the local R edges going to u cancel out the other edges going out from u.
* u obtains at least one extra edge factor, either from being incident to at least 3 edges with odd multiplicity, being incident to an edge whose total multiplicity is odd and at least 3, being incident to an S or H edge, or obtaining edge factor(s) from an edge.
Either v does not gain Pur in the current block or v obtains at least one edge factor of slack.
If v^* = v then we can use the same argument as above except that instead of starting with two edges e_1 and e_2, we start with the edge e which is incident to v. If v^*≠ v then either this is the first block where v appears, in which case we reach v via an F edge and this is expected, or v appears in both an earlier block and a later block but is not equal to v^*, in which case v does not need any edge factors so the edge factor which is given to it is an extra edge factor of slack.
§.§ Local Analysis for R
For some constant ε>0 and any q<d^ε, the block-value function for R is bounded by
B_q(R) ≤ 1/10 .
We first state our vertex factor assignment scheme (with redistribution) which assigns vertex factors to labeled vertices according to their global appearances. It is the same one as described in <Ref>.
[frametitle= Vertex factor assignment scheme]
* For each labeled vertex i making its first or last global appearance, assign a factor of √((i));
* For each labeled vertex i making its middle global appearance, assign a factor of 1 if it is reached via an R step, otherwise assign a factor 2q · D_V as well as its corresponding O(1) factors.
We next describe the scheme that assigns edge-copies to vertices.
In most cases, we charge edge-copies (on the dangling path) to the vertex it leads to, unless it is a reverse-charging edge (<Ref>).
Recall that we define a step u→ v to be reverse-charging if it is making its last global appearance and if its first global appearance is also in the current block going from v → u.
[frametitle= "Top-down" edge-copy charging primitive]
* Assign both edges on the U-V path to the first square vertex in the middle;
* For any step u → v before the final h_2 gadget, assign the edge to v unless this is a reverse-charging edge (<Ref>), in which case we assign to u;
* For the final h_2 gadget (if any), we reserve its assignment from the current scheme.
See <Ref> for an illustration of the above scheme.
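For concreteness, the primitive can also be phrased programmatically; the sketch below uses a simplified, hypothetical representation in which each step carries flags marking whether it lies on the U-V path, whether it is reverse-charging, and whether it belongs to the final h_2 gadget. These flags are assumptions for illustration only, not the paper's actual data structures.

```python
# Minimal sketch of the "top-down" edge-copy charging primitive.
# Each step is (u, v, flags) with flags a set drawn from
# {'uv_path', 'reverse', 'final_h2'}.
def top_down_charge(steps, first_square):
    charge = {}                               # step index -> vertex (or 'reserved')
    for i, (u, v, flags) in enumerate(steps):
        if 'uv_path' in flags:
            charge[i] = first_square          # both U-V path edges -> first square vertex
        elif 'final_h2' in flags:
            charge[i] = 'reserved'            # assignment deferred (block-reserve, etc.)
        elif 'reverse' in flags:
            charge[i] = u                     # reverse-charging step: charge the source
        else:
            charge[i] = v                     # default: charge the destination
    return charge

steps = [
    ('u', 'a', {'uv_path'}),
    ('v', 'a', {'uv_path'}),
    ('a', 'b', set()),
    ('b', 'a', {'reverse'}),
    ('a', 't', {'final_h2'}),
]
print(top_down_charge(steps, first_square='a'))
# {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'reserved'}
```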
We next show the following invariant throughout the walk,
We can assign each edge-copy to at most one vertex,
* for a circle vertex, it is assigned 1 edge-copy if it is making first/last global appearance, and 2 edge-copies if both;
* for a square vertex on the dangling path, it is assigned 2 edge-copies if it is making first/last global appearance, and 4 edge-copies if both;
* for any square vertex's global middle appearance yet local first appearance in the block, it is assigned at least 1 edge-copy;
* no surprise/high-mul step is assigned to any vertex's global first/last appearance.
We start by giving the argument when U≠ V, and then show it can be modified into an argument for U=V.
Charging vertices outside U∪ V and the final h_2 gadget
The above scheme applies immediately to vertices that appear in the current block and are outside U∪ V, other than the last square vertex and the final circle vertex it connects to in the h_2 gadget.
It follows by observing that any circle (resp. square) vertex that makes its first global appearance is the destination of 1 (resp. 2) edge-copies that are not reverse-charging, in which case the edge-copies are assigned to the particular vertex.
This holds analogously for vertices making the last global appearance but not the first global appearance in the block, since the edges are not reverse-charging.
We note that this is true for the first square vertex as well since we assign both edges on the U-V path to the first square vertex.
On the other hand, for vertices making both the first and the last global appearances, consider the assignment until the final h_2 gadget, the charging above gives us 1 and 2 edges for each such vertex when it first appears locally. Furthermore, notice that each of these edges need to be closed, and they are assigned to the destination vertex, i.e., the particular vertex in inspection, and this completes the proof of our assignment restricted to vertices outside U∪ V and the final h_2 gadget.
Charging the final gadget and finding "block-reserve"
The goal is to identify an edge-copy as block-reserve, and to assign vertex factors to corresponding edges in the final h_2 gadget (if any). We first consider the case when there is no final h_2 attachment, i.e., the term from (u+s)/(s^2-ru) · A^{-1} 1_m.
Recalling our bounds from <Ref> that s^2-ru = Ω(m/d), s=O(√(d)), and |u|=O(1) (here the weaker bound on s suffices), we observe that
(u+s)/(s^2-ru) = O(√(d)/(m/d)) = O(1/√(d)),
and the normalizing constant can be regarded as an edge-copy. It is designated as the block-reserve factor when there is no final h_2 gadget; no vertex factor is involved in this case, since this may be the last appearance of a square vertex whose factor has already been assigned to the edges that first lead to it in the current block.
We now proceed to assign edges from the final h_2 gadget to vertex factors involved in the final h_2 gadget, moreover, we will identify one edge-copy of assigned factor at most √(2)/√(d) as "block-reserve" that allows us to further charge vertices in U∪ V.
For the vertices in the final h_2 gadget, we observe the following,
* in the final gadget, the last square vertex cannot be making its first global appearance yet it may be making its last appearance (since it is on the gadget boundary, we consider it appears in both the final h_2 attachment gadget and the gadget that precedes it);
* the last circle vertex may be making either its first or last global appearance but it cannot be both;
* the h_2 edge in-between has not yet been assigned, and we can split it as two edge-copies, each of weight 2^1/4/√(d) if F/R under the global step-labeling.
We now observe the following,
* If the final h_2 attachment is not a reverse-charging edge assigned to the source square vertex, the square vertex has also been assigned by 2 factors if making its first/last global appearance, and 4 if both, in the current block by the previous charging. In this case, note that we can reserve one of the two edge-copies, that is a factor of at most 2^1/4/√(d),
* If the circle vertex is making first/last appearance, we either have both edge-copies receiving F/R labeling, or a mix of H,R labeling. In the first case, we have two assigned factors of 2^1/4/√(d). Assign one to the circle vertex's first/last appearance, and another to the block-reserve. In the second case, we have a mix of H,R edges, that we at least have the underlying random variable appears twice as h_1 edges and at least once as an h_2 edge, by moving Õ(1) polylog factors to the second appearance of the underlying random variable locally, we have again two edge copies of value 2^1/4/√(d), and the assignment follows from above.
* If the circle vertex is making a middle appearance, we may assign related factors and vertex-factor cost to one edge-copy such that one edge-copy is of value O(1/√(d)) while the other is still at most 2^1/4/√(d); assign the edge-copy with O(1/√(d)) to the middle appearance factor and assign the edge-copy with factor 2^1/4/√(d) for block-reserve.
* If the final attachment h_2 edge is a reverse-charging step that corresponds to an edge whose F copy
leads to the final square vertex, we assign both h_2 edges to the final square vertex as the square vertex may be making its last global appearance. Note that if the square vertex makes its first global appearance in the current block, it has already been assigned 2 edges.
This leaves the last circle vertex potentially uncharged as it may be making its last appearance as well. Furthermore, we need to find one more "reserve" edge-copy for the block which would then be used to charge U∪ V.
* Finding critical edge: we now give a procedure to identify the critical edge, that is, an h_2 edge assigned to a circle vertex in the above top-down charging process. The process maintains a current circle vertex and its current gadget along the top-down path. In particular, the gadget considered is an M_β gadget throughout. The process starts from vertex s, that is the circle vertex involved in the final h_2 attachment, and the gadget considered is the one in which s opens up the F step of the current edge e^*, the h_2 edge in the final gadget. Note that this is an M_β gadget. We now case on the step-labeling of the top half of the M_β gadget in inspection, in particular, the edges leading to the circle vertex s in that particular gadget,
* If this is an F, a non-reverse-charging R, or an H step assigned to s, then this is the critical edge we aim to find: we have two edge-copies assigned to a circle vertex, and the process terminates.
* If this is a reverse-charging R edge: update the circle vertex s to be the top vertex of the current M_β gadget, update the edge e^* to be the h_2 edge in the top half of the current gadget, and repeat the above process. Note that the updated gadget must appear above the current gadget in the dangling path.
We now observe that this process ultimately terminates since each time we move up towards the top of the dangling path. Once the critical edge is found, note that it contains two edge-copies, assign one to the vertex's factor, and another to the block-reserve. In the case of H edges involved, locally assign the edge-value as 1/√(d) and Õ(1/√(d)), and assign the copy with 1/√(d) for block-reserve, while the other for vertex's factor.
Reserving surprise/high-mul step from vertex-factor assignment outside U∪ V
We first observe that in the above assignment, it is clear that the edges assigned to vertex's first appearance cannot be surprise-visit nor high-mul step. That said, it suffices for us to consider the assignment for vertex's last appearance factor (whose first appearance is not in the same block). Consider such vertex's first local appearance, they may either be H/S edges if not R under the global edge-labeling. If so, since the vertex is making the last appearance in the current block, each corresponds to an R-step in this block. Moreover, observe that any such R-step is intended for "reverse-charging" if the vertex in inspection is making both first and last global appearance in the block, and thus it is not assigned to the vertex factor of the source vertex. That said, it suffices for us to swap the H/S step with the R step so that no surprise/high-mul step is assigned to vertex's (polynomial) factor.
At this point, for a block with U≠ V, we have assigned
* 1 or 2 edge for each vertex's first/last appearance in the current block outside U_∪ V_ depending on the vertex's type;
* 1 edge-copy of value O(1/√(d)) has been identified as the block-reserve.
* it is also straightforward to observe that any edge assigned for vertex's first/last appearance cannot be a surprise visit nor high-mul visit.
Charging circle vertices in U∪ V
There is always a U-V path such that each edge along the path is of odd multiplicity. Call this path P_safe.
It suffices for us to restrict our attention to paths of only h_1 edges. By our gadget property, each vertex outside U_∪ V_ is incident to an even number of h_1 edges, while the vertex copies in U_∪ V_ are the only ones of odd degree. Suppose the path is broken, and consider the (maximal) component C_U connected to U_ (but not V_). We first observe that the total edge-multiplicity of E(C_U, V()∖ C_U) must be even, as otherwise some edge across the cut has odd multiplicity and would be included inside C_U rather than across the cut. That said,
∑_{v∈ C_U} deg(v) = 2 · mult(E(C_U)) + mult(E(C_U, V()∖ C_U)) .
Observe that the LHS is odd, as any vertex copy except U_ has even degree and we have exactly 1 odd-degree vertex copy. On the other hand, for the RHS, each edge-multiplicity in E(C_U) contributes a factor of 2 to the degrees of vertices in C_U, while edges across the cut contribute 1 for each multiplicity; since mult(E(C_U, V()∖ C_U)) is even by the argument above, the RHS is even, and we have a contradiction.
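The underlying parity fact can also be checked directly on small examples: the subgraph of odd-multiplicity edges has odd degree exactly at the boundary copies, so U and V lie in the same component of that subgraph. A minimal sketch with a toy multigraph representation (not the paper's):

```python
# Minimal sketch: verify that U and V are connected by a path of odd-multiplicity
# edges whenever they are the only vertices of odd total degree (counting
# multiplicity). Toy representation: {frozenset(edge): multiplicity}.
from collections import defaultdict, deque

def odd_path_exists(edges, U, V):
    adj = defaultdict(set)
    for e, mult in edges.items():
        if mult % 2 == 1:                 # keep only odd-multiplicity edges
            a, b = tuple(e)
            adj[a].add(b)
            adj[b].add(a)
    seen, queue = {U}, deque([U])
    while queue:                          # BFS in the odd-multiplicity subgraph
        w = queue.popleft()
        if w == V:
            return True
        for nxt in adj[w] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

edges = {
    frozenset(('U', 'a')): 1,
    frozenset(('a', 'b')): 2,             # even multiplicity: not usable
    frozenset(('a', 'V')): 3,             # odd multiplicity
    frozenset(('b', 'V')): 2,
}
print(odd_path_exists(edges, 'U', 'V'))   # True, via U - a - V
```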
There is at least one vertex v^* making a middle-block appearance, i.e., a vertex that appears in a previous block and appears again in a future block.
Recall from our charging so far that we have one edge-copy reserved, while the two circle vertices in U∪ V remain uncharged. We now show that this single edge-copy is sufficient, i.e., at most one vertex factor is picked up among these two vertices.
Consider the path P_safe, and observe that it passes through both vertices in U∪ V. Take v^* to be the first vertex on the path that pushes out an F/H/S edge. In the case v^* = U, note that it suffices for us to assign the reserve factor for the vertex in V; similarly, assign the reserve factor for U for v^*= V. In the case v^*∉ U∪ V, note that the previous scheme assigns at least one edge for v^* if it is a circle, and 2 if it is a square when v^* first appears in the current block, and this is in fact not needed since v^* is making a middle appearance. That said, we have at least 2 factors, 1 from the reserve factor, and 1 from the edge-assignment for v^* in the previous scheme, we now assign these two factors for U ∪ V.
v^* is picked such that it is either reached by an R edge, or it is the boundary vertex U_. In other words, no factor is needed for specifying v^* of the block.
This completes the proof of our proposition when U≠ V.
Analysis for U=V
We now consider the case when U=V.
For each R term, call the first square vertex on top of the dangling path the 𝖲𝖰_1 vertex.
This vertex is an exception compared to the other square vertices, as its first appearance might be the destination of two edge-copies that correspond to the same underlying random variable. This happens when an edge makes its first and last appearance at the same time for the U=V term.
For U=V, the charging for vertices outside is identical except for the first square vertex 𝖲𝖰_1: the two edges assigned to it now correspond to the same underlying edge-copy and may receive F, R labels. That said, both edges are now assigned to the first square vertex. If the square vertex is making either its first or last appearance, but not both, in the current block, the current assignment is sufficient. However, there are no reverse-charging edge-copies protecting 𝖲𝖰_1, and that warrants further analysis.
We follow the previous strategy in identifying block-reserve: even though for U=V, the vertex U=V is by definition not contributing any vertex-appearance factor, we now intend the block-reserve factor to pay for either additional factor due to the dormant step stemming from U=V, or the last appearance factor of 𝖲𝖰_1 but not both.
* 𝖲𝖰_1 makes both first and last appearance in the current block: in this case, the edges assigned to 𝖲𝖰_1 in the beginning of the top-down charging process are assigned to the first appearance of 𝖲𝖰_1. That said, it remains for us to identify two extra edge-copies for the last appearance factor.
Charging for terms without final h_2 attachment: these terms come with a normalization of |(u+s)/(s^2-ru)| = O(1/d), which we regard as two edge-copies and assign to the last appearance of 𝖲𝖰_1.
Charging for terms with final h_2 attachment:
we first note that any edge incident to 𝖲𝖰_1 must be closed, and further case in whether there is any surprise visit arriving at 𝖲𝖰_1 throughout the dangling path.
* Suppose that there is no surprise visit arriving at 𝖲𝖰_1 throughout the dangling path.
Consider the final departure from 𝖲𝖰_1, and note that it pushes out at least two edge-copies.
* If this is an M_β gadget, let r be the circle vertex it connects to via an h_2 edge. Suppose the random variable corresponding to (𝖲𝖰_1,r) first appears as an h_2 edge. In this case, the circle vertex has been assigned two edge-copies the first time it appears, as it is reached using an h_2 edge; we can therefore reserve the other two edge-copies from the final attachment for the missing last appearance factor of 𝖲𝖰_1;
* Suppose this is an M_β gadget, yet the random variable (𝖲𝖰_1, r) first appears as an h_1 edge in the current block. Note that there is at least one more edge-copy of the h_1 edge that is currently assigned an H-label; we can relabel it as R and assign both F/R copies to the vertex factor of the circle vertex. We now relabel the final h_2 step as H^*, note that these two edges get assigned a value of at most 2/d, and assign them both to the missing last appearance factor of 𝖲𝖰_1.
* To see that H^* does not require extra D_V· q factor, we observe that this is a circle vertex traversed before in the block, and for each block, H^* is unique. That said, it suffices for us to use a special label in [2] when H^* first appears in the block to identify the edge. Moreover, note that swapping H^* with the R step does not add to factor of the destination circle vertex as this edge no longer appears;
* If this is an M_α gadget, let r_1, r_2 be the circle vertices the square vertex is
connected to. Note that for each r_i, there must be at least two more edge-copies of the random variable (𝖲𝖰_1, r_i) that receive an H label, i.e., we have a factor of Õ(1/d). That said, we may reassign the factors and extract one full factor of 1/√(d) (with the remainder being Õ(1/√(d))), and since we have two of them, which combine to a factor of 1/d, assign it to the missing last appearance factor of 𝖲𝖰_1.
* There is a surprise visit arriving at 𝖲𝖰_1. We first observe this must be either an h_2 edge, or a pair of distinct h_1 edges. The surprise visit must be closed already, and their underlying R copies are intended for reverse-charging in the top-down charging scheme for the last appearance of 𝖲𝖰_1. In this case, the factor for the last appearance has already been assigned edges.
* Finding "block-reserve" for U=V:
Suppose 𝖲𝖰_1 is not making both its first and last appearance in the current block. We first note that the departure from U∪ V is special, as this is the only vertex that can appear on the boundary again without being reached by any edge, and this is not a dormant gadget M_D. That said, the most recent departure may contribute a factor to the vertex in U∪ V, and we now identify an edge-copy for this factor.
* Charging for terms without final h_2 attachment: for A^-11_m term where the final h_2 gadget is missing, we pick up a normalization constant of O(1/d) and we designate this as the block-reserve.
* Charging for terms with final h_2 attachment: for terms with the final h_2 gadget, the analysis of finding block-reserve from the case U≠ V applies identically for finding an h_2 edge that gets assigned to a circle vertex in the top-down traversal process. In particular, we may designate one edge-copy among the two copies of the identified h_2 edge as a block-reserve to charge the corresponding Õ(1) factors from .
Note that the edge identified from the above process is in fact stronger, as it carries a factor of O(1/√(d)) as opposed to Õ(1/√(d)). That said, since for U≠ V terms, the block-reserve is only assigned for Õ(1) factors from , either is sufficient.
§.§ Illustration via diagrams
[Figure (TikZ source omitted): the same two example blocks as in the earlier illustration, the left between boundary circle vertices u and v and the right between v and u, each a dangling path through square and circle vertices ending with an h_2 edge to the circle vertex t.]
This figure illustrates the charging scheme. The light blue edges are F edges and we assign their factor to their destination. The dark blue edges are R edges whose first appearance is in a different block; we also assign these edges to their destination. The red edges are reverse-charging R edges whose first appearance is in the current block; we assign these edges to the destination of the corresponding F edge (which is generally the source of this edge).
Note that two edge factors are missing from v. We obtain these factors from the two green edges pointing towards t.
[Figure (TikZ source omitted): two U=V examples, as in the earlier illustration; a block from u through square and circle vertices ending at a repeated circle vertex, and a small block from u' through a' to b' ending at a repeated copy of b'.]
[Figure (TikZ source omitted): a block between boundary vertices u and v passing through the square vertex a, the circle vertex b, and the square vertex c, ending with an h_2 edge to a repeated copy of b.]
§.§ Extension from R_A to sandwich shapes with jump
§.§.§ Analysis for R_u
The block-value function for (u/(ru-s^2))· R_u is bounded by
|u/(ru-s^2)|· B(R_u) ≤ 1/10 .
We first highlight the differences from R_A:
* We pick up an extra normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |u| ≤ 1;
* The J_m/d-jump comes with a factor of 1/d;
* The normalization factors combine to O(1/m).
Applying the "top-down" edge-copy charging scheme from earlier, we observe that Proposition <ref> applies as stated, except that there may not be any edge leading to the jump-square vertex; in that case, our prior charging demands 2 edge-copies to be assigned to the jump-square vertex, as it may be making its first or last appearance, and 4 if the vertex is making both its first and last appearance.
* If the vertex makes both its first and last appearance in the current block, the entire factor of O(1/m) is assigned to the vertex;
* If the vertex makes its first appearance in the current block, it suffices for us to assign a factor of 1/√(m) from the normalization to the vertex;
* If the vertex makes a middle/last appearance in the current block, it is a vertex not reachable via an R edge, hence a factor of q· D_V is needed; furthermore, it gets assigned a factor of (D_V· q) (in the case of contribution from H arrivals), or √(m) from the back-pay of the vertex factor (in the case of a last appearance). That said, assigning the O(1/m) factor against the factor of q· D_V ·√(m) is sufficient.
We then complete the analysis for R_u by observing that the assignment of the last attachment edge, as well as that of the "block reserve", follows from that of R_A.
§.§.§ Analysis for R_s
The block-value function for s/(ru-s^2) · R_s is bounded by
s/(ru-s^2) · B(R_s) ≤ 1/10 .
We note the differences from R_A:
* We pick up a normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |s| = O(√(d));
* The (1_mη^T+η· 1_m^T)/d-jump gives a factor of 1/d;
* The normalization factors combine to O(√(d)/m).
We then complete the proof by an edge-assignment scheme depending on whether it is a 1_m·η^T-jump or η· 1_m^T-jump.
Charging for η· 1_m^T-jump
In this case, the dangling path connected to U_∪ V_ ends with an h_2 attachment, and the charging restricted to edges and vertices in that component is identical to R_A. In particular, the "block reserve" edge is already identified from the component connected to U∪ V. Thus, we focus on the floating component, and notice we may traverse the floating component by starting from the jump-square vertex, that is the destination of the η· 1_m^T jump, and then follow a dangling A^-1 path from there.
Applying the "top-down" charging scheme on the floating component yields the following,
* Each vertex besides the jump-square vertex has its first, last appearance charged;
* There is an edge-copy from the floating component for "block reserve" that is of value 1/√(d);
* Similar to the R_u jump, the jump-square vertex may have its first and last appearance uncharged, and it suffices for us to assign a factor of 1/m to the first square vertex;
* Combining the normalization for R_s with the "block reserve" edge from the floating component gives us
O(√(d)/m·1/√(d)) = O(1/m). Assigning this to the jump-square vertex completes the charging.
Charging for 1_m·η^T-jump
In this case, the charging restricted to edges and vertices in that component is identical to R_A, except that the "block-reserve" edge is not yet identified. That said, we need to assign corresponding factors to each vertex in the floating component, as well as identify an edge for the "block-reserve".
For the floating component preceded by the 1_m·η^T-jump, traverse it from the jump-circle vertex, that is, the circle destination of 1_m·η^T. Apply the "top-down" charging on the floating component, except that we no longer stop before the final h_2 attachment edge, i.e., the assignment rule for any edge in the floating component charges each vertex's factor except the jump-circle vertex's. We then observe the following,
* The normalization factor of the block is O(√(d)/m);
* The jump-circle vertex, if making its first but not last appearance in this block, contributes a vertex factor of √(d); if making a middle or last, but not first, appearance in this block, it contributes a factor of O((D_V· q) ·√(d)); if making both its first and last appearance, it contributes a factor of d. That said, assigning a factor of 1/d to the jump-circle vertex is sufficient.
* That said, we may split the O(√(d)/m) factor as O(1/√(d)·1/d), assign the factor of O(1/√(d)) to the "block-reserve", and assigning 1/d to the jump-circle vertex completes our charging.
§.§.§ Analysis for R_r
The block-value function for r/(ru-s^2) · R_r is bounded by
r/(ru-s^2) · B(R_r) ≤ 1/10 .
Note that the differences from R_A are given by the following:
* We pick up an extra normalization of |1/(ru-s^2)| = O(d/m);
* We pick up a factor of |r| ≤ m/d;
* The (η·η^T)/d-gadget gives a factor of 1/d;
* We have a total factor of 1/d by combining the above factors.
The charging for the component connected to U∪ V is identical to the case of η· 1_m^T-jump, in particular, the "block reserve" is already identified from the component connected to U∪ V. That said, we focus on the floating component, and apply the "top-down" charging scheme, notice each vertex's factor but the jump-circle vertex's first/last appearance is assigned to some edge.
* The jump-circle vertex may be making first but not last, or last but not first appearance in the current block, in which case, it contributes a factor of at most √(d)· q· D_V;
* It may be making both first and last appearances in the current block, in which case, it contributes a factor of d;
* Assigning the normalization factor of 1/d to the jump-circle vertex is sufficient.
§.§ Pur bound for square vertices in R
The prior bound does not apply well for square vertices in , in particular, it should be pointed out that even before non-trivial intersection within each block along the dangling path, the argument based on 1-in-1-out (or its slightly generalized version of 2-in-2-out) falls apart in the analysis for .
In particular, one may consider a walk on R_S with the square vertex along the U-V path fixed throughout the walk. It is easy to verify that the fixed square vertex may have growing unclosed F edges without any surprise visit/high-mul step. That said, it is not sufficient to use the slack from such factors to offset the potential confusion due to R edges.
In this section, we give a new argument to handle the factor in the vanilla setting of R when there is no non-trivial intersection along the dangling gadget-path, and then extend it for the general cases of R where each block is not necessarily injective due to intersection across gadgets.
Gap from square middle appearance. For starters, we observe that a vertex may only push out unforced returns if it is making a middle appearance, given that it maintains a list of incident edges, including the additional information of which are closed. When it is making its last appearance, any currently unclosed edge needs to be closed, and therefore shall be pushed out, giving us a fixed set of edges being pushed out (where we momentarily ignore the question of whether one needs to distinguish among the edges in the edge-set). This immediately renders the following bound,
For any vertex v, let 𝖬𝗂𝖽𝖠𝗉𝗉(v) be the number of middle appearances of v throughout the walk, we have
(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉(v)
provided each square vertex pushes out at most 3 edges in each block throughout the walk, and each block is vertex-injective.
With the above bound, it may not be meaningful if we cannot obtain a slack from 𝖬𝗂𝖽𝖠𝗉𝗉. Fortunately, this is indeed the case for our setting, and we first observe this in the vanilla setting where there is no gadget-incursion within each block (in particular, this already applies immediately if the edges are injective within each block),
* We have assigned each square vertex at least 1 edge when it is making a middle appearance: to see this, the top-down charging scheme assigns each square vertex 2 edges when the square-vertex appears for the first time in the block; in the case of charging U∪ V, 1 edge may be re-routed from a square vertex making a global middle appearance. That said, at least 1 edge is assigned to each square vertex when it makes a middle appearance.
* Note that for a fixed vertex v in a given block, its local first appearance at the given block may be corresponding to a global middle appearance, and such mismatch is the source of our slack;
* Observe that each middle-appearance corresponds to a mismatch described above (provided each block is injective), and each such mismatch of local-global first appearance assigns a vertex making global middle appearance one edge-copy, that is a factor of O(1/√(d));
* However, since each vertex's middle appearance does not get assigned any vertex factor in our scheme, we may use the 1/√(d) gap to offset the 3 factors corresponding to the particular global middle appearance, that is
O(1/√(d)) · (q· D_V)^3 = o_d(1) .
Extension to gadget intersections. To handle dangling paths with potentially intersecting gadgets, it should be pointed out that the bound goes through as stated, while we do not necessarily have a gap from middle-appearance. In particular, it is now possible that in a given block, a square vertex appears several times and only has 2 edges assigned to it for its first local appearance, as any of its subsequent appearances in the given block follow via closing some F edge that gets opened up earlier in the walk, and is thus assigned to the destination as opposed to the given square vertex.
Towards generalizing the prior argument, we consider a specific subclass of global middle appearance of a vertex through the block walk,
For a given walk and a given block-step 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, we say a labeled vertex v∈ [m] makes a middle appearance in 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i if
* v makes appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i;
* v makes appearance at some 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_j for j<i, and at some 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_j' for j'>i.
With some abuse of notation, we continue to let 𝖬𝗂𝖽𝖠𝗉𝗉(v) denote the number of middle appearances of v. Note that this is clear when we are working with blocks that do not have block-injectivity, while it is equivalent to the previous definition in vertex-injective blocks.
In particular, we emphasize that following the above definition, in the case where a labeled vertex first appears at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i and appears multiple times in various gadgets in the dangling-path of 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, it is not considered as making a middle appearance at 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i.
For any square vertex v that does not push out dormant gadgets, at any time-t,
_t(v) ≤ P_t(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v))
where we define
P_t(v) := #(R steps closed from v by time t) + #(unclosed edges incident to v at t) - 4
In other words, by assuming at 4 possible return legs to be fixed each time a vertex is on the boundary, the number of unforced returns from a vertex by time t is at most 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v)).
The first inequality is definitional as we assume each vertex may have 4 edges being fixed, which incurs a cost of [4] for each vertex each time it pushes out an R step. Analogous to previous bounds, the base case is immediate when vertex first appears in the walk. In particular, we note that the above bound can be strengthened for the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i in which vertex v makes (globally) its first appearance.
For any square vertex v, for the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, at any time-t within the 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i ,
P_t(v) ≤ 3(s_t(v) + h_t(v)) - 2
Notice this is immediate when vertex v first appears, as it is incident to 2 edges, and thus we have P^(1)_t = -2 (as we have a -2 term since we maintain 3 edges to be fixed instead of just 2). The invariant holds as any subsequent departure opens up at most 2 F edges, and any subsequent arrival either closes both edges, or either arrival is along a surprise/high-mul visit, and give a net-gain of at most 3 in the number of unclosed F edges. This proves our claim.
It remains for us to consider the appearance of v in subsequent blocks, in particular, we start with the locally first appearance of the subsequent block. Let t be the time-mark in which v is making its first local appearance in the current-block. Notice at time t when the vertex first appears, it may be arrived via a single edge (as opposed to 2 due to the U-V path), and therefore it may push out at most 3 edges.
Appearance at the second block
If not, this is currently the second block in which v appears: applying the claim on the first block, and observe that the most recent departure opens up at most 2 F edges, while the current arrival closes 1 (unless H or S, in which case a net-gain of 3 suffices), we have
P_t(v) ≤ 3(s_t(v)+h_t(v)) -2 + 2 -1 =3(s_t(v)+h_t(v)) - 1
where the +2 corresponds to the 2 F edges opened up at the most recent departure, and the -2 term corresponds to the term in the hypothesis on first-block, and the -1 comes from the current arrival closing at least one R edge.
For any subsequent appearance of v in the current block, if any, the following is immediate,
P_t'(v) - P_t(v) ≤ 3 (s_t'(v) - s_t(v)) + 3 (h_t'(v) - h_t(v)) + 1
as we observe that
* The departure from the first vertex may open up at most 3 edges instead of 2;
* Any subsequent departure and arrival closes 2 edges, and opens up at most 2 edges, hence the previous argument applies, giving a net-gain of +1 due to the extra opening in the first departure.
That said, for any appearance of v in the second block at time t', we have
P_t'(v) = P_t(v) + (P_t'(v) - P_t(v)) ≤ 3(s_t'(v)+h_t'(v))
Appearance at the future blocks For any block, let t_0 be the local first appearance, and t_1 be the local final appearance, applying the above argument gives
P_t_1(v) - P_t_0(v) ≤ 3 (s_t_1(v) - s_t_0(v)) + 3 (h_t_1(v) - h_t_0(v)) + 1 .
That said, it suffices for us to bound P_t_0(v). This is bounded by
P_t(v) ≤ 3 ·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v))
Consider the base case when v appears at the third block, the most recent departure opens up at most 2 new F edges. To offset this, we use the gain in 𝖬𝗂𝖽𝖠𝗉𝗉_t(v), as the appearance of v in the second block is now counted as a middle appearance once v appears in the third block, which gives a +3 on the RHS. The bound extends to any subsequent block immediately.
This completes our proof of the bound.
Extension to dormant gadgets
To capture the change due to dormant gadgets, we note that for each square vertex at each block, we can assume one dormant edge it pushes out being fixed, while assigning a factor for any other dormant gadget it pushes out at that block. Notice this is a cost of at most 2 for each square vertex throughout the walk, as we may need to assume 1 dormant edge for its first appearance, and another for its last appearance. For any middle appearance, it suffices for us to assign a factor for any dormant edge it pushes out, as opposed to all-but-one in the case of a first/last appearance. This prompts us to define the following counter,
For each square vertex v, at any block 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i, let D_i be the number of dormant gadgets it pushes out at this block,
define D_i(v) := D_i - 1[ first/last appearance at i ]; additionally, the counter function is defined for the whole walk by taking
D(v) = ∑_i:v appears in 𝖡𝗅𝗈𝖼𝗄𝖲𝗍𝖾𝗉_i D_i(v)
By using an extra additive constant of 2, each return using a dormant leg is either forced or accounted for in D(v).
Each dormant-excess gadget corresponds to a circle vertex reached using an h_2 edge, and gets assigned a factor of at most Õ(1/√(d)) in the combinatorial charging argument.
Note that in the combinatorial charging argument, we have assigned both h_2 edges to the circle vertex's vertex factor, unless the square vertex is potentially making its first and last appearance at the same block, which is not ruled out by the definition of dormant excess. That said, for the circle vertex, it gets assigned at most a factor of √(d) in our scheme, while both edges assigned at least Õ(1/√(d)) each, and combining the above yields the desired.
By assuming at most 6 return legs to be forced, the number of unforced-return from v throughout the walk is at most
(v) ≤ 3·𝖬𝗂𝖽𝖠𝗉𝗉_t(v) + 3(s_t(v) + h_t(v)) + D(v) .
This allows us to effectively ignore this factor in the block-value analysis, as each factor can be distributed among surprise visits, high-mul visits, or middle appearances such that each is assigned at most 3 factors. Moreover, since each comes with an O(1/√(d)) factor, combining with the assigned factor contributes an o_d(1) term provided q^3/√(d) = o_d(1), which is sufficient for us as we set q = d^c for a small enough constant c.
§.§ Wrapping up
Given the edge-assignment scheme to vertex appearance, we now show why this immediately gives the desired bound on B(): we have the following factors,
* Each circle vertex gives a factor of √(d) for their first/last appearance; the assignment gives each vertex one global F/R edge-copy each, that is a factor of √(2)/√(d), giving a bound of
√(d)·√(2)/√(d) = √(2) ;
* Each square vertex gives a factor of √(m) for their first/last appearance, and the assignment above gives global F/R each two edge-copies each, that is a factor of
√(m)·2/d≤2√( m)/d ;
* Each square vertex that makes a global middle appearance gives O(1) such factors, while each such vertex appearance is assigned one edge, that is a factor of
(2q· D_V)^O(1)· O( √(2) q^2/√(d)) ≪ 1 .
For each vertex's appearance and edges assigned to it, we have a factor of at most
(1+o_d(1)) ·√(d)·√(2)/√(d)· 6 ≤ (1+o_d(1)) · 6√(2)
for a circle vertex, and
(1+o_d(1)) · 6 ·√(m)·2/d ≤ (1+o_d(1)) 12√(m)/d
for a square vertex where the factor 6 for each vertex comes from our bound that each vertex arrived using a forced R edge can be specified at a cost of [6].
By our edge-copy charging scheme and <Ref>, we observe the following,
* Treat the U-V path as a gadget, and we have 2 choices depending on whether U=V for the upcoming block;
* For each gadget-step, we sum over the edge-labelings;
* For each gadget along the dangling path, we pick up at most the following factor,
* M_α: it gives 2 circle vertices and 1 square vertex, which is a factor of at most
B_α := (1+o_d(1)) · 3^4·4/d^2·√(md^2)· 6^3 ;
* M_β: it gives 1 circle vertex and 1 square vertex, which is a factor of at most
B_β := (1+o_d(1)) · 3^4 ·2/d^2·√(md)· 6^2 ;
* M_D: it gives 1 circle vertex, which is a factor of at most (1+o_d(1)) · 3^2
if it is the first M_D gadget of the block, with the factor 3 counting the step-label of the edge-copies in the gadget, or a factor of at most B_D := Õ(1/√(d)) for any subsequent M_D gadgets.
* For the U- V path, the square vertex SQ_1 gives a factor of at most
(1+o_d(1)) · 6 ·√(m/d^2)
and the circle vertex combined gives a factor of at most
(1+o_d(1)) · (√(2))^2 · 6
where the factor of 6 comes from at most one circle vertex being reached using R edge.
Therefore, combining the above bounds, we have
B() ≤ (1+o_d(1)) · 6 (√(m/d^2)· 6 ·√(2)^2) · 3^2 ·∏_≤ d gadgets (B_α +B_β +B_D ) < 1/2
provided
B_α, B_β < 1/2
and
6 (√(m/d^2)· 6 ·√(2)^2) · 3^2 < 1/2
It can be verified that it suffices for us to take m < 1/7000000· d^2, though we do not emphasize the particular constant, as we believe a more careful argument tracking our above bounds can render an improved constant without much work.
With probability at least 1 - 2^-d^ for some constant >0,
_ <1/2 .
This follows by setting q = d^c for some constant c>0 in <ref>, combined with our block-value function bound for a small enough constant c.
§ ACKNOWLEDGEMENTS
We would like to thank anonymous reviewers for their various suggestions in polishing our writing. J.H., P.K., and J.X. are grateful to Prayaag Venkat for bringing this problem to their attention in his theory lunch talk at CMU.
§ DEFERRED CALCULATIONS
[Restatement of <Ref>]
With probability at least 1-o_d(1),
‖η‖_2^2 ≤ (1+o_d(1)) 2m/d
We first unpack the inner product,
‖η‖_2^2 = ∑_i∈ [m]η_i^2 = ∑_i∈ [m]( ∑_j∈ [d] (v_i[j]^2-1/d))^2 = ∑_i∈ [m]∑_j_1, j_2 ∈ [d](v_i[j_1]^2-1/d)(v_i[j_2]^2 -1/d) .
For any q>0, we then have
𝔼[‖η‖_2^2q] = 𝔼[∑_s_1,s_2,…, s_q ∈ [m]∑_a_1,a_2, …, a_q ∈ [d]
b_1, b_2,…,b_q ∈ [d] ∏_i=1^q (v_s_i[a_i]^2 - 1/d ) · (v_s_i[b_i]^2 - 1/d ) ] ≤( 1+o_d(1))(2md/d^2)^q
where the final bound follows in a similar way to our norm bounds as
* For each new s_i, we case on whether a_i = b_i, and if so, we pick a factor of d for a_i=b_i and a factor of m for s_i;
* Otherwise, if we have a_i≠ b_i, since each edge is mean 0, we can assign a factor of √(d · 2q) to each of the a_i, b_i for their first two appearances, and a factor of q for a_i, b_i making their third appearances and onwards; similarly, we assign a factor of √(m · q) for the first two appearances of s_i or q if s_i is not making the first two appearances;
* Each H_2 edge (v_s_i[t_i]^2 - 1/d ) gets assigned its standard deviation that is √(2)/d when it appears for the first two times, otherwise a factor of O(q/d);
* To see the final bound, notice we can case on whether each H_2 edge is making their first two appearances.
* In the case both edges making first two appearances, we pick up an edge-value of exactly (√(2)/d)^2= 2/d^2; otherwise, we pick up an edge-value at most (q/d)^2= q^2/d^2;
* Assuming both edges making first two appearances, and if a_i=b_i matches, we pick up a factor of
(1+o_d(1)) 2/d^2· md
* Assuming both edges making first two appearances while a_i≠ b_i, we pick up a factor of at most
(1+o_d(1)) 2/d^2·√(md^2 q^3)
* If some edge is making third appearance or beyond, we pick up a factor at most
(1+o_d(1)) q^2/d^2· O(q^2√(dq))
as we have at most one new ”circle” vertex that takes label in [d] and gets assigned weight at most √(dq).
* Summing over the above cases gives us a bound of
(1+o_d(1)) 2/d^2· md = (1+o_d(1)) 2m/d
Taking the (1/q)-th root and applying Markov's inequality gives us the desired bound.
[Bound of s]
Recall that s = 1 + (1/d)·η^T A^-1 1_m; we have
|s| ≤ 1+o_d(1)
It suffices for us to bound |(1/d)·η^T A^-1 1_m| ≤ o_d(1). We adopt the truncation strategy from our main analysis, and split it as
(1/d)·η^T A^-1 1_m = (1/d)·η^T (T_0 + E) 1_m = (1/d)·η^T T_0 1_m + (1/d)·η^T E 1_m
where we focus on the analysis of the first term.
B( (1/d)·η^T T_0 1_m ) ≤ Õ(1/√(d))
We mimic our analysis for the well-conditionedness of A^-1, in particular, each term η T_0 1_m is a floating component and moreover, we apply our edge-assignment scheme from analysis of M by traversing from the circle-vertex on one end (for concreteness, take it to be the one from η^T). With some abuse of notation, let this vertex be 𝖼𝗂𝗋𝖼_1 again. Consider the following assignment-scheme,
* Assign each F, H and non-reverse-charging R to the vertex it leads to;
* Assign each reverse-charging R step to the destination vertex of the underlying F copy;
* Assign the normalizing constant of 1/d to the starting circle vertex 𝖼𝗂𝗋𝖼_1.
The desired block-value bound then follows immediately by the following proposition.
Each vertex making first/last appearance outside 𝖼𝗂𝗋𝖼_1 is assigned the required (non-surprise) F/R edge-copies. Moreover, there is either an extra edge-copy of factor Õ(1/√(d)), or 𝖼𝗂𝗋𝖼_1 is making first/last but not both appearances while it is assigned a factor of 1/d.
The first part follows immediately from our analysis of M, that we assign each vertex's first factor to the edge that discovers it, and if the vertex is making the last appearance in the same block, the factor is assigned by the R step of the edge that discovers it. To see that the assignment of last appearance is using R edge alone when the first, last appearances are not in the same block, note that each vertex is assigned the first edge ”locally” that discovers the first vertex. In the case that this edge is non-R, the edge must be closed later on and we can swap the assignment of the underlying R copy of that edge. The swapping factor is valid as the R copy of the edge is intended for protecting the vertex's both first and last appearance in the current block, which is again not needed in this case.
To see the second part of the claim, note that the traversal path of the floating component starts from a circle vertex while ends at a square vertex. On the one hand, note that it is immediate if the first circle vertex 𝖼𝗂𝗋𝖼_1 is making first/last but not both appearances in the current block as it is assigned the normalizing constant of 1/d, giving a gap of 1/√(d). On the other hand, if the 𝖼𝗂𝗋𝖼_1 is making both first and last appearances, there are two cases,
* The final departure from 𝖼𝗂𝗋𝖼_1 is a reverse-charging step, in this case, there is a surprise visit within this block, and it gives an extra copy of edge-value Õ(1/√(d));
* The final departure from 𝖼𝗂𝗋𝖼_1 is not reverse-charging, this edge either makes a middle appearance before as an H-step, or its underlying F copy is a surprise visit arriving at 𝖼𝗂𝗋𝖼_1. In either case, this gives an extra edge-copy of factor Õ(1/√(d)).
§ ANALYSIS OF M_D
§.§ Return bound for M_α, M_β
We first remind the reader of some definitions explained in <Ref>.
We call a step in the walk a surprise visit if it uses a new edge to arrive at a visited vertex (that appears before the current block) in the walk.
It shall be pointed out that in a block-step walk, if we choose to process edges in order, an edge that leads to a visited vertex is not necessarily a surprise visit as it may be opened up by another edge in the current block.
Throughout the walk, we call a step using an edge that already appears at least twice by the current step a high-mul step.
We split the return bound based on circle and square vertices.
Return from circle vertices
The following definition of and the subsequent bound on _t is identical to <Ref>.
At time t, for each vertex v, let _t(v) be defined as the following,
* Each return step closed from v contributes an 1 to _t(v);
* Each unclosed F edge currently incident to v contributes an 1;
* An extra -1 as we can maintain a counter for each vertex such that there is always one potential return-step presumed to be forced.
For any circle vertex v on the boundary of the walk at time t∈ [q] in M_α and M_β,
_t(v) ≤ 2· n_t(v) + h_t(v)
where we define
* n_t(v) is the number of surprise visits arriving at v by time t;
* h_t(v) is the number of high-mul steps arriving at v by time t.
This argument exploits the property that each circle vertex only gets encoded into the boundary by one edge, and also only pushes out 1 edge on the boundary of each block, and we give a proof by induction. Notice this is immediate when vertex v appears for the first time, as there is only an unclosed F edge incident to v (the step that explores v and this cancels out with the -1 in the factor).
Suppose currently at time t vertex v is on the boundary, and the hypothesis holds, let time t' be the next time v appears on the boundary. The bound follows as we observe the departure from v (along the U-V path) may introduce an +1 to _t(v) (assuming v only splits out one edge when on the boundary), and any subsequent arrival at time t'-1 either closes an F edge, in which case we have _t(v) = _t'(v); otherwise it opens up a new F edge to v (which is a surprise visit) or follow a high-mul step, and we have an increment of at most 2 and 1 respectively in the change of unclosed incident F edges.
Return bound for square vertices
At time t, for each vertex v, let _t(v) be defined as the following,
* Each return step closed from v contributes an 1 to _t(v);
* Each unclosed F edge currently incident to v contributes an 1;
* An extra -2 as we can maintain a counter for each vertex such that there are always two H_1 edges (or a pair of H_2 edge viewed as two edges) presumed to be forced as each square vertex pushes out two edge-copies a time.
Notice that the only difference between the circle and square definitions is the final subtraction.
For any square vertex v on the boundary of the walk at time t∈ [q] in M_ and M_β,
_t(v) ≤ 4· n_t(v) + 4· h_t(v)
where we define
* n_t(v) is the number of surprise visits arriving at v by time t;
* h_t(v) is the number of high-mul steps arriving at v by time t;
The proof structure is identical to the previous proof for circle vertices, and note that this holds trivially in the base case when v first appears. Suppose this holds for v at time t, and let time t' be the subsequent appearance of v. Note that the last departure from v opens up at most 2 F edges, and consider how we arrive back at v at time t'-1.
* Suppose we use both R edges in the arrival, this offsets the gain of the at most 2 new F edges that may be opened up from time t and the hypothesis holds;
* Otherwise, there is at least an F or H edge, and we pick up a gain of at most 4 new F edges compared to time t, thus it suffices to (loosely) assign a gain of 4 per surprise visit (if F edge) or H-visit.
This completes the proof.
Recall from <Ref> that M_D[i,i] = ‖v_i‖_2^4 - (2/d)·‖v_i‖_2^2 - 1.
For ease of technical analysis, it is helpful to note here that we can further decompose M_D to be the following matrices:
We write
M_D = M_D,1 + M_D,2 + (2 + 2/d)· M_D,3
where for any i ∈ [m],
* M_D,1 [i,i ] =∑_a≠ b∈ [d](v_i[a]^2 -1/d) (v_i[b]^2 - 1/d);
* M_D,2 [i,i ] =∑_a∈ [d] (v_i[a]^4 - 6/dv_i[a]^2 + 3/d^2);
* M_D,3 [i,i ] =∑_a∈ [d] (v_i[a]^2 - 1/d).
M_D[i,i] = ‖v_i‖_2^4 - (2/d)·‖v_i‖_2^2 - 1.
We first unpack ‖v_i‖_2^4: ‖v_i‖_2^4 = (∑_a∈[d] v_i[a]^2)^2 = ∑_a v_i[a]^4 + ∑_a≠ b v_i[a]^2 v_i[b]^2.
On the one hand, for any a≠ b ∈ [d],
v_i[a]^2 v_i[b]^2 = ((v_i[a]^2-1/d)+1/d) · ((v_i[b]^2-1/d)+1/d)
thus,
∑_a≠ b ∈ [d] v_i[a]^2 v_i[b]^2 = ∑_a≠ b∈ [d](v_i[a]^2-1/d)(v_i[b]^2-1/d) + 2·(d-1)/d·∑_a(v_i[a]^2-1/d) + d(d-1)/d^2 ,
where the first term is M_D,1 and the second is (2-2/d)·M_D,3.
On the other hand,
∑_a∈ [d] v_i[a]^4 = ∑_a∈ [d]( (v_i[a]^4 - 6v_i[a]^2/d + 3/d^2) + 6/d·(v_i[a]^2 - 1/d) + 3/d^2)
= ∑_a∈ [d] (v_i[a]^4 - 6v_i[a]^2/d + 3/d^2) + 6/d·∑_a∈ [d] (v_i[a]^2 - 1/d) + 3/d ,
where the first term is M_D,2 and the second is (6/d)·M_D,3.
Thus, ‖v_i‖_2^4 = M_D,1 + M_D,2 + (2+4/d)·M_D,3 + 1 + 2/d.
Next, observe that ‖v_i‖_2^2 = M_D,3[i,i] + 1.
Thus, we have
M_D[i,i] = ‖v_i‖_2^4 - (2/d)·‖v_i‖_2^2 - 1
= M_D,1 + M_D,2 + (2+2/d)·M_D,3
completing the proof.
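As an optional numerical sanity check of this diagonal decomposition (ours, not part of the paper), the identity can be verified on an arbitrary vector; the Gaussian draw with entry variance 1/d below is only an illustrative choice of v_i.
import numpy as np

rng = np.random.default_rng(0)
d = 50
v = rng.normal(0.0, 1.0 / np.sqrt(d), size=d)  # illustrative choice of v_i

lhs = np.sum(v**2) ** 2 - (2.0 / d) * np.sum(v**2) - 1.0   # M_D[i,i]
c = v**2 - 1.0 / d
MD1 = np.sum(np.outer(c, c)) - np.sum(c**2)                # sum over a != b
MD2 = np.sum(v**4 - 6.0 * v**2 / d + 3.0 / d**2)
MD3 = np.sum(c)
rhs = MD1 + MD2 + (2.0 + 2.0 / d) * MD3
print(abs(lhs - rhs))  # ~1e-16: the identity holds for any vector v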
For matrices M_D,1, M_D,2, M_D,3 defined in <Ref>,
we have the following bounds
* B(M_D,1) ≤O(1/d^2·√(d^2)) = O(1/d);
* B(M_D,2) ≤O(√(d)/√(d)^8 );
* B(M_D,3) ≤O(√(d)/d) = O(1/√(d)).
As a consequence, ‖M_D‖ ≤ O(1/√(d)) = o_d(1) with probability 1- 2^-q/log d.
We observe the following,
* The square vertex is in U∩ V and hence bound to be making a middle appearance (ignoring the first and the last block of the walk as they are offset by long-path);
* Each circle vertex is either making a first/middle/last appearance, and note that instead of keeping track of factor, it suffices for us to assign it a cost of at most √(n)· (q· D_V);
* For M_D,1, each edge gets assigned a factor of at most q/d, and we have at most two circle vertices outside making first/middle/last appearances, which give a bound of O(q^2/d^2·√(d^2) (q· D_V)^2).
* M_D,3 is identical to M_D,1 except there is only one circle vertex outside, which gives a bound of O(q/d·√(d)· q· D_V ).
* For M_D,2, each edge gets assigned a factor of at most q/√(d)^8, and we have one single circle vertex outside, which combines to a bound of O(q/√(d)^8 ·√(d)· (q· D_V)).
Finally, recalling that we set q^O(1)· D_V ≪√(d) completes the proof to our lemma.
§ SKETCH OF AN ALTERNATIVE ANALYSIS
For each chain, we define local F, R, S, and H edges as follows. Here we traverse the chain from top to bottom.
* We say that an edge is a local F edge if it is appearing for the first time in the current chain and its destination is appearing for the first time in the current chain.
* We say that an edge is a local R edge if it is appearing for the last time in the current chain.
* We say that an edge is a local H edge if it appears again both before and afterwards in the current chain.
* We say that an edge is a local S edge if it is appearing for the first time but its destination has appeared before in the current chain.
For our local analysis, there are two circle vertices which we may need to be careful about.
Let v_left be the circle vertex which is at the top left, let v_right be the circle vertex which is at the top right (which may be equal to v_left), and let v_bottom be the circle vertex which is at the bottom.
We can always take the minimum weight vertex separator to be v_left, so we do not need to worry about v_left. However, as we will see, we have to be careful about v_right and v_bottom.
We say that an edge is locally vanishing if we have that after the local intersections, the product of it and the edges parallel to it has a nonzero constant term.
Some examples of locally vanishing edges are as follows.
* Two parallel edges with label 1 are locally vanishing as x^2 = √(2)·(x^2 - 1)/√(2) + 1
* Two parallel edges with label 2 are locally vanishing as
((x^2 - 1)/√(2))^2 = √(6)·(x^4 - 6x^2 + 3)/√(24) + 2√(2)·(x^2 - 1)/√(2) + 1
* More generally, two parallel edges are locally vanishing if they have the same label k and are not locally vanishing if they have different labels.
* Three parallel edges with labels 1, 1, and 2 are locally vanishing as
x^2·(x^2 - 1)/√(2) = 2√(3)·(x^4 - 6x^2 + 3)/√(24) + 5·(x^2 - 1)/√(2) + √(2)
We say that a vertex v is locally isolated if v is not equal to the top left circle or the top right circle and all edges incident to v are locally vanishing.
For all edges except for two special edges e^*_r and e^*_b which we describe below, we assign them as follows.
* For each local F edge, we assign it to its destination. For the edge at the top between v_right and its neighboring square vertex, we consider its destination to be the square vertex.
* For each local R edge, we assign it to its origin unless it is a dangling edge with label 2. If it is a dangling edge with label 2, we split it between its endpoints.
* For each local S and H edge, we assign half of it as a bonus to its origin and half of it as a bonus to its destination.
All vertices except v_right and v_bottom have the required edge factors.
For each circle vertex u except for v_left (which doesn't need any edges), v_right, and v_bottom, consider the first and last time u appears in the current chain. The first time u appears, there must be a local F edge pointing to it which gives u one edge. If u only appears once, it cannot be locally isolated, so it only needs one edge. Otherwise, the last time u appears, it can only be locally isolated if all edges incident to it are R edges. In this case, u obtains a second edge.
Similarly, for each square vertex v, consider the first and last time v appears in the current chain. The first time v appears, there must be two local F edges with label 1 or one local F edge with label 2 pointing to it. The last time v appears, it can only be locally isolated if all edges incident to it are R edges, in which case v obtains two additional edges.
Note that v_right is an exception because the first time it appears, it does not have a local F edge pointing to it. Similarly, v_bottom is an exception because the last time it appears, it may be incident to an R edge with label 2 without gaining an additional edge.
When v_left≠ v_right and v_right is not equal to the final circle vertex, we find/define e^*_r through the following iterative process. We start with the edge e between v_right and its neighbor. Note that this is a local F edge going down from a vertex v = v_right which is not locally isolated and which is not the final circle vertex. We now have the following cases
* e does not appear again and the other endpoint of e is the final circle vertex. In this case, we take e^*_r = e and assign one edge factor from it to v_right. If it has label 2, we keep one edge factor in reserve in case e^*_b = e^*_r = e.
* e does not appear again and its other endpoint is not the final circle vertex. In this case, letting v' be the other endpoint of e, we take e' to be an edge going down from v'. If e' is not a local F edge, we take e^*_r = e' and assign it to v_right. We can do this because v' is not locally isolated so it already has all of the edge factors it needs. If e' is a local F edge, we again have a local F edge e' going down from a vertex which is not locally isolated and is not the final circle vertex, so we repeat this process.
* e appears again. In this case, let e' be the next time e appears. If e' is not a local R edge whose destination is v, we take e^*_r = e' and assign it to v_right. If e' is a local R edge whose destination is v, we let e” be an edge going down from v.
If e” is not a local F edge then we take e^*_r = e” and assign it to v_right. If e” is a local F edge then we are still in the situation where we have a local F edge going down from a vertex which is not locally isolated and is not equal to the final circle vertex, so we repeat this process.
We find/define e^*_b through the following iterative procedure. We start with the edge e with label 2 between v_bottom and its neighbor. Note that e must be a local R edge. We then do the following.
* If e is not a locally vanishing edge then we take e^*_b = e. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r.
* If e is a locally vanishing edge then e must appear above. Consider the previous time e appears and let e' be this copy of e. There are a few possibilities for e'.
* e' is an edge with label 2 going from a copy of v_bottom to the bottom square vertex of the current block which is not equal to the top vertex of the current block. If e' is not a local F edge then we take e^*_b = e'. We assign half of e^*_b = e to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e' is a local F edge, let e” be the edge from the top square vertex of the current block to the copy of v_bottom. If e” is not a local R edge then we take e^*_b = e”. We assign half of e^*_b = e” to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e” is a local R edge then we are in the same situation as before so we repeat this process.
* e' is an edge with label 2 from the top square vertex in the current block which does not equal the bottom square vertex of the current block to a copy of v_bottom. In this case, we take e^*_b = e'. If e^*_b = e' is not a local F edge then we assign half of it to v_bottom and keep half in reserve in case e^*_b = e^*_r. If e^*_b = e' is a local F edge then we assign it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r. The reason for this is that e^*_r can only be a local F edge in case 1, in which case it does not appear again (while e' appears again by definition).
* e' is an edge with label 2 which hangs off of a square vertex which is both the top and bottom square vertex for the current block. In this case, we take e^* = e' and assign all of it to v_bottom. Note that in this case, we cannot have that e^*_b = e^*_r.
* e' is a local H edge with label 1. In this case, we take e^*_b = e' and assign it to v_bottom. Here we may have that e^*_b = e^*_r but if we do, one of the endpoints of e^*_b = e' is not locally isolated so we can take an edge factor from it and give the edge factor to v_right.
arXiv:2307.04397v1 [q-bio.QM, q-bio.MN] (http://arxiv.org/abs/2307.04397v1), submitted 10 July 2023
Inria Saclay, Lifeware project-team, Palaiseau, France
[email protected] [email protected]
On Estimating Derivatives of Input Signals in Biochemistry
Mathieu Hemery and François Fages
July 8, 2023
==========================================================
The online estimation of the derivative of an input signal is widespread in control
theory and engineering. In the realm of chemical reaction networks (CRN), this raises
however a number of specific issues on the different ways to achieve it. A CRN pattern
for implementing a derivative block has already been proposed for the PID control of
biochemical processes, and proved correct using Tikhonov's limit theorem. In this
paper, we give a detailed mathematical analysis of that CRN, thus clarifying the
computed quantity and quantifying the error made as a function of the reaction kinetic
parameters. From a synthetic biology perspective, we show how this can be used to design error-correcting terms
to compute online functions involving derivatives with CRNs.
From a systems biology perspective, we give the list of models in BioModels containing (in the sense of subgraph
epimorphisms) the core derivative CRN,
most of which are models of oscillators and control systems in the cell,
and discuss in detail two such examples: one model of the circadian clock and one model of a bistable switch.
§ INTRODUCTION
Sensing the presence of molecular compounds in a cell compartment is a necessary task of
living cells to maintain themselves in their environment, and achieve high-level functions
as the result of low-level processes of basic biomolecular interactions. The formalism of
chemical reaction networks (CRN) <cit.> is both a useful abstraction to
describe such complex systems in the perspective of systems biology
<cit.>, and a possible molecular programming language in the perspective
of synthetic biology <cit.>.
Sensing the concentration levels of molecular compounds has been well-studied in the
domain of signal transduction networks. For instance, the ubiquitous CRN structure of
MAPK signaling networks has been shown to provide a way to implement analog-digital
converters in our cells, by transforming a continuous input signal, such as the
concentration of an external hormone activating membrane receptors, into an almost
all-or-nothing output signal according to some threshold value of the input, i.e. using a
stiff sigmoid as dose-response input-output function <cit.>.
The analysis of input/output functions fits well with the computational theory of CRNs. In particular, the
Turing-completeness result shown in <cit.> for the interpretation by Ordinary Differential Equations (ODE) of CRNs,
possibly restricted to elementary CRNs using mass-action law kinetics and
at most bimolecular reactions, demonstrates the generality of this
approach to biomolecular programming. Furthermore, it comes with an algorithm to automatically generate a finite CRN for
implementing any computable real function. Such a compiler is implemented
in our CRN modeling software BIOCHAM <cit.> in several forms, including a
theoretically more limited but practically more interesting framework for robust online computation <cit.>.
Sensing the derivative of an input molecular concentration is nevertheless beyond the scope
of this computational paradigm since it assumes that the input molecular concentrations are stabilized
at some fixed values which makes no sense for computing the derivative.
Furthermore, it is well-known that the derivative of a computable real function is not
necessarily computable <cit.>. We must thus content ourselves with
estimating the derivative of an input with some error, instead of
computing it with arbitrary precision as computability theory requires.
In control theory and engineering, online estimations of input signal derivatives are
used in many places. Proportional Integral Derivative (PID) controllers adjust a target
variable to some desired value by monitoring three components: the error, that is the
difference between the current value and the target, its
integral over a past time slice, and its current derivative. The derivative term can
improve the performance of the controller by avoiding overshoots and solving some
problematic cases of instability.
Following early work on the General Purpose Analog Computer (GPAC) <cit.>,
the integral terms can be implemented with CRNs using simple
catalytic synthesis reactions such as A → A+B for integrating A over time,
indeed B(T)=∫_0^T A(t) dt. Difference terms can be implemented using the
annihilation reaction A_+ + A_-→∅ which is also used in
<cit.> to encode negative values by the difference of two molecular
concentrations, i.e. dual-rail encoding.
This is at the basis of the CRN implementations of, for instance, antithetic PI
controllers presented in <cit.>.
For the CRN implementation of PID controllers, to the best of our knowledge three different CRN templates have been
proposed to estimate derivative terms. The first one, by Chevalier et al.
<cit.>, is inspired by bacterial chemotaxis, but relies on strong restrictions upon
the parameters and the structure of the input function, making it apparently limited in
scope.
A second one, proposed by Alexis et al. <cit.>, uses tools from signal theory
to design a derivative circuit with offset coding of negative values
and to provide analytic expressions for its response.
The third one, developed by Whitby et al. <cit.>, is practically similar in its
functioning to the one we study here, differing only in minor implementation details,
and is proven correct through Tikhonov's limit theorem.
This result ensures that when
the appropriate kinetic rates tend to infinity, the output is precisely the derivative of
the input.
In this paper, we give a detailed mathematical analysis of that third derivative CRN and quantify the
error made as a function of the reaction kinetic parameters, by providing a first-order
correction term.
We illustrate the precision of this analysis on several examples,
and show how this estimation of the derivative can be actively used with error-correcting terms to compute elementary mathematical
functions online.
Furthermore, we compare our core derivative CRN to the CRN models in the curated part of the BioModels.net model repository.
For this, we use the theory of subgraph epimorphisms (SEPI)
<cit.> and its implementation in BIOCHAM <cit.>,
to identify the models in BioModels which contain the derivative CRN structure.
We discuss in some detail the SEPIs found on two such models:
one of the smallest eukaryote circadian clock models <cit.>,
and a model of the bistable switch at the restriction point of the cell cycle <cit.>.
The rest of the article is organized as follows. In Section <ref>, we provide some
preliminaries on CRNs and their interpretation by ODEs. We present the core differentiation CRN in
Section <ref>, both in terms of some of its possible biological interpretations and of its mathematical properties.
Section <ref> develops the mathematical analysis to bound the error made by that core CRN, and Section <ref> gives some examples to test the validity of our estimation and the possibility of introducing error-correcting terms.
Section <ref> is then devoted to the search for that derivative CRN pattern in the BioModels repository and to the analysis of the matchings found in two cases.
Finally, we conclude on the perspectives of our approach to both CRN design at an abstract mathematical level,
and comparison to natural CRNs to help understand their functions.
§ PRELIMINARIES ON CRNS
§.§ Reactions and Equations
The CRN formalism allows us to represent the molecular interactions that occur on a finite set
of molecular compounds or species, {X_i}_i ∈ 1 … n, through a finite set of
formal (bio)chemical reactions, without prejudging their interpretation
in the differential, stochastic, Petri Net and Boolean semantics hierarchy <cit.>.
Each reaction is a triplet
(R,P,f), also written R → P with rate function f,
where R and P are multisets of respectively reactant and product species in
{X_i}, and f:_+^n ↦_+ is a kinetic rate function of the reactant species.
A CRN is thus entirely described by the two sets of n species and m reactions:
{X_i}, {R_s → P_s with rate f_s}.
The differential semantics of a CRN associates positive real valued molecular concentrations,
also noted X_i by abuse of notation,
and the following ODEs which define the time evolution of those concentrations:
d X_i/dt = ∑_s ∈ S (P_s(X_i) - R_s(X_i)) f_s(X),
where P_s(X_i) (resp. R_s(X_i)) denotes the multiplicity (stoichiometry) of X_i in the multiset of products
(resp. reactants) of reaction s.
In the case of a mass action law kinetics,
the rate function is a monomial, f_s = k_s ∏_x ∈ R_s x,
composed of the product of the concentrations of the reactants by some positive constant k_s.
If all reactions have mass action law kinetics, we write the rate constant in place of the rate function R P,
and the differential semantics of the CRN is defined by a
Polynomial Ordinary Differential Equation (PODE).
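As a small illustration of the two definitions above (our example, not taken from the paper), the mass-action differential semantics of the single bimolecular reaction A + B → C with rate constant k reads:
\[
\frac{dA}{dt} = \frac{dB}{dt} = -k\,A\,B, \qquad \frac{dC}{dt} = k\,A\,B .
\]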
From the point of view of the computational theory of CRNs, there is no loss of generality
to restrict ourselves to elementary CRNs composed of at most bimolecular reactions with
mass action law kinetics. Indeed, <cit.> shows that any computable real
functions (in the sense of computable analysis, i.e. with arbitrary finite precision by a
Turing machine), can be computed by such a CRN, using the dual-rail encoding of real
values by the difference of molecular concentrations, x=X_+-X_-. While our compiler
ensures that the quantity X_+-X_- behaves properly, it is also important to degrade
both of them with an annihilation reaction, X_+ + X_- → ∅,
to avoid a spurious increase of their concentration.
Those annihilation reactions are supposed to be faster than the other reactions of the CRN.
The first example given in <cit.> showed the compilation of the cosine function of
time, y=cos(t) in the following CRN:
A_p → A_p+y_p
A_m → A_m+y_m A_m(0)=0, A_p(0)=0
y_m → A_p+y_m
y_p → A_m+y_p y_m(0)=0, y_p(0)=1
y_m + y_p → ∅
A_m + A_p → ∅
The last two reactions are necessary to avoid an exponential increase of the species concentration.
The associated PODE is:
d(A_m)/dt = y_p-fast*A_m*A_p A_m(0) =0
d(A_p)/dt = y_m-fast*A_m*A_p A_p(0) =0
d(y_m)/dt = A_m-fast*y_m*y_p y_m(0) =0
d(y_p)/dt = A_p-fast*y_m*y_p y_p(0) =1
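The following minimal SciPy sketch (ours, not part of the paper) integrates this PODE and checks that y_p - y_m tracks cos(t); the numerical value chosen for the fast annihilation rate constant is an arbitrary illustrative assumption.
import numpy as np
from scipy.integrate import solve_ivp

fast = 100.0  # assumed value standing in for the "fast" annihilation rate

def pode(t, x):
    A_m, A_p, y_m, y_p = x
    return [y_p - fast * A_m * A_p,
            y_m - fast * A_m * A_p,
            A_m - fast * y_m * y_p,
            A_p - fast * y_m * y_p]

sol = solve_ivp(pode, (0.0, 10.0), [0.0, 0.0, 0.0, 1.0],
                method="LSODA", dense_output=True)  # stiff because of fast
ts = np.linspace(0.0, 10.0, 200)
A_m, A_p, y_m, y_p = sol.sol(ts)
print(np.max(np.abs((y_p - y_m) - np.cos(ts))))  # small residual expected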
§.§ CRN Computational Frameworks
The notions of CRN computation proposed in <cit.> and <cit.>
for computing input/output functions, do not however provide
a suitable framework for computing derivative functions.
Both rely on a computation at
the limit, meaning that the output converges to the result of the computation whenever the CRN is either properly
initialized <cit.>, or the inputs are stable for a sufficient period of
time <cit.>. To compute a derivative, we cannot ask that the input stay fixed for
any period of time as this would imply a null derivative. We want the output to follow
« at run time » the derivative of the input.
Our question is thus as follows. Given an input species X following a time course
imposed by the environment X(t), is it possible to perform an online computation such
that we can approximate the derivative dX/dt on the concentration of 2 output
species using a dual-rail encoding?
The idea is to approximate the left derivative by getting back to its very mathematical
definition:
dX/dt(t) = lim_ϵ→ 0^+ (X(t)-X(t-ϵ))/ϵ,
but how can we measure X(t-ϵ)?
§ DIFFERENTIATION CRN
§.§ Biological intuition using a membrane
One biological intuition we may have to measure a value in a previous time is to use a
membrane with a fast diffusive constant. Indeed, if we suppose that the input is the
outside species, the inside species equilibrates to follow the concentration of the
outside one (the input) but also suffers a lag due to the diffusion. Building upon
this simple trick leads to the CRN presented in Fig. <ref>.
As the derivative may be positive or negative, a dual-rail encoding is used for the derivative.
This CRN is
mainly equivalent to the derivative block proposed in <cit.> apart from the
fact that we suppose (for the sake of clarity) that the input stays positive and no dual-rail encoding is used for it.
In the case of a
dual-rail encoded input, the two species need to have the same permeability through
the membrane, otherwise the delay is not the same for the positive and
negative parts.
The delay is thus introduced through a membrane
under the assumption that the outside concentration is imposed by the environment. This
conveniently explains why the kinetic rates are the same for the two monomials in the
derivative of , but this is not mandatory.
Indeed two other settings can be used to construct such a CRN without relying on a
membrane. We could use phosphorylation and dephosphorylation reactions, where the delayed internal species would be the phosphorylated form of the input. Or we could, as in <cit.>, rely on a catalytic production of the internal species by the input and a degradation reaction of the internal species. A drawback of these two other implementations is that they need to be tuned to minimize the difference between the rates of the two monomials in the derivative of the internal species. Otherwise a proportionality constant is introduced between the input and its internal copy, and needs to be corrected by adjusting the production rates of D_+ and D_-.
However, the membrane implementation also has its own drawback as it requires the reaction
producing D_+ under catalysis by the outside species to occur through the membrane. We may think of a membrane protein M that mediates this reaction, with M acting as an additional catalyst. Then, since its concentration is constant, it can simply be wrapped up in the kinetic constant of the reaction.
Which of these three implementations should be chosen may depend on the exact details of the system to be built.
§.§ Core differentiation CRN
Our core differentiation CRN schematized in Fig. <ref>
is more precisely composed of the following 7 reactions:
X_out ⇌ X_in
X_out → X_out + D_+
X_in → X_in + D_-
D_+ → ∅
D_- → ∅
D_+ + D_- → ∅
where X_out denotes the outside (input) species and X_in the inside species. The diffusion through the membrane is symmetrical with a constant k_diff, both activations should have the same rate constant product k·k_diff, while the degradation of the outputs should have a rate k and the annihilation of D_+ and D_- is fast.
We make the assumption that the outside species is present in large
quantity so that its concentration is not affected by the dynamics of the CRN.
Under this assumption, the differential
semantics is then the same as the one of the differentiation CRN
proposed in <cit.>:
dX_in/dt = k_diff ( X_out - X_in )
dD_+/dt = k k_diff X_out - k D_+ - fast D_+ D_-
dD_-/dt = k k_diff X_in - k D_- - fast D_+ D_-
The derivative is encoded as D = D_+ - D_- and hence obeys the equation
(using the two last lines of the previous equation):
dD/dt = dD_+/dt - dD_-/dt
= k k_diff ( X_out - X_in ) - k (D_+ - D_-)
dD/dt = k ( (X_out - X_in)/(1/k_diff) - D )
In the next section, we prove that X_in is equal to X_out with a delay ϵ, up to first order in ϵ = 1/k_diff, hence giving us our second time point X(t-ϵ).
The fraction in the last equation is thus precisely an estimate of the derivative of X_out as defined in Eq. <ref>, with a finite value for ϵ.
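A minimal simulation sketch of the three ODEs above (ours, not from the paper) illustrates this behaviour on the input X_out(t) = 1 + sin(t); the rate constants below are arbitrary illustrative choices, and D_+ - D_- is compared with the exact derivative cos(t).
import numpy as np
from scipy.integrate import solve_ivp

k_diff, k, fast = 10.0, 10.0, 1000.0    # illustrative rate constants
x_out = lambda t: 1.0 + np.sin(t)        # input imposed by the environment

def crn(t, y):
    x_in, dp, dm = y
    return [k_diff * (x_out(t) - x_in),
            k * k_diff * x_out(t) - k * dp - fast * dp * dm,
            k * k_diff * x_in    - k * dm - fast * dp * dm]

sol = solve_ivp(crn, (0.0, 20.0), [x_out(0.0), 0.0, 0.0],
                method="LSODA", dense_output=True)
ts = np.linspace(5.0, 20.0, 300)              # skip the initial transient
x_in, dp, dm = sol.sol(ts)
print(np.max(np.abs((dp - dm) - np.cos(ts))))  # error of order 1/k_diff + 1/k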
It is also worth remarking that such derivative circuits can in principle be connected to
compute higher-order derivatives, with a dual-rail encoded input. It is
well known that such estimations of higher-order derivatives can be very sensitive to
noise and error, and are thus not reliable for precise computation but may be good enough
for biological purposes. We will see a biological example of this kind in
Section <ref> on a simple model of the circadian clock.
§ MATHEMATICAL ANALYSIS OF THE QUALITY OF THE ESTIMATION
Our first goal is to determine precisely the relation between X_in and X_out when the latter is enforced by the environment. Using the first line of Eq. <ref>, we obtain by symbolic integration:
X_in(t) = k_diff ∫_0^∞ exp(-k_diff s) X_out(t-s) ds,
where we can see that X_in is the convolution of X_out with a decreasing exponential.
This convolution is reminiscent of the notion of evaluation in the theory of distributions and has important regularisation properties on the input function. In
particular, whatever the input function is, this ensures that the internal representation
is continuous and differentiable.
The interesting limit for us is when k_diff →∞, that is when ϵ = 1/k_diff → 0. In this case, the exponential is negligible except in a neighbourhood of the current time and, supposing that X_out is infinitely differentiable[We also explore in Figures <ref>D and <ref>C what a non-analyticity of X_out implies for our model.], we obtain by Taylor expansion:
X_in(t) = k_diff ∫_0^∞ exp(-k_diff s) ∑_n=0^∞ (-s)^n/n! X_out^(n)(t) ds
= ∑_n=0^∞ k_diff/n! X_out^(n)(t) ∫_0^∞ (-s)^n exp(-k_diff s) ds
The integral may be evaluated separately using integration by parts and recursion:
I_n = ∫_0^∞ (-s)^n exp(-k_diff s) ds = -n ϵ I_n-1
= (-1)^n ϵ^n+1 n!
We thus have:
X_in(t) = ∑_n=0^∞ k_diff/n! X_out^(n)(t) (-1)^n n! ϵ^n+1
= ∑_n (-ϵ)^n X_out^(n)(t)
= X_out(t) - ϵ X_out'(t) + ϵ^2 X_out''(t) + …
X_in(t) = X_out( t - ϵ) + o(ϵ^2).
Using Taylor expansion once again in the last equation somehow formalizes
our intuition: the concentration of the internal species follows the time course
of the external one with a delay equal to the inverse of the diffusion constant k_diff.
This validates our formulation of the derivative.
Now, it is sufficient to remark that Eq. <ref> has exactly the same
form as the first line of Eq. <ref> that we have just studied at length. Just replace X_out by the estimate of the left derivative, X_in by the output D, and the rate constant k_diff by k. The delay approximation is thus also possible in
this step and, introducing the delay τ = 1/k, we immediately obtain a precise
expression for D:
D(t) = (X_out(t-τ) - X_out(t-ϵ-τ))/ϵ + o(ϵ) + o(τ^2).
We can see this as the secant approximation of the derivative of X_out with a step size ϵ and a delay τ. Moreover, we also know that the residual errors on this expression are of first order in ϵ and second order in τ.
It is well known in the field of numerical computation that the secant method provides a
rather poor approximation, but it has the benefit of being the simplest one, and thus gives here a small-size derivative circuit.
In the hope of improving the precision, one could implement higher-order methods using
several "membranes" to access the value of the function on several time points
before performing the adapted computation.
Such complexation would however also increase the delay
between the input and output function.
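To make the secant picture concrete, the following short Python sketch (our own illustration, not part of the original study; the names X_int, k_diff and k simply follow the notation used here) integrates the two ODEs of the differentiation CRN for a sine-wave input and compares the output D with the delayed secant derived above and with the exact derivative.

import numpy as np
from scipy.integrate import solve_ivp

# Assumed rate constants, for illustration only: eps = 1/k_diff, tau = 1/k.
k_diff, k = 100.0, 10.0
eps, tau = 1.0 / k_diff, 1.0 / k

X = lambda t: 1.0 + np.sin(t)              # external input enforced by the environment

def rhs(t, y):
    x_int, d = y                           # internal representation X_int and D = D_+ - D_-
    dx_int = k_diff * (X(t) - x_int)
    dd = k * (k_diff * (X(t) - x_int) - d)
    return [dx_int, dd]

sol = solve_ivp(rhs, (0.0, 20.0), [X(0.0), 0.0], dense_output=True, max_step=0.005)
t = np.linspace(5.0, 20.0, 300)            # skip the initial transient
D = sol.sol(t)[1]
secant = (X(t - tau) - X(t - tau - eps)) / eps
print("max |D - delayed secant|:", np.max(np.abs(D - secant)))    # small, of order eps and tau^2
print("max |D - cos(t)|        :", np.max(np.abs(D - np.cos(t))))  # larger, dominated by the delay tau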
§ VALIDATION ON SIMPLE EXAMPLES
§.§ Verification of the delay-approximation
In this first subsection, we want to validate the approximation expressed by
Eq. <ref>. For this, we focus on the diffusion part of our CRN:
X ↔ X_int. We run numerical simulations for two different values of
ϵ and two different input functions: a sine wave and an absolute-value signal.
The second allows us to see how well the delay approximation works in the presence of
non-analyticity.
Fig. <ref> shows the response of X_int in these different conditions. In
panel A, the kinetic constant k_diff is very low so we expect our approximation to
fail. Indeed, one can see that in addition to having a significant delay, the output is strongly
smoothed; this tends to average out the variations of the input, bringing X_int back to the
average value of the input. In panel B the diffusion constant is
increased by a factor of 10. The delay approximation is now very good and we only expect an
error of order ϵ^2 = 10^-2, which can be checked with good accuracy on panel
C. Panel D shows the case of a
non-differentiable function, in which an error of order ϵ = 0.1 is visible shortly
after the discontinuity and vanishes on a similar timescale.
§.§ Approximation of the derivative
Let us now check the behaviour of the derivative circuit. On Fig. <ref>,
we can see the response of our derivative circuit for a sine wave and an absolute value
input functions. In panels A and B we see that when the first and second order
derivatives of the input are smaller than the kinetic reaction rates, the delay
approximation gives a very good picture of the response. On a complementary point of view,
the panel C shows that in front of singularity, the system adapts after an
exponential transient phase with a characteristic time τ = 1/k.
§.§ Using signal derivatives for online computations
Our main motivation for analyzing the differentiation CRN is to
compute a function f of some unknown input signal, (t), online.
that is, given a function f, compute a function f((t))
Yet the differentiation CRN only
allows us to approximate the derivative of the input signal.
The idea is thus to implement the PODE:
dY/dt = f'((t)) d /dt, Y(0)=f(X(0)
and provide the
result online on a set of internal species Y(t) = Y_+ - Y_-.
This necessitates to compute the function f' and estimate the derivative of the input.
Using the formalism developed in <cit.> we know that there
exist an elmentary CRN (i.e. quadratic PODE) computing f'() for any elementary function f and we just
have shown that d /dt can be approximated by the differentiation CRN.
Therefore, in principle, any elementary function of input signals can be approximated online by a CRN.
As a toy example, let us consider the square function, d Y/dt = 2 (D_+ - D_-),
and as input, a sine wave offset to stay positive : (t) = 1 + sin(t).
The CRN generated by BIOCHAM according to these principles, to compute the square of the input online, is:
X ⇒ X_int, X_int ⇒ X
X ⇒ X + D_+, D_+ ⇒ ∅
X_int ⇒ X_int + D_-, D_- ⇒ ∅
X + D_+ ⇒ X + D_+ + Y_+, X + D_- ⇒ X + D_- + Y_-
D_+ + D_- ⇒ ∅
Y_+ + Y_- ⇒ ∅
The first three lines implement the derivative circuit, the fourth line implements
the derivative of Y, and the last lines provide the dual-rail encoding.
The numerical simulation of this CRN is depicted in Fig. <ref>A.
One can see that while it effectively computes the square of the input, it also suffers
from a strong drift. To verify whether this drift comes from the delay between the input and
the output, we can compute analytically the output of our network with our approximation of
the derivative with a delay (see the full computation in Appendix):
y(t) = ∫ 2 x(s) x'(s-τ) ds
≃(1+sin(t))^2 + τ t.
This is precisely the behaviour that can be seen on the time course of Fig. <ref>
A. After the integration of 20 time units, the offset is of order 2, which is
exactly what is predicted for a delay τ = 1/k = 0.1. Therefore,
while it is always possible to get rid of such errors by increasing k,
the identification of the cause of the drift gives us a potentially simpler path to eliminate it:
using a representation of the input that is itself delayed, X ↔
X_delay, and using this delayed signal as the catalyst for the production of Y_+
and Y_- in the place of X. This leads to the CRN given in Appendix
(Eq. <ref>), for which the numerical integration shown in
Fig. <ref>B confirms that we have indeed gotten rid of the drift. Said
otherwise, the correct implementation for online computation is given by:
dY/dt = f'(X(t-τ)) dX/dt(t-τ),
where the delays have to be equal for the two pieces of the derivative.
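The drift and its removal can also be checked numerically at the ODE level. The sketch below (ours; it reuses the notation above and is only meant as an illustration) integrates the square-computation circuit once with X and once with the delayed copy X_delay as the catalyst for Y.

import numpy as np
from scipy.integrate import solve_ivp

k_diff, k = 100.0, 10.0                    # assumed rates: eps = 0.01, tau = 0.1
tau = 1.0 / k
X = lambda t: 1.0 + np.sin(t)

def rhs(t, y, use_delayed_catalyst):
    x_int, d, x_delay, Y = y
    dx_int = k_diff * (X(t) - x_int)         # internal representation of the input
    dd = k * (k_diff * (X(t) - x_int) - d)   # D = D_+ - D_-, the derivative estimate
    dx_delay = k * (X(t) - x_delay)          # delayed copy of the input, delay tau = 1/k
    catalyst = x_delay if use_delayed_catalyst else X(t)
    dY = 2.0 * catalyst * d                  # dY/dt = 2 * catalyst * D, i.e. the square function
    return [dx_int, dd, dx_delay, dY]

y0 = [X(0.0), 0.0, X(0.0), X(0.0) ** 2]      # Y(0) = f(X(0))
for delayed in (False, True):
    sol = solve_ivp(rhs, (0.0, 20.0), y0, args=(delayed,), max_step=0.005)
    drift = sol.y[3, -1] - X(20.0) ** 2
    print("delayed catalyst:", delayed, "-> drift at t=20:", round(float(drift), 2))
# Without the delayed catalyst the drift is of order tau * t ~ 2; with it, it is much smaller.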
§ BIOLOGICAL EXAMPLES
§.§ BioModels repository
To explore the possibility that natural biochemical systems already implement one form or another of
the core differentiation CRN, one can scan the CRN models of the BioModels
repository <cit.>.
This can be automated with the general graph matching notion of Subgraph EPImorphism (SEPI)
introduced in <cit.> to compare CRN models
and identify model reduction relationships based on their graph structures.
SEPI generalizes the classical notion of subgraph isomorphism by introducing an operation of node merging in addition to node deletion.
Considering two bipartite graphs of species and reactions, there exists a SEPI from G_A
to G_B if there exists a sequence of mergings[A species (resp. reaction) node can
only be merged with another species (resp. reaction) node, and the resulting node inherits all the
incoming and outgoing edges of the two nodes.] and deletions of nodes in G_A
such that the resulting graph is isomorphic to G_B.
More precisely, we used the SEPI detection algorithm of BIOCHAM to scan the
curated models in Biomodels (after automatic rewriting with well-formed reactions <cit.>)
and check the existence of a SEPI from each model graph to the differentiation CRN graph.
Fig. <ref> shows that our small differentiation CRN with 4 species is frequently found in large models,
as expected since the number of possible mappings grows quickly with model size.
It is thus reasonable to restrict attention to models with no more than 10 species.
Table <ref> lists the models with no more than 10 species, among the first 700 models of BioModels,
that contain our differentiation CRN.
The predominance of models exhibiting oscillatory dynamics, and in particular circadian clock models, is striking.
§.§ Circadian clock
The model of the eukaryotic circadian clock proposed by Becker-Weimann et al.
<cit.> is among the smallest models of the circadian clock displaying a SEPI reduction
toward our differentiation CRN. Its influence graph is depicted in
Fig. <ref>A; we also display in red the first SEPI found by BIOCHAM,
and in green a second one obtained by enforcing the mapping from the PER/Cry
species inside the nucleus to the input of the differentiation CRN.
Interestingly, this model has the nucleus membrane separating the species
mapped to the input from the one mapped to its internal representation in the second SEPI.
The oscillatory behavior of this model is shown in panel B.
Now, considering the mathematical insight that this relation provides, it is quite natural
for a CRN implementing an oscillator to evaluate its own derivative on the fly.
Actually, when looking at the natural symmetry of the model, we are inclined to think that this
CRN may actually be two interlocked copies of the derivative circuit, each computing
the derivative of the output of the other, as if a second-order derivative circuit were
closed on itself.
This is something we could easily check by imposing restrictions on the
SEPI mapping. Enforcing the nucleus PER/Cry protein to be mapped onto the input gives us the
SEPI shown in green in Fig. <ref>A.
To validate the preservation of the function of the derivative CRN given by this SEPI,
we can verify that the quantities defined by summing the species
that are mapped together are effectively linked by the desired derivative relation. As
can be seen in Fig. <ref>B, the agreement is striking.
One can even note that the delay of the chemical derivative is the one predicted by our theory.
The case of Fig. <ref>C is more complex as this part of the model seems to compute the opposite of the derivative.
It is however worth noting that there is absolutely no degree
of freedom in our choice of the species used in Fig. <ref>B and
C, which are entirely constrained by the SEPI given by BIOCHAM.
Taking both SEPIs together, we see that Bmal1^nucleus_protein and
see that Bmal1^nucleus_protein and
Bmal1^cytoplasm_mRNA play symmetrical roles, being the input and
derivative of the two displayed SEPI. Given that the second SEPI introduces a negative
sign, we may see this as:
Bmal1^cytoplasm_mRNA = d/dtBmal1^nucleus_protein
Bmal1^nucleus_protein = -d/dtBmal1^cytoplasm_mRNA
The solutions of this well-known system are the sine and cosine functions, and this
perfectly fits the oscillatory behaviour of this CRN. To confirm this hypothesis, we check
for the presence of a SEPI from the clock model to the compiled cosine CRN presented in
Eq. <ref>, which is indeed the case.
On the other hand, there
is no SEPI relation between the compiled cosine and the derivative circuit.
§.§ Bistable switch
The model of a bistable switch in the context of the restriction
point <cit.> displays a SEPI toward our derivative circuit. This model,
presented in Fig. <ref>A, studies the Rb-E2F pathway as an example of a
bistable switch in which the presence of a (not modeled) growth factor activates the
MyC protein, starting the pathway until it reaches the E2F factor, which constitutes the output
of the model. Yao et al. show that once E2F reaches a threshold, its activation becomes
self-sustained, hence the notion of a switch.
The SEPI given by BIOCHAM is worthy of interest as it does not merge any species and merges only
three reactions into one, leaving all the others either untouched or deleted, thus
indicating that the pattern of the derivative is already well present. Moreover, MyC is
mapped to the input and E2F to one part of the output, reinforcing our intuition that the
discovered SEPI is close to the natural functioning of the CRN.
To confirm this, we run the simulation as provided by the model and display the
derivative of the MyC protein against a scaled difference of the species mapped to D_+ and D_-:
D = a RB - b E2F, where a and b are positive constants adjusted so that D goes to
0 at the final time and is of the same magnitude as d MyC/dt (this gives a=6.3,
b=0.063). Clearly, D is a delayed and smoothed version of the input derivative, exactly
as our derivative device would provide.
§ CONCLUSION AND PERSPECTIVES
We have presented a mathematical analysis of the core differentiation CRN
introduced by Whitby et al. <cit.>. In particular, we have shown that what is
computed is an approximation of the left derivative taken a short time in the past, with a time
constant determined by the diffusion constant between the input and its internal
representation: ϵ = 1/k_diff. Moreover, there is a delay τ due to the
computation time that can also be precisely estimated given the rate of activation and
degradation of the species encoding the derivative: τ = 1/k.
We have shown that such results can be used in some cases to design error-correcting terms
and obtain excellent implementations of functions of input signals using an approximation of their derivative on the fly.
From a synthetic biology perspective, the derivative CRN may be very relevant in the context of biosensor design,
when the test is not about the presence of some molecular compounds <cit.>
but about their variation.
A derivative CRN is also needed to construct PID
controllers. The derivative control is known for damping the oscillations around the target of the
controller, but delays are also known for producing such oscillations.
Being able to determine and quantify those delays and errors is thus important to optimize the design.
This device may also be used to approximate the derivative of an unknown
external input in the context of online cellular computing. Once again, delays may produce
detrimental artefacts that can easily be avoided once one is aware of the problem.
Furthermore, using the notion of SEPI to scan the BioModels database, we were able
to highlight a certain number of CRN models that contain the core differentiation CRN.
A large fraction of these matches occur in models presenting oscillations.
We have shown on one such example, a circadian clock model, why it makes sense for an oscillator to
sense its own derivative, and to reproduce what a mathematician would obtain in a more direct way for the
most basic oscillatory functions: sine and cosine.
§.§ Acknowledgment
This work benefited from the ANR-20-CE48-0002 δifference project grant.
§ APPENDIX: COMPUTATION OF INTEGRATION WITH A DELAY
To prove that the drift of the output is a direct consequence of the delay, we first
compute the input and the approximate derivative for our choice of input:
x(t) = 1+sin(t)
x'(t-τ) = cos(t-τ)
= cos(t) + τsin(t) + o(τ^2)
Then we can compute the output up to the first order:
y(t) = ∫ 2 x(s) x'(s-τ) ds
= ∫ 2 (1+sin(s)) cos(s) ds + ∫ 2 τ (sin(s)+sin^2(s)) ds
= (1+sin(t))^2 + 2 τ∫sin(s)+sin^2(s) ds
y(t) ≃(1+sin(t))^2 + τ t
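This computation can be verified symbolically, for instance with the following short sympy sketch (ours, for checking only): the remainder contains no term growing with t, confirming that the secular drift is exactly τ t.

import sympy as sp

s, t, tau = sp.symbols('s t tau', positive=True)
# x(s) * x'(s - tau) with x(s) = 1 + sin(s) and cos(s - tau) expanded to first order in tau
integrand = 2 * (1 + sp.sin(s)) * (sp.cos(s) + tau * sp.sin(s))
y = sp.integrate(integrand, (s, 0, t))
remainder = sp.simplify(y - ((1 + sp.sin(t)) ** 2 + tau * t))
print(remainder)   # only bounded (constant and oscillatory) terms remain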
Then, to correct the observed drift, we propose to introduce a delayed signal and use it in the
computation to produce the output species Y_+ and Y_-, with the following CRN:
X ⇒ X_int, X_int ⇒ X
X ⇒ X + X_delay, X_delay ⇒ ∅
X ⇒ X + D_+, D_+ ⇒ ∅
X_int ⇒ X_int + D_-, D_- ⇒ ∅
X_delay + D_+ ⇒ X_delay + D_+ + Y_+, X_delay + D_- ⇒ X_delay + D_- + Y_-
D_+ + D_- ⇒ ∅
Y_+ + Y_- ⇒ ∅
|
http://arxiv.org/abs/2307.05691v1 | 20230711180114 | KPM: A Flexible and Data-Driven K-Process Model for Nucleosynthesis | [
"Emily J. Griffith",
"David W. Hogg",
"Julianne J. Dalcanton",
"Sten Hasselquist",
"Bridget Ratcilffe",
"Melissa Ness",
"David H. Weinberg"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.SR"
] |
Emily J. Griffith
[email protected]
0000-0001-9345-9977]Emily J. Griffith
NSF Astronomy and Astrophysics Postdoctoral Fellow
Center for Astrophysics and Space Astronomy, Department of Astrophysical and Planetary Sciences, University of Colorado, 389 UCB, Boulder, CO 80309-0389, USA
0000-0003-2866-9403]David W. Hogg
Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, New York, NY 10003, USA
Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
Department of Astronomy, Box 351580, University of Washington, Seattle, WA 98195
0000-0001-5388-0994]Sten Hasselquist
Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218
0000-0003-1124-7378]Bridget Ratcliffe
Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16,
14482 Potsdam, Germany
0000-0001-5082-6693]Melissa Ness
Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
Department of Astronomy, Columbia University, Pupin Physics Laboratories, New York, NY 10027, USA
0000-0001-7775-7261]David H. Weinberg
The Department of Astronomy and Center of Cosmology and AstroParticle Physics, The Ohio State University, Columbus, OH 43210, USA
The element abundance pattern found by APOGEE and GALAH in Milky Way disk stars is close to two-dimensional, dominated by production from one prompt process and one delayed process.
This simplicity is remarkable, since the elements are produced by a multitude of nucleosynthesis mechanisms operating in stars with a wide range of progenitor masses.
We fit the abundances of 14 elements for 48,659 red-giant stars from APOGEE DR17 using a flexible, data-driven K-process model—dubbed KPM.
In our fiducial model, with K=2, each abundance in each star is described as the sum of a prompt and a delayed process contribution.
We find that KPM with K=2 is able to explain the abundances well, recover the observed abundance bimodality, and detect the bimodality over a greater range in metallicity than previously has been possible.
We compare to prior work by <cit.>, finding that KPM produces similar results, but that it better predicts stellar abundances, especially for elements C+N and Mn and for stars at super-solar metallicities.
The model makes assumptions, including especially that it fixes some parameters to break degeneracies and improve interpretability; we find that some of the nucleosynthetic implications are dependent upon these detailed parameter choices.
We add a third and fourth process (to make K=4), finding that the additional processes give the model more freedom and improve the model's ability to predict the stellar abundances, as expected, but they don't qualitatively change the story.
The results of KPM have implications for the formation of the Galaxy disk, the relationship between abundances and ages, and the physics of nucleosynthesis.
§ INTRODUCTION
After hydrogen, helium, lithium, and beryllium, all other naturally occurring elements are made in stars, supernovae, and the collisions of stars.
Stellar surface abundances—the abundances measured by taking a spectrum of a stellar photosphere—are thought to deliver a relatively unprocessed record of the element abundances in the gas from which the star formed <cit.>.
These birth abundances were set by a combination of nucleosynthetic processes involved in making heavy atomic nuclei, and astrophysical processes involved in delivering atoms from stellar interiors to star-formation sites <cit.>.
Thus nuclear physics and a wide swath of astrophysics are critically intertwined in our understanding of stellar surface abundances, motivating theoretical, experimental, and observational work.
At the present day, stellar surface abundances are not very well explained by purely ab initio, physics-driven models.
Theoretical yields vary from data set to data set, as they are dependent on progenitor properties and explosion assumptions <cit.>.
The wide parameter space of progenitor and supernova models coupled with uncertainties in reaction rates and explosion physics hinder the creation of an accurate nucleosynthetic model from theory alone.
In the long run, it is incumbent upon us to understand these issues and correct the assumptions or calculations underlying our nucleosynthetic and astrophysical models.
In the short run, however, we gather data—tens of millions of abundance measurements on millions of stars in different astronomical surveys such as RAVE, SEGUE, LAMOST, Gaia-ESO, APOGEE/MWM, GALAH, and H3 <cit.>.
This raises the question: Can we take a data-driven approach to nucleosynthesis?
In this paper, we build a purely data-driven model for the surface element abundances observed in stars.
We treat each star as being a linear combination of nucleosynthetic processes, beginning with one that is primarily responsible for the α-element Mg <cit.>, and one that is primarily not responsible for Mg (delayed enrichment, such as Type-Ia supernovae or SNIa).
Beyond these up-front assumptions, we try to be agnostic about how the elements are produced.
We build upon the work of <cit.> and <cit.>, who used the bimodality in [Mg/Fe] vs. [Fe/H][Where [ X/ Y] = log( X/ Y) - log( X/ Y)_⊙ and ( X/ Y)_⊙ is the solar abundance ratio.] <cit.> to separate stars into populations with high and low SNIa enrichment. Using the median [X/Mg] vs. [Mg/H] abundance trends, these works explain data from the GALAH[GALAH = GALactic Archaeology with HERMES.] and SDSS-IV APOGEE[APOGEE = Apache Point Observatory Galactic Evolution Experiment, part of the Sloan Digital Sky Survey] surveys, respectively, with a two-process model. Because the median abundance trends in [X/Mg] vs. [Mg/H] space are largely insensitive to aspects of chemical evolution, such as outflows and variations in star formation history (W19), the population abundance trends are set by the nucleosynthetic processes and can be used to empirically constrain Galactic enrichment.
These works, as well as <cit.> and <cit.>, find that the Milky Way stellar abundances are well fit by two components, grounded in [Fe/H] and [Mg/Fe], down to residuals of 0.01 to 0.03 dex for the most precisely measured elements and 0.05 to 0.1 dex for elements (such as Na, C, and Ce) with large measurement errors. Simultaneously, <cit.> and <cit.> have found that disk abundances are also well described by a two-component model of birth radius and age. Correlations between two-process model parameters and stellar ages and kinematics (W22) as well as the success of a two-component model of [Fe/H] and age in predicting APOGEE abundances <cit.> suggest that these two 2-dimensional models are somehow interconnected.
Beyond standard CCSN and SNIa enrichment, many elements have contributions from additional nucleosynthetic processes, such as the rapid (r) and slow (s) neutron capture processes <cit.> in asymptotic giant branch (AGB) stars <cit.>, merging neutron stars <cit.>, or atypical supernova explosions <cit.>. After predicting stellar abundances from [Fe/H] and [Mg/Fe], <cit.> identify correlated abundance residuals that are unexplained by observational uncertainties, indicative of additional nucleosynthetic processes that standard disk CCSN and SNIa enrichment cannot explain. Results from G22 and W22 support this conclusion, and both works attempt to add additional processes to their models to account for non-CCSN and non-SNIa enrichment, though in a restrictive manner. Other sources of abundance scatter, such as stochastic sampling of the Initial Mass Function (IMF), IMF variations, and bursty star formation history could also cause deviations away from a two-process model <cit.>.
To date, survey abundances have not been fully exploited to create a data-driven model of nucleosynthesis. While works such as <cit.>, <cit.>, and <cit.> effectively use clustering algorithms to identify elements with like sources and reduce abundance dimensionality, the results are difficult to translate into a model of nucleosynthesis. Clustering components can be linked to nucleosynthesis sources and enrichment history, but have not yet been used to describe the enrichment of a single star.
In this work, our main innovations are to relax the assumptions made in G22 and W22, to be more agnostic about the nucleosynthetic processes, and to be more principled with the measurements or inferences from data.
In the K-Process Model (KPM), we find the intersection between reliable facts about nucleosynthesis and good abundance measurements to build an edifice of Galactic enrichment.
The model is hierarchical, in that it learns some parameters (process vectors) that are shared across all stars, but different for each element, and some parameters (process amplitudes) that are shared across all elements, but different for each star.
The parameters output by our model can thus be used as de-noised abundance labels; these will sharpen relationships between abundances and stellar parameters (including birth location and time). Our main contribution is to construct a data-driven model for nucleosynthesis that has good statistical properties, only enforcing constraints to break rotational degeneracies. All other parameters are set by the data with no fixed normalization.
This paper is organized as follows. In Section <ref> we present the assumptions and the implementation of KPM. In Section <ref> we describe the APOGEE data sample employed in this paper. We apply KPM to the APOGEE data in Section <ref> and compare our results to those of W22 in Section <ref>. In Section <ref> we explore variations from the fiducial model, changing our assumptions about Fe production as well as the number of model components. Finally, we discuss and summarize our results in Section <ref>.
§ THE K-PROCESS MODEL
As in W22 and G22, we propose that all stellar abundances can be generated by a combination of K nucleosynthetic processes.
In this picture, each element has K metallicity-dependent process vector components that are shared across the full stellar sample, while each star individually has K process amplitudes, which apply across all elements, such that the expected logarithmic abundance of element j relative to H in star i (m_ij) is defined as:
m_ij = log_10 ∑^K_k=1 A_i^k q_k,j^Z .
Each star i has K process amplitudes (A^k_i)
and each element j has K metallicity dependent process vector components (q_k,j^Z).
The Z superscript denotes the dependence of the process vectors on metallicity, Z, taken to be [Mg/H].
The observed abundance can be expressed
[X_j/H]_i = m_ij + noise,
where “noise” represents observational noise and/or other sources of intrinsic abundance scatter that are not included in this model. For detailed examples of a similar model with K=2, see Section 2 and Figures 2 and 3 of W22.
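For concreteness, the forward model of the two equations above can be written in a few lines of Python; the sketch below is our own illustration (function and variable names are assumptions), and the metallicity dependence is handled here by simple linear interpolation of the q values over a set of [Mg/H] knots, whereas the model described below uses a linear spline in log-process space.

import numpy as np

def predict_log_abundances(A, q_knots, mgh_knots, mgh_star):
    """Return m_ij = log10( sum_k A_i^k q_k,j(Z_i) ).

    A         : (n_stars, K) non-negative process amplitudes
    q_knots   : (K, n_elements, n_knots) non-negative process vector components at the knots
    mgh_knots : (n_knots,) increasing [Mg/H] positions of the knots
    mgh_star  : (n_stars,) [Mg/H] of each star, used as the metallicity Z
    """
    n_stars, K = A.shape
    n_elements = q_knots.shape[1]
    # Interpolate each process vector component to the metallicity of each star
    q = np.empty((n_stars, K, n_elements))
    for k in range(K):
        for j in range(n_elements):
            q[:, k, j] = np.interp(mgh_star, mgh_knots, q_knots[k, j])
    lin = np.einsum('ik,ikj->ij', A, q)      # linear (X_j/H) abundances relative to solar
    return np.log10(lin)                     # m_ij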
In KPM, we put forth the following set of assumptions:
1. K processes
All elements on the periodic table are produced by a combination of nucleosynthetic processes such as CCSN, SNIa, AGB stars, and merging neutron stars <cit.>. The majority of α, light odd-Z, and Fe-peak elements (the elements observed by APOGEE) are dominantly produced by K=2 sources, with one being a prompt process or mix of prompt processes, and one being a delayed process or a mix of delayed processes. This is substantiated by theoretical yields <cit.> and past successful data-driven models (e.g., G22, W22). In this paper we therefore assume that K ≥ 2, though KPM could in principle be implemented with K=1.
2. Linearity
At every metallicity, the (linear) (X/H) abundances of a star can be expressed as a linear combination of K processes.
These processes themselves will depend on metallicity, but a linear sum is sufficient to explain all element abundances at any overall metallicity.
Because yields depend on stellar masses and progenitor structure, and because different stars can get to their metallicities by different histories, this assumption must be at least slightly wrong in detail.
3. Non-negativity
All process vector components for all elements are non-negative and all process amplitudes are non-negative.
This assumption implies that the elements considered here are only produced, and not ever destroyed, by the K processes (relative to hydrogen).
This makes the model similar to a non-negative matrix factorization. In KPM, this assumption is enforced by requiring that the process vector components and amplitudes are always greater than or equal to zero, such that
q_k,j^Z ≥ 0 ∀ Z,k,j and A_i^k ≥ 0 ∀ k,i .
4. Mg production
All Mg is produced in a prompt process and no other processes contribute to its production.
This is substantiated by theoretical yields where Mg is purely produced by prompt CCSN <cit.>.
This assumption (along with non-negativity) breaks a set of symmetries in the process space and makes the processes quasi-interpretable in terms of nucleosynthesis sources.
Because such a prompt process is likely dominated by CCSN <cit.>, we label the first process with “CC”.
In KPM, this assumption is enforced by fixing the Mg process vector components such that
q_ CC, Mg^ Z = 1, q_k>1, Mg^ Z = 0
at all metallicities. Equation <ref> also imposes that the Mg process is metallicity independent.
5. Fe production
Fe is produced through a combination of a prompt and delayed process. Because the delayed process is likely dominated by SNIa <cit.>, we label the delayed process with “Ia”[While other enrichment channels with similar timescales may be included in the respective processes, the “CC” and “Ia” naming convention conforms to the choices in W22 and G22, and avoids the possible confusion of process numbers (1 and 2) with supernova type (II and Ia).].
While the prompt process constraint (Mg) is grounded in nucleosynthesis theory, there is no equivalent nucleosynthesis fact to constrain the delayed process.
To break model degeneracies, we also fix the Fe process vector components such that
q_ CC, Fe^ Z = 0.4, q_ Ia, Fe^ Z = 1 - q_ CC, Fe^ Z q_k>2, Fe^ Z = 0
at all metallicities.
This assumption places a star with purely prompt enrichment on the low-metallicity [Mg/Fe] plateau near 0.4 dex, in agreement with APOGEE observations but in contention with recent results from <cit.>, which place the plateau near 0.6 dex.
We explore the impact of different assumptions in Section <ref>.
6. Metallicity dependence
We permit the process vector components for all elements other than Mg and Fe to float as a function of metallicity.
The variation is parameterized by a linear spline in log-process space, attached to a set of variable control points. We assume that a particular set of hard-coded knots are sufficient to capture the metallicity dependence.
7. APOGEE abundances and uncertainties
We assume that the APOGEE abundances and uncertainties can be used for this project.
This is not the same as assuming that they are correct, but rather that it is possible and useful to build an interpretable model to explain them. We describe the potential data systematics in Section <ref>.
For our purposes, we care mainly about the statistical observational errors rather than systematics that arise from imperfect modeling of the spectra such as NLTE effects, though differential systematics across the sample can artificially add abundance scatter. The actual derived values of q_k,j^Z will be affected by systematic offsets in the abundances. We add a softening parameter Q (Equation <ref>) to allow for the possibility that APOGEE observational errors are underestimated or that there is intrinsic scatter around the predictions.
8. Robust likelihood function
The observed value of [X/H] can be described as the K process expected value plus observational noise and/or other sources of intrinsic abundance scatter, as described by Equation <ref>.
The expression in this equation can be thought of as the key assumption underlying our likelihood function.
In detail, the (negative two times the) log likelihood function is given by a chi-squared (χ^2) objective
χ^2 = ∑_ij 1/σ_ij^2 ([X_j/H]_i - m_ij)^2 ,
where 1/σ_ij^2 is the (robust; see below) inverse variance on measurement ij.
Because we don't want to be too drawn or influenced by outlier points, we don't use the observed errors σ_obs,ij in the likelihood, but instead we soften them in the spirit of iteratively reweighted least squares:
1/σ_ij^2 = (Q^2/σ_obs,ij^2) / (Q^2 + ([X_j/H]_i - m_ij)^2 /σ_obs,ij^2) ,
where Q is a softening parameter. Our results are largely insensitive to the choice of Q (e.g., 1 < Q < 10), so we choose to set Q=5. Very small values (e.g., Q=.1) will erase some of the abundance structure and produce poorer fits.
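In code, this robust objective is a direct transcription of the two expressions above; the sketch below is ours and assumes simple rectangular arrays of data, model predictions, and reported errors.

import numpy as np

def robust_chi2(data, model, sigma_obs, Q=5.0):
    """chi^2 with softened (IRLS-style) inverse variances.

    data, model, sigma_obs : (n_stars, n_elements) arrays of [X/H], predictions m_ij, and errors
    Q                      : softening parameter (the text adopts Q = 5)
    """
    resid2 = (data - model) ** 2
    ivar = (Q ** 2 / sigma_obs ** 2) / (Q ** 2 + resid2 / sigma_obs ** 2)
    return np.sum(ivar * resid2)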
9. Implementation and optimization
With the above assumptions in place, the likelihood function can be optimized to a set of stellar abundances.
The model is initialized at the Mg and Fe process vector components from Equations <ref> and <ref>. It subsequently optimizes the process amplitudes (dubbed the A-step) using only Mg and Fe at fixed process vector components, and then optimizes the process vector components (dubbed the q-step) for all elements at fixed process amplitudes.
The A-step and q-step are alternated, repeating 48 rounds of optimization, in the K=2 case, and updating the best-fit parameters when the objective function improves. We find few differences in the best-fit parameters when we decrease the number of iterations to 32, indicating that the model quickly finds a good solution.
In detail the optimizations are performed with a nonlinear χ^2 minimization algorithm (Gauss–Newton nonlinear least-squares) from [<https://jaxopt.github.io/>].
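Schematically, the A-step/q-step alternation can be sketched as follows. This is our own simplified stand-in: it uses scipy's bounded least-squares solver rather than the jaxopt Gauss–Newton solver referenced above, reuses the predict_log_abundances helper sketched earlier, and omits the error softening; the fixed Mg and Fe components enter only through the assumed `free` mask.

import numpy as np
from scipy.optimize import least_squares

def a_step(A, q_knots, data, sigma, mgh, mgh_knots):
    # Optimize the K amplitudes of each star, holding all process vectors fixed.
    for i in range(data.shape[0]):
        def resid(a, i=i):
            m = predict_log_abundances(a[None, :], q_knots, mgh_knots, mgh[i:i + 1])[0]
            return (data[i] - m) / sigma[i]
        A[i] = least_squares(resid, A[i], bounds=(0.0, np.inf)).x
    return A

def q_step(A, q_knots, data, sigma, mgh, mgh_knots, free):
    # Optimize the free knot values of the process vectors, holding the amplitudes fixed;
    # `free` is a boolean mask excluding the components pinned by the Mg and Fe assumptions.
    def resid(theta):
        q_new = q_knots.copy()
        q_new[free] = theta
        m = predict_log_abundances(A, q_new, mgh_knots, mgh)
        return ((data - m) / sigma).ravel()
    q_knots[free] = least_squares(resid, q_knots[free], bounds=(0.0, np.inf)).x
    return q_knots

# for _ in range(48):                     # the text alternates 48 rounds of optimization for K = 2
#     A = a_step(A, q_knots, data, sigma, mgh, mgh_knots)
#     q_knots = q_step(A, q_knots, data, sigma, mgh, mgh_knots, free)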
KPM mirrors the two-process model from prior work (G22, W22) but, unless otherwise noted, the assumptions are weaker, there is a likelihood function in play, and the implementation is more general.
In particular, we don't assume anything about the relationships between the process vector components and the morphologies of observed element-abundance ratio diagrams.
§ DATA
In this paper, we employ stellar abundances from APOGEE DR17 <cit.>, part of the SDSS-IV <cit.>. The APOGEE survey obtains high-resolution (R∼22,500) near-infrared (IR) observations <cit.> for stars in the Galactic disk, halo, bulge, and nearby satellites/streams. Observations are taken with two nearly identical spectrographs on the 2.5m Sloan Foundation telescope <cit.> at Apache Point Observatory in New Mexico and the 2.5m du Pont Telescope <cit.> at the Las Campanas Observatory in Chile. Spectral data are reduced and calibrated with the APOGEE data processing pipeline <cit.>, after which stellar parameters and abundances are calculated with ASPCAP <cit.>. See <cit.> and Holtzman et al. (in prep., DR17) for a more detailed description of APOGEE data reduction and analysis, and <cit.>, <cit.>, and <cit.> for a discussion of survey targeting.
APOGEE DR17 reports stellar parameters, including effective temperature and surface gravity, as well as 20 elemental abundances: C, C1, N, O, Na, Mg, Al, Si, S, K, Ca, Ti1, Ti2, V, Cr, Mn, Fe, Co, Ni, and Ce for 657,135 stars. In DR17, new spectral libraries <cit.> are generated using the Synspec code and incorporate NLTE corrections for Na, Mg, K, and Ca <cit.>. Among the reported elements and ions, some are measured more precisely than others. We exclude Ti from our analysis as there are large differences between the abundances derived from the Ti1 and Ti2 lines <cit.>. We also exclude P and V, as the P abundances are measured from a few very weak spectral features and V abundances are one of the least precise and least accurate labels <cit.>. Both P and V display strong abundance artifacts and large scatter. Among the remaining elements, we note the following concerns: weak Na spectral features, large abundance scatter in S, significant systematic artifacts in Cr abundances at super-solar metallicities, potentially strong unaccounted-for NLTE effects on Mn abundances <cit.>, and large abundance scatter in Co and Ce. For a more detailed discussion of abundance systematics and their effects on population trends, see <cit.> and <cit.>.
For our stellar sample, we select a subset of APOGEE DR17 stars with the goal of minimizing statistical errors from poor observations and systematic errors from abundance trends with effective temperature and/or surface gravity, while preserving a sufficient number of stars to conduct a meaningful statistical analysis across the Galactic disk. To remove poor quality data points, we require the ASPCAP quality flags to equal zero. We only include stars from the main survey sample, and use the named abundance labels, as recommended by <cit.>. In addition to these quality cuts, we apply the following sample selection:
* [Mg/H] > -0.75
* S/N ≥ 200
* log g = 1-3 dex
* T_eff = 4000-5200 K.
To eliminate red clump (RC) stars, which show abundance variations from the RGB sample <cit.>, we cross-match our sample with the APOGEE DR17 RC VAC[<https://www.sdss4.org/dr17/data_access/value-added-catalogs/?vac_id=apogee-red-clump-(rc)-catalog>] <cit.> and remove the stars that appear in it.
These cuts result in a sample of 48,659 stars that span the Galactic disk. We plot their Z vs. R location, as well as the distributions of distances and eccentricities in Figure <ref>, taking distances and kinematics from <cit.>. While our stellar sample extends from the Galactic center, to the outer disk, to the halo, the majority of our stars (75%) are within 3.5 kpc of the sun. Further, 94% of our stellar sample has an eccentricity less than 0.4, indicative of in situ origin. In this paper, we assume that the KPM fits will be consistent across the Galactic disk, as the median high-Ia and low-Ia [X/Mg] vs. [Mg/H] abundance trends are insensitive to Galactic location (W19).
We present abundances for Mg, O, Si, S, Ca, C+N, Na, Al, K, Cr, Fe, Ni, Mn, Co, and Ce. In the analysis of each element, X, we drop stars whose X abundance is flagged. Ce abundances are flagged in the largest number of stars, resulting in ∼ 700 Ce labels being excluded. While the surface abundances of C and N differ from the stellar birth abundances for RGB stars due to the CNO processes and dredge-up events <cit.>, the total C+N abundance remains constant. As in W22, we consider C+N as an element, taking [(C+N)/H] to be
[C+N/H] = log_10(10^[C/H]+8.39 + 10^[N/H]+7.78) - log_10(10^8.39 + 10^7.78),
using logarithmic solar abundances for C (8.39) and N (7.78) from <cit.>. We further adopt the error on the [C/Fe] abundance as the error on [C+N/Fe], since C typically dominates in the abundance ratio.
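In code, this combination is a one-line transformation (our sketch, using the solar values quoted above):

import numpy as np

def c_plus_n(c_h, n_h, log_eps_c=8.39, log_eps_n=7.78):
    # [C+N/H] from [C/H] and [N/H], combining number densities before taking the ratio to solar
    num = np.log10(10.0 ** (c_h + log_eps_c) + 10.0 ** (n_h + log_eps_n))
    return num - np.log10(10.0 ** log_eps_c + 10.0 ** log_eps_n)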
We plot the distributions of all abundances in [X/Mg] vs. [Mg/H] for our sample in the first column of Figures <ref> and <ref>.
§ THE FIDUCIAL MODEL
We fit the APOGEE sample with our fiducial model of K=2, such that
m_ij = log_10(A_i^ CC q_ CC,j^Z + A_i^ Ia q_ Ia,j^Z)
with the assumptions from Section <ref>. This fit produces process vector components q_CC,j^Z and q_Ia,j^Z as a function of [Mg/H] for each element and process amplitudes A_i^CC and A_i^Ia for each star. From the model parameters, we can calculate fractional contributions from each process as well as a full suite of predicted K=2 process abundances, shown in the second column of Figures <ref> and <ref>.
§.§ Process Parameters and Fractional Contributions
We plot the process vector components as a function of [Mg/H] in the third column of Figures <ref> and <ref> and provide the values at the [Mg/H] knots in Tables <ref> and <ref>. The process vector components inform us about the relative contribution of prompt and delayed processes to the formation of the elements, as well as the metallicity dependence of the enrichment. By definition, q_CC,Fe=0.4 at all metallicities. For Mg and Fe, we also require q_CC + q_Ia = 1, implying q_Ia,Fe=0.6. No such constraints are placed on other elements. We note that KPM differs from the previous two-process models in this regard, as G22 and W22 require that the process vector components for all elements sum to 1 at solar metallicity.
Fiducial model q_CC,j^Z values at the [Mg/H] knots for each element.
[Mg/H] C+N O Na Mg Al Si S K Ca Cr Mn Fe Co Ni Ce
-0.8 0.34 1.06 0.26 1.0 0.62 0.90 0.98 0.78 0.77 0.33 0.19 0.4 0.37 0.44 0.40
-0.5 0.44 1.04 0.36 1.0 0.77 0.82 1.02 0.88 0.70 0.34 0.22 0.4 0.48 0.48 0.27
-0.4 0.48 1.03 0.33 1.0 0.80 0.79 0.98 0.88 0.67 0.35 0.24 0.4 0.52 0.50 0.21
-0.3 0.52 1.01 0.37 1.0 0.84 0.77 0.96 0.92 0.65 0.35 0.25 0.4 0.58 0.51 0.18
-0.2 0.55 0.98 0.42 1.0 0.86 0.75 0.93 0.93 0.64 0.38 0.27 0.4 0.62 0.53 0.17
-0.1 0.58 0.97 0.45 1.0 0.85 0.74 0.90 0.96 0.61 0.39 0.29 0.4 0.66 0.53 0.18
0.0 0.62 0.95 0.46 1.0 0.84 0.72 0.87 0.99 0.60 0.39 0.29 0.4 0.67 0.52 0.22
0.1 0.64 0.93 0.43 1.0 0.85 0.71 0.80 0.98 0.60 0.40 0.24 0.4 0.62 0.49 0.26
0.2 0.57 0.85 0.26 1.0 0.85 0.66 0.68 0.92 0.63 0.49 0.09 0.4 0.49 0.45 0.37
0.3 0.61 0.78 0.18 1.0 0.78 0.63 0.62 0.76 0.66 0.64 0.01 0.4 0.48 0.44 0.47
0.6 0.66 0.65 0.61 1.0 0.66 0.58 0.49 0.64 0.70 0.64 0.28 0.4 0.70 0.50 0.36
Fiducial model q_Ia,j^Z values at the [Mg/H] knots for each element.
[Mg/H] C+N O Na Mg Al Si S K Ca Cr Mn Fe Co Ni Ce
-0.8 0.54 0.15 0.28 0.0 0.02 0.08 0.37 0.17 0.21 0.38 0.55 0.6 0.47 0.49 0.28
-0.5 0.62 0.01 0.67 0.0 0.16 0.13 0.27 0.11 0.18 0.61 0.83 0.6 0.74 0.63 0.60
-0.4 0.54 0.01 0.70 0.0 0.11 0.15 0.18 0.10 0.24 0.63 0.81 0.6 0.67 0.56 0.78
-0.3 0.46 0.04 0.66 0.0 0.11 0.19 0.15 0.09 0.29 0.64 0.79 0.6 0.58 0.51 0.93
-0.2 0.42 0.08 0.54 0.0 0.10 0.23 0.15 0.07 0.33 0.60 0.75 0.6 0.48 0.46 0.98
-0.1 0.39 0.09 0.50 0.0 0.13 0.25 0.15 0.04 0.36 0.58 0.74 0.6 0.43 0.45 0.88
0.0 0.40 0.10 0.51 0.0 0.12 0.26 0.14 0.00 0.35 0.60 0.79 0.6 0.45 0.49 0.67
0.1 0.45 0.12 0.68 0.0 0.08 0.27 0.19 0.03 0.32 0.63 0.95 0.6 0.59 0.57 0.49
0.2 0.61 0.21 1.05 0.0 0.06 0.31 0.31 0.15 0.25 0.53 1.26 0.6 0.83 0.65 0.28
0.3 0.63 0.29 1.32 0.0 0.15 0.35 0.35 0.40 0.19 0.42 1.46 0.6 0.96 0.70 0.09
0.6 0.62 0.37 1.13 0.0 0.30 0.34 0.36 0.48 0.13 0.43 1.40 0.6 0.83 0.64 0.16
In the fourth column of Figures <ref> and <ref>, we plot the distribution of fractional contributions from the prompt process (f_CC) to each element, where
f_CC,ij = A_i^CC q_CC,j^Z / (A_i^CC q_CC,j^Z + A_i^Ia q_Ia,j^Z).
We generally find that the distributions are bimodal, like the observed abundance patterns, as the high-Ia and low-Ia populations have differing fractional contributions from prompt and delayed sources.
We find that the α-elements (O, Si, S, Ca) are best fit with prompt-dominated enrichment, with f_CC > 0.5 at all metallicities. This is in agreement with the theoretical prediction that α-elements are dominated by prompt CCSN production <cit.>. O, a Mg-like element theoretically purely produced in prompt CCSN, shows f_CC,O near 1 from [Mg/H]=-0.75 to solar. At supersolar metallicity, the delayed process contributes to O production, driving the f_CC,O value down to ∼ 0.8 at [Mg/H]=0.4. S behaves like O, with almost entirely prompt production up to solar metallicity, after which delayed enrichment contributes more significantly. Conversely, we find that Si and Ca are best fit with prompt and delayed enrichment at all metallicities, though the prompt process always dominates. For Si, the delayed process appears to increase linearly with [Mg/H], while the Ca delayed enrichment increases from [Mg/H] of -0.75 to -0.1 and then decreases from [Mg/H] of -0.1 to 0.5.
The process vector components of the light odd-Z elements Al and K resemble those of the α-elements, such as S. Both exhibit q_CC and f_CC near 1 through solar metallicity, with an increase in q_Ia and a downturn in f_CC at supersolar metallicities (especially for K). The behavior of the Na process vector components is more complex, with peaks and troughs in q_Ia,Na. We find that Na has the strongest contributions from the delayed process of all α and light odd-Z elements, with q_Ia,Na ≳ 0.5 at almost all values of [Mg/H] and f_CC,Na < 0.3 at [Mg/H] > 0. The strong delayed contribution to Na is in agreement with the findings of W22 and G22, and in tension with theoretical yields <cit.>.
Unlike α and light odd-Z elements whose delayed production is dominated by SNIa, C and N are thought to be promptly produced in CCSN with additional delayed enrichment from AGB stars <cit.>. We find that the prompt and delayed processes both contribute significantly, and nearly equally, across our stellar sample. Though theoretical N yields from AGB stars have a strong metallicity dependence <cit.>, we observe only a slight positive metallicity dependence in q_CC,C+N and a shallow dip in q_Ia,C+N. We find a population of stars with f_CC,C+N near 0.9 and a population near 0.4.
The Fe-peak elements (Cr, Mn, Fe, Co, Ni) are thought to be produced through prompt CCSN production and delayed SNIa production <cit.>. By construction, q_CC,Fe=0.4 and q_Ia,Fe=0.6 at all metallicities. This produces a bimodal distribution in f_CC,Fe similar to that observed in abundance space. Because of our choice of q_CC,Fe, only a few stars have f_CC,Fe=1 (see Section <ref>). We instead observe a population with f_CC,Fe near 0.8 and a population near 0.4. The process vector components and f_CC distributions for Cr and Ni strongly resemble those of Fe. All three elements have even atomic numbers. At supersolar metallicity, we find that the prompt process dominates Cr production, resulting in an upturn in q_CC,Cr. Conversely, Ni displays a dominant, and increasing, delayed process vector component at supersolar metallicities. The process vector components for Mn and Co (odd atomic numbers) show a complex metallicity dependence, more resembling that of Na. Both elements display a strong delayed process, with q_Ia,Mn > 0.5 at all metallicities and > 1 for [Mg/H] > 0.1. Mn is the only element for which q_CC,Mn decreases to 0 for [Mg/H] ≳ 0.2.
Finally, we find that the delayed process dominates Ce production at intermediate metallicity, with q_Ia,Ce increasing up to [Mg/H] ≈ -0.2 and then decreasing to nearly 0 at [Mg/H] ≈ 0.3. The f_CC,Ce values are clustered near 0.25 at intermediate metallicity, then increase such that the abundances are almost entirely dominated by prompt enrichment at high metallicity.
In addition to process vector components, each star is fit with a prompt and delayed process amplitude, A_i^CC and A_i^Ia respectively (Table <ref>). All elemental abundances are used in the calculation of these amplitudes, so they can be interpreted as “de-noised” abundance labels that suppress observational scatter by averaging over elements via the data-driven model. The value of A_i^CC traces the metallicity (specifically [Mg/H]) while the ratio A_i^Ia/A_i^CC traces the relative contribution of delayed enrichment. In the left panel of Figure <ref> we plot the distribution of A_i^Ia/A_i^CC vs. A_i^CC. We find a bimodal distribution, similar to the Tinsley-Wallerstein diagram ([Mg/Fe] vs. [Fe/H]), as was found in W22 and G22. We stress that the presence of the abundance bimodality was not fed into our model, and yet it is recovered in the best-fit process amplitudes. The stars with larger A_i^Ia/A_i^CC values correspond to the high-Ia population, and those with low A_i^Ia/A_i^CC correspond to the low-Ia population. While in the Tinsley-Wallerstein diagram the two populations blend together at high metallicity, they are more distinguishable in our amplitude space. We plot A_i^Ia/A_i^CC vs. [Mg/H] in the center panel of Figure <ref>. The high-Ia and low-Ia populations are clearly separable through [Mg/H] of 0.4. This is further shown through the A_i^Ia/A_i^CC distributions in the right panel of Figure <ref> for [Mg/H] bins of -0.75 to -0.425, -0.425 to -0.1, -0.1 to 0.225, and 0.225 to 0.55. The three lowest metallicity bins display a bimodal distribution and the highest metallicity bin is dominated by high-Ia stars.
Fiducial model A_i^CC and A_i^Ia values for our stellar sample.
APOGEE ID [Mg/H] A_i^CC A_i^Ia
2M00000546+6152107 -0.20 0.61 0.48
2M00000866+7122144 -0.10 0.82 0.69
2M00001328+5725563 0.04 1.09 1.02
2M00001653+5540107 0.06 1.13 0.36
2M00001717+6147500 -0.23 0.60 0.39
... ... ... ...
Full table available online.
With the optimized process parameters in hand, we can use Equation <ref> to calculate predicted abundances for the fiducial model—the abundances our stellar population would have if the model assumptions are correct and only one prompt and one delayed process contribute. To simulate observational noise, we add an error drawn from a Gaussian distribution with σ equal to the reported error on each abundance for each star. In Figure <ref> and <ref> we plot the predicted abundances plus estimated noise in the second columns. These distributions can be compared to the observed abundance distribution in the first columns.
Overall, the fiducial model successfully reproduces the observed abundance distributions. It is capable of capturing metallicity dependences and bimodality. The predicted abundances plus estimated noise are not, however, able to reproduce the observed abundance scatter. This is especially noticeable for O, C+N, Na, Al, K, Co, and Ce. For these elements the scatter in the observed abundance distribution is larger than in the predicted distribution, suggesting that the APOGEE observational scatter is underestimated, that there are temperature- or surface-gravity-dependent abundance trends (e.g., W22), or that the K=2 model is insufficient–a likely case for elements produced by AGB stars, such as C+N and Ce.
§.§ Comparing to W22
As discussed in Sections <ref> and <ref>, KPM is based upon the two-process model developed in W19 and W22, but with increased flexibility, minimal normalization, and no forced dependence upon the [Fe/Mg] vs. [Fe/H] bimodality or population abundance trends. Further, KPM utilizes all stellar abundances in the optimization of A_i^CC and A_i^Ia, whereas only Mg and Fe are used in W19 and only Mg, O, Si, Ca, Fe, and Ni in W22.
In the fiducial model, we adopt K=2, as in W22, but assume q_CC,Fe = 0.4, 0.1 dex lower than the value assumed in W22. In practice, this moves the implied “pure” CCSN enrichment plateau from [Fe/Mg]=-0.3 to [Fe/Mg]=-0.4 (though the W22 plateau value is determined after they apply a global offset of +0.05 to all abundances). Because our model is non-negative, it requires a lower q_CC,Fe to correctly model the stars on the plateau, whereas W22 assigns stars with [Fe/Mg] < -0.3 negative A_Ia values.
While our stellar samples and model assumptions differ, we plot the W22 q_CC and q_Ia vector components as well as the W22 solar metallicity values in Figures <ref> and <ref> for comparison with our fiducial model. We generally observe similar behavior between KPM and W22. Our q_Ia vector components tend to be ∼ 0.1 greater than those of W22 for elements with significant delayed contributions because of our differing assumptions. The metallicity dependencies agree for most elements, with small variations at the high-[Mg/H] end for O, Al, K, and Ce. We also see good agreement between the KPM and W22 solar metallicity values, with the W22 points slightly offset to larger values for elements with significant delayed contributions.
To compare the accuracy of the models' abilities to reproduce the observed abundances, we identify a subset of ∼ 23,000 stars in both our sample and the W22 sample. We calculate the predicted abundances for each star under KPM and the two-process model, then determine the χ^2 value of the fits for each star (summing over the elements) and for each element (summing over the stars). We plot the cumulative stellar log(χ^2) distribution and the total χ^2 for each element in the right and left panels, respectively, of Figure <ref>. It is important to note that in the calculation of the W22 model residuals, we do not apply the temperature corrections discussed in Section 5.1 of W22.
We find that, overall, the χ^2 decreases between the W22 two-process model and our K-process model, an indication that we better predict all of a star's abundances. When looking at each element individually, we find that we better predict C+N, Na, K, Ni, Mn, Co, and Ce, with major improvements to C+N and Mn. Our fiducial model is significantly worse at predicting Mg, Ca, and Fe than the W22 model, three of the six elements that W22 employ to fit the process amplitudes. Because KPM uses all elements in its optimization, Mg, Ca, and Fe are effectively de-weighted relative to the W22 model, while C+N and Mn influence the model parameters. If we re-fit using only the Mg, O, Si, Ca, Fe, and Ni abundances in the A-step (as in W22), we find that KPM and the two-process model predict the abundances of all elements but Mn with similar accuracy, and that the two-process model better predicts Mn. Our fiducial model's success in predicting C+N and Mn is likely attributable to the inclusion of these elements in the A-step. The choice to include all elements or a subset of elements in the fits should be considered when implementing KPM. If searching for stars with anomalous abundances of element X relative to the expected abundances of others, one may want to exclude X from the A-step.
We note that our fiducial model is fit to a stellar sample that spans a wider range of stellar parameters than the W22 sample. If we repeat our analysis on the W22 stellar sample with q_CC,Fe=0.5, we almost perfectly recover the W22 process vector components, with small deviations at [Mg/H]>0.1, and more substantially improve upon the stellar and elemental χ^2 values. Most notably, KPM is better able to predict the abundances of stars with [Mg/H]>0, where the high-Ia and low-Ia sequences blend together and the W22 categorization of high-Ia and low-Ia stars may be incorrect.
§ VARIATIONS AWAY FROM THE FIDUCIAL MODEL
§.§ Impact of Fe Process Assumptions
While KPM is flexible, we still include some quantitative assumptions, which we make to break exact model degeneracies (see Section <ref>).
Specifically, for each process, we choose one element to assign a “known” process vector component at all metallicities.
These are Mg and Fe in the K=2 case.
Specifically, for the prompt process, we ground our assumption in the nucleosynthetic theory that Mg is a pure CCSN element <cit.>.
Unfortunately, there is no comparable pure SNIa element, nor is there an element for which we know the relative CCSN/SNIa ratio.
In order to break an exact degeneracy between the prompt and delayed processes, we choose to fix the Fe process vector components, which effectively makes assumptions about the exact fractional contribution of CCSN and SNIa (or prompt and delayed processes) to Fe enrichment.
In the fiducial model, we choose q_CC,Fe = 0.4 as this parameter choice is able to reproduce the observed [Fe/Mg] vs. [Mg/H] abundance distribution, as discussed in Section <ref>. This choice impacts the predicted abundances as well as the implied f_CC values of each star. Because our model is non-negative, the choice of q_CC,Fe sets the minimum [Fe/Mg] value attainable by our model (log_10(q_CC,Fe)). In this Section, we explore the implications of different q_CC,Fe assumptions, varying both the zero point and the metallicity dependence (slope). In Figure <ref>, we plot the minimum [Fe/Mg] as a function of [Mg/H] for the q_CC,Fe zero points and slopes we explore. With a slope of 0.0, we choose q_CC,Fe=0.5 (the W22 value), 0.45 (approximate plateau value at [Mg/H]=-0.75), 0.4 (the fiducial value that skirts the edge of the distribution), and 0.35 (captures almost all stars). With a slope of 0.15, which roughly matches the slope of the low-Ia sequence at intermediate metallicity, we choose q_CC,Fe=0.5 (passes through the center of the low-Ia density), 0.4 (skirts the edge of the distribution), and 0.35 (captures almost all stars).
We repeat the optimization of KPM with K=2 and these assumptions. In Figure <ref>, we plot the resulting predicted abundance distributions plus estimated noise alongside the observed distribution. We do not plot the prediction for the fiducial model (q_CC,Fe=0.4 with slope 0.0), as this is shown in Figure <ref>. Both models with q_CC,Fe=0.5 fail to reproduce the shape and width of the low-Ia abundance distribution. They instead predict a much thinner sequence that is flat for a slope of 0.0 or slightly inclined for a slope of 0.15. The model with q_CC,Fe=0.45 and slope 0.0 is better, but still predicts a low-Ia abundance distribution that is too thin, flat, and dense. The other four models (q_CC,Fe=0.35 and 0.4 with slopes of 0.0 and 0.15) predict an abundance distribution that strongly resembles the observed one. There are minor differences in the low [Fe/Mg] and low [Mg/H] region, but it is difficult to tell by eye which model is best.
To better assess the goodness of fit of each model, we calculate the average χ^2 value per star. The models with q_CC,Fe of 0.5 and 0.45, regardless of slope, have an average χ^2 per star >90 while the models with q_CC,Fe of 0.4 and 0.35 have an average χ^2 per star < 55. In both the metallicity-independent and metallicity-dependent cases, the models with q_CC,Fe=0.4 have the lowest average χ^2 per star, at 54.54 and 54.47, respectively, though the models with q_CC,Fe=0.35 have a χ^2 that is only greater by ∼0.1. Of the seven models explored here, the case with q_CC,Fe=0.4 and a slope of 0.15 has the lowest average χ^2 per star, indicating that the Fe abundances are best fit by a metallicity-dependent prompt process. Introducing this metallicity dependence subtly changes the shape of the predicted low-Ia distribution in a way that achieves better agreement with APOGEE observations.
Though the q_CC,Fe = 0.4 and 0.35 models are similar in terms of their goodness of fit, their nucleosynthesis implications are different. In Figure <ref>, we plot the median value of f_CC (Equation <ref>) for the low-Ia population at solar metallicity (-0.05 < [Mg/H] < 0.05), where low-Ia stars are defined by
[Mg/Fe] > 0.12 - 0.13 [Fe/H], [Fe/H] < 0; [Mg/Fe] > 0.12, [Fe/H] > 0,
as in W19, W22, and G22.
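In code, this selection reads (our sketch; mg_fe and fe_h are per-star [Mg/Fe] and [Fe/H] arrays):

import numpy as np

def is_low_ia(mg_fe, fe_h):
    # Low-Ia (high-alpha) selection used above, following W19
    return np.where(fe_h < 0, mg_fe > 0.12 - 0.13 * fe_h, mg_fe > 0.12)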
We only show the median values for the models with slope 0.0, as the solar metallicity median values for the metallicity-dependent models are almost identical for matching values of q_CC,Fe. We find that the choice of q_CC,Fe has little impact on the median f_CC values of elements dominated by CCSN enrichment (e.g., O, Al, S, K). As the delayed contribution increases, the median elemental f_CC values decrease more significantly with decreasing q_CC,Fe. The choice of q_CC,Fe most impacts the median f_CC values for Na, Cr, Fe, Mn, and Ce, with the median f_CC for Mn decreasing from 0.42 for q_CC,Fe=0.5 to 0.22 for q_CC,Fe=0.35. Because the q_CC,Fe value sets the prompt enrichment plateau, a lower q_CC,Fe implies a lower f_CC value.
While the high q_CC,Fe models can likely be ruled out due to poorness of fit, the true q_CC,Fe value and its metallicity dependence are unknown. It is therefore important not to over-interpret the specific f_CC values of a given model. The f_CC parameter can provide qualitative descriptions of which elements have more or less prompt/delayed enrichment, but the exact values are uncertain.
§.§ Increasing the Number of Processes
In our fiducial model, we adopt K=2 with the two processes representing prompt, CCSN-like enrichment and delayed, SNIa-like enrichment. While a K=2 model can well describe the stellar abundances (e.g., Figures <ref> and <ref>), the abundance residuals cannot be explained by observational noise alone and hold information about the intrinsic variations from a K=2 model ( G22, W22, ). Potential sources of such scatter include metallicity-dependent SN yields with a bursty star formation history, environmental variations in the IMF, stochastic sampling of the IMF, and more than two distinct processes (e.g., AGB stars, merging neutron stars, and unique classes of SNIa) with different time delays for enrichment <cit.>. Note that the existence of many enrichment channels is not in itself sufficient for producing scatter around a K=2 model (or even a K=1 model); one needs star-to-star variation in the relative amplitude of these channels. For example, in a fully mixed one-zone model, all abundances depend only on time, even if many enrichment channels contribute.
In this Section, we will explore the impact of adding additional processes to our model, increasing from K=2 to K=4. Because KPM is sensitive to enrichment with different time delays, adding components could be interpreted as adding sources with distinct enrichment time scales. For example, if AGB stars and SNIa enrich with the same time delay, the model would fit both sources in one delayed component. If AGB and SNIa enrich with different delay times, a third component could pick up delayed AGB enrichment not captured by the original delayed process. Indeed, evidence of a distinct AGB-like process is identified in G22 and W22, where correlated residuals are used to expand the two-process model. However, both works add components in a restrictive manner that requires choosing which elements to assign to 3rd and/or 4th processes and that does not allow for the original two processes to vary.
Our goal is to demonstrate the potential for using to flexibly model more than two enrichment channels and improve the accuracy of the abundance predictions. We allow the model to identify the elements best-fit with additional components and modify the K=2 process parameters. Ultimately, such a method could be used to identify elements with more than two enrichment channels, but our data set may not be capable of doing so robustly. In the K=4 case, our model becomes
m_ij = log_10( A_i^cc q_cc, j^Z + A_i^Ia q_Ia, j^Z + A_i^3 q_3, j^Z + A_i^4 q_4, j^Z),
where q_3, j^Z and q_4, j^Z are the third and fourth process vector components and A_i^3 and A_i^4 are the third and fourth process amplitudes. The model, however, does require some regularization to converge. As in the K=2 case where we assume that Mg is a pure CCSN element and fix the and values, we need elements to regulate our 3rd and 4th processes. We choose Ce and Mn, two elements with larger residuals that likely have additional nucleosynthetic sources—Ce from AGB stars and Mn from distinct classes of SNIa <cit.>. To test the impact of our choice of representative elements we also fit the K=4 model with the third and fourth processes fixed to C+N and Cr. We find that similar groups of elements are better fit with additional components. We initialize the K=4 model at the K=2 model values of , , , and with the added constraints that
q_ 3, Mg^ Z = 0,
q_ 3, Fe^ Z = 0,
q_ 3, Mn^ Z = 0,
q_ 3, Ce^ Z = 1
and
q_ 4, Mg^ Z = 0,
q_ 4, Fe^ Z = 0,
q_ 4, Mn^ Z = 1,
q_ 4, Ce^ Z = 0
at all metallicities. We first fit the A-step to only Mg, Fe, Mn, and Ce, and then conduct 32 iterations of the q-step and A-step, as described in Section <ref>. We again inflate the scatter according to Equation <ref> with Q=5.
The model converges upon a set of process vector components and amplitudes that can be combined with Equation <ref> to predict the stellar abundances and calculate the fractional contribution from each process. In Figure <ref>, we plot A^k_i/ vs. for the SNIa, third, and fourth processes. We find that processes 3 and 4 are most prominent in stars at low metallicity and that there is a large population of stars with A^3_i and/or A^4_i ≈ 0. In Figure <ref> we plot the observed and predicted abundance distributions as well as the process vector components and fractional contribution from each component for a subset of elements. We note that the model parameters q_3,j^Z and q_4,j^Z should be interpreted in conjunction with the amplitudes, as we set the third and fourth process vector components for Ce and Mn to an arbitrary value with no metallicity dependence.
We find that the third process, regularized to Ce, contributes at a low level to O, Si, S, Al, and K and more significantly to Ca, Na, Cr, and Ce. The fourth process, regularized to Mn, contributes at a low level to K and more significantly to S, C+N, Na, Cr, Ni, Mn, and Co. These best-fit element groupings resemble, but are not identical to, the elements selected for additional components in W22, where the third process included Ca, Na, Al, K, Cr, and Ce and the fourth process included Ni, V, Mn, and Co.
In the left-most columns of Figure <ref>, we plot the v.s. distributions for the observed stellar sample, the K=2 model predictions, and the K=4 model predictions for Ca, C+N, Na, Cr, Ni, Mn, Co, and Ce. We note that the predicted abundances do not have noise added (unlike Figures <ref> and <ref>) to highlight the differences between the K=2 and K=4 predictions. Comparing the predicted abundances from the K=2 and K=4 process models, we see that the K=4 process model is better able to capture the abundance scatter than the K=2 model, especially at the low metallicity end of the low-Ia population. This result is expected, as adding more model components will increase the abundance space that is able to reproduce.
In the fourth and fifth columns of Figure <ref>, we plot the process vector components and median f^k_ij as a function of of the low-Ia population (Equation <ref>), respectively, where
f^k_ij = A_i^k q_k,j^Z / ( A_i^cc q_cc, j^Z + A_i^Ia q_Ia, j^Z + A_i^3 q_3, j^Z + A_i^4 q_4, j^Z).
We include and as well as median and f^ Ia_ij from the K=2 model in respective columns for comparison. We see that the third process contributes significantly to Ca, Na, Cr, and Ce at low metallicity, with decreasing contribution up to ≈0.1. The fractional contribution from the K=2 prompt and delayed processes to these elements decreases under the K=4 model. The fourth process behaves in a similar manner but with elements C+N, Na, Cr, Ni, Mn, and Co. The fractional contribution from the third and fourth process is nearly identical in the high-Ia population.
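A minimal sketch (ours, not the paper's code) of the fractional-contribution calculation in the equation above, again suppressing the metallicity dependence of q for brevity:

```python
import numpy as np

def fractional_contributions(A, q):
    """f^k_ij = A_i^k q_{k,j} / sum_k' A_i^k' q_{k',j} for each star i, process k, element j."""
    A = np.asarray(A, dtype=float)                   # shape (n_stars, K)
    q = np.asarray(q, dtype=float)                   # shape (K, n_elements)
    contrib = A[:, :, None] * q[None, :, :]          # shape (n_stars, K, n_elements)
    total = contrib.sum(axis=1, keepdims=True)
    return contrib / np.clip(total, 1e-30, None)     # fractions sum to 1 over k
```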
The statistical improvement between the K=2 and K=4 models is evident in the χ^2 values.
In Figure <ref>, we plot the cumulative log_10(χ^2) distributions for the fits to each star and the total χ^2 for each element for the fiducial K=2 model and the K=4 model.
We find that the cumulative log_10(χ^2) distribution shifts to lower values with the increase in model components, with χ^2 decreasing by more than two for most stars, as expected for the addition of two degrees of freedom.
We also find that the χ^2 per element is lower for all elements in the K=4 model.
Significant improvements to Ca, C+N, Mn, and Ce are likely due to the additional third and fourth components capturing abundance scatter that the original two processes could not.
Notably, we also see a significant improvement in the Fe fit, even though we require q_ 3,Fe^Z = q_ 4,Fe^Z = 0. Because all elements influence the K=2 model fit, the fiducial model was likely pulled away from the best solution for Fe to accommodate another element, like Mn.
With the additional components able to account for the non-Fe-like enrichment, the original two processes are better able to capture the Fe enrichment.
Through this investigation, we find that is extendable to K>2 processes. The additional processes improve the model quantitatively, but additional work is needed to improve the nucleosynthetic interpretability. We provide a discussion of the future science that and the K=4 model enable below.
§ DISCUSSION
In this paper, we present , a flexible and data-driven model for inferring nucleosynthesis yields. describes stellar abundances as the sum of K components, where each component is the product of a metallicity-dependent process vector component (fit to each element) and a process amplitude (fit to each star). Combined with a likelihood function and a set of assumptions (Section <ref>) that make the processes interpretable in terms of nucleosynthetic sources, the best-fit parameters can be used to calculate fractional contributions from each process as well as a full suite of predicted K process abundances.
We fit with K=2 to abundance labels for 15 elements and 48,659 RGB stars in APOGEE DR17, selecting a population that minimizes statistical and systematic errors while spanning an [Mg/H] of -0.8 to 0.5.
In the K=2 model, the first process, fixed to Mg, represents prompt CCSN-like enrichment, and the second process, fixed to Fe, represents delayed SNIa-like enrichment—but other nucleosynthetic sources with similar time delays may be mixed into each. Under our adopted assumptions, the prompt process also contributes to Fe, but the delayed process does not contribute to Mg, in accordance with theoretical expectations for CCSN and SNIa. Overall, we find that K=2 is a good fit to the data and that the model successfully recovers the global abundance patterns in the Milky Way. While does not rely on vs. bimodality or median abundance trends, it is able to recover the observed bimodal abundance distribution. Further, the fit parameters and act as combined individual-process abundance labels, revealing a clearer signature of bimodality at high metallicity in / vs. space than in vs. . This suggests that the fit parameters and predicted abundances could be used as a higher signal-to-noise tracer of nucleosynthesis, as they are using a justified likelihood to condense information from 15 elements into two variables.
To test the assumptions of the fiducial model, we explore the impact of varying the fixed value of .
We find that high values of (0.5 and 0.45) are not able to reproduce the observed [Fe/Mg] vs [Mg/H] abundance distribution, regardless of the process vector component's metallicity dependence.
Our requirement that and are non-negative makes it impossible for the model to reproduce the lowest values in the APOGEE data. Values = 0.4 and 0.35 with = 0 or 0.15 produce similarly successful fits. While predicted abundance distributions appear similar for these models, the implied fractional contribution from the prompt process is dependent upon the Fe assumption for elements with substantial delayed enrichment. Through this exploration, we conclude that the quantitative nucleosynthetic interpretation of is dependent upon the input assumptions, and that there is inherent uncertainty in the values.
Finally, we expand from K=2 to K=4, regularizing the third and fourth processes to Ce and Mn. builds off of the original model, such that the K=4 model starts at the K=2 solution and then finds the best-fit parameters for K=4, altering the original solution and allowing all elements (except Mg and Fe) to have contributions from additional processes. We find that S, Ca, C+N, Na, Cr, Mn, Co, Ni, and Ce are best fit with a third and/or fourth component, with such processes contributing most significantly at low metallicity.
The information for constraining the q^Z_k,j values for the third (fourth) process comes from the star-by-star deviations of Ce (Mn) from the K=2 model predictions and their correlation with deviations for other elements X_i. Relative to the approach taken in W22 (Section 8), our K=4 model requires non-negative q^Z_k,j and A_i^k for all elements and stars, and it starts by tying the third and fourth processes to individual elements rather than groups of elements.
The K=4 model improves the ability of to fit the abundances of all elements but especially improves predictions of Ca, C+N, Fe, Mn, and Ce. This successful implementation of a K=4 model shows that can be extended to K>2, and it has potential future use in constraining enrichment beyond a single prompt and delayed process—critical to understanding enrichment from AGB stars, merging neutron stars, and rarer novae.
is based upon the two-process model developed in W19 and W22. While the two models are identical in format for the K=2 case, the model assumptions, parameter derivations, and implementations differ. The W22 two-process model derives process vector components from median abundance trends, reliant upon vs. bimodality, and fits process amplitudes to a subset of 2-6 α and Fe-peak elements. , on the other hand, employs a likelihood function fit to all stars and all elements to derive both process amplitudes and vector components. Our more data-driven implementation results in the improved ability of the K=2 model fit to predict all of a star's abundances. Notably, can better predict C+N and Mn abundances than the W22 two-process model, since all elements are used to constrain the fits. The most significant improvement to the original two-process model, though, is in 's flexibility. The flexible implementation of the model allows us to easily vary the assumptions, such as , and increase the number of model components to study the impact of our assumptions on the results and push the interpretation of beyond standard CCSN and SNIa nucleosynthesis in a less restrictive manner than W22 and G22.
However, is not without its own faults. The assumptions listed in Section <ref> may incorrectly skew our results, and the model could benefit from improvements in implementation. While assumptions (4) and (5) on Mg and Fe production are flexible, requires that both elements have fixed process vector components. If our assumptions are incorrect and, for instance, Mg is not a pure prompt element or (in the K=4 case) Fe has contributions from multiple delayed sources, our nucleosynthetic interpretation of may be wrong. This becomes more challenging as K increases and we have to make more assumptions to break rotational symmetries. Additionally, assumption (7) states that the APOGEE data products can be used for this project, but we inflate outlying-star abundance errors with a softening parameter, Q to account for their likely underestimation. It is also possible that Q is accounting for some of the real intrinsic scatter in the data and inflating the observational error on true outlier stars. In future implementation, the development of a more robust method to justifiably down-weight outlier stars from the global fits would be beneficial. This method should account for both non-Gaussian observational errors (e.g., from bad telluric subtraction or unlucky line blends) and physically interesting outliers (e.g., from binary mass transfer).
Finally, fits process vector components along a spline with 11 knots (assumption 6), and those knots have fixed locations in metallicity.
As this method fits a polynomial between each knot, it can result in sharp features at the knot locations in metallicity regions with few points or large scatter. Fitting process vector components with a differentiable function might be more reasonable, though results shown in Appendix C.1 in G22 suggests that this change might have minimal impact on the results.
In general this model for the metallicity-dependence of the yields is very rigid; a better model could both have more flexibility and be smoother.
Beyond improvements to the underlying model assumptions and implementation, needs to include parameter uncertainty. While the model delivers process vector components and amplitudes, which can be used to calculate f^k_ij and K process predicted abundances, the current implementation does not return errors on any variable. The best method to derive such errors has not been explored, but one could use the likelihood function or bootstrapping. These methods will encapsulate the uncertainty on process parameters from the APOGEE abundance errors but will not capture the uncertainty due to model assumptions, such as (Section <ref>).
While such future changes will improve the model, the current form of and its data products can support ongoing research and will enable new science.
Most immediately, provides high signal-to-noise abundance labels, and , as well as de-noised stellar abundances (m_ij).
The best-fit values of and , in particular, are powerful tracers of nucleosynthesis.
They show a bimodality at all metallicities, as do some of the de-noised abundances.
And—because the model is a maximum-likelihood model—they represent information-theory optimal combined measures of α and Fe-peak abundances.
That is, these data-driven amplitudes could replace more theory-driven measures of the relative contributions of CCSN and SNIa enrichment channels.
In Section <ref>, we showed that the high-Ia and low-Ia populations are more clearly defined in / vs. [Mg/H] than in [Fe/Mg] vs. [Mg/H]. In amplitude space, the low-Ia population can be re-defined as
A^Ia/A^cc < 0.35 for [Mg/H] < -0.2; A^Ia/A^cc < 0.5 + 0.7 [Mg/H] for -0.2 ≤ [Mg/H] < 0.1; A^Ia/A^cc < 0.57 for [Mg/H] ≥ 0.1.
Compared to Equation <ref> (W19, W22), this new definition re-classifies 647 stars as high-Ia and 224 stars as low-Ia. We show the location of these stars in / vs. [Mg/H] and [Mg/Fe] vs. [Fe/H] in Figure <ref>. Many of the re-classified stars are at >-0.1. When dividing in [Mg/Fe], it is difficult to correctly separate the populations at high metallicity, as they are blended together. Our new definition also re-classifies many stars near [Fe/H] of -0.3 as high-Ia, suggesting that the W19 and W22 high-Ia definition has too shallow a slope. While only ∼ 2% of stars are re-classified under the new definition, we suggest that Equation <ref> be used to chemically define the low-Ia and high-Ia populations if fits are available, especially if studying stars with > -0.1.
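For illustration, the amplitude-space cut above can be applied star by star as in the sketch below; the function and variable names (ratio for the amplitude ratio, mg_h for [Mg/H]) are ours, and the thresholds are taken from the re-defined low-Ia boundary as reconstructed above.

```python
import numpy as np

def is_low_ia(ratio, mg_h):
    """Amplitude-space low-Ia cut: ratio is the Ia-to-cc amplitude ratio, mg_h is [Mg/H]."""
    ratio = np.asarray(ratio, dtype=float)
    mg_h = np.asarray(mg_h, dtype=float)
    threshold = np.where(mg_h < -0.2, 0.35,
                np.where(mg_h < 0.1, 0.5 + 0.7 * mg_h, 0.57))
    return ratio < threshold

# Example: a star at [Mg/H] = 0.0 is classified low-Ia if its amplitude ratio is below 0.5.
print(is_low_ia(0.45, 0.0))   # True
print(is_low_ia(0.55, 0.0))   # False
```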
Beyond improving the definition of high-Ia and low-Ia populations, the parameters and predicted abundances could be used in any current analysis that strives to show trends with abundance labels. We predict that trends of stellar parameters with [X/H] will be clearer when comparing to m_ij, , or .
Such analysis with parameters will be useful in studying nucleosynthesis, dynamics, disk formation, stellar ages, and much more. However, in this paper we only present fits for a small population with restricted stellar parameters, relative to the full APOGEE sample.
While could be fit to the full APOGEE sample, systematic abundance effects with and , as well as other abundance artifacts <cit.>, cause the abundance trends to differ across the Hertzsprung-Russell diagram. The best-fit parameters for the giants would differ from those for the dwarfs. If such systematics could be accounted for (see Sit et al. in prep) we could fit the full APOGEE stellar sample with , or train on a subset of high signal-to-noise stars and apply the fits to the full sample. This potential future analysis could reveal additional information about the nucleosynthetic history of our Galaxy and would provide higher signal-to-noise abundance labels for the full sample.
The success of the two-process model (W19, W22) and with K=2 suggests that the distribution of disk stars in APOGEE abundance space is largely two-dimensional (2D), though more dimensions are required to fully explain the data (, W22). In this paper, we have focused on a 2D nucleosynthetic model, with the two dimensions representing prompt CCSN-like enrichment and delayed SNIa-like enrichment. However, another 2D class of theoretical models for the Milky Way exists, describing stars in terms of birth radius and birth date <cit.>. Are these two 2D models related? If they are, then the nucleosynthetic parameters from ( and ) should predict asteroseismic ages (or masses), up to unpredictable aspects of mass transfer, as well as the guiding center radius, up to unpredictable aspects of radial migration. While a deeper study of the implications of the disk's two-dimensionality is outside the scope of this work, we show the relationship between asteroseismic age from the APOKASC sample <cit.> and process amplitudes in Figure <ref>. Here we see a clear gradient in age with and / (as in G22 and W22), though outlier stars are scattered throughout. We predict that the parameters will be better age diagnostics than APOGEE abundances, and that age outliers may be mass transfer objects.
Finally, because of the flexibility of , new scientific applications are enabled that were not feasible before. Because does not rely on [Mg/Fe] vs. [Fe/H] bimodality, non-bimodal populations can now be fit with a multi-component nucleosynthetic model. could be applied to the low metallicity disk, halo, Gaia Enceladus Sausage, Nubecula Major, Nubecula Minor[Historically referred to as the LMC and SMC.], other Milky Way satellites, and more. can also be easily extended to K>2 in a much less restricted way than the two-process model. While a K=2 model well describes the global abundance patterns, intrinsic residual scatter on the scale of 0.01 to 0.02 dex remains (, W22, G22). This scatter could be signatures of enrichment from non-CCSN/SNIa sources, stochastic sampling of the IMF, environmental IMF variations, or metallicity-dependent SN yields with a bursty star formation history <cit.>. While it is difficult to identify non-CCSN or SNIa enrichment in the APOGEE data alone, where only C+N and Ce are expected to have significant contributions from other sources, there may be signatures in other surveys with better coverage of heavier elements. Applying a K>2 model to GALAH <cit.>, or an overlapping sample of APOGEE and GALAH stars <cit.>, could prove more successful.
In the K=2 and K>2 cases, results from will help us disentangle our Galactic formation and enrichment history. This data-driven model opens doors to many new research projects and exciting future scientific results. To use yourself, please reference the KPM GitHub repository[<https://github.com/13emilygriffith/KProcessModel>] or contact the corresponding author.
§ ACKNOWLEDGEMENTS
It is a pleasure to thank
Polly Frazer (NYU),
Adrian Price-Whelan (Flatiron),
Tawny Sit (OSU),
Soledad Villar (JHU),
the Darling research group at CU Boulder,
CU Boulder Research Computing services and staff,
and the Astronomical Data group at the Flatiron Institute
for valuable discussions and help.
E.J.G. is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2202135.
B.R. acknowledges support by the Deutsche Forschungsgemeinschaft under the
grant MI 2009/2-1.
Funding for the Sloan Digital Sky Survey V has been provided by the Alfred P. Sloan Foundation, the Heising-Simons Foundation, the National Science Foundation, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is <www.sdss.org>.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration, including the Carnegie Institution for Science, Chilean National Time Allocation Committee (CNTAC) ratified researchers, the Gotham Participation Group, Harvard University, Heidelberg University, The Johns Hopkins University, L’Ecole polytechnique fédérale de Lausanne (EPFL), Leibniz-Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut für Astronomie (MPIA Heidelberg), Max-Planck-Institut für Extraterrestrische Physik (MPE), Nanjing University, National Astronomical Observatories of China (NAOC), New Mexico State University, The Ohio State University, Pennsylvania State University, Smithsonian Astrophysical Observatory, Space Telescope Science Institute (STScI), the Stellar Astrophysics Participation Group, Universidad Nacional Autónoma de México, University of Arizona, University of Colorado Boulder, University of Illinois at Urbana-Champaign, University of Toronto, University of Utah, University of Virginia, Yale University, and Yunnan University.
Software: matplotlib <cit.>, NumPy <cit.>, pandas <cit.>, astropy <cit.>, and jax <cit.>.
Facilities: Sloan, Kepler
|
http://arxiv.org/abs/2307.07495v1 | 20230714172824 | Optimal Metric Distortion with Predictions | [
"Ben Berger",
"Michal Feldman",
"Vasilis Gkatzelis",
"Xizhi Tan"
] | cs.GT | [
"cs.GT"
] |
Optimal Metric Distortion with Predictions
==========================================
In the metric distortion problem there is a set of candidates and a set of voters, all residing in the same metric space. The objective is to choose a candidate with minimum social cost, defined as the total distance of the chosen candidate from all voters. The challenge is that the algorithm receives only ordinal input from each voter, in the form of a ranked list of candidates in non-decreasing order of their distances from the voter, whereas the objective function is cardinal.
The distortion of an algorithm is its worst-case approximation factor with respect to the optimal social cost. A series of papers culminated in a 3-distortion algorithm, which is tight with respect to all deterministic algorithms.
Aiming to overcome the limitations of worst-case analysis, we revisit the metric distortion problem through the learning-augmented framework, where the algorithm is provided with some (machine-learned) prediction regarding the optimal candidate.
The quality of this prediction is unknown, and the goal is to evaluate the performance of the algorithm under a perfectly accurate prediction (known as consistency), while simultaneously providing worst-case guarantees even for arbitrarily inaccurate predictions (known as robustness).
For our main result, we characterize the robustness-consistency Pareto frontier for the metric distortion problem.
We first identify an inevitable trade-off between robustness and consistency.
We then devise a family of learning-augmented algorithms that achieves any desired robustness-consistency pair on this Pareto frontier.
Furthermore, we provide a more refined analysis of the distortion bounds as a function of
the prediction error (with consistency and robustness being two extremes).
Finally, we also prove distortion bounds that integrate the notion of α-decisiveness, which quantifies the extent to which a voter prefers her favorite candidate relative to the rest.
§ INTRODUCTION
In the metric distortion problem there is a set of candidates and a set of voters , all residing in the same metric space. This metric defines a distance d(v,c) between each voter v∈ and candidate c∈, and the objective is to choose a candidate c with minimum social cost, defined as the total distance of the chosen candidate from all voters, i.e., ∑_v∈ d(v,c). This captures spatial models of voting from the political science literature, such as the Downsian proximity model <cit.>, as well as the classic problem of facility location <cit.>. Computing the optimal candidate, ø=min_c∈∑_v∈d(v,c) given the pairwise distances, d(v,c), is a computationally easy problem, but the challenge in the metric distortion problem is that the algorithm has access only to ordinal information regarding the preferences of each voter, i.e., a ranked list of candidates in non-decreasing order of their distances from the voter. This information limitation restricts the designer's ability to optimize the social cost and the performance of an algorithm is evaluated based on its distortion, i.e., its worst-case approximation factor with respect to the optimal social cost.
The metric distortion problem has been the focus of a series of papers
<cit.> which evaluated
the distortion of known algorithms and designed new ones, aiming to achieve better distortion bounds. <cit.> established that no deterministic algorithm can achieve a distortion better than 3, and <cit.>
eventually provided a
deterministic algorithm that matches this bound (see Section <ref> for details). Even after this tight bound was achieved, though, subsequent work by <cit.> and <cit.>
have provided an even deeper understanding of the problem, leading to simpler algorithms that achieve the same bound.
Learning-augmented framework.
Aiming to overcome the limitations of worst-case analysis, a surge of recent work has focused on a new analysis framework for achieving refined bounds using the guidance of (machine-learned) predictions. In this framework, often termed algorithms with predictions or learning-augmented algorithms, the algorithm is enhanced with a prediction that it can use as a guide toward improved performance. This prediction can take different forms (depending on the information that the designer may benefit from in the setting at hand) and can be generated using historical data or other relevant information available to the designer.
However, the quality of this prediction is unknown and cannot be trusted.
This framework goes beyond worst-case analysis through bicriteria optimization: the algorithm is simultaneously evaluated based on its performance when the prediction is accurate (known as its consistency) as well as its performance when the prediction can be arbitrarily inaccurate (known as its robustness).
One extreme choice is to follow the prediction blindly, which is rewarding when it is accurate (leading to good consistency), but can yield very bad outcomes when it is inaccurate (leading to poor robustness). The other extreme is to disregard the prediction, which provides weak consistency guarantees. In general, each learning-augmented algorithm provides a tradeoff between robustness and consistency, and the goal is to identify the Pareto frontier between these two measures.
The learning-augmented framework has been applied quite broadly, e.g., to analyze the competitive ratio of online algorithms, the running time of algorithms and data structures, or the performance of mechanisms in multiagent systems
(see <cit.> for a frequently updated list of papers in this rapidly growing literature). In all of these applications, this framework has been used to mitigate information limitations that pose obstacles for the designer.
Since the main challenge in metric distortion is the information gap that the designer faces, this makes it a prime application domain for this framework.
Specifically, apart from the rankings of the voters, it is reasonable to assume that the designer may also have access to historical observations regarding the voters' choices in other matters that may correlate with their preferences in the matter at hand, thus providing hints regarding their preferred outcome in the metric space.
§.§ Our Results and Techniques
We provide the first analysis of the metric distortion problem using the learning-augmented framework and our main result is a characterization of the robustness-consistency Pareto frontier for this problem. Specifically, we consider the class of deterministic learning-augmented algorithms that are enhanced with a prediction ∈ regarding who is the optimal candidate and we first exhibit an inevitable tradeoff between the robustness and consistency that these algorithms can achieve
(see Theorem <ref>).
We then propose a family of such algorithms, termed BoostedSimultaneousVeto, or , that are parameterized by δ∈ [0,1) and achieve any desired robustness-consistency pair on this Pareto frontier
(see Theorem <ref>).
This is cast in the following theorem.
Theorem: For any δ∈[0,1), achieves (3-δ)/(1+δ)-consistency and (3+δ)/(1-δ)-robustness.
Moreover, this is the optimal tradeoff.
Namely, no deterministic algorithm that is (3-δ)/(1+δ)-consistent can be strictly better than (3+δ)/(1-δ)-robust, even for the line metric and just two candidates.
Figure <ref> exhibits this optimal tradeoff for values of δ∈ [0,1).
If the designer is confident regarding the quality of the prediction and willing to relax the robustness guarantee to β≥ 3, then choosing the corresponding value of δ yields a consistency guarantee of (β+3)/(β-1). For example, the designer can achieve a consistency of 2 with a guaranteed robustness of 5.
The algorithm.
is a learning-augmented modification of the SimultaneousVeto algorithm very recently proposed by <cit.>. SimultaneousVeto initially assigns to each candidate c∈ a score equal to their plurality (i.e., equal to the number of voters that ranked c at the top) and then lets the voters continuously and simultaneously decrease the score of their least preferred candidate among the ones that still have a positive score. It then returns the candidate(s) whose score reaches 0 last. enhances this algorithm by “boosting” the initial score of the candidate ∈ predicted to be optimal,
where the amount of this boost is a carefully chosen increasing function of δ,
and then also appropriately boosting the rate at which all the voters decrease the scores. The higher the value of δ, and hence the confidence of the designer regarding the quality of the prediction, the larger the amount of this boost.
A more refined analysis.
Consistency and robustness capture two extremes regarding the accuracy of the prediction, namely, full accuracy and complete arbitrariness, respectively. A more refined analysis provides bounds on the distortion as a function of the accuracy level of the prediction.
In Section <ref>, we provide such refined worst-case bounds on the distortion of as a function of the prediction quality.
Specifically, we define the prediction error as η = n · d(, ø)/(ø), i.e., the distance between the predicted and the optimal candidates normalized[As we discuss in Section <ref>, normalizing the error using the average distance from the optimal candidate is necessary to accurately quantify the quality of the prediction.] by the optimal average distance, (ø)/n, and prove the following.
Theorem:
For any δ∈ [0,1) and prediction error η, the distortion of is at most
min{(3-δ+2δη)/(1+δ), (3+δ)/(1-δ)}.
This bound recovers the consistency guarantee for η=0, then increases linearly in η until reaching the robustness bound, where it flattens.
Figure <ref> demonstrates this bound for different values of δ.
For example, as long as η ≤ 2, the distortion is at most 3, for any value of δ.
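For intuition, the bound in the theorem is easy to tabulate; the sketch below (our own helper, not part of the paper) evaluates min{(3-δ+2δη)/(1+δ), (3+δ)/(1-δ)} for a few values of δ and η.

```python
def distortion_bound(delta, eta):
    """Worst-case distortion bound of the algorithm as a function of the prediction error eta."""
    error_term = (3 - delta + 2 * delta * eta) / (1 + delta)
    robustness = (3 + delta) / (1 - delta)
    return min(error_term, robustness)

for delta in (0.0, 0.25, 0.5):
    print(delta, [round(distortion_bound(delta, eta), 3) for eta in (0, 1, 2, 5, 100)])
# For eta <= 2 the bound never exceeds 3, and it flattens at (3+delta)/(1-delta).
```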
Metric spaces with decisive voters.
In Section <ref> we refine the distortion bounds further, using the α-decisiveness framework of <cit.>. For α∈ [0,1], a voter v is α-decisive if
her distance from her favorite candidate is at most α times her distance from any other candidate.
An instance is α-decisive if every voter is α-decisive.
We further refine our analysis according to the parameter α, and obtain a distortion upper bound of
(2 + α - αδ)/(1+δ) + min{2δη/(1+δ), 8δ/((1+δ)(1-δ))}.
Note that the case of α=1 imposes no restrictions, thus recovering our aforementioned bounds.
For α = 0, the distance between every voter and her favorite candidate is 0, capturing the well-studied peer selection problem, where the set of voters and the set of candidates coincide, and each voter ranks herself first.
Technical novelty.
To ensure good consistency, our algorithm introduces a boost towards the predicted optimal candidate while accordingly adjusting the score decrement rate of the voters.
However, this boost
inevitably induces additional costs in terms of robustness. These additional costs are proportional to the quality of the prediction (the distance between the predicted and actual optimal candidates, d(ø,p)) for which we need to provide an upper bound as a function of the optimal social cost (See Lemma <ref>).
This obstacle did not arise in any of the previous works, and existing analyses do not offer a solution to address it. Proving such a bound requires substantial new structural insights regarding the problem, the majority of which is presented in Section <ref>.
To this end, we define the veto map — an edge-weighted non-bipartite graph that captures the down-voting relationships among voters during the execution of the algorithm.
We then carefully lower bound the contribution to the optimal social cost due to
any pair of voters connected by an edge in the veto map, and prove that there exists a sufficiently large fractional matching to achieve our intended bound.
§.§ Related Work
Deterministic metric distortion. <cit.> initiated the study of the metric distortion problem and showed that well-known algorithms such as “plurality”, “Borda count”, “k-approval”, and “veto” admit a distortion that increases linearly with the number of voters or candidates. They also showed that the distortion of the transferable vote algorithm grows logarithmically in the number of candidates, while Copeland's rule achieves a
distortion of 5.
They also proved a lower bound of 3 for any deterministic algorithm and conjectured that this bound is tight.
The first deterministic algorithm to guarantee a distortion better than 5 was proposed by <cit.>, who achieved a distortion of 2+√(5)≈ 4.236 using a novel algorithm.
They also provided a sufficient condition for achieving a distortion of 3.
Independently, <cit.> derived the same bound of 2+√(5) using a linear programming duality framework, and provided several alternative formulations of the sufficient condition outlined in <cit.>.
<cit.> then introduced the Plurality Matching rule that achieves the optimal distortion of 3.
Even after the tight distortion bound of 3 was achieved, subsequent work provided a better understanding of the problem.
<cit.> achieved the optimal distortion with the Plurality Veto algorithm which is much simpler relative to Plurality Matching,
and very recently <cit.> proposed a refinement of Plurality Veto called Simultaneous Veto that resolves the arbitrary tie-breaking issues in <cit.> and <cit.> while maintaining optimal distortion.
<cit.> studied the extent to which metric distortion bounds can be combined with distortion bounds for the utilitarian setting — where the voters' preferences are arbitrary normalized valuations.
<cit.> studied a variant where more information besides the usual preference rankings is available,
and <cit.> studied the tradeoffs between the achievable distortion and the communication complexity of deterministic social choice algorithms.
Metric distortion with randomized algorithms. The tight bound of 3 does not hold when algorithms are allowed to be randomized.
<cit.> proved that the Random Dictatorship algorithm
achieves distortion better than 3, but that tends to 3 as the number of voters grows.
Subsequent work by <cit.> and <cit.> identifies other algorithms that achieve distortion that tends to 3 (but this time as the number of candidates grows).
A lower bound of 2 is also established by <cit.>, which was improved to 2.0261 by <cit.>.
Recently, <cit.> achieved an upper-bound of 2.753, thereby providing the first randomized algorithm with distortion bounded away from 3.
Unlike the deterministic metric distortion problem, the randomized variant remains unresolved even from the traditional worst-case approach. Closing the remaining gap is a major open question in this field and as more advances are achieved in this direction, this provides additional motivation for a refined analysis of that variant through the learning-augmented framework as well.
Learning-augmented framework.
The learning-augmented framework has been widely accepted in recent years as a valuable paradigm for the design and analysis of algorithms,
aiming to circumvent overly-pessimistic worst-case bounds.
In the last five years alone,
more than 140 papers have revisited
classic algorithmic problems using this framework, with principal examples being online paging <cit.>, scheduling <cit.>, secretary problems <cit.>, optimization problems with covering <cit.> and knapsack constraints <cit.>, as well as graph problems <cit.>. We refer the reader to <cit.> for a survey of some early work and <cit.> for a more up-to-date list of papers in this area. More closely related to this paper, some recent work has used the learning-augmented framework on social choice problems, where the predictions are regarding a group of agents and their preferences, and the goal is to optimize some social cost or social welfare function. This includes the study of resource allocation problems where agents' preferences are revealed in an online fashion <cit.> and settings where the agents are strategic and their preferences may be private <cit.>. Our work adds to this literature by focusing on settings where access to the agents' preferences is limited to ordinal information
Metric spaces with α-decisive voters.
The α-decisiveness framework was introduced by <cit.> to provide a more refined analysis of distortion bounds, as a function of how strong the agents' preference is for their top choice, and it has been used in several distortion problems
<cit.>.
The special case of α = 0
corresponds to
the peer selection setting, where the set of voters and candidates coincide.
This setting has received attention in settings beyond distortion as well
<cit.>.
§ MODEL AND PRELIMINARIES
Let (,)̣ be a metric space where :̣×→_≥ 0 is the distance function that satisfies the following conditions: 1) positive definiteness: ∀ a,b ∈, d(a,b) ≥ 0 and d(a,b) = 0 if and only if a = b; 2) symmetry: ∀ a,b ∈, d(a,b) = d(b,a); and 3) triangle inequality: ∀ a,b,c ∈, d(a,b) + d(b,c) ≥ d(a,c).
Let
,
be finite sets of voters and candidates that are located on the metric space, respectively. We denote by n the number of voters (n = ||), and by m the number of candidates (m = ||).
For simplicity of notation, we extend d to also operate on the voters and the candidates.
That is, we use d(a,b), for a,b ∈∪, to denote the the distance between the points in where a and b are located.
Given a metric d,
the social cost of a candidate c∈ is defined as (c,d) = ∑_v ∈ d(v,c); we abuse notation and write (c) when the metric d is clear from the context.
Preference profile.
We refer to triplets (,,)̣ as instances and the distance between a voter and a candidate
quantifies how much the voter prefers that candidate — the closer the better.
We say that a voter v ∈ prefers c∈ over c' ∈, and we write c _v c' if and only if (̣v,c) ≤(̣v,c'). Note that this is a weak preference relation.
A given instance (,,)̣ induces preference rankings over candidates for each voter. That is, for each v ∈ we have a preference ranking (bijection) σ_v:→{1,…,||} such that σ_v(c_1) < σ_v(c_2) implies c_1 _v c_2,
and we
write
in this case that
d is aligned
with σ_v, denoted d ▹σ_v.
Note that these rankings are not determined uniquely when there are at least two candidates whose distance fromvis the same.
A preference profileσ:= (σ_v)_v ∈is a tuple of preference rankings for each voter and we say thatdis aligned
with the preference profileσ, denoted asd ▹ σ, ifd ▹ σ_vfor eachv ∈.
Metric distortion.
In the metric distortion problem, an algorithmreceives as input a preference profileσwhich is induced by some(,,)̣.
Importantly,does not have access to the underlying distance function.
The goal is to output a candidate that minimizes the social cost.
We denote byø()̣ = min_c ∈ (c,)̣the candidate that minimizes the social cost, i.e., the optimal candidate,
breaking ties arbitrarily.
The distortion ofis the worst-case multiplicative approximation it achieves to the optimum, i.e.,
() = sup_σ sup_d : d ▹ σ ((σ),d)/(ø(d),d).
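When the underlying metric is known (e.g., in simulations), the cardinal quantities in these definitions are straightforward to compute; the helpers below are an illustrative sketch with our own names, where d is any distance function.

```python
def social_cost(chosen, voters, d):
    """Total distance of the chosen candidate from all voters."""
    return sum(d(v, chosen) for v in voters)

def distortion(chosen, voters, candidates, d):
    """Ratio between the chosen candidate's social cost and the optimal social cost."""
    best = min(social_cost(c, voters, d) for c in candidates)
    return social_cost(chosen, voters, d) / best

# Example on the line metric: voters at 0, 0.4, 1.0 and candidates at 0.1, 0.9.
voters = [0.0, 0.4, 1.0]
candidates = [0.1, 0.9]
d = lambda a, b: abs(a - b)
print(distortion(0.9, voters, candidates, d))   # 1.5 / 1.3 ~ 1.15
```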
Metric distortion with predictions.
In this paper we leverage the learning-augmented framework and study algorithms that receive as input a pair(σ, ). That is, apart from the preference profileσ, they also receive as input a prediction∈regarding who is the optimal candidateø(d). We evaluate the performance of an algorithm using its consistency and robustness: consistency refers to the worst-case distortion achieved by the algorithm assuming an accurate prediction, i.e.,p = ø(d), and robustness refers to the worst-case distortion with an arbitrary prediction.
Notably, the accuracy of the prediction cannot be guaranteed and can be arbitrarily bad.
Formally,
the consistency ofis the distortion that it guarantees when the provided prediction is accurate (=ø(d)), i.e.,
𝖼𝗈𝗇𝗌𝗂𝗌𝗍𝖾𝗇𝖼𝗒() = sup_σ sup_d : d ▹ σ ((σ,ø(d)),d)/(ø(d),d).
The robustness ofis the distortion that it guarantees irrespective of how inaccurate the prediction may be, i.e.,
𝗋𝗈𝖻𝗎𝗌𝗍𝗇𝖾𝗌𝗌() =sup_σ, ∈ sup_d : d ▹ σ ((σ,),d)/(ø(d),d).
Useful notation.
Given a preference profileσ, a voterv, a candidatecand a subset of candidatesS ⊂, we shall make use of the following quantities:(v)is the candidate ranked highest byv,(c)is the number of voters who rankcas their top choice,(v)is the candidate ranked lowest byv, and_S(v)is the candidate ranked lowest byv, out of all candidates inS(note that(v)and_(v)are equivalent).
§ ROBUSTNESS-CONSISTENCY TRADEOFF
Our first result asserts that even for instances with just two candidates, no deterministic algorithm can simultaneously achieve (3-δ)/(1+δ)-consistency and a robustness factor strictly better than (3+δ)/(1-δ), for any parameter δ∈[0,1).
For any δ∈[0,1), let be a deterministic algorithm that is (3-δ)/(1+δ)-consistent. Then, for any β < (3+δ)/(1-δ), is not β-robust.
Let δ, , β be as in the theorem statement.
We let
r(n,ϵ) = (3+δ - 4/n - 2ϵ)/(1-δ + 4/n),
and note that for every n,ϵ > 0 we have r(n,ϵ) < (3+δ)/(1-δ).
Since lim_n →∞, ϵ→ 0^+ r(n,ϵ) = (3+δ)/(1-δ), there exist n,ϵ for which β < r(n,ϵ) < (3+δ)/(1-δ). Fix any n,ϵ that satisfy this, and also satisfy 1/n - ϵ > 0 (since 1/n>0, we can always choose a small enough value for ϵ that satisfies both inequalities).
Consider the instance (,,)̣ consisting of two candidates a,b whose distance from one another is 2-ϵ. The voters are located in the metric space such that
⌈(1-δ)n/2 + 1⌉
of them are placed on b, and the rest (of which there are
⌊(1+δ)n/2 - 1⌋)
are placed almost at the halfway point with a slight inclination towards a, such that their distance from b is 1 and their distance from a is 1-ϵ.
Note that b is the optimal candidate, and a achieves distortion equal to:
(a)/(b) = [⌊(1+δ)n/2 - 1⌋(1-ϵ) + ⌈(1-δ)n/2 + 1⌉(2-ϵ)] / [⌊(1+δ)n/2 - 1⌋· 1]
= [(⌊(1+δ)n/2 - 1⌋ + ⌈(1-δ)n/2 + 1⌉)(1-ϵ) + ⌈(1-δ)n/2 + 1⌉· 1] / ⌊(1+δ)n/2 - 1⌋
= [n(1-ϵ) + ⌈(1-δ)n/2 + 1⌉] / ⌊(1+δ)n/2 - 1⌋ ≥ [n(1-ϵ) + ((1-δ)n/2 + 1)] / [(1+δ)n/2]
= [(3-δ)n/2 + 1 - ϵ n] / [(1+δ)n/2] = (3-δ + 2/n - 2ϵ)/(1+δ) > (3-δ)/(1+δ)
Therefore, if σ is the preference profile induced by (,,)̣ and p=b is the prediction, then outputs b when executed on the input (σ,p)— if it instead outputs a then we get a contradiction to our assumption that is (3-δ)/(1+δ)-consistent.
Now, consider the instance (',,)̣
which is identical to the previous instance with the following difference: the
⌈(1-δ)n/2 + 1⌉
voters who were located on b before are now located almost at the halfway point between a and b with a slight inclination towards b such that their distance from b is 1-ϵ and their distance from a is 1.
The rest of the voters (of which there are ⌊(1+δ)n/2 - 1⌋) are located on a.
Note that this instance induces the same preference profile σ as above, and recall that outputs b when executed on the input (σ, b).
Therefore, is not β-robust, since here a is the optimal candidate, and the output b achieves distortion equal to:
(b)/(a) = [⌈(1-δ)n/2 + 1⌉(1-ϵ) + ⌊(1+δ)n/2 - 1⌋(2-ϵ)] / [⌈(1-δ)n/2 + 1⌉· 1]
= [n(1-ϵ) + ⌊(1+δ)n/2 - 1⌋] / ⌈(1-δ)n/2 + 1⌉ ≥ [n(1-ϵ) + ((1+δ)n/2 - 2)] / [(1-δ)n/2 + 2]
= [(3+δ)n/2 - 2 - ϵ n] / [(1-δ)n/2 + 2] = (3+δ - 4/n - 2ϵ)/(1-δ + 4/n) = r(n,ϵ)
> β
This concludes the proof.
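As a numerical sanity check of the construction (not part of the proof), one can verify that r(n,ϵ) approaches (3+δ)/(1-δ) as n grows and ϵ shrinks; the variable names below are ours.

```python
def r(n, eps, delta):
    # Distortion forced on a (3-delta)/(1+delta)-consistent algorithm in the
    # second instance of the proof; tends to (3+delta)/(1-delta) as n -> inf, eps -> 0.
    return (3 + delta - 4 / n - 2 * eps) / (1 - delta + 4 / n)

delta = 0.5
for n in (10, 100, 10_000):
    print(n, round(r(n, 1e-6, delta), 4), "target:", round((3 + delta) / (1 - delta), 4))
```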
§ THE BOOSTED SIMULTANEOUS VETO () ALGORITHMS
In this section we present a family of algorithms termed, parametrized byδ∈[0,1).
Our main result shows that for each choice ofδ∈[0,1),achieves an optimal trade-off point between robustness and consistency:
For any δ∈[0,1), achieves (3-δ)/(1+δ)-consistency and (3+δ)/(1-δ)-robustness.
In what follows, we provide a description of the algorithm, followed by several observations. A pseudo code for the algorithm appears as Algorithm <ref> and an execution example is described in Figure <ref>.
algorithm. The algorithm initializes scores for the candidates — each candidate other than the prediction starts with score equal to its plurality score. The predicted candidate's initial score is boosted and equals its plurality score plus some boost = 2δn/(1-δ).
We say that a votervup-votes a candidatecif(v) = c.
Note that the sum of initial scores equals n + boost = (1+δ)n/(1-δ).
After the initialization stage, from time t = 0 until t = 1, the voters continuously and simultaneously “eat” (decrement) the score of their least-favorite candidate that still has positive score, at a rate of (n + boost)/n = (1+δ)/(1-δ).
We say that a votervdown-votes some candidatecwhenevervdecrementsc's score.
We say that a candidate is active at some time point if her score at that time point is positive.
Each time that the score of some candidate reaches 0, all voters that down-voted it up to that point immediately switch to down-voting their next least-favorite active candidate.
Observe that the first (and only) time point at which all candidates reach score 0 is the end of the algorithm (t = 1).
If the predicted candidate is one of those that reached score 0 only at the end, then it is output. Otherwise, the algorithm chooses some arbitrary candidate whose score reached 0 only at the end. In particular we have
For any δ∈ [0,1),
the candidate output by a given execution of is active
at any time point t < 1.
A discrete implementation of .
We implement the continuous process described above
in a sequence of up tomrounds
which are divided by time points at which the score of some candidate reaches 0 —
at least one candidate is eliminated at the end of each round.
Roundistarts at timet_i-1and ends at timet_i, wheret_0 = 0, andt_k = 1wherekis the total number of rounds.
Givent_i-1, we determinet_ias follows:
Let(c)be the current score of candidatecat timet_i-1, and letn_cbe the number of voters whose least favorite active candidate at timet_i-1isc— these are the voters who are currently
down-votingc.
Letcbe a candidate minimizing the ratioΔ= (c)/(n_c ·). This is the time interval
until the next time point at which
some candidate's score reaches 0.t_i = t_i-1 + Δis the new time point, and a new round begins after appropriately updating the scores of all candidates.
All voters who in the previous round were down-voting a candidate whose score reached zero at exactly this point switch to down-voting their next least favorite active candidate in the new round. We emphasize that more than one candidate may become inactive at the end of a round, in which case the number of roundskis strictly less thanm.
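The discrete implementation can be translated into code fairly directly. The sketch below is our own reading of the description above (data structures and tie-handling details are illustrative assumptions, not the authors' implementation); it takes each voter's full ranking, boosts the prediction, and returns a candidate whose score survives until t = 1, preferring the prediction when possible.

```python
def boosted_simultaneous_veto(rankings, prediction, delta):
    """Sketch of the discrete implementation described above (not the authors' code).

    rankings   : dict mapping each voter to a list of candidates, most to least preferred.
    prediction : the predicted optimal candidate p.
    delta      : confidence parameter in [0, 1).
    """
    voters = list(rankings)
    candidates = {c for ranking in rankings.values() for c in ranking}
    n = len(voters)
    boost = 2 * delta * n / (1 - delta)
    rate = (n + boost) / n                      # equals (1 + delta) / (1 - delta)

    # Initial scores: plurality for everyone, plus the boost for the prediction.
    score = {c: 0.0 for c in candidates}
    for v in voters:
        score[rankings[v][0]] += 1.0
    score[prediction] += boost

    active = {c for c in candidates if score[c] > 0}
    t, last_batch = 0.0, set()
    while active and t < 1.0 - 1e-9:
        # Each voter down-votes her least-preferred candidate that is still active.
        target = {v: next(c for c in reversed(rankings[v]) if c in active) for v in voters}
        downvoters = {c: 0 for c in active}
        for v in voters:
            downvoters[target[v]] += 1
        # Length of the current round: time until the next score reaches zero (capped at t = 1).
        step = min(score[c] / (downvoters[c] * rate) for c in active if downvoters[c] > 0)
        step = min(step, 1.0 - t)
        for c in active:
            score[c] -= downvoters[c] * rate * step
        t += step
        eliminated = {c for c in active if score[c] <= 1e-9}
        if eliminated:
            last_batch = eliminated
            active -= eliminated
    # The winners are the candidates whose score reached 0 only at t = 1.
    winners = active | last_batch
    return prediction if prediction in winners else next(iter(winners))
```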
Given an execution of ,
we use f(v,c)∈ [0,1] to denote the fraction of time that each voter v∈ spends down-voting each candidate c∈ throughout the execution. Since every voter spends all the [0,1] time interval down-voting at a rate of (1+δ)/(1-δ)
and since every candidate's score eventually reaches 0, we have
∑_c∈f(v,c) =1 for every voter v,
∑_v∈ f(v,c) = (1-δ)/(1+δ)·(c) for every candidate c≠, and
∑_v∈ f(v,) = (1-δ)/(1+δ)·(() + boost) for the predicted candidate c=.
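If the fractions f(v,c) are recorded during a run, the three identities above can be checked directly; the sketch below is ours (dictionary-based bookkeeping, names illustrative) and is not part of the algorithm.

```python
def check_accounting(f, voters, candidates, plurality, prediction, delta, tol=1e-9):
    """Sanity-check the identities (1)-(3) above for recorded fractions f[(v, c)]."""
    n = len(voters)
    boost = 2 * delta * n / (1 - delta)
    factor = (1 - delta) / (1 + delta)
    rows_ok = all(abs(sum(f.get((v, c), 0.0) for c in candidates) - 1.0) <= tol
                  for v in voters)                                   # identity (1)
    cols_ok = all(abs(sum(f.get((v, c), 0.0) for v in voters)
                      - factor * (plurality.get(c, 0) + (boost if c == prediction else 0.0))) <= tol
                  for c in candidates)                               # identities (2) and (3)
    return rows_ok and cols_ok
```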
For any δ∈ [0,1),
let a be the candidate output by a given execution of . Then, for any voter v and any candidate c satisfying f(v,c) >0, we have d(v,c) ≥ d(v,a).
First by Observation <ref> we know that
a is active throughout the algorithm, i.e.a ∈ A
in all rounds of the algorithm.
Since f(v,c) >0, we have c
= _A(v)
in at least one of the rounds.
By definition of _A, we have d(v, c) ≥ d(v,a).
The remainder of this section is organized as follows.
In Section <ref> we establish our consistency bound.
The more challenging part is the robustness bound, which is established in the following subsections.
In Section <ref> we establish our robustness bound for instances where the predicted candidate is returned by the algorithm.
This bound is then extended in Section <ref>
to all instances, by reducing any instance to one where the predicted candidate is returned.
§.§ Consistency Bound
For any δ∈ [0,1),
let a be the candidate output by a given execution of that is provided with the correct prediction = ø.
Then we have
(a) ≤ (3-δ)/(1+δ)·(ø).
The social cost of the returned candidate a can be upper bounded as follows:
∑_v d(v,a) ≤∑_v∑_c f(v,c) · d(v,a) using (<ref>)
≤∑_v ∑_c f(v,c) · d(v,c) by Lemma <ref>
≤∑_v∑_c f(v,c) · (d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_c≠ø∑_v f(v,c) · d(ø,c) since d(ø,ø)=0
= (ø) + ∑_c≠ø (1-δ)/(1+δ)·(c)· d(ø,c) using (<ref>) and the fact that = ø
=(ø) + (1-δ)/(1+δ)∑_c∑_v: (v) = c d(ø,c)
≤(ø) + (1-δ)/(1+δ)∑_c∑_v: (v) = c (d(ø,v) + d(v,c)) triangle inequality
≤(ø) + (1-δ)/(1+δ)∑_c∑_v: (v) = c 2d(ø,v) since c= (v) _v ø
≤(ø) + 2(1-δ)/(1+δ)·(ø)
≤(3-δ)/(1+δ)·(ø).
§.§ Robustness Bound when the Returned Candidate is the Prediction
For any δ∈ [0,1),
assume that is the candidate output by a given execution of ,
i.e., the algorithm returns the prediction.
Then we have
() ≤ (3+δ)/(1-δ)·(ø).
To prove Proposition <ref>, i.e., the upper bound on()/(ø)stated above, the key technical contribution of this section is
establishing an upper bound on the distance between the optimal candidate,ø, and the prediction,, as a function of the optimal social cost (see Lemma <ref>). To prove this upper bound, we define a graph (the veto map) that captures the way in which the down-voting of each voter is distributed across the up-votes of other voters and then use the structure of this graph to appropriately lower bound the distance of (pairs of voters) from the optimal candidate,ø(see Lemma <ref>).
Let δ∈ [0,1).
Given an execution of , the veto map associated with this execution is an edge-weighted directed graph G = (, E), where each vertex corresponds to a voter in and an edge (v,v') between two voters v,v'∈ exists in E if and only if voter v down-voted the top candidate of v' at some point during the execution, i.e., if and only if f(v,(v'))>0. The weight of each edge (v,v')∈ E is
w(v,v') = [f(v,(v'))·(1+δ)/(1-δ)] / ((v')).
Note that for each edge(v,v')∈E, the numerator of (<ref>) is equal to the amount by which votervdecreased the score of candidate(v')throughout the execution of( i.e., the amount of time,f(v,(v')), thatvspent down-voting that candidate, multiplied by the rate,(1+δ)/(1-δ)). The denominator of (<ref>) is the plurality of this candidate, i.e., the number of voters who up-voted her. Therefore, for each votervand each candidatec, the veto map essentially “distributes” the amount by whichvreduced the score ofcequally among the weights of the edges fromvto each voterv'such thatc=(v'), i.e., each voter that up-votedc. Figure <ref> provides the veto map for the example introduced in Figure <ref>.
We now make the crucial observation that for every edge(v,v')∈E( i.e., for every edge of the veto map), we can derive a lower bound on the combined contribution of votersvandv'to the social cost of the optimal candidate,ø. Intuitively, ifvdown-voted the candidate closest tov'(this is the meaning of the assumption that(v,v') ∈E),
thenvmust be far enough from that candidate and hence fromv'as well. In turn, if one ofvorv'is close toøthen the other voter must be far enough fromø.
For any δ∈ [0,1),
let
G=(,E)
be the veto map
associated with a given execution of . If (v,v') ∈ E, then we have d(v,ø) + d(v',ø) ≥d(ø, p)/2.
Consider any v, by the triangle inequality we have
d(ø,p) ≤ d(p,v) + d(v,ø) ⇒ d(p,v) ≥ d(ø,p) - d(v,ø).
By definition of the veto map, the fact that (v,v') ∈ E implies that voter v down-voted the top candidate of v'. Let c = (v'). We have
d(ø, c) ≥ d(c,v) - d(v,ø) ≥ d(,v) -d(v,ø) ≥ d(ø,) -2d(v,ø),
where the first inequality holds by the triangle inequality, the second holds since d(c,v) ≥ d(,v) by Lemma <ref>, and the last inequality is using (<ref>). Now consider voter v', we have
d(ø,v') + d(v',c) ≥ d(ø,c) (by triangle inequality) ⇒
2d(ø,v') ≥ d(ø,c) (since c = (v') _v'ø) ⇒
2d(ø,v') ≥ d(ø,) - 2d(v,ø) (using (<ref>)) ⇒
d(ø,v)+d(ø,v') ≥d(ø,)/2.
Using the veto map and Lemma <ref>, we can now prove the key lemma of this section (Lemma <ref>), which provides a lower bound of ((1-δ)n/2)· d(ø,p)/2 on the optimal social cost, (ø). Note that, given the lower bound of Lemma <ref>, we would be able to directly derive the lower bound of Lemma <ref> if the set E of veto map edges contained a matching of size at least (1-δ)n/2. Instead of finding such a matching, we
partition the set of voters into the set, which contains voters whose distance from ø is strictly less than d(ø,)/2, and the set, whose distance from ø is at least d(ø,)/2. We then prove that the sub-graph of the veto map induced by vertices in contains in its edges a fractional matching of total size (1-δ)n/2 - ||. Since each voter from contributes at least d(ø,)/2 to the optimal social cost and each edge of the fractional matching (whose vertices are both in) also contributes (by Lemma <ref>) at least d(ø,)/2 to the optimal social cost, the combined contribution from and is at least ((1-δ)n/2 - || + ||)· d(ø,)/2 = ((1-δ)n/2)· d(ø,)/2, which implies the desired lower bound on (ø).
For any δ∈ [0,1),
assume that is the candidate output by a given execution of ,
i.e., the algorithm returns the prediction.
Then we have
(ø) ≥ ((1-δ)n/2)· d(ø,p)/2.
Let be the set of voters v such that d(v,ø) < d(ø,p)/2 and let be the set of remaining voters, i.e., those with d(v,ø) ≥ d(ø,p)/2. First, note that if || ≥ (1-δ)n/2 then the lemma is trivially true since (ø) ≥∑_v ∈ d(v,ø) ≥ ((1-δ)n/2)· d(ø,p)/2. Therefore, we henceforth focus on the case where || < (1-δ)n/2 and let x>0 be such that || = (1-δ)n/2 - xn. Then, the contribution of voters in is
∑_v∈ d(v,ø) ≥ ((1-δ)n/2 - xn)· d(ø,)/2,
and || = n - || = (1+δ)n/2 + xn. Our goal now is to show that the contribution of to (ø) compensates for the lack of enough voters in . In particular, we will show that
∑_v ∈(̣v,ø) ≥ xn d(ø,p)/2,
and note that this would immediately imply the lemma.
Let G = (V, E) be the veto map associated with the given execution of .
We now prove some useful inequalities regarding the veto map edge weights:
∑_v∈w(v,v') ≤ 1 for all v' ∈.
Claim <ref> holds since every voter has 1 up-vote. Formally,
for every voter v'∈ we have
∑_v∈ w(v,v') = ∑_v∈ f(v,(v'))·(1+δ)/(1-δ)/((v')) ≤ ∑_v∈ f(v,(v'))/((v'))·(1+δ)/(1-δ) = 1,
where the last equation uses (<ref>). Note that the reason (<ref>) applies and not (<ref>) is because v' is in and hence (v') cannot be
(v' strictly prefers ø over since (̣ø,v') < (̣ø,)/2 ≤ ((̣ø,v')+(̣v',))/2 using the definition of and the triangle inequality).
∑_v' ∈ w(v,v') ≤ (1+δ)/(1-δ) for all v ∈.
Claim <ref> holds since every voter down-votes a total score of throughout the execution of the algorithm. Formally,
∑_v'∈w(v,v') =∑_v'∈f(v,(v'))·1+δ/1-δ/((v')) = ·∑_c ∈∑_v' ∈: (v')=cf(v,c)/(c)
≤·∑_c ∈∑_v' ∈: (v')=cf(v,c)/(c) = ·∑_c ∈(c) ·f(v,c)/(c)
= ·∑_c ∈ f(v,c)
= ,
where the last equality is by (<ref>).
Our next claim, (Claim <ref>) provides a lower bound on the “amount” of up-votes from voters v'∈ that are down-voted by some other voters v∈. The way we obtain this bound is by showing that even if all the voters in spend all the time down-voting the up-votes of voters in , there are still some up-votes left, which must have been down-voted by voters in .
Let
={(v,v')∈ E: v,v'∈}
be
the subset of edges in the veto map for which both vertices are in .
Then
we have
∑_(v,v') ∈ w(v,v') ≥2xn/1-δ.
We first note that for every candidate v'∈ we have ∑_v∈ f(v,(v')) = 1-δ/1+δ·((v')). This is implied by the fact that ∑_v∈ f(v,c) = 1-δ/1+δ·(c) holds for every candidate c≠ (by Equation (<ref>)) and the fact that (v') cannot be for any v'∈ (because d(v',)> d(ø,)/2 > d(v',ø) for all v'∈, so p cannot be the top candidate of v'). Therefore, we get
∑_v' ∈∑_v ∈ w(v,v') = ∑_v' ∈∑_v ∈f(v,(v'))·/((v'))=∑_v' ∈((v'))/((v')) = ||.
Also, by Claim <ref> we get
∑_v' ∈∑_v ∈ w(v,v') = ∑_v ∈∑_v' ∈w(v,v') ≤∑_v ∈ = · ||.
Therefore, we have
∑_(v,v') ∈ w(v,v') = ∑_v' ∈∑_v ∈ w(v,v')
= ∑_v' ∈∑_v ∈ w(v,v') - ∑_v' ∈∑_v ∈ w(v,v') since = \
≥ ||- · || using (<ref>) and (<ref>)
= (1+δ/2n +xn)-·(1-δ/2n - xn)
= 2xn/1-δ.
We now use Lemma <ref> and Claims <ref>, <ref> and <ref> to lower bound the contribution of voters in toward the optimal social cost, i.e., ∑_v ∈d(v, ø).
Note that if we could find a matching M ⊆ of size |M| ≥ xn, then using Lemma <ref> we would be done, i.e., (<ref>) would be satisfied. We identify a fractional matching in whose size is at least xn. Specifically, for each edge (v,v')∈ our fractional matching includes a fraction of this edge equal to
m(v,v') = w(v,v') ·1-δ/2.
To verify that this is a valid fractional matching, we first show that for any vertex v∈ the total fraction of the edges adjacent to it (both incoming and outgoing) that are included in the matching is at most 1.
Using Claim <ref> and <ref> we have:
∑_v' ∈m(v',v) + ∑_v' ∈m(v,v') = 1-δ/2·(∑_v' ∈w(v',v) + ∑_v' ∈w(v,v')) ≤1-δ/2·(1 + 1+δ/1-δ) = 1.
Regarding the size of this matching, we get
∑_(v,v')∈ m(v,v') = ∑_(v,v')∈ w(v,v') ·1-δ/2≥1-δ/2·2xn/1-δ= xn,
where the inequality is implied by Claim <ref>.
We now lower bound the total contribution of voters in to the optimal social cost as follows:
∑_v ∈ d(v,ø) ≥∑_v ∈(∑_v' ∈m(v',v) + ∑_v' ∈m(v,v') )· d(v,ø)using (<ref>)
≥∑_(v,v') ∈ E m(v,v')· (d(v,ø)+d(v',ø)) rearranging the sum
≥∑_(v,v') ∈ E m(v,v')·d(ø,p)/2by Lemma <ref>
≥ xn ·d(ø,p)/2.using (<ref>)
For completeness,
we now show how the above,
combined with the contribution of voters from to the optimal social cost (which in turn is lower bounded by (<ref>)), allows us to prove the desired lower bound on the optimal social cost:
(ø)
= ∑_v ∈d(v,ø) + ∑_v ∈d(v,ø)
≥(1-δ/2n - xn)d(ø,p)/2 + xn d(ø,p)/2
≥(1-δ/2n) d(ø,p)/2.
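The scaling argument behind this fractional matching can be sanity-checked numerically. The sketch below is ours and is not part of the proof: it generates random weight matrices obeying the constraints of the two claims above (incoming weight at most 1 per vertex, outgoing weight at most (1+δ)/(1-δ) per vertex) and verifies that scaling by (1-δ)/2 always yields vertex loads of at most 1.

```python
import numpy as np

def check_fractional_matching(delta=0.3, k=20, trials=100, seed=0):
    """Check that m = w*(1-delta)/2 is a valid fractional matching whenever
    w has column sums <= 1 and row sums <= (1+delta)/(1-delta)."""
    rng = np.random.default_rng(seed)
    rate = (1 + delta) / (1 - delta)  # per-voter down-voting budget
    for _ in range(trials):
        w = rng.random((k, k))
        # enforce incoming weight <= 1 for every vertex (up-vote claim)
        w /= np.maximum(w.sum(axis=0, keepdims=True), 1.0)
        # enforce outgoing weight <= rate for every vertex (down-vote claim)
        w *= np.minimum(1.0, rate / np.maximum(w.sum(axis=1, keepdims=True), 1e-12))
        m = w * (1 - delta) / 2                   # fractional matching of the proof
        load = m.sum(axis=0) + m.sum(axis=1)      # matched fraction per vertex
        assert np.all(load <= 1 + 1e-9)
    print("vertex-capacity constraint held in all trials")

check_fractional_matching()
```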
We are now ready to prove the main proposition of this section.
The social cost of the returned candidate p can be upper bounded as follows:
∑_v ∈ d(v,p) ≤∑_v∑_c f(v,c) · d(v,p) using (<ref>)
≤∑_v∑_c f(v,c) · d(v,c) by Corollary <ref>
≤∑_v ∑_c f(v,c) ·(d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_c ≠ p∑_v f(v,c) · d(ø,c) + ∑_v f(v,p) · d(ø,p)
=(ø) + 1-δ/1+δ∑_c(c) d(ø,c) + 2δ n/1-δd(ø,p)using (<ref>) and (<ref>)
=(ø) + 1-δ/1+δ∑_c∑_v: (v) = c d(ø,c) + 2δ n/1-δd(ø,p)
≤(ø) + 1-δ/1+δ∑_c∑_v: (v) = c (d(ø,v)+d(v,c))+ 2δ n/1-δd(ø,p) triangle inequality
≤(ø) + 1-δ/1+δ∑_c∑_v: (v) = c 2d(ø,v)+ 2δ n/1-δd(ø,p) since c =(v) _v ø
≤(ø) + 2(1-δ)/1+δ(ø) + 2δ n/1+δd(ø,p)
≤3-δ/1+δ(ø) + 2δ n/1+δd(ø,p)
≤3-δ/1+δ(ø) + 2δ n/1+δ·2(ø)/1-δ/2nby Lemma <ref>
≤(3-δ/1+δ + 8δ/(1+δ)(1-δ))(ø)
≤3+4δ+δ^2/(1+δ)(1-δ)(ø)
≤ 3+δ/1-δ(ø).
§.§ Robustness Bound when the Returned Candidate is not the Prediction
The following proposition shows
that the robustness bound is maintained even when the algorithm does not output the prediction.
For any δ∈ [0,1),
assume that a ∈∖{} is the candidate output by a given execution of ,
i.e., the algorithm does not return the prediction.
Then we have
(a) ≤3+δ/1-δ·(ø).
First, we note that the approach taken in Section <ref> will not work here. In particular, the key Lemma <ref> does not hold in general as the prediction can be arbitrarily far away from the optimal candidate.[For example, consider an instance with one voter, a candidate that is in the same location as the voter, and a second candidate (who is also the prediction) at distance 1 from the other two. This instance violates Lemma <ref>.]
Instead, we show that if candidate a ≠ is returned, then she would still have been returned in an instance that is identical, except that a is the predicted candidate.
In other words,
we prove Proposition <ref> by reducing instances in which the prediction is not the output to instances where it is, while maintaining the distortion:
For any δ∈ [0,1),
assume that a ∈∖{} is the candidate output by a given execution of .
Then
also outputs a when executed on the same preference profile, with candidate a as the prediction.
Before presenting the proof of Proposition <ref>, we first show how it implies Proposition <ref>.
By Proposition <ref>,
a is returned by when executed on the same preference profile as in the given execution, with candidate a also being the prediction.
That is, the new execution returns the new prediction (candidate a).
The
claim follows by Proposition <ref>.
We now present the proof of Proposition <ref>.
We shall be comparing the two considered executions of , i.e., the one where was the prediction (and a was the output)
and the one where a was the prediction. To that end we shall refer to them as the “ execution” and the “a execution”, respectively.
We need to show that a is also the output in the a execution.
We first introduce some useful notation. For any candidate c ∈, any time point t ∈ [0,1] and any b∈{a,} we define
* _b(c,t)— the amount of score decremented from c up to time t, in the continuous interpretation of the b execution.
* _b(c,t)— the score of c at time t in the continuous interpretation of the b execution.
For both b ∈{a,} and for any fixed c ∈, the function _b(c,·) is piece-wise linear (in t) and can be computed using the round boundary time points t_i from the discrete implementation of (see Algorithm <ref>).
Specifically, we have _b(c,0) = 0, and if there were k iterations in the While loop ( i.e., k rounds), then for each i=1,…,k and for any t ∈[t_i-1,t_i]
(recall that t_0 = 0 and t_k = 1)
we have
_b(c,t) = _b(c,t_i-1) + n_c^i ·· (t - t_i-1),
where
n_c^i is the number of voters who down-voted c in round i (recall also that is the down-voting rate per voter).
The function _b(c,·) is defined as _b(c,t) = s^b_c - _b(c,t), where s^b_c is the initial score of c in the b execution. To summarize, we have
* The functions _b(c,·) and _b(c,·) are piece-wise linear and in particular continuous in t, for any fixed c ∈ and b∈{a,}.
* For any c∈, t ∈ [0,1], and b ∈{a,}, the initial score of c in the b execution, denoted s^b_c, satisfies s^b_c = _b(c,t) + _b(c,t).
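For intuition, the piecewise-linear decrement function above can be evaluated directly from the round data of the discrete implementation. The sketch below is ours, not the paper's implementation; `rate` stands for the per-voter down-voting rate (whose symbol is not rendered in this extract), and `downvoters[i]` maps each candidate to the number of voters down-voting her in round i+1.

```python
def dec(c, t, boundaries, downvoters, rate):
    """Evaluate Dec_b(c, t): total score decremented from candidate c up to time t.

    boundaries -- round boundary times [t_0 = 0, t_1, ..., t_k = 1]
    downvoters -- list of dicts; downvoters[i][c] = n_c^{i+1}
    rate       -- down-voting rate per voter
    """
    total = 0.0
    for i in range(1, len(boundaries)):
        lo, hi = boundaries[i - 1], boundaries[i]
        if t <= lo:
            break
        # within round i, candidate c loses score at slope n_c^i * rate
        total += downvoters[i - 1].get(c, 0) * rate * (min(t, hi) - lo)
    return total

# The score at time t is then score_b(c, t) = s_c^b - dec(c, t, ...).
```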
We further observe that both executions are equivalent up to the time point where the score of reaches 0 in the a execution. That is, we have:
For any c ∈ and for any t such that _a(,t) > 0 we have _(c,t) = _a(c,t).
To see this, consider the implementation of as given by Algorithm <ref>.
Note that the only difference in the initialization phase of the two executions is the initial scores of and a,
where in the execution gets the boost and in the a execution a gets the boost.
But a was output in the execution, i.e., a has positive score throughout the execution and as such always belongs to the set of active candidates A.
Therefore, before 's score reaches 0 in the a execution, every step of the While loop in Algorithm <ref> will be computed the same way in both executions.
The following lemma captures the intuition that when does not get the boost, the rest of the candidates can only be down-voted faster.
For any c ∈∖{}, and for any t ∈ [0,1], we have _(c,t) ≤_a(c,t).
Suppose towards contradiction that the lemma does not hold, i.e.,
there is some candidate c≠ and some time point t for which _a(c,t) < _(c,t).
Let T be the set of such time points, i.e.,
T := {t ∈ [0,1] |∃ c ∈∖{} such that _a(c,t) < _(c,t)}
We note that for each t ∈ T we have t > 0, because the amount of score decremented from any candidate at time t=0 equals 0 at both executions of the algorithm.
Let t_inf = inf(T). We note that t_inf∉ T. To see this, observe that for any t ∈ T and candidate c for which _a(c,t) < _(c,t), there is some small enough ϵ > 0 such that _a(c,t-ϵ) < _(c,t-ϵ), that is, t-ϵ∈ T. This is clearly implied by the continuity of the functions _a(c,·) and _(c,·).
Furthermore, note that t_inf < 1, as otherwise T is empty — contradiction.
Finally, the continuity of the functions _a(c,·) and _(c,·) also implies that at time t = t_inf there must exist a candidate c ≠ for which the following two assertions must hold:
* _a(c,t_inf) = _(c,t_inf).
* There exists a small enough ϵ_0 > 0 such that for any 0<ϵ< ϵ_0 we have _a(c,t_inf + ϵ) < _(c,t_inf + ϵ).
In other words, the amount of score decremented from c up until time t_inf was the same in both executions, but ever so slightly beyond time t_inf, (c) is decremented more in the execution.
In particular this implies that at time t_inf, the score of c in the execution was positive, i.e., _(c,t_inf) > 0. From here we also get:
_a(c,t_inf) ≥(c) - _a(c,t_inf)
= (c) - _(c,t_inf)
= _(c,t_inf)
> 0,
where the first inequality holds by Observation <ref> since the initial score of each candidate is lower bounded by her plurality score, the first equality holds by bullet 1, the second equality holds since c ≠ and since the initial score of each candidate other than equals her plurality score in the execution, and the final inequality was discussed right before the above chain.
To summarize, c's score at t_inf was positive in both executions, but the decrement rate was higher in the execution.
But for this to hold, it must be the case that at time t_inf the number of voters who are
down-voting c is larger in the execution relative to the a execution.
In particular, there exists a voter v who at time t_inf down-votes c in the execution, but in the a execution she down-votes some other candidate c' ≠ c, even though c was also active in the a execution at that time. This implies that v ranks c' lower than c and:
* _a(c',t_inf) > 0— since v down-votes c' in the a execution at time t_inf (only candidates with positive score are being down-voted at any given time).
* _(c',t_inf) = 0— since otherwise v would have down-voted c' in the execution at time t_inf (or some other active candidate that v ranks lower than c).
Note that c' ≠ a since a was output in the execution (and in particular _(a,t_inf) > 0).
We also claim that c' ≠, since for any t (and in particular for t = t_inf) we have
_(,t) ≥_a(,t).
To see this, note first that it is trivial for any t for which _a(,t) = 0. For times t in which _a(,t) > 0 we have
_(,t) = () + - _(,t)
= () + - _a(,t)
= _a(,t) + ≥_a(,t),
where the first equality holds by Observation <ref>, the second holds by Observation <ref> and the third holds again by Observation <ref>.
We have established that c' ∉{a,}, implying that the initial score of c' in both executions is (c'). By Observation <ref>, this in turn implies:
_a(c',t_inf) = (c') - _a(c',t_inf) < (c') - _(c',t_inf) = _(c',t_inf),
where the inequality holds by the two bullets above.
In particular, this implies by definition that t_inf∈ T, a contradiction.
Lemma <ref> shows that when does not get the boost, the rest of the candidates can only be down-voted faster. The following lemma upper bounds the extent to which that can happen.
For any c ∈∖{}, and for any t ∈ [0,1], we have _a(c,t) ≤_(c,t) +.
Let t ∈ [0,1].
We start with the following observation:
_(,t) ≤_a(,t) +.
This holds since
_(,t) = () + - _(,t) ≤() + - _a(,t) = _a(,t) + ,
where both equalities hold by Observation <ref> and the inequality holds by Inequality (<ref>).
From here we get:
∑_c ∈_a(c,t) = ∑_c ∈_(c,t)
= (∑_c ∈∖{}_(c,t)) + _(,t)
≤(∑_c ∈∖{}_(c,t)) + _a(,t) + ,
where the first equality holds since the total amount of score decremented from all candidates up to time t equals t· n· regardless of the prediction's identity, and the inequality holds by Observation <ref>. By cancelling out the term from both sides, we get
∑_c ∈∖{}_a(c,t) ≤(∑_c ∈∖{}_(c,t)) + ,
which in turn implies
∑_c ∈∖{}(_a(c,t)- _(c,t)) ≤.
By Lemma <ref>, all terms in this sum are non-negative, implying in particular that each of the terms is also upper-bounded by . The claim follows.
To conclude the proof of Proposition <ref>, we show that for any 0≤ t < 1 we have _a(a,t) > 0. This would imply that a belongs to the set A at the end of Algorithm <ref>, and therefore a would be the output in the a execution, as desired.
To that end, we have
_a(a,t) = (a) + - _a(a,t) ≥(a) - _(a,t) = _(a,t) > 0,
where the first equality holds by Observation <ref>,
the first inequality holds by Lemma <ref>,
the second equality holds again by Observation <ref>,
and the second inequality holds by our assumption that a was output in the execution.
§ MORE REFINED METRIC DISTORTION BOUNDS
§.§ Bounds Parameterized by Prediction Quality
Consistency and robustness bounds focus on two extreme cases: the case where the prediction is perfect (consistency) and the case where it can be arbitrarily bad (robustness). To develop a more refined understanding of the distortion that our family of algorithms achieves as a function of the prediction quality, we now seek to prove bounds parameterized by the error of the prediction.
We first define a natural measure of error, which is proportional to the distance between the prediction, , and the optimal candidate, ø, i.e., d(, ø). Note that using this distance alone as a measure of error would not accurately capture the quality of the prediction, since this does not measure how significant this distance is relative to the average cost that each voter would suffer in the optimal solution. For example, if d(, ø) = 100 and the average distance in the optimal solution is 10000, then the prediction was quite accurate, whereas the prediction would be inaccurate if we scaled the same instance down so that the average distance is 1. Therefore, to accurately capture the relative quality of the prediction, we quantify the error η of the prediction by normalizing its distance from ø by the average optimal distance, (ø)/n, i.e., η = n d(, ø)/(ø). Our main result in this section shows that for each choice of δ∈[0, 1), our algorithm guarantees distortion at most 3-δ+2δη/1+δ when the error of the prediction is η. This bound is equal to our consistency bound when the prediction is correct, i.e., η = 0, and then grows linearly as a function of η, with a slope that is increasing in δ. This makes sense, as it captures the intuition that trusting the prediction more (i.e., higher values of δ) translates into a worse dependence on the error. Note, however, that the distortion never exceeds our robustness bound, so it gracefully transitions from the consistency bound to the robustness bound as a function of the error.
For any δ∈[0,1), the distortion achieved by on instances where the prediction has error η = n d(,ø)/(ø) is at most
min{3-δ+2δη/1+δ, 3+δ/1-δ}.
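The bound of the theorem is easy to tabulate. The following sketch is ours and only evaluates the stated formula, where η denotes the normalized prediction error: at η = 0 it reproduces the consistency bound (3-δ)/(1+δ), and for large η it is capped by the robustness bound (3+δ)/(1-δ).

```python
def distortion_bound(delta, eta):
    """min{(3 - delta + 2*delta*eta)/(1 + delta), (3 + delta)/(1 - delta)}
    for 0 <= delta < 1 and normalized prediction error eta >= 0."""
    return min((3 - delta + 2 * delta * eta) / (1 + delta),
               (3 + delta) / (1 - delta))

for delta in (0.0, 0.25, 0.5, 0.9):
    print(delta,
          round(distortion_bound(delta, 0.0), 3),    # consistency (eta = 0)
          round(distortion_bound(delta, 100.0), 3))  # robustness cap for large eta
```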
For any δ∈ [0,1),
assume that a is the candidate output by a given execution of ,
when provided with a prediction with error η = n d(, ø)/(ø).
Theorem <ref> already shows that the worst-case distortion of is at most 3+δ/1-δ for any δ, so we just need to prove that it is also at most 3-δ+2δη/1+δ.
The social cost of a can be upper bounded as follows:
∑_v d(v,a) ≤∑_v∑_c f(v,c) · d(v,a) using (<ref>)
≤∑_v ∑_c f(v,c) · d(v,c) by Lemma <ref>
≤∑_v∑_c f(v,c) · (d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_v f(v,) · d(ø, ) + ∑_c≠∑_v f(v,c) · d(ø,c)
isolating c =
= (ø) + 1-δ/1+δ·(()+2δ n/1-δ)d(ø,) +1-δ/1+δ∑_c≠(c)· d(ø,c) by (<ref>), (<ref>)
= (ø) + 1-δ/1+δ·2δ n/1-δ· d(ø,) +1-δ/1+δ∑_c(c)· d(ø,c) c=p back in the sum
= (ø) +2δη/1+δ(ø)+ 1-δ/1+δ∑_c(c)· d(ø,c)
since d(ø,)= η·(ø)/n
= 1+δ + 2δη/1+δ(ø)+ 1-δ/1+δ∑_c∑_v: (v) = cd(ø,c) rearranging the sum
≤1+δ + 2δη/1+δ(ø) + 1-δ/1+δ∑_c∑_v: (v) = cd(ø,v) + d(v,c) triangle inequality
≤1+δ + 2δη/1+δ(ø) + 1-δ/1+δ∑_c∑_v: (v) = c 2d(ø,v) since c=(v) _v ø
≤1+δ + 2δη/1+δ(ø) + 2(1-δ)/1+δ(ø)
≤(3-δ+2δη/1+δ)(ø)
This concludes the proof.
§.§ Bounds for Metric Spaces with Decisive Voters
We now refine the achieved distortion bounds even further, using the α-decisiveness framework of <cit.>. For α∈[0,1], a voter v is α-decisive if d(v, (v)) ≤ α·d(v,c) for all c ≠ (v). In other words, the distance of voter v from her favorite candidate is at most α times her distance from any other candidate.
We say
an instance (,,d) is α-decisive if every voter is α-decisive. Note that when α = 0, the distance between every voter and her favorite candidate is 0. This captures the peer selection setting, in which the set of voters is the set of candidates, and each voter ranks herself first. Therefore, the positive results presented in this section for α = 0 apply to this setting as well. Since the proofs of the decisiveness results are very similar to some of our earlier proofs, we defer them to Appendix <ref> and directly provide the most general bound, which is parameterized both by the decisiveness parameter α and the error parameter η.
For any δ∈[0,1) and α∈ [0,1], the distortion achieved by on α-decisive instances where the prediction has error η = n d(,ø)/(ø)
is at most
2 + α -αδ/1+δ + min{2δη/1+δ, 8δ/(1+δ)(1-δ)}.
If we focus our attention on the interesting case where α = 0, i.e., the peer selection case, then this implies the following bound.
For any δ∈[0,1) and α = 0, the distortion achieved by on α-decisive instances where the prediction has error η = n d(,ø)/(ø)
is at most
2/1+δ + min{2δη/1+δ, 8δ/(1+δ)(1-δ)}.
In the metric distortion problem with 0-decisive voters (the peer selection setting),
the best known achievable distortion is 2. Our algorithm recovers this guarantee
for δ = 0. For δ > 0,
our algorithm achieves a consistency strictly better than 2, at the cost of higher robustness; Figure <ref> illustrates the achieved consistency-robustness pairs obtained by varying δ.
Figure <ref> provides the distortion obtained as a function of η for different values of δ.
Note that as long as the value of η is less than or equal to 1, our algorithm ensures that the achieved distortion remains at most 2.
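This last claim follows directly from the corollary's expression, since for η ≤ 1 the first term of the minimum gives 2/(1+δ) + 2δη/(1+δ) ≤ 2. A quick numerical check (ours) of the peer-selection bound:

```python
def peer_selection_bound(delta, eta):
    """Corollary bound for alpha = 0:
    2/(1+delta) + min{2*delta*eta/(1+delta), 8*delta/((1+delta)*(1-delta))}."""
    return 2 / (1 + delta) + min(2 * delta * eta / (1 + delta),
                                 8 * delta / ((1 + delta) * (1 - delta)))

# the bound never exceeds 2 whenever eta <= 1
assert all(peer_selection_bound(d / 100, e / 100) <= 2 + 1e-9
           for d in range(100) for e in range(101))
```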
§ CONCLUSION AND FUTURE WORK
In this work we initiated the study of distortion problems through the learning-augmented framework, focusing on the metric distortion problem.
We considered the performance guarantees achievable by the class of learning-augmented algorithms enhanced with a prediction regarding the optimal outcome and fully resolved this problem by characterizing the robustness-consistency Pareto frontier. Specifically, we provided a natural family of algorithms that appropriately boosts the predicted candidate and achieves any desired outcome on this frontier, parameterized by the designer's trust in the prediction quality.
To prove our algorithm's optimal robustness guarantee, we upper bounded the impact of the boost applied to the prediction, using the new notion of the veto map, which could be of independent interest. Furthermore, we provided even more refined distortion bounds as a function of the prediction error and the decisiveness of the voters.
Our results pave the way for revisiting many other interesting problems within the distortion literature using the learning-augmented framework (a very recent survey on the distortion literature can be used as a useful guide <cit.>).
Examples include the multi-winner variant of the metric distortion problem
<cit.>, the distortion of one-sided matching <cit.>, as well as
the utilitarian distortion setting with a single winner <cit.> and with multiple winners <cit.>. Another natural direction is to apply this framework to randomized metric distortion, especially given the very recent randomized algorithm that achieves distortion better than 3 <cit.>. Can this be extended toward more general improvements on the robustness-consistency Pareto frontier achievable via randomization?
§ MISSING PROOFS FROM SECTION <REF>
§.§ Consistency Bound for α-Decisive Instances
For any δ∈ [0,1) and α∈ [0,1],
assume that a is the candidate output by a given execution of
on an α-decisive instance
when provided with the correct prediction = ø.
Then we have
(a) ≤2+α-αδ/1+δ·(ø).
The social cost of the returned candidate a can be upper bounded as follows:
∑_v d(v,a) ≤∑_v∑_c f(v,c) · d(v,a) using (<ref>)
≤∑_v ∑_c f(v,c) · d(v,c) by Lemma <ref>
≤∑_v∑_c f(v,c) · (d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_c≠ø∑_v f(v,c) · d(ø,c) since d(ø,ø)=0
= (ø) +∑_c≠ø1-δ/1+δ·(c)· d(ø,c) using (<ref>) and the fact that = ø
=(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = cd(ø,c)
≤(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = cd(ø,v) + d(v,c) triangle inequality
≤(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = cd(ø,v) + α d(v,ø) α-decisiveness
≤(ø) + 1-δ/1+δ∑_c∑_v: (v) = cd(ø,v) + α d(v,ø) adding ø to the sum
≤(ø) + 1-δ/1+δ∑_c∑_v: (v) = c (1+α)d(ø,v)
≤(ø) + (1+α)(1-δ)/1+δ(ø)
≤2+α-αδ/1+δ(ø).
§.§ Robustness Bound for α-Decisive Instances
For any δ∈ [0,1) and α∈ [0,1],
assume that is the candidate output by a given execution of on an α-decisive instance,
i.e., the algorithm returns the prediction.
Then we have
() ≤(2 + α -αδ/1+δ + 8δ/(1+δ)(1-δ)) ·(ø).
Note that if = ø then the claim holds by Proposition <ref>. We thus assume for the rest of the proof that ≠ø.
The social cost of the returned candidate p can be upper bounded as follows:
∑_v ∈ d(v,p) ≤∑_v∑_c f(v,c) · d(v,p) using (<ref>)
≤∑_v∑_c f(v,c) · d(v,c) by Corollary <ref>
≤∑_v ∑_c f(v,c) ·(d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_c ≠ø∑_v f(v,c) · d(ø,c) Since (̣ø,ø) = 0
= (ø) + ∑_c ≠ø,∑_v f(v,c) · d(ø,c) + ∑_v f(v,p) · d(ø,p)
=(ø) + 1-δ/1+δ∑_c ≠ø(c) d(ø,c) + 2δ n/1-δd(ø,p)using (<ref>) and (<ref>)
=(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = c d(ø,c) + 2δ n/1-δd(ø,p)
≤(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = c (d(ø,v)+d(v,c))+ 2δ n/1-δd(ø,p) triangle inequality
≤(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = c (1+α)d(ø,v)+ 2δ n/1-δd(ø,p) by α-decisiveness
≤(ø) + 1-δ/1+δ∑_c∑_v: (v) = c (1+α)d(ø,v)+ 2δ n/1-δd(ø,p) adding ø to the sum
≤(ø) + (1+α)(1-δ)/1+δ(ø) + 2δ n/1+δd(ø,p)
≤2+α-αδ/1+δ(ø) + 2δ n/1+δd(ø,p)
≤2+α-αδ/1+δ(ø) + 2δ n/1+δ·2(ø)/1-δ/2n by Lemma <ref>
≤(2+α-αδ/1+δ + 8δ/(1+δ)(1-δ))(ø).
The following proposition shows
that the robustness bound is maintained also when the algorithm does not output the prediction.
For any δ∈ [0,1) and α∈ [0,1],
assume that a ∈∖{} is the candidate output by a given execution of on an α-decisive instance,
i.e., the algorithm does not return the prediction.
Then we have
(a) ≤(2 + α -αδ/1+δ + 8δ/(1+δ)(1-δ)) ·(ø).
We first note that Proposition <ref> holds for any instance, and in particular it holds for α-decisive
instances.
By Proposition <ref>,
a is returned by when executed on the same preference profile as in the given execution, with candidate a also being the prediction.
That is, the new execution returns the new prediction (candidate a).
The
claim follows by Proposition <ref>.
§.§ Bounds Parameterized by Prediction Quality for α-Decisive Instances
For any δ∈[0,1 ) and α∈ [0,1], the distortion achieved by on
α-decisive
instances where the prediction has error η = n d(,ø)/(ø)
is at most
2 + α -αδ/1+δ + min{2δη/1+δ, 8δ/(1+δ)(1-δ)}.
For any δ∈ [0,1),
assume that a is the candidate output by a given execution of on an α-decisive instance
when provided with a prediction with error η = n d(, ø)/(ø).
Propositions <ref> and <ref> already show that the worst-case distortion of is at most (2 + α -αδ/1+δ + 8δ/(1+δ)(1-δ)) for any δ, so we just need to prove that it is also at most 2+α-αδ+2δη/1+δ.
Note that if = ø then η = 0 and the claim follows from Proposition <ref>. We thus assume for the rest of the proof that ≠ø.
The social cost of a can be upper bounded as follows:
∑_v d(v,a) ≤∑_v∑_c f(v,c) · d(v,a) using (<ref>)
≤∑_v ∑_c f(v,c) · d(v,c) by Lemma <ref>
≤∑_v∑_c f(v,c) · (d(v,ø) + d(ø,c)) triangle inequality
= (ø) + ∑_c ∑_v f(v,c) · d(ø,c)
= (ø) + ∑_c ≠ø∑_v f(v,c) · d(ø,c)since (̣ø,ø) = 0
= (ø) + ∑_v f(v,) · d(ø, ) + ∑_c≠,ø∑_v f(v,c) · d(ø,c) extracting c= from the sum
= (ø) + 1-δ/1+δ·(()+2δ n/1-δ)d(ø,) +∑_c≠,ø1-δ/1+δ·(c)· d(ø,c) using (<ref>), (<ref>)
= (ø) + 1-δ/1+δ·2δ n/1-δ· d(ø,) +∑_c ≠ø1-δ/1+δ·(c)· d(ø,c) c=p back in the sum
= (ø) +2δη/1+δ(ø)+ ∑_c ≠ø1-δ/1+δ·(c)· d(ø,c)
since d(ø,)= η·(ø)/n
= 1+δ + 2δη/1+δ(ø)+ 1-δ/1+δ∑_c ≠ø∑_v: (v) = cd(ø,c) rearranging the sum
≤1+δ + 2δη/1+δ(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = cd(ø,v) + d(v,c) triangle inequality
≤1+δ + 2δη/1+δ(ø) + 1-δ/1+δ∑_c ≠ø∑_v: (v) = c (1+α)d(ø,v) by α-decisiveness
≤1+δ + 2δη/1+δ(ø) + 1-δ/1+δ∑_c∑_v: (v) = c (1+α)d(ø,v) adding ø back to the sum
≤1+δ + 2δη/1+δ(ø) + (1+α)(1-δ)/1+δ(ø)
≤(2+α-αδ+2δη/1+δ)(ø)
This concludes the proof. |
http://arxiv.org/abs/2307.05337v1 | 20230711152649 | Explaining Competitive-Level Programming Solutions using LLMs | [
"Jierui Li",
"Szymon Tworkowski",
"Yingying Wu",
"Raymond Mooney"
] | cs.CL | [
"cs.CL"
] |
In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to <problem, solution> pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity in describing and explaining solutions.
Our explanation generation methodology can generate a structured solution explanation for the problem containing descriptions and analysis. To evaluate the quality of the annotated explanations, we examine their effectiveness in two aspects: 1) satisfying the human programming expert who authored the oracle solution,
and 2) aiding LLMs in solving problems more effectively.
The experimental results on the CodeContests dataset demonstrate that while GPT-3.5 and GPT-4 have comparable abilities in describing the solution, GPT-4 shows a better understanding of the key idea behind the solution.
§ INTRODUCTION
Recent Large Language Models (LLMs) have shown impressive capabilities for various reasoning tasks, including multi-hop question answering <cit.>, commonsense reasoning <cit.>, symbolic reasoning <cit.>, and math word problem-solving <cit.>. The chain-of-thought prompting method <cit.> explicitly instructs LLMs to generate intermediate steps until the final answer is reached, enabling the model to decompose problems and solve them step-by-step. Nevertheless, challenges persist when tackling complex reasoning tasks, such as competitive-level programming problems. For instance, even powerful models like GPT-4 outperform fewer than 5% of human competitors in virtual contests from Codeforces <cit.>. Competitive-level programming problems epitomize problem-solving strategies for algorithmic, mathematical, geometric, and graph-theoretic problems. Solving them necessitates understanding problems, familiarity with algorithms, reasoning skills, creative algorithm development, and efficient, robust implementation.
Previous works on automatically solving programming problems focus on tasks mapping fairly detailed natural language instructions to programs. <cit.> verified and selected candidate programs by running them on human-written or automatically generated test cases. <cit.> incorporated execution feedback as an intermediate step during code generation to enhance the programming ability of LLMs. While these methods yield promising results for fairly straightforward implementation tasks, they fall short on algorithmic reasoning tasks.
Table <ref> shows a sample problem from Codeforces.[<https://codeforces.com/problemset/problem/1670/A>] Compared to most instruction-to-program tasks, competitive-level programming, a problem-to-program task, is more challenging. Before implementing the program, one needs to first abstract the problem and create a mathematical representation, come up with potential solutions, consider time and space complexity constraints, and corner cases, and finally identify a proper problem-solving strategy.
In order to disentangle reasoning about the problem from code implementation, we advocate decomposing the process of solving a programming problem. Instead of directly generating a program given the problem statement, we propose adding explicit, intermediate reasoning steps in natural language. These steps are more aligned with how humans typically solve such problems and utilize the idea of chain-of-thought.
However, while <problem, solution>[A solution here refers to a correct program.] pairs are publicly available on the practice websites <cit.>, natural language descriptions or explanations of “how to solve the problem” are hard to collect at scale, as it requires additional annotations from programming experts.
Hence, we propose to automatically generate explanations using an LLM. We found that: 1) Given both the problem and a human-written program solution,
LLMs like GPT-3.5 and GPT-4 can describe the solution in natural language reasonably well; and 2) Given the automatically generated explanation as a hint, LLMs perform better at solving the problem.
Our explanation generation methodology, shown in Figure <ref>, employs a hierarchical prompt, requesting a step-by-step explanation from detailed code analysis to a broader understanding of the solution. The seven points can be categorized into Description-Level (i.e., Points 1, 3, 4, 5) and Analysis-Level (i.e., Points 2, 6, 7) treatments of the problem and solution. To evaluate the quality of the generated explanations, we examine their effectiveness in two respects: 1) being positively rated by the human programming expert who authored the "oracle" solution, and 2) aiding LLMs in solving the problems more effectively. In the explanation-aided evaluation, the Explainer generates the explanation given the oracle solution, while the Solver generates programs from the explanation. We use GPT-3.5-turbo and GPT-4 as the Explainer and GPT-3.5-turbo as the Solver in our experiments.
In the human evaluation, we ask human experts who are the authors of solutions to score the explanations from -2 to 2. They give the explanations positive scores averaging 0.81 and 1.30 for the GPT-3.5 Explainer and GPT-4 Explainer, respectively. With respect to explanation-aided program synthesis, we find that different points of the generated explanations can guide the model to solve the problem better, with the solve rate at pass@10 (one of the top 10 generated programs is deemed correct) increasing from 6.1% to 42.4% on the CodeContests <cit.> test set. In addition, we found that GPT-3.5-turbo performs significantly worse than GPT-4 at Analysis-Level explanations, while both can generate high-quality Descriptions.
The main contributions of this work are:
* We advocate disentangling reasoning about the problem and code generation in solving competitive-level programming problems.
* We propose a Specific-to-General prompt to automatically generate structured natural language explanations for <problem, solution> pairs.
* We demonstrate that this proposed method yields convincing explanations that are positively scored by the program authors and serve as effective hints to better solve the problems.
Though the main focus of this paper is not solving competitive-level programming problems. We further discuss how such explanations can potentially be used to improve problem-solving, which we leave as a potential avenue for future research.
§ BACKGROUND
§.§ Challenges in Solving and Annotation
Competitive-level programming problems <cit.> are more indirect compared to many code implementation tasks. Reasoning and problem-solving strategies are usually necessary before implementation <cit.>; and they require solutions to be both correct and efficient. While brute-force solutions may be feasible for some problems, they are frequently deemed inadequate due to their high time and space complexity. Additionally, some problems may intentionally obscure the key idea behind the solution, presenting more puzzle-like challenges.
The challenges in solving competitive-level programming lie not only in the implementation phase but also in the reasoning process that precedes it, which has not been adequately addressed by previous works. Consider the problem in Table <ref>: if given a specific instruction, LLMs optimized for code generation can generate the correct program.
However, the reasoning behind why it is correct is reflected in neither the problem nor the program solution. To bridge this gap between problem and solution, natural-language explanations of solutions and their reasoning steps can potentially be helpful.
Annotating explanations on how to solve those questions can be difficult and time-consuming, even for highly skilled programming competitors. Solutions are written under a time constraint, and competitors compromise readability for fast implementation. Therefore, solutions are often hard to understand by others without natural language explanation. Small-scale solution explanations, also known as editorials, can be found in some blogs on the Internet, but collecting large-scale editorials in a unified format is still infeasible.
In this paper, we tackle how to use LLMs to generate silver-standard explanations automatically, thereby addressing the need for accessible and comprehensive solution explanations in competitive-level programming.
§.§ Problem Formulation
We formalize our task with a problem set consisting of n problems P={p_1, p_2,⋯,p_n}; each problem p_i is a text sequence that describes the following aspects clearly.
* Problem statement: a natural language description of the problem, as the first cell in Table <ref>.
* Input/Output: the format and input/output constraints (e.g. ranges) for the submitted program s.
* Example: An example of a correct pair of input/output.
* Note (Optional): Explanation of the Example input/output.
Each p_i corresponds to a set of oracle human solutions S_i={s_i^1, s_i^2, ⋯, s_i^t}, where t is the total number of solutions for p_i. We then select the top k solutions following two simple rules: (1) We only consider correct human solutions, i.e., those that have passed the online judge system; (2) Solutions in S_i are ranked according to their programming language and size in bytes, with a preference for Python-implemented solutions and shorter programs.
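A minimal sketch (ours) of these two selection rules; the field names of the solution records are assumptions, not the dataset's actual schema.

```python
def select_top_k_solutions(solutions, k):
    """Keep only accepted solutions (rule 1), then rank by language preference
    (Python first) and by program size in bytes (rule 2)."""
    accepted = [s for s in solutions if s["verdict"] == "OK"]
    ranked = sorted(
        accepted,
        key=lambda s: (0 if "python" in s["language"].lower() else 1,
                       len(s["source"].encode("utf-8"))),
    )
    return ranked[:k]
```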
All experiments in this paper are zero-shot on large language models without fine-tuning.
§.§ General-to-Specific Prompting Solver
Before delving into the explanations, we first discuss the general capacity of LLMs to solve those problems directly from problem to generated solution or thinking step-by-step. We note that our methodology requires using instruction-finetuned language models <cit.>, as we provide zero-shot instructions in our prompt.
We designed a general-to-specific reasoning chain, inspired by humans' step-by-step thought process in solving such problems. As shown in Figure <ref>, we prompt the LLM to start from a general understanding of the problem and potential algorithms to use, then gradually transition to a more specific and detailed level of understanding, until finally implementing a program in Python.
For each problem, we generate k programs {g_i^1,g_i^2,⋯, g_i^k} with LLMs as the k candidates to conduct a solve@k evaluation, as defined by <cit.>. In other words, if any of the generated k programs is considered as a correct solution, then this problem is regarded as solved.
When experimenting with GPT-3.5-turbo on the 165 problems in the test set of CodeContests, the proposed general-to-specific prompt can boost the solve@10
from 6.1% to 9.1%. By reasoning from general to specific, the LLM performs slightly better at solving programming problems. However, upon examining the failed cases, we discovered that for most problems, the model makes a mistake at a very early stage, ultimately resulting in a completely incorrect solution.
§ METHOD
In the process of problem-solving, a human typically constructs a solution by progressing from a general idea to a detailed code implementation. However, explaining that solution involves a reverse approach. This entails examining the code on a line-by-line basis, interpreting the role of each function, and then rationalizing the algorithmic steps in relation to the original problem. Therefore, we design a specific-to-general explanation generation method.
§.§ Specific-to-General Solution Explaining
Previous works have demonstrated the ability of LLMs to explain code; therefore, we investigated generating explanations automatically using an LLM with both the problem and sample solution as input.
For a problem-solution pair {p_i, s_i^j} where j≤ k, an explanation e_i^j is generated. For each problem p_i, a set of explanations E_i is generated given different solutions {s_i^1, s_i^2, ⋯, s_i^k}.
Although simple prompts such as 'explain the solution' may generate useful explanations, these often lack crucial information and are difficult to evaluate due to the diversity of their outputs. To tackle this issue, we deliberately control aspects of the explanations, requiring them to include a 'problem summary' that demonstrates understanding of the problem and three levels of natural language description of the solution, illustrating the model's ability to comprehend the solution from a low-level to a high-level perspective. These can be considered 'Description-level' explanations. Elements such as 'used algorithm,' 'time complexity,' and 'proof of correctness' fall under 'Analysis-level' explanations, showcasing the language model's overall analysis and understanding of the solution. The method for this specific-to-general explanation prompt is detailed in the left part of Figure <ref>.
Format-guided-generated explanations are clear and structured, thus making it easier to disentangle information and evaluate each aspect. In our experiment, over 99% of explanations contain all defined points, with less than 1% skipping some later points due to the length constraint.
In addition, reasoning upward from the detailed, code-level implementation provides the intermediate steps in context: the LLM can reach a better general understanding of the solution by looking at the detailed descriptions it has already generated.
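A rough sketch (ours) of how such a specific-to-general prompt can be assembled. The exact wording of the seven points in the paper's figure is not reproduced here, so the phrasing below is an assumption, and `query_llm` is a placeholder for whichever chat-completion client is used.

```python
S2G_POINTS = [
    "1. Problem summary",
    "2. Algorithms or techniques used",
    "3. Step-by-step solution description",
    "4. Explanation of the solution",
    "5. Solution in one sentence",
    "6. Time complexity",
    "7. Why the solution is correct",
]

def build_explainer_prompt(problem: str, solution: str) -> str:
    points = "\n".join(S2G_POINTS)
    return (f"Problem:\n{problem}\n\nAccepted solution:\n{solution}\n\n"
            "Explain this solution by answering the following points, moving "
            f"from code-level details to the general idea:\n{points}")

# explanation = query_llm(build_explainer_prompt(problem, oracle_solution),
#                         temperature=0.0)   # query_llm is a placeholder
```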
§.§ Explanation Instructed Solver
In order to evaluate the quality of the generated explanations, we design an automatic metric to test how much an explanation can aid in solving the problem when included in the instruction. In this setting, we give both the original problem and one of the Description-level points to the LLM Solver, with the corresponding prompt given in the right part of Figure <ref>. If a given instruction enables the LLM Solver to solve a problem it was previously unable to solve, we consider that instruction to be more informative than one that does not yield such an outcome.
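A corresponding sketch (ours, with assumed wording) of the hint-instructed solver prompt: the problem statement plus a single Description-level point taken from the generated explanation.

```python
def build_solver_prompt(problem: str, hint: str) -> str:
    """Instructed solver: solve the problem guided by one Description-level
    point (e.g., the one-sentence solution) used as a hint."""
    return (f"Problem:\n{problem}\n\n"
            f"Hint describing the intended solution:\n{hint}\n\n"
            "Write a correct and efficient Python program that reads from "
            "standard input and writes to standard output. Output only the code.")
```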
§ EXPERIMENTS
§.§ Experimental Setup
Model We use both GPT-3.5-turbo and GPT-4 <cit.> as the Explainer for explanation generation.[Due to the usage limits of GPT-4, we run larger-scale experiments only on GPT-3.5-turbo.] We use GPT-3.5-turbo as the Solver LLM for code generation in all our experiments. We will refer to it as GPT-3.5 for simplicity. The temperature t is set to 0 wherever only one sample is needed, and to 0.2 otherwise. The maximum text length is set to 4096, and we skip the 0.7% of cases where this limit is exceeded.
Data To ensure the effectiveness and accuracy of our results, given that GPT-3.5 may have seen some <problem, solution> pairs in its training data, we use the CodeContests test set as our main dataset in this paper. It contains 165 real online contest problems from Codeforces, the earliest of which dates back to October 2021, after the knowledge cutoff of GPT-3.5 and GPT-4 (September 2021). Additionally, we extract a small subset of 50 more recent problems from Codeforces for human evaluation. Table <ref> presents statistics based on their level-of-difficulty ratings. Problems with ratings over 2000 are considered very difficult; most of them can only be solved by medal-winning competitors.
Metric We employ pass@k <cit.> as our evaluation metric for the solve rate. For each problem p_i, we sample k programs generated by GPT-3.5 and evaluate them using the Solve Rate@k metric: the percentage of problems for which at least one of the k programs passes all hidden test cases when submitted to Codeforces' online judge.
We first filter the programs by their output on the public test cases before submitting them, and also measure Pass Public@k: the percentage of problems for which at least one of the k programs passes the public test cases given in the examples. The above metrics are abbreviated as `solve@k' and `public@k'.
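Both metrics reduce to the same computation over a boolean results table. The sketch below (ours) counts a problem for solve@k (respectively public@k) if at least one of its k sampled programs passes all hidden tests (respectively the public example tests).

```python
def solve_at_k(results):
    """results[i][j] is True iff the j-th sampled program for problem i
    passes the relevant tests; returns the fraction of problems with at
    least one passing program."""
    return sum(1 for per_problem in results if any(per_problem)) / len(results)

hidden = [[False, True], [False, False], [True, True]]   # toy example
public = [[True, True], [False, True], [True, True]]
print(solve_at_k(hidden), solve_at_k(public))   # ~0.67 and 1.0, so public@k >= solve@k
```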
§.§ Human Evaluation
We measured the quality of LLM-generated explanations using human evaluation. We collect 50 <problem, solution> pairs from Codeforces, ensuring that their format remained consistent with those in CodeContests.
Author Likert Scores Recognizing that understanding and explaining others' solutions can be a challenging task for programmers, we employed an annotator-centered evaluation approach. We extracted solutions and corresponding problems from Codeforces for an expert annotator. The Explainer then generates an explanation for the annotator's solution, which was subsequently scored by the author of the explained solution. Note that each explanation is scored by the author of the solution being explained.
We generated explanations for 50 problems with ratings ranging from 800 to 2000, along with their corresponding solutions, and provided these explanations to human experts. They were asked to assign a Likert score from -2 (very poor) to 2 (excellent).
The evaluation consists of ten questions, each one corresponding to a specific aspect of the explanation. We separately assess the quality of the response to each point of our G2S prompt (see Figure <ref>). Furthermore, we developed three criteria to evaluate various aspects of the overall explanation:
* Usefulness: How useful is the explanation as guidance to solve the problem?
* Clearness: How good is the explanation in terms of describing everything clearly and avoiding ambiguity?
* Understanding: How much does the LLM understand the key idea behind the solution?
The average Likert scores over 50 problems are shown in Figure <ref>. Regarding the scores for the solution descriptions (Step-by-Step Solution Description, Explanation of the Solution, Solution in One Sentence) and usefulness, both GPT-3.5 and GPT-4 Explainer are positively rated by expert annotators, with an average of 1.16 and 1.36 respectively.
However, GPT-3.5 receives near zero or negative scores on questions including why it's correct, clearness, and understanding, showing its inadequate ability to grasp the key idea behind the solution, while GPT-4 performs better (0.68∼0.88 score higher) on these aspects. This reveals a clear difference in the abilities of GPT-3.5 and GPT-4 to reason and analyze competitive-level programming solutions.
Qualitative Analysis
We observed several interesting aspects of the explanations generated by the models. Models can effectively explain code by integrating the problem statement and the solution. Step-by-step descriptions are often more concise than line-by-line code reports by summarizing complex operations and avoiding re-stating well-known algorithms (e.g., depth-first-search).
A sample explanation from GPT-3.5 is given in Table <ref>. It describes the solution very well in both specific (step-by-step) and general (one-sentence) levels. It summarizes the operations of `count <0' and `multiply -1 or 1' into `negative on the left, positive on the right' and explains if it's sorted, then `yes' otherwise, `no'. However, if we look at the one-sentence description, there are ambiguous terms like `original array' or `move elements', which might mislead the problem-solving if interpreted incorrectly. This is due to natural languages' ambiguous nature compared to programming languages.
Models exhibit shortcomings when explaining solution correctness, as they may not comprehensively account for conditions stipulated in the problem statement. For instance, when explaining Example <ref>, it failed to recognize that “swapping signs of 2 elements multiple times means moving signs arbitrarily along the array” is a crucial condition, which is not mentioned explicitly in natural language. This highlights a potential limitation in the models' ability to extract and incorporate essential information from various parts of the problem statement when generating explanations.
We also present the full input/output and our scores for both successful and failed cases in appendix <ref>.
§.§ Automatic Metrics: Improving Solutions
We further investigated the ability of generated explanations to improve problem solving. Our fundamental assumption is that if an explanation accurately describes the algorithm, it should be capable of guiding the implementation of a correct solution. Consequently, we experimented with versions of the Instructed Solver Prompt in Figure <ref>, wherein one point in the explanation (i.e., an aspect of the solution) is provided to the GPT-3.5 Solver as a hint for solving the problem.
We compare it with two baseline solvers that, unlike our solver from Figure <ref>, are not conditioned on explanations and only get the problem statement as an input: zero-shot prompt (denoted as Baseline in Table <ref>) and General to Specific (G2S) “step-by-step” prompt shown in Figure <ref>. We also check that explanations do not contain code snippets to ensure the solutions are not directly leaked in explanations. However, note that it is still not a completely “fair” comparison, since the automatically generated `silver explanations' are conditioned on oracle solutions.
Main results For GPT-3.5, we measure pass@k for k={1,5,10}, but only pass@1 for GPT-4 due to access limits. To sample k programs, we sample k different human solutions for Explainer and then generate a program for each explanation.
Results are shown in Table <ref>. Different Description-level aspects of explanations improve both the solve rate and pass public rate. The most detailed aspect, Step-by-Step Solution Description (S-by-S), which provides a detailed natural language description of the implementation, offers the most significant benefit to problem-solving, resulting in a solve rate @1 that is 7.4 times higher than the baseline. The impact of Explanation of the Solution (Exp-Sol) and Solution in One Sentence (OneSent) is comparatively lower due to their concise nature, which offers a less clear path towards the solution. However, providing information on the algorithms used (UsedAlg) or the expected time complexity (TC) does not improve GPT-3.5's problem-solving capabilities.
The pass@1 results for GPT-4 Explainer are not significantly better than for GPT-3.5, indicating that they share similar capabilities in terms of Description-level explanations.
Pass Public Tests vs. Solve One observation from Table <ref> is that solve@10 is significantly lower than public@10. For a program that passes the public tests but fails the hidden tests, there are two possibilities: 1) it is incorrect and only handles a subset of the test data, including the public tests; 2) it is inefficient. As discussed before, in competitive-level programming, a "correct" but slow implementation does not count as a solution, as there are constraints on time and space complexity. Therefore, we further study programs that pass the public tests but may fail the hidden tests. As shown in Table <ref>, the baseline has 48.9% of its programs rejected by the online judge due to inefficiency (time limit exceeded, TLE), indicating that GPT-3.5 tends to generate inefficient implementations (e.g., brute-force solutions).
Note that this statistic is computed over all submissions, i.e., one problem might have up to k submissions, so it differs from the problem-wise solve rate.
When provided hints from the solution description, the portion of TLE programs drops significantly. Although GPT-3.5 may still make mistakes in some details or fail to consider corner cases even with hints from the explanation, it is better at avoiding inefficient solutions.
Another interesting observation is that the wrong answer rate for one-sentence explanation-instructed solving is higher than the baseline. One possible explanation is that it is challenging to incorporate corner case handling in a one-sentence solution description, which makes GPT-3.5 more likely to implement an almost-correct program.
Difficulty of the problem We further study the aiding effect of three levels of Solution Description on problems of different difficulty ratings.
Codeforces problems are assigned ratings: the higher the rating, the more challenging the problem. Individuals who consistently solve problems with ratings of 2000 are in the 93rd percentile of all participants.
The outlier in the figure is due to there being only one problem with rating 3100 in the CodeContests test set.
As shown in Figure <ref>, the solve rate decreases as the ratings increase and no explanation can help solve complex problems. However, for easier problems, even a one-sentence hint enables GPT-3.5 to solve approximately 70% of problems, compared to the ∼30% baseline. Furthermore, hints can effectively help to solve medium-difficulty problems which were previously unsolvable.
Sampling Strategies
In our approach, we generate k programs and treat all of them as candidates without re-ranking, making the sampling strategy crucial. We therefore compared three strategies for sampling k programs.
* Sample k human solutions: for each p_i, we sample S_i={s_i^1, s_i^2, ⋯, s_i^k}, and for each solution s_i^j we generate one explanation e_i^j and one corresponding program g_i^j.
* Sample k explanations: We only take the first solution s_i, and sample E_i={e_i^1, e_i^2, ⋯, e_i^k}, for each explanation e_i^j, we generate one corresponding program g_i^j.
* Sample k programs: We only sample 1 solution s_i and one corresponding explanation e_i, then we sample k programs G_i={g_i^1, g_i^2, ⋯, g_i^k} given the explanations.
Table <ref> shows that the first strategy of sampling from 10 different human oracle solutions is the most effective. Additionally, the second strategy of sampling 10 explanations from one oracle solution yields better results than sampling 10 programs from one explanation (strategy 3).
One potential reason is that some human solutions may have poor readability or employ complex implementations that are hard to follow. By sampling different human oracle solutions, there is a higher likelihood that explanations based on clear and concise solutions can serve as better hints. Similarly, sampling diverse explanations can mitigate the issue of misleading, incorrect explanations. We also compared the explanation quality of GPT-4 (i.e., only as an Explainer) and found it to be superior to GPT-3.5 in the same setting (10.9% vs. 6.7%). We skipped other settings due to experimental limitations.
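Referring back to the three sampling strategies, the first (and best-performing) one can be sketched as follows. The `explain` and `generate_program` callables stand for the Explainer and Solver calls and are placeholders, not the paper's actual interface.

```python
def strategy_one(problem, human_solutions, k, explain, generate_program):
    """Sample k distinct human solutions and produce one explanation and one
    program per solution (k candidates in total, no re-ranking)."""
    candidates = []
    for s in human_solutions[:k]:
        e = explain(problem, s)            # Explainer on <problem, solution>
        g = generate_program(problem, e)   # Solver on <problem, explanation>
        candidates.append(g)
    return candidates
```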
§ RELATED WORK
§.§ Solving Competitive-level programming problems
Early attempts to apply deep learning to solve competitive-level programming problems <cit.> utilized traditional approaches such as SMT solvers and search to generate short programs for simple problems. <cit.> collected a dataset of human-written problem statements and solutions for Codeforces problems and introduced sequence model baselines that could solve a small subset of their dataset. With the advent of Transformers, AlphaCode <cit.> achieved significant progress in solving competitive-level programming problems by attaining a rating equivalent to the top 54% of participants on Codeforces by finetuning LLMs in the problem-to-solution scenario with the CodeContests dataset collected from Codeforces. Notably, AlphaCode requires sampling 1M program candidates per problem to achieve a 29.6% solve rate on their test set. <cit.> improves upon AlphaCode by using fewer samples for the same level of performance. Our study focuses on explaining solutions to problems, rather than directly solving them. To the best of our knowledge, this is the first attempt to explain competitive-level programming solutions using language models, which places a landmark of the reasoning and interpreting ability of those models.
§.§ Reasoning with large language models
<cit.> has demonstrated that by breaking down the reasoning steps through chain-of-thought (CoT) prompting, LLMs are able to solve challenging reasoning problems by following the correct logic step-by-step.
This method, along with majority voting, has led to notable advancements in solving high-school-level mathematical problems <cit.>. <cit.> generalize the idea of CoT to zero-shot learning. Another technique that builds upon CoT is the self-consistency decoding strategy <cit.>. This approach samples diverse reasoning paths and selects the most consistent answer, which has shown to improve LLMs' performance on complex reasoning tasks by embracing multiple ways of thinking. Additionally, Parsel <cit.> proposed a framework that focuses on enhancing LLMs' hierarchical multi-step reasoning capabilities, particularly for tasks such as generating complex programs.
§.§ Code Comprehension with LLMs
Several existing works have explored generating code explanations using LLMs. <cit.> integrated LLM-generated code explanations into an interactive e-book on web software development, showing that students found the generated explanations helpful. <cit.> compared LLM-generated explanations with student-created explanations, finding that the LLM-created explanations were easier to understand and more accurate. <cit.> utilizes self-generated explanations as feedback to its self-debug. In comparison, our work targets explaining competitive-level programming problems, aiming not only to clarify the code implementation but also to point out the key idea behind the solution, its correctness, choice of algorithms, and time complexity.
§ CONCLUSION AND FUTURE WORK
In this paper, we propose explaining competitive-level programming solutions using LLMs. Given a problem and its corresponding human oracle solution given, LLMs can generate structured explanations that are positively scored by human authors. Our evaluation demonstrates that both GPT-3.5 and GPT-4 exhibit reasonable capabilities in generating faithful and clear descriptions, which can guide another LLM to better solve the problem. GPT-4 outperforms GPT-3.5 significantly in analyzing the problem and solution, as well as capturing the key ideas behind the solution.
Our automatic evaluation metric examines an ideal scenario: when a hint is based on an oracle human solution, it effectively guides the LLM to generate improved programs for solving problems. However, a system should be able to learn from human programming solutions to improve its own problem-solving on novel problems without guidance from a human solution. This raises the question: Can the LLM-generated explanations be utilized to improve subsequent problem-solving?
Our explanation method can potentially be applied to annotate large-scale data (e.g., the full CodeContests training set), yielding thousands of silver explanations that can be used to fine-tune a reasoning model for competitive-level programming problems. This approach could help bridge the long-standing reasoning gap between problem and program for complex programming problems. Moving forward, we aim to further address solving such problems by focusing on enhancing reasoning for programming problems.
§ LIMITATIONS
One primary limitation of this work is that we experimented on only one dataset and two LLMs, namely GPT-3.5 and GPT-4, so it is unclear whether our method generalizes to other LLMs or to problem sources other than Codeforces. We assume that competitive-level programming problems are well defined, so the distribution shift between sources should not be large.
Another limitation stems from the annotator-centered nature of our human evaluation process, which prevents us from assessing annotator agreement. Individual annotators were only able to score explanations based on their own solutions. While we provided guidelines for assigning scores, the evaluation process remains inherently subjective, and interpretations may vary among different annotators.
§ ETHICS STATEMENT
Our research is driven by the potential benefits of improved problem-solving capabilities and a deeper understanding of programming concepts for developers and learners. However, we acknowledge the ethical implications and potential risks specific to our work.
This work focuses on the task of automatic code generation, but we emphasize that it is not intended to replace human efforts in programming. Machine-generated programs may contain errors or vulnerabilities, and it is crucial to thoroughly verify any AI-generated code snippets before using them. Providing code explanations should not be seen as an endorsement to blindly trust the generated programs. Users must carefully understand, verify, and examine AI-generated code to ensure its correctness and safety.
§ ACKNOWLEDGEMENT
This material is based on research that is supported in part by the Air Force Research Laboratory (AFRL) and DARPA, for the KAIROS program under agreement number FA8750-19-2-1003. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, IARPA, NSF, or the U.S. Government.
We sincerely thank our annotators, especially smax, for their efforts in this work, and also all reviewers for their valuable suggestions on this paper.
§ APPENDIX
Case Study
Table <ref> presents an example of the input/output of the model, which contains our specific-to-general prompt as well as the comparison between GPT-3.5 and GPT-4 generated explanations. We can see that both GPT-3.5 and GPT-4 describe the problem and solution very well. Both models correctly relate the solution logic to the problem setting. In the analysis of why this solution is correct, both GPT-3.5 and GPT-4 mention the key idea that increasing IQ backward means saving IQ for the future.
Table <ref> presents an incorrectly explained example. LLMs can describe the problem and illustrate the printing array operation as “moving the last element to the head of the array”. However, both of them fail to understand the purpose of placing the maximum element in the front as they ignore one crucial condition in the problem: all elements are strictly positive. Nevertheless, GPT-4 maintains a better understanding of the problem by noticing the condition that the given array is already sorted in non-decreasing order.
|
http://arxiv.org/abs/2307.04718v1 | 20230710173013 | On the randomized Euler algorithm under inexact information | [
"Marcin Baranek",
"Andrzej Kałuża",
"Paweł M. Morkisz",
"Paweł Przybyłowicz",
"Michał Sobieraj"
] | math.NA | [
"math.NA",
"cs.NA"
] |
On the randomized Euler algorithm under inexact information
Marcin Baranek
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Andrzej Kałuża
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Paweł M. Morkisz
NVIDIA Corp. and AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
Paweł Przybyłowicz
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected], corresponding author
Michał Sobieraj
AGH University of Krakow,
Faculty of Applied Mathematics,
Al. A. Mickiewicza 30, 30-059 Kraków, Poland
[email protected]
This paper focuses on analyzing the error of the randomized Euler algorithm when only noisy information about the coefficients of the underlying stochastic differential equation (SDE) and the driving Wiener process is available. Two classes of disturbed Wiener processes are considered, and the dependence of the algorithm's error on the regularity of the disturbing functions is investigated. The paper also presents results from numerical experiments to support the theoretical findings.
Key words: stochastic differential equations, randomized Euler algorithm, inexact information, Wiener process, lower bounds, optimality
MSC 2010: 65C30, 68Q25
§ INTRODUCTION
We investigate the strong approximation of solutions of the following SDEs
dX(t) = a(t,X(t)) dt + b(t,X(t)) dW(t), t∈ [0,T], X(0)=η,
where T >0, W is an m-dimensional Wiener process,
and η∈ℝ^d. Our analysis is performed under the assumption that only standard noisy information about (a,b,W) is available. This means that we have access to a,b,W only through its inexact values at finite number of discretization points.
Our interest lies in approximating the values of X(T) using the inexact information about the coefficients (a,b) and the driving Wiener process W. We consider algorithms that are based on values of a,b, and W corrupted by noise. This noise can arise from measurement errors, rounding procedures, etc. The inspiration of considering such inexact information comes from various sources, such as numerically solving SDEs on GPUs and understanding of impact of low precision in computations (when switching from double to float and half, see <cit.>), as well as modeling real-world phenomena that are described by SDEs such as energy demand/production forecasting (where exact information is rarely available).
Inexact information has been studied in the literature for various problems, including function approximation and integration (<cit.>, <cit.>, <cit.>, <cit.>), the approximate solution of ODEs (<cit.>) and PDEs (<cit.>, <cit.>); see also the related monograph <cit.>. In the context of stochastic integration and the approximation of solutions of stochastic differential equations, inexact information about the integrands or the coefficients of the underlying SDEs has been considered in <cit.>, <cit.>, <cit.>. However, it is important to note that in <cit.> and <cit.> the information about the process W was exact. We also refer to the article <cit.>, where noisy information induced by the approximation of normally distributed random variables is considered. However, that computational setting (devoted to the weak approximation of SDEs) is different from the one considered in this paper (established in the context of strong approximation of the solution X).
In this paper, we mainly extend the proof technique known from <cit.> and <cit.>. Namely, we cover the case when the information about the Wiener process W is also inexact. This assumption leads to a significant change in the proof technique. It allows us to investigate the error behavior of the randomized Euler scheme under inexact information about the tuple (a,b,W) with precision parameters δ_1,δ_2,δ_3∈ [0,1] for a,b,W, respectively. (See also, for example, <cit.>, <cit.>, <cit.>, <cit.>, where other randomized algorithms for the approximation of solutions of ODEs and SDEs have been defined and investigated under exact information.) Roughly speaking, we show that the L^r(Ω)-error of the randomized Euler scheme, which uses O(n) noisy evaluations of (a,b,W), is O(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3), provided that the corrupting functions for W are sufficiently regular (see Theorem <ref> (i)). In the case of less regular corrupting functions for W (assuming only Hölder continuity) the error might increase due to the presence of informational noise (see Theorem <ref> (ii)).
The main contributions of this paper are as follows:
* Upper error bounds on the randomized Euler algorithm in two classes of corrupting functions for the Wiener process W (Theorem <ref>),
* Lower error bounds and optimality of the randomized Euler algorithm (Theorem <ref>),
* Results of numerical experiments that confirm our theoretical findings (Section 5).
The structure of the paper is as follows. Section 2 provides basic notions and definitions, along with a description of the computation model used when dealing with inexact information for drift and diffusion coefficients, as well as for the driving Wiener process. In Section 3, we analyze the upper bounds for the error of the randomized Euler algorithm. Lower bounds and some optimality results are stated in Section 4. Section 5 contains the results of numerical experiments conducted to validate our theoretical findings. Finally, the Appendix provides auxiliary results used in the paper.
§ PRELIMINARIES
We denote by ℕ={1,2,…}. Let W = {W(t)}_t≥ 0 be a standard m-dimensional Wiener process defined on a complete probability space (Ω,Σ, ℙ). By {Σ_t}_t≥ 0 we denote a filtration, satisfying the usual conditions, such that W is a Wiener process with respect to {Σ_t}_t≥ 0. We set Σ_∞=σ(⋃_t≥ 0Σ_t).
We denote by · the Frobenius norm in ℝ^m or ℝ^d× m respectively, where we treat a column vector in ℝ^m as a matrix of size m× 1. For x∈ℝ^m and α∈ℝ, by x·α or α· x we mean a componentwise scalar-by-vector multiplication and for a matrix y∈ℝ^d× m by y· x we mean a standard matrix-by-vector multiplication. For a sufficiently smooth function f:[0,T]×ℝ^m→ℝ we denote by ∂ f(t,y)/∂ y its gradient, while by ∂^2 f(t,y)/∂ y^2 its Hessian matrix of size m× m. Moreover, for a smooth function f:[0,T]×ℝ^m→ℝ^m we denote by ∂ f(t,y)/∂ y its Jacobi matrix of size m× m, computed also with respect to the space variable y. For r∈ [2,+∞) by the L^r(Ω)-norm, either for a random vector or a random matrix, we mean
Y_r := (𝔼 Y^r)^1/r for Y:Ω→ℝ^m or Y:Ω→ℝ^d× m.
We also make us of the following second order differential operator
ℒ=∂/∂ t+1/2∑_k=1^m∂^2/∂ y_k^2.
We now define classes of drift and diffusion coefficients.
Let T>0, K>0, ϱ∈ (0,1]. A function a:[0,T]×ℝ^d →ℝ^d belongs to 𝒜_K if
* the mapping a:[0,T]×ℝ^d→ℝ^d is Borel measurable,
* for all t∈[0,T]
a(t,0)≤ K,
* for all t∈[0,T], x,y ∈ℝ^d
a(t,x) - a(t,y)≤ K x - y.
Note that if a∈𝒜_K then for all (t,y)∈ [0,T]×ℝ^d we have
a(t,y)≤ K(1+y).
A mapping b:[0,T]×ℝ^d→ℝ^d× m belongs to ℬ^ϱ_K if
* b is bounded in the origin (0,0),
b(0,0)≤ K,
* for all t∈[0,T], x,y ∈ℝ^d
b(t,x) - b(t,y) ≤ K x-y,
* for all t,s∈[0,T], x ∈ℝ^d
b(t,x) - b(s,x) ≤ K (1+x)· |t-s|^ϱ.
The above conditions imply that for all (t,y)∈ [0,T]×ℝ^d
b(t,y)≤K̅ (1+ y),
where K̅=K·max{1,T^ϱ}. We also consider the following class of initial values
𝒥_K={η∈ℝ^d | η≤ K}.
The class of all admissible tuples (a,b,η) is defined as
ℱ(ϱ,K)=𝒜_K×ℬ^ϱ_K×𝒥_K.
Let
δ_1,δ_2, δ_3,δ_4 ∈ [0,1].
We refer to δ_1, δ_2, δ_3, and δ_4 as precision parameters.
We now describe what we mean by corrupted values and information about a, b, W.
Let us set
𝒦^s = {p:[0,T]×ℝ^d→ℝ^d× s | p(·,·)-Borel measurable,
p(t,y)≤ 1+y for all t∈ [0,T], y∈ℝ^d},
for s∈{1,m}. The classes 𝒦^1, 𝒦^m are nonempty and contain constant functions. Let
V_c(γ)={c̃ | ∃_p_c∈𝒦^s: c̃=c+γ· p_c},
where c∈{a,b}, (γ,s)=(δ_1,1) if c=a and (γ,s)=(δ_2,m) if c=b. By ã and b̃ we mean any functions ã∈ V_a(δ_1) and b̃∈ V_b(δ_2), respectively. We have that {a}=V_a(0)⊂ V_a(δ_1)⊂ V_a(δ'_1) for 0≤δ_1≤δ'_1≤ 1 and {b}=V_b(0)⊂ V_b(δ_2)⊂ V_b(δ'_2) for 0≤δ_2≤δ'_2≤ 1.
In order to introduce perturbed information about the Wiener process W, we introduce the following classes of corrupting functions for W
𝒦_0={p:[0,T]×ℝ^m→ℝ^m | p^j∈ C^1,2([0,T]×ℝ^m;ℝ), |p^j(0,0)|≤ 1,
max{|∂ p^j/∂ t(t,y)|,∂ p^j/∂ y(t,y),∂^2 p^j/∂ y^2(t,y)}≤ 1
for all t∈ [0,T], y∈ℝ^m, j=1,2,…,m},
and
𝒦_α,β = {p:[0,T]×ℝ^m→ℝ^m | ‖ p(t,x)-p(s, y)‖≤ | t-s|^α + ‖ x-y‖^β for all t,s∈ [0,T], x,y∈ℝ^m}.
We consider the following classes of disturbed Wiener processes
𝒲_0(δ_3)={W̃ | ∃_p∈ 𝒦_0:∀_(t,ω)∈ [0,T]×Ω W̃(t,ω) = W(t,ω)+δ_3· p(t,W(t,ω))},
and
𝒲_α,β(δ_3)={W̃ | ∃_p∈𝒦_α,β:∀_(t,ω)∈ [0,T]×Ω W̃(t,ω) = W(t,ω)+δ_3· p(t,W(t,ω))}.
We have that {W}=𝒲_0(0)⊂𝒲_0(δ_3)⊂𝒲_0(δ'_3) for 0≤δ_3≤δ'_3≤ 1, and similarly for 𝒲_α,β. As in <cit.> the classes defined above allow us to model the impact of regularity of noise on the error bound.
We assume that the algorithm is based on discrete noisy information about (a,b,W) and exact information about η. Hence, a vector of noisy information has the following form
𝒩(ã, b̃, W̃, η) = [ã (ξ_0,y_0),ã(ξ_1,y_1),…,ã(ξ_i_1-1,y_i_1-1),
b̃(t_0,z_0), b̃(t_1,z_1),…, b̃(t_i_1-1,z_i_1-1),
W̃(u_0), W̃(u_1),…,W̃(u_i_2-1),η],
where i_1,i_2∈ℕ and (ξ_0,ξ_1,…,ξ_i_1-1) is a random vector on (Ω,Σ,ℙ) which takes values in [0,T]^i_1. We assume that the σ-fields σ(ξ_0,ξ_1,…,ξ_i_1-1) and Σ_∞ are independent. Moreover, t_0,t_1,…,t_i_1-1∈ [0,T] and u_0,u_1,…,u_i_2-1∈ [0,T] are fixed time points. The evaluation points y_j, z_j for the spatial variables y,z of a(·,y) and b(·,z) can be computed in an adaptive way with respect to (a,b,η) and W. Formally, this means that there exist Borel measurable mappings ψ_0:ℝ^i_2× m×ℝ^d→ℝ^2d, ψ_j:ℝ^d× j×ℝ^d× m× j×ℝ^i_2× m×ℝ^d→ℝ^2d, j=1,2,…,i_1-1, such that the successive points y_j,z_j are given as follows
(y_0,z_0)=ψ_0(W̃(u_0),W̃(u_1),…,W̃(u_i_2-1),η),
and for j=1,2,…, i_1-1
(y_j,z_j) = ψ_j(ã(ξ_0,y_0), ã(ξ_1,y_1),…, ã(ξ_j-1,y_j-1),
b̃(t_0,z_0), b̃(t_1,z_1),…, b̃(t_j-1,z_j-1),
W̃(u_0),W̃(u_1),…, W̃(u_i_2-1),η).
The total number of noisy evaluations of (a,b,W) is l = 2 i_1 + i_2.
The algorithm 𝒜 that uses the noisy information 𝒩(ã, b̃, W̃, η) and computes approximation of X(T) is defined as
𝒜(ã,b̃, W̃,η)=φ(𝒩(ã,b̃, W̃,η)),
for some Borel measurable function φ:ℝ^i_1× d×ℝ^i_1× d× m×ℝ^i_2× m×ℝ^d→ℝ^d. For a fixed n∈ℕ by Φ_n we denote a class of all algorithms (<ref>) for which the total number of evaluations l is at most n.
Let r∈ [2,+∞). The L^r(Ω)-error of 𝒜∈Φ_n for the fixed tuple (a,b,η)∈𝒢 is given by
e^(r)(𝒜,a,b,η,𝒲,δ_1,δ_2,δ_3)
=sup_(ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲(δ_3)X(a,b,W,η)(T)-𝒜(ã,b̃, W̃,η)_r,
where 𝒲∈{𝒲_0,𝒲_α,β} and 𝒢 is a subclass of ℱ(ϱ,K). The worst case error of 𝒜 in 𝒢 is
e^(r)(𝒜,𝒢,𝒲,δ_1,δ_2,δ_3)=sup_(a,b,η)∈𝒢 e^(r)(𝒜,a,b,η,𝒲,δ_1,δ_2,δ_3).
Finally, we look for (essentially) sharp bounds for the nth minimal error, defined as
e^(r)_n(𝒢,𝒲,δ_1,δ_2,δ_3)=inf_𝒜∈Φ_ne^(r)(𝒜,𝒢,𝒲,δ_1,δ_2,δ_3).
In (<ref>) we define the minimal possible error among all algorithms of the form (<ref>) that use at most n noisy evaluations of a, b, and W. Our aim is to find possibly sharp bounds on the nth minimal error, i.e., lower and upper bounds which match up to constants. We are also interested in defining an algorithm for which the infimum in e^(r)_n(𝒢,𝒲,δ_1,δ_2,δ_3) is asymptotically attained. We call such an algorithm optimal.
Unless otherwise stated, all constants appearing in this paper (including those in the 'O', 'Ω', and 'Θ' notation) will only depend on the parameters of the class ℱ(ϱ,K), α,β and r. Furthermore, the same symbol may be used in order to denote different constants.
§ ERROR OF THE EULER SCHEME UNDER INEXACT INFORMATION
We investigate the error of the randomized Euler scheme in the case of inexact information about a, b, and the driving Wiener process W.
Fix n∈ℕ, t_i=iT/n for i=0,1,…,n. Let {ξ_i}_i=0^n-1 be independent random variables on (Ω,Σ,ℙ), such
that the σ-fields σ(ξ_0, ξ_1,…, ξ_n-1) and Σ_∞ are independent, with ξ_i being uniformly distributed on [t_i,t_i+1]. Let us fix (a,b,W)∈ℱ(ϱ,K) and take any (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲(δ_3), where 𝒲∈{𝒲_0,𝒲_α,β}. The randomized Euler scheme under inexact information is defined by taking
X̅^RE_n(0)=η,
and
X̅^RE_n(t_i+1) = X̅^RE_n(t_i) + ã(ξ_i, X̅^RE_n(t_i)) ·T/n + b̃(t_i, X̅^RE_n(t_i)) ·ΔW̃_i,
for i=0,1, …, n-1, where ΔW̃_i = W̃(t_i+1) - W̃(t_i). The randomized Euler algorithm 𝒜̅^RE_n is defined as
𝒜̅^RE_n(ã,b̃,W̃,η):=X̅^RE_n(T).
The informational cost of the randomized Euler algorithm is O(n) noisy evaluations of a,b,W. By X^RE_n we denote the randomized Euler algorithm X̅^RE_n under the case when information is exact, i.e., when δ_1=δ_2=δ_3=0.
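To make the computational model concrete, the following minimal Python sketch simulates one trajectory of X̅^RE_n on the grid t_i=iT/n. It is an illustration only: the function names and the NumPy-based setup are ours, the corrupting functions p_a, p_b, p_W are assumed to be supplied by the caller, and the exact terminal value W(T) is returned solely so that the output can later be compared with exact solutions driven by the same path.

```python
import numpy as np

def simulate_randomized_euler(a, b, p_a, p_b, p_W, eta, T, n, deltas, rng):
    # One trajectory of the randomized Euler scheme driven by the noisy evaluations
    # a~ = a + delta_1*p_a, b~ = b + delta_2*p_b and by the disturbed Wiener process
    # W~(t) = W(t) + delta_3*p_W(t, W(t)), which is sampled only at the grid points.
    d1, d2, d3 = deltas
    h = T / n
    x = np.asarray(eta, dtype=float).copy()
    m = np.shape(b(0.0, x))[1]
    # exact Brownian path on the grid (the algorithm never accesses it directly)
    dW = rng.normal(0.0, np.sqrt(h), size=(n, m))
    W = np.vstack([np.zeros(m), np.cumsum(dW, axis=0)])
    for i in range(n):
        t_i = i * h
        xi = rng.uniform(t_i, t_i + h)                  # randomization point in [t_i, t_{i+1}]
        W_i = W[i] + d3 * p_W(t_i, W[i])                # noisy value of W~(t_i)
        W_ip1 = W[i + 1] + d3 * p_W(t_i + h, W[i + 1])  # noisy value of W~(t_{i+1})
        a_noisy = a(xi, x) + d1 * p_a(xi, x)            # noisy drift evaluation
        b_noisy = b(t_i, x) + d2 * p_b(t_i, x)          # noisy diffusion evaluation
        x = x + a_noisy * h + b_noisy @ (W_ip1 - W_i)
    return x, W[n]                                      # approximation of X(T) and exact W(T)
```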
Let 𝒢^n=σ(ξ_0,ξ_1,…,ξ_n-1) and Σ̃_t^n=σ(Σ_t∪𝒢^n), t≥ 0. Since the σ-fields Σ_∞ and 𝒢^n are independent, the process W is still the m-dimensional Wiener process on (Ω,Σ,ℙ) with respect to {Σ̃_t^n}_t≥ 0.
Let r∈ [2,+∞), ϱ∈ (0,1].
(i) There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
max_0≤ i≤ nX^RE_n(t_i)-X̅^RE_n(t_i)_r≤ C(δ_1+δ_2+δ_3).
(ii) Let α,β∈ (0,1]. There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K), α,β, and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
max_0≤ i≤ nX^RE_n(t_i)-X̅^RE_n(t_i)_r
≤ C(δ_1+δ_2+δ_3· n^1-γ)· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
where γ=min{α,β/2}.
Firstly, we prove (i). For W̃∈𝒲_0(δ_3) we have that
W̃(t)=W(t)+δ_3· Z(t),
with Z(t)=p_W(t,W(t)) and p_W∈𝒦_0. Then, by the Itô formula we get that
Z(t)=Z(0)+M(t)+V(t), t∈ [0,T],
where M(t)=[M^1(t),M^2(t),…, M^m(t)], V(t)=[V^1(t),V^2(t),…, V^m(t)] and
V^j(t) = ∫_0^tℒp^j_W(z,W(z)) z,
M^j(t) = ∑_i=1^m∫_0^t∂ p^j_W/∂ y_i(z,W(z)) W^i(z),
for j=1,2,…,m. We stress that {V(t)}_t∈ [0,T] is a continuous process of bounded variation that is adapted to {Σ̃_t^n}_t≥ 0.
Moreover, since (M(t),Σ̃_t^n)_t∈ [0,T] is a martingale, Z is still a continuous semimartingale with respect to the extended filtration {Σ̃_t^n}_t≥ 0. In the sequel we will consider stochastic integrals, with respect to the semimartingales W and Z, of processes that are adapted to the filtration {Σ̃_t^n}_t≥ 0.
From (<ref>), (<ref>) for i=0,1,…,n we can write that
X̅^RE_n(t_i)=η+T/n∑_j=0^i-1ã(ξ_j,X̅^RE_n(t_j))+ ∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ W_j
+δ_3∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ Z_j,
and
X^RE_n(t_i)=η+T/n∑_j=0^i-1 a(ξ_j,X^RE_n(t_j))+ ∑_j=0^i-1 b(t_j, X^RE_n(t_j))·Δ W_j.
Therefore
e̅_i := X^RE_n(t_i)-X̅^RE_n(t_i)=∑_j=0^i-1∫_t_j^t_j+1(a(ξ_j, X^RE_n(t_j))-ã(ξ_j,X̅^RE_n(t_j))) s
+ ∑_j=0^i-1∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b̃(t_j,X̅^RE_n(t_j))) W(s)
+ (-δ_3)∑_j=0^i-1∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) Z(s)
= ∑_j=0^i-1(A_j+B_j+C_1,j+C_2,j+C_3,j),
where
A_j=∫_t_j^t_j+1(a(ξ_j,X^RE_n(t_j))-a(ξ_j,X̅^RE_n(t_j))) s
B_j=∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b(t_j,X̅^RE_n(t_j))) W(s)
C_1,j=(-δ_1)∫_t_j^t_j+1p_a(ξ_j,X̅^RE_n(t_j)) s
C_2,j=(-δ_2)∫_t_j^t_j+1p_b(t_j,X̅^RE_n(t_j)) W(s),
C_3,j=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) Z(s)
=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) M(s)+(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j)) V(s)
=(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s)
+(-δ_3)∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s.
Then for all i=0,1,…,n
e̅_i≤∑_j=0^i-1A_j+∑_j=0^i-1B_j+∑_j=0^i-1C_1,j+∑_j=0^i-1C_2,j+∑_j=0^i-1C_3,j,
where
∑_j=0^i-1C_3,j≤δ_3·(C^1_3,i+C^2_3,i),
with
C^1_3,i= ∑_j=0^i-1∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s),
C^2_3,i=∑_j=0^i-1∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s.
Hence, for i=0,1,…,n
e̅_i≤∑_j=0^i-1A_j+∑_j=0^i-1B_j+∑_j=0^n-1C_1,j
+max_1≤ i ≤ n∑_j=0^i-1C_2,j+δ_3·max_1≤ i ≤ n C^1_3,i+δ_3 · C^2_3,n,
and for all k=0,1,…,n
𝔼(max_0≤ i≤ ke̅_i^r)≤ c_r𝔼(∑_j=0^k-1A_j)^r+c_r𝔼(max_1≤ i≤ k∑_j=0^i-1B_j^r)+c_r𝔼(∑_j=0^n-1C_1,j)^r
+c_r𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)+c_rδ_3^r·𝔼(max_1≤ i ≤ n (C^1_3,i)^r)+c_rδ_3^r ·𝔼(C^2_3,n)^r.
By the Jensen inequality we have for k=0,1,…,n
(∑_j=0^k-1e̅_j)^r≤ k^r-1∑_j=0^k-1e_j^r≤ n^r-1∑_j=0^k-1e_j^r,
which implies that
(1/n∑_j=0^k-1e̅_j)^r≤1/n∑_j=0^k-1e_j^r.
Moreover
A_j≤KT/ne̅_j,
and hence
(∑_j=0^k-1A_j)^r≤ K^rT^r(1/n∑_j=0^k-1e̅_j)^r≤K^rT^r/n∑_j=0^k-1e̅_j^r.
It holds that (∑_j=0^k B_j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is a discrete-time martingale. To see this, let us denote M_k=∑_j=0^k B_j. By the basic properties of the Itô integral we get for k=0,1,…,n-1 that σ ( M_k ) ⊂Σ̃^n_t_k+1,
𝔼(M_k+1 - M_k | Σ̃^n_t_k+1) = 𝔼(B_k+1 | Σ̃^n_t_k+1) = 0, k=0,1,…,n-2,
and for j=0,1,…,n-1
𝔼B_j^r≤ C(T/n)^r/2𝔼e̅_j^r<+∞.
Hence, by the Burkholder and Jensen inequalities we get for k=0,1,…,n
𝔼(max_1≤ i≤ k∑_j=0^i-1B_j^r)=𝔼(max_0≤ i≤ k-1∑_j=0^iB_j^r)≤ C_r^r𝔼(∑_j=0^k-1B_j^2)^r/2
≤ C_r^r k^r/2-1∑_j=0^k-1𝔼B_j^r≤C/n∑_j=0^k-1𝔼e̅_j^r.
From (<ref>), (<ref>), (<ref>), and the fact that e̅_0=0 we get for k=0,1,…,n that
𝔼(max_0≤ i≤ ke̅_i^r)≤C/n∑_j=0^k-1𝔼e̅_j^r+c_r R_n≤C/n∑_j=0^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R_n
=C/n∑_j=1^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R_n,
where
R_n=𝔼(∑_j=0^n-1C_1,j)^r+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)
+ δ_3^r·𝔼(max_1≤ i ≤ n (C^1_3,i)^r)+δ_3^r ·𝔼(C^2_3,n)^r.
By the discrete version of the Gronwall's lemma (see, for example, Lemma 2.1 in <cit.>) we get
𝔼(max_0≤ i≤ ne̅_i^r)≤ KR_n.
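For the reader's convenience we recall the discrete Gronwall-type inequality used here, in one standard formulation (the constants in the cited lemma may be stated slightly differently): if a_0,a_1,…,a_n≥ 0 satisfy a_k≤ A+(B/n)∑_j=0^k-1a_j for all k=0,1,…,n with A,B≥ 0, then max_0≤ k≤ n a_k≤ A(1+B/n)^n≤ Ae^B. It is applied above with a_k=𝔼(max_0≤ i≤ ke̅_i^r), A proportional to R_n, and B independent of n.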
By the Jensen inequality and Lemma <ref> we get
𝔼(∑_j=0^n-1C_1,j)^r≤δ_1^r T^r n^-1∑_j=0^n-1𝔼p_a(ξ_j,X̅^RE_n(t_j))^r
≤ Cδ_1^r (1+max_0≤ j≤ n𝔼X̅^RE_n(t_j)^r)≤ K_1δ_1^r.
The process (∑_j=0^k C_2,j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is a discrete-time martingale - this can be justified in analogous way as for (∑_j=0^k B_j,Σ̃^n_t_k+1)_k=0,1,…,n-1. Hence, again by the Burkholder and Jensen inequalities, we obtain
𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r)≤ C_r^r n^r/2-1∑_j=0^n-1𝔼C_2,j^r
≤ K_1 (1+max_0≤ j≤ n𝔼X̅^RE_n(t_j)^r)δ_2^r
≤ K_2δ_2^r.
Let us denote
D_j=∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s),
then (∑_j=0^k D_j,Σ̃^n_t_k+1)_k=0,1,…,n-1 is also a discrete-time martingale. Therefore, by the Burkholder and Jensen inequalities, we obtain
𝔼(max_1≤ i ≤ n (C^1_3,i)^r)=𝔼(max_1≤ i≤ n∑_j=0^i-1D_j^r)≤ C_r^r n^r/2-1∑_j=0^n-1𝔼D_j^r,
where, by (<ref>) and submultiplicativity of the Frobenius norm, we get
𝔼D_j^r=𝔼∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s)) W(s)^r
≤ C(T/n)^r/2-1𝔼∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))∂ p_W/∂ y(s,W(s))^r s
≤ C(T/n)^r/2-1𝔼∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))^r·∂ p_W/∂ y(s,W(s))^r s≤ K_3 n^-r/2,
for j=0,1,…,n-1. This implies that
𝔼(max_1≤ i ≤ n (C^1_3,i)^r)≤ K_4.
Finally, from (<ref>) and Lemma <ref>
𝔼(C^2_3,n)^r=𝔼(∑_j=0^n-1∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s)^r
≤ n^r-1∑_j=0^n-1𝔼∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))ℒp_W(s,W(s)) s^r
≤ n^r-1∑_j=0^n-1𝔼(∫_t_j^t_j+1b̃(t_j,X̅^RE_n(t_j))·ℒp_W(s,W(s)) s)^r
≤C/n∑_j=0^n-1𝔼b̃(t_j,X̅^RE_n(t_j))^r≤ K_5.
Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) we obtain
𝔼(max_0≤ i≤ ne̅_i^r)≤ K_6(δ_1^r+δ_2^r+δ_3^r),
which proves (i).
We now show (ii). Let W̃∈𝒲_α,β(δ_3). Note that in this case Z might be neither a semimartingale nor even a process of bounded variation. Hence, we have that
e̅_i = X^RE_n(t_i)-X̅^RE_n(t_i)=∑_j=0^i-1(A_j+B_j+C_1,j+C_2,j+C_3,j),
where
A_j=∫_t_j^t_j+1(a(ξ_j,X^RE_n(t_j))-a(ξ_j,X̅^RE_n(t_j))) s
B_j=∫_t_j^t_j+1(b(t_j,X^RE_n(t_j))-b(t_j,X̅^RE_n(t_j))) W(s)
C_1,j=(-δ_1)∫_t_j^t_j+1p_a(ξ_j,X̅^RE_n(t_j)) s
C_2,j=(-δ_2)∫_t_j^t_j+1p_b(t_j,X̅^RE_n(t_j)) W(s),
and
C_3,j=(-δ_3)·b̃(t_j,X̅^RE_n(t_j))·Δ Z_j.
Using similar arguments as for the proof of (<ref>) we have
𝔼(max_0≤ i≤ ke̅_i^r)≤C/n∑_j=1^k-1𝔼(max_0≤ i≤ je̅_i^r)+c_r R̅_n,
where this time we get from (<ref>), (<ref>), and Lemma <ref> that
R̅_n=𝔼(∑_j=0^n-1C_1,j)^r+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_2,j^r) + 𝔼(max_1≤ i ≤ n∑_j=0^i-1C_3,j^r)
≤ C(δ_1^r+δ_2^r)(1+δ_3^r n^r(1-γ))· e^C(1+δ_3^r n^r(1-γ))+𝔼(max_1≤ i ≤ n∑_j=0^i-1C_3,j^r).
Again, by the discrete version of the Gronwall's lemma, we get
𝔼(max_0≤ i≤ ne̅_i^r)≤ KR̅_n.
Moreover,
max_1≤ i ≤ n∑_j=0^i-1C_3,j_r≤∑_j=0^n-1C_3,j_r≤∑_j=0^n-1C_3,j_r,
and, since X̅^RE_n(t_j) and Δ W_j are independent, we have by Lemma <ref>
C_3,j_r≤ Cδ_31+X̅^RE_n(t_j)_r·(T/n)^α+Δ W_j^β_r
≤ Cδ_3· n^-γ·(1+max_0≤ i ≤ nX̅^RE_n(t_i)_r)
≤ C δ_3 n^-γ· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
and
max_1≤ i ≤ n∑_j=0^i-1C_3,j_r≤ C δ_3 n^1-γ· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r).
From (<ref>), (<ref>), and (<ref>) we get the thesis of (ii).
Let r∈ [2,+∞), ϱ∈ (0,1].
(i) There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤ C(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3).
(ii) Let α,β∈ (0,1]. There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤ Cn^-min{ϱ,1/2}
+C(δ_1+δ_2+δ_3· n^1-γ)· (1+δ_3 n^1-γ)· e^C(1+(δ_3 n^1-γ)^r),
where γ=min{α,β/2}.
By Proposition 1 in <cit.> (for the case δ_1=δ_2=δ_3=0) and from Proposition <ref> (i) we get
X(a,b,W,η)(T)-𝒜̅^RE_n(ã, b̃,W̃,η)_r≤X(a,b,W,η)(T)-X_n^RE(a,b,W,η)(T)_r
+X_n^RE(a,b,W,η)(T)-X̅_n^RE(ã,b̃,W̃,η)(T)_r≤ C_1n^-min{ϱ,1/2}+C_2(δ_1+δ_2+δ_3),
which implies (i). Similarly, by using the above error decomposition together with Proposition <ref> (ii) we obtain the thesis of (ii).
§ LOWER BOUNDS AND OPTIMALITY OF THE RANDOMIZED EULER ALGORITHM
In this section, we investigate lower error bounds for an arbitrary method (<ref>) from the class Φ_n. We focus only on the class 𝒲_0 of disturbed Wiener processes W̃. Essentially sharp lower bounds in the class 𝒲_α,β are left as an open problem. For some special cases we also show optimality of the randomized Euler algorithm 𝒜̅^RE_n.
The following result follows from Lemma 3 in <cit.> and Theorem <ref>.
Let r∈ [2,+∞), K∈ (0,+∞), ϱ∈ (0,1]. Then there exist C_1,C_2∈ (0,+∞), depending only on the parameter of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1] it holds
C_1(n^-min{ϱ,1/2}+δ_1+δ_2)≤e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,δ_3)≤C_2(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3).
In particular, the nth minimal error satisfies
e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,0)=Θ(n^-min{ϱ,1/2}+δ_1+δ_2),
and
e^(r)_n(ℱ(ϱ,K),𝒲_0,δ_1,δ_2,max{δ_1,δ_2})=Θ(n^-min{ϱ,1/2}+δ_1+δ_2),
as n→+∞, max{δ_1,δ_2}→ 0^+. In both cases (<ref>), (<ref>) an optimal algorithm is the randomized Euler algorithm 𝒜̅^RE_n.
In general we have a gap between the upper and lower bounds, and sharp bounds appear only in special cases (i.e., when δ_3=0 or δ_3=max{δ_1,δ_2}). However, for the randomized Euler algorithm in particular, we have the following bounds for its worst-case error (the proof follows from Proposition 1 in <cit.>).
Let r∈ [2,+∞), K∈ (0,+∞), ϱ∈ (0,1]. Then for the randomized Euler algorithm 𝒜̅^RE_n it holds
e^(r)(𝒜̅^RE_n,ℱ(ϱ,K),𝒲_0,δ_1,δ_2,δ_3)=Θ(n^-min{ϱ,1/2}+δ_1+δ_2+δ_3),
as n→+∞, max{δ_1,δ_2,δ_3}→ 0+.
§ NUMERICAL EXPERIMENTS
Let us consider the following linear SDE that describes the well-known multidimensional Black-Scholes model
dX(t)=[ μ_1 X_1(t); μ_2 X_2(t); ⋮; μ_d X_d(t) ] dt+[ σ^1,1X_1(t) σ^1,2X_1(t) ⋯ σ^1,mX_1(t); σ^2,1X_2(t) σ^2,2X_2(t) ⋯ σ^2,mX_2(t); ⋮ ⋮ ⋱ ⋮; σ^d,1X_d(t) σ^d,2X_d(t) ⋯ σ^d,mX_d(t) ] dW(t), X(0)=x_0, t∈ [0,T],
where σ^i,j>0 for i∈{1,…, d}, j∈{1,…, m}, μ_i∈ℝ, x_0∈ℝ_+^d.
Functions a and b take the following forms
a(t,x)= (μ_1 x_1,…,μ_d x_d)^T,
b(t,x)=[ σ^1,1x_1 σ^1,2x_1 ⋯ σ^1,mx_1; σ^2,1x_2 σ^2,2x_2 ⋯ σ^2,mx_2; ⋮ ⋮ ⋱ ⋮; σ^d,1x_d σ^d,2x_d ⋯ σ^d,mx_d ].
The exact solution of problem (<ref>) has the following form
X_i(t)=X_i(0)·exp((μ_i-1/2∑_j=1^m(σ^i,j)^2)t+∑_j=1^mσ^i,j W_j(t))
for i=1,…,d.
To perform numerical experiments, we choose two examples.
Example 1
a(t,x)= [ 0.5 x_1 ,0.7 x_2; ] ^ T,
b(t,x)=[ 0.5 x_1 , 0.7 x_1 , 0.2 x_1; -0.5 x_2 , -0.7 x_2 , -0.2 x_2; ],
x_0 = (1,2)^T, T=1.
Example 2
a(t,x)=[ 0.5 x_1 ,0.7 x_2, 0.4 x_3; ] ^ T,
b(t,x)=[ 0.5 x_1 , 0.7 x_1 , 0.2 x_1; 0.1 x_2 , 0. x_2 , 0.013x_2; 0. x_3 , 0.75 x_3, 0.013 x_3; ],
x_0 = (1,0.1,0.4)^T, T=1.
We take an estimator of the error of X(T)-X̅_n^RE(T)_2
ε_K(X̅^RE_n(T))=(1/K∑_j=1^K X_(j)(T)-X̅^RE_(j),n(T)^2)^1/2.
We also conduct numerical experiments for an equation in which the exact solution is unknown (Example 3). In this case, to estimate the error X(T)-X̅_n^RE(T)_2, the exact solution X(T) is approximated by X̅^RE_n computed under exact information for n=1310720=10·2^17, and then
ε_K(X̅^RE_n(T))=(1/K∑_j=1^K X̅^RE_(j),1310720(T)-X̅^RE_(j),n(T)^2)^1/2.
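In code, both variants of the estimator reduce to the same computation on the simulated terminal values. The sketch below is ours (function and argument names included); it assumes that the reference values (the exact solutions for Examples 1 and 2, or the fine-grid randomized Euler approximation under exact information for Example 3) were computed on the same Brownian paths as the approximations being tested.

```python
import numpy as np

def eps_K(reference_T, approx_T):
    # Monte Carlo estimate of the L^2(Omega)-error at t = T:
    # sqrt( (1/K) * sum_j || X_(j)(T) - X^RE_(j),n(T) ||^2 ).
    diff = np.asarray(reference_T, dtype=float) - np.asarray(approx_T, dtype=float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
```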
Example 3
a(t,x)= 0.5[ t sin (10 x_1); cos (7x_2) ] ^ T,
b(t,x)=[ tx_1 tx_2 sin(x_2); t cos (x_1) x_2 -x_1 ],
x_0 = (1,2)^T, T=1.
For Examples 1 - 3 we used K=20000.
§.§ Linear disturbing function
To obtain results in the numerical simulations related to Theorem <ref> (i), we propose the following disturbing functions
p_a(t,x)= U_1 · a(t,x)
and
p_b(t,x)= U_2 · b(t,x),
where U_1 and U_2 have uniform distributions over the interval [-1,1].
As a corrupting function for the Wiener process W in Examples 1 - 3 we take
p_w(t,x) =U_3· x,
where U_3 is a random variable with a uniform distribution over the interval [-1,1]. We also assume that U_1, U_2, U_3 are independent of W and 𝒢^n. We use these uniform distributions to better approximate the worst-case error setting.
§.§ Nonlinear disturbing function
To conduct illustrative numerical simulations, as per Theorem <ref> (ii), we propose the following disruptive functions
p_a(t,x)= U_1 · a(t,x)
and
p_b(t,x)= U_2 · b(t,x),
where U_1 and U_2 are random variables with uniform distributions over the interval [-1,1].
As a corrupting function for the Wiener process, we consider
p_w, β(t,x) =U_3 ·sgn(sin(100‖ x‖))·|sin(100‖ x‖)|^β,
where U_3 is a random variable with a uniform distribution over the interval [-1,1]. We also assume that U_1, U_2, U_3 are independent of W and 𝒢^n.
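A possible implementation of these corrupting functions, together with the full experiment pipeline for Example 1, is sketched below. It reuses simulate_randomized_euler and eps_K from the earlier sketches, draws fresh U_1, U_2, U_3 for every trajectory, and applies the scalar value of p_w,β to every coordinate of W, since the vectorization is left implicit in the text; all of these choices, as well as the function names, are ours and serve only as an illustration of the setup.

```python
import numpy as np

def make_disturbances(a, b, beta, rng):
    # Fresh U1, U2, U3 ~ U[-1, 1], drawn independently of W (and of the xi_i).
    U1, U2, U3 = rng.uniform(-1.0, 1.0, size=3)
    p_a = lambda t, x: U1 * a(t, x)               # disturbance of the drift
    p_b = lambda t, x: U2 * b(t, x)               # disturbance of the diffusion
    def p_W_hoelder(t, w):                        # Hoelder-type disturbance of W (Section 5.2)
        s = np.sin(100.0 * np.linalg.norm(w))
        return U3 * np.sign(s) * np.abs(s) ** beta * np.ones_like(w)
    p_W_linear = lambda t, w: U3 * w              # smooth disturbance of W (Section 5.1)
    return p_a, p_b, p_W_hoelder, p_W_linear

def run_example1(n, deltas, beta, K=20000, seed=0):
    # Estimated L^2(Omega)-error for Example 1 under the Hoelder-type disturbance of W.
    rng = np.random.default_rng(seed)
    mu = np.array([0.5, 0.7])
    sigma = np.array([[0.5, 0.7, 0.2], [-0.5, -0.7, -0.2]])
    x0, T = np.array([1.0, 2.0]), 1.0
    a = lambda t, x: mu * x
    b = lambda t, x: sigma * x[:, None]           # b[i, j] = sigma^{i,j} * x_i
    reference, approx = [], []
    for _ in range(K):
        p_a, p_b, p_W, _ = make_disturbances(a, b, beta, rng)
        xT, WT = simulate_randomized_euler(a, b, p_a, p_b, p_W, x0, T, n, deltas, rng)
        exact = x0 * np.exp((mu - 0.5 * np.sum(sigma ** 2, axis=1)) * T + sigma @ WT)
        reference.append(exact)
        approx.append(xT)
    return eps_K(reference, approx)
```

The sketch is meant only to mirror the experimental setup described above; it does not attempt to reproduce the exact figures.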
In Figures <ref> and <ref>, we present results for (<ref>) as the disturbing function for the Wiener process in Example 3.
In this case, the error exhibits exponential growth, necessitating the use of a doubly logarithmic y-axis. Notably, such error behavior was not observed for disturbing functions for the Wiener process from the class 𝒦_0.
§ CONCLUSIONS
We have investigated the error and optimality of the randomized Euler scheme in the case when we have access only to noisy standard information about the coefficients a, b, and driving Wiener process W. We considered two classes of disturbed Wiener processes for which we derived upper error bounds for the randomized Euler algorithm. These bounds indicate that the error significantly depends on the regularity of the disturbing functions.
The numerical experiments demonstrate that beyond a certain value of n, which depends on the size of the disturbance, the error of the randomized Euler algorithm stabilizes, and increasing the number of discretization points n does not lead to a reduction in the error.
One particularly interesting observation is depicted in Figure <ref>. When using function (<ref>) as a perturbation for the Wiener process with sufficiently high δ, we observe an exponential increase in error as n increases.
In future research, we plan to investigate the error of (multilevel) Monte Carlo method under inexact information for the weak approximation of solutions of SDEs.
§ APPENDIX
The proof of the following fact is straightforward and, therefore, omitted.
If p∈𝒦_0 then for all t∈ [0,T],x,y∈ℝ^m it holds
p(t,x)≤ m^1/2(1+x), p(t,x)-p(t,y)≤ m^1/2x-y,
∂ p/∂ y(t,y)≤ m^1/2,
ℒp(t,y)≤( 2m+m^2/2)^1/2.
In order to estimate absolute moments of X̅^RE_n(t_i), in the case when the disturbed Wiener process W̃ belongs to the class 𝒲_0(δ_3), we use the following time-continuous randomized Euler process
X̃̅̃^RE_n(0) = η, X̃̅̃^RE_n(t) = X̃̅̃^RE_n(t_i) + ã(ξ_i, X̃̅̃^RE_n(t_i)) · (t-t_i) + b̃(t_i, X̃̅̃^RE_n(t_i)) · (W̃(t)-W̃(t_i)),
for t∈ [t_i,t_i+1], i=0,1, …, n-1, where W̃(t)= W(t)+δ_3· Z(t), Z(t)=p_W(t, W(t)), p_W∈𝒦_0. It is easy to see that X̃̅̃^RE_n(t_i)=X̅^RE_n(t_i) for i=0,1,…,n. Moreover, the process (X̃̅̃^RE_n(t))_t∈ [0,T] is adapted to (Σ̃^n_t)_t∈ [0,T], which can be shown by induction.
Let r∈ [2,+∞). There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_0(δ_3) it holds
sup_0≤ t ≤ T𝔼X̃̅̃^RE_n(t)^r≤ C(1+δ_1^r+δ_2^r+δ_3^r)e^CT(1+δ_1^r+δ_2^r+δ_3^r).
We denote by
V̅_i=(ξ_i, X̃̅̃^RE_n(t_i)), U̅_i=(t_i, X̃̅̃^RE_n(t_i)).
Firstly, we show by induction that
max_0≤ i≤ n𝔼X̃̅̃^RE_n(t_i)^r<+∞.
Let us assume that there exists l∈{0,1,…,n-1} such that max_0≤ i≤ l𝔼X̃̅̃^RE_n(t_i)^r<+∞. (This obviously holds for l=0.) Due to the fact that σ(U̅_l)⊂Σ̃^n_t_l and, by (<ref>),
V(t_l+1)-V(t_l)≤∫_t_l^t_l+1ℒp_W(s,W(s)) s≤ c(m)· (t_l+1-t_l),
we get
𝔼X̃̅̃^RE_n(t_l+1)^r≤ C 𝔼X̃̅̃^RE_n(t_l)^r+C(T/n)^r·𝔼ã(U̅_l)^r
+ C𝔼b̃(U̅_l)^r·𝔼W(t_l+1)-W(t_l)^r
+ C δ_3𝔼∫_t_l^tb̃(U̅_l)∂ p_W/∂ y(s,W(s)) W(s)^r
+ δ_3𝔼(b̃(U̅_l)^r·V(t_l+1)-V(t_l)^r)≤ K(1+𝔼X̃̅̃^RE_n(t_l)^r) <∞.
Hence, max_0≤ i≤ l+1𝔼X̃̅̃^RE_n(t_i)^r=max{max_0≤ i≤ l𝔼X̃̅̃^RE_n(t_i)^r,𝔼X̃̅̃^RE_n(t_l+1)^r}<+∞ and the inductive step is completed. Hence, we have shown (<ref>). Moreover, by (<ref>) we get
sup_0≤ t≤ T𝔼X̃̅̃^RE_n(t)^r≤ C(1+max_0≤ i≤ n-1𝔼X̃̅̃^RE_n(t_i)^r )<+∞
The constant in (<ref>) depends on n. In the second part of the proof we will
show that we can obtain the bound (<ref>) with C that does not depend on n.
Let for t∈ [0,T]
ϕ_n(t)=∑_i=0^n-1ã(V̅_i)·1_(t_i,t_i+1](t),
ψ_n(t)=∑_i=0^n-1b̃(U̅_i)·1_(t_i,t_i+1](t).
Note that {ψ_n(t)}_t∈ [0,T] is {Σ̃^n_t}_t≥ 0-progressively measurable simple process. Hence, we have for all t∈ [0,T] that
X̃̅̃^RE_n(t)=η+Ã̅̃^RE_n(t)+B̃̅̃^RE_n(t)+C̃̅̃^RE_n(t),
where
Ã̅̃^RE_n(t)=∫_0^tϕ_n(s) s,
B̃̅̃^RE_n(t)=∫_0^tψ_n(s) W(s),
and
C̃̅̃^RE_n(t)=δ_3·∫_0^tψ_n(s) Z(s)
=δ_3·∫_0^t ψ_n(s)∂ p_W/∂ y(s,W(s)) W(s)+δ_3·∫_0^tψ_n(s)ℒp_W(s,W(s)) s.
and all above stochastic integrals are well-defined. Hence, we have for t∈ [0,T]
X̃̅̃^RE_n(t)=η+∫_0^t[ϕ_n(s)+δ_3·ψ_n(s)·ℒp_W(s,W(s))] s
+∫_0^tψ_n(s)·[I+δ_3·∂ p_W/∂ y(s,W(s))] W(s),
where I is an identity matrix of size m× m. Hence, by (<ref>), (<ref>)
𝔼X̃̅̃^RE_n(t)^r≤ C_1η^r+C_2𝔼∫_0^tϕ_n(s)^r s+C_3(1+δ_3^r)𝔼∫_0^tψ_n(s)^r s
≤ K_1·(1+η^r)· (1+δ_1^r+δ_2^r+δ_3^r)
+K_2·(1+δ_1^r+δ_2^r+δ_3^r)·∫_0^t∑_i=0^n-1𝔼X̃̅̃^RE_n(t_i)^r·1_(t_i,t_i+1](s) s
and therefore for all t∈ [0,T]
sup_0≤ u≤ t𝔼X̃̅̃^RE_n(u)^r≤ K_1·(1+η^r)· (1+δ_1^r+δ_2^r+δ_3^r)
+K_2· (1+δ_1^r+δ_2^r+δ_3^r)∫_0^tsup_0≤ u≤ s𝔼X̃̅̃^RE_n(u)^r s
where K_1 and K_2 depends only on the parameters of the class ℱ(ϱ,K) and r. Since the function [0,T]∋ t→sup_0≤ u≤ t𝔼X̃̅̃^RE_n(u)^r is bounded (by (<ref>)) and Borel measurable (as a nondecreasing function), by using the Gronwall's lemma we get (<ref>).
In the case of the class 𝒲_α,β(δ_3) we have the following estimate of the absolute moments of X̅_n^RE(t_i). The proof technique is different from the one used in the proof of Lemma <ref>, since for p_W∈𝒦_α,β the process Z(t)=p_W(t,W(t)) might be neither a semimartingale nor a process of bounded variation.
Let r∈ [2,+∞). There exists C∈ (0,+∞), depending only on the parameters of the class ℱ(ϱ,K) and r, such that for all n∈ℕ, δ_1,δ_2,δ_3∈ [0,1], (a,b,η)∈ℱ(ϱ,K), (ã,b̃, W̃)∈ V_a(δ_1)× V_b(δ_2)×𝒲_α,β(δ_3) it holds
𝔼[max_0≤ i ≤ nX̅_n^RE(t_i)^r]≤ C(1+δ_3^r n^r(1-γ))· e^C(1+δ_3^r n^r(1-γ)),
where γ=min{α,β/2}.
For i=0,1,…,n we can write
X̅^RE_n(t_i)=η+T/n∑_j=0^i-1ã(ξ_j,X̅^RE_n(t_j))+ ∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ W_j
+δ_3∑_j=0^i-1b̃(t_j,X̅^RE_n(t_j))·Δ Z_j,
and we have that
X̅_n^RE(t_i)≤ K+∑_j=0^i-1Ã_j+∑_j=0^i-1B̃_j+∑_j=0^i-1C̃_j,
where
Ã_j=T/nã(ξ_j,X̅^RE_n(t_j)),
B̃_j = b̃(t_j,X̅^RE_n(t_j))·Δ W_j,
C̃_j = δ_3·b̃(t_j,X̅^RE_n(t_j))·Δ Z_j.
Hence, for all k=0,1,…, n
𝔼(max_0≤ i≤ kX̅_n^RE(t_i)^r)≤ c_rK^r+c_r𝔼(∑_j=0^k-1Ã_j)^r
+c_r𝔼[max_1≤ i ≤ k∑_j=0^i-1B̃_j^r]+c_r𝔼(∑_j=0^k-1C̃_j)^r.
By the Jensen inequality we have that
𝔼(∑_j=0^k-1Ã_j)^r≤ C_1+C_1/n∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
From the Burkholder and Jensen inequalities we obtain that
𝔼[max_1≤ i ≤ k∑_j=0^i-1B̃_j^r]≤ C_2+C_2/n∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
Finally, since X̅^RE_n(t_j) and Δ W_j are independent, and
Δ W_j^β_r≤ c (T/n)^β/2,
we get that
𝔼C̃_j^r≤K̅_1δ_3^r𝔼[(1+X̅^RE_n(t_j))^r·((T/n)^α+Δ W_j^β)^r]
≤K̅_2 δ_3^r (1+𝔼X̅^RE_n(t_j)^r)·(T/n)^α+Δ W_j^β_r^r
≤K̅_3δ_3^r n^-rγ+K̅_4δ^rn^-rγ𝔼X̅^RE_n(t_j)^r,
and hence
𝔼(∑_j=0^k-1C̃_j)^r≤ n^r-1∑_j=0^k-1𝔼C̃_j^r≤ C_3δ_3^r n^r(1-γ)
+C_4δ_3^r n^r(1-γ)-1∑_j=0^k-1𝔼X̅^RE_n(t_j)^r.
Combining (<ref>), (<ref>), (<ref>), (<ref>) we arrive at
𝔼(max_0≤ i≤ kX̅_n^RE(t_i)^r)≤ C_5(1+δ_3^rn^r(1-γ))
+C_6(δ_3^rn^r(1-γ)-1+n^-1)∑_j=1^k-1𝔼(max_0≤ i≤ jX̅_n^RE(t_i)^r).
By the discrete version of the Gronwall's lemma we get the thesis.
Acknowledgements
This research was realized as part of a joint research project between AGH UST and NVIDIA.
22
JenNeuen
A. Jentzen, A. Neuenkirch, A random Euler scheme for Carathéodory differential equations, J. Comp. and Appl. Math. 224 (2009), 346–359.
GSM2022
M. Giles, O. Sheridan-Methven, Analysis of nested Multilevel Monte Carlo using approximate normal random variables, SIAM J. Uncert. Quant., 10 (2022),
Hein1
S. Heinrich, Lower complexity bounds for parametric stochastic Itô integration, J. Math. Anal. Appl., 476 (2019), 177–195.
KaPl90
B. Kacewicz, L. Plaskota, On the minimal cost of approximating linear problems based on information with deterministic noise, Numer. Funct. Anal. and Optimiz. 11 (1990), 511-528.
KaPr16
B. Kacewicz, P. Przybyłowicz, On the optimal robust solution of IVPs with noisy information, Numer. Algor. 71 (2016), 505–518.
AKAPhD
A. Kałuża, Optimal algorithms for solving stochastic initial-value problems with jumps, PhD thesis, AGH University of Science and Technology, Kraków 2020.
AKPMPP
A. Kałuża, P. M. Morkisz, P. Przybyłowicz, Optimal approximation of stochastic integrals in analytic noise model, Appl. Math. and Comput., 356 (2019), 74–91.
KRWU_0
R. Kruse, Y. Wu, Error analysis of randomized Runge-Kutta methods for differential equations
with time-irregular coefficients, Comput. Methods Appl. Math., 17 (2017), 479–498.
KRWU
R. Kruse, Y. Wu, A randomized Milstein method for stochastic differential equations with non-differentiable drift coefficients, Discrete Contin. Dyn. Syst. Ser B, 24 (2019), 3475–3502.
Mao11
X. Mao, Stochastic differential equations and applications 2nd edition, Woodhead Publishing, Cambridge, 2011.
milvic
M. Milanese, A. Vicino, Optimal estimation theory for dynamic systems with set membership uncertainty: an overview, Automatica 27 (1991), 997–1009.
MoPl16
P. M. Morkisz, L. Plaskota, Approximation of piecewise Hölder functions from inexact information, J. Complex. 32 (2016), 122–136.
PMPP14
P. M. Morkisz, P. Przybyłowicz,
Strong approximation of solutions of stochastic differential equations with time-irregular coefficients via randomized Euler algorithm, Appl. Numer. Math. 78 (2014), 80–94.
PMPP17
P. M. Morkisz, P. Przybyłowicz,
Optimal pointwise approximation of SDE's from inexact information, Journal of Computational and Applied Mathematics 324 (2017), 85–100.
PMPP19
P. M. Morkisz, P. Przybyłowicz, Randomized derivative-free Milstein algorithm for efficient approximation of solutions of SDEs under noisy information, J. Comput. Appl. Math. 383 (2021), 1–22.
NOV
E. Novak, Deterministic and Stochastic Error Bounds in Numerical Analysis, Lecture Notes in Mathematics, vol. 1349, New York, Springer–Verlag, 1988.
Pla96
L. Plaskota, Noisy Information and Computational Complexity,
Cambridge Univ. Press, Cambridge, 1996.
Pla14
L. Plaskota, Noisy information: optimality, complexity, tractability,
in Monte Carlo and quasi-Monte Carlo Methods 2012,
J. Dick, F.Y. Kuo, G.W. Peters, I.H. Sloan (Eds.), Springer 2013, 173–209.
Protter
P. Protter, Stochastic Integration and Differential Equations, second ed., Springer-Verlag Berlin Heidelberg, 2005.
TWW88
J.F. Traub, G.W. Wasilkowski, H. Woźniakowski,
Information-Based Complexity, Academic Press, New York, 1988.
Wer96
A.G. Werschulz, The complexity of definite elliptic problems with noisy data. J. Complex. 12 (1996), 440-473.
Wer97
A.G. Werschulz, The complexity of indefinite elliptic problems with noisy data. J. Complex. 13 (1997), 457-479.
|
http://arxiv.org/abs/2307.03948v1 | 20230708101129 | Reading Between the Lanes: Text VideoQA on the Road | [
"George Tom",
"Minesh Mathew",
"Sergi Garcia",
"Dimosthenis Karatzas",
"C. V. Jawahar"
] | cs.CV | [
"cs.CV"
] |
G. Tom et al.
Center for Visual Information Technology (CVIT), IIIT Hyderabad, India
{george.tom,minesh.mathew}@research.iiit.ac.in, [email protected]
Computer Vision Center (CVC), UAB, Spain
{sergi.garcia,dimos}@cvc.uab.cat
AllRead Machine Learning Technologies
Reading Between the Lanes: Text VideoQA on the Road
George Tom1 0009-0002-7343-1680 Minesh Mathew1 0000-0002-0809-2590 Sergi Garcia-Bordils2,3 0000-0002-4222-8367 Dimosthenis Karatzas2 0000-0001-8762-4454 C.V. Jawahar10000-0001-6767-7057
August 12, 2023
=============================================================================================================================================================================================
Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness.
Scene text recognition in motion is a challenging problem, while textual cues typically appear for a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time.
To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqahttp://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa
§ INTRODUCTION
In this work, we propose a new dataset for Visual Question Answering (VQA) on driving videos, with a focus on questions that require reading text seen on the roads and understanding road signs. Text and road signs provide important information to the driver or a driver assistance system and help to make informed decisions about their route, including how to reach their destination safely and efficiently. Text on roads can also provide directions, such as turn-by-turn directions or the distance to a destination. Road signs can indicate the location of exits, rest stops, and potential hazards, such as road construction or detours. Reading text and understanding road signs is also important for following traffic laws and regulations. Speed limit signs, yield signs, and stop signs provide important information that drivers must follow to ensure their own safety and the safety of others on the road.
VQA is often dubbed as the Turing test for image/video understanding. The early datasets for VQA on images and videos <cit.> largely ignored the need for reading and comprehending text on images and videos, and questions were mostly focus on the visual aspects of the given image or video. For example, questions focused on the type, attributes and names of objects, things or people. However, the text is ubiquitous in outdoor scenes, and this is evident from the fact that nearly 50% of the images in the MS-COCO dataset have text in them <cit.>.
Realizing the importance of reading text in understanding visual scenes, two datasets—Scene text VQA <cit.> and Text VQA <cit.> were introduced that focus exclusively on VQA involving scene text in natural images.
Two recent works called NewsVideoQA<cit.>, and M4-ViteVQA<cit.> extend text-based VQA works to videos by proposing VQA tasks that exclusively focus on question-answers that require systems to read the text in the videos.
Similar to these works that focus on text VQA on videos, our work proposes a new dataset where all the questions need to be answered by watching driving videos and reading the text in them. However, in contrast to NewsVideoQA, which contains news videos where question-answer pairs are based on video text (born-digital embedded text) appearing on news tickers and headlines, the text in the videos of our dataset is scene text. The text in road or driving videos is subject to blur, poor contrast, challenging lighting conditions, and distortions. Text while driving goes by fast and tends to be heavily occluded. Often, multiple frames need to be combined to reconstruct the full text, or a good frame with readable text needs to be retrieved. These difficulties have motivated works that focus exclusively on the detection, recognition, and tracking of text in road videos <cit.>. On the other hand, M4-ViteVQA contains varied types of videos, such as sports videos, outdoor videos, and movie clips. A subset of these videos are driving videos. In contrast, our dataset is exclusively for VQA on driving videos and contains at least three times more questions than the driving subset of M4-ViteVQA. Additionally, questions in our dataset require both reading road text and understanding road signs, while M4-ViteVQA's focus is purely on text-based VQA.
Specifically our contributions are the following:
* We introduce the first large scale dataset for road text and road sign VQA containing 10K+ questions and 3K+ videos.
* We provide a thorough analysis of the dataset and present detailed statistics of videos, questions and answers. We also establish heuristic baselines and upper bounds that help to estimate the difficulty of the problem.
* We evaluate an existing popular VQA model and two SoTA VideoQA models on our dataset and demonstrate that these models fail to perform well on the new dataset since they are not designed to read and reason about text and road signs.
§ RELATED WORK
§.§ VideoQA
In video question answering (VideoQA), the goal is to answer a question in the context of a video. Earlier approaches to VideoQA used LSTMs to encode the questions and videos<cit.>.
Several datasets have been created in recent years to assist research in the field of video question answering (VideoQA). Large datasets such as MSRVTT-QA<cit.> contain synthetically generated questions and answers, where the questions require only an understanding of the visual scenes.
provide explicit text in the form of subtitles. Multiple-Choice datasets<cit.> consist of a pre-defined set of options for answers. When compared to open-ended datasets, they can be considered limiting in the context of real-world applications. Synthetically generated datasets<cit.> contain questions that are generated through processing video descriptions, narration and template questions. MSRVTT-QA<cit.> exploits the video descriptions for QA creation. HowToVQA69M<cit.> uses cross-modal supervision and language models to generate question-answer pairs from narrated videos, whereas ActivityNetQA<cit.> uses template questions to generate the QA pairs. Xu et al. introduced the SUTD-TrafficQA<cit.> dataset and the Eclipse model for testing systems' ability to reason over complex traffic scenarios. The SUTD-TrafficQA<cit.> dataset contains multiple-choice questions that are based on different traffic events. RoadTextVQA is an open-ended dataset that deals with questions related to the text information found in road videos or the signs posted along roads. Recent studies<cit.> on pretraining transformers on other vision and language tasks have shown excellent results for the VideoQA task. Lei et al. <cit.>, in their study, uncovered the bias present in many video question-answering datasets, which only require information from a single frame to answer, and introduced new tasks aimed at training models to answer questions that necessitate the use of temporal information.
§.§ VideoQA involving video text
NewsVideoQA<cit.> and M4-ViteVQA<cit.> are two recently introduced datasets that include videos with embedded born-digital text and scene text, respectively. Both datasets require an understanding of the text in videos to answer the questions.
Embedded text, sometimes called video text in news videos, is
often displayed with good contrast and in an easy-to-read style.
Scene text in the RoadTextVQA dataset can be challenging to read due to factors such as occlusion, blur, and perspective distortion. M4-ViteVQA contains videos from different domains, a few of them being shopping, driving, sports, movies, and vlogs. The size of RoadTextVQA is more than three times the size of the driving subset of M4-ViteVQA. Additionally, a subset of questions in RoadTextVQA also requires domain knowledge to answer questions related to road signs. A few recent works<cit.> on vision and language transformers have been shown to work well on text-based VQA tasks. Kil et al.<cit.> introduced PreSTU, a pretraining method that improves text recognition and connects the recognized text with the rest of the image. GIT (GenerativeImage2Text)<cit.> is a transformer-based model for vision and language tasks with a simple architecture that does not depend on external OCR or object detectors.
§.§ Scene Text VQA
Our work, which focuses on VQA requiring text comprehension within videos, shares similarities with other studies dealing with text in natural images, commonly known as Scene Text VQA. The ST-VQA<cit.> and TextVQA<cit.> datasets were the first to incorporate questions requiring understanding textual information from natural images. LoRRa<cit.> and M4C<cit.> utilized pointer networks<cit.> that generate answers from a fixed vocabulary and OCR tokens. In addition, M4C used a multimodal transformer<cit.> to integrate different modalities. TAP<cit.> employed a similar architecture to M4C and incorporated a pretraining task based on scene text, improving the model's alignment among the three modalities. Another study, LaTr<cit.>, focused on pretraining on text and layout information from document images and found that incorporating layout information from scanned documents improves the model's understanding of scene text.
§ ROADTEXTVQA DATASET
This section looks at the data collection and annotation procedure, data analysis, and statistics.
§.§ Data Collection
The videos used in the dataset are taken from the RoadText-3K<cit.> dataset and YouTube. The RoadText-3K dataset includes 3,000 ten-second road videos that are well-suited for annotation because they have a considerable quantity of text.
The RoadText-3K dataset includes videos recorded in the USA, Europe, and India and features text in various languages such as English, Spanish, Catalan, Telugu, and Hindi. Each video contains an average of 31 tracks. However, the European subset is excluded from the annotation process for RoadTextVQA, as it is dominated by text in Spanish/Catalan, and RoadTextVQA is designed specifically for English road text.
In addition to the videos from RoadText-3K, dashcam videos were also sourced from the YouTube channel J Utah[ <https://www.youtube.com/@jutah>]. 252 videos from the USA and the UK were selected, and clips with a substantial amount of text were further selected by running a text detector over the video frames. We chose EasyOCR<cit.>, a free and open-source detector popular for scene text detection. The RoadText-3K videos have a resolution of 1280x720 with a frame rate of 30 frames per second. To keep the data consistent, the YouTube clips were downsampled to the same resolution and frame rate.
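As an illustration of this filtering step, a clip's amount of readable text can be scored by running EasyOCR on a subsample of frames, as in the sketch below; the sampling rate, the confidence threshold, and the selection rule are our own choices and not the ones used to build the dataset.

```python
import cv2
import easyocr

def text_detection_count(video_path, reader, sample_every=30, conf_thresh=0.4):
    # Count confident EasyOCR detections on every `sample_every`-th frame;
    # clips with a low count can be discarded as containing too little text.
    cap = cv2.VideoCapture(video_path)
    total, idx = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % sample_every == 0:
            detections = reader.readtext(frame)   # list of (bbox, text, confidence)
            total += sum(1 for _, _, conf in detections if conf >= conf_thresh)
        idx += 1
    cap.release()
    return total

# Hypothetical usage: keep only sufficiently text-rich clips.
# reader = easyocr.Reader(['en'])
# selected = [v for v in candidate_clips if text_detection_count(v, reader) >= 20]
```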
Individuals who are proficient in the English language were hired to create the question-answer pairs. To ensure the quality of the applicants, an initial training session was conducted, followed by a filtering mechanism in the form of a comprehensive quiz. The quiz was designed to ensure that the question-answer pairs were created by individuals who had a solid grasp of the English language and a good understanding of the task, thereby enabling us to maintain a high standard of quality in the annotations.
The annotation process involved two stages, and a specifically designed web-based annotation tool was used. In the initial stage, annotators add question, answer, and timestamp triads for the videos shown to them.
All the questions have to be based either on some text present in the video or on a road sign. In cases where a question could have multiple answers in a non-ambiguous way, the annotators were given the option to enter several answers. The timestamp is an additional data point that is collected; it is the most appropriate point in the video at which the question is answerable. The annotators were instructed to limit the number of questions to not more than ten per video and to avoid asking any questions related to vehicle license plate numbers. If there were no possible questions that could be asked about the video, then the annotators were given the option to reject it.
In the verification stage, the video and the questions are shown, and the annotators have to add the answers and the timestamps. We made sure that the verification was done by an annotator different from the one who annotated the video in the first stage.
If a question is incorrect or does not follow the annotation guidelines, it is flagged and rejected. If, for a question, there are common answers between the annotation stage and the verification stage, then that question is considered valid. All the common answers are considered valid answers to the question.
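The validity rule can be summarized by a small merge step like the following sketch, where the light string normalization and the field layout are our assumptions rather than the exact implementation used for the dataset.

```python
def merge_annotations(stage1_answers, stage2_answers):
    # A question is kept only if the annotation and verification stages agree on
    # at least one answer; the agreed answers become the ground-truth targets.
    norm = lambda s: " ".join(s.lower().split())
    common = {norm(a) for a in stage1_answers} & {norm(a) for a in stage2_answers}
    return (len(common) > 0, sorted(common))
```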
In the verification stage, additional data regarding the question-answer pairs are also collected. The questions are tagged along two distinct dimensions: first, based on the type of question (text-based or traffic sign-based).
The second classification captures whether the answer for a question, i.e., the text that makes up the answer, is present in the video or not.
§.§ Data Statistics and Analysis
The RoadTextVQA dataset contains 3,222 videos and 10,500 question-answer pairs.
Among the 3,222 videos, 1,532 videos are taken from the RoadText-3K dataset and the rest are from YouTube.
The data is randomly split into 2,557 videos and 8,393 questions in the train set, 329 videos and 1,052 questions in the test, and 336 videos and 1,055 questions in the validation set.
The videos for the test and validation sets were randomly chosen from the RoadText-3K split, as it has ground truth annotations for text tracking. Methods that use OCR data can take advantage of the accurate annotations provided by RoadText-3K.
We present statistics related to the questions in RoadTextVQA through <ref>, and <ref>. <ref> shows the most frequent questions and their frequencies. “What is written on the road with white block letters?" is the most recurrent, followed by questions regarding the speed limits on the roads.
<ref> provides a comprehensive overview of the question distribution in RoadTextVQA, with the majority of the questions being centred around details of shops located along the road. <ref> depicts the word count in the questions and answers, respectively. The average number of words in the questions in RoadTextVQA is 10.8, while the average number in the answers is 1.45. The average number of words in questions is much higher when compared to other text-based VideoQA datasets, as seen in <ref>. The percentage of unique questions stands at 86.6%, while the percentage of unique answers is 40.7%. <ref> shows the top 30 answers and the number of occurrences. <ref>, in the form of a word cloud, illustrates the most frequently occurring answers and OCR tokens. The most popular answers are “right", “left", “yes", and “no". The most prevalent OCR tokens in the videos are “stop", “only", and “one way".
The distribution of the videos in the dataset based on the geographic location where it was captured is shown in <ref>.
More than two-thirds of the videos in the dataset are captured from roads in the USA.
The majority of questions are grounded on text seen in the video (61.8%), and the rest are based on road signs. Road signs can also contain text, such as speed limit signs or interchange exit signs. 68% of questions have answers that can be found within the text present in the video, while the remaining 32% of questions require an answer that is not a text present in the video.
§ BASELINES
This section presents details of the baselines we evaluate on the proposed RoadTextVQA dataset.
§.§ Heuristic Baselines and Upper Bounds
We evaluate several heuristic baselines and upper bounds on the dataset. These heuristics and upper bounds are similar to those used in other VQA benchmarks, such as TextVQA<cit.> and DocVQA<cit.>. The following heuristic baselines are evaluated:
(i) Random Answer: performance when answers to questions are randomly selected from the train split.
(ii) Random OCR token: performance when a random OCR token from the video is picked as the answer.
(iii) Majority Answer: performance when the most common answer in the train split is considered as the answer for all the questions.
The following upper bounds are evaluated:
(i) Vocab UB: the upper bound on predicting the correct answer if it is present in the vocabulary of all the answers from the train split.
(ii) OCR UB: the upper bound on performance if the answer corresponds to an OCR token present in the video.
(iii) Vocab UB + OCR UB: this metric reflects the proportion of questions for which answers can be found in the vocabulary or the OCR transcriptions of the video.
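A sketch of how these heuristics and upper bounds can be computed with the exact-match accuracy is given below; the per-sample field names and the lower-casing are our assumptions about the data layout, not part of the dataset definition.

```python
import random
from collections import Counter

def heuristic_scores(train_answers, eval_samples, seed=0):
    # eval_samples: list of dicts with keys 'answers' (accepted answer strings)
    # and 'ocr_tokens' (OCR strings detected in the corresponding video).
    rng = random.Random(seed)
    train_answers = [a.lower() for a in train_answers]
    vocab = set(train_answers)
    majority = Counter(train_answers).most_common(1)[0][0]
    hits = Counter()
    for s in eval_samples:
        gold = {a.lower() for a in s['answers']}
        ocr = [t.lower() for t in s['ocr_tokens']]
        hits['random_answer'] += rng.choice(train_answers) in gold
        hits['random_ocr'] += (rng.choice(ocr) in gold) if ocr else 0
        hits['majority_answer'] += majority in gold
        hits['vocab_ub'] += bool(vocab & gold)
        hits['ocr_ub'] += bool(set(ocr) & gold)
        hits['vocab_plus_ocr_ub'] += bool((vocab | set(ocr)) & gold)
    n = len(eval_samples)
    keys = ['random_answer', 'random_ocr', 'majority_answer',
            'vocab_ub', 'ocr_ub', 'vocab_plus_ocr_ub']
    return {k: 100.0 * hits[k] / n for k in keys}
```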
§.§ M4C
The M4C<cit.> model uses a transformer-based architecture to integrate representations of the image, question and OCR tokens. The question is embedded using a pretrained BERT<cit.> model. Faster R-CNN<cit.> visual features are extracted for the objects detected and the OCR tokens in the image.
The representation of an OCR token is formed from the FastText<cit.> vector, PHOC<cit.> vector, bounding box location feature, and Faster R-CNN feature of the token. A multi-head self-attention mechanism in transformers is employed, enabling all entities to interact with each other and model inter- and intra-modal relationships uniformly using the same set of transformer parameters. During answer prediction, the M4C model employs an iterative, auto-regressive decoder that predicts one word at a time. The decoder can use either a fixed vocabulary or the OCR tokens detected in the image to generate the answer.
§.§ SINGULARITY
The architecture of SINGULARITY<cit.> is made up of three major components: a vision encoder using ViT<cit.>, a language encoder utilizing BERT<cit.>, and a multi-modal encoder using a transformer encoder<cit.>. The multi-modal encoder uses cross-attention to collect information from visual representations using text as the key. Each video or image is paired with its corresponding caption during the pretraining phase, and the model is trained to align the vision and text representations using three losses (i) Vision-Text Contrastive: a contrastive loss which aligns the representations of vision and language encoders, (ii) Masked Language Modeling<cit.>: masked tokens are predicted (iii) Vision-Text Matching: using the multi-modal encoder, predict the matching score of a vision-text pair.
We use the SINGULARITY-temporal model, which is pretrained on 17M vision caption pairs<cit.>.
The SINGULARITY-temporal model contains a two-layer temporal encoder that feeds its outputs into the multi-modal encoder. SINGULARITY-temporal makes use of two new datasets named SSv2-Template Retrieval, and SSv2-Label Retrieval created from the action recognition dataset Something-Something v2 (SSv2)<cit.>. The pretraining is a video retrieval task using text queries. An additional multi-modal decoder is added for open-ended QA tasks and is initialised from the pretrained multi-modal encoder, which takes the multi-modal encoder's output as input and generates answer text with [CLS] as the start token.
§.§ GenerativeImage2Text
GIT (GenerativeImage2Text)<cit.> is a transformer-based architecture aimed at unifying all vision-language tasks using a simple design pretrained on 0.8 billion image-text pairs. GIT consists of an image encoder and a text decoder. The image encoder is a Swin-like<cit.> transformer based on a contrastive pretrained model, which eliminates the need for external object detectors or OCR. The text decoder is a transformer with self-attention and feed-forward layers that generates the text output. The visual features and the text embeddings are concatenated and used as inputs to the decoder. With large-scale pretraining, GIT gradually learns to read scene text and hence achieves SoTA performance on scene-text-related VQA tasks such as ST-VQA. For video question answering, GIT selects multiple frames from the video and embeds each frame separately with a learnable temporal embedding (initialized as zeros); the frame features are concatenated and used in the same way as the image representation. The question and the correct answer are combined and treated as a special caption, and the language modeling loss is computed only on the answer and the [EOS] token.
§ EXPERIMENTS AND RESULTS
This section covers the evaluation metrics, the experimental setup, and the experiment results.
§.§ Experimental Setup
Evaluation metrics. We use two evaluation metrics to evaluate the model's performance: Average Normalized Levenshtein Similarity (ANLS)<cit.> and Accuracy (Acc. (%)). The Accuracy metric calculates the percentage of questions where the predicted answer exactly matches any of the target answers.
ANLS, on the other hand, does not award a zero score for all predictions that do not match the ground truth string exactly.
The score was originally proposed to act softly on cases where the predicted answer differs only slightly from the ground truth.
ANLS measures a similarity (based on the Levenshtein distance) between the prediction and the ground truth and normalizes it to a score in the range [0,1]. If the score is less than 0.5, the final ANLS score for the prediction is set to zero.
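For concreteness, both metrics can be sketched as below. The 0.5 threshold follows the ANLS definition above; taking the best similarity over multiple ground-truth answers and the lower-casing/whitespace normalization are assumptions about the evaluation protocol rather than the official scripts.

# Minimal sketch of exact-match Accuracy and ANLS.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def anls(prediction, ground_truths, tau=0.5):
    p = prediction.lower().strip()
    best = 0.0
    for gt in ground_truths:
        g = gt.lower().strip()
        denom = max(len(p), len(g)) or 1
        best = max(best, 1.0 - levenshtein(p, g) / denom)
    return best if best >= tau else 0.0

def exact_match(prediction, ground_truths):
    p = prediction.lower().strip()
    return float(any(p == gt.lower().strip() for gt in ground_truths))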
OCR transcriptions. The ground truth annotations were used for the videos in the RoadText-3K set, while for the remaining videos, the OCR transcriptions were obtained using the Google Cloud Video Intelligence API. Both the RoadText-3K ground truth annotations and the Google API provide text transcriptions at the line level.
We use the line-level text transcriptions as the OCR tokens for computing the OCR upper bounds and OCR-based heuristics given in <ref>. When a text track gets cut off at the frame boundary or is partially occluded by other objects in a video, the Google Cloud Video Intelligence API treats it as a new track, whereas the RoadText-3K annotations ignore partially occluded tracks. This is why, in <ref>, the ratio of tracks to videos is somewhat inflated for the YouTube clips compared to the RoadText-3K clips.
Experimental setup for M4C.
The M4C<cit.> model is trained using the official implementation, and the training parameters and implementation details remain consistent with those used in the original paper. We used a fixed vocabulary of size 3926 generated from the train set.
The training data consists of image question-answer pairs where the image selected for training is the one on which the questions are based, specifically the timestamp frame. After training, the model is evaluated using two approaches. Firstly, it is tested on the timestamp QA pairs of the test set, and secondly, it is evaluated on the video level by sampling ten frames from the respective video for each QA pair and obtaining the model prediction for every frame individually. The final answer is determined by taking the most common answer from the ten individual frame predictions.
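The video-level voting scheme described above can be sketched as follows; predict_answer stands in for a single-frame M4C forward pass, and uniform frame sampling is an assumption about how the ten frames are chosen.

# Illustrative sketch of video-level evaluation by majority vote over frames.
from collections import Counter

def sample_frames(frames, k=10):
    # Roughly uniform sampling of k frames (an assumption; the text only
    # states that ten frames are sampled from the video).
    if len(frames) <= k:
        return list(frames)
    step = len(frames) / k
    return [frames[int(i * step)] for i in range(k)]

def video_level_answer(predict_answer, question, frames):
    per_frame = [predict_answer(frame, question) for frame in sample_frames(frames)]
    answer, _ = Counter(per_frame).most_common(1)[0]
    return answer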
Experimental setup for SINGULARITY.
We fine-tuned the pretrained SINGULARITY-temporal 17M model on four NVIDIA GeForce RTX 2080 Ti GPUs. The fine-tuning process was run for 20 epochs with a batch size of 16, starting with an initial learning rate of 1e-5 that increases linearly during the first half epoch, followed by cosine decay<cit.> to 1e-6. The other training parameters are the same as in the official implementation. The video frames were resized to 224x224, and a single frame with random resize, crop and flip augmentations was used during training, whereas 12 frames were used during testing. Additionally, we fine-tuned the SINGULARITY model that had been pretrained on the MSRVTT-QA<cit.> dataset.
Experimental setup for GIT.
The training process for GIT was carried out using a single Tesla T4 GPU for 20 epochs with a batch size of 2.
We use an Adam<cit.> optimizer with an initial learning rate starting at 1e-5 and gradually decreasing to 1e-6 through the use of cosine decay.
The GIT model was trained using the official VideoQA configuration used for MSRVTT-QA training. We fine-tuned the pretrained GIT-large model on our dataset, using six frames that were evenly spaced as inputs during both training and testing. In addition, we further fine-tuned the GIT model that was pretrained on the MSRVTT-QA<cit.> dataset.
§.§ Results
Heuristic baselines and upper bound results are presented in <ref>. The heuristic baselines yield very low accuracy, which indicates the absence of any bias due to the repetition of answers.
The Random OCR heuristic gives close to 2% accuracy, meaning that the videos contain enough text that selecting a random OCR token will not yield high accuracy. The OCR upper bound is 36.6%, which is low compared to the percentage of questions whose answers are present in the video. The low OCR UB can be attributed to how text detection and ground truth annotation are carried out. The answer to a question may be split across multiple lines within the video, so it appears as separate tokens in the OCR output, since the OCR annotations were done at the line level. From the Vocab + OCR UB result, we can see that more than three-quarters of the answers are present either in the vocabulary or in the OCR tokens of the video.
The results for M4C are shown in <ref>. The frame-level evaluation, on the timestamp frame, gives an accuracy of 38.20%, and the video-level evaluation, on ten sampled frames, gives an accuracy of 28.92%. These results show that answering the questions remains challenging, even when we reduce the complexity of the problem by providing the most suitable frame and ground truth OCR tokens.
We show the results after fine-tuning on SINGULARITY and GIT in <ref>. The accuracy of the questions requiring answers to be extracted from the video (AP) is comparatively lower, while the accuracy of the questions where the answer is not present in the video is comparatively higher.
Compared to AP, ANP is less complex to answer because it involves a fixed set of answers. In contrast, AP requires dynamic extraction from OCR tokens, resulting in the ANP set having better accuracy than AP.
Additionally, fine-tuning the model that has been pretrained on the MSRVTT-QA dataset improves accuracy across all categories (TB, RSB, AP, and ANP).
Fine-tuning GIT results in better performance than SINGULARITY, and GIT shows a similar trend when starting from the MSRVTT-QA-pretrained model. The “answer is present in the video (AP)" subset improves by 3.9% in accuracy compared with SINGULARITY, whereas the “answer is not present (ANP)" subset gains 6.3%. M4C tested on a single frame outperforms the VideoQA models, which can be attributed to the fact that we explicitly provide the OCR tokens and the exact frame on which the question is based. M4C tested on ten frames gives results comparable to GIT.
We show some of the qualitative results in <ref>. As the complexity of the scene and the obscurity of the scene text increase, it becomes more and more difficult for the model to predict the correct answer. VideoQA baselines achieve better results on questions that do not require the extraction of answers from the video.
§ CONCLUSIONS
We introduce RoadTextVQA, a new Video Question Answering dataset where the questions are grounded on the text and road signs present in the road videos. Our findings from the baseline models' performance indicate a need for improvement in existing VideoQA approaches for text-aware multimodal question answering.
Future work can involve augmenting the dataset by incorporating videos obtained from diverse global locales. Currently, there are recurrent questions and answers due to repeating elements in the videos.
Including videos from various locations broadens the diversity of the dataset by providing a more comprehensive range of questions and answers and minimizes biases within the dataset. To the best of our knowledge, there are currently no Visual Question Answering models that explicitly incorporate road signs. Models could integrate road signs as an additional input or pretrain on road sign-description pairs to enhance their ability to answer questions that require domain knowledge.
We believe this work will encourage researchers to develop better models that incorporate scene text and road signs and are resilient to the challenges posed by driving videos, and that it will drive further research in scene-text VideoQA and the development of advanced in-vehicle support systems.
§ ACKNOWLEDGEMENTS
This work has been supported by IHub-Data at IIIT-Hyderabad, and grants PDC2021-121512-I00 and PID2020-116298GB-I00 funded by MCIN/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR.
|
http://arxiv.org/abs/2307.03987v1 | 20230708142557 | A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation | ["Neeraj Varshney", "Wenlin Yao", "Hongming Zhang", "Jianshu Chen", "Dong Yu"] | cs.CL | ["cs.CL"] |
A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu
August 12, 2023
============================================================================================================================================================================
Recently developed large language models have achieved remarkable success in generating fluent and coherent text. However, these models often tend to `hallucinate' which critically hampers their reliability.
In this work, we address this crucial problem and propose an approach that actively detects and mitigates hallucinations during the generation process.
Specifically,
we first identify the candidates of potential hallucination leveraging the model's logit output values, check their correctness through a validation procedure, mitigate the detected hallucinations, and then continue
with the generation process.
Through extensive experiments with the `article generation task', we first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
In summary, our work contributes to improving the reliability and trustworthiness of large language models, a crucial step en route to enabling their widespread adoption in real-world applications.
§ INTRODUCTION
Recently developed large language models such as GPT-3 <cit.>, InstructGPT <cit.>, PaLM <cit.>, LLaMA <cit.>, and several others <cit.>
have achieved remarkable performance on a wide range of language understanding tasks.
Furthermore, they have been shown to possess an impressive ability to generate fluent and coherent text.
Despite all these abilities, their tendency to `hallucinate' critically hampers their reliability and limits their widespread adoption in real-world applications.
Hallucination in the context of language refers to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input <cit.>.
These hallucinations can lead to serious consequences such as spreading of misinformation and violation of privacy.
Thus, in this work, we focus on the crucial problem of `addressing' hallucinations of the large language models.
We propose to actively `detect' and `mitigate' hallucinations during the generation process.
This is crucial as we show that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input.
Thus, actively detecting and mitigating hallucinations is also important to prevent the propagation of hallucinations in the subsequently generated sentences. We divide our approach into two stages, Detection and Mitigation.
In the hallucination detection stage, we first identify the candidates of potential hallucination, i.e., the key `concepts' of the generated sentence.
Next, leveraging the logit output values of the model, we calculate model's `uncertainty' on the identified concepts.
We demonstrate that this uncertainty provides a signal for hallucination.
However, we note that this is an additional signal and not a necessary requirement for our approach.
Then, we check the correctness of the
`uncertain' concepts through a validation procedure where we:
(a) create a query that tests the correctness of the information pertaining to the concept,
(b) retrieve knowledge relevant to the validation question, (c) answer the validation question leveraging the retrieved knowledge, and verify the corresponding information in the generated sentence to detect hallucinations.
This is followed by the hallucination mitigation stage in which we
`repair' the potentially hallucinated sentence using the retrieved knowledge as evidence.
Figure <ref> illustrates the key steps of our approach.
Furthermore, we conduct a systematic and wide study exploring multiple techniques to achieve the objective of each of the steps.
We design an experimental setup where
we prompt the model to write about topics from diverse domains such as sports, politics, music, literature, etc.
Then, we annotate the correctness of the first five generated sentences for each topic.
We first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average (Figure <ref>).
We conduct a thorough analysis that further
results in several interesting and important findings.
Lastly, we release our code and correctness annotations, which will also facilitate systematic future research on addressing hallucinations.
§ APPROACH
§.§ Overview
We propose to actively detect hallucinations and mitigate them during the generation process.
This is crucial as we show that
a generated sentence is hallucinated more often
when the model has already hallucinated in its previously generated sentences for the input (Section <ref>).
Similarly, a generated sentence is relatively less often hallucinated when the model has not hallucinated in its previously generated sentences.
Thus, actively detecting hallucinations and mitigating them is also important to prevent the propagation of further hallucinations in subsequently generated sentences.
To this end, we iteratively generate sentences through the model and actively detect and mitigate hallucinations.
Figure <ref> illustrates the key steps of our approach.
In section <ref>, we detail the steps of our hallucination detection approach, i.e., identifying the important `concepts' of the generated sentence, i.e., the candidates of potential hallucination (<ref>), calculating model's uncertainty on the concepts using the logit output values (<ref>), and checking the correctness by creating validation query (<ref>), finding relevant knowledge (<ref>), and verifying information leveraging the retrieved knowledge (<ref>).
We describe various techniques to achieve the objective of each of these steps and also elaborate on several important points such as
using a `self-inquiry' method to answer validation questions without an external knowledge source, and the trade-off between executing the validation procedure in parallel for all the concepts and executing it sequentially in order of their `uncertainty'.
For each step, we also indicate the most preferred technique with (*) and provide our justification.
In section <ref>, we detail our hallucination mitigation approach.
Specifically, we `repair' the hallucinated sentence by removing or substituting the hallucinated information leveraging the retrieved knowledge as evidence and also utilize the retrieved knowledge as context (prepended to the input) to generate the next sentence.
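The overall control flow can be summarized by the sketch below. It is only a high-level outline of the loop described in this section, not the released implementation; the five helper callables are placeholders for the concrete techniques detailed in the following subsections and are passed in as parameters.

# High-level outline of active hallucination detection and mitigation.
def active_generation(generate_sentence,   # context -> (sentence, token_probs)
                      identify_concepts,   # sentence -> [concept, ...]
                      concept_score,       # (sentence, concept, token_probs) -> float
                      validate,            # (sentence, concept) -> (is_supported, knowledge)
                      repair,              # (sentence, concept, knowledge) -> sentence
                      prompt, num_sentences=5, threshold=0.5):
    context, output = prompt, []
    for _ in range(num_sentences):
        sentence, token_probs = generate_sentence(context)
        concepts = identify_concepts(sentence)
        # validate uncertain concepts in ascending order of their probability score
        uncertain = sorted((c for c in concepts
                            if concept_score(sentence, c, token_probs) < threshold),
                           key=lambda c: concept_score(sentence, c, token_probs))
        evidence = []
        for concept in uncertain:
            supported, knowledge = validate(sentence, concept)
            evidence.append(knowledge)
            if not supported:                  # hallucination detected
                sentence = repair(sentence, concept, knowledge)
                break                          # greedy exit to the mitigation stage
        output.append(sentence)
        # retrieved knowledge is prepended as context for generating the next sentence
        context = " ".join(evidence + [context, sentence])
    return " ".join(output)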
§.§ Hallucination Detection
§.§.§ Identify Key Concepts
In the first step, we identify the important concepts from the generated sentence.
We identify these concepts because validating the correctness of the entire sentence at once is infeasible, since a sentence may contain a number of different facets that cannot all be validated at once.
On the other hand, individually validating the correctness corresponding to the concepts provides opportunities for accurately detecting hallucinations.
Thus, the objective of this step is to identify the candidates of potential hallucination.
We note that a concept or keyphrase is essentially a span of text consisting of one or more words.
We study the following techniques to identify the concepts:
Entity Extraction:
Entities are usually an important part of a sentence, thus, we use an off-the-shelf entity extraction model
to identify the concepts.
A limitation of this method is that a concept need not necessarily be an entity and can be a non-entity span also.
We address this limitation with a keyword extraction model.
Keyword Extraction:
To also identify the non-entity concepts, we explore an off-the-shelf keyword extraction model[https://huggingface.co./ml6team/keyphrase-extraction-kbir-kpcrowd].
This model uses Keyphrase Boundary Infilling with Replacement (KBIR) as its base model and fine-tunes it on the KPCrowd dataset <cit.>.
*Instructing the Model*:
Since state-of-the-art language models perform remarkably well on a wide range of tasks, in this technique, we directly instruct the model to identify the important concepts from the generated sentence.
An important characteristic of this technique is that it doesn't require calling a task-specific tool (entity or keyword extraction model) for this task.
Table <ref> (in Appendix <ref>) illustrates examples of concepts identified using the three techniques.
It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also.
In contrast, instruction technique successfully identifies all the important concepts.
Moreover, it doesn't require calling a task-specific tool.
Thus, we represent this technique with (*), our preferred technique for this step.
§.§.§ Calculate Model's Uncertainty
GPT-3 <cit.> and several other publicly available models also provide logit output values in their prediction response.
Thus, we study if these logit output values can be utilized to
detect hallucinations.
However, we note that this is an additional source of information and not a necessary requirement for our hallucination detection method as some models that are available only via API calls do not provide these logit output values.
Recall that a concept can consist of more than one token also (note that the model provides logit output values at the level of tokens); thus, we study three different techniques for calculating a probability score for a concept.
Consider a concept consisting of n tokens and having the maximum softmax probabilities as p_1, p_2, p_3, ..., p_n for the n token positions respectively.
We obtain these probabilities by applying the softmax function over the logit values for each token position.
We study the following techniques:
Average of Token Probabilities:
In this technique, we simply take the average of the probabilities of the tokens corresponding to the concept:
score = AVG (p_1, p_2, ..., p_n)
Normalized Product of Token Probabilities:
Here, we take a normalized product of the probabilities of the tokens:
score = (p_1 × p_2 × ... × p_n)^1/n
*Minimum of Token Probabilities*:
Here, we take the minimum of probabilities as the score.
score = MIN (p_1, p_2, ..., p_n)
This is our preferred technique for this step, as the other techniques average out the effect of the model's uncertainty across tokens, whereas a low probability on even one token of the concept provides strong evidence that the model is uncertain.
For example, if the model is uncertain on the name of the USA president then its uncertainty on the first token (`Joe') would be high but on the next token (`Biden') would be very low as the token `Joe' is frequently followed by the token `Biden'.
Thus, averaging or normalizing the probabilities will have a limited capability to capture this signal.
Through our experiments (Section <ref>), we show that this score (especially `MIN') indeed provides a signal for hallucination, i.e., the more uncertain a model is on a concept (low probability score), the more likely it is to be hallucinating about that concept.
However, we note that this score is just a signal for hallucination and in no way provides a guarantee for presence of hallucinations.
We utilize this signal and check for hallucinations with respect to the uncertain concepts using our validation procedure (<ref>-<ref>).
In the absence of logit output values:
For models that do not provide the logit output values, all or some heuristically selected concepts (depending on the computational and latency budget of the system) can be passed to the validation stage for detecting hallucinations.
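The three scoring techniques reduce to a few lines of code. The sketch below assumes the maximum softmax probabilities of a concept's tokens have already been extracted from the model's logit output; it is for illustration only.

import math

def concept_scores(token_probs):
    """token_probs: maximum softmax probabilities p_1..p_n of a concept's tokens."""
    n = len(token_probs)
    return {
        "average": sum(token_probs) / n,
        "normalized_product": math.prod(token_probs) ** (1.0 / n),
        "minimum": min(token_probs),
    }

# e.g. a concept generated as ["Joe", "Biden"] with probabilities [0.4, 0.99]
# gets minimum = 0.4, preserving the uncertainty signal that averaging dilutes.
print(concept_scores([0.4, 0.99]))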
§.§.§ Create Validation Question
We start the validation procedure for a concept by creating a question that tests the correctness of the information (in the generated sentence) pertaining to the concept.
We create Yes/No Questions, i.e., questions for which the answer is either a `Yes' or a `No'.
Table <ref> shows examples of validation questions.
For creating these questions, we explore the following two techniques:
Question Generation Tool:
Here, we use an off-the-shelf answer-aware question generation model.
*Instructing the Model*:
Here, we directly instruct the model to create a validation question checking the correctness of the information about the selected concept.
For the same reason as in the concept identification step, this is our preferred technique as it does not require calling a task-specific tool.
We note that instead of Yes/No questions, Wh-questions can also be used for validation.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§.§.§ Find Relevant Knowledge
*Web Search*:
In order to answer the validation question, we retrieve knowledge relevant to it which serves as additional context.
For generality and wide coverage, we use web search (via Bing search API) for retrieving this knowledge.
However, we note that any other search API or knowledge corpus can also be utilized for this purpose.
Self-Inquiry:
We also explore a self-inquiry technique where we directly prompt the model to answer the validation question.
In this technique, the model relies on its parametric knowledge to answer the validation question.
This technique has several drawbacks as compared to web search such as lack of a reliable strategy to extract the parametric knowledge from the model and staleness of the parametric knowledge.
§.§.§ Answer Validation Question
In this step, we prompt the model to answer the validation question (leveraging the retrieved knowledge as context) and verify its response.
If the validation procedure succeeds for all the uncertain concepts then we continue generating the next sentence; otherwise, we interrupt the generation process, mitigate the potential hallucination in the sentence, and then continue generation.
Order of Validation of Concepts:
Validation of different concepts can be done in a sequence (in ascending order of their calculated probability score) or in parallel.
However, running this in parallel would require starting multiple threads which may not be supported by all machines.
Thus, in this work we study only the sequential validation strategy but note that it can be made more efficient by running it in parallel.
We regard this sequential validation as a greedy exiting strategy as we proceed to the mitigation stage on detection of the first potential hallucination.
§.§ Hallucination Mitigation
For mitigating the hallucination in the generated sentence, we instruct the model to repair the generated sentence by either removing or substituting the hallucinated information using the retrieved knowledge as evidence.
Table <ref> shows the instructional prompts for different steps of our approach.
Note: We note that the result of the validation procedure is contingent on the retrieved knowledge and the model's ability to leverage that knowledge in answering the validation question.
Thus, a case is plausible in which the validation procedure reports hallucination even though the sentence is actually not hallucinated.
However, in Section <ref>, we show that our approach performs fairly well on this task.
Moreover, it achieves a very high recall demonstrating its efficacy at detecting hallucinations.
Moreover, in Section <ref>, we show that our mitigation approach does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
§ EXPERIMENTS AND RESULTS
In this section, we first demonstrate the two findings that motivate our approach (<ref> and <ref>).
Then, we show the individual efficacy of our hallucination detection and mitigation techniques in <ref> and <ref>, respectively.
Finally, in <ref>, we show the effectiveness of the proposed active detection and mitigation approach in addressing hallucinations.
Data and Annotation:
In our experimental setup, we prompt the large language model (GPT-3: text-davinci-003) to write about various topics.
Specifically, we use a total of 150 topics from diverse domains.
Figure <ref> shows the distribution of different domains in our topic set.
In each domain, we include different kinds of topics; for instance, Sports domain consists of sports persons, administrators, teams, and games, Music consists of musicians, songs, music labels, and bands, Politics includes politicians, political parties, and elections, Film & TV includes actors, TV personalities, shows, and movies, History includes historians and events, etc.
For selecting the names of people, we use randomly sampled names from the top 20% of longest articles in WikiBio dataset <cit.> as done in <cit.>.
Similarly, for the other topics, we randomly sample from the longest Wikipedia articles.
This is done to ensure that no obscure or ambiguous concept is selected.
Equipped with the list of topics, we give the following input prompt to the model:
for each topic.
Following this, we (the authors) manually annotate the correctness of the first five sentences generated by the model for each topic.
For annotating the correctness, we look at search results from the web to find the relevant knowledge that either supports or contradicts the information present in the generated sentence.
In some cases, multiple web searches were required to check the correctness of different facets of a sentence.
Furthermore, in a small number of cases, we could not find information supporting or contradicting the information in the generated sentence; we mark these as cases of extrinsic hallucination.
We opt for this expert annotation strategy because, although our annotation task is a simple binary classification task, it requires considerable effort to check the correctness of a given sentence, and such annotations cannot reliably be collected via crowdsourcing.
In addition to this sentence-level annotation, we also annotate correctness at the concept-level that we will detail in <ref>.
We release both sentence-level and concept-level hallucination annotations, which will also facilitate systematic future research in this direction.
§.§ Motivating Findings
§.§.§ Hallucination Causes Further Hallucination
Recall that we consider the first five sentences generated by the model for each topic and annotate their correctness.
Since the sentences are sequentially generated, we investigate the
relationship between `hallucination in a generated sentence' and `hallucination in the previously generated sentences' for an input.
Since there are two binary variables, there exist four possibilities in this relationship, i.e.,
a sentence is hallucinated and there was hallucination in the previously generated sentences (A), the sentence is not hallucinated and there was hallucination in the previously generated sentences (B), the sentence is hallucinated and there was no hallucination in the previously generated sentences (C), the sentence is not hallucinated and there was no hallucination in the previously generated sentences (D).
For illustration, consider a sample case for sentence 3: the two binary variables are whether sentence 3 is hallucinated and whether there was hallucination in the previously generated sentences (i.e., in sentence 1 OR sentence 2).
Figure <ref> demonstrates this relationship for sentences 2, 3, 4 and 5 aggregated over all the topics in our data.
We do not show this for sentence 1 as there is no previously generated sentence for it.
From this figure, we draw the following inferences:
(a) A > B: Cases A and B correspond to the scenario when there is hallucination in the previously generated sentences. It can be observed that A is considerably greater than B which implies that when there is hallucination in the previously generated sentences, a sentence is hallucinated more often.
Moreover, the gap keeps increasing as the sentence number increases.
(b) A > C: Cases A and C correspond to the scenario when a generated sentence is hallucinated. It can be observed that A is greater than C which implies that a generated sentence is hallucinated more when there is hallucination in the previously generated sentences as compared to when there is no previous hallucination.
(c) D > C: Cases C and D correspond to the scenario when there is no hallucination in the previously generated sentences. Here, D is greater than C which implies that when there is no hallucination in the previously generated sentences, a generated sentence is more often not hallucinated.
(d) D > B: Cases B and D correspond to the scenario when a generated sentence is not hallucinated. D is greater than B which implies that a generated sentence is not hallucinated more when there is no previous hallucination as compared to when there is previous hallucination.
This shows that hallucination in a sentence often results in further hallucinations in the subsequently generated sentences and thus actively detecting and mitigating hallucinations can not only fix the current hallucination but can also prevent its propagation in the subsequently generated sentences.
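The four-way breakdown above is computed directly from the sentence-level annotations. A sketch is given below, where the mapping from each topic to its list of per-sentence hallucination labels is an assumed data layout.

from collections import Counter

def propagation_breakdown(annotations, position):
    """annotations: {topic: [bool, ...]}, True = the sentence is hallucinated.
    position: 0-based sentence index in [1, 4]; sentence 0 has no history."""
    counts = Counter()
    for labels in annotations.values():
        prev = any(labels[:position])
        cur = labels[position]
        if cur and prev:
            counts["A"] += 1      # hallucinated, previous hallucination
        elif not cur and prev:
            counts["B"] += 1      # not hallucinated, previous hallucination
        elif cur and not prev:
            counts["C"] += 1      # hallucinated, no previous hallucination
        else:
            counts["D"] += 1      # not hallucinated, no previous hallucination
    return counts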
Next, we demonstrate the utility of logit output values in detecting hallucinations.
§.§.§ Logit Output Values Provide a Signal for Hallucination
In this subsection, we first show the trend of hallucination with the probability score.
Note that this score is calculated using the logit output values.
Then, we demonstrate the benefit of identifying concepts from the generated sentence in detecting hallucinations.
Finally, we compare the efficacy of different probability calculation techniques in detecting hallucinations.
Hallucination vs Probability Score:
In order to study the relationship between logit output values and hallucination, we annotate correctness at concept-level also (in addition to sentence-level annotations described earlier).
Specifically, for each identified concept, we mark whether the information about it in the generated sentence is hallucinated or not.
This can be different from sentence-level annotation as it focuses only on the correctness of the information about the concept in the sentence.
Table <ref> shows examples of both sentence-level and concept-level annotations.
Figure <ref> shows the trend of hallucination with our calculated probability scores at both sentence and concept levels.
For a sentence, we use the minimum across tokens of all its identified concepts as the probability score and for a concept, we use the minimum across all its tokens as the probability score.
It can be observed that as the probability score increases (or uncertainty decreases), tendency to hallucinate decreases.
This shows that these probability values can be utilized as a signal for hallucination, i.e., the low probability concepts in a generated sentence can be considered as candidates of potential hallucination and their correctness in the generated sentence can be validated for detecting hallucinations.
On average, we observe an absolute difference of ∼0.15 between the probabilities of concepts when the model is hallucinating vs when it is not hallucinating.
Benefit of Identifying Concepts from a Sentence:
Now, we demonstrate the benefit of identifying concepts from a sentence and leveraging the logit output values corresponding to their tokens for detecting hallucinations.
To this end, we plot precision-recall curves for the hallucination detection task corresponding to two methods that use the probabilities calculated from the logit output values.
The blue curve corresponds to the technique in which we use the minimum probability across all tokens of the sentence and the orange curve is for the technique in which we use the minimum over only the tokens of the identified concepts.
Figure <ref> shows the two curves.
The orange curve achieves higher area under the precision-recall curve implying that utilizing the probabilities of the concept tokens provides a stronger signal for hallucination as compared to the probabilities corresponding to all the tokens.
Comparing Probability Calculation Techniques:
Figure <ref> shows the Precision-Recall curves for the hallucination detection task (at concept-level) using the three probability calculation techniques, i.e., Minimum, Average, and Normalized (described in <ref>).
The `Minimum' technique achieves the highest area under the curve and hence is better at the hallucination detection task.
§.§ Hallucination Detection Performance
In this subsection, we demonstrate the hallucination detection performance of various techniques at both sentence and concept-levels.
Self-Inquiry vs Web Search:
Tables <ref> and <ref> show the hallucination detection performance of the self-inquiry and web search techniques at the sentence level and concept level, respectively.
For sentence-level results, we predict the sentence to be hallucinated if the validation procedure fails on any identified concept.
Note that in these results, we do not leverage the uncertainty score to select concepts for validation; instead, we validate all the identified concepts.
We study the relationship of recall with probability thresholds in Figure <ref> (in Appendix).
From the tables, it can be observed that the web-search technique achieve considerably high recall in detecting hallucinations.
Here, we emphasize the high `recall' of the web-search technique, as we show that our mitigation approach does not introduce any new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives (<ref>).
Figure <ref> shows the recall of hallucination detection vs Probability threshold plot for Self Inquiry and web search techniques at both sentence-level and concept-level.
Web-search is consistently and considerably better than self-inquiry.
§.§ Hallucination Mitigation Performance
On sentences where our validation procedure (using Web search) reports hallucinations, we apply our mitigation technique.
We note that a sentence which is reported as hallucination can either be actually hallucinated or not hallucinated, i.e., it could also be a false positive.
Table <ref> shows the result of our method.
It successfully mitigates the hallucination on 57.6% of the correctly detected hallucinations (True Positives); we refer to this metric as `success'.
Furthermore, it achieves this with minimal `deterioration' (3.06%), i.e., it incorrectly converts only 3.06% of the non-hallucinated instances to hallucinated ones.
§.§ Active Detection and Mitigation
The two findings in Section <ref> motivate our approach of addressing hallucinations in which we actively detect hallucinations leveraging the logit output values and mitigate them during the generation process to prevent their propagation.
Specifically, using the calculated probability scores, we identify the uncertain concepts and check their correctness using our validation procedure.
We generate one sentence at a time and when our detection method reports hallucination, we fix it using our mitigation approach and continue generating the next sentence.
We demonstrated separate detection and mitigation efficacy in <ref> and <ref>, respectively.
Figure <ref> compares the percentage of hallucination in the output of GPT-3 model and our active detection and mitigation approach.
Our approach reduces the percentage of hallucinations from 47.4% to 14.53%.
In Figure <ref>, we demonstrate this comparison for different categories of hallucination.
It shows that our approach reduces hallucinations for all categories.
§ RELATED WORK
Advancements in the field of natural language processing led to the development of models that possess an impressive ability to generate fluent and coherent text. However, these models are vulnerable to a phenomenon called text hallucination.
Prior work <cit.> has categorized text hallucinations into two classes: Intrinsic (when the generated output contradicts the source content) and Extrinsic (when the generated output cannot be verified from the source content, i.e., it can neither be supported nor contradicted by the source).
One thread of research pertaining to hallucinations has focused on studying different causes of this phenomenon such as training data quality <cit.>, source-target divergence <cit.>, ill-suited modeling <cit.>, and randomness during inference <cit.>.
The other thread focuses on addressing the hallucination problem <cit.>.
<cit.> propose a sampling-based hallucination detection approach in which they first sample multiple responses from the model and then measure the information consistency between the different responses. They posit that when a language model knows a given concept well, the sampled responses are likely to be similar and contain consistent facts; on the other hand, for hallucinated facts, stochastically sampled responses are likely to diverge and may completely contradict one another.
Another recent work <cit.> leverages the LLM's internal state to identify the truthfulness of a statement. Using an annotated dataset, they train a separate classifier that takes the LLM's activation values as input and predicts the statement's truthfulness.
<cit.> hypothesize that the randomness of sampling is more harmful to factuality when it is used to generate the latter part of a sentence than the beginning of a sentence and propose a new sampling algorithm named factual-nucleus sampling that dynamically adapts the `nucleus' p along the generation of each sentence.
<cit.> propose an approach motivated by The Society of Mind and multi-agent settings in which multiple models individually propose and jointly debate their responses and reasoning processes to arrive at a common answer.
In our approach, we leverage the logit output values and web search to actively detect and mitigate hallucinations.
§ CONCLUSION
In this work, we proposed an approach that actively `detects' and `mitigates' hallucinations of the large language models.
Through systematic and extensive experiments, we show that our approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
We also demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, our detection technique achieves a high recall and our mitigation technique successfully mitigates majority of the correctly detected hallucinations.
Notably, the mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Overall, our work contributes to improving the reliability and trustworthiness of text generation systems, a crucial step en route to enabling their widespread adoption in real-world applications.
§ APPENDIX
§ APPROACH
Table <ref> shows the instructional prompts used for different steps of our approach.
We note that these techniques are the preferred techniques as they do not require calling an external task-specific tool to achieve the corresponding objectives.
§.§ Identify Key Concepts
Table <ref> shows examples of concepts identified using the three methods, i.e., Entity Extraction, Keyword Extraction, and Instructing the Model.
It shows that the entity extraction model misses many important concepts, while the keyword extraction model also identifies many insignificant concepts. In contrast, the instruction technique successfully identifies the majority of the important concepts.
§.§ Create Validation Question
Table <ref> shows examples of validation questions corresponding to each concept created via instructing the model technique.
It shows examples of both the question types, i.e., Yes/No and Wh questions.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§ EVALUATION DATA
Table <ref> shows the statistics of the sentences generated by the GPT-3 (text-davinci-003 with temperature 0) model.
A sentence has ∼18 words on average, and each sentence has ∼3.2 key concepts identified by our instruction technique.
Table <ref> shows examples of sentence-level and concept-level hallucination annotations.
§ RECALL OF HALLUCINATION DETECTION VS PROBABILITY THRESHOLD
Figure <ref> compares recall of hallucination detection for self-inquiry and web search techniques at different probability thresholds.
Web search considerably outperforms self-inquiry at all thresholds.
|
http://arxiv.org/abs/2307.04541v2 | 20230710130942 | Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis | ["Mingyuan Liu", "Lu Xu", "Jicong Zhang"] | cs.CV | ["cs.CV", "cs.AI"] |
Mingyuan Liu, Lu Xu, Jicong Zhang.
^1School of Biological Science and Medical Engineering,
Beihang University, Beijing, China
^2Hefei Innovation Research Institute, Beihang University, Hefei, Anhui, China
{liumingyuan95, xulu181221, jicongzhang}@buaa.edu.cn
Learning Large Margin Sparse Embeddings for Open Set Medical Diagnosis
Mingyuan Liu^1, Lu Xu^1, Jicong Zhang^1,2,*
August 12, 2023
======================================================================
Fueled by deep learning, computer-aided diagnosis achieves huge advances.
However, outside controlled lab environments, algorithms can face multiple challenges.
Open set recognition (OSR), an important one, states that categories unseen in training could appear in testing.
In medical fields, this can arise from incompletely collected training datasets and from constantly emerging new or rare diseases.
OSR requires an algorithm to not only correctly classify known classes, but also recognize unknown classes and forward them to experts for further diagnosis.
To tackle OSR, we assume that known classes could densely occupy small parts of the embedding space and the remaining sparse regions could be recognized as unknowns.
Following it, we propose Open Margin Cosine Loss (OMCL) unifying two mechanisms.
The former, called Margin Loss with Adaptive Scale (MLAS), introduces angular margin for reinforcing intra-class compactness and inter-class separability, together with an adaptive scaling factor to strengthen the generalization capacity.
The latter, called Open-Space Suppression (OSS), opens the classifier by recognizing sparse embedding space as unknowns using proposed feature space descriptors.
Besides, since medical OSR is still a nascent field, two publicly available benchmark datasets are proposed for comparison.
Extensive ablation studies and feature visualization demonstrate the effectiveness of each design.
Compared with state-of-the-art methods, MLAS achieves superior performances, measured by ACC, AUROC, and OSCR.
§ INTRODUCTION AND RELATED WORK
Deep learning achieves great success in image-based disease classification.
However, computer-aided diagnosis is far from solved when considering the various requirements of real-world applications. As an important one, open set recognition (OSR) specifies that diseases unseen in training could appear in testing <cit.>. This is a practical concern in the medical field, caused by the difficulty of collecting a training dataset that exhausts all diseases and by unpredictably appearing new or rare diseases. As a result, an OSR-informed model should not only accurately recognize known diseases but also detect unknowns and report them. Clinically, such models help construct trustworthy computer-aided systems: by forwarding unseen diseases to experts, not only can the misdiagnosis of rare diseases be avoided, but an early warning of a new disease outbreak can also be raised.
There are many fields related to OSR but are essentially different.
In classification with reject options <cit.>, samples with low confidence are rejected to avoid misclassification. However, since its closed set nature, unknown classes could still be misclassified confidently <cit.>.
Anomaly detection, novelty detection, and one-class classification<cit.> aim at recognizing unknowns but ignore categorizing the known classes.
In outlier detection or one-/few-shot learning <cit.>, samples of novel classes appear in training.
In zero-shot learning <cit.>, semantic information about novel classes can be accessed. For example, a zebra, an unknown class, could be identified given the idea that zebras are striped horses, together with abundant samples of horses and stripe patterns.
Differently, OSR knows nothing about the novel classes and should achieve high classification accuracy on the known classes while also recognizing unknowns, as illustrated in Fig. <ref> a). Due to limited space, some reviews <cit.> are recommended for more comprehensive conceptual distinctions.
Most OSR research focuses on natural images, while medical OSR is still in its infancy. In the medical field, representative work such as T3PO <cit.> introduces an extra task of predicting the input image augmentation, and samples with low probabilities are regarded as unknowns.
CSL <cit.> uses generative adversarial neural networks (GAN) to generate proxy images and unknown anchors.
As for natural images, one line of work tries to simulate unknowns with adversarial or counterfactual samples generated by GANs <cit.>. However, whether unknown patterns can be generated by learning only from the known is unclear.
Some works learn descriptive feature representations. They enhance better feature separation between unknowns and knowns or assume the known features following certain distributions so that samples away from distributional centers could be recognized as unknowns <cit.>.
Differently, this work categorizes densely distributed known features and recognizes sparse embedding space as unknowns, regardless of the specific distribution.
This work tackles OSR under the assumption that known features could be assembled compactly in feature embedding space, and remaining sparse regions could be recognized as unknowns.
Inspired by this, the Open Margin Cosine Loss (OMCL) is proposed merging two components, Margin Loss with Adaptive Scale (MLAS) and Open-Space Suppression (OSS).
The former enhances known feature compactness and the latter recognizes sparse feature space as unknown.
Specifically, MLAS introduces the angular margin to the loss function, which reinforces the intra-class compactness and inter-class separability. Besides, a learnable scaling factor is proposed to enhance the generalization capacity.
OSS generates feature space descriptors that scatter across a bounded feature space. By categorizing them as unknowns, it opens a classifier by recognizing sparse feature space as unknowns and suppressing the overconfidence of the known.
An embedding space example is demonstrated in Fig. <ref> b), showing that OMCL learns more descriptive features and a more distinct known-unknown separation.
Considering medical OSR is still a nascent field, besides OMCL, we also proposed two publicly available benchmark datasets. One is microscopic images of blood cells, and the other is optical coherence tomography (OCT) of the eye fundus. OMCL shows good adaptability to different image modalities.
Our contributions are summarized as follows.
Firstly, we propose a novel approach, OMCL for OSR in medical diagnosis. It reinforces intra-class compactness and inter-class separability, and meanwhile recognizes sparse feature space as unknowns.
Secondly, an adaptive scaling factor is proposed to enhance the generalization capacity of OMCL.
Thirdly, two benchmark datasets are proposed for OSR. Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. The superiority over state-of-the-art methods indicates the effectiveness of our method and the adaptability of OMCL on different image modalities.
§ METHOD
In Section <ref>, the open set problem and the formation of cosine Softmax are introduced. The two mechanisms MLAS and OSS are sequentially elaborated in Section <ref> and <ref>, followed by the overall formation of OMCL in Section <ref>.
§.§ Preliminaries
Problem setting:
Both closed set and open set classifiers learn from the training set 𝒟_train={(x_i, y_i)}_i=1^N with N image-label pairs (x_i, y_i), where y_i∈𝒴={1, 2, ..., C} is a class label.
In testing, closed set testing data 𝒟_test shares the same label space 𝒴 with the training data. However, in the open set problem, unseen class y_i=C+1 could appear in testing i.e. y_i∈𝒴_open={1, 2, ..., C, C+1}.
Cosine Loss:
The cosine Softmax is used as the basis of OMCL. It transfers feature embeddings from Euclidean space to a hyperspherical one, where feature differences depend merely on their angular separation rather than their spatial distance.
Given an image x_i, its vectorized feature embedding z_i, and its label y_i, the derivation progress of the cosine Softmax is
S_{\cos}
=\underbrace{\frac{e^{W_{y_i}^{T}z_i}}{\sum_{j=1}^{C}e^{W_{j}^{T}z_i}}}_{\text{Conventional Form}}
=\frac{e^{\|W_{y_i}\|\|z_i\|\cos(\theta_{y_i,i})}}{\sum_{j=1}^{C}e^{\|W_{j}\|\|z_i\|\cos(\theta_{j,i})}}
=\underbrace{\frac{e^{s\cdot\cos(\theta_{y_i,i})}}{\sum_{j=1}^{C}e^{s\cdot\cos(\theta_{j,i})}}}_{\text{Cosine Form}}
where W_j denotes the weights of the last fully-connected layer (the bias is set to 0 for simplicity). ∥W_j∥=1 and ∥z_i∥=s are manually fixed to the constants 1 and s by L2 normalization, and s is named the scaling factor. θ_j,i denotes the angle between W_j and z_i. By doing so, the direction of W_j can be regarded as the prototypical direction of class j, as shown in Fig. <ref> a). Samples with large angular differences from their corresponding prototype are punished, and meanwhile the class-wise prototypes are pushed apart in the angular space.
Compared with Softmax, the cosine form has a more explicit geometric interpretation, promotes more stabilized weights updating, and learns more discriminative embeddings <cit.>.
Moreover, the L2 normalization constrains features to a bounded feature space, which allows us to generate feature space descriptors for opening a classifier (will be further discussed in Section <ref>).
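A minimal PyTorch-style sketch of this cosine classifier head is given below for illustration; it is our own rendering rather than the released code, and the value of the scaling factor s is only a placeholder.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes, s=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s = s  # scaling factor, i.e. the fixed norm of z_i

    def forward(self, z):
        w = F.normalize(self.weight, dim=1)   # enforce ||W_j|| = 1
        z = F.normalize(z, dim=1)             # keep only the direction of z_i
        return self.s * (z @ w.t())           # logits = s * cos(theta_{j,i})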
§.§ Margin Loss with Adaptive Scale (MLAS)
MLAS serves three purposes.
1) By applying angular margin, the intra-class compactness and the inter-class separability are strengthened.
2) The threshold could represent the potential probability of the unknowns, which not only prepares for the open set but also learns more confident probabilities of the knowns.
3) A trainable scaling factor is designed to strengthen the generalization capacity.
MLAS is:
S_{MLAS}
=\frac{e^{s\cdot(\cos(\theta_{y_i,i})-m)}}{e^{s\cdot(\cos(\theta_{y_i,i})-m)}+e^{s\cdot t}+\sum_{j=1,\,j\neq y_i}^{C}e^{s\cdot\cos(\theta_{j,i})}}
m, t, and s respectively denote margin, threshold, and learnable scaling factor, with corresponding geometric interpretation demonstrated in Fig. <ref> b).
By using the angular margin, the decision boundary could be more stringent.
Without it, the decision boundary is cos(θ_1,i)>cos(θ_2,i) for the i-th sample of class 1.
It becomes cos(θ_1,i)>cos(θ_2,i)+m when using the margin, which leads to stronger intra-class compactness. Moreover, the angular similarities with other classes are punished in the denominator to increase inter-class separability.
The threshold t could be regarded as an extra dimension that prepares for unknown classes. Given the conventional input of Softmax as [q_i^1, q_i^2, ..., q_i^C]∈ℝ^C, ours could be understood as [q_i^1, q_i^2, ..., q_i^C, t]∈ℝ^C+1. Since t is added, the class-wise output q_i^c before Softmax is forced to have a higher value to avoid misclassification (at least larger than t). It reinforces more stringent learning and hence increases the feature compactness in the hyperspherical space.
A large s makes the distribution more uniform, while a small s makes it collapse to a point mass.
In this work, s is learnable, with a learning rate 0.1× that of the model. It theoretically offers stronger generalization capacity across datasets, is experimentally observed to converge to different values in different data trials, and could boost performance.
LMCL <cit.> and NMCL <cit.> are the arts most similar to ours. However, from the task perspective, those designs are proposed for closed-world problems. From the method perspective, our OSS mechanism is designed to tackle OSR by leveraging generated pseudo-unknown features for discriminative learning. Moreover, an adaptive scaling factor is introduced to increase generalization.
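A sketch of the MLAS term under our reading of the formula above is shown below: the margin m is subtracted from the target-class cosine, the threshold t is appended as an extra logit, and s is a learnable parameter. The hyperparameter values (m, t, s_init) are illustrative, not the paper's settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLASLoss(nn.Module):
    """Margin Loss with Adaptive Scale: cosine softmax with angular margin m,
    a threshold channel t, and a learnable scaling factor s."""
    def __init__(self, feat_dim, num_classes, m=0.35, t=0.0, s_init=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s = nn.Parameter(torch.tensor(s_init))   # adaptive scale
        self.m, self.t = m, t

    def forward(self, z, labels):
        cos = F.normalize(z, dim=1) @ F.normalize(self.weight, dim=1).t()  # [B, C]
        onehot = F.one_hot(labels, cos.size(1)).float()
        cos = cos - self.m * onehot                      # margin on the target class
        thresh = cos.new_full((cos.size(0), 1), self.t)  # extra C+1 channel fixed to t
        logits = self.s * torch.cat([cos, thresh], dim=1)
        return F.cross_entropy(logits, labels)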
§.§ Open-Space Suppression (OSS)
OSS generates feature space descriptors of bounded feature space. By categorizing them into an extra C+1 class, samples in sparse feature space could be recognized as unknown and the overconfidence of the known is suppressed.
OSS selects points scattered over the entire feature space, named descriptors, to represent pseudo-unknown samples. Unlike existing arts that generate pseudo-unknowns by learning from known samples, OSS selects points over the whole feature space, which guarantees that all of it can be considered when simulating the potential unknowns. By competing with the known features, feature space that is densely populated with samples is classified as known, while the sparse space, represented by the descriptors, is recognized as unknown.
In this work, the corresponding descriptor set, with M samples, is 𝒟_desc={(z_i, C+1)}_i=1^M, where z_i ∈𝕌[-s,s]^d subject to ∥z_i∥=s. 𝕌[-s,s] denotes random continuous uniform distribution ranges between -s to s, and d is the dimension of feature embeddings.
s is trainable and the descriptors are dynamically generated with the training.
Fig. <ref> c) demonstrates the geometric interpretation. During training, descriptors are concatenated with the training samples at the input of the last fully-connected layer, to equip the last layer with the discrimination capacity of known and unknown samples. The OSS is
S_OSS = e^{s·t} / ( e^{s·t} + ∑_{j=1}^{C} e^{s·cos(θ_{j,i})} )
where t and s follow the same definition in MLAS.
The most similar art, AL <cit.>, attempts to reduce misclassification by abandoning ambiguous training images. In contrast, we focus on OSR and exploit a novel discriminative loss with feature-level descriptors.
§.§ Open Margin Cosine Loss (OMCL)
OMCL unifies MLAS and OSS into one formula, which is
L_OMCL = -1/(N+M) ∑_{i=1}^{N+M} [ 𝕀_i log(S_cos) + λ 𝕀_i log(S_MLAS) + λ (1-𝕀_i) log(S_OSS) ]
𝕀_i equals 1 if the i-th sample is training data, and equals 0 if it belongs to the feature space descriptors. λ is a weight factor. Since the output of the channel C+1 is fixed as t, no extra weights W_{C+1} are trained in the last fully-connected layer. As a result, OMCL does not increase the number of trainable weights in the neural network. During testing, just as in other works <cit.>, the maximum probability over the known classes is used as the unknown-detection score, where a lower value indicates a higher possibility that the sample is unknown.
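To show how the pieces might fit together, the sketch below combines the three terms for one batch. The treatment of the indicator (known samples contribute the cosine and MLAS terms, descriptors only the OSS term), the descriptor sampling, and all names and default values are our assumptions rather than the paper's code:

import numpy as np

def sample_descriptors(m_desc, d, s, seed=0):
    """Feature-space descriptors: uniform in [-s, s]^d, rescaled onto the sphere ||z|| = s."""
    z = np.random.default_rng(seed).uniform(-s, s, size=(m_desc, d))
    return s * z / np.linalg.norm(z, axis=1, keepdims=True)

def omcl_loss(features, weights, labels, s=16.0, m=-0.1, t=0.1, lam=0.5):
    """One possible reading of L_OMCL for a single batch of known samples."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    desc = sample_descriptors(len(features), features.shape[1], s)   # 1:1 descriptor ratio

    n = len(labels)
    cos_known = z @ w.T                                              # (N, C) cosines for knowns
    cos_desc = (desc / s) @ w.T                                      # (M, C) cosines for descriptors
    tgt = cos_known[np.arange(n), labels]

    s_cos = np.exp(s * tgt) / np.exp(s * cos_known).sum(axis=1)
    others = np.exp(s * cos_known)
    others[np.arange(n), labels] = 0.0
    s_mlas = np.exp(s * (tgt - m)) / (np.exp(s * (tgt - m)) + np.exp(s * t) + others.sum(axis=1))
    s_oss = np.exp(s * t) / (np.exp(s * t) + np.exp(s * cos_desc).sum(axis=1))

    total = np.log(s_cos).sum() + lam * np.log(s_mlas).sum() + lam * np.log(s_oss).sum()
    return float(-total / (n + len(desc)))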
§ RESULT
§.§ Datasets, Evaluation Metrics, and Implementation Details
Two datasets are adapted as new benchmarks for evaluating the OSR problem. Following protocols in natural images <cit.>, half of the classes are selected as known and the remainder as unknown. Since the grouping affects the results, it is randomly repeated K times, leading to K independent data trials. The average results of the K trials are used for evaluation. The specific groupings are listed in the supplementary material so that future works can follow them for fair comparisons.
BloodMnist contains 8 kinds of individual normal cells with 17,092 images <cit.>. Our setting is based on the closed set split and preprocessing from <cit.>. Classes are selected in 5 rounds (K=5). In each trial, images belonging to 4 chosen classes are selected for training and closed-set evaluation. Images belonging to the other 4 classes in the testing data are used for open set evaluation.
OCTMnist has 109,309 optical coherence tomography (OCT) images <cit.>, preprocessed following <cit.>. Among the 4 classes, 1 is healthy and the other 3 are retinal diseases. When splitting the data trials, the healthy class is always kept in the known set, which is consistent with real circumstances, and the number of trials is 3 (K=3).
Metrics: Following previous arts <cit.>, accuracy (ACC_c) validates closed set classification. Area Under the Receiver Operating Characteristic (AUROC_o), a threshold-independent value, measures the open set performances. Open Set Classification Rate (OSCR_o) <cit.>, considers both open set recognition and closed set accuracy, where a larger OSCR indicates better performance.
Implementation Details:
The classification network is ResNet18 <cit.>, optimized by Adam with an initial learning rate of 1e-3 and a batch size of 64. The numbers of training epochs are 200 and 100 for BloodMnist and OCTMnist respectively, because the number of training samples in BloodMnist is smaller. The margin m, threshold t, and λ are experimentally set to -0.1, 0.1, and 0.5 respectively. Images are augmented by random crop, random horizontal flip, and normalization.
§.§ Comparison with State-of-the-art Methods
As demonstrated in Table <ref>, the proposed OMCL surpasses state-of-the-art models, including typical discriminative methods, namely the baseline <cit.>, GCPL <cit.>, and RPL <cit.>; the latest generative model DIAS <cit.>; and ARPL+CS <cit.>, which hybridizes both. All methods are implemented based on their official code. Their best results after hyperparameter fine-tuning are reported. The results show that OMCL maintains closed-set accuracy while effectively recognizing unknowns.
§.§ Ablation Studies
Effectiveness of MLAS and OSS: Table <ref> demonstrates the respective contributions of MLAS and OSS in OMCL. Each of them enhances performance, and they work complementarily to improve it further.
Ablation Study of Adaptive Scaling Factor: Fig. <ref> a) demonstrates the effectiveness of the adaptive scaling factor. Quantitatively, the adaptive design surpasses a fixed one. Moreover, Fig. <ref> b) shows that the scaling factor converges to different values in different training trials. Both results demonstrate the effectiveness and the generalization capacity of the adaptive design.
Ablation Study of Hyperparameters t, m, and λ: Fig. <ref> a), b), and c) respectively show the influence on results when using different hyperparameters. t and m are the threshold and angular margin, presented in equation <ref>, and λ is the trade-off parameter in equation <ref> .
Ablation Study of M: Fig. <ref> d) illustrates the effect of the number of feature space descriptors on the results. A ratio of 1:1 is experimentally validated as appropriate. Since a randomly generated descriptor can lie extremely close to a known feature point while being classified as the novel category, it may disturb training; when the number of descriptors far exceeds that of the training samples (e.g., 5 times, as shown in Fig. <ref> d)), the performance drops.
Feature Visualization: Fig. <ref> b) visualizes the t-SNE results of the features z of both known and unknown classes after dimension reduction. For each class, 200 samples are visualized and the perplexity of the t-SNE is set to 30. It shows that OMCL learns better intra-class compactness and inter-class separability. Moreover, samples of unknown classes tend to be pushed away from known classes, indicating the effectiveness of our designs.
§ CONCLUSION
In this paper, two publicly available benchmark datasets are proposed for evaluating the OSR problem in medical fields. Besides, a novel method called OMCL is proposed, under the assumption that
known features could be assembled compactly in feature space and the sparse regions could be recognized as unknowns.
The OMCL unifies two mechanisms, MLAS and OSS, into a unified formula. The former reinforces intra-class compactness and inter-class separability of samples in the hyperspherical feature space, and an adaptive scaling factor is proposed to empower the generalization capability.
The latter opens a classifier by categorizing sparse regions as unknown using feature space descriptors.
Extensive ablation experiments and feature visualization demonstrate the effectiveness of each design. Compared to recent state-of-the-art methods, the proposed OMCL performs better, as measured by ACC, AUROC, and OSCR.
|
http://arxiv.org/abs/2307.04119v1 | 20230709082201 | Categorical Realizability for Non-symmetric Closed Structures | [
"Haruka Tomita"
] | cs.LO | [
"cs.LO"
] |
Categorical Realizability for Non-symmetric Closed Structures
Haruka Tomita
In categorical realizability, it is common to construct categories of assemblies and categories of modest sets from applicative structures.
These categories have structures corresponding to the structures of applicative structures. In the literature, classes of applicative structures inducing categorical structures such as Cartesian closed categories and symmetric monoidal closed categories have been widely studied.
In this paper, we expand these correspondences between categories with structure and applicative structures by identifying the classes of applicative structures giving rise to closed multicategories, closed categories, monoidal bi-closed categories as well as (non-symmetric) monoidal closed categories. These applicative structures are planar in that they correspond to appropriate planar lambda calculi by combinatory completeness.
These new correspondences are tight: we show that, when a category of assemblies has one of the structures listed above, the based applicative structure is in the corresponding class.
In addition, we introduce planar linear combinatory algebras by adopting the linear combinatory algebras of Abramsky, Haghverdi and Scott to our planar setting, which give rise to categorical models of the linear exponential modality and the exchange modality on the non-symmetric multiplicative intuitionistic linear logic.
August 12, 2023
===================
§ INTRODUCTION
Realizability started with <cit.> to give interpretations for Heyting arithmetic, and subsequently has been developed in many directions.
What we call categorical realizability here is one such development, giving categorical models of various programming languages and logics.
Given a very simple algebraic structure called an applicative structure (often also called a combinatory algebra), we construct categories of assemblies and of modest sets and use them as categorical models.
For an applicative structure, the category of assemblies is the category of its "computable universe," and its categorical structure depends on the computational structure of the underlying applicative structure.
Therefore, by imposing suitable conditions on the applicative structure, we obtain categories of assemblies with corresponding categorical structures.
The best known is that the condition of being a partial combinatory algebra (PCA) leads that (and ) is a Cartesian closed category (CCC) <cit.>.
A PCA is an applicative structure containing two special elements s and k, which express substitution and discarding.
(We often call such elements the s-combinator and the k-combinator.)
PCAs also can be characterized by the combinatory completeness, that is, the property that any computable functions (i.e., functions expressed as untyped lambda terms) on a PCA can be represented by elements of the PCA itself.
Categorical realizability for linear structures is also well investigated.
Assuming the applicative structure is a BCI-algebra, that is, it has combinators B, C and I, the categories of assemblies and of modest sets become symmetric monoidal closed categories (SMCCs) <cit.>.
B, C and I are combinators expressing composition, exchanging and identity operations respectively, and BCI-algebras correspond to the linear lambda calculus by combinatory completeness.
These results for PCAs and BCI-algebras are used as a useful method for giving various models based on CCCs and SMCCs.
On the other hand, categorical realizability based on non-symmetric structures has been less investigated.
In our previous studies <cit.>, we proposed “planar realizability" giving rise to non-symmetric categorical structures, such as closed multicategories, closed categories, skew closed categories and monoidal bi-closed categories.
The aim of this paper is to summarize and develop these results.
First in section <ref>, we start with recalling basic notions of categorical realizability.
Results of PCAs and -algebras are shown in the section.
Also notions of applicative morphisms and linear combinatory algebras (LCAs) are recalled from <cit.>, that are used to obtain models of linear exponential modalities on linear calculus.
Basic knowledge of category theory and the lambda calculus is assumed and not referred here.
Next in section <ref>, we introduce several classes of applicative structures inducing non-symmetric categorical structures.
Realizing non-symmetric closed structures is a more subtle problem than the symmetric cases like CCCs and SMCCs.
Since the -combinator in -algebras induces the symmetry of the monoidal structure on the category of assemblies, one may think we can obtain non-symmetric categorical structures by excluding the -combinator.
However, simply excluding the C-combinator yields no interesting categorical structure, such as an internal hom functor, since realizing a closed structure requires some exchanging of realizers even if the closed structure is not symmetric.
We have to give applicative structures with appropriately weakened exchanging that realizes internal hom structures but does not realize symmetries.
To resolve this problem, in <cit.>, we introduced a unary operation () on an applicative structure, which allows restricted exchanging.
In section <ref> and <ref>, we recall these results, that -algebras induce (non-symmetric) closed multicategories and -algebras induce closed categories.
By the combinatory completeness, these classes of applicative structures correspond to the planar lambda calculus.
By the unary operation (), we obtain non-symmetric closed structures, however, this operation is not sufficient to obtain non-symmetric monoidal structures.
Assume that on a -algebra has tensor products.
When we take realizers of tensor products of in the same way that we take realizers of Cartesian/tensor products of assemblies on PCAs/-algebras, the realizer of unitors of leads a realizer of the symmetry.
That is, this attempt to get non-symmetric tensor products from -algebras ends in failure that the tensor products are symmetric.
Here what matters is that the way realizing products of assemblies on PCAs/-algebras corresponds to the representation of tensor products
X ⊗ Y ≅∀α. (X Y α) α
in the second-order linear logic ( <cit.>), which is valid only if the tensor is symmetric.
Thus, categorical realizability for non-symmetric monoidal structures needs some modification on the way realizing tensor products.
In this paper, we give two answers for this problem.
One is the way preparing a new combinator which directly realizes pairings.
The class of applicative structures, -algebras, is newly introduced in this paper and give rise to non-symmetric monoidal closed categories.
We show results about -algebras in section <ref>.
The other way is taking realizers of tensor products matching the representation
X ⊗ Y ≅∀α. (α Y X) α
in the second-order linear logic, which is valid even in the non-symmetric case.
To give such realizers, the class of applicative structures, bi--algebras, was introduced in <cit.>.
Bi--algebras feature two kinds of applications corresponding to two kinds of implications and , and have the combinatory completeness for the lambda calculus with two kinds of applications (which we call the bi-planar lambda calculus in this paper).
In section <ref>, we recall these results about bi--algebras.
Classes of applicative structures appearing in this paper are summarized in Table <ref>.
Also combinators and operations are summarized in Table <ref>.
The classes of applicative structures in this paper form a hierarchy as summarized in Table <ref>.
In section <ref>, we show that these classes are different from each other.
To show the strictness of the inclusion, it is sufficient to give examples belonging to one side and not to the other side, and we give such examples in section <ref>.
While these proofs in section <ref> are mostly straightforward and not conceptually new, sometimes it is not easy to show that some applicative structure does not belong to some class of applicative structures.
As such an example, in section <ref>, we show that the untyped planar lambda calculus (with no constants) is not a bi--algebra.
In the next section <ref>, we give the computational lambda calculus <cit.> as a rather unexpected example of a -algebra and show the computational lambda calculus is not a bi--algebra.
To better clarify the relationship between applicative structures and categorical structures of categories of assemblies, in section <ref>, we show certain “inverses" of propositions shown in section <ref>.
That is, assuming has certain categorical structure (such as being an SMCC), we show belongs to the corresponding class (such as -algebras) under several conditions.
While the propositions for the cases of -algebras and -algebras were already presented in <cit.>, those for the cases of -algebras and bi--algebras are newly shown in this paper.
By integrating results of section <ref>, <ref> and <ref>, we can say that, for instance, the category of assemblies on the planar lambda calculus indeed has non-symmetric closed structure.
In section <ref>, we reformulate notions of LCAs for our -algebras.
Although linear exponential comonads are usually defined as comonads on symmetric monoidal categories, we can also define linear exponential comonads on non-symmetric monoidal categories <cit.>.
In <cit.>, we defined exponential relational planar linear combinatory algebras (exp-rPLCAs) as pairs of a bi--algebra and an applicative endomorphism on it, that give rise to linear exponential comonads on (non-symmetric) monoidal bi-closed categories.
The definition of exp-rPLCAs in <cit.> are the reformulation of the definition of (relational) LCAs to bi--algebras.
In this paper, we generalize exp-rPLCAs a bit by changing “bi--algebras" to “-algebras," and then similarly call the generalized ones as exp-rPLCAs.
New exp-rPLCAs give rise to linear exponential comonads on (non-symmetric) monoidal closed categories, and correspond to adjoint pairs of applicative morphisms between -algebras and PCAs.
There are also modalities on (non-symmetric) linear calculus other than the linear exponential modality.
The exchange modality, investigated in <cit.>, is a modality connecting a commutative logic and a non-commutative logic (the Lambek calculus).
Categorical models of the exchange modality are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, which are called Lambek adjoint models.
In <cit.>, we defined exchange relational planar linear combinatory algebras (exch-rPLCAs) that give rise to Lambek adjoint models.
In this paper, like exp-rPLCAs, we reformulate exch-rPLCAs for -algebras.
New exch-rPLCAs correspond to adjoint pairs between -algebras and -algebras, and give rise to monoidal adjunctions between (non-symmetric) monoidal closed categories and SMCCs, that are models of the exchange modality based on the non-symmetric multiplicative intuitionistic linear logic (that is, a fragment of the Lambek calculus without bi-closedness).
Finally in section <ref> and <ref>, we discuss related work, summarize conclusion and describe future work.
§ BACKGROUND
§.§ Applicative structures and categories of assemblies
First we recall basic notions of the categorical realizability.
Notations and definitions in this subsection are from <cit.>.
A partial applicative structure is a pair of a set and a partial binary operation (x ,y) ↦ x · y on .
When the binary operation is total, we say is a total applicative structure.
We often omit · and write x · y as x y simply.
We also omit unnecessary parentheses assuming that application joins from the left.
For instance, x y (z w) denotes (x · y) · (z · w).
In the sequel, we use two notations “↓” and “≃.”
We write x y ↓ for that x · y is defined.
“≃” denotes the Kleene equality, which means that if the one side of the equation is defined then the other side is also defined and both sides are equal.
Let be a partial applicative structure.
* An assembly on is a pair X = (|X|,_X), where |X| is a set and _X is a function sending x ∈ |X| to a non-empty subset x_X of .
We call elements of x_X realizers of x.
* For assemblies X and Y on , a map of assemblies f:X Y is a function f:|X| |Y| such that there exists an element r ∈ realizing f.
Here we say “r realizes f” or “r is a realizer of f” if r satisfies that
∀ x ∈ |X|, ∀ a ∈x_X, r a ↓ and r a ∈f(x)_Y.
If we assume two additional conditions on a partial applicative structure, we can construct two kinds of categories.
Let be a partial applicative structure satisfying that:
* has an element such that ∀ x ∈, x ↓ and x = x;
* for any r_1, r_2 ∈, there exists r ∈ such that ∀ x ∈, r x ≃ r_1 (r_2 x).
Then we construct categories as follows.
* The category , called the category of assemblies on , consists of assemblies on as its objects and maps of assemblies as its maps.
Identity maps and composition maps are the same as those of (the category of sets and functions).
* We call an assembly X a modest set on if X satisfies
∀ x,x' ∈ |X|, x ≠ x' ⇒x_X ∩x'_X = ∅.
The category , called the category of modest sets on , is the full subcategory of whose objects are modest sets on .
We need above two conditions <ref> and <ref> to give realizers of the identities and composition maps.
Identities are realized by .
For maps f_1:Y Z realized by r_1 and f_2:X Y realized by r_2, we obtain r given by the condition <ref>, which realizes f_1 ∘ f_2.
Since all the classes of applicative structures introduced later satisfy these conditions, the conditions are not much of a problem in this paper.
Intuitively, the category (and ) can be understood as the category of “-computable universe.”
For an assembly X=(|X|,_X) on , elements of x_X can be seen as “machine-level interpretations” of x ∈ |X|.
For a map f:X Y of , the realizer r of f can be seen as “machine implementation” of f, since r takes interpretations of x (that is, elements of x_X) as input and computes interpretations of f(x) (that is, elements of f(x)_Y).
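To make this intuition concrete, the following Python sketch is entirely illustrative: finite carriers, assemblies represented as dictionaries from elements to sets of realizers, and a partial application returning None when undefined; none of this is the paper's notation. It checks whether a given element realizes a map of assemblies:

def realizes(app, X, Y, f, r):
    """Check that r realizes the function f : |X| -> |Y|.

    app : partial binary operation, app(a, b) returns the result or None if undefined
    X, Y: assemblies, given as dicts  element |-> non-empty set of realizers
    f   : the underlying set-theoretic function
    r   : the candidate realizer
    """
    for x, realizers_of_x in X.items():
        for a in realizers_of_x:
            ra = app(r, a)
            if ra is None or ra not in Y[f(x)]:   # r a must be defined and must realize f(x)
                return False
    return True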
§.§ PCAs and Cartesian closed categories
Since is the category of -computable universe, the structure of depends on the computational structure of .
When applicative structures belong to a specific class, specific categorical structures may be found on the categories of assemblies.
The best known such class is the class of PCAs, which induce Cartesian closed categories of assemblies.
Results in this subsection are from <cit.>.
A partial combinatory algebra (PCA) is a partial applicative structure which contains two special elements k and s such that:
* ∀ x,y, k x ↓, k x y ↓ and k x y = x;
* ∀ x,y,z, s x ↓, s x y ↓ and s x y z ≃ x z (y z).
When a PCA is a total applicative structure, we say it is an sk-algebra.
The most fundamental example of PCAs is the untyped lambda calculus.
Suppose infinite supply of variables x,y,z,…. Untyped lambda terms are terms constructed from the following six rules:
(identity)
x ⊢ x
Γ⊢ M
Δ⊢ N
(application)
Γ , Δ⊢ MN
Γ , x ⊢ M
(abstraction)
Γ⊢λ x.M
Γ , x , y, Δ⊢ M
(exchange)
Γ , y, x, Δ⊢ M
Γ , x , y ⊢ M
(contraction)
Γ , x ⊢ M[x/y]
Γ⊢ M
(weakening)
Γ, x ⊢ M
Here, in the application rule, Γ and Δ are sequences of distinct variables and contain no common variables.
In the contraction rule, M[x/y] denotes the term obtained by substituting x for all free y in M.
In the weakening rule, x is a variable not contained in Γ.
Note that abstraction rules are only applied to the rightmost variables. In order to
apply the abstraction rule to a variable in a different position, we need to use the exchange rule several times and move the variable to the rightmost place.
We define β-equivalence relation on lambda terms as the congruence of the relation (λ x.M)N ∼ M[N/x].
Untyped lambda terms modulo =_β form a PCA (actually an sk-algebra). The underlying set of the PCA consists of β-equivalence classes of untyped closed lambda terms (i.e., lambda terms with no free variables) and the application is defined as that of lambda terms. In this example, λ xyz.xz(yz) is the representative of s and λ xy.x is the representative of k.
The correspondence between PCAs and the lambda calculus is more than just an example. PCAs have an important property called the combinatory completeness, which gives interpretations of “computable functions” on by elements of itself.
First, we give the definition of polynomials over an applicative structure (not restricted to PCAs).
Let be a partial applicative structure.
A polynomial over is a syntactic expression generated by variables, elements of and the application of .
For two polynomials M and N over , M ≃ N means that
M[a_1/x_1,… ,a_n/x_n] ≃ N[a_1/x_1,… ,a_n/x_n]
holds in for any a_1,… ,a_n ∈, where { x_1,… ,x_n } contains all the variables of M and N.
Let be a PCA and M be a polynomial over .
For any variable x, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x and M' a ≃ M[a/x] holds for all a ∈.
We write such an M' as λ^*x.M.
We define x.M by induction on the structure of M.
* λ^*x.x := s k k
* λ^*x.y := k y (when x ≠ y)
* λ^*x.MN := s (λ^*x.M)(λ^*x.N)
For the special case of the above proposition, any closed lambda term is β-equivalent to some term constructed from λ xy.x and λ xyz.xz(yz) using applications.
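The inductive clauses above translate directly into a small recursive procedure. The sketch below is ours (variables and combinators are strings, applications are pairs) and is meant only to illustrate the construction:

S, K = "S", "K"   # the two distinguished elements of the PCA

def abstract(x, M):
    """Bracket abstraction for a PCA: the result M' satisfies M' a = M[a/x]."""
    if M == x:
        return ((S, K), K)                               # s k k behaves as the identity
    if isinstance(M, tuple):                             # application M = (P, Q)
        P, Q = M
        return ((S, abstract(x, P)), abstract(x, Q))     # s (lam*x.P) (lam*x.Q)
    return (K, M)                                        # x does not occur in M

# For example, abstract("x", ("f", "x")) returns (("S", ("K", "f")), (("S", "K"), "K")),
# i.e. s (k f) (s k k), which sends a to (k f a) ((s k k) a) = f a, as expected.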
Using the combinatory completeness, we can give (and ) on a PCA the structure of Cartesian closed category (CCC).
When is a PCA, and are CCCs.
While this result is standard, we shall outline its proof for comparison with the parallel results on various classes of combinatory algebras to be developed in this paper.
First we prove the proposition for .
Let :=.
* By the combinatory completeness, has elements x.x and xyz.x(yz), which make satisfying the conditions <ref> and <ref> of Definition <ref>.
Thus is a category.
* For objects X and Y, the underlying set of the binary product X × Y is |X| × |Y|.
Realizers are defined as
(x,y)_X × Y := { t.tp q | p ∈x_X, q ∈y_Y }.
* For maps f:X X' realized by r_f and g:Y Y' realized by r_g, f × g is the function sending (x, y) to (f(x),g(y)).
A realizer for f × g does exists as u.u ( pqt.t(r_f p)(r_g q)).
* The underlying set of the terminal object 1 is the singleton {∗}.
Realizers are ∗_1 :=.
It is easy to see that this 1 satisfy the conditions of the terminal object.
* The projection π:X × Y X is the function sending (x,y) to x and has a realizer u.u( pq.p).
The projection π':X × Y Y is the function sending (x,y) to y and has a realizer u.u( pq.q).
It is easy to see that these π and π' satisfy the conditions of the projections of the Cartesian category.
* For objects X and Y, the underlying set of the exponential Y^X is _(X,Y).
Realizers are
f_Y^X := { r ∈|}.
* For maps f:X' X realized by r_f and g:Y Y' realized by r_g, the map g^f is the function sending a map h ∈_(X,Y) realized by r_h to g ∘ h ∘ f ∈_(X',Y') realized by v.r_g (r_h (r_f v)).
A realizer of g^f is uv.r_g (u (r_f v)).
* The adjunction Φ: (X × Y,Z) (X,Z^Y) is the function sending f:X × Y Z realized by r_f to the map Φ(f):x ↦ (y ↦ f(x,y)).
Φ(f) is realized by pq.r_f ( t.tpq).
For a map g:X Z^Y realized by r_g, Φ^-1(g):X × Y Z is the map sending (x,y) to g(x)(y).
Φ^-1(g) is realized by u.u r_g.
It is easy to see that this Φ satisfies the condition of the adjunction of the CCC.
Therefore, is a CCC.
Next we show that is a CCC.
Given modest sets X and Y on , we define the binary product X × Y in the same way as .
Here we can show that X × Y also is a modest set.
Suppose there is some a ∈ realizing different (x,y) and (x',y') of |X| × |Y|.
When we assume x ≠ x', though π (x,y) ≠π (x',y'), both sides have the same realizer ( u.u( pq.p))a.
It contradicts that X is a modest set.
The same contradiction is lead when y ≠ y'.
Therefore, different (x,y) and (x',y') do not have common realizers and X × Y is a modest set.
For modest sets X and Y on , we also define Y^X in the same way as .
We can show that Y^X also is a modest set.
Suppose there is some r realizing different f:X Y and g:X Y.
Take x ∈ |X| and a ∈x_X such that f(x) ≠ g(x).
Then r a is an element of both f(x)_Y and g(x)_Y.
However, it contradicts that Y is a modest set.
Therefore, Y^X is a modest set.
Hence, we can show that is a CCC by the same proof for .
In this proof, we use the combinatory completeness for the PCA a lot to give realizers for each assembly and map.
§.§ -algebras and symmetric monoidal closed categories
Given an applicative structure which has the different computational structure from PCAs, we obtain with a different categorical structure from CCCs.
In this subsection, we recall another well-known class of applicative structures called -algebras, which correspond to linear structures.
Results given in this subsection are from <cit.>.
A BCI-algebra is a total applicative structure which contains three elements B, C and I such that ∀ x, y, z, B x y z = x (y z), C x y z = x z y and I x = x.
Untyped linear lambda terms are untyped lambda terms constructed without using weakening and contraction rules (See Example <ref>).
That is, an untyped linear lambda term is an untyped lambda term whose each variable appears just once in the term.
Untyped closed linear lambda terms modulo form a -algebra.
Here λ xyz.x(yz), λ xyz.xzy and λ x.x are the representatives of B, C and I respectively.
Let be a -algebra and M be a polynomial over .
For any variable x appearing exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈.
We define x.M by induction on the structure of M.
* λ^*x.x := I
* λ^*x.MN := C (λ^*x.M) N (x ∈ FV(M))
B M (λ^*x.N) (x ∈ FV(N))
The combinatory completeness for a -algebra allows interpreting only linear lambda terms, not the whole of lambda terms.
Thus some realizers used in the proof of Proposition <ref> (such as u.u( pq.p)) may not exist in a -algebra.
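The linear abstraction can be sketched in the same style as before; the exchange combinator handles the case where x occurs in the function part and the composition combinator the case where it occurs in the argument part (representation and naming are ours):

B, C, I = "B", "C", "I"   # the three distinguished elements of a BCI-algebra

def occurs(x, M):
    if isinstance(M, tuple):
        return occurs(x, M[0]) or occurs(x, M[1])
    return M == x

def linear_abstract(x, M):
    """Bracket abstraction for a BCI-algebra; x must occur exactly once in M."""
    if M == x:
        return I
    P, Q = M                                          # M is an application P Q
    if occurs(x, P):
        return ((C, linear_abstract(x, P)), Q)        # (C (lam*x.P) Q) a = (lam*x.P) a Q = P[a/x] Q
    return ((B, P), linear_abstract(x, Q))            # (B P (lam*x.Q)) a = P Q[a/x]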
For -algebras, the categories of assemblies have other categorical structure than CCCs.
When is a -algebra, is a symmetric monoidal closed category (SMCC).
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|.
Realizers are defined as x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }.
* For maps f:X X' realized by r_f and g:Y Y' realized by r_g, the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y),
which is realized by u.u ( pqt.t(r_f p)(r_g q)).
* The underlying set of the unit object I is the singleton {∗}.
The realizer is ∗_I := {}.
* The right unitor ρ_X : X X ⊗ I is the function sending x to x ⊗∗, which is realized by p.( t.tp).
The inverse ρ^-1 is realized by u.u( pq.qp).
* Also we can take the left unitor λ_X : I ⊗ X X as the function (∗⊗ x)↦ x and the associator
α_XYZ : X ⊗ (Y ⊗ Z) (X ⊗ Y) ⊗ Z as x ⊗ (y ⊗ z) ↦ (x ⊗ y) ⊗ z.
* The symmetry σ_XY : X ⊗ Y Y ⊗ X is the function sending x ⊗ y to y ⊗ x, which is realized by u.u( pqt.tqp).
* For objects X and Y, the underlying set of the exponential[For an SMCC , the exponential is often denoted using the symbol satisfying
(X⊗ Y,Z)≅(X,Y Z).
However, here we use the reversed symbol satisfying (X⊗ Y,Z)≅(X,Z Y) to be consistent with the notation of monoidal bi-closed categories in Section <ref>.]
Y X is _(X,Y).
Realizers are f_Y X := { r ∈|}.
* For maps f:X' X realized by r_f and g:Y Y' realized by r_g, the map g f is the function sending h:X Y to g ∘ h ∘ f :X' Y'.
A realizer of g f is uv.r_g (u (r_f v)).
* The adjunction Φ sends a map f:X ⊗ Y Z to the map Φ(f):x ↦ (y ↦ f(x ⊗ y)).
It is easy to see that the above components satisfy the axioms of the SMCC.
The above proof is almost the same as the proof of on a PCA being a CCC.
However, when we prove that on a -algebra is an SMCC, we cannot use the same proof as for PCAs.
That is because for modest sets X and Y on a -algebra , X ⊗ Y given by the same way as is not generally a modest set.
The following proposition is proven with a modification to resolve the problem.
When is a -algebra, is an SMCC.
Let G: ↪ be the inclusion functor and F: be the left adjoint of G.
F is the functor sending an assembly X = (|X|,_X) to a modest set Z = (|X|/≈,_Z).
Here the relation “≈” is the transitive closure of the relation “∼” defined as x ∼ x' :⇔ x_X ∩x'_X ≠∅.
The realizers of z ∈ |Z| are defined as z_Z := ⋃_x ∈ zx_X.
F sends a map f of to the canonical map of , which is realized by realizers of f.
We define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY).
We can prove Proposition <ref> by the same proof of Proposition <ref> by replacing ⊗ to ⊠.
More general about constructing monoidal structures on reflexive full subcategories, see <cit.>.
While we define -algebras as a class of total applicative structures, we also can define “partial -algebras” naturally.
For a partial -algebra , we can see that:
* is not generally an SMCC;
* adding an extra element (which means “undefined”), naturally extends to a total -algebra _;
* is the full subcategory of _.
The same discussion is given in <cit.>.
§.§ Applicative morphisms
In this subsection, we recall the notion of applicative morphisms from <cit.>.
Let be a partial applicative structure satisfying:
* has an element such that ∀ x ∈, x ↓ and x = x;
* for any r_1, r_2 ∈, there exists r ∈ such that ∀ x ∈, r x ≃ r_1 (r_2 x).
Let be another partial applicative structure satisfying the same conditions.
An applicative morphism γ : is a total relation from to such that there exists a realizer r_γ∈ of γ satisfying that
∀ a, a' ∈, ∀ b ∈γ a, ∀ b' ∈γ a', r_γ b b' ∈γ (a a') whenever a a' ↓.
We say γ is functional when γ a is a singleton for each a ∈, and simply write γ a = b for γ a = { b }.
Our definition is slightly more general than the definition in <cit.> that makes sense only on PCAs.
We define applicative morphisms between applicative structures satisfying the conditions of Definition <ref>.
We assume these conditions to realize identity and composition morphisms.
By the condition <ref>, the identity applicative morphism id: can be realized by .
For applicative morphisms γ: and δ: 𝒞 realized by r_γ and r_δ, taking p ∈δ r_γ, the composition δ∘γ can be realized by r ∈ || such that ∀ b ∈ ||, r b ≃ r_δ (r_δ p b).
The condition <ref> gives such a realizer r.
In the sequel, for an applicative morphism γ, when we write an indexed element r_γ, it denotes a realizer of γ.
Also, for a ∈ and S,S' ⊆,
when we write a S, it denotes the set { as | s ∈ S } and we consider as↓ for all s ∈ S,
and when we write S S', it denotes the set
{ ss'| s ∈ S,s' ∈ S' } and we consider ss' ↓ for all s ∈ S and s' ∈ S'.
For instance, the condition that γ is an applicative morphism is denoted as
∃ r_γ∈, ∀ a,a' ∈, aa' ↓⇒ r_γ (γ a)(γ a') ⊆γ (aa').
From applicative morphisms, we can obtain functors between the categories of assemblies.
For an applicative morphism γ:, : is the functor sending an object (|X|, _X) to (|X|,γ_X) and sending a map to the same function.
For a map f in realized by r_f, f is realized by elements of r_γ (γ r_f).
It is obvious that satisfies (id)=id and (g ∘ f) = (g) ∘(f).
Next we recall the preorder relation ≼ between applicative morphisms.
For two applicative morphisms γ, δ:, γ≼δ iff there is r ∈ such that ∀ a ∈, r (γ a) ⊆δ a.
Using the conditions <ref> and <ref> of Definition <ref>, we can easily show that ≼ is a preorder.
By the preorder ≼, we can define adjunctions and comonads on applicative structures.
For two applicative morphisms γ: and δ:, γ is a right adjoint of δ iff δ∘γ≼ id_ and id_≼γ∘δ.
We write (δ⊣γ): for these settings.
An applicative morphism γ: is called comonadic when has two elements and such that ∀ a ∈, (γ a) ⊆{ a } and (γ a) ⊆γ (γ a).
For adjunctions of applicative morphisms, the following properties hold.
* An adjoint pair of applicative morphisms (δ⊣γ): gives rise to an adjoint pair (⊣) :.
* For an adjoint pair of applicative morphisms (δ⊣γ):, δ∘γ : is a comonadic applicative morphism.
* For a comonadic applicative morphism γ:, is a comonad on .
In Definition <ref>, an applicative morphism γ: gives rise to the functor :.
However, here we cannot generally obtain a functor : since x_X ∩x'_X = ∅ does not imply γ(x_X) ∩γ(x'_X) = ∅ and X may not be in .
However, for a comonadic applicative morphism γ:, can be restricted to the endofunctor on .
Indeed, for a modest set X on , if a ∈γ(x_X) ∩γ(x'_X) then
a is an element of x_X ∩x'_X and thus x = x' concludes.
Furthermore, this is a comonad on .
§.§ Linear combinatory algebras
In the previous subsection, we saw comonadic applicative morphisms give rise to comonads, and adjoint pairs of applicative morphisms give rise to adjoint pairs between categories of assemblies.
Using this construction, we can obtain linear exponential comonads and linear-non-linear models for the linear logic.
In this subsection, we recall notions of linear combinatory algebras (LCAs) from <cit.> and relational linear combinatory algebras (rLCAs) from <cit.>.
A linear combinatory algebra (LCA) consists of:
* a -algebra ;
* a functional comonadic applicative morphism (, , ) on ;
* an element ∈ such that ∀ x, y ∈, x ( y) = x;
* an element ∈ such that ∀ x, y ∈, x ( y) = x ( y)( y).
As we get comonads from comonadic applicative morphism, from LCAs, we get linear exponential comonads, which are categorical models of the linear exponential modality of the linear logic.
Let be a symmetric monoidal category.
A linear exponential comonad consists of the following data.
* A symmetric monoidal comonad (!, δ, ϵ, m, m_I).
Here ! is an endofunctor on ,
δ_X :!X !!X and ϵ_X :!X X are monoidal natural transformations for the comultiplication and the counit.
The natural transformation m_X,Y:!X ⊗ !Y !(X ⊗ Y) and the map m_I:I !I make ! be a monoidal functor.
* Monoidal natural transformations e_X:!X I and d_X:!X !X ⊗ !X.
Here these components need satisfy the following conditions for each X.
* (!X,d_X,e_X) is a commutative comonoid in .
* e_X and d_X are coalgebra morphisms.
* δ_X is a comonoid morphism.
For an LCA (,), is a linear exponential comonad on the SMCC (or ).
LCAs can be generalized from functional applicative morphisms to not functional ones, called rLCAs.
A relational linear combinatory algebra (rLCA) consists of:
* a -algebra ;
* a comonadic applicative morphism (, , ) on such that ≼ [ ,] and ≼ k_i.
Here [,] and k_i are applicative morphisms defined as [ ,] (x) := { t.ta a' | a,a' ∈ x } and k_i (x) := {}.
Next proposition shows the correspondence between LCAs, rLCAs and adjoint pairs between -algebras and PCAs.
* Let be a -algebra and be a PCA.
For an adjoint pair
(δ⊣γ):, (, δ∘γ) is an rLCA.
* Let (,) be an LCA.
The applicative structure _ = (, @) defined by x @ y := x ( y) is a PCA.
Furthermore, γ: _ defined as the identity function and δ :_ sending a ∈ to a form an adjoint pair (δ⊣γ):_.
From rLCAs, we also get linear exponential comonads.
Moreover, we get linear-non-linear models <cit.> on categories of assemblies or categories of modest sets.
A linear-non-linear model is a symmetric monoidal adjunction
(F ⊣ G): for an SMCC and a CCC .
For an rLCA (,), is a linear exponential comonad on the SMCC (or ).
Furthermore, the co-Kleisli adjunction between and _ (or and _) is symmetric monoidal.
Thus the adjunction forms a linear-non-linear model.
§ CONSTRUCTING NON-SYMMETRIC CATEGORICAL STRUCTURES
In section <ref>, we saw two known results that PCAs/-algebras induce CCCs/SMCCs as the categories of assemblies and the categories of modest sets.
It is natural to try to extend these results to other classes of applicative structures, and we introduce such new classes inducing certain “non-symmetric” categorical structures.
In this section we recall -algebras, -algebras and bi--algebras from <cit.>, and introduce a new class -algebras.
§.§ -algebras and closed multicategories
When we try to obtain some non-symmetric categorical structures on categories of assemblies, we will find a subtle problem.
In a -algebra , the -combinator expresses exchanging the order of arguments, and is the source of the symmetric structures of .
So one might guess that simply omitting would be sufficient for getting a non-symmetric categorical structure on .
However, this does not work well; and alone are too weak to give an interesting structure on .
For instance, if we want the internal hom functor (- -) on on a total applicative structure , we need certain exchanging operation in even if the closed structure is not symmetric.
Take an object A of as |A| := and a_A := { a }.
For maps f,g:A A, to realize g f, we need a realizer r which satisfies ∀ a, a' ∈, r a a' = r_g (a (r_f a')).
This r acts as the exchanging to move the information of r_f from the left of a to the right of a.
(In a -algebra, such r exists as ( ( r_g))r_f.)
Therefore, when we want some non-symmetric categorical structures such as non-symmetric closed structures, we need to prepare some “more restricted exchanging” than the -combinator.
One way to resolve the problem is to supply not a combinator but the unary operation () for exchanging.
In this subsection, we introduce -algebras from <cit.>, which induce non-symmetric closed multicategories.
A total applicative structure is a -algebra iff it contains , and a for each a ∈, where a is an element of such that ∀ x ∈, a x = x a.
This () enable restricted exchanges than the -combinator.
Since in a -algebra, a satisfies the axiom of a, all -algebras are also -algebras.
The definition of -algebras may seem strange compared to the definitions of PCAs or -algebras.
However, the definition of -algebras is natural in the aspect of having a good correspondence with the “planar" lambda calculus.
Untyped planar lambda terms are untyped lambda terms constructed without using weakening, contraction nor exchange rules (See Example <ref>).
That is, untyped planar lambda terms are untyped linear lambda terms such that for each subterm λ x.M, x is the rightmost free variable of M.
Untyped closed planar lambda terms modulo form a -algebra, which we call in this paper.
Here λ xyz.x(yz) and λ x.x are the representatives of B and I respectively. Given a representative M of an element a, λ x.xM is also a closed planar term and is the representative of a^∙.
The definition of construction rules of planar lambda terms has two different styles. In our definition, the abstraction rule is only allowed for the rightmost variable. Such a style is seen in <cit.>. On the other hand, there is also the definition that the abstraction rule is only allowed for the leftmost variable, as in <cit.>. Here we employ the former style for preserving the planarity of terms under the βη-conversions.
Let be a -algebra and M be a polynomial over .
For the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial x.M such that the free variables of x.M are the free variables of M excluding x and ( x.M) a = M[a/x] for all a ∈.
We define x.M by induction on the structure of M.
* λ^*x.x := I
* λ^*x.MN := B N^∙ (λ^*x.M) (x ∈ FV(M))
B M (λ^*x.N) (x ∈ FV(N))
Note that for λ^*x.MN, x is the rightmost free variable in MN, and thus, if x is in FV(M), then N has no free variables and hence N^∙ is defined.
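In the same illustrative style, planar abstraction only ever needs the composition combinator together with the unary operation; below, ("dot", a) stands for the element a^∙ with a^∙ x = x a (the representation and the reserved tag are our rendering):

B, I, DOT = "B", "I", "dot"

def occurs(x, M):
    if isinstance(M, tuple):
        return any(occurs(x, part) for part in M)
    return M == x

def planar_abstract(x, M):
    """Planar bracket abstraction; x must be the rightmost free variable of M and occur once."""
    if M == x:
        return I
    P, Q = M                                                   # M is an application P Q
    if occurs(x, P):                                           # planarity: Q is then closed
        return ((B, (DOT, Q)), planar_abstract(x, P))          # (B Q^∙ (lam*x.P)) a = P[a/x] Q
    return ((B, P), planar_abstract(x, Q))                     # (B P (lam*x.Q)) a = P Q[a/x]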
Then we show -algebras induce certain categorical structures on the categories of assemblies.
First we recall the definition of closed multicategories from <cit.>.
A multicategory consists of the following data:
* a collection Ob();
* for each n ≥ 0 and X_1 , X_2 , … , X_n , Y ∈ Ob(), a set (X_1 , … , X_n ; Y).
We often write f ∈ (X_1 , … , X_n ; Y) as f:X_1 , … , X_n Y;
* for each X ∈ Ob(), an element id_X ∈ (X ; X), called the identity map;
* for each n, m_1 , m_2 , … , m_n ∈ℕ and X^k_j , Y_k , Z (1 ≤ k ≤ n , 1 ≤ j ≤ m_k), a function
∘ : (Y_1 , … , Y_n ; Z) ×∏_k^n (X^k_1 , … , X^k_m_k ;Y_k) (X^1_1 , … ,X^1_m_1 ,X^2_1 , … , X^n_m_n ; Z)
called the composition. g ∘ (f_1 , … , f_n) denotes the composition of g ∈ (Y_1 , … , Y_n ; Z) and f_k ∈ (X^k_1 , … , X^k_m_k ; Y_k) (1 ≤ k ≤ n).
The compositions satisfy associativity and identity axioms.
A closed multicategory consists of the following data:
* a multicategory ;
* for each X_1 , X_2 , … , X_n , Y ∈ Ob(), an object (X_1 , X_2 , … , X_n ; Y), called the internal hom object;
* for each X_1 , … , X_n , Y ∈ Ob(), a map
ev_X_1 , … , X_n ; Y : (X_1 , … , X_n ; Y), X_1
, … , X_n → Y,
called the evaluation map such that ∀ Z_1 , Z_2 , … , Z_m ∈ Ob(), the function
ϕ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y : ( Z_1 , … , Z_m ;
(X_1 , … , X_n ; Y) ) →(Z_1 , … , Z_m, X_1 , … , X_n ; Y)
sending f to ev_X_1 , … , X_n ; Y∘ (f, id_X_1 , … , id_X_n ) is invertible.
We write the inverse function Λ_Z_1 , … , Z_m ; X_1 , … , X_n ; Y.
Here our definition of closed multicategories is different from the original definition in <cit.> in that the order of objects of domain of maps are reversed.
This is for ease to read by matching the orders of objects and realizers.
When is a -algebra, and are closed multicategories.
Let :=.
Since have the -combinator and the -combinator, is a category.
First we give a bi-functor (- -):^op× as follows:
* For X,Y ∈, Y X is an assembly whose underlying set is _(X,Y) and f_Y X := { r |}.
* For two maps f:X' X and g:Y Y' in , (g f):(Y X) (Y' X') is the function sending h ∈_(X,Y) to g ∘ h ∘ f.
Given realizers r_f of f and r_g of g, (g f) is realized by
uv.r_g (u (r_f v)).
Thus, for any maps f and g in , (g f) certainly is a map of .
It is easy to see that (- -) preserves identities and compositions.
Next we give the structure of closed multicategory.
* For an object X ∈, (;X) := |X| and (;X) := X.
* For objects X_1, X_2, … , X_n, Y ∈ (n ≥ 1), we define the internal hom object
(X_1,… ,X_n ;Y) := (… ((Y X_n) X_n-1)… ) X_1
and (X_1,… ,X_n ;Y) is the underlying set of (X_1,… ,X_n ;Y).
We write f(x_1)(x_2)… (x_n) as f(x_1,… ,x_n) for f ∈(X_1,… ,X_n ;Y) and x_i ∈ |X_i|.
* Identity maps id_X ∈(X;X) (X ∈) are the same as identity maps of .
* Suppose maps g∈(Y_1,… ,Y_n;Z) and f_k ∈(X^k_1,… ,X^k_m_k;Y_k) (1 ≤ k ≤ n).
We define g ∘ (f_1,… ,f_n) as the function that receives
x^1_1,… , x^1_m_1 ,… , x^n_1 ,… , x^n_m_n
and returns g(f_1(x^1_1,… ,x^1_m_1) ,… , f_n(x^n_1,… ,x^n_m_n)).
Here when m_i = 0 for some 1 ≤ i ≤ n, we define
g ∘ (f_1,… ,f_n) by giving y_i ∈ |Y_i| pointed by f_i ∈(;Y_i) as the i-th argument of g.
Given realizers q ∈g_(Y_1,… ,Y_n;Z) and p_k ∈f_k_(X^k_1,… ,X^k_m_k;Y_k), by the combinatory completeness for -algebras, there is r ∈ such that
r a^1_1 … a^1_m_1… a^n_1 … a^n_m_n = q (p_1 a^1_1 … a^1_m_1)… (p_n a^n_1… a^n_m_n)
holds for any a^1_1,… ,a^n_m_n∈.
This r realizes g ∘ (f_1,… ,f_n) and thus g ∘ (f_1,… ,f_n) is in (X^1_1,… ,X^n_m_n;Z).
* The evaluation map ev_X_1 ,… , X_n ; Y : (X_1 ,… , X_n ; Y), X_1 ,… , X_n Y is given as the function that receives f,x_1,… ,x_n and returns f(x_1,… ,x_n), which is realized by .
Then, ϕ_Z_1,… ,Z_m;X_1,… ,X_n;Y is invertible as a function and for g ∈(Z_1,… ,Z_m,X_1,… ,X_n;Y), Λ (g) is indeed in (Z_1,… ,Z_m;(X_1,… ,X_n;Y)) since it is realized by realizers of g.
Therefore, is a closed multicateogry.
For , we can use the same proof as for .
While we define -algebras as a class of total applicative structures, we also can define “partial -algebra” naturally.
For a partial -algebra , () is a total unary operation on such that ∀ a, x ∈, a x ≃ x a.
Unlike the case of partial -algebras as in Remark <ref>, the proof of Proposition <ref> is applicable to the case of partial -algebras.
§.§ -algebras and closed categories
In this subsection, we recall a class of applicative structures from <cit.>, which induce closed categories of assemblies and modest sets.
First we recall the definition of closed categories in <cit.>.
A closed category consists of the following data:
* a locally small category ;
* a functor (- -): ^op×, called the internal hom functor[While the internal hom object in the closed category is often written as (X,Y), [X,Y] or X Y, here we denote Y X to be consistent with other categorical structures in this paper.];
* an object I, called the unit object;
* a natural isomorphism i_X : (X I) X;
* an extranatural transformation j_X : I (X X);
* a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X,
such that the following axioms hold:
* ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X);
* ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X);
* ∀ X,Y,Z,W ∈, the two composites
(id L_Y,Z^X) ∘ L_(Z X),(W X)^(Y X) ∘ L_Z,W^X and (L_Y,W^X id) ∘ L_Z,W^Y
from (W Z) to ((W X) (Y X)) (Z Y) coincide (i.e., the associativity coherence diagram for L commutes);
* ∀ X,Y ∈, L_X,Y^I ∘ (i_Y id_X) = id_(Y I) i_X;
* ∀ X,Y ∈, the function γ : (X,Y) (I , (Y X)) sending f:X Y to
(f id_X) ∘ j_X is invertible.
Closed categories are something like monoidal closed categories without tensor products. That is, categories with internal hom functors which are defined directly, not via tensor products and adjunctions.
The structures of closed categories are very similar to the structures of closed multicategories.
As shown in <cit.>, the category of closed categories are cat-equivalent to the category of closed multicategories with unit objects.
However, when we want to construct (non-symmetric) closed categories as categories of assemblies, it is not sufficient that the applicative structures are -algebras, since realizers for i^-1_X : X (X I) may not exist.
Thus, we add another condition to a -algebra to realize i^-1_X and obtain the following definition.
A -algebra is a -algebra which contains an element such that ∀ a ∈, a = a.
In -algebras, the role we expect to is to eliminate the “harmless" second argument, which does not necessarily eliminate .
Even without specifying , we can define the same class as -algebras.
For instance, for a -algebra , suppose there is ^×∈ such that ∀ a ∈, ^× a = a.
Then this is a -algebra since xy.^× x (y ) satisfies the axiom of .
Conversely, for a -algebra, we can take ^× := xy. x (y ) and thus -algebras and ^×()-algebras are the same classes.
in Example <ref> is a -algebra.
Since the planar lambda calculus has the strongly normalizing property, for any closed planar term M, there are some u and N such that M λ u.N. Then
(λ xyz.x(yz)) M (λ v.v) λ z.M((λ v.v)z)
λ z.(λ u.N)z
λ z.N[z/u]
=_α M
and thus λ xyz.x(yz) represents .
Since (which nicely corresponds to -algebras) is also a -algebra, one might suspect that -algebras and -algebras are the same class.
However, these two classes are different ones.
Later in Section <ref>, we will discuss an example that separates classes of -algebras and -algebras (Proposition <ref>).
The next example based on an ordered group is from <cit.>.
(However, here we reverse the direction of the implication symbol of the original example in <cit.>.)
Take an ordered group (G,·,e,≤).
Let T be a set of elements constructed grammatically as follows:
t ::= g | t t' (g ∈ G).
That is, T is a set of binary trees whose leaves are labeled by elements of G.
We further define a function | | :T G by induction: |g| := g and |t_2 t_1| := |t_2| · |t_1|^-1.
Let be the powerset of { t ∈ T | e ≤ |t| }.
Then we can get a -algebra by :
* For M,N ∈, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T }.
Here joins from the left.
* := { t_1 t_1 | t_1 ∈ T }
* := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T }.
* For M ∈, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T }.
This example is based on Comod(G) introduced in <cit.>, which is a category of sets and relations equipped with G valued functions.
For any (not necessarily ordered) group G, Comod(G) is a pivotal category.
is a set of maps from the unit object to a reflexive object in (ordered) Comod(G).
The structure of depends on G.
For instance,
{ (t_3 t_2 t_1) (t_3 t_1 t_2) | t_1, t_2, t_3 ∈ T }
acts as the -combinator whenever G is Abelian.
The above later appears several times as examples of applicative structures of other classes (Example <ref>, <ref>, <ref>).
When is a -algebra, and are closed categories.
Let :=.
We give the same bi-functor (- -):^op× as in the proof of Proposition <ref>.
* We define the unit object I as ({∗}, _I), where ∗_I := {}.
* j_X is the function sending ∗ to id_X, which is realized by .
* i_X is the function sending (f:∗↦ x) to x, which is realized by . The inverse i_X^-1 is realized by .
* L_Y,Z^X is the function sending g to the function (f ↦ g ∘ f), which is realized by .
* γ is invertible. Indeed, γ^-1 is the function sending g:I (Y X) to the map g(∗):X Y.
It is easy to verify that j, i and L have naturality and satisfy the axioms of the closed category.
For , we can use the same proof for .
While we define -algebras as a class of total applicative structures, we also can define “partial -algebra” naturally.
For a partial -algebra , satisfies that ∀ a ∈, a ↓ and a = a.
Proposition <ref> also holds in the case of partial -algebras.
§.§ -algebras and monoidal closed categories
In the previous two subsections, we obtain closed multicategories and closed categories as categories of assemblies.
Next we further attempt to obtain a richer categorical structure, the (non-symmetric) tensor products, by categorical realizability.
First, let us consider whether we can realize products by a -algebra in the same way as PCAs and -algebras.
Even when we use a -algebra, we can take the object X ⊗ Y in the same way as PCAs and -algebras (See the proofs of Proposition <ref> and <ref>).
That is, for a -algebra , we take an assembly X ⊗ Y that the underlying set is |X| × |Y| and realizers are
x ⊗ y_X ⊗ Y := { t.tp q | p ∈x_X, q ∈y_Y }.
We also take the unit object in in the same way as -algebras: |I|:= {∗} and ∗_I := {}.
Then is this a monoidal category?
Now let us assume it is.
Take an assembly
A:= (,_A), where a_A := { a }.
Then since is a monoidal category, the unitor
A I ⊗ A has a realizer r, which satisfies that r a = t.t a.
Taking an elements C := xyz.r x ( ( ( w.r y ( w))))z, this C satisfies the axiom of the -combinator and make a -algebra.
In summary, when we attempt to make a non-symmetric monoidal category using a -algebra , it follows that is actually a -algebra and becomes an SMCC.
Therefore, we need some major modification on the definition of realizers of tensor products in to make a non-symmetric monoidal category.
One way to solve this problem is supposing a combinator expressing the “pairing" operation.
And we define realizers for tensor products as
x ⊗ y_X ⊗ Y := { pq | p ∈x_X, q ∈y_Y }.
Since pq itself cannot separate the data of p and q from pq, we need another combinator to decompose pq.
A -algebra is a -algebra which contains and such that ∀ x, y, z ∈, x ( y z) = x y z.
A fundamental example of -algebras is given as the untyped planar lambda calculus with tensor products.
Add the following term construction rules to the planar lambda calculus (Example <ref>).
Γ⊢ M
Δ⊢ N
(pair construction)
Γ , Δ⊢ M ⊗ N
Γ⊢ M
Δ, x, y ⊢ N
(pair deconstruction)
Δ, Γ⊢x ⊗ yMN
We define a relation ∼ on planar terms as the congruence of the following relations.
* (λ x.M)N ∼ M[N/x]
* M ∼λ x.Mx
* (x_1 ⊗ x_2M_1 ⊗ M_2N) ∼ N[M_1 /x_1][M_2 /x_2]
* M ∼ (x ⊗ yMx ⊗ y)
Let the equational relation be the reflexive, symmetric and transitive closure of ∼.
Closed terms modulo form a -algebra, which we call in this paper.
Here λ xyz.x(yz), λ tu.(x ⊗ yutxy) and λ xy. (x ⊗ y) are the representatives of , and respectively.
While the planar lambda calculus of Example <ref> (which does not have tensor products) does not need the η-equality to be a BI(-)^∙-algebra, the planar lambda calculus with tensor products of Example <ref> needs the βη-equality to use λ xyz.x(yz) as B.
Indeed, (λ xyz.x(yz)) ((λ u.u) ⊗ (λ v.v)) (λ w.w) is βη-equal to (λ u.u) ⊗ (λ v.v) but not β-equal to it.
When constructing linear lambda terms with tensor products, we often suppose a constant ⋆ for the unit ( <cit.>). For the above example, we can add the following rules to the term construction rules.
(star introduction)
⊢⋆
⊢ M
Γ⊢ N
(star elimination)
Γ⊢⋆MN
However, for our aim that constructing monoidal categories by categorical realizability, this ⋆ is not needed since we can use as the realizer of the unit instead of ⋆.
-algebras correspond to the lambda calculus with tensor products, which has components other than applications, unlike the ordinary/linear/planar lambda calculus.
Thus, we cannot state the combinatory completeness property for -algebras in the same way we have seen in previous sections.
Here we only show the special case of the combinatory completeness property for -algebras.
Any closed term M in is βη-equivalent to some term M that is constructed from := λ xyz.x(yz), := λ x.x, := λ tu.(x ⊗ yutxy) and := λ xy. x ⊗ y using the application and the unary operation ():M ↦λ x.xM.
We inductively define the function .
* x := x
* MN := M N
* M ⊗ N := M N
* x ⊗ yMN := λ xy.N M
* λ xy.M := λ x. λ y.M
* λ x.x :=
* λ x.MN := N λ x.M (x ∈ FV(M))
M λ x.N (x ∈ FV(N))
* λ x.M ⊗ N := N (λ x.M ) (x ∈ FV(M))
( M ) λ x.N (x ∈ FV(N))
* λ x.(y ⊗ zMN) := (λ yz.N ) λ x.M (x ∈ FV(M))
M (λ xyz.N ) (x ∈ FV(N))
It is easy to see that M M for any closed term M.
Next we give an example of -algebra similar to Example <ref>.
Take an ordered group (G,·,e,≤).
Let T' be a set whose elements are constructed grammatically as follows:
t ::= g | t t' | t ⊗ t' (g ∈ G).
That is, T' is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two colored by and ⊗.
We further define a function | | :T' G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 ⊗ t_2| := |t_1| · |t_2|.
Let |'| be the powerset of { t ∈ T' | e ≤ |t| }.
Then we can get a -algebra ' by |'|:
* For M,N ∈ |'|, MN := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T' }.
* := { t_1 t_1 | t_1 ∈ T' }.
* := { t_1 (t_2 t_2) t_1 | t_1,t_2 ∈ T' }.
* := { t_3 (t_1 ⊗ t_2) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T' }.
* := { (t_1 ⊗ t_2) t_2 t_1 | t_1,t_2 ∈ T' }.
* For M ∈ |'|, M := { t_2 (t_2 t_1) | t_1 ∈ M, t_2 ∈ T' }.
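To illustrate this construction, here is a small Haskell sketch that instantiates the ordered group with the integers under addition (so the unit is 0, inverse is negation, and ≤ is the usual order), models finite sets of trees as lists, and implements the measure and the application; the constructor names Node and Tens are placeholders for the two node colours, whose symbols are typeset with macros in the text.

```haskell
-- Binary trees with integer leaves and two node colours.
data Tree = Leaf Int | Node Tree Tree | Tens Tree Tree
  deriving (Eq, Show)

-- The measure |t|, defined by induction on trees.
measure :: Tree -> Int
measure (Leaf g)     = g
measure (Node t2 t1) = measure t2 - measure t1   -- |t2| · |t1|^(-1)
measure (Tens t1 t2) = measure t1 + measure t2   -- |t1| · |t2|

-- Elements of the algebra are sets of trees t with 0 <= |t|; finite such
-- sets are modelled as lists.  Application:
--   appl m n = { t2 | there is t1 in n with (Node t2 t1) in m }.
appl :: [Tree] -> [Tree] -> [Tree]
appl m n = [ t2 | Node t2 t1 <- m, t1' <- n, t1 == t1' ]
```

For instance, measure (Node (Leaf 3) (Leaf 1)) evaluates to 2, so this tree belongs to the carrier in this instantiation.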
In the above example, we prepare ⊗ in the construction of T' to express and .
However, in fact, in Example <ref> is already a -algebra even without ⊗.
In of Example <ref>, we have -combinator and -combinator as
* := { t_1 (e t_2) t_2 t_1 | t_1,t_2 ∈ T };
* := { t_3 (t_1 (e t_2)) (t_3 t_2 t_1) | t_1,t_2,t_3 ∈ T }.
This is a less standard example in that it uses t_1 (e t_2) in the role of t_1 ⊗ t_2.
For another example, just as we can construct an LCA (and the -algebra based on it) from a “reflexive object” (see <cit.> and <cit.>), we can obtain -algebras by appropriate settings.
Let (,⊗, I) be a monoidal closed category and
Φ:(- ⊗ X, -) (-,- X) be the adjunction.
Suppose an object V that has:
* an isomorphism r:(V V) V and s := r^-1;
* a retraction t: (V ⊗ V) ◃ V:u, that is, maps t:V ⊗ V V and u:V V ⊗ V such that u ∘ t =id_V ⊗ V.
Then the set of maps (I,V) is a -algebra.
* For maps M,N:I V, the application is defined as
I unitor I ⊗ I (s ∘ M) ⊗ N (V V) ⊗ V ev V.
* Take a map f:(V ⊗ V) ⊗ V V as
(V ⊗ V) ⊗ V associator V ⊗ (V ⊗ V) s ⊗ (ev ∘ (s ⊗ id)) (V V) ⊗ V ev V.
The -combinator is given as r ∘Φ (r ∘Φ (r ∘Φ(f)) ∘λ_V), where λ_V:I ⊗ V V is the unitor.
* The -combinator is r ∘Φ(λ_V).
* The -combinator given above satisfies the axiom of the -combinator.
Here we use
r ∘ s =id_V, and thus we need to assume r is an isomorphism (not merely a retraction).
* Take a map g:V ⊗ V V as
V ⊗ V s ⊗ u (V V) ⊗ (V ⊗ V) ev ∘ ( associator) V ⊗ V ev ∘ (s ⊗ id) V.
The -combinator is r ∘Φ (r ∘Φ(g) ∘λ_V).
* The -combinator is r ∘Φ (r ∘Φ(t) ∘λ_V).
* Given arbitrary M:I V, M is r ∘Φ(ev ∘ (s ⊗ M) ∘ρ_V ∘λ_V). Here ρ_V :V V ⊗ I is the unitor.
We will use the above -algebra later, at the end of Section <ref>.
Next we show that -algebras induce monoidal closed categories.
When is a -algebra, is a monoidal closed category.
Since is also a -algebra, we can use the combinatory completeness for the planar lambda calculus.
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as
x ⊗ y_X ⊗ Y := { p q | p ∈x_X, q ∈y_Y }.
* For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y).
A realizer for f ⊗ g is ( pq. (r_f p)(r_g q)).
* The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}.
* The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is .
A realizer of λ_X^-1 is .
* The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is p. p.
A realizer of ρ_X^-1 is .
* The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z).
A realizer of α_XYZ is ( ( pqr. p ( qr))).
A realizer of α_XYZ^-1 is ( pu. (M p) u), where M := pqr. ( pq) r.
* For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are defined as
f_Y X := { r |}.
* For f: X' X and g:Y Y', g f is the function sending a map h : X Y to g ∘ h ∘ f : X' Y'.
A realizer for g f is uv. r_g (u (r_f v)).
* The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by .
* For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies
ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by rp. r_f ( rp).
As in the case of -algebras (Propositions <ref> and <ref>), we cannot apply the same proof as that of Proposition <ref> to the case of .
We prove that on a -algebra is a monoidal closed category by the same modification used in the proof of Proposition <ref>.
That is, we take the inclusion functor G:↪ and the left adjoint F:, and define the tensor product ⊠ in as X ⊠ Y := F(GX ⊗ GY).
When is a -algebra, is a monoidal closed category.
For functors given by applicative morphisms between -algebras, the next properties hold.
Let _1 and _2 be -algebras and let γ :_1 _2 be an applicative morphism. Then :_1_2 is a lax monoidal functor.
A realizer for I_2 (I_1) is in the set u.u (γ(_1)).
A realizer for ( X) ⊗_2 ( Y) (X ⊗_1 Y) is in _2 ( pq.r_γ (r_γ (γ_1) p) q).
For -algebras _1 and _2 and an adjoint pair
(δ⊣γ) : _1 _2, the adjunction (⊣):_1_2 is monoidal.
We show that the left adjoint is strong monoidal.
Since is lax monoidal by the previous proposition, it is sufficient to show that there are realizers for maps I_2 I_1 and (X ⊗_2 Y) X ⊗_1 Y.
A realizer for the former is
x. (r_δ (δ ( y.y (γ_1))) x).
A realizer for the latter is
z. (r_δ (δ ( ( uv.r_γ (r_γ (γ) ( u) ) ( v)))) z).
Here ∈ |_1| is an element such that ∀ x ∈ |_1|, (δ (γ x)) =x and ∈ |_2| is an element such that ∀ y ∈ |_2|, y =γ (δ y), that are obtained by the assumption that γ and δ form an adjoint pair.
§.§ Bi--algebras and monoidal bi-closed categories
Let us consider once again why non-symmetric tensor products in categories of assemblies cannot be constructed from -algebras,
from the viewpoint of the “polymorphic encoding.”
In the second-order linear logic, a tensor product X ⊗ Y can be interpreted as
∀α . (X Y α) α. (This interpretation is seen in <cit.>, for instance.)
This formula (X Y α) α corresponds to the type inhabited by λ t.txy in the typed linear lambda calculus.
This correspondence is connected to the fact that (in a PCA or a -algebra) a realizer of x⊗ y ∈ |X ⊗ Y| is t.tp q for p ∈x_X and q ∈y_Y.
What matters here is that the interpretation X ⊗ Y ≅∀α . (X Y α) α holds only when the tensor product is symmetric.
In contrast, for the non-symmetric cases, X ⊗ Y is expressed as ∀α. (α Y X) α or ∀α. α (Y X α).
Here we need to distinguish two sorts of implications and .
In an applicative structure like a -algebra, we cannot distinguish them since we only have one sort of application.
Conversely, if we provide an applicative structure with some structure that allows us to distinguish these two implications, we may be able to construct non-symmetric tensor products in .
From this viewpoint, we introduced bi--algebras in <cit.>.
In this subsection, we recall bi--algebras from <cit.>.
First we recall a variant of the lambda calculus, which is an example of an applicative structure with two sorts of applications.
Bi-planar lambda terms are constructed by the following rules:
(identity)
x ⊢ x
Γ, x ⊢ M
(right abstraction)
Γ⊢xM
x, Γ⊢ M
(left abstraction)
Γ⊢xM
Γ⊢ M
Δ⊢ N
(right application)
Γ, Δ⊢ M N
Δ⊢ N
Γ⊢ M
(left application)
Δ, Γ⊢ N M
Note that here is none of weakening, contraction nor exchange rules.
For the sake of clarity, we will classify right and left by red and blue color.
That is, we write each of them as M N, xM, N M and xM.
We define a relation _β on bi-planar lambda terms as the congruence of the following relations:
* (right β-reduction) xM N _β M[N/x]
* (left β-reduction) N xM_β M[N/x]
The bi-planar lambda calculus consists of bi-planar lambda terms and the reflexive, symmetric and transitive closure of _β as the equational relation .
Basic properties about the β-reduction _β, such as the confluence and the strongly normalizing property, can be shown in the same way as the proof for the linear lambda calculus.
The bi-planar lambda calculus is not essentially a new concept, since it often appears as the calculus corresponding, via Curry-Howard, to the Lambek calculus (see <cit.>).
However, note that unlike the calculus corresponding to the Lambek calculus, the bi-planar lambda calculus is based on an untyped setting.
The reason why we use a less-standard notation is to shorten the length of terms and to make them easier to read.
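To fix notation, the following Haskell sketch records the syntax of bi-planar lambda terms, keeping the two abstractions and the two applications apart; the side conditions that right abstraction binds the rightmost free variable of the context and left abstraction the leftmost one are not enforced by the types, and the constructor names are ours.

```haskell
-- Syntax sketch of bi-planar lambda terms.
data BTerm
  = BVar String
  | RAbs String BTerm   -- right abstraction
  | LAbs String BTerm   -- left abstraction
  | RApp BTerm BTerm    -- right application  M N
  | LApp BTerm BTerm    -- left application   N M
  deriving (Show, Eq)
```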
Then we define a class of applicative structures which we call bi--algebras.
A total applicative structure =(,) is a bi--algebra iff there is an additional total binary operation on and contains several special elements:
* ∈ such that ∀ x,y,z ∈, (( x) y) z = x (y z).
* ∈ such that ∀ x,y,z ∈, z (y (x )) = (z y) x.
* ∈ such that ∀ x,y,z ∈, x (( y) z) = (x y) z.
* ∈ such that ∀ x,y,z ∈, (z (y )) x = z (y x).
* ∈ such that ∀ x ∈, x = x.
* ∈ such that ∀ x ∈, x = x.
* For each a ∈, a∈ such that ∀ x ∈, (a) x = x a.
* For each a ∈, a∈ such that ∀ x ∈, x (a) = a x.
We call and the right application and the left application, respectively.
We often write
= (,,) for a bi--algebra =(,) with the left application .
In the sequel, we use as a left-associative operation and often omit unnecessary parentheses, while we do not omit parentheses for .
For instance, (u v w) ((x y) z) denotes ((u v) w) ((x y) z).
The definition of bi--algebras is intended to have a good correspondence with the bi-planar lambda calculus.
Untyped closed bi-planar lambda terms modulo form a bi--algebra, which we call in this paper.
We give a few examples of representatives: xyzx (y z) represents ; yxzx (y z) represents ; xM x represents M.
Let = (,,) be a bi--algebra.
A polynomial over is defined as a syntactic expression generated by variables, elements of and the applications and .
For a polynomial M over and the rightmost variable x of M, if x appears exactly once in M, there exists a polynomial M' such that the free variables of M' are the free variables of M excluding x and M' a = M[a/x] for all a ∈. We write such M' as xM.
Also, for a polynomial N over and the leftmost variable y of N, if y appears exactly once in N, there exists a polynomial N' such that the free variables of N' are the free variables of N excluding y and a N' = N[a/y] for all a ∈. We write such N' as yN.
We define xM by induction on the structure of M.
* xx :=.
* xM N := ( N)xM if x ∈ FV(M), and M xN if x ∈ FV(N)
Note that in case x ∈ FV(M), N has no variables since x is the rightmost free variable in M N.
* xN M := N (xM) if x ∈ FV(M), and (M) xN if x ∈ FV(N)
Note that in case x ∈ FV(N), M has no variables since x is the rightmost free variable in N M.
The case of the left abstraction yN is given in the same way, with all the left and right constructs reversed.
Next we give another example of a bi--algebra, which was introduced in <cit.> and is similar to Example <ref>.
Take an ordered group (G,·,e,≤).
Let T” be a set whose elements are constructed grammatically as follows:
t ::= g | t t' | t t' (g ∈ G).
That is, T” is a set of binary trees whose leaves are labeled by elements of G, and whose nodes are two colored by and .
We further define a function | | :T” G by induction: |g| := g, |t_2 t_1| := |t_2| · |t_1|^-1 and |t_1 t_2| := |t_1|^-1· |t_2|.
Let |”| be the powerset of { t ∈ T”| e ≤ |t| }.
Then we can get a bi--algebra ” by |”|:
* For M,N ∈ |”|, M N := { t_2 |∃ t_1 ∈ N, (t_2 t_1) ∈ M }.
* For M,N ∈ |”|, N M := { t_2 |∃ t_1 ∈ N, (t_1 t_2) ∈ M }.
* := { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T”}, dual for .
* := { ((t_1 t_2) t_3) (t_1 (t_2 t_3)) | t_1,t_2,t_3 ∈ T”}, dual for .
* := { t_1 t_1 | t_1 ∈ T”}, dual for .
* For M ∈ |”|, M := { t_2 t_1 | (t_1 t_2) ∈ M }, dual for M.
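Analogously to the sketch given for the earlier tree construction, the two applications of this example can be modelled on finite sets of trees as follows, again instantiating the ordered group with the integers under addition. The constructor names RNode and LNode are placeholders for the two node colours, and the reading of which colour corresponds to which application follows the two defining clauses above; this is only an illustrative sketch.

```haskell
-- Trees with integer leaves and two node colours for the two applications.
data Tree2 = Leaf2 Int | RNode Tree2 Tree2 | LNode Tree2 Tree2
  deriving (Eq, Show)

-- The two applications, on finite sets modelled as lists:
--   rApp m n = { t2 | there is t1 in n with (RNode t2 t1) in m }
--   lApp n m = { t2 | there is t1 in n with (LNode t1 t2) in m }
rApp, lApp :: [Tree2] -> [Tree2] -> [Tree2]
rApp m n = [ t2 | RNode t2 t1 <- m, t1' <- n, t1 == t1' ]
lApp n m = [ t2 | LNode t1 t2 <- m, t1' <- n, t1 == t1' ]
```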
In the above example, we prepare in the construction of T” to express the left application.
However, in fact, in Example <ref> is a bi--algebra even without preparing .
Let T be the same set in Example <ref>.
For t, t' ∈ T, we define t t' ∈ T as (e t) (e t').
Then is a bi--algebra, whose components are taken in the same way as Example <ref>.
Next we give some basic properties of bi--algebras.
* Any bi--algebra is also a -algebra.
* Any -algebra is also a bi--algebra whose left and right applications coincide.
* When =(,) is a bi--algebra, the left application is unique up to isomorphism. That is, when both (,,_1) and (,,_2) are bi--algebras, _1 = (,_1) and _2 = (,_2) are isomorphic as applicative structures, where x _i y := y _i x.
* Let = (,,) be a bi--algebra and take an applicative structure
' := (,') by x ' y := y x.
Then is a -algebra iff ' is a -algebra.
Moreover, in such a case, and ' are isomorphic as applicative structures.
* , , , , and a are given as , , xyx (y ), xyx y,
xytt x y and xx a respectively.
* For a -algebra (, ), (, , ) is a bi--algebra when we take y x := x y. Here = :=, = :=, = := and a = a := a.
* By the combinatory completeness of _2, we have L := yxy _2 x such that L y x = y _2 x = x _2 y.
By the combinatory completeness of _1, we have an element
r := xyL y x, which satisfies r _1 x _1 y = L y x = x _2 y.
This r realizes the applicative morphism i_1 : _1 _2 given as the identity function on .
Similarly we have the inverse applicative morphism i_2 : _2 _1 given as the identity function.
i_1 and i_2 are the isomorphisms between _1 and _2.
* Suppose that is a -algebra, that is, there is some element ∈ such that
x y z = x z y.
Take an element := xyz M z y x, where M := yzxy (z x).
, and make ' a -algebra.
Similarly, when we suppose ' is a -algebra, is also a -algebra.
Furthermore, when we suppose (and also ') is a -algebra, we have an element
r := yxy x, which realizes the applicative morphism i : ' given as the identity function.
Similarly we have the inverse applicative morphism i' : ' given as the identity function, and thus ≅'.
By (<ref>) and (<ref>) of the above proposition, the class of bi--algebras is the class of applicative structures in between -algebras and -algebras.
We named the “-combinator" of -algebras in this way because it is represented as xyx y in a bi--algebra, which gives the “left" application of two arguments.
Although xyx y always acts as a -combinator in a bi--algebra, it is not the only way to take a -combinator.
Indeed, in Example <ref>, has a -combinator as
xyx y = ()
= { t_2 ((e t_1) (e t_2)) t_1 | t_1,t_2 ∈ T }
which is different from the -combinator taken in Example <ref>.
Since a bi--algebra is also a -algebra, we know that (and ) is a monoidal closed category.
Moreover, we can show that the categories of assemblies on bi--algebras are not just monoidal closed categories but monoidal bi-closed categories, which have richer categorical structure.
A monoidal bi-closed category is a monoidal category with two sorts of adjunction (X ⊗ Y,Z) ≅(X,Z Y) and (X ⊗ Y,Z) ≅(Y,X Z).
When =(,) is a bi--algebra, is a monoidal bi-closed category.
Let be the left application of .
* A realizer for identities is .
* A realizer for the composition of f:X Y and g:Y Z is r_g r_f.
* For objects X and Y, the underlying set of X ⊗ Y is |X| × |Y|. Realizers are defined as
x ⊗ y := {tt p q| p ∈x_X, q ∈y_Y }.
* For f: X X' and g:Y Y', the map f ⊗ g is the function sending x ⊗ y to f(x) ⊗ g(y).
A realizer for f ⊗ g is upqtt (r_f p)
(r_g q) u.
* The underlying set of the unit object I is a singleton {∗}. The realizer is ∗_I := {}.
* The left unitor λ_X: I ⊗ X X sends ∗⊗ x to x, whose realizer is p p.
A realizer of λ_X^-1 is ptt p.
* The right unitor ρ_X: X X ⊗ I sends x to x ⊗∗, whose realizer is ptt p.
A realizer of ρ_X^-1 is upvp (v ) u.
* The associator α_XYZ:(X ⊗ Y) ⊗ Z X ⊗ (Y ⊗ Z) sends (x ⊗ y) ⊗ z to x ⊗ (y ⊗ z).
α_XYZ is realized by uvM v u,
where
M := pqrtt p t't' q r.
A realizer of α_XYZ^-1 is upvqrN v u, where
N := t(t t't' p q) r.
* For objects X and Y, the underlying set of Y X is _(X,Y). Realizers are
f_Y X := { r |}.
* For f: X' X and g:Y Y', g f is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'.
A realizer for g f is uvr_g (u (r_f v)).
* The evaluation map ev :(Y X) ⊗ X Y sends f ⊗ x to f(x), which is realized by u u.
* For any map f:Z ⊗ X Y, there exists a unique map g:Z (Y X) which satisfies
ev ∘ (g ⊗ id_X) = f. This g is given as the function sending z to the function x ↦ f(z ⊗ x), which is realized by qpr_f tt q p.
* For objects X and Y, the underlying set of X Y is is _(X,Y). Realizers are
f_X Y := { r |}.
This set is not empty since (r_f) is in the set for a realizer r_f of f.
* For f: X' X and g:Y Y', f g is the function sending a map h : X Y to the map g ∘ h ∘ f : X' Y'. A realizer for f g is uvr_g ((r_f v) u).
* The evaluation map ev' : X ⊗ (X Y) Y sends x ⊗ f to f(x), which is realized by upvp v u.
* For any map f:X ⊗ Z Y, there exists a unique map g:Z (X Y) which satisfies
ev' ∘ (id_X ⊗ g) = f. This g is given as the function sending z to the function x ↦ f(x ⊗ z), which is realized by qpr_f tt p q.
For the category of modest sets, we use the same discussion as Proposition <ref>.
That is, for the functor F that is left adjoint of the inclusion functor G:, we define tensor products ⊠ in as X ⊠ Y := F(GX ⊗ GY).
When =(,) is a bi--algebra, is a monoidal bi-closed category.
In Proposition <ref>, is the category of assemblies on the applicative structure (, ).
Even if we employ the left application to construct the category of assemblies, we can obtain a category with the same structures as , as the next proposition says.
Let =(,,) be a bi--algebra.
When we take an applicative structure '=(,') by x ' y := y x, and are isomorphic as categories.
Moreover, is monoidally isomorphic to with the reversed tensor products.
That is, there is an isomorphism R: such that R(I) ≅ I', R^-1(I') ≅ I,
R(X ⊗ Y) ≅ RY ⊗' RX and R^-1(X' ⊗' Y') ≅ R^-1Y' ⊗ R^-1X' hold.
For a map f:X Y in , the map is also a map in since the realizer exists as r_f.
Therefore, we can take a functor R: which sends objects to the same objects and maps to the same maps.
Similarly we can get R^-1 which sends objects to the same objects and maps to the same maps.
' is a bi--algebra by taking the left application x ' y := y x.
We define the monoidal structure (⊗',I') on in the same way as Proposition <ref>.
Here the realizers for tensor products are x ⊗' y_X ⊗' Y = {tq (p t)| p ∈x_X, q ∈y_Y }.
A realizer for R(I) I' is uu and a realizer for the inverse is u u.
A realizer for R(X ⊗ Y) RY ⊗' RX is upqtp (q t) u and a realizer for the inverse is uu qptt p q.
Similar for the realizers related to R^-1.
We can define “partial bi--algebras” naturally.
Similar to partial -algebras discussed in Remark <ref>, for a partial bi--algebra :
* is not generally a monoidal bi-closed category;
* adding an extra element , naturally extends to a total bi--algebra _;
* is the full subcategory of _.
Here the added does not need to consist of two elements (one for and one for ); a single one suffices.
§ SEPARATION OF CLASSES OF APPLICATIVE STRCTURES
As we have already mentioned, the classes of applicative structures in this paper form a hierarchy summarized in the following table (Table <ref>).
However, we have not yet shown the strictness of the hierarchy.
To show the strictness of each inclusion, it is sufficient to provide an applicative structure separating the classes, that is, one that belongs to one of the two classes but not to the other.
In this section we give several such applicative structures, as summarized in Table <ref>.
§.§ Proofs of separations
First we show that the planar lambda calculus with a constant separates -algebras and -algebras.
Suppose a constant symbol c and add the following constant rule to the construction rules of planar lambda terms (See Example <ref> and <ref>).
(constant)
⊢ c
We assume no additional reduction rules about the constant.
That is, for instance, c (λ x.x) c has no redex.
Closed planar terms (which may contain c) modulo form a -algebra, which we call .
Even after adding the constant c, the planar lambda calculus still has the properties of confluence and strong normalization.
is a -algebra but not a -algebra.
Hence,
-algebras ⊊ -algebras.
Assume that is a -algebra.
That is, assume there exist terms I and in such that I M M and M I M for any term M in .
We take I and as β-normal terms w.l.o.g.
If M N in , the number of appearance of c is equal between M and N.
Thus, since c I c, I and cannot contain c.
* When c is β-normal, c I is also β-normal and obviously not equal to c.
This contradicts the confluence of the planar lambda calculus (with the constant c).
* When = λ u.J for some J and u, c I (J[c/u]) I.
* When J = λ v.J' for some J' and v, c I J'[c/u][I/v].
Suppose v receives just n arguments N_1,… ,N_n (n ≥ 0) in J'.
J' = C[v N_1 … N_n] for some context C[-] which contains u to the left of the hole [-].
For the β-normal form N of I N_1 … N_n, c I (C[N])[c/u].
(C[N])[c/u] is β-normal and obviously not equal to c.
This contradicts the confluence.
* Otherwise, J[c/u] I is β-normal and not equal to c.
This contradicts the confluence.
Next we show that the planar lambda calculus additionally employing the η-equality separates -algebras and -algebras.
Suppose three constant symbols c_1, c_2 and c_3 and add the following constant rules (i=1,2,3) to the construction rules of planar lambda terms.
(constant)
⊢ c_i
We assume no additional reduction rules about the constants.
Closed planar terms (that may contain constants) modulo form a -algebra, which we call .
Note that the equivalence relation of is the βη-equality, while that of (Example <ref>) is the β-equality.
We have λ xyz.x(yz) as a representation of in .
Indeed, for any term M, (λ xyz.x(yz)) M (λ w.w) λ z.Mz =_η M.
is a -algebra but not a -algebra.
Hence,
-algebras ⊊ -algebras.
Assume that there are some terms L and P in satisfying that for any terms M_1, M_2 and M_3, L M_1 (P M_2 M_3) M_1 M_2 M_3.
Taking M_1 = M_2 = M_3 := λ x.x, we see that L and P cannot contain constants.
Taking M_i := c_i, we have L c_1 (P c_2 c_3) c_1 c_2 c_3.
Since L is a closed planar term with no constants, the βη-normal form of L is the form λ xy_1 … y_m.x N_1 … N_n (m,n ≥ 0).
Therefore, L c_1 (P c_2 c_3) (λ y_1… y_m.c_1 N_1 … N_n)(P c_2 c_3).
However, this term cannot be βη-equal to c_1 c_2 c_3, since c_1 cannot receive c_2 and c_3 as separate arguments no matter what form P takes.
Next we show that the freely constructed -algebra separates -algebras and bi--algebras.
We take as the freely constructed -algebra with two constants c_1 and c_2.
That is, elements of are constructed from , , , , , c_1 and c_2 using the application and the unary operation ().
The equality in is obtained by the axioms of -algebras and we do not assume any axioms on the constants.
is a -algebra but not a bi--algebra.
Hence,
bi--algebras ⊊ -algebras.
Assume that is a bi--algebra and write the right and left applications as and . Here this is the same application as that of as a -algebra, that is, MN and M N denote the same element.
By the combinatory completeness, there is an element M := xyx y in .
Since M = holds, this M cannot contain c_1 nor c_2.
For this M, M c_1 c_2 = c_2 c_1.
As we can see from the axioms of , , , , and the unary operation ( ),
no matter what form M takes, it is impossible to exchange the order of the two arguments c_1 and c_2 in M c_1 c_2.
Then it is also impossible for c_2 in any form to reduce M c_1 c_2 to c_2 c_1.
Finally we show the bi-planar lambda calculus (Example <ref>) separates bi--algebras and -algebras.
is a bi--algebra but not a -algebra.
Hence,
-algebras ⊊ bi--algebras.
Assume that there is some closed bi-planar lambda term C in such that for any closed bi-planar terms M, N and L, C M N L M L N.
Let C' be the β-normal form of C xx.
C' M N N M holds for any M and N.
Take M := xxyy and N := xxyyzz.
Note that for any β-normal term P and a free variable w of P, P[M/w] and P[N/w] are β-normal.
* When C' M is β-normal, both C' M N and N M are β-normal.
However, obviously C' M N N M, and this contradicts the confluence of the bi-planar lambda calculus.
* When C' = uC” for some C” and u, C' M N C”[M/u] N.
* When C” = vC”' for some C”' and v, C' M N C”'[M/u][N/v].
Since v is the rightmost free variable of C”', N is to the right of M in C”'[M/u][N/v].
Hence C”'[M/u][N/v] N M, and this contradicts the confluence.
* Otherwise, C”[M/u] N is β-normal.
C”[M/u] N N M, and this contradicts the confluence.
§.§ The planar lambda calculus is not a bi--algebra
Proofs of separations in the previous subsection are straightforward ones.
However, it is sometimes difficult to show that an applicative structure does not belong to a certain class of applicative structures.
In this subsection, as an example, we will show that of Example <ref> (the planar lambda calculus with no constant) is not a bi--algebra.
Compared with the propositions for the cases with constants (Propositions <ref> and <ref>), the proof is trickier.
For any term M of , there is a term N of such that NM λ x.x.
Since planar lambda terms always have unique β-normal forms, we can assume M is β-normal w.l.o.g.
We show this lemma by induction on the number of bound variables of M.
When BV(M) is a singleton, M is λ x.x and N:=λ x.x satisfies NM λ x.x.
Assuming that the lemma holds whenever M contains at most k bound variables, we show that it also holds when M contains k+1 bound variables.
Since M is planar and β-normal, M = λ x y_1 … y_m .x P_1 … P_n for
some β-normal planar terms P_1, … , P_n. Here y_1 ,… , y_m are all the free variables of P_1 ,… , P_n.
Let Q_j be the term replacing all the y_i in P_j with λ z.z.
Each Q_j is a closed planar term and has at most k bound variables.
Hence, from the induction hypothesis, there exists some closed planar term R_j such that R_j Q_j λ x.x.
Take N' := λ w_1 … w_n.(R_1 w_1)… (R_n w_n) and N:= λ u.uN'(λ z_1.z_1)… (λ z_m.z_m).
Then N' and N are closed planar terms and
NM MN'(λ z_1.z_1)… (λ z_m.z_m)
= (λ x y_1 … y_m .x P_1 … P_n)N'(λ z_1.z_1)… (λ z_m.z_m)
N'Q_1 … Q_n
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n
(R_1 Q_1)… (R_n Q_n)
(λ x.x)… (λ x.x)
λ x.x.
is not a -algebra.
Assume that there is a term T in such that TMN NM for any M and N in .
(Note that a total applicative structure containing and is a -algebra iff it has such that x y = y x.
Indeed, ( ( ( )))
satisfies the axiom of the -combinator.)
Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of T.
If n=0, then T=λ x.x, which immediately leads to a contradiction.
Thus n ≥ 1.
Since T MN NM for any M and N,
TM TMT TMTT TMTTT ….
Let Q_j (j=1,… ,n) be the terms replacing all the y_i in P_j with T.
Each Q_j is a closed planar term.
Let U := λ x.x Q_1 … Q_n.
UM = (λ x.x Q_1 … Q_n)M
M Q_1 … Q_n
= (M P_1 … P_n)[T/y_1]… [T/y_m]
(λ x y_1 … y_m.x P_1 … P_n) M T … T
= TMT… T
TM.
Thus UMN (TM)N NM holds for any M and N.
From Lemma <ref>, there exist closed terms R_j (j=1,… ,n) such that R_j Q_j λ z.z.
Take M_0 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n).
Then for any closed planar term N,
NM_0 UM_0 N
= (λ x.x Q_1 … Q_n)M_0 N
M_0 Q_1 … Q_n N
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N
(R_1 Q_1) … (R_n Q_n) N
(λ z.z)… (λ z.z) N
N.
Taking N_0 := λ x.x in N_0 M_0 N_0, we get M_0 = λ x.x.
Therefore, N (λ x.x) N holds for any closed planar term N.
However, N:= λ y.y(λ z.z) is a counterexample to this equation, which leads to a contradiction.
is not a bi--algebra.
Assume that is a bi--algebra.
That is, taking as the application canonically obtained by the application of planar lambda terms, assume that there is some binary operation such that (||,,) becomes a bi--algebra.
This is the binary operation not on planar lambda terms, but on β-equivalence classes of planar lambda terms.
However, in the sequel, we do not distinguish notationally between a lambda term M and the equivalence class containing M.
For instance, for planar lambda terms M_1 and M_2, M_1 M_2 denotes some representative of M_1M_2, where M_i is the β-equivalence class containing M_i.
By the combinatory completeness for bi--algebras, there is a closed planar term
L representing xyx y.
Take a term λ x y_1 … y_m.x P_1 … P_n as the β-normal form of L.
For a term T representing xyx (y ), dividing into the cases n=0 and n ≥ 1, we will show that T makes a -algebra, which contradicts Lemma <ref>.
If n=0, L=λ x.x and M N (L M) N M N holds for any M and N in .
Given arbitrary term N_0 in , take M := and N:= N_0 in M N M N.
Then we get
N_0
N_0.
For arbitrary M_0 and N_0 in ,
T M_0 N_0 = (xyx (y )) M_0 N_0
M_0 (N_0 )
M_0 N_0
N_0 M_0
holds.
Hence, T makes a -algebra, which contradicts Lemma <ref>.
Next is the case of n ≥ 1.
Since L M N M N for any M and N,
L M L M (L) L M (L) (L) L M (L) (L) (L) ….
Let Q_j be the term replacing all the y_i in P_j with L.
Each Q_j is a closed planar term.
Let
V:= λ x.x Q_1 … Q_n.
VM = (λ x.x Q_1 … Q_n)M
M Q_1 … Q_n
= (M P_1 … P_n)[L/y_1]… [L/y_m]
(λ x y_1 … y_m.x P_1 … P_n)M(L)… (L)
= LM(L)… (L)
LM.
Thus VMN LMN M N holds for any M and N.
From Lemma <ref>, there exists closed term R_j (j=1,… ,n) such that R_j Q_j λ z.z.
Take M_1 := λ w_1 … w_n.(R_1 w_1)… (R_n w_n).
Then for any closed planar term N,
M_1 N LM_1 N
= (λ x y_1 … y_m.x P_1 … P_n)M_1 N
M_1 Q_1 … Q_n N
= (λ w_1 … w_n.(R_1 w_1)… (R_n w_n))Q_1 … Q_n N
(R_1 Q_1) … (R_n Q_n) N
(λ z.z)… (λ z.z) N
N.
Taking N := in M_1 N N, we get M_1 =.
Therefore, N_1 N_1 holds for any closed planar term N_1.
Given arbitrary N_2 in , with N_1:= N_2, we get
N_2 N_2
N_2 .
For arbitrary M_2 and N_2 in ,
T M_2 N_2 = xyx (y ) M_2 N_2
M_2 (N_2 )
M_2 N_2
N_2 M_2
holds. Hence, T makes a -algebra, which contradicts Lemma <ref>.
We have already seen in Proposition <ref> that (the planar lambda calculus with constants) is not a -algebra.
However, whether is a -algebra is still open.
§.§ The computational lambda calculus
Next we consider the computational lambda calculus as an applicative structure that gives rise to non-symmetric structures.
The computational lambda calculus is a variant of the lambda calculus whose evaluation rules are sound for programs with computational effects <cit.>. The following axiomatization is from <cit.>.
Suppose infinite supply of variables x,y,z,….
Values, terms and evaluation contexts are defined as follows:
* (values) V ::= x | λ x.M
* (terms) M ::= V | MM'
* (evaluation contexts) E[] ::= [] | EM | VE
(Terms are the same ones of the ordinary lambda calculus in Example <ref>.)
An equivalence relation =_c on terms is defined as the congruence of the following equations:
* (β_V) (λ x.M)V =_c M[V/x]
* (η_V) λ x.Vx =_c V
* (β_Ω) (λ x.E[x])M =_c E[M]
Here E[M] denotes the term obtained by substituting M for [] in E[].
The (untyped) computational lambda calculus is the lambda calculus formed by terms and =_c.
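As a concrete reference for the grammar above, here is a Haskell sketch of values, terms and evaluation contexts, together with the plugging operation E[M]. The equational theory =_c itself is not implemented, and the type and constructor names are illustrative choices of ours.

```haskell
-- Values, terms and evaluation contexts of the computational lambda calculus.
data Value   = VVar String | VLam String CTerm   deriving (Show, Eq)
data CTerm   = Val Value   | App CTerm CTerm     deriving (Show, Eq)
data EvalCtx = Hole | CtxApp EvalCtx CTerm | ValApp Value EvalCtx
  deriving (Show, Eq)

-- plug e m computes E[M]: substitute m for the hole of e.
plug :: EvalCtx -> CTerm -> CTerm
plug Hole         m = m
plug (CtxApp e n) m = App (plug e m) n        -- context of shape E N
plug (ValApp v e) m = App (Val v) (plug e m)  -- context of shape V E
```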
In <cit.>, we showed that the computational lambda calculus is a -algebra but not a -algebra.
We can get a -algebra , whose underlying set is the set of equivalence classes of lambda terms modulo =_c. (Note that the terms of are not restricted to closed terms.)
Here λ xyz.x(yz), λ x.x, λ xy.yx and λ x.xM are representatives of , , and M respectively.
Although the computational lambda calculus has all the terms of the lambda calculus, is neither a PCA nor a -algebra.
This is reasonable, considering that programs with effects cannot be discarded, duplicated, or exchanged in general, and thus cannot have the //-combinator.
Moreover, we can prove the next proposition.
is not a bi--algebra.
To prove this proposition, we use the CPS-translation <cit.>.
The CPS-translation sends terms of the computational lambda calculus to terms of the ordinary lambda calculus and is defined inductively as follows.
* x := λ k.kx
* λ x.M := λ k.k(λ x.M)
* MN := λ k.M (λ f. N (λ x.fxk))
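The following Haskell sketch transcribes these three clauses on a naive named representation of untyped lambda terms. The continuation variable names "k", "f" and "x" are fixed here and assumed not to clash with the variables of the source term; a careful implementation would generate fresh names.

```haskell
-- Untyped lambda terms with named variables.
data Lam = V String | A Lam Lam | L String Lam
  deriving (Show, Eq)

-- CPS-translation, clause by clause (fresh-name handling omitted).
cps :: Lam -> Lam
cps (V x)   = L "k" (A (V "k") (V x))          -- x    |->  λk. k x
cps (L x m) = L "k" (A (V "k") (L x (cps m)))  -- λx.M |->  λk. k (λx. cps M)
cps (A m n) =                                  -- M N  |->  λk. cps M (λf. cps N (λx. f x k))
  L "k" (A (cps m)
           (L "f" (A (cps n)
                     (L "x" (A (A (V "f") (V "x")) (V "k"))))))
```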
For any terms M and N, M =_c N holds in the computational lambda calculus iff MN holds in the ordinary lambda calculus.
We will derive a contradiction by assuming that is a bi--algebra.
If is a bi--algebra, we have a term L representing xyx y and a term M representing xM x for each term M.
For any terms M_1 and M_2, L M_1 (M_2) =_c M_2 M_1 holds, and thus L M_1 (M_2)M_2 M_1 holds.
Now we take a fresh variable v and let M_2 := vv.
Additionally we take a fresh variable (fresh for L, M_2 and M_2) u and let M_1 := uu.
Then
L M_1 (M_2) = λ k. (λ k'.L (λ f'.M_1(λ x'.f'x'k'))) (λ f.M_2(λ x.fxk))
λ k. L (λ f'.M_1 (λ x' .f'x'(λ f.M_2(λ x.fxk))))
λ k. L (λ f'.uu (λ x' .f'x'(λ f.M_2(λ x.fxk)))),
M_2 M_1 λ k.vv(λ f.uu(λ x.fxk)).
In M_2 M_1, vv receives an argument of the form (… uu …).
However, since u and v are fresh, no matter what L is, in L M_1 (M_2), vv cannot receive arguments containing uu.
Hence these terms L M_1 (M_2) and M_2 M_1 cannot be βη-equal.
This contradicts the soundness of the CPS-translation.
Semantically, the untyped ordinary/linear/planar lambda calculus is modeled by a reflexive object of a CCC/SMCC/closed multicategory.
This is related to the categorical structures of the assemblies on each lambda calculus.
On the other hand, the untyped computational lambda calculus is modeled by a reflexive object of a Kleisli category.
Since the categorical structure of a Kleisli category is not monoidal in general but premonoidal (see <cit.>), it is expected that the category of assemblies on the untyped computational lambda calculus is not a monoidal category.
Thus the computational lambda calculus is expected not to be a -algebra inducing a monoidal closed category; however, we have not yet proven this conjecture.
Here, we give an intuitive explanation for the conjecture.
Assume that and exist in the computational lambda calculus.
Take three non-values M_1, M_2 and M_3.
Suppose these terms are reduced to values: v_L; v_P; M_i v_i.
In M_1 M_2 M_3, the evaluation proceeds as follows:
M_1 is reduced to v_1 ⇝ M_2 is reduced to v_2 ⇝ v_1 v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …
On the other hand, in M_1 (M_2 M_3), the evaluation proceeds as follows:
is reduced to v_L ⇝ M_1 is reduced to v_1 ⇝ v_L v_1 is reduced ⇝ is reduced to v_P ⇝ M_2 is reduced to v_2 ⇝ v_P v_2 is reduced ⇝ M_3 is reduced to v_3 ⇝ …
These two computations seem not to coincide, since the order of the evaluations of v_1 v_2 and M_3 is reversed.
§ NECESSARY CONDITIONS FOR INDUCING CLOSED STRUCTURES
We have seen that applicative structures of certain classes induce the corresponding categorical structures, in Proposition <ref> (CCCs), Proposition <ref> (SMCCs), Proposition <ref> (closed multicategories), Proposition <ref> (closed categories), Proposition <ref> (monoidal closed categories) and Proposition <ref> (monoidal bi-closed categories).
In this section, we show that certain “inverses” of these propositions hold.
Suppose is a total applicative structure and := happens to be a CCC.
is an -algebra if the following conditions hold.
* |Y^X| = _ (X,Y) and f_Y^X = { r |}.
* For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to strictly preserves finite products.
* The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Applying Φ to the first projection (a,a') ↦ a: A × A A,
we get a map k:A A^A, which sends a to (a' ↦ a).
(Here we use the conditions <ref>, <ref> and <ref> to clarify what the function k actually is.)
When we take as a realizer of k, this satisfies ∀ a,a' ∈, a a' = a.
Let ϕ : A A^A be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Applying Φ twice to the map from ((A^A)^A × A^A) × A to A defined as
((A^A)^A × A^A) × A
id × diagonal ((A^A)^A × A^A) × (A × A)
symmetry ((A^A)^A × A) × (A^A × A) ev × ev A^A × A ev A,
we get a map s:(A^A)^A (A^A)^(A^A) which sends a function g:A A^A to the function
(f:A A) ↦ (a ↦ g(a) (f(a))).
The map
A ϕ A^A ϕ^id (A^A)^A s (A^A)^(A^A)id^ϕ (A^A)^A
is the function
a ↦ (a' ↦ (a”↦ a a” (a' a”))).
(Here we use the conditions <ref> to clarify what the functions ϕ^id and id^ϕ actually are.)
Thus, when we take as a realizer of this map, satisfies x y z = x z (y z) for any x,y,z ∈.
To rephrase the proposition: in order to obtain a CCC by categorical realizability, being an -algebra is a necessary condition on the total applicative structure (under several conditions).
We will show similar propositions for the other classes.
Combining the propositions in this section and the separations in the previous section, we can say that, for instance, the category of assemblies on an applicative structure that is a bi--algebra but not a -algebra (,) is indeed non-symmetric monoidal (as long as we try to take the symmetry in the canonical way).
When we try to prove the proposition replacing “total applicative structure” with “partial applicative structure” in Proposition <ref>, we cannot use the same proof.
This is because ϕ : A A^A is not always defined.
Indeed, when a a' is not defined in , ϕ (a) is not defined at a'.
It is still unclear whether we can prove a proposition similar to Proposition <ref> when is a partial applicative structure.
Suppose is a total applicative structure and := happens to be an SMCC.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict symmetric monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A A) be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Applying Φ twice to the map
((A A) ⊗ (A A)) ⊗ A
(A A) ⊗ ((A A) ⊗ A)
(A A) ⊗ A
A,
we get a map l: (A A) ((A A) (A A)), which sends g:A A to the function
(f:A A) ↦ g ∘ f.
The map
A ϕ (A A) l ((A A) (A A)) id ϕ ((A A) A)
is the function a ↦ (a' ↦ (a”↦ a (a' a”))).
Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈.
Applying Φ to the map
A ⊗ (A A) symmetry (A A) ⊗ A ev A,
we get a map c:A (A (A A)), which sends a to (f ↦ f(a)).
The map
A c (A (A A)) id ϕ (A A)
is the function a ↦ (a' ↦ a' a).
Thus, when we take as a realizer of this map, satisfies x y = y x for any x,y ∈.
Let := ( ( ( ))). Then xyz=xzy holds for any
x,y,z ∈.
Suppose is a total applicative structure and := happens to be a closed multicategory.
is a -algebra if the following conditions hold.
* (;X) = |X| and (;X) = X.
* (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ).
Here
f= { r |}.
* (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y).
* For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)).
When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function given y_l ∈ |Y_l| pointed by f_l as the l-th argument of g.
* ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n).
* Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)).
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A;A) be the function sending a to the map x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
Take a map
b:A,A,A id,ϕ,id A, (A;A),A ϕ,ev (A;A),A ev A,
which sends (x,y,z) to x (y z) for any x,y,z ∈.
When we take as a realizer of Λ_A;A;(A;A) (Λ_A,A;A;A (b)),
x y z = (Λ_A;A;(A;A) (Λ_A,A;A;A (b)))(x)(y)(z)
= b(x,y,z)
= x (y z).
Given arbitrary a ∈, take a map f_a:A A as
A id,a A,A ϕ,id (A;A),A ev A,
which sends x ∈ to x a.
When we take a as a realizer of f_a, a x = x a for any x ∈.
Suppose is a total applicative structure and := happens to be a closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* i_X is the function sending a function (f:∗↦ x) to x.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
In the condition <ref>, we assume that the unit object is a singleton {∗}.
The assumption can be derived from the condition <ref>.
Take an object X := ({ x_1,x_2 } , _X) by x_i_X :=.
From the condition <ref>,
|X I| is _ (I,X).
Since _ (I,X) = _ (|I|,{ x_1,x_2 }), |X I| = _ (|I|,{ x_1,x_2 }).
Also since X I ≅ X, |X I| ≅ |X| = { x_1,x_2 }.
_ (|I|,{ x_1,x_2 }) ≅{ x_1,x_2 } holds iff |I| is the singleton.
Take an object A := (,_A), where a_A := { a }.
When we take as a realizer of id_A, this satisfies ∀ a ∈, a_A ⊆id_A (a)_A.
That is, ∀ a ∈, a = a.
Let ϕ : A (A A) be the function sending a to the function x ↦ a x.
Here ϕ(a) is realized by a and ϕ is realized by .
The map
A ϕ (A A) L ((A A) (A A)) id ϕ ((A A) A)
is the function a ↦ (a' ↦ (a”↦ a (a' a”))).
Thus, when we take as a realizer of this map, satisfies x y z = x (y z) for any x,y,z ∈.
Since I ≅ (I I) and ∈id_I_I I, we can assume ∈∗_I w.l.o.g.
When we take as a realizer of i_A^-1:A (A I), satisfies a x = a for any a ∈ and x ∈∗_I, especially, a =a holds.
Given arbitrary a ∈, let g_a:I A be the function ∗↦ a.
g_a is realized by a.
The map
A ϕ (A A) id g_a (A I) i_A A
is the function a' ↦ a' a.
Thus, when we take a as a realizer of this map, a satisfies a x = x a for any x ∈.
Suppose is a total applicative structure and := happens to be a monoidal closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Applying Φ twice to the map
((Y X) ⊗ (X Z)) ⊗ Z
(Y X) ⊗ ((X Z) ⊗ Z)
(Y X) ⊗ X Y,
we get a map L^X_Y,Z: (Y X) ((Y Z) (X Z)).
This L is the natural transformation L of the closed category .
Applying Φ to the unitor ρ_X :X ⊗ I X, we get a map i^-1_X : X (X I).
The inverse map is the natural isomorphism i of the closed category .
We can easily check that and satisfies all the conditions of Proposition <ref> for these L and i.
Hence, is a -algebra.
Take an object A := (,_A), where a_A := { a }.
Let ϕ: A (A A) be the function sending a to the function x ↦ ax.
Here ϕ (a) is realized by a and ϕ is realized by .
Let l: A (A (A ⊗ A)) be the map obtained by applying Φ to
A ⊗ (A ⊗ A) (A ⊗ A) ⊗ A ((A A) ⊗ A) ⊗ A
A ⊗ A (A A) ⊗ A A,
and let be a realizer of l.
l is the function sending x to the function (y,z) ↦ xyz.
Also let be a realizer of p := Φ(id_A ⊗ A) : A ((A ⊗ A) A).
p is the function sending y to the function z ↦ (y,z).
Then for any x,y,z ∈, x ( y z) ∈l(x)(p(y)(z))_A and thus
x ( y z) = l(x)(p(y)(z))
= l(x)(y,z)
= xyz.
The proof of the next proposition, for monoidal bi-closed categories and bi--algebras, is a little more complicated than the proofs of previous propositions.
When we obtain a monoidal bi-closed category by a bi--algebra,
we take realizers of elements of the object X Y in as
f_X Y := { r ∈ || |}
(See the proof of Proposition <ref>).
However, in the next proposition we do not assume anything about the left application of , and thus we also cannot assume anything about realizers for X Y.
This makes the proof of the existence of and cumbersome.
Suppose = (, ) is a total applicative structure and := happens to be a monoidal bi-closed category.
is a bi--algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to is a strict monoidal functor.
* The adjunction Φ: _ (X ⊗ Y, Z) _ (X, Z Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
* |X Y| = _ (X,Y).
* f g : (X Y) (X' Y') is the function sending h:X Y to g ∘ h ∘ f.
* The adjunction Φ': _ (X ⊗ Y, Z) _ (Y, X Z) is the function sending a function f to the function y ↦ (x ↦ f(x,y)).
The conditions of this proposition include all the conditions of Proposition <ref>.
Hence, is a -algebra and has the combinatory completeness for the planar lambda calculus.
We take as the -combinator and as the -combinator of .
Take an object A := (,_A), where a_A := { a }.
Applying Φ to the evaluation map
ev_1:A ⊗ (A A) A, we get a map l:A (A (A A)), which sends a to (f ↦ f(a)).
Let _1 be a realizer of l and x y := _1 x y.
We will show that (,,) is a bi--algebra.
Let ϕ : A (A A) be the function sending a to the function (x ↦ a x).
Here ϕ(a) is realized by a and ϕ is realized by .
Given arbitrary a ∈, let a := x._1 x a.
For any x ∈,
a x = _1 x a
= x a.
Given arbitrary a ∈,
take a as an element of ϕ(a)_A A.
Then for any x ∈,
x a = _1 x a
= l(x)(ϕ(a))
= ϕ (a) (x)
= a x.
Furthermore, we can take as ().
Next we obtain .
Applying Φ' to
A ⊗ A ϕ(_1) ⊗ id A ⊗ A
ϕ⊗ id (A A) ⊗ A
ev A,
we get a map ϕ':A (A A), which sends a to (a' ↦ a' a).
Applying Φ' three times to
A ⊗ ((A A) ⊗ (A A))
associator (A ⊗ (A A)) ⊗ (A A)
ev ⊗ id A ⊗ (A A)
ev A,
we get a map p: I (A A) ((A A) (A A)).
Define a map b_1 as
I p (A A) ((A A) (A A)) ϕ' (ϕ' id) A (A (A A)),
which sends ∗ to x ↦ (y ↦ (z ↦ (z y) x)).
Take M_1 ∈b_1 (∗)_A (A (A A)).
Let _2 be a realizer of Φ (ev_2), where ev_2 :A ⊗ (A (A A)) (A A) is the evaluation map.
_2 realizes a map q:A (A A) that sends a to ϕ (_2 a).
Let _3 be a realizer of Φ (ev_3), where ev_3: A ⊗ (A (A (A A))) (A (A A)) is the evaluation map.
Take r:A A as a map sending x to _3 x M_1, whose realizer is x._3 x M_1.
Applying Φ' to
A ⊗ A q ⊗ r (A A) ⊗ A ev A,
we get a map b_2 : I (A (A A)), which sends ∗ to (x ↦ (y ↦_2 y (_3 x M_1))).
Take M_2 ∈b_2 (∗)_A (A A).
Let b_3:A A be a map sending x to _2 x M_2, whose realizer is x._2 x M_2.
When we take ∈b_3_A A, for any x ∈,
x = _1 x
= b_3 (x)
= _2 x M_2.
For any y ∈,
y (x ) = y (_2 x M_2)
= _1 y (_2 x M_2)
= b_2 (∗) (x) (y)
= _2 y (_3 x M_1).
For any z ∈,
z (y (x )) = z (_2 y (_3 x M_1))
= _1 z (_2 y (_3 x M_1))
= b_1(∗)(x)(y)(z)
= (z y) x.
Next we obtain .
Applying Φ' and Φ to
A ⊗ ((A (A A)) ⊗ A) associator (A ⊗ (A (A A))) ⊗ A ev ⊗ id (A A) ⊗ A ev A,
we get a map d:(A (A A)) ((A A) A), which sends a map (a ↦ (a' ↦ f(a,a'))) to the map (a' ↦ (a ↦ f(a,a'))).
When we take as a realizer of
A ϕ'
(A A) id ϕ
(A (A A)) d ((A A) A),
x ( y z)
= d(ϕ∘ (ϕ'(y)))(z)(x)
= (ϕ∘ (ϕ'(y)))(x)(z)
= (x y) z
for any x,y,z ∈.
Finally we obtain .
Applying Φ and Φ' to
(A ⊗ ((A A) A)) ⊗ A associator A ⊗ (((A A) A) ⊗ A) id ⊗ ev A ⊗ (A A) ev A,
we get a map d_1:((A A) A) (A (A A)), sending a map (a' ↦ (a ↦ f(a',a))) to the map (a ↦ (a' ↦ f(a',a))).
Take N_1 ∈d_1 ∘ (ϕ' id) ∘ϕ_A (A (A A)).
Let _4 be a realizer of Φ (ev_4), where ev_4: A ⊗ (A (A A)) (A A) is the evaluation map.
_4 realizes a map s:A (A A) sending x to ϕ (_4 x).
Let _5 be a realizer of a map obtained by applying Φ to
ev_5: A ⊗ (A (A (A A))) (A (A A))
and t:A A be a map sending a to _5 a N_1, whose realizer is x._5 x N_1.
Applying Φ' to
A ⊗ A s ⊗ t (A A) ⊗ A ev A,
we get a map d_2:A (A A) sending y to
(x ↦ (_4 x (_5 y N_1))).
Take a realizer N_2 ∈d_2_A (A A).
Let d_3:A A be a map sending x to _2 x N_2, whose realizer is x. _2 x N_2.
When we take ∈d_3_A A, for any y ∈,
y = _1 y
= d_3 (y)
= _2 y N_2.
For any x ∈,
x (y ) = x (_2 y N_2)
= _1 x (_2 y N_2)
= d_2 (y)(x)
= _4 x (_5 y N_1).
For any z ∈,
(x (y )) z = _4 x (_5 y N_1) z
= (d_1 ∘ (ϕ' id) ∘ϕ) (y)(x)(z)
= ((ϕ' ∘ (ϕ(y)))(z)(x)
= x (y z).
In this section we showed propositions for the necessary conditions to obtain certain structures on categories of assemblies.
Next, we consider whether similar propositions hold for the cases of categories of modest sets.
The next propositions can be proven in the same way as Propositions <ref>, <ref> and <ref>.
Suppose is a total applicative structure and := happens to be a CCC.
is an -algebra if the following conditions hold.
* |Y^X| = _ (X,Y) and f_Y^X = { r |}.
* For f:X' X and g:Y Y', g^f : Y^X Y'^X' is the function sending h:X Y to g ∘ h ∘ f.
* The forgetful functor from to strictly preserves finite products.
* The adjunction Φ: _ (X × Y, Z) _ (X, Z^Y) is the function sending a function f to the function x ↦ (y ↦ f(x,y)).
Suppose is a total applicative structure and := happens to be a closed multicategory.
is a -algebra if the following conditions hold.
* (;X) = |X| and (;X) = X.
* (X;Y) = _ (X,Y) and (X;Y) =(_ (X,Y), ), where
f= { r |}.
* (X_1,… ,X_n;Y) = (X_1; (X_2,… ,X_n;Y)) and (X_1,… ,X_n;Y) is the underlying set of (X_1,… ,X_n;Y).
* For g:Y_1,… ,Y_n Z and f_l:X^l_1,… ,X^l_k_l Y_l, g ∘ (f_1,… ,f_n) is the function sending x^1_1,… ,x^1_k_1,… ,x^n_k_n to g(f_1 (x^1_1,… ,x^1_k_1),… ,f_n (x^n_1,… ,x^n_k_n)).
When k_l = 0 for some 1 ≤ l ≤ n, g ∘ (f_1,… ,f_n) is the function given y_l ∈ |Y_l| pointed by f_l as the l-th argument of g.
* ev_X_1,… ,X_n;Y sends f, x_1,… ,x_n to f(x_1,… ,x_n).
* Λ_Z_1,… ,Z_m;X_1,… ,X_n;Y sends a function (z_1,… ,z_m,x_1,… ,x_n ↦ f(z_1,… ,z_m,x_1,… ,x_n)) to the function (z_1,… ,z_m ↦ f(z_1,… ,z_m,-,… ,-)).
Suppose is a total applicative structure and := happens to be a closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* The underlying set of the unit object I is the singleton {∗}.
* i_X is the function sending a function (f:∗↦ x) to x.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
Here note that Proposition <ref> has one more condition than Proposition <ref>, namely that the underlying set of the unit object is a singleton.
This is because the assembly X we used in Remark <ref> is not a modest set.
On the other hand, for the cases of SMCCs, monoidal closed categories and monoidal bi-closed categories, we cannot state propositions for modest sets similar to Propositions <ref>, <ref> and <ref>.
Since we define tensor products in categories of modest sets in a different way from those of categories of assemblies (as seen in the proof of Proposition <ref>), the condition “the forgetful functor from to is strict monoidal" is not appropriate for the case of modest sets.
For the case of SMCCs, we can avoid this problem by presenting a more generalized proposition, that is for symmetric closed categories, instead of SMCCs.
A symmetric closed category is a closed category with a natural isomorphism
S_X,Y,Z : (Z Y) X ≅ (Z X) Y
satisfying appropriate axioms ( <cit.>).
Suppose is a total applicative structure and := (or ) happens to be a symmetric closed category.
is a -algebra if the following conditions hold.
* |Y X| = _ (X,Y) and f_Y X = { r |}.
* g f : (Y X) (Y' X') is the function sending h:X Y to g ∘ h ∘ f.
* L_Y,Z^X is the function sending g:Y Z to the function (f:X Y) ↦ g ∘ f.
* S_X,Y,Z is the function sending f: x ↦ (y ↦ f(x)(y)) to S(f) : y ↦ (x ↦ f(y)(x)).
This proposition also shows that we cannot obtain (or ) that is a symmetric closed category but not an SMCC, in the canonical way.
For the cases of monoidal closed categories and monoidal bi-closed categories, it is still not clear whether there are appropriate conditions for stating propositions for modest sets similar to Propositions <ref> and <ref>.
§ PLANAR LINEAR COMBINATORY ALGEBRAS
In Section <ref>, we recalled LCAs and rLCAs, which relate -algebras and PCAs, and which induce categorical models of linear exponential modalities.
In this section, we apply a similar construction to -algebras.
We reformulate rLCAs for -algebras and PCAs, and call them exp-rPLCAs.
From an exp-rPLCA, we get a categorical model of the !-modality on the non-symmetric multiplicative intuitionistic linear logic (MILL).
We also reformulate rLCAs for -algebras and -algebras, and call them exch-rPLCAs.
From an exch-rPLCA, we obtain a model for an exchange modality relating the non-symmetric MILL and the symmetric MILL.
In <cit.>, we already introduced the same construction, called “rPLCAs," based on bi--algebras.
What we define as rPLCAs in this section are generalizations of those in <cit.>, based on -algebras.
§.§ Exponential planar linear combinatory algebras
Linear exponential comonads on non-symmetric monoidal categories, which model !-modalities on the non-symmetric MILL, are investigated in <cit.>.
A linear exponential comonad on a monoidal category consists of the following data.
* A monoidal comonad (!, δ, ϵ, m, m_I). Here ! is an endofunctor on ,
δ_X : !X !!X and ϵ : !X X are monoidal natural transformations for the comultiplication and the counit. A natural transformation m_X,Y : !X ⊗ !Y !(X ⊗ Y) and a map m_I : I !I make ! a monoidal functor.
* Monoidal natural transformations e_X : !X I and d_X : !X !X ⊗ !X.
* A monidal natural transformation σ_X,Y : !X ⊗ !Y !Y ⊗ !X defined as
!X ⊗ !Y δ_X ⊗δ_Y !!X ⊗ !!Y m_!X,!Y !(!X ⊗ !Y) d_!X ⊗ !Y !(!X ⊗ !Y) ⊗ !(!X ⊗ !Y)
!(e_X ⊗ id) ⊗ !(id ⊗ e_Y) !(I ⊗ !Y) ⊗ !(!X ⊗ I) !( unitor) ⊗ !( unitor) !!Y ⊗ !!X ϵ_!Y⊗ϵ_!X !Y ⊗ !X.
Here these components need to satisfy the following conditions.
* The following diagram commutes:
@C=30pt
!X ⊗ !X ⊗ !Y ⊗ !Y ⊗ !Z ⊗!Z [r]^id ⊗σ⊗ id[d]_id ⊗σ⊗ id
!X ⊗ !Y ⊗ !X ⊗ !Y ⊗ !Z ⊗ !Z [d]^m ⊗ m⊗ id
!X ⊗ !X ⊗ !Y ⊗ !Z ⊗ !Y ⊗ !Z [d]_id ⊗ m ⊗ m
!(X ⊗ Y) ⊗ !(X ⊗ Y) ⊗ !Z ⊗ !Z [d]^id ⊗σ⊗ id
!X ⊗ !X ⊗ !(Y ⊗ Z) ⊗ !(Y ⊗ Z) [d]_id ⊗σ⊗ id
!(X ⊗ Y) ⊗ !Z ⊗ !(X ⊗ Y) ⊗ !Z [d]^m ⊗ m
!X ⊗ !(Y ⊗ Z) ⊗ !X ⊗ !(Y ⊗ Z) [r]_m ⊗ m
!(X ⊗ Y ⊗ Z) ⊗ !(X ⊗ Y ⊗ Z)
* m_!Y ,!X∘σ_!X , !Y = !σ_X,Y∘ m_!X ,!Y.
* σ_X,Y^-1 = σ_Y,X.
* The following diagram commutes:
@C=50pt
!X ⊗ !Y ⊗ !Z [r]^δ_X ⊗δ_Y ⊗ id[d]_id ⊗σ_Y,Z
!!X ⊗ !!Y ⊗ !Z [r]^m_!X,!Y⊗ id
!(!X ⊗ !Y) ⊗ !Z [d]^σ_!X ⊗ !Y, Z
!X ⊗ !Z ⊗ !Y [rrd]_σ_X,Z⊗ id
!Z ⊗ !(!X ⊗ !Y) [d]^id ⊗ϵ_!X ⊗ !Y
!Z ⊗ !X ⊗ !Y
* The following diagram commutes:
@C=50pt
!X ⊗ !Y [r]^d_X ⊗ d_Y[d]_m_X,Y
!X ⊗ !X ⊗ !Y ⊗ !Y [r]^id ⊗σ⊗ id
!X ⊗ !Y ⊗ !X ⊗ !Y [d]^m ⊗ m
!(X ⊗ Y) [rr]_d_X ⊗ Y
!(X ⊗ Y) ⊗ !(X ⊗ Y)
* The following diagram commutes:
@C=50pt
I [rd]^m_I ⊗ m_I[d]_m_I
!I [r]_d_I !I ⊗ !I
* (!X, e_X,d_X) is a comonoid in .
* e_X and d_X are coalgebra morphisms.
* δ_X is a comonoid morphism.
We now introduce categorical realizability for inducing linear exponential comonads on non-symmetric monoidal categories.
The results are reformulations of part of the contents of <cit.> and <cit.> for the case of -algebras.
An exponential relational planar linear combinatory algebra (exp-rPLCA) consists of a -algebra and a comonadic applicative morphism (,,) on which satisfies the followings.
* There is ∈ || such that x ( y) ⊆{ x } for any x , y ∈ ||.
* There is ∈ || such that x ( y) ⊆ x ( y) ( y) for any x , y ∈ ||.
While the above definition employs a different style from the rLCAs of Definition <ref>, we can also define exp-rPLCAs in the same style.
For a -algebra and a comonadic applicative morphism (,,) on , the followings are equivalent.
* (,) is an exp-rPLCA.
* Take two total relations [,]: and k_i : as [,](x) := { a a' | a,a' ∈ x } and k_i (x) := {}.
Then they are applicative morphisms and ≼ [,] and ≼ k_i hold.
(1)⇒(2):
Realizers of [,] and k_i exist as pq. (r_ ( p)( q)) and .
Realizers for ≼ [,] and ≼ k_i are and .
(2)⇒(1):
Take a realizer r_1 of ≼ [,] and a realizer r_2 of ≼ k_i.
Then and exist as xy. x (r_2 y) and xy. x (r_1 y).
From an exp-rPLCA, we get a linear exponential comonad.
For an exp-rPLCA (, ), is a linear exponential comonad on .
* It is easy to see that the comultiplication δ and the counit ϵ are monoidal natural transformations.
From Proposition <ref>, the comonad is a lax monoidal functor and thus we have m_X,Y : X ⊗ Y (X ⊗ Y) and m_I : I I.
Therefore, we have as a monoidal comonad.
* e_X : X I is the function sending x to ∗. A realizer for e_X is .
* d_X : X X ⊗ X is the function sending x to x ⊗ x.
A realizer for d_X is ( pq. pq).
* It is easy to see that (, e_X, d_X) satisfies the conditions for linear exponential comonads.
Next we try to obtain linear-non-linear models for the non-symmetric MILL, that is, monoidal adjunctions between (non-symmetric) monoidal closed categories and CCCs.
Although we now have a linear exponential comonad on , at this point we cannot yet conclude that we obtain a linear-non-linear model, since we have not shown that the co-Kleisli adjunction between and is a monoidal adjunction.
To show this, we use the next proposition shown in <cit.>.
Let be a monoidal closed category and ! be a linear exponential comonad on . When has finite products, the co-Kleisli category _! is a CCC and the co-Kleisli adjunction is monoidal.
For an exp-rPLCA (, ), has Cartesian products, and thus the co-Kleisli adjunction between and a CCC is monoidal.
* The terminal object is ({∗}, ), where ∗ := ||.
* The underlying set of X × Y is |X| × |Y|. Realizers are defined as
(x,y) := { ( uv) a | }.
The set of realizers is not empty since for m ∈x_X and m' ∈y_Y,
( ( ( m)) ( ( m'))) () ∈(x,y).
* For maps f:X X' and g :Y Y' in , f × g is the function sending (x,y) to (f(x),g(y)).
A realizer of f × g is
uv. ( (r_ M u) (r_ N v)), where M ∈ ( r_f) and
N ∈ ( r_g).
* A realizer for the projection π :X × Y X is
( ( uv. ( uv))).
A realizer for the projection π' :X × Y Y is
( ( uv. ( uv))).
* For any object Z and any maps f:Z X and g:Z Y, there exists a unique map
h:Z X × Y such that π∘ h = f and π' ∘ h =g.
h is the function sending z to (f(z),g(z)), whose realizer is in
( ( r_f) ( r_g)).
For an exp-rPLCA (,), we can restrict : to the comonad on , as we saw in Remark <ref>.
By the same proof as above, we can also get a linear-non-linear model using .
For an exp-rPLCA (, ), is a linear exponential comonad on .
Moreover, the co-Kleisli adjunction between the monoidal closed category and the CCC _ is monoidal.
We have seen that the co-Kleisli adjunctions obtained from an exp-rPLCA (, ) are linear-non-linear models, by showing that and have Cartesian products.
We can further show that these categories have richer structure, as the next proposition states.
For an exp-rPLCA (, ), and are finitely complete and finitely cocomplete.
First we show the proposition for .
* The terminal object and binary products are those in the proof of Proposition <ref>.
* Given maps f,g :X Y, let Z be an assembly defined as |Z| := { x ∈ |X| | f(x) = g(x) } and x_Z := x_X.
Take a map e :Z X as the inclusion function, realized by .
Then it is easy to see that this e is the equalizer of f and g.
* The initial object is the empty set.
* Given maps f,g :X Y, take a set |W| := Y/∼, where ∼ is the smallest equivalence relation satisfying ∀ x ∈ |X|, f(x) ∼ g(x).
Take an assembly W = (|W|,_W) by w_W := ⋃_y ∈ wy_Y.
Take a map e':Y W by the projection, realized by .
Then it is easy to see that this e' is the coequalizer of f and g.
* The underlying set of X+Y is { (0,x) | x ∈ |X| }∪{ (1,y) | y ∈ |Y| }.
Realizers are defined as
(0,x) := { mp | p ∈x_X }
and
(1,y) := { nq | q ∈y_Y },
where
m := uv. ( u)( v) and n := uv. u ( v).
The coprojections in_X:X X+Y and
in_Y :Y X+Y are given as x ↦ (0,x) and y ↦ (1,y), and realized by m and n respectively.
Given maps f:X Z and g:Y Z realized by r_f and r_g, we have a unique map h:X+Y Z such that h ∘ in_X = f and h ∘ in_Y = g.
h is the function sending (0,x) to f(x) and (1,y) to g(y), which is realized by ( uv.u( r_f) ( r_g) v).
Therefore, is finitely complete and finitely cocomplete.
Since is the reflexive full subcategory of , is also finitely complete and finitely cocomplete.
Just as an adjoint pair between a -algebra and a PCA gives rise to an rLCA, an adjoint pair between a -algebra and a PCA gives rise to an exp-rPLCA and a monoidal adjunction.
Let (δ⊣γ): be an adjoint pair for a -algebra and a PCA .
* (, δ∘γ) forms an exp-rPLCA.
* (⊣): is a monoidal adjunction between the monoidal category and the Cartesian monoidal category .
* From Proposition <ref> (<ref>), δ∘γ is a comonadic applicative morphism.
Let and be elements for the counit and the comultiplication.
Then we can take ∈ as an element of xy. x ( (r_δ (δ M) y)), where M ∈ z.(γ).
Also we can take ∈ as an element of xy. x ( (r_δ (δ N) ( y))), where N ∈ z.r_γ (r_γ (γ) z) z.
* We show that the left adjoint is strong monoidal.
Let ∈ || and ∈ || be elements such that ∀ a ∈ ||, (δ (γ a)) =a and ∀ b ∈ ||, b ∈γ (δ b).
The map I 1 is realized by (δ).
The inverse 1 I is realized by a. (r_δ (δ ( b.(γ))) a).
The natural transformation ( X) ⊗ ( Y) (X × Y) is realized by
( aa'.r_δ (r_δ (δ ( bb't.tbb')) a) a').
The inverse map (X × Y) ( X) ⊗ ( Y) is realized by
u. (r_δ (δ M) u), where
M ∈ ( bb'.r_γ (r_γ (γ) ( b) ) ( b')).
Next we consider the functional case of exp-rPLCAs, just as LCAs are the functional case of rLCAs.
An exponential planar linear combinatory algebra (exp-PLCA) is an exp-rPLCA (, ) that is functional.
Not only are exp-PLCAs special cases of exp-rPLCAs, but they also induce adjoint pairs between -algebras and PCAs.
Let (, ) be an exp-PLCA.
* We have a PCA _ = (, @) with x @ y := x ( y).
* Let γ : _ be the identity function and δ: _ be the function x ↦ x. Then γ and δ are applicative morphisms and δ⊣γ.
* We have the -combinator in _ as xy. ( xy).
We have the -combinator as xyz. (M x)(r_ (r_ ()( y))( z)), where
M:= xyz. x ( () ( y)) ( ( uv. r_ u ( v)) ( z)).
* Realizers of γ and δ are xy.x( y) and xy.r_ x( y). A realizer for δ∘γ≼ id_ is and for id__≼γ∘δ is .
Next we give a (functional) adjoint pair between a -algebra and a PCA.
This example is a reformulation of the linear lambda calculus with ! (<cit.>) to a planar variant.
Suppose an infinite supply of variables x,y,z,….
Terms are defined grammatically as follows.
M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | !M | λ !x.M
Here x in λ x.M is the rightmost free variable of M; it appears exactly once in M and is not in the scope of any !.
Also we assume that, for x ⊗ x'MM', x' and x are the rightmost and next-rightmost free variables of N, that they appear exactly once in N, and that they are not in the scope of any !.
Take an equational relation on terms as the congruence generated by the following equational axioms.
* (λ x.M)N = M[N/x].
* M = λ x.Mx.
* (λ !x.M)(!N) = M[N/x].
* x ⊗ x'M ⊗ M'N = N[M/x][M'/x'].
* M= x ⊗ yMx ⊗ y.
Let Λ be the set of equivalence classes of closed terms.
Then we get a -algebra , whose underlying set is Λ and whose application is that of lambda terms.
Also we get a PCA = (Λ,@), where M@N := M (!N).
Here the -combinator and the -combinator of exist as λ !x.λ !y.x and λ !x.λ !y.λ !z. x(!z) (!(y(!z))).
Take an applicative morphism γ: as the identity function whose realizer is λ !x. λ !y.xy.
Take δ : as a function M ↦ !M whose realizer is λ !x.λ !y. !(x(!y)). Then we have an adjoint pair δ⊣γ.
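To fix notation, the following is a small illustrative sketch (ours, not part of the original development) encoding the raw term grammar above as Python dataclasses. The planarity side conditions (each bound variable occurring exactly once, rightmost, and outside the scope of !) are not enforced by the types, and reading the tensor-elimination production as a let-style binder that destructures a tensor is our interpretation.

from dataclasses import dataclass

class Term:
    """Base class for terms of the planar linear lambda calculus with !."""

@dataclass
class Var(Term):        # x
    name: str

@dataclass
class App(Term):        # M M'
    fun: Term
    arg: Term

@dataclass
class Lam(Term):        # lambda x. M   (planar abstraction)
    var: str
    body: Term

@dataclass
class Tensor(Term):     # M (x) M'
    left: Term
    right: Term

@dataclass
class LetTensor(Term):  # assumed reading: "let x (x) x' = M in N",
    x: str              # reducing to N[M/x][M'/x'] as in the axioms above
    x_prime: str
    pair: Term
    body: Term

@dataclass
class Bang(Term):       # !M
    body: Term

@dataclass
class LamBang(Term):    # lambda !x. M
    var: str
    body: Term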
Just as we can construct an LCA from a “reflexive object” in a “weak linear category” (see <cit.> and <cit.>), we can obtain exp-PLCAs in appropriate settings.
A weak planar linear category (WPLC) consists of:
* a monoidal closed category (,⊗,I) (not symmetric in general);
* a monoidal functor (!,m,m_I) on ;
* a monoidal pointwise natural transformation ! id_;
* a monoidal pointwise natural transformation ! !!;
* a monoidal pointwise natural transformation ! ! ⊗ !;
* a monoidal pointwise natural transformation ! K_I, where K_I is the constant I functor.
Here a pointwise natural transformation γ: F G is a family of maps γ_C : F(C) G(C)
(C ∈ Ob()) satisfying G(f) ∘γ_I = γ_C ∘ F(f) for any f:I C.
A WPLC need not satisfy all of the conditions for linear exponential comonads (Definition <ref>).
For instance, a WPLC does not require that ! is a comonad, nor the (ordinary) naturality of each transformation.
Let (,!) be a WPLC.
We say V is a reflexive object when there are:
* a retraction p: !V ◃ V :q;
* an isomorphism r:(V V) V and s := r^-1;
* a retraction t: (V ⊗ V) ◃ V:u.
As we saw in Example <ref>, for a reflexive object V of a WPLC, := (I,V) forms a -algebra.
Furthermore, by giving as the endofunction sending M:I V to p ∘ (!M) ∘ m_I, (,) becomes an exp-PLCA.
The proof is the same as for WLCs and LCAs in <cit.>.
§.§ Exchange planar linear combinatory algebras
Exchange modalities on the Lambek calculus and their categorical models are introduced in <cit.>.
While the term “Lambek calculus” may indicate various logics, type systems or grammars (<cit.>), here we use the Lambek calculus to mean a variant of non-symmetric MILL with left and right implications.
The Lambek calculus is modeled by monoidal bi-closed categories.
While the order of arguments cannot be exchanged in the Lambek calculus, the Lambek calculus can be extended to a sequent calculus that allows swapping arguments with modalities.
This sequent calculus is called the commutative/non-commutative (CNC) logic, that is composed of two (commutative and non-commutative) logics, and the exchange modality connects these two parts.
Categorical models of the CNC logic are given as monoidal adjunctions between monoidal bi-closed categories and SMCCs, that are called Lambek adjoint models.
In this subsection, we introduce a construction similar to that of the previous subsection, inducing Lambek adjoint models.
An exchange relational planar linear combinatory algebra (exch-rPLCA) consists of a -algebra and a comonadic applicative morphism (ξ,,) on with ∈ satisfying x (ξ y) (ξ z) ⊆ x (ξ z) (ξ y) for any x,y,z ∈.
When ξ is functional, we call (,ξ) an exchange planar linear combinatory algebra (exch-PLCA).
For an exch-rPLCA (, ξ), the co-Kleisli category is an SMCC and the co-Kleisli adjunction between and is monoidal.
* We define tensor products in as X Y := (|X| × |Y|,), where
x y := { pq |}.
* For maps f:X X' and g:Y Y' in , f g is the function sending x y to f(x) g(y).
A realizer of f g is z. M ( z), where
M ∈ pq. (r_ξ (ξ r_f) ( p)) (r_ξ (ξ r_g) ( q)).
* We define the unit object J of as ({∗}, _J), where ∗_J := {}.
* A realizer for the left unitor λ_X:J X X is u. ( p. p ) ( u).
A realizer for the inverse λ_X^-1 is in (ξ).
* A realizer for the right unitor ρ_X:X X J is in p. p (ξ).
A realizer for the inverse ρ_X^-1 is u. () ( u).
* A realizer for the associator α_XYZ :(X Y) Z X (Y Z) is u. ( v. M ( v)) ( u), where M ∈ pqr. p (r_ξ (r_ξ (ξ) ( q) ( r))).
A realizer for α_X^-1 is u. ( vw. (M' v) ( w)) ( u), where M' ∈ pq. (r_ξ (r_ξ (ξ) ( p) ( q))).
* The symmetry σ_XY:X Y Y X is the function sending x y to y x.
A realizer for σ_XY and σ_XY^-1 is u. () ( u).
* For objects X and Y, the exponential in is Y X = (_( X, Y), ), where f := { r ∈|}.
* For maps f:X' X and g:Y Y' in , g f is the function sending a map h:X Y in to g ∘ ( h) ∘ d_X ∘ ( f) ∘ d_X', where d_X : X X is the comultiplication of .
A realizer for g f is uv. r_g (r_ξ u ( (r_ξ (ξ r_f) ( v)))).
* The evaluation map ev_XY : (Y X) X Y is the function sending f x to f(x), that is realized by u. ( u).
* For any map f:Z X Y in , there exists a unique map g:Z Y X in , which sends z to x ↦ f(z x).
g is realized by uv.r_f (r_ξ (r_ξ (ξ) ( u)) ( v)).
* Finally we show that the co-Kleisli functor : is strong monoidal.
We can take natural isomorphisms J I and (X Y) X ⊗ Y in as the identity functions.
Realizers for J I and (X Y) X ⊗ Y are .
A realizer for J I is in u.u(ξ).
A realizer for X ⊗ Y (X Y) is in uv. r_ξ (r_ξ (ξ) ( u)) ( v).
The next proposition, for categories of modest sets, can also be shown in the same way as the above proposition.
Here, since X Y in the above proof is not generally a modest set, we take the tensor product ⊠ in _ in the same way as in Proposition <ref>.
That is, we take X ⊠ Y = (|Z|,_Z) with |Z| := (|X| × |Y|)/≈, where ≈ and _Z are defined as in the proof of Proposition <ref>.
For an exch-rPLCA (, ξ), the co-Kleisli category _ is an SMCC and the co-Kleisli adjunction between is monoidal.
Suppose is a bi--algebra and (, ξ) is an exch-rPLCA.
Then we have a Lambek adjoint model as the co-Kleisli adjunction between the monoidal bi-closed category and the SMCC (or between and _).
Similar to exp-rPLCAs, adjoint pairs between -algebras and -algebras correspond to exch-rPLCAs.
Let (δ⊣γ): be an adjoint pair for a -algebra and a -algebra .
* (,δ∘γ) forms an exch-rPLCA.
* (⊣): is a monoidal adjunction between the monoidal category and an SMCC . If is a bi--algebra, the adjunction is a Lambek adjoint model.
* From Proposition <ref> (<ref>), δ∘γ is a comonadic applicative morphism.
We can take
in as xyz. x ( (M ( y)( z))), where M ∈ y.r_δ (r_δ (δ N) y) and
N ∈ yz.r_γ(r_γ (γ) z)y.
* It follows from Proposition <ref>.
Similar to exp-PLCAs, exch-PLCAs induce adjoint pairs between -algebras and -algebras.
Let (, ξ) be an exch-PLCA.
* We have a -algebra _ξ = (,@) with x @ y := x(ξ y).
* Let γ:_ξ be the identity function and δ:_ξ be the function x ↦ξ x. Then γ and δ are applicative morphisms and δ⊣γ.
* We have the -combinator in _ξ as x. ( x).
* Same as the proof of Proposition <ref> (<ref>).
As an example of an exch-PLCA, we have a calculus similar to that of Example <ref>.
Suppose an infinite supply of variables x,y,z,….
Terms are defined grammatically as follows.
M ::= x | MM' | λ x.M | M ⊗ M' | x ⊗ x'MM' | ξ M | λ^ξ x.M
Here x in λ x.M is the rightmost free variable of M; it appears exactly once in M and is not in the scope of any ξ.
The variable x in λ^ξ x.M needs to appear exactly once in M.
Also we assume that, for x ⊗ x'MM', x' and x are the rightmost and next-rightmost free variables of N, that they appear exactly once in N, and that they are not in the scope of any ξ.
The rest is the same as Example <ref>.
Finally, we give an example of an exch-PLCA based on of Example <ref>.
This example is similar to the one introduced in <cit.>.
Let T and | | be the same set and function defined in Example <ref>.
First we give a -algebra _e from T.
Take |_e| as the powerset of { t ∈ T | |t| =e },
and a binary operation ⊚ on |_e| as M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M }.
Then _e = (|_e|,⊚) is a -algebra, where
* = { (t_3 t_1) (t_2 t_1) (t_3 t_2) | t_1,t_2,t_3 ∈ T };
* = { (t_3 t_2 t_1) (t_3 t_1 t_2) | |t_1| = |t_2| = |t_3| =e };
* = { t_1 t_1 | t_1 ∈ T }.
Take γ: || |_e| as the function sending M to { t t | t ∈ M} and δ : |_e| || as the inclusion function.
Then these functions form a (functional) adjoint pair (δ⊣γ): _e.
Here corresponding realizers are
* { ((t_2 t_2) (t_1 t_1)) ((t_2 t_1) (t_2 t_1)) | t_1,t_2 ∈ T } realizing γ;
* { t_1 t_1 | t_1 ∈ T } realizing δ;
* { (t t) t | |t| = e } realizing id ≼γ∘δ;
* { t (t t) | |t| ≥ e } realizing δ∘γ≼ id.
The above construction can also be applied to obtain exch-PLCAs on ' of Example <ref> and on ” of Example <ref>.
While we obtained exch-PLCAs from T, the same construction cannot be applied to obtain exp-PLCAs.
If we try to get some PCA of subsets of T, employing M⊚ N := { t_2 |∃ t_1 ∈ N ,(t_2 t_1) ∈ M } as the binary operation, ⊚ M ⊚ N = M hardly ever holds, since the left-hand side often loses information about M when N is nearly empty.
As we saw in Proposition <ref>, exp-rPLCAs can be defined not in the style using the combinators and , but in the style using the applicative morphisms [,] and k_i.
It is still unclear whether we can define exch-rPLCAs in the latter style, without using the combinator .
If we can characterize exch-rPLCAs in the latter style, we might construct exch-PLCAs from reflexive objects in the same way as for exp-PLCAs and WPLCs (Definition <ref>).
§ RELATED WORK
This paper is an extended version of the earlier papers by the author <cit.>.
Among the results of <cit.> not introduced in this paper, we have “()^∘-algebras” as a class of applicative structures. ()^∘-algebras are more general than -algebras, and give rise to skew closed categories of assemblies (or modest sets).
Skew closed categories, introduced in <cit.>, are categories with similar closed structures to closed categories, though some conditions needed in closed categories are not assumed.
(For instance, the natural transformation i_X : (X I) X in a skew closed category is not necessarily invertible.)
Although skew closed categories and closed multicategories are generalizations of closed categories in different directions, from Proposition <ref> we can say that we cannot (canonically) obtain (or ) that is a closed multicategory but not a skew closed category.
Details of these results are given in Appendix <ref>.
Skew monoidal categories introduced in <cit.>
are categories with the same components as monoidal categories but natural transformations (left and right unitors and associators) do not need to be invertible.
The relationship between skew monoidal categories and skew closed categories is similar to that between monoidal categories and closed categories.
Recalling the proof of Proposition <ref>, we find that we use only to realize ρ_X^-1 : X ⊗ I X.
The invertibility of ρ_X is not assumed in skew monoidal categories.
Thus, when we have as a “()-algebra,” we can show that is a skew monoidal category.
In <cit.>, the “extensionality” of combinatory algebras is investigated.
The extensionality defined in that paper is a more general condition than the standard one, seen in <cit.>.
By the extensionality in <cit.>, we can deal with polynomials and combinatory completeness for combinatory algebras that cannot be stated in the same way as Definition <ref> and Proposition <ref>, such as the braided case.
In our study, we do not need discussions of extensionality to state the combinatory completeness appearing in this paper; however, assuming extensionality on an applicative structure may induce additional structure on and .
For instance, for an “extensional” -algebra, since the -combinator always satisfies the axiom of , and become closed categories.
There are many possible ways to define classes of applicative structures other than using the existence of certain combinators, and extensionality is one such way.
The definition of bi--algebras may look like “dual combinators” introduced in <cit.>.
Similar to bi--algebras, in the binary operations of dual combinators, elements can act on elements from both the left and right sides.
However, a dual combinatory logic has only one sort of application, whereas a bi--algebra has two sorts of applications.
Also, the reductions of dual combinatory logic do not satisfy confluence, while confluence holds for the bi-planar lambda calculus.
In this paper, we referred to several logics and their categorical models without recalling detailed definitions.
See <cit.> for more about linear logic.
For the MILL and the categorical models that we deal with in this paper, see <cit.>.
Also, the term “the Lambek calculus” has various meanings as a logic, and in this paper we use it to mean a variant of non-symmetric MILL with left and right implications.
Our treatment of the Lambek calculus and its categorical semantics is from <cit.>.
The basics of the Lambek calculus are given in <cit.>.
In <cit.>, the relationships between the planar lambda calculus and planar graphs are investigated.
In that paper, the bijection between rooted trivalent planar graphs and closed planar lambda terms is given, and it is shown that such graphs can be generated by combining a few kinds of “imploid moves.”
The theory corresponds to the combinatory completeness of -algebras and the planar lambda calculus.
Similarly, we can give a bijection between rooted trivalent planar graphs and closed bi-planar terms, but here the rooted trivalent planar graphs need to have two-colored (“left” and “right”) vertices.
§ CONCLUSION
In sections <ref> and <ref>, we introduced several classes of applicative structures and showed that they induce closed structures on categories of assemblies and categories of modest sets, as in Table <ref>. (The results for -algebras are newly presented in this paper.)
In section <ref>, we showed that these classes are different ones by giving several examples.
In section <ref>, we presented propositions stating that categorical structures of induce structures of , under some conditions.
(The propositions for -algebras and bi--algebras are newly shown in this paper.)
By combining the above results, for instance, we can say that we have with a truly non-symmetric bi-closed structure, by using that is a bi--algebra but not a -algebra.
In section <ref>, we introduced exp-rPLCAs and exch-rPLCAs that give rise to categorical models for the linear exponential modality and the exchange modality on the non-symmetric MILL.
Just as an adjoint pair between a -algebra and a PCA induces an rLCA, an adjoint pair between -algebras and a PCA/-algebra induces an exp-rPLCA/exch-rPLCA.
Finally we give three issues for future work.
First, there are several unsolved problems we mentioned in this paper.
Those that we consider important are:
* to show that the computational lambda calculus is not a -algebra (see Section <ref>);
* to clarify conditions needed to show that is a PCA when (or ) is a CCC (see Remark <ref>);
* to clarify conditions needed to show that is a -algebra/bi--algebra when is a monoidal closed category/monoidal bi-closed category (see the end of Section <ref>).
Second, most examples given in this paper are the standard ones like the term models.
We would like to find more interesting examples of applicative structures and adjoint pairs, that should be useful for investigating non-commutative logics and their models in a systematic way.
Third, for various categorical structures not given in this paper, we want to clarify what we need to construct them via categorical realizability.
For instance, we have said (in section <ref>) that we cannot give (nor ) that is a symmetric closed category but not an SMCC, in canonical ways.
Also, we cannot give (nor ) that is a closed multicategory but not a skew closed category.
As an example not yet mentioned, we cannot make a braided monoidal category but not an SMCC.
Although there is a class of applicative structures, ^±-algebras, nicely corresponding to the structure of braided monoidal categories and the braided lambda calculus (investigated in <cit.>), the construction of cannot reflect the difference between the two sorts of braids (realized by ^+ and ^-) and turns braids into the symmetry.
To give the categorical structures listed above, we need to change the construction of (and ), rather than trying to give conditions on applicative structures.
For instance, to make a braided monoidal category (not an SMCC), we may need to change the fact that the construction of is based on , which is not only braided but also symmetric.
§ ACKNOWLEDGMENT
I would like to thank Masahito Hasegawa for a lot of helpful advice, discussions and comments.
This work was supported by JST SPRING, Grant Number JPMJSP2110.
alphaurl
§ -ALGEBRAS, -ALGEBRAS AND SKEW CLOSED CATEGORIES
Though the classes of applicative structures appearing in this paper are subclasses of -algebras, this does not mean that realizability constructions for closed structures all require -algebras.
Indeed, in <cit.>, we introduced -algebras, which form a more general class than -algebras and give rise to skew closed categories.
First we recall the definition of skew closed categories from <cit.>.
A (left) skew closed category consists of the following data:
* a locally small category ;
* a functor (- -):^op×, called the internal hom functor;
* an object I, called the unit object;
* an natural transformation i_X : (X I) X;
* an extranatural transformation j_X : I (X X);
* a transformation L_Y,Z^X : (Z Y) ((Z X) (Y X)) natural in Y and Z and extranatural in X,
such that the following axioms hold:
* ∀ X,Y ∈, L_Y,Y^X ∘ j_Y = j_(Y X);
* ∀ X,Y ∈, i_(Y X)∘ (id_(Y X) j_X) ∘ L_X,Y^X = id_(Y X);
* ∀ X,Y,Z,W ∈, the following diagram commutes:
@C=-25pt@R=30pt (W Z) [dl]_L_Z,W^X[dr]^L_Z,W^Y
(W X) (Z X) [d]^-L_(Z X),(W X)^(Y X)
((W Y) (Z Y)) [dd]^-L_Y,W^X id
((W X) (Y X)) ((Z X) (Y X)) [drr]_-id L_Y,Z^X
((W X) (Y X)) (Z Y)
* ∀ X,Y ∈, (i_Y id_(X I)) ∘ L_X,Y^I = id_Y i_X;
* i_I ∘ j_I = id_I.
A skew closed category is called left normal when the function γ : (X,Y)(I,Y X) sending f:X Y to
(f id_X) ∘ j_X is invertible for any X,Y ∈.
There is a categorical structure called skew monoidal categories, introduced in <cit.>, which have the same components as monoidal categories but in which the invertibility of unitors and associators is not assumed.
Skew closed categories are the categorical structures determined from skew monoidal categories, like closed categories are determined from monoidal categories.
Obviously, closed categories are also left normal skew closed categories.
We investigated categorical realizability for skew closed categories in <cit.> and next we recall some of the results.
A total applicative structure is a -algebra iff it contains , , and a for each a ∈ , where a is an element of such that ∀ x,y ∈, (a) xy = x (ay).
Since (a) xy = x (ay), any -algebra is also a -algebra.
In a similar way to the proof of Proposition <ref>, we can show that the class of -algebras is different from the class of -algebras, by using a freely constructed -algebra (with constants).
When is a -algebra, and are left normal skew closed categories.
The proof is almost the same as Proposition <ref>.
Here, for maps f and g, we give a realizer of (g f) as (r_f) ( r_g).
It is still not clear whether needs to be a -algebra to make (or ) a skew closed category, as in the propositions of Section <ref>.
(In a setting similar to Proposition <ref> and Proposition <ref>, though we can show the existence of , and (), we cannot show that there is .)
Since -algebras are -algebras, the next proposition holds.
When is a -algebra, and are skew closed categories.
From Proposition <ref>, we can say that we cannot (canonically) obtain (nor ) that is a closed multicategory but not a skew closed category.
Although closed multicategories are a generalized closed categorical structure in a different direction from skew closed categories, skew closed categories are more general than closed multicategories as the categorical structures appearing in categories of assemblies.
Moreover, when constructing applicative structures from reflexive objects, skew closed categories can give even -algebras, just as closed multicategories do.
Suppose a skew closed category and an object V with a retraction
r : (V V) ◃ V :s.
Then (I,V) forms a -algebra.
* For M,N:I V, the application is defined as
I M V s V V id_V N V I i_V V.
* The -combinator is
I j_V V (V V) (V V) L_V,V^V s ((V V) (V V)) V
(r s) id_V V V V r id_V V V r V.
* The -combinator is r ∘ j_V.
* Given arbitrary M:I V, M is
I j_V V V s id_V (V V) V (id_V M) id_V (V I) V i_V id_V V V r V.
Suppose a closed multicategory and an object V with a retraction
r : (V;V) ◃ V :s.
Then (;V) forms a -algebra.
* For M,N ∈(;V), the application is defined as
M,N V,V s,id_V(V;V),V ev V.
* Take a map f:V,V,V V as
V,V,V id_V,s,id_V V,(V;V),V s,ev(V;V),V ev V.
The -combinator is given as r ∘Λ_;V;V (r ∘Λ_V;V;V(r ∘Λ_V,V;V;V(f))). Here Λ is the function in Definition <ref>.
* The -combinator is r ∘Λ_;V;V(id_V).
* Given arbitrary M ∈(;V), M is r ∘Λ_;V;V(ev ∘ (s,M)).
When we assume that the retraction r : (V V) V of Example <ref> is an isomorphism, the -combinator further satisfies the axiom of and (I,V) forms a -algebra.
Similarly, when we assume that the retraction r : (V;V) V of Example <ref> is an isomorphism, the -combinator satisfies the axiom of and (;V) forms a -algebra.
http://arxiv.org/abs/2307.04050v1 | 20230708212820 | Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks | ["Ritesh Ojha", "Wenbo Chen", "Hanyu Zhang", "Reem Khir", "Alan Erera", "Pascal Van Hentenryck"] | cs.AI | ["cs.AI", "cs.LG", "cs.SY", "eess.SY"] |
Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks
===================================================================================
*Co-first authors
The load planning problem is a critical challenge in service network
design for parcel carriers: it decides how many trailers (or loads),
perhaps of different types, to assign for dispatch over time between
pairs of terminals. Another key challenge is to determine a flow plan, which specifies how parcel volumes are assigned to planned loads. This paper considers the Dynamic Load Planning Problem (DLPP), which addresses both flow and load planning challenges jointly in order to adjust loads
and flows as the demand forecast changes over time before the
day of operations. The paper aims at developing a decision-support
tool to inform planners making these decisions at terminals across the
network. The paper formulates the DLPP as a MIP and shows that it
admits a large number of symmetries in a network where each commodity
can be routed through primary and alternate paths. As a result, an
optimization solver may return fundamentally different solutions to
closely related problems (i.e., DLPPs with slightly different inputs),
confusing planners and reducing trust in optimization. To remedy this
limitation, the paper proposes a Goal-Directed Optimization (GDO) that
eliminates those symmetries by generating optimal solutions staying
close to a reference plan. The paper also proposes an optimization
proxy to address the computational challenges of the optimization
models. The proxy combines a machine learning model and a
feasibility restoration model and finds solutions that satisfy real-time constraints imposed by planners-in-the-loop. An extensive computational study on industrial instances shows that the optimization proxy is around 10 times faster than the commercial solver in obtaining the same quality solutions and orders of magnitude faster for generating solutions that are consistent with each other. The proposed approach also demonstrates the benefits of the DLPP for load consolidation, and the significant savings obtained from combining machine learning and optimization.
§ INTRODUCTION
The e-commerce market continues to show robust growth and leading
analysts project that today's $3.3 trillion market could grow further
to $5.4 trillion annually by 2026 (<cit.>). Much of e-commerce
relies on home delivery of small packages or parcels and other boxed
freight. Key freight carriers like UPS and FedEx continually seek to
redesign and operate profitable logistic networks that meet e-commerce
customer service expectations. Beyond physical network design
including the location and sizing of various freight processing
terminals, these companies face challenging service network design
problems. A critical service network design challenge for package
carriers are the so-called load planning problems (for
background, see <cit.>). Here, load planning refers to
decisions related to the number of trailers or container loads, perhaps of different types, to plan for dispatch over time between
pairs of terminals. Such planned loads are the transportation capacity
of the network. Flow planning decisions represent another key challenge, where the flow plan specifies how to allocate parcel volumes to planned loads to feasibly and cost-effectively serve network demand. As each package moves from its origin to destination, it is transported by a sequence of planned loads where it is unloaded and sorted at a transfer (hub) terminal between each loaded dispatch. Together, the flow and load plan decisions define a service network that moves package
volume from origins to destinations in order to meet customer service
expectations. The research described in this paper is conducted
directly with a leading global parcel carrier that operates a massive
network moving large volumes of packages each day. Figure <ref> illustrates the load planning operations at an example
terminal. It highlights the planner-in-the-loop environment in which load planning takes place; an important consideration underlying this
research.
Packages at a terminal with the same destination and service class are
referred to as a commodity. A flow plan defines flow
rules for each commodity in the service network; these flow rules
specify how a commodity is routed through the network over time. Since parcel carriers operate massive terminal networks with large numbers of transfer locations, a flow plan may include alternate flow rules that specify loading paths for commodities in addition to the default path specified by the primary flow rules. Both the primary (default) and alternate paths specify how a commodity moves through the network, and these planned paths are service feasible, i.e., they ensure that commodities arrive on time given their service guarantees.
This paper considers the Dynamic Load Planning Problem (DLPP)
faced by the load planner at a terminal as depicted in Figure
<ref> during a short time period (one or two weeks) leading up to the day of operations. The goal of the planner, and thus of the DLPP, is to decide (1) how many loads should be planned for outbound dispatch to other terminals at various times during the day of operations and (2) how to allocate commodity volumes across planned loads respecting the capacity constraints and the primary and alternate flow rules. These two decisions define what is called a load plan in this paper. The objective of the DLPP is to obtain a load plan that
minimizes the number of loads, consolidating the commodities as best
as possible. In practice, the DLPP is solved by planners, who adjust
existing load plans manually to reflect changes in commodity volumes
arriving at the terminal. This process is typically myopic
and creates inefficiencies across the network.
The goal of this research is to develop a decision support tool to
assist planners in solving the DLPP, suggesting load plans that remove
existing inefficiencies. Moreover, for terminals that do not have a
planner, the tool can fully automate the DLPP, bridging the gap
between network design and operations. To develop such a tool, this paper first investigates optimization models for the DLPP. In its general form, the DLPP is strongly NP-hard and its MIP formulation is challenging for state-of-the-art solvers given the size of the instances encountered in practice. Moreover, the natural MIP model exhibits significant symmetries which is highly undesirable for the planner-in-the-loop environment of the industrial partner. Indeed, planners will be extremely confused if small changes in commodities result in completely different load plans. To address this challenge, this paper presents a Goal-Directed Optimization (GDO) that solves a first model to find the optimal solution to the DLPP and uses a second model to find a plan that is as close as possible to a reference plan. GDO is shown to produce consistent plans, i.e., plans that are close for inputs that only differ slightly. Unfortunately, the GDO approach is too time-consuming to be used in planner-in-the-loop environments. To address this final difficulty, this research proposes the use of
optimization proxies that combine a Machine-Learning (ML) model and a
feasibility restoration procedure to obtain near-optimal solutions in
a few seconds, even for the largest terminals. The ML model uses
supervised learning to mimic the GDO approach and predicts the optimal set of planned loads. The feasibility restoration procedure then solves a small MIP model to determine the final allocation of commodity volumes to planned loads, adding extra capacity as needed to ensure feasibility. The proposed approach is practical since it produces high-quality plans that are consistent with each other, where small changes in inputs leads to very similar load plans by virtue of the ML training that mimics the GDO optimization.
The main contributions of the paper can be summarized as follows:
* The paper formalizes the DLPP and develops a natural MIP
formulation to solve it.
* The paper proposes a Goal-Directed Optimization approach to
remedy the limitations of the MIP formulation; it uses a 2-stage
approach to eliminate symmetries and provide optimal load plans that
are close to a reference plan.
* The paper proposes an optimization proxy to address the computational
difficulties of the GDO approach; the optimization proxy uses a
machine learning model to predict the loads and a feasibility
restoration procedure to adjust the predictions to satisfy the problem
constraints and determine the commodity flows. Once trained, the optimization proxy provides high-quality solutions in a few seconds.
* The paper presents extensive computational results on industrial
instances, including some of the largest terminals in the network;
the results demonstrate the significant benefits of optimization and
the ability of the optimization proxy to find high-quality and
consistent solutions in real time. More precisely, the paper shows
that the optimization proxy outperforms a greedy heuristic and the
MIP model solved by a commercial solver both in terms of the
objective function value and consistency metrics. The
optimization proxy is around 10 times faster than the commercial
solver in obtaining solutions with the same objective function value
and orders of magnitude faster in terms of generating solutions that
are consistent with a reference plan. Empirical experiments show
the value of breaking symmetries by GDO, which helps the proxy to
produce high-quality and consistent load plans.
* From a business and sustainability perspective, the experiments
demonstrate the value of having alternate flow paths for the
commodities, in addition to the primary flow paths. The proposed load
plans allocate approximately 17% commodity volume to the alternate
flow paths and reduce the required load capacity by 12%-15%.
The rest of this paper is organized as follows. Section
<ref> summarizes related work. Sections
<ref> and <ref> introduces the DLPP
and its modeling. Sections <ref> and <ref>
present the GDO approach and the optimization proxy. Section
<ref> describes a heuristic that mimic human
planners and serve as a baseline. Section <ref>
describes the computational results. Section <ref>
discusses the benefits of the DLPP formulation, optimization, and
machine learning, quantifying the cost and sustainability benefits and
the important factors driving them.
§ RELATED WORK
Service Network Design.
There is abundant research on network design for the Less-than-truckload (LTL) trucking industry (see
<cit.>). Interested readers
can consult erera2013creating for a detailed description
of LTL operations. <cit.> present a detailed
description of the mathematical models and heuristics for the problems
arising in trucking service network design. The authors describe the
tactical flow and load planning problem which is solved weeks
in advance for “typical” commodity volume (e.g., average daily
origin-destination commodity volume) for a network of terminals. The
goal of the flow and load planning problem is to determine
effective primary flow paths for the commodity volume and the total
trailer capacity required on each flow path in a network of terminals.
Most of these network design problems are formulated over time-space
networks using integer programming models. The flow and load planning
problem with both primary and alternate flow paths for industry-scale
instances can be modeled as large-scale integer programming models
which, unfortunately, cannot be solved directly by commercial
solvers. Therefore, previous work in this area focused mainly on
finding a single cost-effective primary flow path for the
commodities. Exact approaches to solve these problems have been
proposed by <cit.>, <cit.>,
and <cit.>. However, these approaches can only
solve instances with a few thousand packages. For industry-scale
instances, researchers have resorted to various heuristics including
variants of local search heuristic algorithms
(<cit.>,
<cit.>) and greedy algorithms (<cit.>).
Flow and Load Planning with Alternate Paths.
Tactical flow and load planning is typically based on average daily
estimates of origin-destination commodity volume. However, commodity
volume differ substantially from day to day and from week to week
(<cit.>). Hence, planners at a terminal locally
modify the load plans on a daily basis, using the latest estimates of
commodity volume until the day of operations. More specifically, the
planners take advantage of both primary and alternate flow paths to
improve trailer consolidation at their respective
terminals. It is worth highlighting that the primary flow paths come from flow and load planning. Once primary options are available alternate flow paths, that are time feasible, are identified. To the best of our knowledge no paper carefully studies the problem of allocating volume across alternate flow paths in operations. Alternate flow paths are useful to reduce trailers when commodity volume can be split across paths. This is especially useful because of demand uncertainty. <cit.> present a study on the value of
having these alternate flow paths to hedge against demand
uncertainty. They show that it is sufficient to have just one
alternate to contain the impact of most of the fluctuations
in demand; the authors refer to such a load plan as a 2-alt load plan. Subsequently, the authors in <cit.> study the operational decisions that LTL carriers need to make to effectively operate a 2-alt load plan when demand changes dynamically on a day-to-day basis. However, the proposed approach cannot be solved for practical sized instances. This paper proposes a ML-based solution approach for the allocation of volume across both primary and multiple alternate flow paths; the proposed approach is shown to be effective for large scale instances experienced in practice.
Dynamic Load Planning.
Network-wide simultaneous optimization of load planning adjustments is
a daunting challenge due to the scale of the network, number of commodities and the number of transfer hubs for the commodities.
Existing research in the literature may be applicable to the problem of selecting a single primary flow path
(non-splittable) for each commodity at each terminal for each sorting
period in order to minimize the cost of the resulting load plan. Splitting commodity volume across alternate flow paths is likely to improve trailer utilization as it introduces more flexibility in the load planning process. This research considers the DLPP problem at a terminal in which the commodity volume can be split (among primary and alternate flow paths) to promote better trailer utilization, lower transportation cost, and increased sustainability. The flexibility to adjust plans enables terminal planners to better manage daily operations while maintaining service guarantees. This problem is mentioned as an
interesting and useful future research direction by <cit.>.
One paper in the literature, <cit.>, does introduce the problem of re-routing freight volume on alternate flow paths to improve on-time performance of load plans on the day-of-operations; this becomes necessary when the actual volume deviates from the forecasted volume on the day-of-operations. In this work, commodity volume is assigned to exactly one flow path (it is not splittable) such that the total (fixed) trailer capacity is respected and the objective is to minimize the total lateness of shipments. The authors develop MIP models for this problem and propose heuristic algorithms to solve them. Note that a key difference between this approach and the approach proposed in the current paper is that we allow volume to be split across multiple flow paths on the day-of-operations. Furthermore, we also adjust the load plan to identify opportunities to reduce outbound capacity (and improve utilization) as demand forecasts are updated.
The DLPP is also similar to the variable-sized bin packing problem
described by <cit.> where the objective is to
minimize the total space used to pack a set of items into bins
(available in different sizes), such that each item is packed into
exactly one bin. In the DLPP, the packages are the items and trailers
are bins but the key difference is that the DLPP allows for the
splitting of the package volume into compatible trailers in order to
further reduce the transportation cost by promoting better
consolidation or packing.
Machine Learning for Optimization.
In recent years, there has been a notable surge of interest among
researchers in the development of ML surrogates for solving MIPs. This
emerging field has attracted attention due to the potential of ML
techniques to provide efficient approximations for computationally
intensive calculations involved in solving MIPs. We refer the reader
to (<cit.>,<cit.>) for a comprehensive
overview on the topic. The techniques can fall into one of the two
categories. The first category includes methods based on
reinforcement learning
(<cit.>),
where the ML model is trained by interacting with simulation
environments. The second category comprises supervised learning
(<cit.>),
where the ML model imitates the optimization model and replaces
expensive calculations with a quick approximation. This research
focuses on the latter category since the proposed optimization model
could be used as the expert for supervised learning. Optimization
proxies, which combine learning with feasibility restoration, has
emerged from supervised learning. Recent work in this area includes
(<cit.>).
§ PROBLEM DESCRIPTION AND MODELING
Parcel carriers operate massive terminal networks with hundreds of
facilities to move large volumes of parcels each day. Each day
at a terminal is divided into time windows (typically three to four
hours in length), called sort periods or sorts,
during which parcels are sorted. A typical operational day includes
“day”, “twilight”, “night” and “sunrise” sorts that are
non-overlapping in time. All parcels sorted at a terminal
during a given sort with the same service class (e.g., one-day service
or two-day service) and the same destination are referred to as a
commodity. Suppose then that each commodity has a primary flow path and one or more alternate flow paths that each specify a sequence of terminals and sorts that parcels will traverse en route from origin to destination. For a specific commodity at a specific terminal at a specific sort, each flow path will determine the next terminal and sort to which packages will be loaded. Typically, shipments are loaded on trailers moving along the primary flow path for the commodity; however, when there are better consolidation opportunities, commodity volume can be split over primary and alternate flow paths, or completely allocated to alternate flow paths. The rest of this section describes the main concepts underlying the DLPP. Section <ref> describes some key terminology and presents examples to illustrate the operations at terminals. Section <ref> describes the DLPP that includes splitting of commodity volume across
primary and alternate flow paths.
§.§ Definitions
Let 𝒢 = (𝒩,𝒮) denote a time-space network. Each node n ∈𝒩 represents a terminal location at a particular time period and is defined by a tuple, i.e. n=(terminal, sort, day). Each arc s ∈𝒮 represents a directed dispatch of loads from one timed node to another. Henceforth in the paper, we refer to each such an arc as a sort pair. Figure <ref> illustrates an example time-space network for terminal A during a single twilight sort period. In this example, three sort pairs are outbound from terminal A on day 1, namely, (A,Twilight,1)→(X,Twilight,2),
(A,Twilight,1)→(Y,Twilight,2), and
(A,Twilight,1)→(Z,Twilight,3). Figure <ref> illustrates another example of terminal B that operates multiple sort periods, i.e., the day, twilight, night sorts on a given day, and seven sort pairs (b_1,b_2,b_3,b_4,b_5,b_6,b_7) outbound from terminal B.
A key objective in load planning is to determine the number of
trailers (possibly of different types) to operate on each sort pair to containerize the total commodity volume allocated to the sort pair. During a sort, each loading door at a terminal builds/loads trailers for a specific sort pair destination. In a single sort facility, as shown in Figure <ref>, if there is commodity volume allocated on each of the three sort pairs, then at least three trailers (one on each sort pair) should be opened at the loading doors corresponding to the sort pair destinations.
In practice, commodities outbound from an origin terminal that arrive over consecutive sorts and that are heading to the same time-space destination can be consolidated together. For that, the concept of load pairs is introduced, where a load pair represents a set of consecutive sort pairs that share the same destination node. Combining sort pairs into load pairs allows better consolidation and trailer utilization, since trailers can be held partially loaded from one sort to the next prior to dispatch to the destination. Figure <ref> illustrates an example of a load pair that is composed of three different sort pairs.
We now relate primary and alternate flow paths to sort pairs. If we consider volume for commodity k ∈𝒦 at some time-space location n, its primary flow path specifies the next (terminal, sort, day) to which it should be loaded. Thus, the primary path identifies a unique outbound sort pair for k at n. Similarly, each alternate flow path identifies a (possibly different) outbound sort pair for k. Recall that primary and alternate flow paths for each k at n are specified in advance, and we assume that loading outbound on any of these options will lead to volume arriving on-time to its destination.
We will define compatible sort pairs for k at n to be the primary path sort pair (the primary sort pair) and any alternate path sort pair (an alternate sort pair). Furthermore, any sort pairs that are in load pairs with compatible sort pairs with an earlier origin sort are also compatible. When volume is assigned to such earlier sort pairs, the decision is to assign volume to trailers that are opened first for loading in those earlier sorts and held for dispatch. Figure <ref> illustrates four compatible sort pairs (outbound from terminal B) for a commodity k sorted in the twilight sort at terminal B.
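As a small illustration of this bookkeeping (with our own, hypothetical naming rather than the carrier's systems), time-space nodes, sort pairs, load pairs, and compatible sort pairs can be sketched as follows; grouping by destination is a simplification and the sort_index function, which maps a node to its position in time, is an assumption of this sketch.

from collections import namedtuple

Node = namedtuple("Node", ["terminal", "sort", "day"])   # e.g. Node("B", "Twilight", 1)
SortPair = namedtuple("SortPair", ["origin", "dest"])    # directed dispatch arc

def load_pairs(sort_pairs):
    # Group sort pairs by their time-space destination; consecutive origin
    # sorts heading to the same destination form a load pair (consecutiveness
    # is not checked in this simplified sketch).
    groups = {}
    for sp in sort_pairs:
        groups.setdefault(sp.dest, []).append(sp)
    return list(groups.values())

def compatible_sort_pairs(primary, alternates, sort_pairs, sort_index):
    # Primary and alternate sort pairs are compatible, and so is any earlier
    # sort pair in a load pair containing a compatible one, since its trailers
    # can be opened early and held for dispatch.
    compatible = {primary, *alternates}
    for lp in load_pairs(sort_pairs):
        anchors = [sp for sp in lp if sp in compatible]
        if anchors:
            latest = max(sort_index(sp.origin) for sp in anchors)
            compatible.update(sp for sp in lp if sort_index(sp.origin) <= latest)
    return compatible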
§.§ Dynamic Load Planning Problem (DLPP)
Parcel carriers typically build a load plan in two phases: (1) the tactical flow and load planning phase specifies an initial plan and provides an input to the scheduling team; and (2) the load plan adjustment allows adjustments to the initial plans up to the day-of-operation. The scheduling and load dispatching teams then execute the
adjusted load plan. Weekly plans that determine the number of loads or
trailers to operate on each sort pair are fixed approximately two
weeks in advance of the operating week. However, due to demand
uncertainty, the volume forecast for commodities may change, and
adjustments to the load plan may be necessary to accommodate actual
volumes. These adjustments may lead to cost decreases when unnecessary
load capacity is removed from the plan.
Consider the following optimization problem during the two weeks
leading into the day-of-operation. Each terminal in the network has
a set of forecasted inbound commodities during some time period (for
example, a single operating day and multiple sorting periods). Each
such commodity arrives during a specific sorting period and has a
destination terminal and service class (specifying a due date at the
destination). Given this information, the fixed flow plan specifies a
primary flow path (next terminal and arriving sorting period) for each
commodity, and possibly also one or more alternate flow paths. Recall
that, if the commodity is assigned to any of these flow paths, then it
will reach its final destination on time according to plan. The
adjustment optimization problem is to assign each commodity to its
primary and/or one of its alternate flow paths while simultaneously
determining how many loads of different types are required for each
proposed flow path. Note that existing flow and load planning
literature typically assumes that all commodities arriving
at a terminal during a specific sorting period should be assigned to
the primary flow path. Here, the challenge is different, and is
instead to determine specifically how to split each commodity volume
among its possible compatible flow paths or sort pairs to drive high
load utilization levels and low costs while still meeting service
promises.
Consider the example shown in Figure <ref> with three
commodities (4 units destined to terminal C, 3 units destined to
terminal E, 3 units destined to F) sorted in the twilight sort of day 1 at
terminal B. In this example, we denote each commodity by its
destination terminal name. The commodity destined for terminal F has
three compatible sort pairs: (B,Twilight,1)→ (C,Sunrise,2)
is the primary sort pair, and (B,Twilight,1)→ (E,Sunrise,2)
and (B,Twilight,1)→ (D,Twilight,2) are the alternate sort
pairs. Splitting commodity volume destined to terminal F between the two
alternate sort pairs to C and D yields better consolidation (and
lower transportation cost) as the solution requires one less trailer
on the two arcs: (B,Twilight,1)→ (D,Twilight,2) and
(D,Twilight,2)→ (F,Day,3).
For a
given terminal, define S to be the set of outbound sort pairs and
let K be the set of commodities sorted at the terminal. Each
commodity k ∈ K has a cubic volume of q^k, and a set of
compatible sort pairs S^k. For every outbound sort pair s, there
is a set V_s of trailer types, that can be used to containerize the
total commodity volume allocated to the sort pair. Each sort pair can
have different set of allowed trailer types, i.e., V_s_1 can be
different from V_s_2 for two different sort pairs s_1,s_2 ∈
S. Each trailer type v ∈ V_s has a cubic capacity Q_v and has a
per-unit transportation cost c_v. A solution of the DLPP
determines the number of trailers of each type assigned to each sort
pair, as well as the volume of each commodity allocated to each
trailer. A solution must ensure that all the volume is assigned to
trailers and that the capacities of the trailers are not violated. The
goal of the DLPP is to find a solution that minimizes the costs of the
trailers. Appendix <ref> provides the complexity
results. The DLPP is strongly NP-hard. It becomes weakly NP-hard when
each commodity is compatible with exactly one or with all sort
pairs and there are multiple trailer types. It becomes polynomial when each commodity is compatible with exactly one or with all sort
pairs and there is only one type of trailer.
§.§ A Mixed-Integer Programming Formulation
An optimization model for the DLPP can be defined as follows in Model <ref>:
Minimize_x,y ∑_s ∈ S ∑_v ∈ V_s c_v y_s,v
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_k ∈ K:s ∈ S^k x^k_s,v≤ Q_v y_s,v, ∀ s ∈ S, v ∈ V_s,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
y_s,v∈ℤ_≥ 0 ∀ s ∈ S, v ∈ V.
It uses a non-negative continuous decision variable x^k_s,v to
represent the volume of commodity k allocated to trailer type v
operating on a sort pair s, and an integer decision variable
y_s,v to determine the number of trailers of type v installed on
sort pair s. The objective (<ref>) minimizes the total cost of creating
loads. In the experiments, c_v = Q_v ∀ v ∈ V, i.e., the
model minimizes the total trailer capacity required to containerize
the total commodity volume in the problem instances. Constraints
(<ref>) ensure that the total volume of each commodity
is assigned to its compatible sort pairs. Constraints
(<ref>) ensure that the total volume on a sort pair
respects the installed trailer capacity on it. Constraints
(<ref>)-(<ref>) define the domain and range of
variables.
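To make the formulation concrete, the following is a minimal sketch of this model using the open-source PuLP library; it is our illustration rather than the implementation used in the paper (which relies on a commercial solver), and it assumes the instance data are plain Python dictionaries q, S_k, V_s, Q, and c, indexed by simple identifiers without spaces, standing for commodity volumes, compatible sort pairs, allowed trailer types, trailer capacities, and trailer costs.

import pulp

def solve_dlpp(K, S, S_k, V_s, q, Q, c):
    m = pulp.LpProblem("DLPP", pulp.LpMinimize)
    # y[s, v]: number of trailers of type v installed on sort pair s (integer)
    y = {(s, v): pulp.LpVariable(f"y_{s}_{v}", lowBound=0, cat="Integer")
         for s in S for v in V_s[s]}
    # x[k, s, v]: volume of commodity k allocated to type-v trailers on sort pair s
    x = {(k, s, v): pulp.LpVariable(f"x_{k}_{s}_{v}", lowBound=0)
         for k in K for s in S_k[k] for v in V_s[s]}
    # Objective: total trailer cost (the paper sets c[v] = Q[v], i.e. total capacity)
    m += pulp.lpSum(c[v] * y[s, v] for s in S for v in V_s[s])
    # Each commodity's volume is fully assigned to its compatible sort pairs
    for k in K:
        m += pulp.lpSum(x[k, s, v] for s in S_k[k] for v in V_s[s]) == q[k]
    # Volume on each sort pair and trailer type respects the installed capacity
    for s in S:
        for v in V_s[s]:
            m += pulp.lpSum(x[k, s, v] for k in K if s in S_k[k]) <= Q[v] * y[s, v]
    m.solve()
    return y, x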
§ GOAL-DIRECTED OPTIMIZATION
The optimization model of the DLPP has a large number of
symmetries. Figure <ref> depicts a simple instance with multiple optimal solutions that are operationally different from one another, yet they are equivalent from the perspective of Model <ref>, as they require the same number of trailers of the same type. This is because, in Model <ref>, commodities are indifferent to the
sort pairs they are assigned to, as the volume allocation decisions
(x-variables) do not incur any cost.
Such symmetries are undesirable for many reasons. Paramount among them
are the realities in the field: the model is intended to be used and
validated by planners. If small variations of inputs produce
fundamentally different solutions, planners are unlikely to trust the
model. Indeed, since the model is used multiple times a day, it is
important to ensure that the successive optimal solutions are as
consistent as possible with each other. Fortunately, in practice, a
reference plan is always available and the DLPP should ideally
produce optimal solutions that are as close as possible to the
reference plan.
This section explores how to refine the model presented earlier to
satisfy this requirement, and presents a Goal-Directed Optimization
(GDO) approach to the DLPP. It uses a reference plan
to eliminate symmetries and ensure that the solution is compatible
with the planner-in-the-loop reality in the field. The use of a
reference plan eliminates many symmetries but not all. To break more
symmetries, the GDO approach also adds a flow diversion cost
that captures the cost of using alternate paths instead of the primary
path. For instance, in the example depicted in Figure <ref>,
only the solution shown in Figure <ref> is optimal following our assumptions. The flow
diversion cost is chosen to be proportional to the distance between
the next alternate terminal and the destination of the commodity, as
there is incentive to move commodities as close as possible to their
destination. For example, suppose a commodity k is in
Atlanta and is destined for Chicago. Let the primary
next terminal be Louisville (with flow diversion cost 0),
alternate 1 be Nashville, and alternate 2 be
Memphis. As Nashville is closer to Chicago than
Memphis, the flow diversion cost of allocating volume to
alternate 1 is lower than that of alternate 2. As a result, the
GDO approach has at its disposal a reference plan γ, where
γ_s,v denotes the number of trailers of type v planned to
operate on sort pair s. It also leverages the flow diversion
cost d^k_s that denotes the cost of allocating a per-unit volume
of commodity k ∈ K to a compatible sort pair s ∈ S^k.
The GDO approach first solves Model <ref>
to obtain the optimal objective value Z^*. It then solves a second
MIP Model to bias the trailer decisions so that they are as close as
possible to the reference plan and minimize diversion costs.
The second-stage model is defined as follows:
Minimize_x,y ∑_s ∈ S ∑_v ∈ V_s | y_s,v - γ_s,v| + ϵ∑_k ∈ K ∑_s ∈ S_k ∑_v ∈ V_s d^k_s x^k_s,v
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_k ∈ K:s ∈ S^k x^k_s,v≤ Q_v (y_s,v), ∀ s ∈ S, v ∈ V_s,
∑_s ∈ S∑_v ∈ V_s c_v y_s,v≤ Z^*,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
y_s,v∈ℤ_≥ 0 ∀ s ∈ S, v ∈ V.
The objective function (<ref>) minimizes the weighted sum of the Hamming
distance of the trailer decisions from the reference plan γ and
the flow diversion costs. The weight ϵ for the flow diversion cost is sufficiently small such that the cost does not dominate over the Hamming distance term in the objective function. The purpose of the flow diversion cost in (<ref>) is to break the symmetry between solutions with the same Hamming distance; it biases the solution to have more volume allocated to primary sort pairs than alternate sort pairs. Constraints (<ref>),
(<ref>), (<ref>) and (<ref>) are
the same as in Model <ref>. Constraint
(<ref>) ensures that the optimal solution does not use more
trailer capacity than Z^*. Note that the objective function is
non-linear due to the Hamming distance term. It can be linearized by
replacing | y_s,v - γ_s,v| with new variables
w_s,v≥ 0 (s ∈ S, v ∈ V_s) and imposing the following
constraints
y_s,v - γ_s,v≤ w_s,v ∀ s ∈ S, v ∈ V_s,
γ_s,v - y_s,v≤ w_s,v ∀ s ∈ S, v ∈ V_s,
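For concreteness, the linearized second-stage model can be assembled with the Gurobi Python interface that is also used in the computational study. The snippet below is a minimal sketch rather than the authors' implementation; the data structures S, V, K, Sk, q, Q, c, gamma, d, Z_star, and eps are assumed to be supplied by the caller.

import gurobipy as gp
from gurobipy import GRB

def build_gdo_stage2(S, V, K, Sk, q, Q, c, gamma, d, Z_star, eps):
    # S: sort pairs, V[s]: compatible trailer types, K: commodities,
    # Sk[k]: compatible sort pairs, q[k]: volume, Q[v]: capacity, c[v]: cost,
    # gamma[s, v]: reference plan, d[k, s]: diversion cost, Z_star: stage-1 optimum.
    m = gp.Model("gdo_stage2")
    x = {(k, s, v): m.addVar(lb=0.0) for k in K for s in Sk[k] for v in V[s]}
    y = {(s, v): m.addVar(vtype=GRB.INTEGER, lb=0) for s in S for v in V[s]}
    w = {(s, v): m.addVar(lb=0.0) for s in S for v in V[s]}   # w = |y - gamma|

    for s in S:
        for v in V[s]:
            m.addConstr(y[s, v] - gamma[s, v] <= w[s, v])     # linearized Hamming distance
            m.addConstr(gamma[s, v] - y[s, v] <= w[s, v])
    for k in K:
        m.addConstr(gp.quicksum(x[k, s, v] for s in Sk[k] for v in V[s]) == q[k])
    for s in S:
        for v in V[s]:
            m.addConstr(gp.quicksum(x[k, s, v] for k in K if s in Sk[k]) <= Q[v] * y[s, v])
    # Do not exceed the optimal trailer cost of the first-stage model.
    m.addConstr(gp.quicksum(c[v] * y[s, v] for s in S for v in V[s]) <= Z_star)

    m.setObjective(
        gp.quicksum(w[s, v] for s in S for v in V[s])
        + eps * gp.quicksum(d[k, s] * x[k, s, v] for k in K for s in Sk[k] for v in V[s]),
        GRB.MINIMIZE)
    return m, x, y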
Figure <ref> illustrates the sensitivity of the
trailer decisions (y-variables) subject to increases in the total
commodity volume (∑_k ∈ Kq^k) (x-axis) for the two models:
Model <ref> (red plot) and the GDO approach (blue
plot). As the total commodity volume increases, Model
<ref> exhibits solutions where the trailer decisions
fluctuate dramatically between 1 and 6 trailers for sort pair 1,
and between 1 and 5 trailers for sort pair 2. However, when
using GDO, the trailer decisions are more consistent and vary
between 1 and 2 trailers on sort pair 1, and remain constant at 2 trailers on sort pair
2.
§ LEARNING-BASED OPTIMIZATION PROXIES
The GDO approach produces consistent solutions to the DLPP, but it is
too slow to be used with planners in the loop. This section proposes a
Machine Learning (ML) approach to the DLPP. Its goal is to move some
of the optimization burden offline and produce high-quality solutions
in real time. More precisely, the approach uses the concept of optimization proxies to produce high-quality solutions to an
optimization problem by learning its input/output mapping (see, for
instance,
(<cit.>)
for an overview of this concept and its applications).
The overall methodology underlying optimization proxies is depicted in Figure
<ref>. It consists of two stages,
* an offline stage where an ML model learns the input/output
mapping of the optimization problem;
* an online stage which is used in real time: it receives
an instance, applies the ML model to predict a (possibly infeasible) solution
and uses a repair procedure to deliver a feasible solution.
For the DLPP, the ML model learns the mapping between the (input)
commodity volumes and the (output) trailer decisions; in other words, given the
commodity volumes, the ML model predicts trailer decisions for every
sort pair. The trained ML model may sometimes underestimate
the number of trailers on some sort pairs when executed in real time. To circumvent this issue, the feasibility
restoration step projects the predicted trailer decisions back into the
feasible region; in addition, the feasibility restoration also
computes the volume allocation on the sort pairs. A key element in the
ML training is data augmentation that complements historical
data by generating realistic instances through input perturbations. The ML model formulation is introduced and discussed in more detail in what follows.
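The online stage can be summarized by a short sketch; the names below (online_inference, repair_fn) are illustrative placeholders, and the trained network is assumed to be any callable that maps commodity volumes to flattened trailer counts.

import numpy as np

def online_inference(predictor, p, mask, repair_fn):
    # predictor: trained model mapping volumes -> flattened trailer counts
    # p: commodity volumes of one instance; mask: 0/1 compatibility mask m[s, v]
    # repair_fn: MIP-based feasibility restoration described in the following sections
    y_pred = np.asarray(predictor(p), dtype=float).reshape(mask.shape)
    y_pred = np.maximum(np.rint(y_pred), 0) * mask   # round, enforce non-negativity, mask
    y_feasible, x_alloc = repair_fn(y_pred, p)       # project back into the feasible region
    return y_feasible, x_alloc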
§.§ The ML Model Formulation
This section defines a machine learning model f, parameterized by
θ, that maps the input parameters, i.e., the commodity volume,
to the optimal trailer decisions:
(<ref>)-(<ref>).
f_θ: ℝ_≥ 0^|K|⟶ℤ_≥ 0^|S| × |V|
𝐩⟼𝐲
The ML inputs are assumed to be taken from a distribution 𝒫
that captures the actual instances.
Given a dataset of input parameters {𝐩_i}_i ∈ N∼𝒫, where N is the set of instances, parametrization θ^* can be obtained by minimizing
the empirical risk shown in (<ref>), where
(<ref>) denotes the optimization problem
solved by Model <ref>, and l denotes the loss function
that measures the L1-distance of the predicted
(f_θ(𝐩)) and optimal (y^*) trailer decisions.
θMinimize 1/N∑_i∈ N l(f_θ(𝐩_i), 𝐲_i^*)
subject to (𝐱_i^*, 𝐲_i^*) = arg min_𝐱, 𝐲∈𝒞(𝐩_i) c(𝐱, 𝐲) ,
It is important to highlight that an ML model could be used to predict
commodity volume allocation on the sort pairs (x-variables) instead
of the trailer decisions (y-variables). This may seem to be a good
approach since, after predicting volume allocation, one can easily
recover the trailer decisions and hence a feasible solution, by
setting y_s,v=⌈∑_k ∈ K:s ∈ S^k
x^k_s,v/Q_v⌉ ∀ s ∈ S, v ∈ V_s.
However, this approach has some shortcomings. First, the output
dimension is significantly larger than the input dimension which makes
it very difficult to develop an effective ML model even for the smallest
instances. Second, recovering trailer decisions is very sensitive to
the predicted volume allocation decisions. Consider an example where
100 cubic volume is allocated to a sort pair which requires two
trailers, each with capacity 50 cubic volume, in the optimal solution.
If the ML model predicts the volume on the sort pair to be 100.5,
then the total number of trailers required is ⌈100.5/50⌉ = 3 which generates a poor solution in terms of the objective function value of Model <ref>.
Experimental results confirmed that it is beneficial to learn the
mapping from input parameters to the trailer decisions rather than the
volume decisions. The trailer decisions 𝐲∈ℤ_≥ 0^|S| × |V| are more aggregated than the
volume allocation decisions ℝ_≥ 0^|K| × |S|
× |V|. The benefits come from the significant reductions in
output dimensionality and variability. In addition, as presented in
section <ref>, once the trailer
decisions are known, restoring the feasibility of the solution is
relatively easy as the feasibility restoration MIP has a small number
of binary decision variables and therefore, it is easy to solve.
The ML model used in this paper is a deep neural network as illustrated in Figure
<ref>. It uses a Multi-Layer Perceptron (MLP),
where each dense layer is followed with a batch normalization (<cit.>),
a dropout (<cit.>), and a ReLU (Rectified Linear Unit) function.
It maps the input parameter 𝐩
to the flattened trailer decision 𝐲. The
last ReLU guarantees that the output of the neural network is
non-negative. The compatible trailer decisions 𝐲
are then generated by reshaping the flattened decision
𝐲 and masking it with the compatible
trailer mask 𝐦, where m_s, v = 1 indicates that
equipment type v ∈ V is compatible with sort pair s ∈ S. In the training
phase, the loss function is computed by measuring the distance of
predicted compatible trailer decision 𝐲 with the
optimal trailer decisions. Specifically, this work used smooth l_1 loss.
The loss is used to update the parameters of the MLP using stochastic gradient descent (<cit.>)
with back propagation (<cit.>). At inference time (i.e., in real time), the
compatible trailer decisions are rounded to an integer value.
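The architecture described above translates into a short PyTorch sketch; the layer sizes, dropout rate, and class name are illustrative choices, not the exact configuration tuned in the experiments.

import torch
import torch.nn as nn

class TrailerProxy(nn.Module):
    def __init__(self, n_commodities, n_sort_pairs, n_trailer_types,
                 hidden=256, n_layers=4, p_drop=0.1):
        super().__init__()
        layers, dim_in = [], n_commodities
        for _ in range(n_layers):
            layers += [nn.Linear(dim_in, hidden), nn.BatchNorm1d(hidden),
                       nn.Dropout(p_drop), nn.ReLU()]
            dim_in = hidden
        layers += [nn.Linear(dim_in, n_sort_pairs * n_trailer_types), nn.ReLU()]
        self.net = nn.Sequential(*layers)
        self.out_shape = (n_sort_pairs, n_trailer_types)

    def forward(self, p, mask):
        # p: (batch, |K|) commodity volumes; mask: (|S|, |V|) compatibility mask
        y = self.net(p).view(-1, *self.out_shape)
        return y * mask      # zero out incompatible (sort pair, trailer type) entries

def training_step(model, optimizer, p, y_star, mask):
    # Smooth-l1 loss between predicted and optimal trailer decisions, as in the text.
    optimizer.zero_grad()
    loss = nn.functional.smooth_l1_loss(model(p, mask), y_star)
    loss.backward()
    optimizer.step()
    return loss.item()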
§.§ MIP-based Feasibility Restoration
The proposed ML model predicts the number of trailers
y_s,v for each sort pair s ∈ S and equipment type v
∈ V_s. Let the total trailer capacity installed on each sort pair
s ∈ S be Λ_s = ∑_v ∈ V_sQ_v
(y_s,v). The system of equations
∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s, ∀ s ∈ S,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
is then used to determine the volume of every commodity k ∈ K
allocated to its compatible sort pairs. However, it is possible that
some of the sort pairs do not have sufficient trailer capacity because
the ML model may underestimate the capacity. In that case,
(<ref>) is infeasible. The following linear program
zMinimize ∑_s ∈ S z_s
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v - z_s ≤Λ_s, ∀ s ∈ S,
x^k_s,v,z_s ≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
can be used to determine the sort pairs with trailer capacity violations.
Its objective function (<ref>) minimizes the capacity
violations on the sort pairs. Constraints (<ref>)
ensure that total volume of every commodity is assigned to compatible
sort pairs. Constraints (<ref>) determine the sort
pair capacity violations. Constraints (<ref>) define the
domain and range of variables. When Model <ref>
has an optimal objective value equal to 0, it has recovered a
feasible solution to Model <ref>. Otherwise,
additional trailer capacity is required on sort pairs with capacity
violations.
This paper proposes a two-stage MIP-based feasibility restoration
process. In the first stage, Model <ref> is
solved to obtain an optimal solution z^*. Let the set of sort pairs
with trailer capacity violation be S = {s ∈ S: z^*_s >
0}. The feasibility restoration then identifies the cheapest
equipment v to serve the excess volume on sort pair s ∈S. The extra trailer capacity is given by ξ_s =
( ⌈z_s/Q_v⌉ * Q_v ) and the option to add the extra capacity to sort pair s ∈S is
added using a binary decision variable. The second stage
solves the following MIP model:
uMinimize ∑_s ∈S u_s ξ_s
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s + u_s ξ_s, ∀ s ∈S,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s, ∀ s ∈ S\S,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
u_s ∈{0,1} ∀ s ∈S,
The objective function in (<ref>) minimizes the total
trailer capacity added on each sort pair. Constraints
(<ref>) ensure that the commodity volume is
assigned to the compatible sort pairs. Constraints
(<ref>) and (<ref>) ensure
that commodity volume allocated to each sort pair respects the trailer
capacity. Constraints (<ref>) and
(<ref>) define domain and range of variables. The
number of binary variables in this model is at most the number of sort
pairs in the instance, i.e. | S |. The main difference
between Model <ref> and Model
<ref> is that Model <ref> uses
binary variables u_s instead of continuous variable z_s, to denote
the option of adding extra trailer capacity ξ_s on sort pairs s
∈S. When u_s = 1, extra trailer capacity is added to
sort pair s.
After solving Model (<ref>), adding ξ_s
capacity on every sort pair s ∈S yields a feasible
solution to Model (<ref>). However, the goal is to
use Model (<ref>) to obtain a better feasible
solution. Consider an example with a set of commodities all of which
can be allocated to any of the two sort pairs s_1 and s_2 and
trailer with capacity 2 units. Suppose the optimal solution of Model
<ref> is z_s_1=z_s_2=1. In this case, a
feasible solution to Model <ref> can be recovered by
adding two trailers, one on each sort pair. However, Model
(<ref>) (which has two binary variables) yields a
solution with only one trailer on any one of the two sort pairs. Algorithm 1 provides a summary of the feasibility restoration procedure.
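The two stages can be sketched with the Gurobi Python interface as follows; this is an illustration under assumed data structures (Lambda for the installed capacities implied by the predicted trailers, Sk for commodity compatibility), not the production implementation.

import math
import gurobipy as gp
from gurobipy import GRB

def restore_feasibility(S, V, K, Sk, q, Q, c, Lambda):
    # Stage 1: LP that detects sort pairs whose installed capacity Lambda[s] is violated.
    lp = gp.Model("violations")
    x = {(k, s, v): lp.addVar(lb=0.0) for k in K for s in Sk[k] for v in V[s]}
    z = {s: lp.addVar(lb=0.0) for s in S}
    for k in K:
        lp.addConstr(gp.quicksum(x[k, s, v] for s in Sk[k] for v in V[s]) == q[k])
    for s in S:
        lp.addConstr(gp.quicksum(x[k, s, v] for k in K if s in Sk[k] for v in V[s])
                     - z[s] <= Lambda[s])
    lp.setObjective(gp.quicksum(z.values()), GRB.MINIMIZE)
    lp.optimize()

    violated = {s for s in S if z[s].X > 1e-6}
    if not violated:
        return {}                      # the predicted plan is already feasible
    xi = {}                            # extra capacity from the cheapest trailer type
    for s in violated:
        v_min = min(V[s], key=lambda v: c[v])
        xi[s] = math.ceil(z[s].X / Q[v_min]) * Q[v_min]

    # Stage 2: MIP that decides which violated sort pairs actually receive extra capacity.
    mip = gp.Model("restore")
    x2 = {(k, s, v): mip.addVar(lb=0.0) for k in K for s in Sk[k] for v in V[s]}
    u = {s: mip.addVar(vtype=GRB.BINARY) for s in violated}
    for k in K:
        mip.addConstr(gp.quicksum(x2[k, s, v] for s in Sk[k] for v in V[s]) == q[k])
    for s in S:
        cap = Lambda[s] + (u[s] * xi[s] if s in violated else 0)
        mip.addConstr(gp.quicksum(x2[k, s, v] for k in K if s in Sk[k] for v in V[s]) <= cap)
    mip.setObjective(gp.quicksum(u[s] * xi[s] for s in violated), GRB.MINIMIZE)
    mip.optimize()
    return {s: xi[s] for s in violated if u[s].X > 0.5}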
§.§ Value of Symmetry-Breaking Data Generation for Learning
The optimization proxies are trained using the solutions provided by
the GDO which uses the same reference plan for all instances of a given terminal. As
a result, the proxies are consistent by design and do not rely on a
reference plan. The GDO approach is not only critical for
environments with planners in the loop, but it also has an additional
benefit: it makes the learning problem easier. This section provides
theoretical insights about why the data generation using GDO results
in better function approximation than data generation from Model
(<ref>) alone.
Observe that the solution trajectory associated with different
instances can often be effectively approximated by piecewise linear
functions, as depicted in Figure <ref>. This
approximation becomes exact in the case of linear programs and mixed
integer programs when the input reflects incremental changes in the
objective coefficients or right-hand sides of the constraints. This
paper utilizes ReLU-based neural networks to approximate the solutions
of optimization problems. These neural networks are capable of
capturing piecewise linear functions, which makes them well-suited for
this purpose. However, the ability of representing a target piecewise
linear function accurately depends on the model capacity. As the
complexity of the function grows with more pieces, a larger model is
required to obtain a high-quality approximation.
(Model Capacity) (<cit.>)
Let f: ℝ^d →ℝ be a piecewise linear function with p pieces.
If f is represented by a ReLU network with depth k+1, then it must have size at least 1/2kp^1/k-1. Conversely, any piecewise linear function f that is represented by a ReLU network of depth k+1 and size at most s, can have at most (2s/k)^k pieces.
Due to the symmetry in optimal solutions of Model
(<ref>), as shown in
Figure <ref>, the solution trajectory varies
dramatically. Theorem <ref> states that the approximation
of a more volatile solution trajectory (i.e., a piecewise linear
function with more pieces) requires a deep neural network with greater
capacity, which makes the learning task more challenging. In other
words, given a fixed-size ReLU network, higher variability of the
solution trajectory typically results in higher approximation
errors. These errors are bounded by the following theorem.
(Approximation Error) (<cit.>)
Suppose a piecewise linear function f_p', with p' pieces each of width h_k for k ∈ [p'], is used to approximate a piecewise linear f_p with p pieces, where p' ≤ p. Then the approximation error
‖ f_p - f_p'‖_q ≤1/2h^2_max∑_1≤ k ≤ p|L_k+1 - L_k|,
holds where L_k is the slope of f_p on piece k and h_max is the maximum width of all pieces.
Theorem <ref> relates the approximation error of
a piecewise linear function with the total variation of its slopes.
It implies that the data generated using GDO (which exhibits lower
sensitivity than the data from Model (<ref>))
should facilitate learning and result in lower approximation errors.
§ GREEDY HEURISTIC (GH)
This section proposes a greedy heuristic to construct feasible
solutions for Model (<ref>) and benchmark the quality
of the solution obtained from optimization proxies. This heuristic
iteratively solves linear programs (LP) until all the y-variables
are integers, i.e., they satisfy the integrality tolerance
(10^-5). In each iteration, the algorithm identifies the fractional
variable with the minimum (⌈y_s,v⌉-y_s,v) value, updates the lower bound of that variable y_s,v
to ⌈y_s,v⌉, and re-solves the LP as
shown in Algorithm <ref>. The main idea is that
for a given sort pair s ∈ S and trailer type v ∈ V_s, if
y_s,v has a fractional value very close to an integer ⌈y_s,v⌉, then, this indicates that there is
enough commodity volume to have at least ⌈y_s,v⌉ trailers on the sort pair. GH
greedily adjusts the lower bound of a y-variable in each
iteration till all y-variables can be labelled as integers, in which
case a feasible solution to Model (<ref>) has been
found.
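A minimal sketch of GH is shown below; build_lp_relaxation is an assumed helper that returns the LP relaxation of Model (<ref>) together with its y-variables.

import math

TOL = 1e-5   # integrality tolerance used in the text

def greedy_heuristic(build_lp_relaxation):
    model, y = build_lp_relaxation()   # y: dict {(s, v): Gurobi variable}
    while True:
        model.optimize()
        frac = {key: var.X for key, var in y.items()
                if abs(var.X - round(var.X)) > TOL}
        if not frac:                   # all y-values are integral: feasible load plan found
            return {key: round(var.X) for key, var in y.items()}
        # Pick the fractional variable with the smallest gap to its ceiling ...
        key = min(frac, key=lambda k: math.ceil(frac[k]) - frac[k])
        # ... and force at least that many trailers on the corresponding sort pair.
        y[key].lb = math.ceil(frac[key])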
§ COMPUTATIONAL STUDY
This section reports a series of experiments conducted on real-life
instances provided by our industry partner. Section <ref>
presents statistics for the problem instances. Section
<ref> discusses the experimental setup for the
optimization models and proxies. Section
<ref> evaluates the computational
performance of the optimization proxies against the greedy heuristic
(GH) and the optimization models (Model (<ref>) and GDO). Section <ref> evaluates the benefits
of GDO for learning.
§.§ Instances
The experiments are based on industrial instances for three different
terminals in the service network of our industry partner: medium (M),
large (L), and extra-large(XL). Each category
has a reference plan for a
terminal on a particular day as provided by our industry partner. Table <ref> reports the
statistics of the instances: #Arcs denoting the total
number of unique outgoing sort pairs or arcs from the terminal,
#Commodities denoting the number of commodities that are sorted at the terminal and loaded into
outbound trailers (rounded to nearest
multiple of 1,000), and #Loads denoting the number of planned loads in the reference load plan for the
corresponding terminals (rounded to the nearest multiple of 50). Note that, in addition to the planned loads,
small package companies typically operate empty trailers on
the outbound sort pairs for trailer repositioning. This study only
considers trailers that are filled with commodity volume and do not
include empty trailer capacity.
It is worth highlighting that the XL
instance operates more volume and capacity than the M and L instances
combined. Table <ref> reports some statistics for Model
(<ref>) for the three instances: #Integer-Vars and
#Continuous-Vars denoting the number of integer and continuous
decision variables, respectively, and #Constraints denoting the total number
of constraints.
§.§ Experimental Setup
Parameters for GDO
The cost of assigning commodity k to a sort pair s ∈ S^k
(denoted by d^k_s) is defined as
d^k_s =
0, if s is primary flow path for commodity k
(α^k_s + 10*β^k ) otherwise
β_k =
1, if commodity k belongs to one-day service class
2, if commodity k belongs to two-day service class
3, if commodity k belongs to three-day service class
4, otherwise
ϵ = 1/(max_k ∈ K, s ∈ S^k(α^k_s + 10*β^k) )∑_k ∈ K q^k
where α^k_s denotes the distance between the alternate next
terminal and the destination of commodity k ∈ K for sort s, and parameter β_k depends on the commodity service level.
Recall that a commodity k ∈ K is defined as all packages with the
same destination and service class. The term α^k_s ensures that
two commodities with different destinations have different flow
diversion cost. However, two commodities with different service class can
have the same destination. β^k ensures that such commodities have different flow diversion cost for the same
destination. The weight for the flow diversion cost is defined in (<ref>).
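For concreteness, the two ingredients of the diversion cost can be written as small helpers; the function names and argument conventions below are illustrative.

def service_class_beta(service_days):
    # beta_k: 1, 2, or 3 for one-, two-, or three-day service, and 4 otherwise.
    return service_days if service_days in (1, 2, 3) else 4

def diversion_cost(alpha_ks, beta_k, is_primary):
    # d^k_s: zero on the primary flow path, alpha^k_s + 10*beta^k on an alternate path.
    return 0.0 if is_primary else alpha_ks + 10.0 * beta_k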
Data Generation for ML Model
The dataset is generated by perturbing the input parameters of
real-life instances provided by the industry partner with up to
20,000 commodities. Denote by 𝐩^ref the volume of
different commodities in a given reference plan. The DLPP instances are
generated by perturbing this reference commodity volume. Namely, for
instance i, 𝐩^(i) = γ^(i)×η^(i)×𝐩^ref, where γ^(i)∈ℝ denotes a
global scaling factor and η∈ℝ^|K| is the commodity
level multiplicative white noise. γ is sampled from a uniform
distribution U[80%, 120%], and for every commodity η is
sampled from a normal distribution with mean equals to 1 and standard deviation
of 0.05. For every category, 10,000 instances are generated, and a commercial solver is used to solve the GDO model for each instance. The dataset of
10,000 instances for each category is then split as follows: 80%
for the training set, 10% for the validation set, and 10% for
the test set.
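A minimal sketch of this perturbation scheme is given below; p_ref denotes the vector of reference commodity volumes and the seed is arbitrary.

import numpy as np

def generate_instances(p_ref, n_instances=10_000, seed=0):
    # Perturb the reference volumes: global scaling gamma ~ U[0.8, 1.2] and
    # per-commodity multiplicative noise eta ~ N(1, 0.05).
    rng = np.random.default_rng(seed)
    instances = []
    for _ in range(n_instances):
        gamma = rng.uniform(0.8, 1.2)
        eta = rng.normal(loc=1.0, scale=0.05, size=p_ref.shape)
        instances.append(gamma * eta * p_ref)
    return np.stack(instances)

def split_80_10_10(data):
    # 80% / 10% / 10% split into training, validation, and test sets.
    n = len(data)
    return data[: int(0.8 * n)], data[int(0.8 * n): int(0.9 * n)], data[int(0.9 * n):]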
Performance Metrics
The performance metrics in this study are designed to compare the
total trailer cost and the consistency of the
solutions generated by the optimization proxies against the total
trailer cost from Model (<ref>) and then the consistency
of solution from Model (<ref>) of the GDO approach. Given
an instance 𝐩 with optimal trailer decision 𝐲^*
and a feasible trailer decision 𝐲̂, the optimality gap
is defined as
Gap = (Ẑ - Z^*)/|Z^*|,
where Z^* is the optimal trailer cost of Model
(<ref>), and Ẑ is the trailer cost computed
from 𝐲̂. Recall that the total trailer cost does not
increase in Model (<ref>) of the GDO approach due to constraint
(<ref>). If Model (<ref>) cannot be solved
to optimality in 30 minutes, then the best lower bound obtained from
the solver run is used to compute the optimality gap instead of Z^*.
This paper proposes two metrics to quantify the consistency. The
first one is a normalized distance (Δ) between the optimized load plan 𝐲̂ and the reference load plan
𝐲, using shifted
geometric means as given by
Δ_s,v =
|ŷ_s,v - y_s,v| if y_s,v = 0
|ŷ_s,v - y_s,v|/y_s,v, otherwise ∀ s ∈ S, v ∈ V_s
Δ = exp(1/|S||V|∑_s∈ S∑_v ∈ V_slog (Δ_s,v + 0.01) ) - 1%.
From a planner perspective, this metric captures the deviation of the
optimized load plan with respect to the reference load plan. As mentioned
in Section <ref>, load plans that are as
close as possible to the reference plan are highly desirable.
The second metric is the total variation of the set of trailer decisions
across a set of N instances (for each terminal). For simplicity,
instances are ordered such that ∑_k ∈ K q^k_i+1≥∑_k
∈ K q^k_i ∀ i ∈{1,2,⋯,N-1}. The goal is to
analyze the variation in trailer decisions on sort pairs when the total
commodity volume is incrementally increased from ∑_k ∈ K
q^k_1 to ∑_k ∈ K q^k_N. Let {𝐲_i}^N_i=1
denote the set of trailer decisions of N instances. The total
variation is defined as:
TV({𝐲_i}^N_i=1) = ∑_i=1^N-1𝐲_i+1 - 𝐲_i_p,
where p=2.
This metric captures the sensitivity of the models, i.e., the impact of changes in total commodity volume on the trailer decisions of different sort pairs. Lower total variation implies that the trailer decisions are less sensitive to changes in total commodity volume. Planners are more amenable to such solutions because fewer (but effective) load plan modifications reduce the solution evaluation effort and is also easier to execute in practice.
The computational efficiency of different models is measured by the
training time of optimization proxies including the data-generation
time and the inference time. Unless specified otherwise, the average
metrics on the test dataset are reported in shifted geometric means:
μ_s(x_1, …, x_n) = exp(1/n∑_i log (x_i + s) ) - s,
where the shift is set as 0.01 for the optimality gap and normalized
distance, 1 second for the inference/solving time, and 1 cube for
the distance between the optimized load plan and the reference load plan.
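These metrics translate into a few lines of NumPy; the sketch below is illustrative and assumes trailer decisions are stored as arrays indexed by (sort pair, trailer type).

import numpy as np

def shifted_geometric_mean(values, shift):
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values + shift))) - shift)

def optimality_gap(Z_hat, Z_star):
    return (Z_hat - Z_star) / abs(Z_star)

def normalized_distance(y_hat, y_ref):
    # Delta: shifted geometric mean (shift 0.01) of relative trailer-count deviations.
    y_hat, y_ref = np.asarray(y_hat, float), np.asarray(y_ref, float)
    diff = np.abs(y_hat - y_ref)
    denom = np.where(y_ref == 0, 1.0, y_ref)      # avoid dividing by zero
    delta = np.where(y_ref == 0, diff, diff / denom)
    return shifted_geometric_mean(delta.ravel(), shift=0.01)

def total_variation(y_list, p=2):
    # TV of trailer decisions across instances ordered by increasing total volume.
    return float(sum(np.linalg.norm((np.asarray(b) - np.asarray(a)).ravel(), ord=p)
                     for a, b in zip(y_list[:-1], y_list[1:])))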
Implementation Details
All optimization problems are formulated using the Gurobi Python interface,
and solved with Gurobi 9.5 (<cit.>) with 8 CPU threads and
default parameter settings, except for MIPFocus which is set
to a value of 3. All deep learning models are implemented using
PyTorch (<cit.>) and trained using the Adam
optimizer (<cit.>). The ML models are multiple layer
perceptron and are hyperparameter-tuned using a grid search with
learning rate in {10^-1, 10^-2}, number of layers in {3, 4,
5}, and hidden dimension in {128, 256}. For each system, the best
model is selected on the validation set and the performances on the
test set are reported. Experiments are conducted on dual Intel Xeon
[email protected] machines running Linux, on the PACE Phoenix cluster
(<cit.>). The training of ML models is performed on Tesla
V100-PCIE GPUs with 16GBs HBM2 RAM.
§.§ Computational Performance of the Optimization Proxies
This section presents numerical experiments used to assess the
performance of the proposed optimization proxies (Proxies) against the
optimization models (GDO) and the greedy heuristic (GH).
Optimality Gap
Table <ref> presents the optimality gaps of various
approaches, including the results of Model (<ref>)
under various time constraints. In the table, the columns under “Gap
of Model (<ref>)” denote the optimality gaps of the
model under various time limits. Similarly, columns Gap for GH
and Proxies denote optimality gaps for GH and the
optimization proxies. In addition, columns Time(s) denote the solving
times for GH and Proxies.
Recall that Model (<ref>) produces solutions that
exhibit considerable variability when the total commodity volume is
perturbed, as detailed in Tables <ref> and <ref>. As such, it is unlikely to be practical in
scenarios with planners in the loop. Hence, the table compares the
optimization proxies and the heuristics GH with an “idealized”
benchmark. With this caveat in place, observe the performance of the optimization
proxies under tight time constraints. Proxies generate solutions with
low optimality gaps and may be up to 10 to 50 times faster than GH,
and around 10 times faster than Model (<ref>) solved
with Gurobi. Second, although Model (<ref>)
efficiently produces solutions with low optimality gaps, closing the
optimality gap proves to be a significant challenge due to the poor
LP relaxation. The performance of GH is also impeded by the
inefficiencies of the LP relaxation, as it solves the LP relaxations
over many iterations; it takes the GH around 30 iterations for
terminal M, 200 iterations for terminal L, and more than 1000
iterations for terminal XL to generate a feasible solution.
Consistency
Tables <ref> and <ref> report
the consistency of solutions obtained from different models in terms
of the normalized distance to the reference load plan and the total
variation of the generated solutions. As GDO requires running Model
(<ref>) and Model (<ref>) sequentially,
these experiments set the same time limits for the two stages. For
example, if a time limit of 30 seconds is set, GDO runs Model
(<ref>) for 30 seconds and subsequently runs Model
(<ref>) using the best upper bound obtained from Model
(<ref>) for another 30 seconds.
The high-level result is that proxies are ideally suited to
produce consistent plans. Table <ref> shows
that the proxies accurately predict, in a few seconds, the results
produced by GDO after an hour. Furthermore, Table <ref> shows that proxies produce solutions that have at
least an order of magnitude smaller total variations in trailer
decisions than both GDO and GH. Proxies produce load plans that
exhibit great stability with changing total commodity volume.
The fact that proxies improve the consistency of the GDO plans is
especially interesting: it means that the optimization proxies, by
virtue of the learning step, avoid oscillations present in the GDO
approach. Of course, it does so at a small loss in objective value
(if, for instance, the GDO model is allowed a minute to run instead of
the 2.5 seconds of the optimization proxies). But the consistency benefits
are substantial as shown in Table <ref>. The
proxies also provide dramatic improvements over the GH heuristic. Note
also that GDO itself brings significant improvements over Model
(<ref>).
In Table <ref>, observe that the normalized
distance for the solution from GDO for the large (L) instance first increases from
0.45% to 11.40%, and then follows the expected
decreasing trend with increase in computational time limit. Recall
that GDO first minimizes the total trailer capacity required in Model
(<ref>) and then solves Model (<ref>) to
minimize the Hamming distance of the solution (and the flow diversion
cost) from the reference load plan. As shown in Table <ref>,
the feasible solution obtained from Model <ref> is of
poor quality and is closer to the reference load plan in terms of the
number of trailers. Hence, the normalized distance value is small. As
the computational time limit increases, the feasible solution obtained
from Model (<ref>) exhibits a reduced total trailer
capacity compared to the reference load plan. Hence, the normalized
distance increases as the model tries to find more cost effective solutions. With further increases in computational time, the
normalized distances decrease as the solver finds a better solution
with a smaller Hamming distance using Model <ref>.
It is also interesting to observe that the total trailer capacity
predicted by the ML model, i.e., the capacity provided by all the
trailers predicted to be needed by the ML model, is very close to a
feasible solution. Only a few trailers must be added to recover a
feasible solution. Figure <ref> shows the
distribution of the predicted trailer capacity as a percentage of the
total trailer capacity in the feasible solution generated by the
proxies for each type of terminal. The results show that more than
98% of the trailer capacity is predicted correctly and less than
2% comes from the feasibility restoration step of Algorithm <ref>. More
accurate predictions might even result in a feasibility restoration
model that has fewer decision variables, hence, requiring less
computational time to produce a feasible solution. Appendix <ref> shows that one of the key benefits of the optimization proxies is that they replace a model with a large number of integer decision variables by a prediction model, and only require solving a relatively simple feasibility restoration model with a small number of binary variables.
§.§ Value of Symmetry-breaking Data Generation
As discussed in section <ref>, the optimal (or near-optimal)
trailer decisions of Model (<ref>) are very sensitive
to changes in total commodity volume due to the presence of symmetries
in the model and the randomized nature of MIP solvers. The solutions
to Model (<ref>) are reported in red in the
plots of Figure <ref>, which illustrates this
behavior. This is not desirable in environments with
planners-in-the-loop, where similar solutions are expected for similar
instances. The GDO approach is much more consistent and its solutions
are shown in orange in the plots of Figure <ref>. The ML component of the optimization proxy uses GDO as
the expert to imitate and learn solution patterns from. As
shown in blue in the plots of Figure <ref>,
the ML model is effective in producing solutions that are close to the
solutions generated by GDO. It should be highlighted that the GDO
approach has two benefits. First, it generates consistent solutions that
are amenable to planner-in-the-loop environments. Second, it makes the
learning problem much more tractable. Designing an ML model for
(<ref>) is really challenging due to the high
sensitivities in small changes: typically an ML model for learning
Model (<ref>) would return an average value.
§ BENEFITS OF DYNAMIC LOAD PLANNING, OPTIMIZATION, AND LEARNING
This section discusses the benefits of the dynamic load planning approach,
optimization, and learning. The load planning methodology studied in
this paper is based on the concepts of primary and alternate flow
paths. With the availability of optimization models, it is possible to
evaluate the benefits of this approach for load consolidation, at
least from a local perspective.
The results in the paper also make it possible to evaluate the
benefits of optimization compared to human planners. During
operations, planners typically assign commodities to the primary flow
paths. If there is no capacity available on the primary flow path,
then planners allocate the remaining volume on the first alternate
flow path and, if there is no capacity on the first alternate, they
turn to the second alternate flow path, and so on. Observe that this
is a greedy strategy of loading commodity volume on trailers, and
hence, it is myopic in nature. A comparison between such a greedy approach
and the optimization models help assess the value of optimization. Of
course, the optimization models are too slow to be used with planners
in the loop. The optimization proxies proposed in the paper are the
technology enabler for translating the benefits of optimization into
practice.
The first results in this section aim at quantifying the value of a
network with alternate flow paths relative to a network with primary
flow options only. Figure <ref> presents some characteristic
of the networks studied in this paper: it shows the distribution of
the number of commodities with a specific number of alternate flow
paths for each instance. It highlights that the network has some
significant flexibility for many of its commodities.
Figure <ref> presents the benefits of the load
planning methodology. It compares the variation in trailer cubic
capacity required to containerize the total commodity volume
(blue curve) and the total volume allocated to alternate flow
paths (green curve) across four different load plans: Primary
Only, Reference Plan, 1-Alt and All-Alt for
the three instance categories. In the Primary Only plan, each
commodity can be assigned only to its primary flow path. The
Reference Plan, referred to as the P-Plan, is the reference load plan from our industry
partner. Note that in the reference plan each commodity can use any number
of compatible alternate flow paths. In the 1-Alt plan, each
commodity can be assigned to either its primary path or the
cheapest alternate path. In the All-Alt plan, each commodity
can be assigned to all the available paths, i.e. splitting is allowed. Observe that the curves are
on different scales: the left scale for the blue curve and the right
scale for the green curve. The P-Plan is produced by the
planners using the greedy approach proposed earlier.
Figure <ref> demonstrates a consistent trend in
cubic capacity required in the four different load plans: the capacity
monotonically decreases and the decreases are significant. Allowing
splittability of commodity volume across primary and alternate flow
paths improves trailer consolidation. These benefits are already
apparent in the P-Plan of the planners, despite the fact that
this is a greedy approach. The optimization model with a single
alternate flow path, i.e., the 1-Alt plan, brings another
step change in consolidation, highlighting the benefits of
optimization. This benefit stems from the fact that a large number of
commodities have at least one alternate flow path in all instances
(see Figure <ref>). Note also that the 1-Alt load
plan requires significantly smaller total trailer capacity than the
P-Plan, although the P-Plan has the flexibility of
using any number of alternate flow paths. The All-Alt plan
brings further benefits but they are rather incremental. Part of the
reasons comes from the fact that a relatively small fraction of the
commodities have more than one alternate flow path. It would be
interesting to study a network with more flexibility as this may bring
further load consolidation benefits.
There is an interesting phenomenon that appears in the medium-sized instance M: the volume
assigned on the alternate flow paths decreased when moving from
1-Alt to an All-Alt plan. This comes from the fact
that this instance has many commodities, with a smaller volume, that
have new alternate flow paths options available in the
All-Alt setting. As a result, commodities with larger volume
are allocated to their primary flow path (as the flow diversion cost
is proportional to the total commodity volume assigned to alternate
flow paths) and the commodities with smaller volume can be allocated
to the alternate flow path that is the primary for the commodities
with larger volume (and not the cheapest alternate path of the
1-Alt setting). Hence, the total volume assigned to the
alternate flow paths reduces, although the total number of commodities
that use alternate flow paths increases.
Figure <ref> compares the percentage of the total
commodity volume that is assigned to the alternate flow paths in the
P-Plan and the All-Alt plan. It is undesirable to
allocate a major proportion of the volume to the alternate flow paths
because the downstream buildings may not be better equipped to handle
or process the large inbound volume. Observe that, on average across
all the instances, the All-Alt plan (resp. P-Plan)
allocates around 17% (resp. 9%) commodity volume on the
alternate flow paths. The All-Alt plan reduces the total
trailer capacity by roughly 12%-15% relative to the
P-Plan. For the XL instance, there is a significant gap
between the P-Plan and All-Alt plan statistics
because most of the commodities in the P-Plan are allocated
to the primary flow paths. This is why the total commodity volume
allocated to the alternate flow paths in the P-Plan and
the Primary Only have a small difference; see Figure
<ref> for XL category.
These results show that optimization proxies can bring substantial
benefits in practice. They provide, in real time, significant
improvements over the existing planning process. Moreover, by virtue
of their training mimicking the GDO optimization, that makes sure that
plans evolve smoothly during the planning process: small changes in
inputs will not result in large changes in the proposed solutions.
These results are eminently practical. One of the challenges
in the operational setting is the need for additional trailers when
the total commodity volume increases. Planners can acquire these
trailers either through empty trailer repositioning or by engaging in
short-term trailer leasing with local companies. Conversely, if the
commodity volume decreases, planners are left with a plan that has low
trailer utilization. The optimization proxies address this issue
directly. Planners can also use the proposed optimization proxies to obtain
recommendations for load plan adjustment in the event of a disruption
(due to uncertainty in commodity volume), even for the largest
terminal, within a matter of seconds. Furthermore, the recommendations
from the optimization proxies are consistent with existing load plans,
which makes it easy for the planners to evaluate and implement the
suggestions. Finally, new terminals in the service network often do
not have dedicated planners to develop load plans and extra capacity
is built in the system to handle the commodity volume in the
worst-case scenario. Optimization proxies can be used as a decision
support tool at such terminals.
§ CONCLUSIONS AND FUTURE WORK
This paper studies the Dynamic Load Planning Problem (DLPP) that
considers both load and flow planning challenges jointly in order to adjust loads and flows
as the demand forecast keeps changing over time before the day of
operations. The paper is motivated by the need of a
decision-support tool to advise planners making these decisions at
terminals across the network. The paper formulates the problem as a
MIP and shows that it admits many symmetries. As a result, the
optimization solver may return fundamentally different solutions to
closely related problems (i.e., DLPPs with slightly different inputs),
confusing planners and reducing trust in optimization. To remedy this
limitation, the paper proposes a Goal-Directed Optimization (GDO) that
eliminates those symmetries by generating optimal solutions staying
close to a reference plan. The paper also proposes an optimization
proxy, combining learning and optimization, to provide
high-quality and consistent load plans. An extensive computational
study on industrial instances shows that the optimization proxy is
around 10 times faster than the commercial solver in obtaining the
same quality solutions and orders of magnitude faster for generating
solutions that are consistent with each other. The proposed approach
also highlights the benefits of the DLPP for load consolidation, and
the significant savings from the combination of machine learning and
optimization.
This research is the first stage of a multi-stage project with our
industry partner (a large parcel carrier) for solving load planning
problems. Future research will extend the proposed approach to clusters of
terminals, taking into account their capacities for processing
commodities. The resulting problem thus requires determining both
inbound and outbound planning decisions at each terminal, which
significantly complicates the optimization and learning models.
§ ACKNOWLEDGEMENT
This research was partly supported by the NSF AI Institute for Advances in Optimization (Award 2112533).
§ APPENDIX
§.§ Complexity Results
Model <ref> is difficult to solve because in addition to determining the right combination of trailer types to contain volume on each arc, we need to determine the right splits of commodity volume on the given set of compatible arcs. We will analyze the complexity of Model <ref> using the special cases described below.
Case 1: There is only one trailer type available at the terminal, i.e., | V_s | = 1 ∀ s ∈ S. Each commodity k ∈ K is compatible with exactly one sort pair s_k, i.e., S^k = {s_k} ∀ k ∈ K
Case 2: There is only one trailer type available at the terminal, i.e., | V_s | = 1 ∀ s ∈ S. Each commodity k ∈ K is compatible with all sort pairs, i.e., S^k = S ∀ k ∈ K
Cases 1 and 2 are polynomial time solvable
In Case 1, the volume of each commodity k is assigned to its only compatible sort pair, s_k, i.e. x^k_s_k = q^k. Then, the optimal solution has y_s = ⌈∑_k ∈ K: s ∈ S^k x^k_s/Q⌉ = ⌈∑_k ∈ K: s ∈ S^k q^k/Q⌉ ∀ s ∈ S.
In Case 2, the optimal solution is to assign the volume of all commodities on any sort pair s ∈ S and set x^k_s = q^k ∀ k ∈ K, y_s = ⌈∑_k ∈ K q^k/Q⌉, y_s' = 0 ∀ s' ∈ S, s' ≠ s.
Case 3: Same as Case 1, but with more than one trailer type available at the terminal
Case 4: Same as Case 2, but with more than one trailer type available at the terminal
Cases 3 and 4 are weakly NP-Hard
In the optimal solution in Case 3 the volume of each commodity k is assigned to its only compatible sort pair s_k. Thus, it remains to decide the optimal combination of trailer types required to containerize the volume on every sort pair. This is the minimum knapsack problem (see <cit.> for the problem definition) for each sort pair (that has more than one trailer type) as shown in <ref> which is known to be weakly NP-Hard.
For every s ∈ S: yMinimize ∑_v ∈ V_s c_v y_s,v
subject to ∑_k ∈ K: s ∈ S^k q^k≤∑_v ∈ V_sQ_v (y_s,v),
y_s,v∈ℤ_≥ 0 ∀ v ∈ V_s
Similarly, for Case 4 there exists an optimal solution in which the volume of all commodities is assigned to one sort pair s^* ∈ S, i.e. x^k_s^* = q^k ∀ k ∈ K and it remains to solve a minimum knapsack problem for the sort pair s^* due to which Case 4 is weakly NP-Hard.
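For illustration only, the per-sort-pair subproblem in Case 3 (choose the cheapest mix of trailer types whose capacity covers the assigned volume) can be solved by a textbook covering-knapsack dynamic program when volumes and capacities are integers; the sketch below is not part of the complexity argument itself.

def min_cost_trailer_mix(volume, capacities, costs):
    # Cheapest multiset of trailers whose total capacity covers `volume` (all integers).
    INF = float("inf")
    best = [0.0] + [INF] * volume            # best[v] = minimum cost to cover volume v
    for v in range(1, volume + 1):
        for Q, c in zip(capacities, costs):
            best[v] = min(best[v], best[max(0, v - Q)] + c)
    return best[volume]

# Example: covering 100 units with trailer types (capacity 50, cost 3) and (capacity 28, cost 2)
# returns 6.0, i.e., two trailers of capacity 50.
assert min_cost_trailer_mix(100, [50, 28], [3.0, 2.0]) == 6.0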
Case 5: Each commodity k ∈ K is compatible with a subset of sort pairs, i.e., S^k ⊂ S, and has unit volume, q^k = 1. There is only one trailer type with per-unit cost c_s=1 ∀ s ∈ S and capacity Q=max_s ∈ S{∑_k ∈ K1_s ∈ S^k}; hence, y_s ∈{0,1}∀ s ∈ S, as installing one unit of trailer is enough to containerize the total volume that can be assigned to the sort pair. Note that we ignore the index v for trailer because each sort pair has exactly one and the same trailer type.
In the optimal solution of Case 5, each commodity is assigned to exactly one compatible sort pair (i.e. there is no splitting of volume)
We will present a proof by contradiction. WLOG, suppose there exists an optimal solution x in which the volume of a commodity k̂ is split between two sort pairs s_1 and s_2, and the volume of every other commodity k ∈ K \{k̂} is assigned to exactly one sort pair s_k. Thus, we have x^k_s_k = q^k ∀ k ∈ K \{k̂} and x^k̂_s_1 + x^k̂_s_2 = q^k̂. Consider a solution x̃ with x̃^k_s = x^k_s ∀ k ∈ K \{k̂} and x̃^k̂_s_1 = x^k̂_s_1 + ϵ, x̃^k̂_s_2 = x^k̂_s_2 - ϵ, where ϵ > 0 is a small real number. Note that x̃^k̂_s_1 + x̃^k̂_s_2 = q^k̂. Consider another solution x̅ with x̅^k_s = x^k_s ∀ k ∈ K \{k̂} and x̅^k̂_s_1 = x^k̂_s_1 -ϵ, x̅^k̂_s_2 = x^k̂_s_2 + ϵ. Note that both solutions x̃ and x̅ satisfy constraints (<ref>) and are feasible to constraints (<ref>) because we choose Q = max_s ∈ S{∑_k ∈ K1_s ∈ S^k}. The solution x can be written as a convex combination of x̃ and x̅ (x^k_s = 1/2x̅^k_s + 1/2x̃^k_s ∀ k ∈ K, s ∈ S^k), which contradicts the optimality of the solution.
Case 5 is strongly NP-Hard
We will show that this special case can be solved as a set cover problem which is known to be strongly NP-Hard (<cit.>). An instance of a set cover is given by a ground set U = {x_1, x_2, ⋯, x_n} and a collection of m subsets E_i ⊆ U ∀ i ∈{1,2,⋯,m} of the ground set U. The optimization problem is to find the smallest number of subsets i ∈{1,2,⋯,m} such that ⋃_i ∈{1,2,⋯,m} E_i = U.
From claim <ref> we know that each commodity is assigned to exactly one compatible sort pair in the optimal solution. Let commodity k ∈ K denote element x_k ∈ U, | K | = n and set of sort pairs S = {1,2,⋯,m}. Define K_i = {k ∈ K : x_k ∈ E_i} as the set of commodities or elements that can be covered by selecting sort pairs i ∈{1,2,⋯,m}. Now note that finding the smallest number of sort pairs s ∈ S such that all commodities in K are covered is equivalent to finding the smallest number of subsets i ∈{1,2,⋯,m} to cover all elements in U.
§.§ Additional Experimental Results
Table <ref> compares the number of integer decision variables in Model <ref> and the average number of binary decision variables in Model <ref> across multiple test instances. The number of integer decision variables remains the same for each instance category because it depends on the number of arcs or sort pairs and trailer types; only the commodity volume changes across different test instances. However, the size of the feasibility restoration model <ref> depends on the predictions of the ML model. Recall that the ML model predicts the value of the integer decision variables of Model <ref>. Hence, if the predictions are accurate, then fewer sort pairs have capacity violations. Consequently, there are fewer binary decision variables in Model <ref>; the number of binary decision variables in Model <ref> is equal to the number of sort pairs with capacity violations. As the ML predictions can vary across test instances with the same set of sort pairs due to different commodity volumes, the number of binary variables in Model <ref> can differ between test instances. This is why Table <ref> reports fractional values for the average number of binary variables. It is worth highlighting that one of the key benefits of the optimization proxies is that they replace a model with a large number of integer decision variables by a prediction model, and only require solving a relatively simple feasibility restoration model with a small number of binary variables.
|
http://arxiv.org/abs/2307.04471v1 | 20230710104047 | Phonon-assisted coherent transport of excitations in Rydberg-dressed atom arrays | [
"Arkadiusz Kosior",
"Servaas Kokkelmans",
"Maciej Lewenstein",
"Jakub Zakrzewski",
"Marcin Płodzień"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"physics.atom-ph",
"quant-ph"
] |
Institute for Theoretical Physics, University of Innsbruck, 6020 Innsbruck, Austria Department of Applied Physics, Eindhoven University of Technology, PO Box 513, 5600 MB Eindhoven, The Netherlands Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain ICREA, Passeig Lluis Companys 23, 08010 Barcelona, Spain Instytut Fizyki Teoretycznej, Uniwersytet Jagielloński, Łojasiewicza 11, 30-348 Kraków, Poland Mark Kac Center for Complex Systems Research, Jagiellonian University, Łojasiewicza 11, 30-348 Kraków, Poland Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels, Barcelona, Spain
Polarons, which arise from the self-trapping interaction between electrons and lattice distortions in a solid, have been known and extensively investigated for nearly a century. Nevertheless, the study of polarons continues to be an active and evolving field, with ongoing advancements in both fundamental understanding and practical applications. Here, we present a microscopic model that exhibits a diverse range of dynamic behavior, arising from the intricate interplay between two excitation-phonon coupling terms.
The derivation of the model is based on an experimentally feasible Rydberg-dressed system with dipole-dipole interactions, making it a promising candidate for realization in a Rydberg atoms quantum simulator. Remarkably, our analysis reveals a growing asymmetry in Bloch oscillations, leading to a macroscopic transport of non-spreading excitations under a constant force. Moreover, we compare the behavior of excitations, when coupled to either acoustic or optical phonons, and demonstrate the robustness of our findings against on-site random potential. Overall, this work contributes to the understanding of polaron dynamics with their potential applications in coherent quantum transport and offers valuable insights for research on Rydberg-based quantum systems.
Phonon-assisted coherent transport of excitations in Rydberg-dressed atom arrays
Marcin Płodzień
August 12, 2023
================================================================================
§ INTRODUCTION
Polarons are quasi-particles that emerge from the coupling between electrons (or holes) with ions of a crystalline structure in polarizable materials. The idea of electron self-trapping due to lattice deformations dates back to Landau's seminal 1933 paper <cit.>, but the modern concept of a polaron as an electron dressed by phonons was formulated in 1946 by Pekar <cit.>, and developed later by Fröhlich <cit.>, Feynman <cit.>, Holstein <cit.>, and Su, Schrieffer and Heeger <cit.>.
Since their discovery, polarons have been extensively investigated, both theoretically and experimentally, not only in the field of condensed matter physics (for reviews see Refs. <cit.>), but also in various chemical and biological contexts, e.g., in protein propagation <cit.>. In particular, in the modeling of charge migration in DNA molecules, it is assumed that a localized polaron is formed in the helix near a base due to an interaction between a charge carrier and a phonon. When a uniform electric field is applied, the polaron moves at a constant velocity, and a current flows through the chain <cit.>. The charge carrier transport takes place due to coupling between carrier and phonons; in contrast, in the absence of phonons, an external constant force induces Bloch oscillations <cit.>, where the mean position of the carrier is constant while its width periodically changes in time.
Polarons have been studied in many, seemingly different experimental setups, ranging from ultracold ions
<cit.>, polar molecules
<cit.>, mobile impurities in Bose and Fermi gases <cit.>,
ultracold Rydberg atoms
<cit.>, to quantum dots on a carbon nanotube <cit.>.
Although each of these platforms possesses its unique strengths and benefits, recently there has been an exceptional outburst of interest in quantum simulation and computation with Rydberg atoms, which provide a remarkable level of flexibility for executing quantum operations and constructing quantum many-body Hamiltonians <cit.>. While the latter can contribute to our comprehension of the static properties of many-body systems, their main benefits are centered around exploring the complex dynamics displayed by these systems. In particular, in the context of polarons, it has been demonstrated that the dipole-dipole interactions between distinct Rydberg-dressed states can result in coherent quantum transport of electronic-like excitations <cit.>, which can further be coupled to optical phonons <cit.>. The paradigmatic one-dimensional topological Su-Schrieffer-Heeger (SSH) model <cit.> describing the soliton formation in long-chain polyacetylene due to excitation-phonon coupling, has been realized in Rydberg arrays <cit.>.
In this paper, we continue along this path and present theoretical studies of an implementation of a microscopic model featuring the interplay of Su-Schrieffer-Heeger (SSH) and Fröhlich electron-phonon interaction terms under the influence of an external force and disorder.
In particular, we focus on the directional transport of an excitation interacting with phonons. We indicate an excitation-phonon coupling regime where the competition between Bloch oscillations and interactions results in the coherent transport of a well-localized wave packet over a long distance. Moreover, we show the robustness of such a coherent transport of well-localized wave packets to the on-site random potential, indicating that a relatively strong disorder does not affect significantly the transport properties.
The paper is divided into three parts. In the first part, Sec. <ref>, we describe the physical setup and derive the effective Hamiltonian in Rydberg-dressed atomic arrays. The second part, described in Section <ref>, focuses on the dynamics of the system under experimentally relevant parameters. In this section, we observe the macroscopic transport of the center of mass and a transition between Bloch oscillations and moving polaron regimes. In the third part, Sec. <ref>, we comprehensively analyze the previously derived microscopic model, which exhibits a rich phase diagram due to the interplay of two different electron-phonon coupling mechanisms. Finally, we compare the behavior of excitations with acoustic and optical phonons and demonstrate the robustness of our results.
§ THE MODEL AND ITS HAMILTONIAN
We consider a one-dimensional chain of N equidistant Rydberg atoms with lattice constant x_0 and positions x_j = j x_0, confined in a periodic trap, implemented either by an optical lattice <cit.>, an optical tweezer array <cit.>, a Rydberg microtrap <cit.>, or a
painted potential <cit.>. We assume that the spatial motion of the atoms is suppressed by the strong confinement of each Rydberg atom in local potential minima. Although the atomic motion is frozen, it is remarkable that such a Rydberg system can display highly non-trivial dynamics.
In particular, the induced dipole-dipole interactions between distinct Rydberg-dressed states can lead to the emergence of coherent quantum transport of electronic-like excitations <cit.>.
In the following, we first briefly repeat the derivation of the Hamiltonian that characterizes the dynamics of single excitations <cit.>. The purpose of this recap is to modify the setup in order to incorporate nearly arbitrary on-site potential terms.
Next, after introducing phonons into the system <cit.>,
we derive an effective nearest-neighbor Hamiltonian that includes two excitation-phonon coupling terms, which we comprehensively study in the forthcoming sections, focusing on the dynamics in the presence of an external constant field.
§.§ Single excitation Hamiltonian in arbitrary potentials
We assume that each Rydberg atom can initially be found in one of the two ground-state hyperfine levels, |g⟩ or |g'⟩. By applying far-detuned dressing laser fields, with effective Rabi frequencies Ω_s, Ω_p and detunings Δ_s, Δ_p respectively, these two hyperfine states can be coherently coupled to selected highly excited Rydberg states, |s⟩ or |p⟩, with principal quantum number n≫ 1 and different angular momenta.
|0⟩_j ≈ |g⟩_j + α_s |s⟩_j
|1⟩_j ≈ |g'⟩_j + α_p |p⟩_j ,
with α_s/p = Ω_s/p/[2Δ_s/p] and j denoting the position of an atom. Treating α_s, α_p as perturbation parameters in van Vleck perturbation theory, Wüster at al. <cit.> have shown that the dipole-dipole interaction can exchange the internal states of a neighboring pair, e.g. |1⟩_1 |0⟩_2 → |0⟩_1 |1⟩_2. This process can be viewed as a hopping of an excitation from j=1 to j=2 lattice site, which conserves the number of excitations.
The perturbation analysis can be extended to a chain of N atoms, where the effective Hamiltonian in the single excitation manifold (up to the fourth order in α_s and α_p) reads <cit.> Ĥ_0 = ∑_j n̂_j (E_2 + E_4 + A_j) + ∑_j,k A_jkâ_j^†â_k,
where â_j (â^†_j) denote an annihilation (creation) operator of excitation on site j, while
A _j = ħα_s^2 α_p^2 (∑_k j1/1-U̅_kj^2) (Δ_s + Δ_p),
A_jk = ħα_s^2 α_p^2 U̅_jk/1-U̅_jk^2 (Δ_s + Δ_p),
with U̅_jk = C_3/[ħ| x_j- x_k|^3 (Δ_s + Δ_p)] and C_3 quantifying the transition dipole moment between the
Rydberg states,
describe perturbative dipole-dipole interactions.
Finally, E_2 and E_4 are constant energy shifts of the second and fourth order, respectively,
E_2 /ħ = (N-1)α_s^2 Δ_s + α_p^2 Δ_p,
E_4 /ħ = (N-1) α_s^4 Δ_s+ α_p^4 Δ_p
+ (N-1)α_s^2 α_p^2 (Δ_s+ Δ_p) .
Although in principle constant energy terms could be always ignored as they do not contribute to the dynamics of excitations, let us consider now a scenario where the Rabi frequency Ω_p depends on the atomic position on the lattice, i.e., we assume that
Ω_p →Ω_p(j) ≡Ω_p [1 + δΩ(j) ],
where δΩ(j) is arbitrary, but small correction of the order (α_p/s)^2. With this assumption, and by retaining terms up to the fourth order, the effective Hamiltonian in Eq. (<ref>) acquires an additional term, namely
Ĥ = Ĥ_0 + ħα_p^2 Δ_p ∑_j δΩ(j) n̂_j .
Because the term proportional to α_p^2δΩ(j) is of the same order as A_j, it can be incorporated into the definition of A_j in Eq. (<ref>).
With this simple modification, we have gained a position-dependent effective potential term that can strongly affect the dynamics of excitations. Although the potential term can be tailored almost arbitrarily, from now on we consider one of its simplest forms, i.e., we choose
δΩ(j) = 2 α_s^2 (F j + ϵ_j ).
The first term in the parentheses, being linearly proportional to position j, emulates the presence of a constant external field F.
The second term, with ϵ_j being a random variable, gives rise to the on-site potential disorder.
Note that both terms lead to localization of the excitation either due to Stark localization <cit.>
in a constant tilt, F, or Anderson localization <cit.> due to random ϵ_j. As explained in the next part, the situation is not so straightforward.
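A small numerical sketch of this single-excitation Hamiltonian (in units of 2ħα^4Δ, with the nearly constant on-site shift A_j omitted for brevity and purely illustrative parameter values) is given below.

import numpy as np

def single_excitation_hamiltonian(N, kappa, F=0.0, eps=None):
    # H[j, k] for one excitation on an N-site chain; kappa = C_3/(2*hbar*Delta*x_0^3),
    # assumed |kappa| < 1 so that the dressed hoppings A_jk = U/(1-U^2) stay finite.
    if eps is None:
        eps = np.zeros(N)                      # optional on-site disorder epsilon_j
    H = np.zeros((N, N))
    for j in range(N):
        H[j, j] = F * j + eps[j]               # tilt and disorder from delta-Omega(j)
        for k in range(N):
            if j != k:
                U = kappa / abs(j - k) ** 3    # dimensionless dipole-dipole coupling
                H[j, k] = U / (1.0 - U ** 2)
    return H

# Example: 21 sites, kappa = 0.3, tilt F = 0.1 gives a Stark-localized spectrum.
evals = np.linalg.eigvalsh(single_excitation_hamiltonian(21, kappa=0.3, F=0.1))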
§.§ Excitation-phonon Hamiltonian
In this part, we relax our previous assumption that the atoms of the array are completely immobile. Although we still assume that no atom can move through the lattice, we now let the atoms vibrate in the vicinity of their local equilibrium points. This will affect, as we shall see, the dynamics of excitations. We now consider a scenario where an atom in the j-th lattice site, with mass m, may oscillate with frequency ω_0 = √(k/m) inside a local potential well that can be approximated by the quadratic potential
k/2 (x-j x_0)^2 ≡k x_0^2/2 (u_j)^2,
with k being the force constant and where u_j denotes dimensionless distortion from the local equilibrium position.
The motion of the atoms can be quantized, u_j →û_j, and described by a simple quantum harmonic oscillator. This vibrational motion is responsible for distortions of the atomic array and can be regarded as a phonon. Since the Hamiltonian of the previous section, describing the motion of single excitations, depends strongly on the positions of the atoms, phonons can propagate through space due to their coupling to the excitations. Before proceeding to derive the effective Hamiltonian of the system with phonon-excitation coupling, for clarity and simplicity we assume that
α≡α_s=α_p , Δ≡Δ_s=Δ_p .
Moreover, from now on we also fix the time and energy scales and go to the dimensionless units by dividing all the energy scales by 2ħα^4 Δ.
Although the setup described in Section <ref> admits only dispersionless optical phonons that correspond to local vibrations of atoms around local minima, we consider here two different types of phonons. We proceed by writing the phononic Hamiltonian explicitly in terms of the dimensionless position and momentum operators û_j, p̂_j of local distortions,
Ĥ_ph = ∑_j [ p̂^2_j/(2 m_eff) + (m_effω_eff^2/2) (û_j - ηû_j-1)^2 ],
with the effective dimensionless mass,
m_eff = 2 m x_0^2 α^4 Δ / ħ,
and the effective oscillator frequency,
ω_eff = ω_0 / (2 α^4 Δ), where ω_0 = √(k/m) is the bare trap frequency. By changing the parameter η in Eq. (<ref>), diverse phonon types can be achieved. In particular,
η=0 corresponds to the aforementioned local vibrations (i.e., dispersionless optical phonons), and η=1 describes acoustic phonons.
These two phonon types are characterized by the dispersion relation
ϵ_q =
ω_eff, η = 0,
2 ω_eff|sin(q x_0/2)|, η = 1,
which can be readily found by writing the phononic Hamiltonian (<ref>) in terms of its eigenmodes,
Ĥ_ph = ∑_q ϵ_q(b̂^†_qb̂_q+1/2),
where b̂^†_q (b̂_q) creates (annihilates) a phonon with quasi-momentum q; these operators are related to the local dimensionless position and momentum operators û_j, p̂_j of the distortion by
û_j = ∑_q 1/√(2 N ϵ_q m_eff)(b̂_q+ b̂^†_-q)e^iqjx_0,
p̂_j = -i∑_q √(ϵ_q m_eff/ 2 N )(b̂_q - b̂^†_-q) e^iqjx_0.
Having discussed the phononic degrees of freedom, we can now write the fully effective Hamiltonian governing the motion of single excitations coupled to phonons. The derivation is straightforward and requires: (i) the expansion of the position-dependent coefficients [given by Eq. (<ref>)] in the Hamiltonian (<ref>) of the previous section up to the first order in û_j, and (ii) dropping the next-to-nearest neighbor contributions <cit.>.
By following these steps, we obtain the effective excitation-phonon Hamiltonian [cf. Fig. <ref>], which consists of four parts, i.e.,
Ĥ_eff = Ĥ_ph + Ĥ_ex + Ĥ_J + Ĥ_W,
where Ĥ_ph is the phononic Hamiltonian, Eq. (<ref>),
Ĥ_ex = J_0 ∑_j (â^†_j+1â_j + H.c.) + ∑_j (j F +ϵ_j)â^†_jâ_j ,
describes excitations with the hopping amplitude
J_0 = κ/(1-κ^2), κ = C_3/(2ħΔ x_0^3),
experiencing an external constant force F, and a local on-site disorder ϵ_j. Finally,
Ĥ_J = g_J∑_j (û_j+1 - û_j) (â^†_j+1â_j + H.c.),
Ĥ_W = g_W ∑_j (û_j+1 - û_j-1)â^†_jâ_j,
are the well-known SSH and Fröhlich Hamiltonians <cit.>, respectively, which correspond to two different mechanisms of excitation-phonon coupling,
with dimensionless coupling parameters
g_J = -3κ(1+κ^2)/(κ^2-1)^2,
g_W = -6κ^2/(κ^2-1)^2 .
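For numerical orientation, J_0, g_J and g_W can be tabulated directly from κ. A minimal Python sketch (with the κ values chosen purely for illustration) could read:

import numpy as np

def couplings(kappa):
    """Dimensionless hopping J0 and coupling strengths gJ, gW as functions of kappa."""
    J0 = kappa / (1 - kappa**2)
    gJ = -3 * kappa * (1 + kappa**2) / (kappa**2 - 1)**2
    gW = -6 * kappa**2 / (kappa**2 - 1)**2
    return J0, gJ, gW

for kappa in np.linspace(0.80, 0.86, 4):
    print(kappa, couplings(kappa))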
§.§ Equations of motion
The full numerical analysis of the polaron dynamics at the many-body level is one of the most challenging computational tasks: the total number of phonons in the system is not conserved, which prevents one from working in a restricted, fixed particle-number sector of the phononic Hilbert space. Additionally, even without a force F, the effective Hamiltonian of the system (<ref>) depends, in principle, on many parameters, namely J_0, g_W, g_J, ω_ eff and m_ eff, making the full analysis of the system even more challenging.
To analyze the dynamical properties of the considered system, in the following, we assume that the phononic degrees of freedom are independent in each lattice site. We make the semiclassical approximation by applying the Davydov Ansatz <cit.>, i.e., we assume that phonons are in a coherent state and that the full wave function is a product state of the excitation and coherent phonons part, as
| Ψ (t)⟩ = (∑_jψ_j(t)â_j^†)⊗( e^-i ∑_n[ u_j(t)p̂_j-p_j(t)û_j])|𝚟𝚊𝚌⟩,
where |ψ_j(t)|^2 is the probability of finding the excitation at site j, and u_j(t) and p_j(t) are the expectation values of the phononic position and momentum operators. The equations of motion for ψ_j(t) and u_j(t) can subsequently be derived from the Heisenberg equations of motion for the classical conjugate variables using the generalized Ehrenfest theorem; see, for example, Ref. <cit.>.
By following these steps, we obtain a closed set of coupled differential equations for the excitation amplitude ψ_j(t) and classical field u_j(t). The equations can be written in a concise form, as
i ψ̇_̇j̇ = J_j ψ_j+1 + J_j-1ψ_j-1 + W_j ψ_j ,
ü_j = -ω_eff^2 D[u_j] + S[ψ_j],
where the effective potential experienced by an excitation W_j(t) = j F +ϵ_j + g_W[u_j+1(t) - u_j-1(t)], and the effective hopping amplitude J_j(t) = J_0 + g_J[u_j+1(t) - u_j(t)] are both time-dependent functions due to the coupling to the gradient of phononic field u_j(t). As such, both W_j(t) and J_j(t) are responsible for the self-trapping of an excitation. Similarly, the phononic equation (<ref>) also depends on the excitation amplitude ψ_i(t) through the S[ψ_j] operator, given by
S[ψ_j] = - g_W/m_eff (|ψ_j+1|^2 - |ψ_j-1|^2)
- g_J/m_eff[ψ_j^*(ψ_j+1 - ψ_j-1) + c.c.],
which acts as a time-dependent source for the phonon field u_j(t).
Finally, the phononic dispersion relation, given by Eq. (<ref>), is necessarily present in the phononic equation through the D[u_j] operator,
D[u_j]=
u_j, η = 0,
2u_j - u_j+1-u_j-1, η = 1,
which introduces a crucial difference in the propagation of optical (η=0) and acoustic (η=1) phonons <cit.>, which we investigate in the next sections.
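A minimal numerical sketch of Eqs. (<ref>)-(<ref>) is given below, assuming Python with NumPy/SciPy; the parameter values, the periodic wrap at the chain ends, and the grid size are illustrative assumptions rather than the setup used for the figures. The phonon equation is integrated by introducing the velocity field v_j = u̇_j.

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters; the phase-diagram sections use m_eff = 0.5, w_eff = 10, J0 = 1.
N, J0, gJ, gW = 201, 1.0, 8.0, 12.0
m_eff, w_eff, F, eta = 0.5, 10.0, 0.2, 0        # eta = 0: optical, eta = 1: acoustic
eps_j = np.zeros(N)                              # on-site disorder (none here)
site = np.arange(N) - N // 2                     # lattice index j, centred on 0

def rhs(t, y):
    psi = y[:N] + 1j * y[N:2*N]
    u, v = y[2*N:3*N], y[3*N:]                   # v = du/dt
    up, um = np.roll(u, -1), np.roll(u, 1)       # u_{j+1}, u_{j-1} (periodic wrap)
    pp, pm = np.roll(psi, -1), np.roll(psi, 1)   # psi_{j+1}, psi_{j-1}
    W = site * F + eps_j + gW * (up - um)        # effective on-site potential W_j
    J = J0 + gJ * (up - u)                       # bond-dependent hopping J_j
    dpsi = -1j * (J * pp + np.roll(J, 1) * pm + W * psi)
    D = u if eta == 0 else 2*u - up - um         # optical vs acoustic D[u_j]
    S = (-gW / m_eff * (np.abs(pp)**2 - np.abs(pm)**2)
         - gJ / m_eff * 2 * np.real(np.conj(psi) * (pp - pm)))
    return np.concatenate([dpsi.real, dpsi.imag, v, -w_eff**2 * D + S])

psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0                               # excitation on the central site
y0 = np.concatenate([psi0.real, psi0.imag, np.zeros(N), np.zeros(N)])
sol = solve_ivp(rhs, (0.0, 16.5), y0, rtol=1e-8, atol=1e-10)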
§.§ Analysed observables
Throughout this article we choose the initial conditions ψ_j(0) = δ_j,0 and u_j(0) = u̇_j(0) = 0 for the equations of motion, Eq. (<ref>), that correspond to a single excitation on a central lattice site and initially unperturbed lattice. Without a phonon-coupling and for F=0, these initial conditions simply correspond to a quantum particle that spreads symmetrically in both lattice directions characterized by a constant Lieb-Robinson velocity <cit.>, so that its center of mass remains localized at the initial position. Contrary to the classical case, a quantum particle on a lattice will not even move in the presence of a constant force F, but instead, it starts to perform Bloch oscillations <cit.>. The situation is different in interacting systems, either in a case of particle-particle interactions <cit.>, which may further lead to disorder-free many-body localization <cit.>, or in the presence of phonons, which
can induce transient polarons at the end of Bloch oscillation periods <cit.> (see also Ref. <cit.>).
In this study, we investigate how the propagation of a single excitation is influenced by the two competing phonon-coupling mechanisms under an applied constant force. Specifically, we aim at answering the following two questions: (i) how much does the excitation spread due to the coupling with phonons, and (ii) does its center of mass move in the presence of the constant force F? To address these questions we focus on three simple observables that can be calculated from local density measurements. First, we consider the participation ratio (PR), defined as <cit.>:
PR(t) = (∑_j|ψ_j(t)|^4)^-1,
where we have assumed a unit normalization of the wavefunction ∑_j|ψ_j|^2=1.
The participation ratio PR equals 1 when the excitation is localized on a single lattice site and equals N when it is completely delocalized over the whole lattice.
The second observable is the center of mass position of the wave packet, i.e.,
x(t) = ∑_j=-N/2^N/2 j |ψ_j(t)|^2.
Moreover, in some cases, analyzing the ratio of the two quantities mentioned above can provide valuable insights. We define this ratio, denoted as ξ, as:
ξ(t) = |x(t)|/PR(t).
ξ is a quantity ranging from 0 to ξ_max = N/2. The maximum value ξ_max corresponds to a moving, maximally-localized, non-dispersive solution that has reached the boundary of the system. As such, ξ can be viewed as an indicative measure for selecting well-localized solutions moving in one direction.
Finally, it is worth mentioning that it is often not necessary to analyze the entire time range of the above observables. In fact, to discern various dynamic behaviors, it is usually sufficient to look at PR(t), x(t) and ξ(t) at the final evolution time t_f ≫ 1. For example, a large PR(t_f) (relative to the system size N) suggests that the excitation is not stable and has delocalized over the lattice.
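For completeness, a short helper evaluating PR, x and ξ from a snapshot of the amplitudes ψ_j is sketched below (it assumes the arrays produced by the integration sketch above):

import numpy as np

def observables(psi, site):
    """Participation ratio PR, centre of mass x, and xi = |x|/PR for amplitudes psi_j."""
    prob = np.abs(psi)**2
    prob = prob / prob.sum()           # enforce unit normalisation
    PR = 1.0 / np.sum(prob**2)
    x = np.sum(site * prob)
    return PR, x, np.abs(x) / PR

# e.g. evaluated at the final time of the integration sketched above:
# PR_f, x_f, xi_f = observables(sol.y[:N, -1] + 1j * sol.y[N:2*N, -1], site)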
§ POLARON DYNAMICS: EXPERIMENTAL CONSIDERATIONS
In this section we elaborate on the results of the previous sections and study the dynamics of a Rydberg excitation under the presence of the external force F, solving the equations of motion for a physically relevant range of parameters.
The effective Hamiltonian (<ref>) of the system relies on several effective, dimensional parameters, including m_eff = 2 m x_0^2 α^4 Δ / ħ, ω_eff =ω_0 / (2 α^4 Δ), as well as J_0, g_J, g_W, given by Eq. (<ref>) and Eqs. (<ref>). However, it is worth noting that the latter three parameters are not independent within our setup, and their values are determined by a single parameter κ = C_3 / (2ħΔ x_0^3). This provides us with significant flexibility in selecting appropriate physical parameters for our convenience.
In the following, we choose the highly excited Rydberg states |s⟩, |p⟩ of rubidium-87 with principal quantum number n=50 and angular momentum equal to 0 or ħ, for which C_3 = 3.224 GHz×μm^3. We fix the lattice spacing x_0 = 2 μm, and the local trap frequency ω_0 = 20 kHz. In the numerical simulations, we vary the dimensionless parameter κ between 0.80 and 0.86, which is equivalent to changing the detuning over Δ∼ 234-252 MHz, and corresponds to the dressing parameter α∼ 0.04. Importantly, by increasing κ we also increase the phonon coupling strength from around g_J/m_eff∼ g_W/m_eff∼-4.5 to g_J/m_eff∼ g_W/m_eff∼-8. Furthermore, we remind the reader that in our setup only the optical phonons (i.e., dispersionless vibrations) are experimentally relevant and, therefore, in this section we set η=0. Finally, we fix the value of the force at F=0.2, and we choose the system size N=401.
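As a quick consistency check on these numbers, κ can be evaluated directly from the quoted physical parameters, assuming that C_3 and Δ are given in the same frequency convention (so that any 2π factors cancel) and that C_3 carries units of GHz·μm^3:

C3 = 3.224          # GHz * micrometer^3 (assumed convention: C3/hbar in ordinary frequency)
x0 = 2.0            # micrometers

def kappa(delta_GHz):
    """kappa = C3 / (2 hbar Delta x0^3) in the units assumed above."""
    return C3 / (2.0 * delta_GHz * x0**3)

print(kappa(0.252), kappa(0.234))   # ~0.80 and ~0.86, matching the range quoted in the text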
In order to characterize the transport properties of an excitation ψ_j(t), in the top panel of Fig. <ref> we plot its center of mass position x(t) and the corresponding participation ratio PR(t); see Eqs. (<ref>)-(<ref>) for the respective definitions. In the bottom panel, we additionally illustrate the ratio ξ = |x|/PR.
All these quantities are plotted as a function of κ, at a fixed time t_f=2.1 T_B ≈ 66, where T_B = 2π/F is the Bloch oscillation period. We find that up to κ∼ 0.83 both x(t_f) and PR(t_f) are small (relative to the system size N), which corresponds to Bloch oscillation-like dynamics where the phonon influence is minimal. In contrast, phonons play an important role above κ∼ 0.83, where the system dynamics is quite sensitive to the choice of microscopic parameters.
Within the chaotic-like regime, the typical Bloch oscillation dynamics is completely disrupted, as the majority of solutions become delocalized across the lattice, leading to large values of PR(t). However, amidst this chaotic behavior, we also discover intervals of stability, characterized by peaks of ξ(t_f), where a substantial portion of the wave packet becomes well-localized and exhibits near-constant velocity motion.
We illustrate those different dynamical behaviours in Fig. <ref>, where the first column, i.e., panels (a)-(d), show the time evolution of the excitation density |ψ_j(t)|^2, while the second column [panels (e)-(h)] illustrates the corresponding time evolution of the center of mass position x(t) and the participation ratio PR(t). In the first row (κ = 0.8), we observe almost perfect Bloch oscillations. However, upon closer examination, a subtle asymmetry becomes apparent, which is evident by a non-zero x(t). The asymmetry is enhanced for a higher κ = 0.83, as depicted in the second row of Fig. <ref>. Finally, the last two rows of Fig. <ref> illustrate the time evolution of the excitation density in the chaotic-like regime above κ∼ 0.83, cf. Fig. <ref>, where most of the solutions are delocalized over a lattice, as in Fig. <ref>(d) for κ = 0.86. In contrast, in Fig. <ref>(c) we illustrate a regular behaviour for κ=0.834, which lies inside one of the aforementioned stability windows.
In this scenario, due to constructive interference after one Bloch oscillation period, a prominent portion of the wave function coalesces into a very narrow non-dispersive wave packet that moves with a nearly constant velocity. Overall, Fig. <ref> offers a comprehensive visual representation of the dynamic phenomena investigated in this section, shedding light on the varying dynamical behaviors and properties of the system with increasing phonon interaction.
§ DYNAMICAL PHASE DIAGRAMS OF THE EFFECTIVE HAMILTONIAN
In the previous sections, we have derived and then analysed a microscopic Hamiltonian (<ref>), governing the dynamics of an excitation
coupled to phonons through two different mechanisms, i.e., the SSH and Fröhlich Hamiltonians, see Eq. (<ref>). While maintaining a close connection to the experimental platform, it is important to note that in the considered Rydberg setup, the phonon coupling strengths g_J and g_W are not independent. Instead, they can both be expressed in terms of a single parameter κ, as demonstrated in Eq. (<ref>). Consequently, investigating the interplay between these two competing phonon-coupling mechanisms within the current Rydberg platform becomes challenging. To address this limitation and explore the complete phase diagram in a more general context, in this section, we treat g_J and g_W as completely independent and fix other parameters. In the initial phase, as described in Section <ref>, our primary objective is to identify a stable polaron regime. Specifically, we aim to find a regime in which an initially localized excitation does not spread during the course of time evolution. Subsequently, in Section <ref>, we demonstrate the existence of stable islands where polarons can exhibit non-dispersive motion when subjected to a constant force, even in the presence of substantial disorder.
Furthermore, in this part, we thoroughly examine the quantitative differences in dynamics of optical and acoustic phonons.
In the following, we set the system size to N = 401, and solve the equations of motions in a fixed time interval t∈ [0,t_f = 16.5]. Unless explicitly stated otherwise, we also set m_ eff=0.5, ω_ eff = 10, and J_0 = 1.
§.§ Polaron formation
In the preceding section, we have already witnessed the emergence of a non-dispersive, self-trapped polaron through the excitation-phonon coupling. Building upon this observation, here we independently vary the two coupling strengths, g_J, and g_W, to identify a stable polaron regime. It is worth noting that the Hamiltonian of the system, as described by Eq. (<ref>), is invariant under the simultaneous transformation: u_j → - u_j, g_J → - g_J, and g_W → - g_W. Therefore, without loss of generality, we can assume g_J≥0.
In Fig. <ref>, we present a phase diagram of the participation ratio PR calculated at the final evolution time for a broad range of values: g_J∈ [0,45] and g_W∈ [-16,20]. Each panel of Fig. <ref> corresponds to distinct values of m_ eff and η, as specified in the figure caption.
In terms of the layout, the left (right) column corresponds to the optical (acoustic) phonons, and m_ eff increases from top to bottom.
In all panels of Fig. <ref>, we observe wide regions with both extended states (warm colors) and well-localized solutions (dark blue colors), with the latter corresponding to stable, stationary polarons. We discover a non-trivial dependence of the participation ratio on both coupling strengths.
Moreover, we find qualitatively similar behavior for both phonon types; however, the acoustic phonons exhibit greater dynamical stability, as is evident from the presence of a chaotic-like region (the light blue dotted area) [compare with Fig. <ref> and see the discussion in Sec. <ref>]. Finally, we note that a decrease of the effective mass m_eff stabilizes the excitation, supporting localized polaron formation.
§.§ Robustness of coherent transport against disorder
In this paragraph, we focus on the parameters regime, where a well-localized excitation can be transported over a long distance. Namely, after identifying stable polaron regimes, we proceed to apply a constant force to investigate the propagation of non-spreading solutions.
For this analysis, we fix F=0.2, m_ eff=0.5 and select the coupling strengths within the range g_J ∈ [4,16] and g_W ∈ [8,20]. These regions are indicated by a dashed square in the bottom panels of Fig. <ref>. The results are presented in Fig. <ref>. The top row of Fig. <ref> illustrates the participation ratio, PR(t_f), for both optical [panel (a)] and acoustic [panel (b)] phonons. In both panels, we observe a shift in the boundary between extended and localized states due to the presence of the applied force. However, the prevalence of dark blue colors, indicating localized regimes, remains evident. The bottom row of Fig. <ref> displays ξ(t_f), as given by Eq. (<ref>). This quantity serves as a measure for selecting well-localized solutions propagating in a single direction. We observe stable transport islands of such solutions, indicated by warm colors. Panel (c) corresponds to optical phonons, while panel (d) corresponds to acoustic phonons.
Finally, in Fig. <ref>, we examine the robustness of the non-dispersive moving solutions against on-site disorder, ϵ_j, as in Eq. (<ref>). The disorder is introduced by assuming ϵ_j to be a pseudorandom variable drawn from a uniform distribution in [-W/2,W/2].
Panels (a) and (b) depict the time propagation of excitations for optical and acoustic phonons, respectively. Panel (c) of Fig. <ref> illustrates the center of mass position, while Fig. <ref>(d) presents the participation ratio evaluated at the final evolution time, plotted as functions of the disorder amplitude W. The results are averaged over 200 independent realizations of disorder. Notably, the participation ratio for both acoustic and optical phonons remains relatively constant, providing evidence for the robustness of the polaron self-trapping mechanism, while the center of mass is still transported over a significant distance.
§ SUMMARY AND CONCLUSIONS
In summary, we propose a quantum simulator based on Rydberg-dressed atom arrays for the SSH-Fröhlich Hamiltonian, allowing studies of polaron formation and dynamics. The interplay between the two competing excitation-phonon coupling terms in the model results in rich dynamical behavior, which we comprehensively analyze. In particular, our findings reveal an asymmetry in the Bloch oscillations that allows coherent transport of a well-localized excitation over long distances.
Moreover, we compare the behavior of excitations coupled to either acoustic or optical phonons and find qualitatively similar behavior. Finally, we demonstrate the robustness of phonon-assisted coherent transport against an on-site random potential.
Our analysis is restricted to weak lattice distortions, corresponding to a small number of phonons per lattice site; however, the proposed quantum simulator also allows studies of the excitation dynamics in the strong-distortion limit, as well as of a plethora of different scenarios, such as bi- and many-polaron dynamics and the quantum boomerang effect <cit.> affected by the presence of phonons, both in single-particle and many-body settings. We believe that our work opens up new avenues for research in Rydberg-based quantum simulators.
A.K. acknowledges the support of the Austrian Science Fund (FWF) within the ESPRIT Programme ESP 171-N under the Quantum Austria Funding Initiative.
S.K. acknowledges the Netherlands Organisation for Scientific Research (NWO) under Grant No. 680.92.18.05.
ICFO group acknowledges support from: ERC AdG NOQIA; MICIN/AEI (PGC2018-0910.13039/501100011033, CEX2019-000910-S/10.13039/501100011033, Plan National FIDEUA PID2019-106901GB-I00, FPI; MICIIN with funding from European Union NextGenerationEU (PRTR-C17.I1): QUANTERA MAQS PCI2019-111828-2); MCIN/AEI/ 10.13039/501100011033 and by the “European Union NextGeneration EU/PRTR" QUANTERA DYNAMITE PCI2022-132919 within the QuantERA II Programme that has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No 101017733Proyectos de I+D+I “Retos Colaboración” QUSPIN RTC2019-007196-7); Fundació Cellex; Fundació Mir-Puig; Generalitat de Catalunya (European Social Fund FEDER and CERCA program, AGAUR Grant No. 2021 SGR 01452, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020); Barcelona Supercomputing Center MareNostrum (FI-2023-1-0013); EU (PASQuanS2.1, 101113690); EU Horizon 2020 FET-OPEN OPTOlogic (Grant No 899794); EU Horizon Europe Program (Grant Agreement 101080086 — NeQST), National Science Centre, Poland (Symfonia Grant No. 2016/20/W/ST4/00314); ICFO Internal “QuantumGaudi” project; European Union’s Horizon 2020 research and innovation program under the Marie-Skłodowska-Curie grant agreement No 101029393 (STREDCH) and No 847648 (“La Caixa” Junior Leaders fellowships ID100010434: LCF/BQ/PI19/11690013, LCF/BQ/PI20/11760031, LCF/BQ/PR20/11770012, LCF/BQ/PR21/11840013).
The work J.Z. was funded by the National Science Centre, Poland under the OPUS call within the WEAVE programme
2021/43/I/ST3/01142 as well as via project 2021/03/Y/ST2/00186 within the QuantERA II Programme that has received funding from the European Union Horizon 2020 research and innovation programme under Grant agreement No 101017733. A
partial support by the Strategic Programme Excellence Initiative within Priority Research Area (DigiWorld) at Jagiellonian University is acknowledged.
M.P. acknowledges the support of the Polish National Agency for Academic Exchange, the Bekker programme no:
PPN/BEK/2020/1/00317.
Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union, European Commission, European Climate, Infrastructure and Environment Executive Agency (CINEA), nor any other granting authority. Neither the European Union nor any granting authority can be held responsible for them.
|
http://arxiv.org/abs/2307.06234v1 | 20230712152628 | Forward hysteresis and Hopf bifurcation in an NPZD model with application to harmful algal blooms | [
"Joshua C. Macdonald",
"Hayriye Gulbudak"
] | q-bio.PE | [
"q-bio.PE",
"92B05, 92-10"
] |
Forward hysteresis and Hopf bifurcation in an NPZD model with application to harmful algal blooms
[1,2]J. C. [email protected]
[1]H. [email protected]
[1]Department of Mathematics, University of Louisiana at Lafayette, 1401 Johnston Street, Lafayette, 70504, Louisiana, USA
[2]Current address: School of Zoology, Faculty of Life Sciences, Tel Aviv University, Tel Aviv-Yafo, Israel
Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) models, which describe the interactions of phytoplankton and zooplankton populations with their ecosystem, are used to predict their ecological and evolutionary population dynamics. These organisms form the two base trophic levels of aquatic ecosystems, so understanding their population dynamics and how disturbances affect these systems is crucial. Here, starting from a base NPZ modeling framework, we incorporate the harmful effects of phytoplankton overpopulation on zooplankton - representing a crucial next step in harmful algal bloom (HAB) modeling - and split the nutrient compartment to formulate an NPZD model. We then mathematically analyze the NPZ system upon which this new model is based, including local and global stability of equilibria, Hopf bifurcation conditions, and forward hysteresis, where bi-stability occurs with multiple attractors. Finally, we extend the threshold analysis to the NPZD model, which displays both forward hysteresis with bi-stability and Hopf bifurcation under different parameter regimes, and examine ecological implications after incorporating seasonality and ecological disturbances. Ultimately, we quantify ecosystem health in terms of the relative values of the robust persistence thresholds for phytoplankton and zooplankton and find that (i) ecosystems sufficiently favoring phytoplankton, as quantified by the relative values of the plankton persistence numbers, are vulnerable to both HABs and (local) zooplankton extinction, and (ii) even healthy ecosystems are extremely sensitive to nutrient depletion over relatively short time scales.
August 12, 2023
===================
§ INTRODUCTION
Plankton are a ubiquitous group of very small drifting organisms which live in both salty and fresh water and form the base trophic levels of aquatic food webs <cit.>. Divided by trophic position, phytoplankton are primary producers, generating growth through photosynthesis, while zooplankton are primary consumers, and feed primarily on phytoplankton <cit.>. Mixotrophic organisms, those which can opportunistically switch between photosynthesis and heterotrophy, also exist in this group <cit.>.
Disturbance is an important mechanism that affects the functioning of these ecosystems, and variation in type, frequency, intensity, and duration of disturbance has important implications for ecosystem and community structure and thus underlying population dynamics <cit.>. There is general agreement that a high frequency of ecological disturbance has a net negative effect on ecosystem species diversity <cit.>. Given plankton's foundational position in aquatic food webs, understanding their interactions and population dynamics is a central focus in aquatic ecology (cf <cit.>) - not least because they can be main drivers of knock-on effects of ecological disturbances. Here we focus in particular on plankton population dynamics in the wake of nutrient influx (eutrophication, <cit.>) and nutrient depletion events (re-oligotrophication, see <cit.>). These are of immediate concern given their potential cascading cross-scale knock-on effects, which can range from local die-offs <cit.> to diverse disease spillover events and the pathogenesis of vector-borne disease (particularly in freshwater ecosystems, <cit.>), and because, under the current climate regime, the conditions that promote their occurrence have been both predicted and shown to increase (<cit.>), in contrast to other potential disturbances.
Plankton blooms occur naturally in temperate zones during the spring <cit.>. However, phytoplankton blooms may cause deleterious ecological effects through toxicity (e.g., harmful algal blooms (HABs)) or through the formation of low-oxygen/hypoxic zones (cf <cit.>), both of which negatively affect ecological consumers (e.g., zooplankton or fishes; <cit.>). There is debate about what exactly is classified as a bloom versus what is not (but see <cit.> for one definition). Here, we take a conservative approach and say a bloom has occurred if the peak phytoplankton concentration is at least 300% of the mean value of the simulated time series over the course of one year, and we investigate bloom effects as a disturbance of plankton population dynamics.
Nutrient-Phytoplankton-Zooplankton-Detritus (NPZD) models (cf. <cit.>) have long been used by plankton ecologists to investigate plankton interactions and long-term population dynamics in various settings from lakes to the far-from-shore ocean. These models are essentially nested Lotka-Volterra models with phytoplankton `predating' nutrients and zooplankton predating phytoplankton and are used to study the mechanisms that sustain plankton coexistence and diversity <cit.>.
Hopf bifurcation, which biologically corresponds to predator-prey feedback loops, is a common feature of these models (cf. <cit.>), and forward hysteresis has been demonstrated depending on the choice of zooplankton mortality term <cit.>. Nutrient cycling has also been modeled by incorporating delay differential equations instead of a separate detritus compartment <cit.>. However, many of these models - particularly those with many nonlinear terms - have not been mathematically analyzed, which is crucial for understanding complex interactions between NPZ(D) systems, and have not been used in the context of ecological disturbances.
In this study, starting from a base NPZ modeling framework from the literature <cit.>, we incorporate the harmful effects of phytoplankton overpopulation on zooplankton by adding a functional form to the zooplankton compartment and split the nutrient compartment to formulate an NPZD model (Section <ref>). To our best knowledge, this is the first multi-trophic level model to incorporate process-based effects of harmful algal blooms (HABs), which has been argued for as a crucial next step in HAB modeling <cit.>. We then mathematically analyze the NPZ system upon which this new model is based with both quadratic and linear zooplankton mortality terms (Section <ref>) - deriving global stability conditions for the zooplankton extinction equilibrium in terms of zooplankton invasion number for a special case of the linear mortality model. We also derive local stability, Hopf bifurcation, and derive existence conditions for the coexistence equilibria of the NPZ model with both quadratic and linear zooplankton mortality. We also provide one and two-parameter bifurcation diagrams for both models, showing forward hysteresis with bi-stability or Hopf bifurcation in the quadratic loss case depending on parameter values, along with Hopf bifurcation or transcritical bifurcation in the linear loss case. Finally, we extend the threshold analysis to the new NPZD model, which displays both forward hysteresis with bi-stability and Hopf bifurcation, and examine ecological implications after incorporating seasonality and ecological disturbances in the form of eutrophication and re-oligotrophication events (Section <ref>). Ultimately we quantify ecosystem health in terms of the relative values of the robust persistence thresholds for phytoplankton and zooplankton and find (i) ecosystems sufficiently favoring phytoplankton are vulnerable to both HABs and (local) zooplankton extinction (ii) even balanced ecosystems are extremely sensitive to nutrient depletion over relatively short time scales.
§ MODELING NPZ AND NPZD SYSTEMS
The general form of the coupled nonlinear NPZ ODE models describes the interactions between nutrients (N), phytoplankton (P), and zooplankton (Z).
The terms f(P), g(N) represent phytoplankton's response to light (ie. growth), and phytoplankton nutrient uptake, respectively. The terms h(P), i(P) denote phytoplankton mortality due to zooplankton grazing, and natural mortality, respectively.
Zooplankton growth is driven by interaction with phytoplankton, scaled by zooplankton assimilation, which can be thought of as `messy eating'. Because zooplankton are the highest trophic level modeled, all zooplankton loss is generally accounted for with a single functional form, j(Z). In addition to nutrient loss due to phytoplankton nutrient uptake, nutrient loss/exchange can also occur due to other factors such as re-oligotrophication <cit.> or cross-thermocline exchange of nutrients <cit.>, which is represented by the functional term m(N). Furthermore, nutrient growth results from plankton death and inefficient zooplankton grazing, denoted with the term (1 - γ)h(P)Z, where γ represents the zooplankton assimilation rate. The general form of the NPZ model is given as follows <cit.> (see schematic Figure <ref>):
Ṗ=f(P)g(N)P - h(P)Z - i(P)P,
Ż = γ h(P)Z - j(Z)Z,
Ṅ = -f(P)g(N)P + (1 - γ)h(P)Z + i(P)P - m(N).
Following <cit.>, we consider the specific functional forms:
f(P) =β(1 - P/K), g(N)=N/k + N, h(P)= η P^2/μ^2 + P^2,
j(Z) =δ Z^σ,i(P)=0, m(N) = S(N-Θ).
Therefore one can obtain the following coupled ODE model:
{Ṗ = N/k + Nβ P(1 - P/K) - η P^2/μ^2 + P^2Z,
Ż = γ[η P^2/μ^2 + P^2 - δ Z^σ]Z,
Ṅ = -βN/k + N(1 - P/K)P + (1 - γ)η P^2/μ^2 + P^2Z - S(N - Θ)
.
In the NPZ model (<ref>), the modified logistic equation for phytoplankton growth has the saturating response of Holling's type II for nutrient uptake (Michaelis-Menten kinetics). We also have a saturating response of Holling's type III function for the zooplankton grazing of phytoplankton <cit.>. These are chosen because plankton are known to not have well-mixed spatial distributions <cit.>. The parameters k, K, η and μ denote the Michaelis-Menton half saturation constant, the phytoplankton carrying capacity, the maximum zooplankton grazing rate, and the half-saturation grazing constant, respectively. Zooplankton growth is in direct response to interaction with phytoplankton, scaled by the zooplankton assimilation rate, γ, and loss is assumed to be either linear (σ = 0) or quadratic (σ = 1) with death rate δ.
Moreover, the parameter S represents the nutrient loss/exchange rate. Finally, the parameter Θ represents the intrinsic nutrient level.
§.§ Incorporating the harmful effect of phytoplankton overpopulation
To account for the harmful effect of phytoplankton overpopulation on zooplankton during harmful algal blooms, as well as nutrient cycling, we incorporate a new variable, D(t), representing the amount of detritus, into the model (<ref>), providing a general NPZD framework (see Fig. <ref>c).
We then define the net reproductive rate of zooplankton as a function of phytoplankton as follows:
r(P) = h(P)_zooplankton grazing - ℓ(P)_harmful effect of phytoplankton,
where ℓ(P) = α P/Ξ + P with α representing the maximum harmful effect of phytoplankton on zooplankton, and Ξ denoting the half-saturation constant of the effect. The term ℓ(P)Z represents the deleterious effect of phytoplankton overpopulation on zooplankton. Additionally, to better capture nutrient cycling we split the compartment, Ṅ, into two compartments, with Ḋ representing the rate of change in detritus concentration which decays into nutrients at rate q(D)=Ψ, and phytoplankton having natural mortality rate i(P) = ϵ.
This functional form of r(P), given in (<ref>), represents the beneficial effect of phytoplankton grazing for zooplankton population, and the deleterious effect of phytoplankton overpopulation on zooplankton.
This formulation is motivated by our prior work, modeling antibody-dependent enhancement (ADE) in Dengue, where an increase in preexisting crossreactive antibodies can increase infection severity <cit.>. Here we assume that the net zooplankton reproduction can decrease as the phytoplankton population size P achieves a peak above some threshold, representing harmful bloom occurrence <cit.> (see figure <ref>).
Taken together this results in the model
{Ṗ = N/k + Nβ P(1 - P/K) - η P^2/μ^2 + P^2Z - ϵ P,
Ż = γ([η P^2/μ^2 + P^2 - α P/Ξ + P]Z - δ Z^2),
Ṅ = -N/k + Nβ P(1 - P/K) - S(N - θ) + Ψ D
Ḋ = (1 - γ)η P^2/μ^2 + P^2Z + ϵ P - Ψ D.
.
This model can be reparametrized via
{τ = β t, p = μ^-1P, z = η (βμ)^-1Z,n= μ^-1N,
a = δβμ/η^2,s = Sβ^-1,θ = Θμ^-1,
k̃ = kμ^-1,c = Kμ^-1, γ̃ = ηγβ^-1,
.
similar to the NPZ models in <cit.>.
To further rescale new parameters in the model (<ref>), we consider
{α̃ = αη^-1, ξ = Ξμ^-1, d = Dμ^-1
ψ = Ψβ^-1, ϵ̃ = ϵβ^-1,
.
Then the full NPZD model (<ref>) can be re-scaled as:
{dp/dτ = n/k̃ + np(1 - p/c) - p^2/1 + p^2z - ϵ̃p
dz/dτ = γ̃[p^2/1 + p^2 -α̃p/ξ+p - az]z
dn/dτ = -n/k̃ + np(1 - p/c) - s(n - θ) + ψ d
dd/dτ = (1 - γ)p^2/1 + p^2z + ϵ̃p - ψ d
.
In the system (<ref>), the parameters a and s are, respectively, the rescaled zooplankton mortality and nutrient loss/exchange rates. Phytoplankton carrying capacity is represented by c, and γ̃ represents zooplankton assimilation scaled by the ratio of the maximum zooplankton grazing rate, η, to the phytoplankton response to light, β.
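A minimal numerical sketch of the rescaled system (<ref>) is given below; it assumes Python with SciPy, and the parameter values and initial condition are illustrative placeholders rather than values used for the figures.

from scipy.integrate import solve_ivp

# Placeholder parameter values for illustration only.
par = dict(k=0.5, c=3.0, a=0.3, s=0.1, theta=1.0, gamma=0.5,
           gamma_t=0.5, alpha=0.2, xi=1.0, psi=0.2, eps=0.05)

def npzd_rhs(tau, y, q):
    p, z, n, d = y
    uptake  = n / (q['k'] + n) * p * (1 - p / q['c'])   # nutrient-limited growth
    grazing = p**2 / (1 + p**2) * z                      # Holling type III grazing
    harm    = q['alpha'] * p / (q['xi'] + p) * z         # harmful effect of overabundant p
    dp = uptake - grazing - q['eps'] * p
    dz = q['gamma_t'] * (grazing - harm - q['a'] * z**2)
    dn = -uptake - q['s'] * (n - q['theta']) + q['psi'] * d
    dd = (1 - q['gamma']) * grazing + q['eps'] * p - q['psi'] * d
    return [dp, dz, dn, dd]

sol = solve_ivp(npzd_rhs, (0, 500), [0.1, 0.1, 1.0, 0.0], args=(par,),
                rtol=1e-8, atol=1e-10)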
§ THRESHOLD ANALYSIS OF THE NPZ MODEL
To understand the model dynamics and ecological impact of HABs, we first analyze the NPZ system (<ref>) then we extend the analysis to the NPZD model (<ref>) along with crucial ecological implications.
Note that the rescaled NPZ subsystem (<ref>) is as follows <cit.>:
{dp/dτ = n/k̃ + np(1 - p/c) - p^2/1 + p^2z
dz/dτ = γ̃[p^2/1 + p^2 - az^σ]z
dn/dτ = -n/k̃ + np(1 - p/c) + (1 - γ)p^2/1 + p^2z - s(n - θ)
.
For the analysis of the model (<ref>), we consider two cases: (i) a quadratic zooplankton loss term (σ = 1) and (ii) a linear zooplankton loss term (σ = 0). All model parameters are assumed to be strictly positive. In both cases the model has two boundary equilibria in the positive phytoplankton-nutrient plane: ℰ_np = (c,0,θ) and ℰ_n = (0,0,θ).
For both of these equilibria, the steady state nutrient level is the intrinsic nutrient level of the system, θ. Equilibrium ℰ_np corresponds to the steady state with phytoplankton at their carrying capacity, c, and equilibrium ℰ_n represents community collapse. As we show below, changes in the zooplankton loss term induce distinct qualitative dynamics. Regardless of the choice of zooplankton mortality functional form, the community collapse equilibrium is always unstable for this model.
The community collapse equilibrium, ℰ_n, of model (<ref>) is always unstable.
See appendix A.
See Table <ref> for a summary of results.
§.§ Linear zooplankton loss term (σ = 0) with the full model
Define the zooplankton invasion number,
ℛ_0:= 1/a·c^2/1+c^2.
If ℛ_0 < 1 and σ = 0, then the zooplankton extinction equilibrium of system (<ref>), ℰ_np, is locally asymptotically stable. If ℛ_0 > 1, it is unstable.
See appendix A.
From expression (<ref>) it is apparent that the invasion number ℛ_0 may be interpreted as the average lifespan of zooplankton, a^-1, scaled down by a function of the phytoplankton carrying capacity, c^2(1+c^2)^-1. Note that for fixed a, lim_c→∞ℛ_0 = a^-1 and lim_c→ 0ℛ_0 = 0. Similarly, for fixed c, lim_a→ 0ℛ_0 = ∞ and lim_a→ 1ℛ_0 = c^2(1+c^2)^-1 < 1. This, together with Prop. <ref>, indicates that when σ = 0, as the phytoplankton carrying capacity increases or the zooplankton loss rate decreases, the probability of zooplankton extinction approaches zero. When the phytoplankton carrying capacity is low or the zooplankton loss rate is high, zooplankton extinction is more likely.
If σ = 0, then the system (<ref>) has a unique coexistence equilibrium if and only if ℛ_0 > 1.
Suppose that for the system (<ref>), we have σ = 0,
dp/dτ = dz/dτ = dn/dτ = 0,
and that n,p,z > 0. Then, the system has the following potential unique coexistence equilibrium:
ℰ_* = {
p_* = √(a/1-a)
n_* = (2cs)^-1[n_b + √(n_b^2 + 4c^2k̃θ s^2)]
z_* = n_*/p_*(k̃ + n_*)(1 - p_*/c)(1+p_*^2)
.
where
n_b = -ck̃s + cθ s - cp_*γ + p_*^2γ.
Note that ℛ_0 > 1 if and only if p_* < c:
ℛ_0 = c^2/a(1 + c^2) > 1 ⇔c^2/1 + c^2 > a
⇔ p_* = √(a/1-a) < √(c^2(1+c^2)^-1/1 - c^2(1+c^2)^-1) = √(c^2) = c.
Additionally, from its form, it is clear that n_* > 0. Thus, we may conclude that this expression for the coexistence equilibrium is biologically feasible.
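For a concrete illustration, the equilibrium (<ref>) can be evaluated numerically; the sketch below (with arbitrary illustrative parameter values) returns (p_*, z_*, n_*) when ℛ_0 > 1 and None otherwise.

import numpy as np

def coexistence_linear(a, c, k, theta, s, gamma):
    """Coexistence equilibrium of the NPZ model with linear loss (sigma = 0)."""
    R0 = c**2 / (a * (1 + c**2))                   # zooplankton invasion number
    if R0 <= 1:
        return None                                # no coexistence equilibrium
    p = np.sqrt(a / (1 - a))
    nb = -c*k*s + c*theta*s - c*p*gamma + p**2*gamma
    n = (nb + np.sqrt(nb**2 + 4*c**2*k*theta*s**2)) / (2*c*s)
    z = n / (p*(k + n)) * (1 - p/c) * (1 + p**2)
    return p, z, n

print(coexistence_linear(a=0.4, c=3.0, k=0.5, theta=1.0, s=0.1, gamma=0.5))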
Let
{ψ_1 = n_*/k̃+n_*(1 - p_*/c), ψ_2 = 2p_*p_*^2/1+p_*^2z_*, ψ_3 = 2p_*/(1+p_*^2)^2z_*,
ψ_4 = n_*/k̃+n_*·p_*/c,
ψ_5 = p_*(c-p_*)/c(k̃ + n_*)(1-n_*/k̃+n_*), ξ_0 = aγ̃ψ_3(s+γψ_5),
ξ_1 = s(-ψ_1 + ψ_3+ψ_4) + γψ_3ψ_5 + aγ̃ψ_3, ξ_2 = -ψ_1 +ψ_3+ψ_4 + ψ_5 + s
ξ̂_1 = ξ_1 + sψ_1, ξ̂_2 = ξ_2 + ψ_1, 𝒞_1^1 = ψ_1 + aγ̃/ψ_3 + ψ_4 + sψ_5 + aγ̃,
𝒞_1^2 = ξ_0 + ψ_1(ξ̂_1 + sξ̂_2)/ξ̂_1ξ̂_2 + sψ_1^2, 𝒞_1 = max{𝒞_1^1,𝒞_1^2}.
where p_*,n_*,z_* are as defined in (<ref>).
If ℛ_0 > 1 and σ = 0, then the coexistence equilibrium, ℰ_* (<ref>), is locally asymptotically stable if
𝒞_1:=max{𝒞_1^1,𝒞_1^2} < 1
and has a simple Hopf bifurcation in a parameter of interest, α, at value α_0 if
𝒞_1^1 < 𝒞_1^2 = 1
and
-ξ_0'(α_0) + ξ_2(α_0)ξ_1'(α_0) + ξ_1(α_0)ξ_2'(α_0) ≠ 0
See appendix A.
We choose the notation 𝒞_1 because, as shown in Prop. 3 (similarly with 𝒞_2 in Prop. <ref>), it describes the qualitative nature of coexistence in the model with linear zooplankton loss term. While the above expressions (derived from the characteristic polynomial of the Jacobian with MatLab Symbolic Math Toolbox) do not have an easily interpreted biological meaning they are useful in two ways. First, they provide mathematically rigorous bounds for both local asymptotic stability and Hopf bifurcation of the coexistence equilibrium. Second, these expressions give an indication of which parameters are important to the qualitative dynamics of the model. Via these expressions, we can see that, in the case of linear zooplankton loss, the only parameters which affect the unique coexistence equilibrium and its asymptotic dynamics are a, zooplankton loss rate, and c, phytoplankton carrying capacity.
Hopf bifurcation is defined as the birth of a limit cycle from an equilibrium where the equilibrium changes stability via a pair of purely imaginary eigenvalues. The bifurcation can be supercritical or subcritical, resulting in stable or unstable limit cycles <cit.>. In model (<ref>) with σ = 0 in the parameter region where Hopf bifurcation occurs we observe that these bifurcations occur around the values of bifurcation parameter a where p_* ≈ z_*, and act as a transitory state from high to low phytoplankton abundance (see figure <ref>(b)).
§.§ Linear zooplankton loss term: a special case
Suppose that nutrient loss/exchange rate s = 0, γ̃ = γ, and upon death, zooplankton instantaneously become nutrients. Then, system (<ref>) is closed (ie. ∀τ≥ 0, n(τ)+p(τ)+z(τ) = N_T = p(0)+z(0)+n(0)). In this case, we may rewrite the system as:
{dp/dτ = N_T-(p+z)/k̃+N_T-(p+z)p(1-p/c)-p^2/1+p^2z
dz/dτ = γ(p^2/1+p^2-a)z
dn/dτ = -N_T-(p+z)/k̃+N_T-(p+z)p(1-p/c)+(1-γ)p^2/1+p^2z +γ a z.
.
For this system, there are the following boundary equilibria: ℰ_0 = (0,0, N_T)^T, ℰ_N_T = (N_T,0,0)^T, and ℰ_c = (c,0, N_T-c)^T (which exists if and only if N_T ≥ c). Also, note that for this version of the model, the p=0 and z = 0 planes are invariant and if n = 0 (that is, N_T = p + z), then
dn/dτ = (1-γ)p^2/1+p^2z+γ a z ≥ 0.
Hence, the set B = {(p,z,n): 0 ≤ p+z ≤ N_T} is invariant (and indeed p+z > N_T ⇒ n < 0).
Thus, we need only consider solutions on this set. Additionally, because we may write n(τ) = N_T - p(τ) - z(τ), we need only consider the reduced system
{dp/dτ = N_T-(p+z)/k̃+N_T-(p+z)p(1-p/c)-p^2/1+p^2z = f_1(p,z)
dz/dτ = γ(p^2/1+p^2-a)z = f_2(p,z)
.
Now, define the zooplankton invasion number:
ℛ_1 := 1/a·N_T^2/1+N_T^2
If N_T < c and ℛ_1 < 1, then the zooplankton extinction equilibrium _N_T is globally asymptotically stable in B\{p = 0}. If N_T > c or ℛ_1 > 1, it is unstable.
First, note that the lines p = 0 and z = 0 are invariant. Then observe that the Jacobian of system (<ref>) evaluated at _N_T is
J(_N_T) = [ -N_T(c - N_T)/ck̃ *; 0 γ a (R_1 - 1) ],
from this it is clear that if N_T < c and ℛ_1 < 1, then ℰ_N_T is locally stable, and is unstable if either N_T > c or ℛ_1 > 1.
Now observe that for this version of the model, the community collapse equilibrium _0 is always unstable since
J(_0) =[ N_T/k̃+N_T 0; 0 -aγ ].
Note that
ℛ_1 = N_T^2/a(1 + N_T^2) < 1 ⇔N_T^2/1 + N_T^2 < a
⇔ p_* = √(a/1-a) > √(N_T^2(1+N_T^2)^-1/1 - N_T^2(1+N_T^2)^-1) = √(N_T^2) = N_T,
where p_* is the p component of any possible coexistence equilibrium.
Thus, when N_T < c and ℛ_1 < 1, there are only two equilibria in B: _0 and _N_T. Since the model is closed, all of our solutions are bounded for τ≥ 0. Thus, any solution contains [0,∞) in its domain and has a compact and non-empty ω-limit set.
Note that for
ϕ = 1/p^2, N_T < c, and ℛ_1 < 1, for any solution to (<ref>) (p,z)^T ∈ B\({p =0}∪{z = 0}):
∂/∂ p[ϕdp/dτ] = -1-p/c/p(k̃/(k̃+N_T-(p+z))^2) - N_T-(p+z)/k̃+N_T-(p+z)·1/p^2
-2p/(1+p^2)^2z < 0
∂/∂ z[ϕdz/dτ] = γ(p^2/1+p^2-a)1/p^2≤γ a/p^2(ℛ_1-1) < 0
since p^2/1+p^2 is an increasing function of p. Hence, when N_T < c and ℛ_1 < 1,
∇·(ϕ(p)f(p,z)) = ∂/∂ p[ϕdp/dτ] + ∂/∂ z[ϕdz/dτ] < 0 ∀ (p,z)^T ∈ B\({p =0}∪{z = 0}).
Thus, by Dulac's criterion, it follows that there are no closed orbits wholly contained in B\({p =0}∪{z = 0}).
Now, notice that in the invariant line N_T = n+p (ie. z = 0), dz/dτ = 0 and if p > 0, then dp/dτ > 0 when p < N_T < c. Similarly, on the invariant line N_T = n+z (p = 0), ℰ_0 attracts all solutions. Let L stand for the ω-limit set of some point (p_0,z_0,n_0)^T∈ B\({p =0}∪{z = 0}). Recall that there are two equilibia, _0 and _N_T, in B. Because _0 is a hyperbolic saddle-node it cannot belong to any heteroclinic cycle or homoclinic loop. Consequently, there are no heteroclinic cycles or homoclinic loops at all (Poincare–Bendixson theorem). Hence, L = {_N_T}. It follows that _N_T is globally asymptotically stable in B\{p = 0}.
The zooplankton extinction equilibrium ℰ_c = (c,0,N_T - c)^T exists if and only if N_T ≥ c. Further, if ℛ_0 < 1 and N_T > c, then it is locally asymptotically stable. If ℛ_0 > 1, it is unstable.
See appendix A.
§.§ Quadratic zooplankton loss (σ = 1)
Suppose that n,p,z > 0 and
dp/dτ = dz/dτ = dn/dτ = 0.
It follows that in the case of quadratic zooplankton loss, any possible coexistence equilibrium must satisfy
{
p_* = √(az_*/(1-az_*)),
n_* = s^-1(sθ - γ a z_*^2),
0 = n_*/(k̃ + n_*)p_* (1-p_*/c) - az_*^2,
which cannot be solved explicitly.
Because of this, we now make use of the following result from general persistence theory:
Theorem (Existence of coexistence equilibrium, <cit.>) Suppose that
* X is a closed, convex subset of a Banach Space,
* ϕ has a compact attractor, B, of bounded subsets in X,
* ρ is continuous and concave,
* ϕ is uniformly weakly persistent,
* ϕ(t,·) is compact for some t > 0.
Then, there exists an equilibrium x^* with ρ(x^*) > 0.
There exists at least one coexistence equilibrium in the system (<ref>) when σ = 1.
We first show that the system (<ref>) is dissipative.
Note that the planes where p = 0 and z = 0 are invariant, and when n = 0, dn/dτ > 0. Then, define
A_p = {(p,z,n)^T: p ≥ c, z> 0, n > 0}
and note that ∀ (p,z,n)^T ∈ A_p, dp/dτ < 0. Now define
A_z^(2) = {(p,z,n)^T: 0 < p≤ c, z > ℛ_0, n > 0}
A_n = {(p,z,n)^T: 0 < p ≤ c, 0 < z ≤ℛ_0, n > n̂}
where n̂ = s^-1((1-γ)aℛ_0^2 + sθ).
First, note that ∀ (p,z,n)^T ∈ A_z^(2)
dz/dτ≤γ̃a(ℛ_0 - z)z < 0.
Next, observe that ∀ (p,z,n)^T ∈ A_n,
dn/dτ ≤ (1-γ)c^2/1+c^2ℛ_0 + s(θ - n)
= (1-γ)aℛ_0^2 + s(θ - n)
< 0.
Hence ∀ (p,z,n)^T such that p_0,z_0,n_0 ≥ 0,
lim_τ→∞(p(τ),z(τ),n(τ))^T ∈ B = {(p,z,n): 0 ≤ p ≤ c, 0 ≤ z ≤ℛ_0, 0 ≤ n ≤n̂}.
Thus, system (<ref>) is dissipative.
Note that when σ = 1, the Jacobian evaluated at equilibrium _np takes the form
J(ℰ_np) =
[ A 0; * -s ],
where
A =
[ -θ/(k+θ) -aℛ_0; 0 γ̃aℛ_0 ].
Thus if σ = 1, _np is always a hyperbolic saddle node.
Next we show that the system is robustly uniformly ρ-persistent, where ρ = min{n(τ),p(τ),z(τ)}.
We may consider the equilibria of the system as trivial periodic solutions. Applying corollary 4.7 of <cit.>, we see, via proposition 4.1 and theorem 3.2 of <cit.>, that phytoplankton are robustly persistent if the eigenvalue in the p direction of J(_n), λ_p^_n > 0. Similarly, since zooplankton depend upon phytoplankton as a resource, we need only show the eigenvalue in the z direction of J(_np), λ_z^_np > 0 provided the first condition holds. Note that
J(_n) = [ θ/k̃ + θ 0 0; ; 0 -γ̃a(1-σ) 0; ; -θ/k̃ + θ 0 -s ].
From this it is clear that λ_p^_n = θ(k̃ + θ)^-1 > 0.
Next, notice that boundary equilibrium _np has λ_z^_np = ηγβ^-1a(ℛ_0 - (1-σ)). From this, it is clear that this eigenvalue is strictly positive if σ = 1 or ℛ_0 > 1 and σ = 0.
Hence, all of the conditions for the existence of a coexistence equilibrium are either met or exceeded.
For arbitrary coexistence equilibrium _* = (p_*,z_*n_*)^T satisfying (<ref>), let
{ν_0 = aγ̃z_*[(ψ_3+ψ_4)(ψ_5 + s)+ψ_5(ψ_1 + ψ_2(1-γ))+ψ_2ψ_5γ z_* + ψ_3sz_*]
- aγ̃z_*[(ψ_1+ψ_2)(ψ_5+s) + ψ_5(ψ_3(1-γ) + ψ_4) + ψ_3ψ_5γ z_* + ψ_2sz_*]
ν_1 = (ψ_3 + ψ_4)(aγ̃z_*+ψ_5+s) + aγ̃z_*(ψ_5 + s) + ψ_1ψ_5 + ψ_2ψ_5(1-γ)
-((ψ_1+ψ_2)(aγ̃z_*+ψ_5+s) + aγ̃z_*^2ψ_2 +ψ_3(1-γ)+ψ_4 )
ν_2 = ψ_3 + ψ_4 + aγ̃z_* + s -(ψ_1 + ψ_2)
ν̂_0 = ν_0 + aγ̃z_*[(ψ_1+ψ_2)(ψ_5+s) + ψ_5(ψ_3(1-γ) + ψ_4) + ψ_3ψ_5γ z_* + ψ_2sz_*],
ν̂_1 = ν_1 + ((ψ_1+ψ_2)(aγ̃z_*+ψ_5+s) + aγ̃z_*^2ψ_2 +ψ_3(1-γ)+ψ_4 ),
ν̂_2 = ν_2 + ψ_1 + ψ_2, 𝒞_2^1 = ν̂_2 - ν_2 /ν̂_2, 𝒞_2^2 = ν̂_0 + ν̂_1(ν̂_2 -
ν_2)+ ν̂_2(ν̂_1 - ν_1)/ν̂_1ν̂_2 + (ν̂_1-ν_1)(ν̂_2-ν_2) + (ν̂_0 - ν_0),
𝒞_2^3 = ν̂_0 - ν_0 /ν̂_0, 𝒞_2 = max{𝒞_2^1,𝒞_2^2,𝒞_2^3},
.
where ψ_i are as defined in system (<ref>) and expressions were again derived from the characteristic polynomial of the Jacobian with MatLab Symbolic Math Toolbox.
_* is locally asymptotically stable if
𝒞_2 < 1
and has a simple Hopf bifurcation in a parameter of interest, α at point α_0, if
𝒞_2^1 < 𝒞_2^2 = 1
and
-ν_0'(α_0) + ν_2(α_0)ν_1'(α_0) + ν_1(α_0)ν_2'(α_0) ≠ 0
See appendix A.
As with Prop. <ref>, while it is difficult to directly interpret these quantities biologically, they indicate that many more parameters can affect the qualitative dynamics of the model.
In the case of a quadratic zooplankton loss term, studying these equations indicates that a much broader subset of the model parameters can affect both the number of coexistence equilibria and the asymptotic dynamics of each such equilibrium. Specifically, from the numerical generation of bifurcation diagrams (see GitHub <cit.>), we can see that the parameters which can affect both the stability and the number of equilibria are again a and c, as well as the nutrient loss/exchange rate, s; the intrinsic nutrient level, θ; the zooplankton assimilation rate, γ; and the nutrient uptake half-saturation constant, k̃. In addition, γ̃, which is γ scaled by the ratio of the maximum zooplankton grazing rate, η, to the phytoplankton response to light, β, can affect the asymptotic behavior of the system.
§ EXTENDING THRESHOLD ANALYSIS TO THE NPZD MODEL AND ECOLOGICAL IMPLICATIONS
Here, we investigate the analytical and numerical properties of the full NPZD system (<ref>) along with ecological implications of phytoplankton overpopulation during HABs and the effects of ecological disturbances on these dynamics.
Define phytoplankton persistence number
𝒫_0^p := 1/ϵ̃·θ/k̃+θ
If 𝒫_0^p > 1 then for the model (<ref>) the phytoplankton population is robustly uniformly ρ-persistent where ρ = min_τ p(τ).
See appendix A.
We see that for this model 𝒫_0^p is the average lifespan of phytoplankton scaled by a ratio of the intrinsic nutrient level of the system, with persistence being assured if ϵ̃ < θ/(k+θ). We particularly focus on its implications for the zooplankton extinction equilibrium as well as the conditions for zooplankton persistence and the existence of at least one coexistence equilibrium.
The model (<ref>) has two boundary equilibria, zooplankton extinction equilibrium ℰ_npd = (p̂,0,θ,d̂)^T, where
{p̂ = c(1-1/𝒫_0^p)
d̂ = ϵ̃ c/ψ(1-1/𝒫_0^p),
.
and community collapse equilibrium ℰ_n' = (0,0,θ,0)^T. Now define the zooplankton invasion number
ℛ_0^z := max{ℛ_0,1^z,ℛ_0,2^z,ℛ_0,3^z,ℛ_0,4^z},
where
{
b_1 = α̃ c(1-1/𝒫_0^p), b_2 = ck̃(𝒫_0^p-1)/(𝒫_0^p(k̃+θ))^2,
b_3 = c(1-1/𝒫_0^p)/𝒫_0^p(k̃+θ)(1-ϵ̃)+s
ℛ_0,1^z := c(1-1/𝒫_0^p)(ξ+c(1-1/𝒫_0^p))/α̃(1+c^2(1-1/𝒫_0^p)^2),
ℛ_0,2^z := ϵ̃(𝒫_0^p - 1)/b_3 + ψ
ℛ_0,3^z := ((b_3+ψ)(b_2+b_3+ψ) + ϵ̃(ϵ̃b_2 + ψ b_3))(𝒫_0^p-1)/(b_3 + ψ)(ϵ̃b_2+ψ b_3) + ϵ̃((𝒫_0^p-1)(b_2+b_3+ψ) + ψ(b_2+b_3))(𝒫_0^p-1)
ℛ_0,4^z := ℛ_0,3^z + ℛ_0^p/ℛ_0,3^zℛ_0^p+ 1.
These expressions are complex and so difficult to interpret directly, though we observe that they are each functions of 𝒫_0^p (as well as other model parameters).
For the model (<ref>), boundary equilibrium ℰ_npd exists if and only if 𝒫_0^p > 1. It is locally asympotically stable if ℛ_0^z < 1, and is unstable if ℛ_0^z > 1.
The existence of ℰ_npd if and only if 𝒫_0^p > 1 is clear from its form. Note that the Jacobian of model (<ref>) evaluated at ℰ_npd is
J(ℰ_npd) = [ ϵ̃(𝒫_0^p -1) -α̃ c(1-1/𝒫_0^p)/ξ + c(1-1/𝒫_0^p)ℛ_0,1^z ck̃(𝒫_0^p-1)/(𝒫_0^p)^2(k̃+θ)^2 0; 0 α̃γ̃c(1-1/𝒫_0^p)(ℛ_0,1^z-1) 0 0; ϵ̃(𝒫_0^p - 2) 0 -(c(1-1/𝒫_0^p)/𝒫_0^p(k̃+θ)(1-ϵ̃)+s) ψ; ϵ̃ (1-γ)α̃ c(1-1/𝒫_0^p)/ξ + c(1-1/𝒫_0^p)ℛ_0,1^z 0 -ψ ].
which has characteristic polynomial
χ_J(_npd)(λ) =
f(λ)· g(λ)
=(λ - b_1(ℛ_0,1^z-1))[λ^3 + (b_3 + ψ -ϵ̃(𝒫_0^p - 1))λ^2
+ (ϵ̃(b_2 - (𝒫_0^p-1)(b_2+b_3+ψ)) + ψ b_3)λ - ϵ̃ψ(b_2+ b_3)(𝒫_0^p - 1)].
The positivity of b_1,b_2,b_3 is assured by 𝒫_0^p > 1 and the form of 𝒫_0^p. The sign of the eigenvalue λ_1 = b_1(ℛ_0,1^z-1) is clear. Thus we turn to Hurwitz determinants to find conditions on the sign of the real parts of the roots of g(λ):
H_1 = b_3 + ψ - ϵ̃(𝒫_0^p-1)
H_2 = H_1(ϵ̃(b_2 - (𝒫_0^p-1)(b_2+b_3+ψ)) + ψ b_3) + ϵ̃ψ(b_2+b_3)(𝒫_0^p-1)
H_3 = -ϵ̃ψ(b_2+ b_3)(𝒫_0^p - 1)H_2
Next note that for Hurwitz determinant H_i, 1 ≤ i ≤ 3, H_i > 0 if and only if R_0,i+1 < 1. The desired result follows.
Now define zooplankton persistence number
𝒫_0^z := min{𝒫_0^p,ℛ_0,1^z},
where
ℛ_0,1^z = c(1-1/𝒫_0^p)(ξ+c(1-1/𝒫_0^p))/α̃(1+c^2(1-1/𝒫_0^p)^2)
= p̂(ξ+p̂)/α̃(1 + p̂^2)
Much like the quadratic zooplankton loss term case of model (<ref>), an explicit expression for the coexistence equilibria of the extended model does not exist, however, any potential coexistence equilibrium must satisfy the following system of equations:
{
n_* = 1/2s(-n_b + √(n_b^2+4sk(ϵ̃γ p_* + sθ)))
z_* = 1+p_*^2/p_*(n_*/k̃+n_*(1-p_*/c) -ϵ̃)
d_* = 1/ψ((1-γ)p_*^2/(1+p_*^2)z_* + ϵ̃p_*)
0 = p_*^2/1+p_*^2 - az_* - α̃p_*/ξ + p_*
.
where
n_b = s(k̃-θ) - ϵ̃γ p_* + γ p_*(1-p_*/c).
The positivity of n_* and d_* is clear from their forms provided p_* > 0. Therefore we need to provide a condition such that there exists at least one p_* > 0 for which the resulting z_* > 0, which as we show below is 𝒫_0^z > 1.
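Since n_* and z_* are explicit in p_*, locating a coexistence equilibrium numerically reduces to a scalar root-finding problem on (0,c). A minimal sketch is given below; the parameter values and the bracketing grid are illustrative assumptions, and only the first bracketed root is returned.

import numpy as np
from scipy.optimize import brentq

def npzd_coexistence(q):
    """Numerically solve the equilibrium system for (p*, z*, n*, d*); None if no sign change."""
    k, c, a, s, th = q['k'], q['c'], q['a'], q['s'], q['theta']
    g, al, xi = q['gamma'], q['alpha'], q['xi']
    eps, psi = q['eps'], q['psi']

    def n_star(p):
        nb = s*(k - th) - eps*g*p + g*p*(1 - p/c)
        return (-nb + np.sqrt(nb**2 + 4*s*k*(eps*g*p + s*th))) / (2*s)

    def z_star(p):
        n = n_star(p)
        return (1 + p**2)/p * (n/(k + n)*(1 - p/c) - eps)

    def residual(p):                      # the remaining equilibrium condition
        return p**2/(1 + p**2) - a*z_star(p) - al*p/(xi + p)

    grid = np.linspace(1e-3, c - 1e-3, 400)
    vals = [residual(p) for p in grid]
    for x0, x1, f0, f1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if f0 * f1 < 0:
            p = brentq(residual, x0, x1)
            n, z = n_star(p), z_star(p)
            d = ((1 - g)*p**2/(1 + p**2)*z + eps*p) / psi
            return p, z, n, d
    return None

par = dict(k=0.5, c=3.0, a=0.3, s=0.1, theta=1.0, gamma=0.5,
           alpha=0.2, xi=1.0, psi=0.2, eps=0.05)     # illustrative values only
print(npzd_coexistence(par))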
For the model (<ref>), if 𝒫_0^z > 1, then zooplankton population is robustly uniformly ρ-persistent for ρ = min_τ z(τ) and there exists at least one coexistence equilibrium.
See appendix A.
We make four key observations about the persistence numbers of zooplankton and phytoplankton which are relevant to our simulations.
{lim_p̂→ c𝒫_0^z = lim_𝒫_0^p →∞𝒫_0^z = 1/α̃
lim_θ→∞𝒫_0^p = 1/ϵ̃, lim_θ→ 0𝒫_0^p = 0
.
and further that, for fixed 𝒫_0^p,
{lim_α̃→∞𝒫_0^z = lim_ξ→ 0𝒫_0^z = 0
lim_α̃→ 0𝒫_0^z = lim_ξ→∞𝒫_0^z = ∞.
The NPZD model also presents interesting complex bifurcation dynamics including forward hysteresis (see figure <ref>). In the context of population dynamics, forward hysteresis refers to the appearance of multiple local attractors when a threshold condition (usually a condition analogous to the basic reproduction number, ℛ_0, in disease dynamical systems models) is larger than one <cit.>. For our model specifically, the curve of coexistence equilibria bifurcates from the zooplankton extinction equilibrium when 𝒫_0^z > 1.
In (a)-(c), we observe that the curve of phytoplankton equilibria in particular follows a pattern of hysteresis where the transitory state between high and low phytoplankton abundance is a region of bi-stability with the model trajectory dependent on initial conditions. In this region of bi-stability, we note that the basin of attraction for the lower-abundance equilibrium is in a neighborhood where the initial phytoplankton and zooplankton populations are approximately equal (see GitHub <cit.>); panel (d) shows the sensitivity of 𝒫_0^z to variation in ξ, α̃. In Fig. <ref>(a), we observe that zooplankton will go (locally) extinct when the maximum harmful effect of phytoplankton is more than twice that of the maximum zooplankton grazing rate. In (b), we see that zooplankton will go (locally) extinct when the square root of the half-saturation constant for grazing is more than twice the half-saturation of the harmful effect, and in (c) that the phytoplankton population decreases as a function of 𝒫_0^z, reflecting the top-down regulation of the phytoplankton population by zooplankton predation. For the model (<ref>), Hopf bifurcation can also occur (see figure <ref>). Notice that, similarly to model (<ref>) (see figure <ref>(b)), the region of stable limit cycles is a transitory state between low and high phytoplankton abundance when γ̃ is sufficiently small.
We observe that our model is capable of capturing a wide range of plankton population dynamics as a function of 𝒫_0^z and 𝒫_0^p. To quantify this we define what we term the balance of the ecosystem,
ℬ := 1 - 𝒫_0^z/𝒫_0^p,
with the ideal balance being ℬ = 0 (see figures <ref>, <ref>), so that 𝒫_0^z = 𝒫_0^p, and with ℬ taking a positive (negative) value when phytoplankton (zooplankton) are favored. We observe in particular that our simulations suggest harmful algal blooms, which mathematically we define as a period of time where dp/dτ > 0 and dz/dτ < 0, occur approximately when ℬ > 0.5 for our choice of other model parameters. This definition is chosen because it indicates that, despite increasing abundance in the phytoplankton (prey) population, the zooplankton (predator) population still decreases. Moreover, zooplankton extinction occurs when ℬ approaches 1 (see figure <ref>(e)-(h) and supplementary figures <ref>-<ref>).
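For reference, 𝒫_0^p, 𝒫_0^z and ℬ are straightforward to evaluate for a given parameter set; the helper below is a minimal sketch that assumes 𝒫_0^p > 1 and uses illustrative parameter values.

def persistence_numbers(eps, k, theta, c, alpha, xi):
    """Return (P0p, P0z, B) for the NPZD model; assumes P0p > 1."""
    P0p = theta / (eps * (k + theta))                  # phytoplankton persistence number
    p_hat = c * (1 - 1 / P0p)                          # zooplankton-free phytoplankton level
    R01z = p_hat * (xi + p_hat) / (alpha * (1 + p_hat**2))
    P0z = min(P0p, R01z)                               # zooplankton persistence number
    return P0p, P0z, 1 - P0z / P0p                     # B = 1 - P0z/P0p

print(persistence_numbers(eps=0.05, k=0.5, theta=1.0, c=3.0, alpha=0.2, xi=1.0))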
Thus - as the intrinsic nutrient level of the ecosystem increases in the scenario of a eutrophication event - 𝒫_0^p will increase. And if we are in the region of the parameter space representing an unhealthy ecosystem, so will the size of the region where robust persistence of zooplankton is not assured. Similarly in the scenario of a re-oligotrophication event as the intrinsic nutrient level of the ecosystem decreases the balance of the ecosystem will change to (temporarily) favor zooplankton - reflecting the key biological reality we seek to capture with our model.
§.§ Incorporating seasonality and ecological disturbances
Eutrophication (re-oligotrophication) events may occur at different times, have different peak (trough) nutrient levels, and have different durations; here we incorporate such disturbances explicitly, together with seasonal forcing. For a given event, consider the disturbance function θ_d(τ), where
θ_d(τ) = θ ± (M_θ / max_τ g(ω,τ)) g(ω,τ),
g(ω,τ) = (1/ω^2) τ e^(-τ/ω),
with M_θ as the maximum intrinsic nutrient level increase/decrease relative to baseline level θ and ω, a scale parameter, which controls the duration of the disturbance. We note that this is a modified gamma distribution which is chosen for its flexibility in representing different disturbance curves. As we illustrate in Fig. <ref>, the duration of the disturbance increases with ω.
Then we define
θ(τ) = θ,        for τ < τ_*,
θ(τ) = θ_d(τ),   for τ ≥ τ_*,
where τ_* is the disturbance start time.
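The disturbance term can be implemented in a few lines; the NumPy sketch below is illustrative only. We read g(ω,τ) = τ e^(-τ/ω)/ω², whose maximum over τ is e^(-1)/ω, attained at τ = ω, and we measure the argument of g from the disturbance start time τ_*, which we take to be the intended reading; the sign is + for a eutrophication event and - for a re-oligotrophication event.

import numpy as np

def g(omega, t):
    # Modified gamma-type kernel: g(omega, t) = t * exp(-t / omega) / omega**2.
    return t * np.exp(-t / omega) / omega**2

def theta_of_tau(tau, theta_base, M_theta, omega, tau_star, sign=+1):
    # Intrinsic nutrient level with a single disturbance starting at tau_star.
    # Before tau_star the level is the baseline theta; afterwards it is
    # theta +/- M_theta * g(omega, tau - tau_star) / max_t g(omega, t),
    # so that the peak deviation from baseline equals M_theta.
    tau = np.asarray(tau, dtype=float)
    g_max = np.exp(-1.0) / omega                 # maximum of g, attained at t = omega
    t_rel = np.clip(tau - tau_star, 0.0, None)
    disturbed = theta_base + sign * M_theta * g(omega, t_rel) / g_max
    return np.where(tau < tau_star, theta_base, disturbed)

# Example: a eutrophication event starting at tau_* = 50 with peak increase 2*theta.
tau_grid = np.linspace(0.0, 400.0, 2001)
theta_curve = theta_of_tau(tau_grid, theta_base=1.0, M_theta=2.0, omega=30.0, tau_star=50.0)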
To incorporate seasonality of light availability we introduce the forcing term f_s(τ) into our re-scaled model (<ref>):
f_s(τ) = 1 + (1/2) sin(2πτ/100),
γ̃_s(τ) = γ̃ / f_s(τ),
which is the same seasonality function considered in <cit.> and <cit.> (chosen so that the mean value of f_s is one over a complete period).
Finally, putting everything together we arrive at the model,
dp/dτ = f_s(τ) n/(k̃ + n) p(1 - p/c) - p^2/(1 + p^2) z - ϵ̃p
dz/dτ = γ̃_s(τ)[p^2/(1 + p^2) - α̃p/(ξ+p) - az]z
dn/dτ = -n/(k̃ + n) p(1 - p/c) + s(θ(τ) - n) + ψ d
dd/dτ = (1 - γ) p^2/(1 + p^2) z + ϵ̃p - ψ d.
A table with a detailed summary of all model terms for the system (<ref>) is available in the appendices (see table <ref>).
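For readers who want to reproduce the qualitative behaviour of system (<ref>), the sketch below integrates the seasonally forced NPZD model with SciPy. It reuses theta_of_tau from the sketch above, all parameter values are placeholders rather than the values used in our figures, and we read the uptake term n/k̃+n as the Michaelis-Menten form n/(k̃+n); the archived code (see the Data Availability Statement) remains the reference implementation.

import numpy as np
from scipy.integrate import solve_ivp

def f_s(tau):
    # Seasonal light forcing with mean one over a full period of 100 time units.
    return 1.0 + 0.5 * np.sin(2.0 * np.pi * tau / 100.0)

def npzd_rhs(tau, y, prm):
    p, z, n, d = y
    graze = p**2 / (1.0 + p**2)                              # Holling type-III grazing
    uptake = n / (prm["k"] + n) * p * (1.0 - p / prm["c"])   # Michaelis-Menten uptake
    theta = theta_of_tau(tau, prm["theta"], prm["M_theta"],
                         prm["omega"], prm["tau_star"], prm["sign"])
    gamma_s = prm["gamma_t"] / f_s(tau)                      # seasonally rescaled gamma-tilde
    dp = f_s(tau) * uptake - graze * z - prm["eps"] * p
    dz = gamma_s * (graze - prm["alpha"] * p / (prm["xi"] + p) - prm["a"] * z) * z
    dn = -uptake + prm["s"] * (theta - n) + prm["psi"] * d
    dd = (1.0 - prm["gamma"]) * graze * z + prm["eps"] * p - prm["psi"] * d
    return [dp, dz, dn, dd]

prm = dict(k=0.5, c=5.0, eps=0.1, gamma_t=1.0, alpha=0.3, xi=1.0, a=0.1, s=0.5,
           psi=0.2, gamma=0.7, theta=1.0, M_theta=2.0, omega=30.0,
           tau_star=200.0, sign=+1)                          # placeholder values only
sol = solve_ivp(npzd_rhs, (0.0, 1000.0), [0.5, 0.5, 1.0, 0.1],
                args=(prm,), rtol=1e-8, atol=1e-10, dense_output=True)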
§.§ Ecological implications
𝒫_0^z and 𝒫_0^p as measures of ecosystem health
Eutrophication
We further observe that the further out of ideal balance an ecosystem is, the more likely zooplankton will be at risk of (local) extinction during a disturbance (see figure <ref>), with an already unhealthy ecosystem which favors phytoplankton being more vulnerable to (additional) eutrophication. Thus, when ecological factors external to the particular eutrophication event have changed the composition of the existing phytoplankton assemblage, there is an elevated risk of such extinction and of subsequent knock-on effects from total deregulation of phytoplankton populations, with potential far-reaching upper trophic-level consequences.
Reoligotrophication
In contrast to eutrophication, even a previously healthy ecosystem may be at risk of plankton community collapse provided a prolonged period of re-oligotrophication (see figure <ref>). Model simulations suggest that even a single year of nutrient depletion is sufficient to cause this collapse. This may explain the relatively rapid onset of such events in world rivers described by <cit.>.
§ DISCUSSION
Starting from a base NPZ modeling framework, we incorporated the harmful effects of phytoplankton overpopulation on zooplankton during HABs, representing a crucial next step in HAB modeling <cit.>, and split the nutrient compartment to formulate an NPZD model. We then mathematically analyzed the NPZ system upon which this new model is based, deriving global stability conditions for the zooplankton extinction equilibrium in terms of the zooplankton invasion number for a special case of the linear mortality model, as well as local stability, Hopf bifurcation, and existence conditions for the coexistence equilibria of the NPZ model with both quadratic and linear zooplankton mortality. We provided one- and two-parameter bifurcation diagrams for both models, showing forward hysteresis with bi-stable dynamics and Hopf bifurcation in the quadratic loss case depending on parameter values, and either Hopf bifurcation or transcritical bifurcation in the linear loss case. Finally, we extended the threshold analysis to the NPZD model, which displays both forward hysteresis with bi-stability and Hopf bifurcation, and examined ecological implications after incorporating seasonality and ecological disturbances in the form of eutrophication and re-oligotrophication events. Ultimately we quantified ecosystem health in terms of the relative values of the robust persistence thresholds for phytoplankton and zooplankton and found that (i) ecosystems sufficiently favoring phytoplankton, as measured by the relative values of the plankton persistence numbers, are vulnerable to both HABs and (local) zooplankton extinction, and (ii) even balanced ecosystems are extremely sensitive to nutrient depletion over relatively short time-scales.
Modeling can provide crucial insights into functional understandings of population dynamics and interactions between ecosystem species and functional groups. Phytoplankton occupy niches in part based upon water temperature and nutrient composition <cit.>.
Thus phytoplankton niche loss due to anthropogenically driven rising water temperatures may cause increased competition in phytoplankton populations and shifts in phytoplankton assemblage composition. We found that the increasing frequency of harmful algal blooms may be explained, at least in part, by these shifting compositions of phytoplankton assemblages towards types of phytoplankton with more severe harmful effects due to overpopulation as the overall nutrient richness and temperature increase (<cit.>), represented in our model by a decrease in 𝒫_0^z and a shift in balance towards phytoplankton. This effect may be exacerbated by eutrophication events: an already unhealthy ecosystem risks (local) zooplankton extinction provided a eutrophication event of sufficient severity occurs (figure <ref>), representing a tipping point or critical transition <cit.>.
In contrast to eutrophication, we found that even a previously healthy ecosystem is extremely vulnerable to prolonged re-oligotrophication. Model simulations suggest that only one year of nutrient depletion relative to typical intrinsic levels could cause plankton population collapse. Importantly, original conditions were not recovered by simply reversing the course of nutrient flow, showing clear evidence of a tipping point. This mirrors the perspective of <cit.> and may explain the relatively rapid onset of such events in comparison to eutrophication events (see figure <ref>).
Plankton, by definition, cannot swim against large-scale currents, and of course, live in many ecosystems which are affected by strong currents <cit.>. Thus, in many settings such as a river, the interplay between biological and physical dynamics is an important factor in the occurrence and severity of harmful algal blooms. However, a crucial step to understanding the population dynamics resulting from these complex interactions is to first understand bloom dynamics in a simpler physical setting, as we have here. The interplay between physical and biological factors is typically described via a one-way coupling to a diffusion-advection equation with a system of ODEs such as model (<ref>) <cit.>. In this framework, the NPZD model can be thought of as the biological dynamics at physical location x and time t. Thus, future work should incorporate fluid dynamics into our modeling framework to understand the interplay between physical and biological factors in the context of ecological disturbances and HABs. Finally, given the potential utility of 𝒫_0^p and particularly 𝒫_0^z as ecosystem monitoring tools - our model should be parameterized to specific ecosystems and specific eutrophication and re-oligotrophication events.
§ ACKNOWLEDGMENTS
The authors would like to thank Zachary Topor for informative discussions of plankton ecology. J.C.M is supported by a Zuckerman Foundation STEM leadership postdoctoral scholarship. During the main portion of this work, J.C.M. and H.G. were partially supported by a U.S. NSF RAPID grant (no. DMS-2028728) and NSF grant (no. DMS-1951759), and H.G. by a grant from the Simons Foundation/SFARI 638193.
§ DATA AVAILABILITY STATEMENT
All software for this project is available under a CC-BY-NC 4.0 license and is archived in the Zenodo repository https://doi.org/10.5281/zenodo.7650341https://doi.org/10.5281/zenodo.7650341 in citeable format.
§ APPENDIX
§.§ A. Proofs not appearing in the main text
(Prop. <ref>)
Consider the Jacobian matrix evaluated at this point, which is
J(_n) = [ θ/k̃ + θ 0 0; ; 0 -γ̃a(1-σ) 0; ; -θ/k̃ + θ 0 -s ].
From the form of (<ref>) it is readily apparent that _n is always unstable.
(Prop. <ref>)
Note that when σ = 0, the Jacobian evaluated at equilibrium _np takes the form of
J(_np) =
[ A 0; - s ],
where
A =
[ -θ/k+θ -aℛ_0; ; 0 γ̃a(ℛ_0-1) ].
(Prop. <ref>) First, note that given system (<ref>) and σ = 0 the Jacobian matrix evaluated at _* is,
J(_*) =
[ ψ_1 + ψ_2 - ψ_3 - ψ_4 -a ψ_5; ; γ̃z_*(ψ_3 - ψ_2) 0 0; ; -ψ_1 + (1-γ)(ψ_3-ψ_2) + ψ_4 a(1-γ) -(ψ_5+s) ]
It follows that we have characteristic polynomial (assisted by MatLab Symbolic Math Toolbox):
χ_J(_*)(λ) = λ^3 + ξ_2λ^2 + ξ_1λ + ξ_0.
Thus we have Hurwitz determinants, H_i:
H_1 = ξ_2
H_2 = ξ_2ξ_1 - ξ_0
H_3 = ξ_0H_2
The generalized Routh-Hurwitz criterion indicates that _* will be locally asymptotically stable if and only if H_i > 0 ∀ i. This is equivalent to condition (<ref>). Liu <cit.> also indicates that a simple Hopf bifurcation occurs if
H_1 > 0, H_2|_α_0 = 0, and d/dαH_2|_α_0 ≠ 0;
thus a necessary condition for a simple Hopf bifurcation to occur is ξ_0 = ξ_1ξ_2. Note that this condition is equivalent to conditions (<ref>), (<ref>).
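For convenience, the stability check used in this and the following proof can be carried out numerically once the coefficients of the cubic characteristic polynomial are known; the helper below is an illustrative sketch and not part of the archived analysis code.

def hurwitz_cubic(xi2, xi1, xi0, tol=1e-12):
    # For chi(lam) = lam^3 + xi2*lam^2 + xi1*lam + xi0, the Hurwitz determinants are
    # H1 = xi2, H2 = xi2*xi1 - xi0, H3 = xi0*H2; local asymptotic stability holds
    # iff all three are positive, and H1 > 0 with H2 = 0 (i.e. xi0 = xi1*xi2) flags
    # a candidate simple Hopf point.
    H1 = xi2
    H2 = xi2 * xi1 - xi0
    H3 = xi0 * H2
    return {"H": (H1, H2, H3),
            "stable": H1 > 0 and H2 > 0 and H3 > 0,
            "hopf_candidate": H1 > 0 and abs(H2) < tol}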
(Prop. <ref>)
Existence is clear from the form of _c. Note that
J(_c) = [ -(N_T-c)/k̃ + N_T - c *; 0 γ a (ℛ_0 - 1) ].
(Prop. <ref>)
The proof is similar to that of Prop. <ref> with the difference in detail due only to the Jacobian in the quadratic loss case being:
J(_*) =
[ ψ_1 + ψ_2 - ψ_3 - ψ_4 -az_* ψ_5; ; γ̃z_*(ψ_3 - ψ_2) -aγ̃z_* 0; ; -ψ_1 + (1-γ)(ψ_3-ψ_2) + ψ_4 az_*(1-γ) -(ψ_5 + s) ]
and so our characteristic polynomial is
χ_J(_*)(λ) = λ^3 + ν_2λ^2 + ν_1λ + ν_0.
(Prop. <ref>)
The proof of dissipativity of model (<ref>) is similar to the corresponding proof for model (<ref>) (proposition <ref>). Observe that the Jacobian evaluated at this equilibrium is
J(ℰ_n') = [ ϵ̃(𝒫_0^p-1) 0 0 0; ; 0 0 0 0; ; -θ/k̃ + θ 0 -s ψ; ; ϵ̃ 0 0 -ψ ]
which is block triangular and has eigenvalue ϵ̃(𝒫_0^p-1) in the p direction. Thus via <cit.> as in our earlier proofs the desired result follows.
(Prop. <ref>)
First, we observe that since 𝒫_0^z > 1 phytoplankton, upon which zooplankton depend, are robustly persistent, and the zooplankton equilibrium ℰ_npd exists. Next, notice that since a necessary condition for 𝒫_0^z > 1 is ℛ_0,1^z > 1 it follows that the zooplankton extinction equilibrium is unstable. Next, observe that the eigenvalue in the z direction for this equilibrium is
λ_z^ℰ_npd = b_1(ℛ_0,1^z-1),
(see equation (<ref>)), which is positive whenever 𝒫_0^z > 1. Thus as previously (by the work of <cit.>) zooplankton are robustly persistent, and it follows that there exists at least one coexistence equilibrium.
§.§ B. Supplementary tables
§.§ C. Supplementary figures
|
http://arxiv.org/abs/2307.04172v1 | 20230709133825 | Can Generative Large Language Models Perform ASR Error Correction? | [
"Rao Ma",
"Mengjie Qian",
"Potsawee Manakul",
"Mark Gales",
"Kate Knill"
] | cs.CL | [
"cs.CL",
"cs.SD",
"eess.AS"
] |
Can Generative Large Language Models Perform ASR Error Correction?
Rao Ma, Mengjie Qian, Potsawee Manakul, Mark Gales, Kate Knill
===================================================================
ASR error correction continues to serve as an important part of post-processing for speech recognition systems. Traditionally, these models are trained in a supervised manner using the decoding results of the underlying ASR system and the reference text. This approach is computationally intensive and the model needs to be re-trained when switching the underlying ASR model. Recent years have seen the development of large language models and their ability to perform natural language processing tasks in a zero-shot manner. In this paper, we take ChatGPT as an example to examine its ability to perform ASR error correction in the zero-shot or 1-shot settings. We use the ASR N-best list as model input and propose unconstrained error correction and N-best constrained error correction methods. Results on a Conformer-Transducer model and the pre-trained Whisper model show that we can largely improve the ASR system performance with error correction using the powerful ChatGPT model.
ASR error correction, generative model, large language model, speech recognition, zero-shot
§ INTRODUCTION
Automatic speech recognition (ASR) systems aim to transcribe human speech into readable text and are the key component for human-computer interaction <cit.>. In recent years, significant advancements have been made in this area. End-to-end (E2E) systems such as LAS or RNN-T are effective at modelling long context within the utterance and show superior performance compared to the HMM-based counterparts <cit.>. The training of ASR systems requires the availability of high-quality transcribed speech data, which can be costly to obtain.
In general, the training set of the publicly available corpus contains thousands of hours of annotated data. In contrast, the recently released ASR model,
Whisper <cit.>, is pre-trained on 680,000 hours of weakly supervised data collected from the Internet. Once published, Whisper gained extensive attention from both academia and industry.
The decoder part of an RNN-T or a LAS model acts as a language model that estimates the probability of the generated word sequence <cit.>. It learns from the labelled reference text and is jointly trained with the acoustic encoder. Due to the limited availability of training speech data, ASR systems struggle to generate rare words that have low frequency in the training corpus. Compared to speech data, large quantities of text data covering a wide range of domains are much easier to collect and process. Therefore, text-based methods have been explored to improve the performance of speech recognition systems.
Among these, ASR error correction, which automatically identifies errors within the ASR hypothesis and outputs the corrected transcription, is widely used <cit.>.
The development of the error correction model follows the trend of Natural Language Processing (NLP) technology. Early models were rule-based systems, which required carefully designed features and human expertise <cit.>. With the emergence of recurrent networks and attention mechanisms, models with the E2E architecture became mainstream later.
These models usually adopt a similar structure where the bidirectional encoder takes the ASR transcription as input and the reference text is used as the training target.
This approach has shown promising performance on diverse datasets for ASR models of different architectures <cit.>.
In the past few years, large-scale pre-trained language models have become available; they are generally trained on multi-domain text data several orders of magnitude larger than that used by prevailing ASR systems. For instance, BERT is pre-trained on 3,300M words <cit.> and T5 is trained on 750GB of text <cit.>.
Previous works <cit.> developed methods to build an ASR error correction model based on the powerful T5 model. By fine-tuning from the pre-trained NLP model, implicit knowledge learned from huge amounts of text data can be effectively transferred to the target error correction task.
Results indicate the importance of adopting the ASR N-best list rather than the top one hypothesis as model input for accessing richer context in the correction process.
Traditional error correction models are trained in a supervised fashion to effectively learn the error patterns made by the ASR system. The training process requires first decoding the ASR model on large amounts of speech data, and then using the erroneous hypotheses to train the correction model. These two stages can be computationally intensive to adopt in practice. Additionally, the error correction model is usually bound to a specific ASR system for a specific domain. Therefore, when we switch the underlying ASR system or apply it to a new domain, the corresponding error correction model can be less effective and needs to be re-trained. To address the above issues, we develop approaches to perform zero-shot or few-shot ASR error correction within the scope of this paper. The proposed methods are training-free and enable plug-and-play support to an existing ASR system.
Generative large language models (LLMs) such as ChatGPT have demonstrated remarkable performance of language understanding on text processing tasks <cit.>. In our work, we examine its performance in identifying and correcting errors made by the ASR system. In the experimental section, several prompts and both unconstrained and constrained generation methods are compared on standard speech recognition datasets. Results show that for both a Transducer-based ASR system and the pre-trained Whisper model, ChatGPT shows great potential in performing ASR error correction.
§ BACKGROUND
Error correction models aim to fix errors in the ASR transcription and are an integral part of ASR post-processing.
A standard error correction model adopts an E2E structure, taking the ASR transcriptions as the model input and generating the corrected sentence.
Several model variants incorporating additional inputs have been proposed.
<cit.> proposes an N-best T5 error correction model that is fine-tuned from a pre-trained T5 model. It leverages the N-best ASR hypotheses as model input and demonstrates significant performance gain over the model using the 1-best input. It also proposes an N-best constrained decoding approach in error correction, which uses the combined scores of the ASR model and the T5 model to find the best hypothesis in the N-best list.
There has been rapid growth in current LLM literature, and larger and better LLMs are constantly released.
Recently, as LLMs have been scaled up in size, pre-trained on increasingly more data, and further fine-tuned to follow instructions, they are capable of performing several NLP tasks in a zero-shot manner <cit.>. For example, LLMs such as ChatGPT have been applied to summary assessment <cit.>, and grammatical error correction <cit.>. However, their inherent ability to perform ASR post-processing tasks, such as ASR error correction, has been less explored. In this work, we follow <cit.> to use the ASR N-best list as input to the error correction model while using ChatGPT rather than T5 to perform the task.
§ ASR ERROR CORRECTION WITH LLM
In this section, we introduce our methods of utilising generative large language models for zero-shot or few-shot error correction. Two types of tasks: unconstrained error correction and N-best constrained error correction are discussed.
§.§ Unconstrained Error Correction
In the unconstrained error correction (uncon) setting, we ask ChatGPT to directly output the corrected hypothesis without adding further explanation. Since ChatGPT has no prior knowledge about the error patterns of the ASR system and no access to the original utterance, this task can be relatively difficult to perform. Therefore, instead of the 1-best ASR transcription, we input the N-best list obtained from the beam search decoding of the ASR model to ChatGPT. Hypotheses from the N-best list can act as hints to help the model better detect and correct the errors <cit.>. In the ablation study, we show that using a reasonable number of N is important for the model to achieve good performance. When only the top one ASR hypothesis is used as input, ChatGPT yields much worse performance than the proposed method.
The prompt designed for the zero-shot uncon setting is illustrated in Figure <ref>. In the designed prompt, all the hypotheses are sorted by the ASR posterior score. Furthermore, tags like and are used to surround each N-best hypothesis. Other input formats such as using numbers rather than tags or using plain sentences without the explicitly specified order are also examined and show degraded performance to our selected prompt.
Considering the complexity of this task, we additionally experiment with the 1-shot setting to perform in-context learning.
Here, we give an example for ChatGPT to refer to before conducting error correction (shown in orange colour in Figure <ref>). This example is selected from the decoding result of Conformer-Transducer on the dev_other set of LibriSpeech.
By showing both input and the desired output in the prompt, we hope to remind ChatGPT to match the sentence length of the given hypotheses and only make edits to the detected errors.
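To make the input format concrete, the sketch below assembles such a prompt from an N-best list; the instruction wording, the tag names and the optional in-context example are illustrative stand-ins rather than the exact prompts of Figure <ref>, and the hypotheses are assumed to be already sorted by ASR posterior score.

def build_uncon_prompt(nbest, example=None):
    # Wrap each hypothesis in numbered tags, keep the ASR ranking order, and ask the
    # model to output only the corrected transcription without any explanation.
    def render(hyps):
        return "\n".join(f"<hypothesis{i}> {h} </hypothesis{i}>"
                         for i, h in enumerate(hyps, start=1))
    instruction = ("Below are the N-best hypotheses from a speech recognition system. "
                   "Output the corrected transcription only, with no explanation.")
    parts = [instruction]
    if example is not None:                       # optional 1-shot in-context example
        example_nbest, example_answer = example
        parts += ["Example input:", render(example_nbest),
                  "Example output:", example_answer]
    parts += ["Input:", render(nbest), "Output:"]
    return "\n\n".join(parts)

# The resulting string is sent as a single user message to the gpt-3.5-turbo
# chat-completion endpoint used in our experiments.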
§.§ N-best Constrained Error Correction
In the above section, we perform standard ASR error correction to generate the corrected transcription based on the information from the given hypotheses. Results in <cit.> suggest that constraining the decoding space to the given N-best list leads to performance gain in some cases. In the following, we design two methods to constrain the output of ChatGPT to be a hypothesis within the given N-best list, namely the selective approach and the closest mapping.
§.§.§ Selective Approach
With the selective approach (select), ChatGPT is asked to select the most likely ASR transcription from all the candidates rather than generate one from scratch. All the input sentences are listed as , and ChatGPT is asked to return the selected option in the format of . This method is similar to language model rescoring to some extent, however, it performs the selection in one go. More importantly, ChatGPT sees all the candidates before deciding on the best one. This is different from the rescoring process where language model scores are generated individually for each of the N-best hypotheses without comparing the similarity and correlation between each other.
§.§.§ Closest Mapping
The closest mapping method (closest) is based on the assumption that when ChatGPT performs unconstrained error correction, it first selects the best hypothesis from the given N-best list and makes modifications based on this sentence to yield the final output. Therefore, we hope to find this “closest match” in a reverse process by finding the hypothesis within the ASR N-best list that has the smallest Levenshtein distance to the ChatGPT unconstrained generation result. For instance, for the zero-shot uncon example in Figure <ref>, the Levenshtein distance of the ChatGPT output to the 3-best ASR hypotheses is 1, 0, 1 respectively. Therefore, the second hypothesis will be selected as the corrected result for this utterance.
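A compact sketch of the closest-mapping step is given below. It uses a standard word-level Levenshtein distance and, as an illustrative choice, breaks ties in favour of the higher-ranked ASR hypothesis.

def levenshtein(a, b):
    # Word-level edit distance between token lists a and b (dynamic programming).
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, start=1):
        cur = [i] + [0] * len(b)
        for j, wb in enumerate(b, start=1):
            cur[j] = min(prev[j] + 1,                 # deletion
                         cur[j - 1] + 1,              # insertion
                         prev[j - 1] + (wa != wb))    # substitution (0 if words match)
        prev = cur
    return prev[-1]

def closest_mapping(llm_output, nbest):
    # Return the ASR hypothesis with the smallest word-level Levenshtein distance to
    # the unconstrained correction; ties go to the higher-ranked hypothesis.
    out = llm_output.split()
    distances = [levenshtein(out, hyp.split()) for hyp in nbest]
    return nbest[distances.index(min(distances))]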
§ EXPERIMENTS
§.§ Setup
We conduct experiments on ChatGPT (gpt-3.5-turbo-0613) to study its performance on error correction for two ASR models.
A novel Conformer-Transducer <cit.> model containing 12 encoder layers is utilised. The model was trained on 960 hours LibriSpeech data with SpecAugment <cit.> and speed perturbation applied, following the ESPnet recipe <cit.>. The other ASR model studied is the Whisper <cit.> small.en model. In decoding, we follow <cit.> to suppress the probability of the most common punctuation. Each ASR model is decoded with a beam size of 10 that generates a 10-best list as a byproduct at inference. If not stated otherwise, the top five hypotheses are used as input to ChatGPT, i.e. the size of the input N-best list is 5. The effect of adopting different N is studied in the ablation experiment.
We apply lowercase representation to the ASR N-best list without performing other text processing steps. At the evaluation stage, we run the text normalisation scripts from the Whisper project on both the ASR reference and the hypothesis text before calculating WER results.
The proposed approaches are evaluated on three public datasets, namely LibriSpeech <cit.>, TED-LIUM3 <cit.>, and Artie bias corpus <cit.>. LibriSpeech is an audiobook-based English speech corpus, TED-LIUM3 is an audio dataset collected from TED talks, and Artie bias corpus is a subset of the Common Voice dataset <cit.> which is also read speech. The details of the datasets are presented in Table <ref>. We undertook a comparative analysis between ASR error correction using the generative LLM ChatGPT and a standard error correction model that adopts an E2E structure. To be specific, we trained an N-best T5 error correction model for the ASR system, as described in Section <ref>. The N-best T5 model was fine-tuned on the 10-best list of the Conformer-Transducer ASR model decoded on the 960 hours LibriSpeech training set.
§.§ Experiments on Conformer-Transducer
In Table <ref>, we study the behaviour of ChatGPT on ASR error correction when the Conformer-Transducer model is used as the base system. Results from the fine-tuned T5 error correction model are listed for comparison. Since both the ASR system and the T5 error correction model are trained on 960 hours of LibriSpeech training data, LibriSpeech can be considered as an in-domain dataset to the T5 model. In this case, the supervised trained N-best T5 model yields a performance gain of 10.9% (6.90 to 6.15) over the ASR baseline.
Error correction results using ChatGPT with different methods are presented; these require no model training and are therefore more efficient. In the zero-shot setting, both the selective approach and closest mapping perform better than unconstrained generation, which is in line with the T5 model results where constrained decoding performs better. Moreover, 0-shot closest, which finds the closest match to the output corrected hypothesis in the given N-best list, performs better than asking ChatGPT to directly select the best one from the N-best list. The unconstrained error correction results improve considerably when we switch to the 1-shot uncon prompt (6.64 to 6.29), indicating that ChatGPT gains a better understanding of the task by referring to the given example. When we further apply the closest mapping in the 1-shot setting, WER on the test set is reduced to 6.24, which is comparable to the T5 model performance.
TED-LIUM3 can be considered as an out-of-domain dataset for both the ASR model and the trained T5 error correction model. Therefore, the ASR system shows high error rates on the test set while the T5 model gives 11.3% WERR by performing error correction. Results from the ChatGPT-based methods show significant performance improvement. The 1-shot uncon approach largely outperforms the ASR baseline by 25.1% (13.53 to 10.13). The result is even better than the oracle WER of the 5-best list output by the ASR model. As the upper bound for the constrained decoding methods is the 5-best oracle WER, the 1-shot closest shows worse performance on the test set. The results on both datasets suggest that ChatGPT is effective at detecting errors in the given ASR hypotheses and generating the corrected transcription, especially for out-of-domain scenarios.
To further study where the performance gain comes from, we built a ROVER-based system <cit.> to align and combine the hypotheses in an N-best list with weighted voting, but it leads to worse results than the ASR baseline. Experimental results suggest that ChatGPT leverages its implicitly learned world knowledge to generate the corrected ASR transcription based on the given input information, instead of performing a simple voting process on the N-best list.
In Table <ref>, we calculate the WER breakdown of different types of errors. When using the zero-shot uncon prompt, the error correction results from ChatGPT contain fewer substitution and insertion errors compared to the original ASR baseline, while introducing many more deletions. Through human evaluation, we find that in the ChatGPT output, the error correction results for 14 sentences are truncated (only the first few words appear in the ChatGPT output rather than the entire sentence), contributing 0.2% absolute WER. With 1-shot learning, ChatGPT behaves more stably and all of these bad cases are resolved, yielding better overall performance. Additionally, we observe that in some cases, ChatGPT has a tendency to remove redundant spoken words from the given ASR hypothesis to make the transcription more fluent. With 1-shot closest, we select the final output from the given N-best list, and therefore the introduced deletion errors are reduced.
In Table <ref>, we perform an ablation on the size of the N-best list using the LibriSpeech test_other set. Results show that using a sufficiently large N is important for ChatGPT to perform well with the zero-shot uncon prompt. In the extreme case of using only the top one ASR hypothesis as input, ChatGPT makes many unnecessary changes to the input to make the sentence more “reasonable”, due to a lack of information. With an increased N-best list size, it can compare the differences between the hypotheses and correct the transcription where the sentences disagree with each other. For the selective approach, the size of the N-best list matters less, as ChatGPT performs choice selection rather than generating the entire corrected hypothesis.
The 1-shot closest method achieves the best performance on the test_other set.
With the closest mapping, we select the ASR hypothesis within the N-best list that is most similar to the ChatGPT output. Thus, for each utterance, the selected hypothesis falls in the range of hypothesis-1 to hypothesis-5, and we divide the test set into 5 splits accordingly. The proportions of each subset are 67%, 14%, 8%, 5%, and 6%. In Figure <ref>, we calculate the WER of the ASR baseline and the WER after error correction for each subset. When the selected hypothesis is the same as the top one ASR hypothesis, WER remains the same as the ASR baseline. When the model selects other transcription from the N-best list, performance improvement can be seen compared to the ASR baseline.
§.§ Experiments on Whisper
Next, we investigate the impact of the proposed methods on the pre-trained Whisper model, and the results are listed in Table <ref>. Although Whisper already demonstrates state-of-the-art performance, ChatGPT proves to be effective in correcting ASR errors on both LibriSpeech and Artie, yielding 5.4% and 8.9% WERR on the test sets respectively. However, ChatGPT shows worse performance than the ASR baseline on the TED-LIUM3 set. In particular, many more deletion errors compared to the ASR baseline can be seen in the ChatGPT output with all the proposed methods. To further study the possible causes for ChatGPT being less effective on Whisper outputs, we analyse the ASR N-best lists of both Whisper and the Transducer model, as shown in Table <ref>.
When computing the statistics, punctuation and special symbols are removed from the ASR hypotheses, leaving only English characters and numbers to focus on meaningful content.
The Uniq metric refers to the number of unique hypotheses within one N-best list. We compute the average of all samples in the test set. For Transducer outputs, the result is close to 5 which is the size of the N-best list, however, there are more repeated entries in Whisper outputs. This is due to the fact that Whisper learns to generate sentences with inverse text normalisation (ITN) to improve the readability, i.e. capitalisation added, punctuation and other symbols included, and disfluency removed. Accordingly, in many cases, multiple hypotheses in an N-best list only differ in format, not in actual content. Nevertheless, the diversity of the N-best list is important for our error correction method to perform well.
Another observation is that in Whisper output, even when the hypotheses in the N-best list are diverse, the difference may come from one hypothesis omitting or inserting some irrelevant words in the output.
This is illustrated with the Cross WER metric in Table <ref>. Here, we keep all the unique hypotheses in an N-best list. Then for each pair of hypotheses in the remaining list, we calculate the WER result against each other and sum the result on the entire set. This metric can help us measure the difference between hypotheses within one N-best list. The results show that the deletion and the insertion rates of Whisper on the Cross WER metric are much higher than the Transducer model. This suggests that Whisper may fail to faithfully transcribe the utterance in all N-best hypotheses, resulting in sentences with varying lengths. ChatGPT tends to choose more coherent ones, leading to the large number of deletions in the error correction results.
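Both diagnostics can be computed directly from the N-best lists. The sketch below reuses the word-level edit distance defined earlier; stripping everything except letters, digits and spaces is our reading of the normalisation described above, and the sketch sums raw edit operations per list, whereas the reported Cross WER additionally separates deletion, insertion and substitution counts, which requires keeping the alignment.

import re
from itertools import combinations

def normalise(hyp):
    # Keep only lowercased letters, digits and spaces before comparing content.
    return re.sub(r"[^a-z0-9 ]+", "", hyp.lower()).split()

def uniq(nbest):
    # Number of unique hypotheses in one N-best list after normalisation.
    return len({" ".join(normalise(h)) for h in nbest})

def cross_edit_counts(nbest):
    # Total word edit operations between every pair of distinct hypotheses in one
    # N-best list, plus the corresponding reference word count; accumulating both
    # over the test set and dividing gives a Cross WER-style figure.
    hyps = sorted({" ".join(normalise(h)) for h in nbest})
    errors, ref_words = 0, 0
    for a, b in combinations(hyps, 2):
        ta, tb = a.split(), b.split()
        errors += levenshtein(ta, tb)
        ref_words += len(ta)
    return errors, ref_words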
In Table <ref>, we conduct case analysis for an error correction example from the test set of TED-LIUM3. As the table shows, for the Transducer ASR model, all the hypotheses are of similar length containing all the information from the utterance, and the Uniq metric is 5. ChatGPT helps to correct “blue” into “blew” utilising the given N-best list and world knowledge. Meanwhile, for Whisper 5-best hypotheses, the Uniq metric is only 3 due to the repetition problem. In addition, disfluencies in the utterance (“that”, “you know”) and the non-existent word (“and”) are incorrectly removed or introduced in the output, resulting in more deletions and insertions in Cross WER. The produced N-best list is hence less informative and misleads ChatGPT into the wrong output.
§ CONCLUSIONS
In this paper, we propose to use ChatGPT, a powerful generative large language model, to perform ASR error correction in zero-shot or 1-shot settings. Results on standard datasets suggest that when using the ASR N-best list as input, ChatGPT has the ability to detect and correct errors for the ASR output. 10% and 25% WER reduction can be observed for the Transducer model in the in-domain and out-of-domain settings. We also analyse the Whisper N-best list to explore potential reasons that cause the proposed methods to be less effective.
|
http://arxiv.org/abs/2307.07437v1 | 20230714160327 | Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process | [
"Ankit Agrawal",
"Jane Cleland-Huang"
] | cs.SE | [
"cs.SE"
] |
Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process
Ankit Agrawal
Department of Computer Science
Saint Louis University
Saint Louis, MO, USA
[email protected]
Jane Cleland-Huang
Department of Computer Science
University of Notre Dame
SouthBend, IN, USA
[email protected]
August 12, 2023
========================================================================================================================================================================================================================================================================
A safety-critical system's failure or malfunction can cause loss of human life or damage to the physical environment; therefore, continuous safety assessment is crucial for such systems. In many domains this includes the use of Safety Assurance Cases (SACs) as a structured argument that the system is safe for use. SACs can be challenging to maintain during system evolution due to the disconnect between the safety analysis and system development processes. Further, safety analysts often lack the domain knowledge and tool support needed to evaluate the SAC. We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models, and then uses these connections to visualize change. We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety. We present new traceability techniques for closer integration of the safety analysis and system development processes, and illustrate the viability of our approach using examples from a cyber-physical system that deploys Unmanned Aerial Vehicles for emergency response.
Safety Case, Safety Analysis, Traceability
§ INTRODUCTION
Safety-critical systems are systems whose failure could result in loss of life, significant damage to the environment, or significant financial loss <cit.>. Such systems must be developed systematically and rigorously. Given a set of requirements describing the system's functionality, we need to assure that associated hazards have been identified and appropriately addressed, typically using techniques such as Fault-Tree Analysis (FTA) and Failure-Mode Effect and Criticality Analysis (FMECA). Beyond these techniques, it is increasingly common for organizations to construct claim-based safety arguments <cit.> in the form of a Safety Assurance Case (SAC). A SAC decomposes high-level safety goals or claims into layers of arguments supported by safety evidence such as test-cases logs, simulation results, or formal proofs <cit.>, often using either the Claims-Arguments-Evidence notation <cit.> or the Goal Structuring Notation <cit.>.
SACs are recommended, or even required, in many safety-critical domains (e.g., <cit.>); however, in a study by Cheng et al., safety experts reported that there are `no effective mechanisms for managing change' for a SAC, and that `SAC creation and maintenance has not been fully integrated into the software development process' <cit.>. System change analysis becomes difficult because of insufficient domain knowledge among safety stakeholders and the sheer complexity involved in safety-critical products. During change analysis, safety stakeholders seek answers to questions such as (1) why the system has changed, (2) what risk this change mitigates, and (3) how this change can impact safety <cit.>. Further, safety stakeholders typically design and maintain SACs while the development team produces artifacts, such as mitigating requirements and test results, on which the SAC's safety arguments depend. These distinct roles also create a gap between the system development and SAC maintenance processes. Therefore, it is crucial not only to establish traceable links between safety artifacts, such as Fault Trees and SACs, and development artifacts, such as Design Decisions and code, but also to keep safety analysts informed of how changes in development artifacts may affect safety by documenting the rationales for changes to these development artifacts.
In this paper, we propose a solution to establish traceability links between safety artifacts, such as SACs and Fault Trees, and to supplement the SACs with rationales for changes in the development artifacts. Our goal is to improve the maintainability of SACs and keep safety analysts informed of the rationale for changes in the development artifacts as the software evolves. Our solution first utilizes Safety Artifact Forest Analysis (SAFA) <cit.>, which detects changes in the software development artifacts between two versions of the underlying system and automatically generates visualizations enabling analysts to easily navigate through the changes. We establish traceable links between these auto-generated visualizations and safety artifacts such as FTAs and SACs. Secondly, we discuss strategies to capture the reasons behind changes in software development artifacts and to maintain change rationale information as part of the development artifacts. We then demonstrate how this comprehensive traceability across multiple safety artifacts, supported by rationales for changes in the development artifacts, enables us to analyze the impact of changes on safety as the system evolves. To illustrate our approach, we provide examples from the DroneResponse system <cit.>, which utilizes Unmanned Aerial Vehicles (UAVs) to support emergency response. Finally, we outline the open challenges ahead in a preliminary roadmap.
§ SYSTEM ARTIFACTS TRACEABILITY
The SAFA framework retrieves artifacts from project repositories such as DOORS, Jira, and Github. Given a root node, such as a system level requirement, it constructs a vertical slice through the system according to the trace links defined in a Traceability Information Model (TIM). SAFA refers to such a tree as an Artifact Tree (AT). SAFA can compare a current version of the AT against an earlier baseline version to produce a Delta Tree (DT) which visualizes changes in the system.
The right lane of Figure <ref> shows a partial DT. Additional examples are provided in our prior work <cit.>.
SAFA detects additions (green), deletions (red), and modifications (blue) for requirements, design, code, tests, operating context, environmental assumptions, and other system artifacts. The delta tree visualization helps project stakeholders to identify changes and to investigate their impact on system safety. It highlights these changes and recommends areas in which inspection is needed. Safety experts expressed the importance of providing rationales for each change <cit.>. While SAFA currently uses basic static analysis techniques to explain refactoring changes; richer rationales are needed that describe changes in requirements and design, as well as modifications to the code.
We illustrate the proposed solution with examples from the publicly available requirements dataset for DroneResponse, a cyber-physical system that deploys cohorts of UAVs to support emergency response missions such as search-and-rescue <cit.>. The use of on-board intelligence by UAVs for autonomous navigation through airspace is one of the key requirements of multi-UAV autonomous systems. However, the US Federal Aviation Administration (FAA) regulates airspace usage and defines special-use or restricted airspace where UAVs are not allowed to fly. Autonomous UAVs entering restricted airspace could cause accidents with commercial flights, military operations, or medivac deliveries. Therefore, we consider flights into prohibited space to be a severe operational risk and have carefully designed mitigations into our system.
The safety requirement UAV-1387 in Figure <ref> states that “When the UAV is in autonomous mode it shall fetch restricted airspace data from the LAANC system.” In the previous version of the system, this was partially addressed by requiring the UAV to continuously check for airspace information while in autonomous mode (Design Definition - UAV 1388). However, in the new system, this requirement was replaced by conducting a more economical check when new flight paths are planned (Design Definition - UAV 1413). The delta tree, produced by SAFA, clearly highlights this design change, showing the replacement of Design Definition UAV-1388 and its associated code in red, and the inclusion of Design Definition UAV-1413 and its associated code in green.
§ INTEGRATING SYSTEM ARTIFACTS WITH SAFETY ASSETS
When developing a safety-critical system, a preliminary hazard analysis (PHA) is performed <cit.> to identify high-level hazards that represent undesirable states of the system. Each of these hazards is then explored through an associated Fault Tree (FT) <cit.> or a FMECA model <cit.>. In this paper, we illustrate our approach using FTs; however, our techniques are also applicable to FMECAs. Fault Tree Analysis (FTA) is a top-down approach that starts by analyzing the high-level risk and then uses boolean logic to depict a chain of events causing system-level risk. The middle lane of Figure <ref> shows a partial FT for risks associated with UAV flights in restricted airspace.
FTs and FMECAs are typically used as part of a SAC's argumentation structure to show that a specific fault has been sufficiently mitigated. Therefore a link should be established from a FT to its relevant argument in the SAC.
An example of a SAC argument using GSN is depicted in the left lane of Figure <ref>. This SAC uses a complete fault tree as evidence to support the claim that hazards due to FAA restrictions on airspace are mitigated. To maintain horizontal traceability (depicted by horizontal dashed arrows in Figure <ref>), an explicit link is established between an evidence node in the SAC and the root node of the FT. In turn, links are established from multiple nodes in the FT to sub-trees of system artifacts. For example, the intermediate fault node of `drone autonomously navigates into the restricted airspace' is linked to an acceptance test as verification that the fault has been mitigated. Similarly, one of the contributing basic faults `Drone operates on stale restricted-airspace data' is linked to the previously discussed safety requirement (UAV-1387).
In our solution, we establish traceability links between system artifacts, safety assets (e.g., FTs and FMECAs), and SACs to propagate changes back-and-forth between safety assets and system artifacts. Figure <ref> illustrates that a single change in the design of the system could trigger and propagate notifications and warnings across the linked safety artifacts (yellow nodes).
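The propagation itself amounts to a reachability computation over the trace links; a minimal sketch is shown below, with artifact identifiers that are illustrative only and no connection to the actual tool APIs.

from collections import deque

def propagate_change(changed_artifact, trace_links):
    # trace_links: dict mapping an artifact id to the ids it is traced to, e.g.
    # {"UAV-1413": ["FT-basic-fault"], "FT-basic-fault": ["FT-root"],
    #  "FT-root": ["SAC-evidence"]} (hypothetical identifiers).  Every safety asset
    # reachable from the changed development artifact is flagged for review.
    flagged, queue = set(), deque([changed_artifact])
    while queue:
        node = queue.popleft()
        for nxt in trace_links.get(node, []):
            if nxt not in flagged:
                flagged.add(nxt)
                queue.append(nxt)
    return flagged   # the artifacts to highlight, e.g. the yellow nodes in the figure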
§ CAPTURING RATIONALES
To support Safety Analysts in analyzing the impact of changes in system artifacts on the overall safety of the system, our approach strategically captures rationales from developers and other project stakeholders as they make changes that impact artifacts linked directly or indirectly to an FT, FMECA, or other safety assets. For example, in the FT depicted in the middle lane of Figure <ref>, we observe that the leaf node describing the basic fault `Drone operates on stale restricted airspace data' is mitigated through safety requirement UAV-1387, which states that `When the UAV is in autonomous mode it shall fetch restricted airspace data from the LAANC system.' As indicated by the red nodes, the original design satisfied this requirement through continuously fetching restricted airspace data during flight, while the current version (shown in green) replaces this functionality with a single fetch each time a new flight path is planned. A safety analyst will need to determine whether this change adversely impacts safety.
Burge et al suggested capturing reasons, alternative options, and arguments to provide rationales for design decisions <cit.>. We therefore elicit a rationale for any design decision that links directly to an FTA. In the case of DroneResponse, all requirements and design decisions are captured in Jira, and Figure <ref> shows possible instrumentation of the Jira environment to elicit design rationales when the original design requirement (UAV-1388) is replaced by a new one (UAV-1413).
At the code level we capture change rationales. These are akin to commit messages, but at the granularity of each modified class instead of an entire change set. In our example, the MonitorAirspace.java file is replaced by a new optimized service OnDemandAirspace.java that fetches data on-demand. Figure <ref> shows the user interface of a prototype IDE plugin that could be used to capture granular details of the change. The plugin not only elicits a justification and explanation of the change from the developer, but also visualizes the contribution that the class makes to an FTA (e.g., Fig. <ref>) and provides links to prior commit messages and previous change justifications.
The rationales captured as part of the change process provide crucial domain knowledge to conduct a thorough analysis of system changes on safety. Therefore, the domain knowledge acquired from these rationales, alongside visual representations of change impact, as depiceted in Figure <ref>, can aid safety analysts to determine (1) whether current changes impact safety or not, (2) whether additional mitigations are needed, and/or (3) whether the FT, FMECA, and/or the SAC need to be updated to reflect new hazards or new safety arguments. Decisions made by safety analysts can then be propagated back to the development team in order to close the loop.
§ FUTURE CHALLENGES
* Knowledge Management: As previously reported, the innate complexity of safety-critical systems means that experts from diverse disciplines are needed to construct and maintain a SAC <cit.>. While we integrate basic rationale capture into SAFA, open questions include (1) What types of domain knowledge are needed by safety analysts to evaluate and/or construct a SAC? (2) How and when should this information be collected? (3) How can it be effectively used to support Safety Analysts?
* Intelligent analysis of change: Changes are introduced into a system at many different levels to accommodate changes in the environment, introduce functional enhancements, improve system qualities such as performance, reliability, or maintainability, or to correct errors. From a safety perspective we need to differentiate between harmful and non-harmful errors. Our current approach visualizes all change, captures stakeholders' change rationales, and provides basic explanations for change. However, future systems should be able to leverage AI solutions to (1) analyze individual and composite changes in order to identify and explain patterns of change, (2) differentiate between harmless changes and those with potential safety impact, and (3) recommend remedial actions when safety is impacted.
* Tool Supported Integrated Environments Our proposed approach requires trace links to be established across heterogeneous artifacts stored in diverse tools and repositories. Safety experts have previously reported that the lack of tool support and clear guidance make SAC creation and maintenance challenging <cit.>. Open challenges therefore include (1) defining best practices for creating end-to-end traceability across safety assets and development artifacts, (2) providing integrated tool-supported environments for retrieving diverse artifacts and establishing effective traceability that supports safety analysis, and (3) developing interactive visualization tools that display SACs, FTAs, FMECAs, rationales, and artifacts in ways that provide appropriate support for different types of users such as safety analysts and developers.
This paper has presented a preview of our current research in addressing the disconnect between safety analysis and the software development process. As proposed in this paper, we are developing plugins to capture change rationales at the design and code levels, and extending SAFA to show rationales in the Delta view and to link FTAs, FMECAs, and SACs within it.
|
http://arxiv.org/abs/2307.04613v1 | 20230710145555 | Encapsulation Structure and Dynamics in Hypergraphs | [
"Timothy LaRock",
"Renaud Lambiotte"
] | cs.SI | [
"cs.SI",
"math.DS",
"physics.soc-ph"
] |
Mathematical Institute, University of Oxford, UK
[email protected]
Mathematical Institute, University of Oxford, UK
Turing Institute, London, UK
[email protected]
June 2023
Hypergraphs have emerged as a powerful modeling framework to represent systems with multiway interactions, that is, systems where interactions may involve an arbitrary number of agents. Here we explore the properties of real-world hypergraphs, focusing on the encapsulation of their hyperedges, which is the extent to which smaller hyperedges are subsets of larger hyperedges. Building on the concept of line graphs, our measures quantify the relations existing between hyperedges of different sizes and, as a byproduct, the compatibility of the data with a simplicial complex representation – whose encapsulation would be maximum. We then turn to the impact of the observed structural patterns on diffusive dynamics, focusing on a variant of threshold models, called encapsulation dynamics, and demonstrate that non-random patterns can accelerate the spreading in the system.
Keywords: Higher-order Networks, Hypergraphs
§ INTRODUCTION
Networks provide a powerful language to model and analyze interconnected systems <cit.>. The building blocks of networks are pairwise edges, and these blocks can then be combined to form walks and paths, making it possible for systems to be globally connected yet sparse. Since the seminal work of Watts and Strogatz 25 years ago <cit.>, a key focus of network science has been to investigate the relationship between the structure of a network and the dynamics taking place on its nodes <cit.>. This program requires the design of metrics to capture significant, non-random structural properties of networks, e.g., the clustering coefficient, the degree distribution or modularity, as well as the specification of dynamical models, both linear and non-linear, for the diffusion between neighbouring nodes. An important observation is that the same structural property may affect different dynamical models in different ways, e.g., a high density of triangles tends to slow down simple diffusion, but facilitate complex diffusion <cit.>.
Finding the right modeling framework for interacting systems is a challenging task. While networks have the advantage of simplicity, it has been recognized that they may also neglect critical aspects of a system and even lead to a misleading representation. Driven by the availability of datasets with richer connectivity information in recent years, different frameworks have emerged to enrich the network representation, leading to different types of higher-order networks <cit.>. One branch of this research has extended pairwise graph-based models to multiway interaction frameworks, most notably as hypergraphs or simplicial complexes, to account for group interactions among arbitrary numbers of nodes <cit.>.
Multiway interactions naturally appear in many systems, ranging from social interactions, where people interact in groups rather than in pairs <cit.>, to joint neuronal activity in brains <cit.> and cellular networks <cit.>. Different computational tools have been adapted to multiway systems, for instance for centrality measures <cit.> and community detection <cit.>. Researchers have also investigated how the structure of multiway interactions impacts dynamical processes <cit.>, especially the conditions under which dynamics on hypergraphs and simplicial complexes differ from those on networks <cit.>.
The objectives of this paper are twofold: to propose metrics that characterise the non-random patterns of encapsulation in multiway systems, and to explore dynamical models that may be affected positively or negatively by this type of hypergraph structure. These objectives are motivated by a well-known conceptual difference between the two main representations for multiway systems, hypergraphs and simplicial complexes. By definition, a simplicial complex of size k nodes includes all of the subfaces of the complex. In contrast, a hyperedge of size k nodes does not imply the existence of any subsets as hyperedges in the same hypergraph. We refer to this difference as the simplex assumption. For example, using a simplicial complex to represent the relationship between 3 nodes {a,b,c} assumes that the subfaces {a,b}, {a,c}, {b,c} all exist, along with the individual nodes. This is a strong assumption that is unlikely to hold, even approximately, in real data. A classic example is co-authorship, where a jointly authored paper between three co-authors does not imply that each pair of co-authors have also authored separate papers together, nor that each co-author has published a single-author paper. Recent work has investigated the relationship between these two representations <cit.>, and shown that the choice of higher-order representation does effect the outcome of dynamical processes <cit.>.
Simplicial complexes and hypergraphs can be seen as poles on a spectrum of multiway interaction structure, and it is likely that real data falls somewhere in-between. In this work, we build on previous investigations of this spectrum of overlapping higher-order structures, as well as random models for hypergraphs and simplicial complexes <cit.>.
Our approach builds on the notion of line graph, that has been used in different contexts in network science, where nodes are the edges of the original graph and there is a link between two nodes if their corresponding edges have a node in common <cit.>. The interactions between hyperedges of arbitrary sizes make it possible to define a variety of different line graphs for hypergraphs. As each hyperedge can be seen as a set of nodes, this problem is equivalent to that of comparing two sets. There exist multiple ways to compare sets, which leads to multiple ways to build a line graph for a hypergraph <cit.>.
We will focus in particular on what we refer to as an encapsulation graph, where two hyperedges are connected (by a directed edge from larger to smaller) if one is the subset of the other.
We then analyze the properties of the resulting directed acyclic graphs built from real-world hypergraphs and from a synthetic hypergraph model called the Random Nested Hypergraph Model (RNHM) <cit.>, which allows for some control over the extent of nested structure through random rewiring of simplicial complexes. Finally, we define a process for the spread of a complex contagion on a hypergraph through its hyperedges, and show how varying levels of encapsulation structure impact the spread of the contagion in both synthetic and real hypergraphs.
§ MEASURING OVERLAP AND ENCAPSULATION IN HYPERGRAPHS
Consider a list of multiway interactions, where each item in the list is a set of nodes that represent a group interaction. We will represent these interactions as a hypergraph, and focus in particular on aggregated, static hypergraphs, where all interactions are included regardless of any dynamic or temporal information. In fact, for all of the empirical datasets we will examine, this static hypergraph is actually the result of aggregating interactions that happen over time. We will also make our hypergraphs simple, meaning that no edges are repeated, i.e., hyperedges are contained in a set, rather than a multiset. In the future, the techniques we develop here could be extended to study the relationships between hyperedges over time, extending, for example, work on simplicial closure <cit.> or temporal dynamics of group interactions <cit.>.
Formally, we represent the multiway interactions as a hypergraph H=(V, E) where V={1, 2, ..., n} is the set of n nodes and E={e_1, e_2, ..., e_m} is the set of m hyperedges representing interactions between the nodes in V, with the size of each interaction measured as the number of nodes and represented by ℓ_i = |e_i|.
To understand the extent of nestedness in the structure of a hypergraph, we build a line graph where the nodes are hyperedges and where there is a directed link between two hyperedges if one is a subset of the other. These links represent what we call encapsulation relationships between hyperedges. More formally, given two hyperedges e_i and e_j such that ℓ_i > ℓ_j, we say e_j is encapsulated by e_i if e_j ⊂ e_i.
The line graph representing encapsulation relationships is a Directed Acyclic Graph (DAG) D of H, where a directed edge from hyperedge e_i to hyperedge e_j means that e_i encapsulates e_j. Since for every connected e_i and e_j we know that ℓ_i > ℓ_j, a cycle in this graph would imply that a smaller hyperedge encapsulates a larger hyperedge, which is impossible, thus the graph is always a DAG. We refer to this DAG as the encapsulation DAG of a hypergraph. By construction, a hypergraph corresponding to a simplicial complex would have the maximum possible number of edges in the encapsulation DAG. The center panel of Figure <ref> shows an example of encapsulation DAG. The number of edges in the encapsulation DAG is the number of encapsulation relationships present in the hypergraph. As we will show, these structures are useful for studying dynamical processes where the spreading occurs at the hyperedge level.
The encapsulation DAG is closely related to a Hasse Diagram representing a partial ordering of a set of sets. However, a Hasse Diagram is transitively reduced by construction, meaning that an edge between two nodes is removed if there is an alternative path between the nodes. Hasse Diagrams with weights associated to their nodes have been used to define weighted simplicial complexes of hypergraphs, which were further used to predict the evolution and recurrence of small groups <cit.>. While we will examine transitively reduced encapsulation DAGs in Section <ref>, for consistency we will refer to the line graph with edges representing encapsulation relationships as an encapsulation DAG throughout.
The encapsulation DAG is just one way to build a line graph from a hypergraph. Other objects can be defined by considering other relations between hyperedges.
An important relation is the intersection between the hyperedges, which defines an overlap graph. Given two hyperedges e_i and e_j, an undirected edge exists between them if |e_i ∩ e_j| > 0, and the weight of the edge is the size of the overlap |e_i ∩ e_j| (or, alternatively, normalized as |e_i ∩ e_j| / min(ℓ_i, ℓ_j)). If we remove the edges between hyperedges of the same size and impose directionality on the remaining undirected edges, for example by directing edges from larger hyperedges to smaller, we obtain a DAG that we call an overlap DAG. The right graph in Figure <ref> shows an example of the intersection relation. We note that overlap graphs are also related to clique-graph representations of pairwise networks <cit.>.
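To make the construction concrete, the following is a minimal sketch of the overlap graph just described, assuming hyperedges are supplied as an iterable of node collections; the function name and the use of NetworkX are choices made for this illustration rather than part of the original formulation.

```python
# Minimal sketch: overlap graph of a hypergraph, weighted by intersection size
# (the normalized variant is stored as an extra edge attribute).
import itertools
import networkx as nx

def overlap_graph(hyperedges):
    edges = list(set(map(frozenset, hyperedges)))   # simple hypergraph: drop repeated edges
    g = nx.Graph()
    g.add_nodes_from(edges)
    for e1, e2 in itertools.combinations(edges, 2):
        common = len(e1 & e2)
        if common > 0:
            g.add_edge(e1, e2, weight=common,
                       norm_weight=common / min(len(e1), len(e2)))
    return g
```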
Let us make a short digression about dynamics here, a topic that we will cover in more detail in Section <ref>.
The encapsulation and intersection relations capture different ways in which hyperedges may be related with each other, but they also have different implications for dynamical processes on the hypergraph. The intersection graph is compatible with dynamics centered on the nodes of the hypergraph. One can think here, for instance, of a threshold model where all of the nodes in a hyperedge become activated if a certain number (or fraction) of its nodes are already activated. The intersection graph then provides us with information on how the activation of one edge may spread into others.
Take the hyperedge {e,f} in Figure <ref> for instance. In that case, activating the nodes in {e,f} may result, depending on the details of the dynamical model, in activation of the hyperedges {a,b,c,e} and {b,e}, and trigger a cascade of activations in the hypergraph. The picture is strikingly different in the encapsulation DAG where {e,f} is disconnected and therefore has no impact on future activations. Indeed, from that perspective, it is not the fact that node e is activated that matters, but instead that hyperedges encapsulated in others are activated. In other words, the encapsulation DAG is more naturally associated to dynamics where the states are defined on the hyperedges, in a way reminiscent of the Hodge Laplacian for diffusive processes <cit.>. A more thorough discussion of the interpretation of this type of dynamics, and its simulation on both synthetic and real-world hypergraphs, will be given in Section <ref>.
Computationally, we construct the encapsulation and overlap graph structures using the following algorithms. For both algorithms, we first assign each hyperedge a unique label and construct a mapping between each node and the hyperedges it participates in. We then loop over each hyperedge α∈ E, and for each node u∈α we add edges from α to other hyperedges β∈ E based on the relation we are interested in. In the intersection graph, this means adding edges from hyperedge α to other hyperedges β∈ E, u∈β with the weight defined above. For the encapsulation DAG, we only add edges to hyperedges β that are encapsulated by α, meaning we add edges where β⊂α. After repeating this loop for each node in α, the out-neighbors of α represent all of the hyperedges in E that have the relevant relationship with α.
The complexity of this construction has two terms. We first loop over all hyperedges m=|E| to construct a mapping from hyperedges to labels, and a mapping from nodes to the hyperedges they are members of, which takes O(m·ℓ_max) time, where ℓ_max = max_e ∈ Eℓ_e is the maximum length of a hyperedge. Once the mapping is constructed, we again loop over all m hyperedges to find encapsulation and overlap relationships. The worst case time for a loop is the size of the largest hyperedge ℓ_max times the highest degree node k_max = max_u ∈ V |{e | u ∈ e; e∈ E}|. This second term dominates the first and so the worst case running time is O(m·ℓ_max· k_max).
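As an illustration of this construction, the sketch below builds the encapsulation DAG using the node-to-hyperedge mapping to restrict the candidate set; the input format and variable names are assumptions made for this example.

```python
# Sketch of the encapsulation-DAG construction described above.
import networkx as nx

def encapsulation_dag(hyperedges):
    edges = list(set(map(frozenset, hyperedges)))        # simple hypergraph
    membership = {}                                      # node -> hyperedges it belongs to
    for e in edges:
        for u in e:
            membership.setdefault(u, set()).add(e)
    dag = nx.DiGraph()
    dag.add_nodes_from(edges)
    for alpha in edges:
        # candidate hyperedges sharing at least one node with alpha
        candidates = set().union(*(membership[u] for u in alpha))
        for beta in candidates:
            if beta < alpha:                             # beta strictly encapsulated by alpha
                dag.add_edge(alpha, beta)
    return dag

# toy example: the largest hyperedge encapsulates the two pairs it contains
H = [("a", "b", "c", "e"), ("a", "b"), ("b", "e"), ("e", "f")]
print(encapsulation_dag(H).edges())
```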
§ ENCAPSULATION IN EMPIRICAL DATA
In this section we introduce basic measurements of encapsulation relationships in some empirical hypergraph datasets, all of which were made available online with the publication of <cit.>. We focus in particular on coauthorship <cit.>, social contact <cit.>, and email communication datasets <cit.>. In Table <ref>, we show some statistics of the largest connected components of the hypergraphs. Following <cit.>, we exclude hyperedges of size greater than 25 nodes to keep some amount of consistency across the datasets. As mentioned above, we also ignore multiedges in the datasets and therefore consider the simple hypergraph representation of each.
The coauthorship datasets, which include decades of published papers in multiple fields, contain numbers of nodes and edges that are multiple orders of magnitude larger than the face-to-face contact and email datasets. They are also orders of magnitude less dense in terms of the proportion of edges that exist in the projected graph where an edge exists between two nodes if they occur in the same hyperedge at least once.
§.§ Degree in the Encapsulation DAG
For each hyperedge, we are interested in the extent to which it encapsulates other hyperedges present in E, or equivalently we are interested in its out-degree in the encapsulation DAG. In the top row of Figure <ref>, we report the total number of hyperedges of each size m that are encapsulated by hyperedges of larger sizes n>m. The total number of hyperedges of each size n is shown as a dotted line.
For each m, the number of observed hyperedges encapsulated decreases with n, but so does the number of size-n hyperedges. To account for the distribution of hyperedge sizes, in the bottom row of Figure <ref> we report the same counts but divided by the number of size-n hyperedges, giving us the number of encapsulated size-m hyperedges per size-n hyperedge.
We also show the same quantity in a randomization of the hypergraph which we call the “layer randomization”. The name comes from the fact that in this randomization procedure we view the sets of hyperedges of each size k as a layer, similar to the multiplex approach taken in <cit.>. The procedure then works as follows: for each layer of the hypergraph consisting of hyperedges of size k, we gather all of the hyperedges and the set of their constituent nodes, then shuffle the labels of the nodes. We repeat this procedure for every layer independently. The result is a hypergraph where the hyperedge size distribution and the unlabeled node degree distribution within each size layer are preserved, but the labeled node degree distributions within size layers, the node hyperdegree distribution, and, most importantly, the cross-size encapsulation and overlap relationships are randomized. In other words, we randomize the hypergraph across layers, but not inside layers. This is the reason why we opted for this randomization procedure, and not, for example, the configuration model for hypergraphs introduced by <cit.>. Future work could investigate the effect of other randomization procedures such as those discussed in <cit.>. In Figure <ref>, we show the proportion of encapsulation and overlap relationships destroyed by the layer randomization.
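A minimal sketch of the layer randomization is given below, under the assumption that node labels are hashable and comparable (e.g., integers or strings); only the cross-layer structure is randomized, as described above.

```python
# Sketch: shuffle node labels independently within each hyperedge-size layer.
import random

def layer_randomization(hyperedges, seed=0):
    rng = random.Random(seed)
    layers = {}
    for e in set(map(frozenset, hyperedges)):
        layers.setdefault(len(e), []).append(e)
    randomized = []
    for k, layer in layers.items():
        nodes = sorted(set().union(*layer))      # nodes appearing in this size layer
        shuffled = nodes[:]
        rng.shuffle(shuffled)
        relabel = dict(zip(nodes, shuffled))     # permutation of node labels for this layer
        randomized.extend(frozenset(relabel[u] for u in e) for e in layer)
    return randomized
```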
Only the coauthorship datasets include hyperedges of size 1 (i.e., nodes representing papers authored by a single individual). The number of encapsulations of 1-node hyperedges increases with n after accounting for the number of size-n hyperedges across all three datasets. This indicates that authors who are part of large collaborations also publish single author papers. However, the relationship is not as strong as would be expected under the simplex assumption. In that case, every node should appear as a 0-simplex and the number of encapsulations would grow exactly as y=n since all n nodes would be encapsulated for every size-n hyperedge. Instead the encapsulation relationship for single nodes grows sublinearly, indicating that there are many nodes which appear in hyperedges of size larger than 1, but never appear alone. Note also that the layer randomization does not substantially reduce the number of encapsulations of 1-node hyperedges, since the shuffling of the 1-node layer has no effect, and shuffling at each higher layer still results in some encapsulations of 1-node hyperedges necessarily, since the set of nodes in each layer does not change.
The relationship is even weaker for larger values of m. In this case, the simplex assumption would lead to the relationship y = \binom{n}{m}, since for every size-n hyperedge all possible size-m hyperedges would have to exist. However, for all values, the number of encapsulations per size-n hyperedge stays well below 1, meaning that, on average, a size-n hyperedge encapsulates few smaller hyperedges relative to its maximum capacity.
Notably, for all of the coauthorship datasets, encapsulation relationships tend to be destroyed among hyperedges of any size after the layer randomization is applied, as expected.
The encapsulation structure of the face-to-face social contact hypergraphs appears to be more sparse than the rest of the datasets, partly due to the fact that there are fewer large interactions with a maximum interaction size of only 5 nodes. However, even with this more sparse structure, there are substantial encapsulation relationships, especially for hyperedges with 2 and 3 nodes.
The email communication hypergraphs show a substantially nested structure where large group emails are composed of groups with many smaller interactions in separate email chains, especially pairwise and 3-node interactions. This is consistent with an intuitive understanding of how email communication works within organisations: many small group email chains will naturally occur to facilitate day-to-day operations and side conversations, while large group emails will occur around big meetings, decisions, or announcements that involve larger proportions of the organisational structure. Interestingly, compared to the coauthorship data, the layer randomization keeps substantially more of the encapsulation relationships in the email communications. We hypothesize that this is due to the smaller number of nodes in the email datasets, which constrains the possible randomizations.
In Figure <ref>, we show the distribution of encapsulation for each (n,m) pair; that is, one distribution for each point in Figure <ref> up to n=5. We compute for each α∈ E with ℓ_α = n the number of out-neighbors of α in the encapsulation DAG that are of size m. We then normalize this quantity by the maximum number of subsets of size m, which is \binom{n}{m}. Thus if a histogram is fully concentrated on 1, there is full encapsulation and the simplex assumption holds. The bottom row of Figure <ref> shows the same histograms computed on the layer randomized version of the hypergraph.
As we observed in Figure <ref>, the number of encapsulations decreases for all of the coauthorship datasets when n increases. The distributions in Figure <ref> show that the most common amount of encapsulation is exactly one subset (leftmost point of each line), and relatively few hyperedges fully encapsulate all of the possible subsets (rightmost point in each line). However, we observe the opposite pattern in the social contact and email datasets, where full encapsulation of 2-node hyperedges by 3-node and 4-node hyperedges is common in the observed data, and these relationships are destroyed by the layer randomization.
§.§ Paths Through Encapsulation DAGs
In this section we show how analysis of encapsulation DAGs can help understand the structure of encapsulation relationships. An encapsulation DAG encodes interaction structure in at least 3 ways. As shown above, we can use the out-degree of a hyperedge in the DAG to measure the extent to which subsets of that hyperedge also appear as hyperedges. Similarly, the in-degree of a hyperedge in the DAG indicates the extent to which the supersets of a hyperedge exist, e.g., how much a given hyperedge is encapsulated. Finally, and this is the purpose of this section, the length of paths in the DAG indicates the “depth" of encapsulation relationships.
Here we analyze the height of rooted paths in the transitively reduced DAG, inspired by the approach taken in <cit.>. A rooted path is one that begins from a root node, which we define as a node in the DAG with zero in-degree and non-zero out-degree. We consider paths starting from root nodes because they indicate the maximum possible path lengths through the DAG. A transitively reduced DAG is one in which every edge that short-cuts a longer path, and is therefore redundant, is removed. For example, if we have the edges A-B, B-C, and A-C, in the transitively reduced DAG the edge A-C would be removed, since there would still be a path from A to C without that edge. Analyzing the DAG after removing these “shortcut” edges gives us a sense for the extent to which intermediate sized hyperedges are or are not present.
The distribution of path lengths in the transitively reduced DAG indicates the depth of the encapsulation relationships in the hypergraph. If the distribution is skewed towards the maximum length (k-1 edges for a hyperedge on k nodes), this indicates a hierarchy of encapsulations in the sense that multiple intermediate hyperedges of different sizes are all encapsulated by the same larger hyperedge (the root). In contrast, if most path lengths are short, this indicates that encapsulation relationships in the hypergraph are concentrated between only two different sizes at a time, a kind of shallow encapsulation. Note that transitively reduced DAGs corresponding to two hypergraphs with very different encapsulation structures could have similar numbers of edges, but very different path length distributions. As we will discuss below, deeper and more hierarchical encapsulation relationships can have important implications for how a contagion can spread over the hyperedges of a hypergraph.
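The following sketch computes the maximum rooted path length ("height") for each root of the transitively reduced encapsulation DAG; it assumes a NetworkX DiGraph such as the one produced by the construction sketch above.

```python
# Sketch: maximum height per root node in the transitively reduced DAG.
import networkx as nx

def root_heights(dag):
    tr = nx.transitive_reduction(dag)
    roots = [v for v in tr if tr.in_degree(v) == 0 and tr.out_degree(v) > 0]
    memo = {}
    def height(v):
        # longest path (in edges) starting from v, computed by memoized DFS
        if v not in memo:
            memo[v] = 1 + max((height(w) for w in tr.successors(v)), default=-1)
        return memo[v]
    return {r: height(r) for r in roots}
```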
In the top row of Figure <ref>, we show the distribution of heights in each dataset compared to the average over multiple layer randomizations. After randomization, the maximum path length through the transitively reduced DAG drops substantially in every dataset, and the number of paths of length 2 drops by multiple orders of magnitude in all of the coauthorship and contact datasets, but not in the email datasets.
In the middle and bottom rows of Figure <ref>, we plot for each root hyperedge its degree in the DAG against its maximum height (path length) in the transitively reduced DAG. The middle row shows the relationship without normalization for the observed (left) and layer randomized (right) hypergraph. The DAG degree of a hyperedge and its maximum length path in the transitively reduced DAG are positively correlated to varying extents across all of the datasets, but in the coauthorship datasets there are many hyperedges with high DAG degree that have relatively low maximum path lengths of only 2 or 3 edges.
As mentioned previously, the maximum height is bounded by k-1, where k is the size of the hyperedge, since the maximum path length will pass through exactly one node (hyperedge) of each size 0 < k' < k, of which there are k-1. The bottom row of Figure <ref> again shows the relationship between DAG degree and maximum height, but with both quantities normalized by their maximums.
As expected, when a root hyperedge has maximum degree in the DAG, it also has maximum path length (the opposite need not hold). The dark colored points in the top right of each normalized scatter plot indicate that only the hyperedges with small degrees have the maximum degree, meaning that they are also small hyperedges.
§.§ Random Nested Hypergraph Model
In this section we describe the Random Nested Hypergraph Model (RNHM) developed in <cit.>, which we will use as a starting point for analyzing the relationship between nested hypergraph structure and a hyperedge contagion process. The parameters of the model are: the number of nodes N; the maximum sized hyperedge s_m; the number of hyperedges of size s_m, denoted H_s_m; and ϵ_s, the probability of rewiring a hyperedge of size s<s_m.
Hypergraphs generated by this model are sampled by the following process. First, H_s_m hyperedges of the maximum size s_m are sampled, where the probability of a node being included in a hyperedge is uniform. Second, all of the subsets of those hyperedges (i.e., the powerset of every edge excluding sets with size less than 2) are added to the hypergraph. In some simulations, we also include all of the individual nodes as 1-node hyperedges. Finally, each of the encapsulated hyperedges with size 1<s<s_m are rewired with probability 1-ϵ_s, meaning that when ϵ_s is small, hyperedges of size s are more likely to be rewired.
Rewiring a hyperedge involves (i) choosing a pivot node in the edge uniformly at random; (ii) deleting all other nodes from the edge; and (iii) replacing the deleted nodes with nodes chosen uniformly at random from outside of the hyperedges that are supersets of the original edge, ensuring that the new edge does not already exist in the hypergraph. Since this model will be used as a substrate for contagion dynamics in the next section, we further constrain the RNHM by rejecting hypergraphs that are not connected.
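A rough sketch of the RNHM sampling procedure is given below; for brevity it omits the rejection of disconnected hypergraphs and of duplicate rewired hyperedges, assumes N is large enough for rewiring targets to exist, and uses parameter names chosen for this example (eps maps each size s to the keep probability ϵ_s).

```python
# Sketch of the Random Nested Hypergraph Model (RNHM) described above.
import itertools
import random

def rnhm(N, s_m, H_sm, eps, seed=0):
    rng = random.Random(seed)
    nodes = list(range(N))
    top = [frozenset(rng.sample(nodes, s_m)) for _ in range(H_sm)]
    hyperedges = set(top)
    for e in top:                                     # add all sub-hyperedges of size >= 2
        for s in range(2, s_m):
            hyperedges.update(map(frozenset, itertools.combinations(e, s)))
    result = set(top)
    for e in hyperedges - set(top):
        s = len(e)
        if rng.random() < eps.get(s, 1.0):            # keep with probability eps_s
            result.add(e)
            continue
        pivot = rng.choice(sorted(e))                 # rewire: keep a random pivot node
        covered = set().union(*(t for t in top if e <= t))
        outside = [u for u in nodes if u not in covered]
        result.add(frozenset([pivot] + rng.sample(outside, s - 1)))
    return result
```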
In Figure <ref> we show DAG representations of random nested hypergraphs, where edges of the encapsulation DAG are drawn in black and edges from the overlap DAG are drawn in green. As ϵ_s decreases, so does the number of encapsulation relationships (DAG edges). When ϵ_s=1, no hyperedges are rewired, so all encapsulation relationships exist. As ϵ_s decreases, rewiring of hyperedges reduces the number of encapsulation relationships until, when ϵ_s=0, almost no encapsulation relationships between 4-node and s-node hyperedges exist. However, since the s-node hyperedges were constructed based on the set of nodes that appeared in the 4-node hyperedges, some encapsulation relationships may randomly remain after rewiring.
§ THE ROLE OF ENCAPSULATION STRUCTURE IN DYNAMICS
In this section we show that encapsulation plays a role in modulating the relationship between higher-order interactions and dynamical processes. We study a complex contagion process for which encapsulation and overlapping structures are vital to spreading. Our work builds on advances in the study of dynamical processes on higher-order structures, including the relationship between spreading dynamics on hypergraphs compared with simplicial complexes, where encapsulation relationships are implied <cit.>. It is important to emphasise that our analysis focuses on a purely higher-order effect, as the notion of encapsulation has no counterpart in classical networks.
We study a hypergraph complex contagion process where in each discrete timestep, every node u∈ V and hyperedge α∈ E in the hypergraph is in a binary state, either inactive or active. We represent these states using two binary vectors, s_u for nodes and x_α for edges, which both take a value of 0 if the corresponding node or edge is inactive, and 1 if active. At each time step, an inactive hyperedge α (x_α=0) is activated if more than a threshold τ of the hyperedges that it directly encapsulates, i.e., hyperedges of size |α|-1, are also active. Therefore activation can only spread in hypergraphs with an encapsulation structure that is tightly nested, with many encapsulation relationships between adjacent layers of the DAG. We refer to this class of contagion as encapsulation dynamics and focus on two variants depending on the influence we allow individual nodes to have on the dynamics.[Inspired by the language of topology, we may also call these dynamics subface dynamics, referring to the fact that a subface of a simplicial complex would need to be activated for a larger face to activate.]
In the first variant, which we refer to as strict encapsulation dynamics, individual nodes can only have influence in the dynamics if they appear in the hypergraph as a 1-node hyperedge. These 1-node hyperedges only appear in the coauthorship datasets, meaning that in the other datasets, individual nodes have no influence on the spreading process and their being in an active or inactive state has no bearing on the process beyond their participation in an active hyperedge that is encapsulated. In contrast, in the non-strict variant we allow any individual node to influence pairwise interactions in which it participates. This corresponds to an assumption that all individual nodes are also 1-node hyperedges in the hypergraph and makes the state of individual nodes relevant to how the process can evolve. It also allows for exactly one kind of “backwards” activation, since activation of a large hyperedge will activate the individual nodes, while in general we do not allow activation of a large hyperedge to activate any of its subhyperedges in encapsulation dynamics. Instead, all activation flows upward through the encapsulation DAG from smaller to larger hyperedges. For a further discussion of the possible variants of encapsulation dynamics, see <ref>.
Intuitively, encapsulation relationships are necessary to the spreading process in encapsulation dynamics, since larger hyperedges can only be activated if they encapsulate smaller hyperedges, which in turn must encapsulate still smaller hyperedges. We make an analogy between this process and building a campfire, where the smallest hyperedges correspond to dry leaves and twigs, medium hyperedges correspond to kindling, and the largest hyperedges correspond to the logs. Thus the “goal” of the encapsulation dynamical process we have defined is to catch the logs on fire by first lighting the fuel.
The encapsulation dynamics can be seen as a generalisation of threshold models, which have been studied systematically in the context of opinion dynamics on graphs <cit.> and hypergraphs <cit.>. An important difference is that only activated nodes that are all connected by an active hyperedge can activate a larger hyperedge.
From an opinion dynamics perspective, for instance, this could be interpreted as follows: a set of nodes that is part of a larger set may change the collective behavior only if nodes in the smaller set form an interacting unit, which allows them to coordinate their action.
In the illustration of Figure <ref>, for instance, if we assume nodes a and b are activated in both hypergraphs, then their impact on node c would be identical in the case of threshold models. The encapsulation dynamics distinguishes the two configurations, and the activation of node c via the hyperedge {a,b,c} is only possible when nodes a and b can coordinate their action via the encapsulated hyperedge {a,b}. In the non-strict encapsulation dynamics setting, where individual nodes are assumed to exist and have encapsulation relationships only with 2-node hyperedges, activation of node b would also activate {a,b} and lead to the activation of {a,b,c}. Thus we can view the non-strict variant of the dynamics as falling between the strict dynamics and node-based threshold models, where the existence and structural patterns of 2-node hyperedges are key to determining whether the non-strict dynamics behave more like strict or node-based threshold dynamics.[We also report simulations using more traditional threshold contagion dynamics based on node activations in <ref>.]
We simulate encapsulation dynamics by constructing the encapsulation DAG, but only keeping edges between hyperedges at adjacent layers, i.e., where the difference in size is 1. In our simulations, we first place a given number of seed-activated hyperedges using one of the strategies described below. We then count for each hyperedge how many of its encapsulated hyperedges are seeds and deterministically simulate the dynamics forward. After each iteration, for every hyperedge α with size ℓ_α nodes we update the number of its encapsulated ℓ_α-1 hyperedges that became activated. In practice, it is more efficient to update these counts by maintaining a reverse adjacency list of the encapsulation DAG so that we need only loop over the newly activated hyperedges and update the counts for the inactive hyperedges that they are encapsulated by.
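A compact sketch of this deterministic simulation loop is shown below; it assumes a NetworkX-style DAG restricted to edges between hyperedges whose sizes differ by one, a set of seed hyperedges, and an activation condition of "at least τ" active encapsulated hyperedges (a convention chosen for this example).

```python
# Sketch of the encapsulation-dynamics simulation described above.
def simulate(dag, seeds, tau=1, steps=25):
    active = set(seeds)
    # for each hyperedge, count how many of its directly encapsulated hyperedges are active
    counts = {v: sum(1 for w in dag.successors(v) if w in active) for v in dag}
    for _ in range(steps):
        newly = [v for v in dag if v not in active and counts[v] >= tau]
        if not newly:
            break
        active.update(newly)
        for v in newly:
            for u in dag.predecessors(v):   # u directly encapsulates the newly activated v
                counts[u] += 1
    return active
```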
We consider 4 different strategies for choosing seed hyperedges:
* Uniform: Choose hyperedges uniformly at random.
* Size Biased: Choose hyperedges with probability proportional to their size (i.e., choose the largest hyperedges first).
* Inverse Size Biased: Choose hyperedges with probability proportional to their inverse size (i.e., choose the smallest hyperedges first).
* Smallest First: Explicitly choose the smallest hyperedges first. Practically, arrange the hyperedges in a vector ordered by increasing size, with hyperedges of the same size in random order. Choose seed hyperedges starting from the beginning of this vector.
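A simple sketch of these four seeding strategies is given below; the function signature and the rejection-based sampling without replacement are implementation choices made for this illustration, assuming hyperedges are provided as a list.

```python
# Sketch of the seed-selection strategies described in the list above.
import random

def choose_seeds(hyperedges, n_seeds, strategy="uniform", seed=0):
    rng = random.Random(seed)
    if strategy == "uniform":
        return rng.sample(hyperedges, n_seeds)
    if strategy == "smallest_first":
        # increasing size, random order among hyperedges of the same size
        ordered = sorted(hyperedges, key=lambda e: (len(e), rng.random()))
        return ordered[:n_seeds]
    sizes = [len(e) for e in hyperedges]
    weights = sizes if strategy == "size_biased" else [1.0 / s for s in sizes]
    chosen = set()
    while len(chosen) < n_seeds:            # sample without replacement by rejection
        chosen.add(rng.choices(hyperedges, weights=weights, k=1)[0])
    return list(chosen)
```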
We expect that in a hypergraph with deep encapsulation relationships the smallest first seeding strategy will be the most effective for strict encapsulation dynamics, since the small hyperedges must be activated or the dynamics will never reach the entire structure. In contrast, in non-strict encapsulation dynamics it may be the case that activating the largest hyperedges first will activate the most nodes that will in turn activate many pairwise hyperedges, potentially leading to more activation overall.
§.§ Simulations on the Random Nested Hypergraph Model
In Figure <ref> we compare the encapsulation dynamics on random nested hypergraphs with varying combinations of ϵ_s for RNHM parameters N=20, s_m=4, H_s_m=5. In these simulations, we also include all of the individual nodes in the hypergraph. We show results using both the uniform (top row) and smallest first (bottom row) seeding strategies, with the number of seeds equal to the number of nodes N. Each point is an average over 50 realizations of the hypergraph and 50 simulations per realization. The smallest first strategy is more effective for all parameters, consistent with the “campfire” intuition of lighting the fuel to burn the logs.
In the smallest first simulations, all hyperedges are activated consistently when there is no rewiring of any hyperedges (ϵ_2=ϵ_3=1, red line), as expected. Interestingly, the dynamics are qualitatively different when either the 2- or 3-node hyperedges are rewired, but the other is left alone. More hyperedges are activated when only 3-node hyperedges are rewired (ϵ_2=1, ϵ_3=0, green line) compared to when only 2-node hyperedges are rewired (ϵ_2=0, ϵ_3=1, orange line). However, it is not the case that the most rewiring leads to the slowest activation dynamics. We attribute this to a combination between the stochasticity of the rewiring process and the relatively small number of nodes N, which can lead to situations where rewired hyperedges encapsulate each other randomly (see the encapsulation DAG in black for ϵ_2=0, ϵ_3=0 in Figure <ref>, for example).
We also note that the smallest first seeding strategy as used in this setting would make node-based threshold dynamics trivial, since every node is activated in the seeding process. This illustrates the key conceptual difference between node-based and encapsulation-based dynamics: the latter requires explicit higher-order coordination among activated nodes, as well as encapsulation in the hypergraph structure.
Figure <ref> shows the average outcome of simulations on RNHMs with an increasing number of seed hyperedges again chosen with either uniform or smallest first strategy (25 realizations, 100 simulations per realization). In both cases there appear to be two distinct trends in the encapsulation dynamics results depending on whether ϵ_2 is zero, meaning all 2-node hyperedges are rewired. Activation spreads to a larger number of hyperedges when ϵ_2 > 0, consistent with the result from Figure <ref>. When 2-node hyperedges are fully rewired, even with 50% of edges being activated as seeds, only about 75% of the total edges are activated by the end of the process in the best case.
§.§ Simulations on Empirical Data
We also simulated the encapsulation dynamics on the same empirical datasets described in Table <ref> and their randomizations. In the top rows of Figures <ref> and <ref>, we show the proportion of non-seed hyperedges activated after 25 steps across all datasets with varying seed strategies and increasing number of initially active seed hyperedges.[Since the dynamics are deterministic once the seed hyperedges are chosen, usually only a small number of simulation steps are needed before the spreading stops. 25 steps is more than necessary for all of these datasets.] In the bottom rows of each figure, we show the difference between the observed and randomized outcomes.
In strict encapsulation dynamics (Figure <ref>), where pairwise edges can only be activated if one of their constituent nodes is present as a hyperedge, no further hyperedges are activated on average for small numbers of seeds across the coauthorship and face-to-face contact datasets. In the email datasets, the dynamics already take off with just 10 seed hyperedges and the smallest hyperedges first strategy clearly has an advantage in both the observed and randomized datasets. In fact, across all of the datasets the smallest first strategy is the most effective, and it also tends to be the strategy with largest difference in final activations between the observed and layer randomized hypergraphs. In general, activations on the layer randomization are much lower than in the observed hypergraphs, which is as expected since the observed data contains many more encapsulation relationships.
In the non-strict encapsulation dynamics (Figure <ref>), we again see that more non-seed edges are activated in the observed hypergraph with more encapsulation relationships. In the face-to-face social contact datasets, a single seed is enough to activate the entire observed hypergraph. Similarly, in the email datasets the final number of activations is consistent regardless of the number of seeds, until falling off at high numbers of seeds, likely due to the smaller proportion of available hyperedges to activate. However, in the layer-randomized hypergraphs, in the face-to-face contact and email datasets there appears to be a limit on the amount of non-seed hyperedges that can become activated.
We also note that in non-strict encapsulation dynamics, there is not a clearly best hyperedge seed placement strategy across the datasets. It is intuitive that the size biased strategies work well in non-strict dynamics with small numbers of seeds, since this strategy will by definition activate the most nodes, and these nodes can in turn activate pairwise edges they participate in, essentially translating into more seeds.
§ CONCLUSION
Higher-order networks have emerged, in recent years, as a promising approach to represent and model interacting systems.
Among this broad family of models, approaches based on hypergraphs help to characterise the global structure and collective dynamics when interactions involve more than two agents. In this work, we have proposed novel ways to quantify the relations between hyperedges in real-world datasets. Based on the notions of overlap and encapsulation, we propose two alternative ways to represent a hypergraph as a graph where the nodes are the original hyperedges. In this line graph representation, edges may be directed to encode the encapsulation of a hyperedge in another, or undirected to encode the number of nodes in common between them. We have focused in detail on the structure induced by encapsulation, proposing a randomization strategy to erase encapsulation relations between hyperedges, while preserving other structural patterns, and quantifying how different real-world data are from what would be expected in a simplicial complex representation.
As a second step, we turned to dynamics. In contrast with works focusing on the difference exhibited by a dynamical process on a hypergraph and on its corresponding projection on a graph, we explore the impact of encapsulation on spreading and compare the dynamics taking place on real-world hypergraphs and their randomization. To do so, we focus on a dynamical process specifically designed for hypergraphs – the encapsulation dynamics is trivial on graphs – and demonstrate that encapsulation facilitates spreading in situations when smaller hyperedges fuel the activation of larger hyperedges. Our work contributes to the recent efforts to understand how hypergraph structure impacts dynamics. Future research directions include a more thorough focus on the importance of overlap, but also testing our metrics to study other dynamical models, e.g. for synchronisation.
There remain many potential avenues for future work in this area. We have focused on a simple, size-layer-based approach to randomizing hypergraphs, but there exist in the literature other ways of randomizing hyperedges, including the configuration model approach introduced in <cit.> and the multiplex approach in <cit.>. In contrast to our randomization, which preserves the size distribution of hyperedges and the unlabeled within-layer node degree distributions, both of these models preserve more general notions of degree, including the overall hyperdegree and the detailed within-layer degree of each node. Another potential research direction concerns the encapsulation dynamics, that was kept as simple as possible for the purpose of this work, but could be defined in different variants, as we allude to in <ref>, in the same way that different types of threshold dynamical models have been explored in the literature. Finally, as we noted, the intersection and encapsulation relations are just two out of the several ways in which the relation between hyperedges can be measured. A combined analysis of the multiple line graphs that can be associated to the same hypergraph is also a promising research direction.
In this work, we ignored the temporal aspect of hypergraphs; however, in the future the ideas introduced here could be extended to understand encapsulation patterns in temporal or dynamic hypergraphs, following work such as <cit.>.
Our work could also be integrated with existing literature on higher-order motifs in hypergraphs <cit.>. Further research could also be done on analyzing the DAG structures we investigated here using recent work on the cyclic analysis of DAGs <cit.>.
§ AVAILABILITY OF CODE AND DATA
Code implementing the measurements and simulations shown in this paper will be made available at <https://github.com/tlarock/encapsulation-dynamics/> <cit.>. All of the empirical data was made available with the publication of <cit.> and can be found online at <https://www.cs.cornell.edu/ arb/data/>.
§ ACKNOWLEDGMENT
The authors acknowledge support from the EPSRC Grant EP/V03474X/1. TL acknowledges the use of open source code made available by the developers of many projects including NumPy <cit.>, SciPy <cit.>, NetworkX <cit.>, MatPlotLib <cit.>, and compleX Group Interactions (XGI) <cit.>.
§ DATA
Table <ref> shows the same statistics as Table <ref>, but for the whole hypergraph, rather than just the largest connected component.
§ DISCUSSION OF ALTERNATIVE DYNAMICS
Due to the multidimensionality inherent to hypergraphs, there are numerous valid choices for specifying a spreading process of the type we study here, each of which have their own conceptual and practical advantages and pitfalls. In this Appendix we discuss some of the potential alternatives that could be investigated in the future. We focus specifically on the specification of spreading over hyperedges - for a brief discussion of node-based threshold models on hypergraphs, see <ref>.
The first and most important choice in specifying the dynamics is deciding which hyperedges can influence one another. In the main text, we presented a model where only hyperedges at adjacent levels in the encapsulation DAG can influence each other, e.g. one in which only hyperedges of size k-1 can influence a hyperedge of size k. These are in some sense the most directly applicable to the “ideal” encapsulation DAG, since the dynamics directly spread over the DAG structure. However, we are also interested in how our spreading process unfolds on empirical hypergraphs, and we cannot know in advance whether the DAG connectivity will be suitable for spreading.
With this limitation in mind, we can also specify a version of encapsulation dynamics where we relax the condition from requiring immediately adjacent hyperedges to empirically adjacent hyperedges, meaning that a hyperedge α can be influenced by hyperedges it encapsulates that are of the maximum size k < |α| existing in the hypergraph. For example, if a hyperedge on 4 nodes does not encapsulate any hyperedges on 3 nodes, but does encapsulate a hyperedge on 2 nodes, we allow this smaller hyperedge to influence the larger.
The encapsulation dynamics presented in the main text are the most true to the spirit of the encapsulation relation, since they require that the encapsulation DAG has a specific structure. The empirical encapsulation relaxation is more flexible and compatible with the variety of structures we expect to see in empirical data, but the cost of this flexibility is that in some cases very small hyperedges can “punch above their weight” by activating much larger hyperedges just by virtue of being the only observed encapsulated edge.
We can address this issue in a few ways. In the first place, we could set the threshold τ to be at least the number of individual nodes in the hyperedge. With this threshold, it would only be possible for single nodes to activate a larger hyperedge if all of them were activated. However, this “global” threshold could have the effect of making it impossible to activate some hyperedges, for example a hyperedge with only one encapsulation, but where that encapsulation is of size k-1, which would also be counter-intuitive. Instead, size-specific threshold models could be given, such that a different number of different sized hyperedges could be necessary to activate a hyperedge.
Finally, there is the question of whether activation should go in only one direction, from smaller hyperedges up to larger hyperedges, or in both directions. In this work we have only allowed activation to flow from smaller to larger hyperedges, but it would be equally reasonable to assume that once a larger hyperedge has been activated, all or some of its subsets also become active. We leave investigation of this style of model for future work.
§ THRESHOLD CONTAGION MODEL
In this Appendix, we show some results on a traditional node-based threshold contagion model on a hypergraph to contrast with the encapsulation dynamics we introduced in the main text. Just as in encapsulation dynamics, in our threshold model every node u∈ V and hyperedge α∈ E in the hypergraph is in a binary state, either inactive or active, in each discrete timestep. At each step, an inactive hyperedge α, x_α=0 is activated if the number of already-activated nodes within the hyperedge is larger than a threshold. When a hyperedge is activated, all of its member nodes u ∈α are also activated. We define the threshold based on the size of the hyperedge, specifically |α|-τ. An inactive hyperedge α will be activated if
∑_u ∈α s_u ≥ |α| - τ,
that is, if the number of activated nodes is at least the size of the hyperedge minus the threshold.
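For concreteness, a single synchronous update step of this node-based threshold model could be sketched as follows, with node states s and hyperedge states x stored in dictionaries (an assumption made for this example).

```python
# Sketch of one update step of the node-based threshold model defined above.
def threshold_step(hyperedges, s, x, tau=0):
    newly_active = [e for e in hyperedges
                    if x[e] == 0 and sum(s[u] for u in e) >= len(e) - tau]
    for e in newly_active:
        x[e] = 1
        for u in e:          # activating a hyperedge also activates all of its member nodes
            s[u] = 1
    return newly_active
```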
These dynamics could still be sensitive to encapsulation structure in a hypergraph, however the overlap structure of the hypergraph can play an equally important role, since there is no requirement that smaller hyperedges are activated first to activate enough nodes to finally activate larger hyperedges.
We run simulations on empirical datasets using two threshold values, τ=0 and τ=1, and present the results in Figure <ref>. When τ=1 (top plot), meaning that an inactive hyperedge α becomes active when at most one inactive node remains in α, a single seed activates the entire hypergraph for both the face-to-face contact and email datasets. In the coauthorship datasets, full activation is never achieved in either observed or randomized datasets.
When τ=0, meaning all nodes must be activated for a hyperedge to become active, a sort of unanimity condition, the outcomes are dependent on the dataset. Starting with the email-Eu dataset, we see that as the number of seed hyperedges increases, choosing the largest hyperedges first is the most effective strategy on the observed data until the number of seeds increases past 10^3, where all of the methods converge. In the email-Enron dataset there is a similar pattern, but the difference between the outcome on the observed hypergraph and the random hypergraph is smaller across the simulations. The two contact datasets show similar patterns across all of the seeding strategies and in both observed and randomized hypergraphs, with full activation being achieved for the largest numbers of seeds. Finally, in the coauthorship datasets almost no activation occurs until more than 10^4 hyperedges are activated as seeds, and choosing seeds proportional to their size is the best strategy.
abbrv
|
http://arxiv.org/abs/2307.04305v1 | 20230710020443 | Automatic Piano Transcription with Hierarchical Frequency-Time Transformer | [
"Keisuke Toyama",
"Taketo Akama",
"Yukara Ikemiya",
"Yuhta Takida",
"Wei-Hsiang Liao",
"Yuki Mitsufuji"
] | cs.SD | [
"cs.SD",
"cs.LG",
"eess.AS"
] |
Automatic Piano Transcription with Hierarchical Frequency-Time Transformer
Keisuke Toyama, Taketo Akama, Yukara Ikemiya, Yuhta Takida, Wei-Hsiang Liao and Yuki Mitsufuji
August 12, 2023
===============================================================
Taking long-term spectral and temporal dependencies into account is essential for automatic piano transcription.
This is especially helpful when determining the precise onset and offset for each note in the polyphonic piano content.
In this case, we may rely on the capability of the self-attention mechanism in Transformers to capture these long-term dependencies in the frequency and time axes.
In this work, we propose hFT-Transformer, which is an automatic music transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy includes a convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder that converts the dimension in the frequency axis.
The output is then fed into the second hierarchy which consists of another Transformer encoder in the time axis.
We evaluated our method on the widely used MAPS and MAESTRO v3.0.0 datasets, and it demonstrated state-of-the-art F1 scores on all four metrics: Frame, Note, Note with Offset, and Note with Offset and Velocity.
§ INTRODUCTION
Automatic music transcription (AMT) is to convert music signals into symbolic representations such as piano rolls, Musical Instrument Digital Interface (MIDI), and musical scores <cit.>.
AMT is important for music information retrieval (MIR), its result is useful for symbolic music composition, chord progression recognition, score alignment, etc.
Following the conventional methods <cit.>, we estimate one frame-level metric and three note-level metrics as follows: (1) Frame: the activation of quantized pitches in each time-processing frame, (2) Note: the onset time of each note, (3) Note with Offset: the onset and offset time of each note, and (4) Note with Offset and Velocity: the onset time, offset time, and loudness of each note.
For automatic piano transcription, it is important to analyze several harmonic structures that spread over a wide range of frequencies, since piano excerpts are usually polyphonic.
Convolutional neural network (CNN)-based methods have been used to aggregate harmonic structures as acoustic features.
Most conventional methods apply multi-layer convolutional blocks to extend the receptive field in the frequency axis.
However, the blocks often include pooling or striding to downsample the features in the frequency axis.
Such a downsampling process may reduce the frequency resolution <cit.>.
It is worth mentioning that many of these methods use 2-D convolutions, which means the convolution is simultaneously applied in the frequency and time axes.
The convolution
in the time axis works as a pre-emphasis filter to model the temporal changes of the input signals.
Up to now, recurrent neural networks (RNNs), such as the gated recurrent unit (GRU) <cit.> and long short-term memory (LSTM) <cit.>, have been popular for analyzing the temporal sequences of acoustic features.
However, recently some of the works start to use Transformer <cit.>, which is a powerful tool for analyzing sequences, in AMT tasks.
Ou et al. <cit.> applied a Transformer encoder along the time axis and suggested that using Transformer improves velocity estimation.
Hawthorne et al. <cit.> used a Transformer encoder-decoder as a sequence-to-sequence model for estimating a sequence of note events from another sequence of input audio spectrograms.
Their method outperformed other methods using GRUs or LSTMs.
Lu et al. <cit.> proposed a method called SpecTNT to apply Transformer encoders in both frequency and time axes and reached state-of-the-art performance for various MIR tasks such as music tagging, vocal melody extraction, and chord recognition.
This suggests that such a combination of encoders helps in characterizing the broad-scale dependency in the frequency and time axes.
However, SpecTNT aggregates spectral features into one token, and the process in its temporal Transformer encoder is not independent in the frequency axis.
This inspires us to incorporate Transformer encoders in the frequency and time axes and make the spectral information available for the temporal Transformer encoder.
In addition, we usually divide the input signal into chunks since the entire sequence is often too long to be dealt at once.
However, this raises a problem that the estimated onset and offset accuracy fluctuates depending on the relative position in the processing chunk.
In our observation, the accuracy tends to be worse at both ends of the processing chunk.
This motivates us to incorporate extra techniques during the inference time to boost the performance.
In summary, we propose hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
Its workflow is shown in Figure <ref>.
The first hierarchy consists of a one-dimensional (1-D) convolutional block in the time axis, a Transformer encoder in the frequency axis, and a Transformer decoder in the frequency axis.
The second hierarchy consists of another Transformer encoder in the time axis.
In particular, the Transformer decoder at the end of the first hierarchy converts the dimension in the frequency axis from the number of frequency bins to the number of pitches (88 for piano).
Regarding the issue of the location dependent accuracy fluctuation in the processing chunks, we propose a technique which halves the stride length at inference time.
It uses only the result of the central part of each processing chunk, which improves overall accuracy.
Finally, in Section <ref>, we show that our method outperforms other piano transcription methods in terms of F1 scores for all the four metrics.
An implementation of our method is available here[].
§ RELATED WORK
Neural networks, such as CNNs, RNNs, generative adversarial networks (GANs) <cit.>, and Transformers have been dominant for AMT.
Since Sigtia et al. <cit.> proposed the first method to use a CNN to tackle AMT, CNNs have been widely used for the methods of analyzing the spectral dependency of the input spectrogram <cit.>.
However, it is difficult for CNNs to directly capture the harmonic structure of the input sound in a wide range of frequencies, as convolutions are used to capture features in a local area.
Wei et al. <cit.> proposed a method of using harmonic constant-Q transform (CQT) for capturing the harmonic structure of piano sounds.
They first applied a 3-Dimensional CQT,
then applied multiple dilated convolutions with different dilation rates to the output of CQT.
Because the dilation rates are designed to capture the harmonics, the method reached state-of-the-art Frame and Note accuracy.
However, the dilation rates are designed specifically for piano.
Thus, the method is not easy to adapt to other instruments.
For analysis of time dependency, Kong et al. <cit.> proposed a method that uses GRUs.
Hawthorne et al. <cit.>, Kwon et al. <cit.>, Cheuk et al. <cit.>, and Wei et al. <cit.> proposed methods that use bi-directional LSTMs for analysis.
Ou et al. <cit.> used a Transformer encoder to replace the GRUs in Kong et al.'s method <cit.>, and showed the effectiveness of the Transformer.
Usually, the note onset and offset are estimated in each frequency and time-processing frame grid, then paired as a note for note-level transcription by post-processing algorithms such as <cit.>.
However, compared to heuristically designed algorithms, end-to-end data-driven methods are often preferred.
For example, Kelz et al. <cit.> applied a seven-state hidden Markov model (HMM) for the sequence of attack, decay, sustain, and release to achieve note-level transcription.
Kwon et al. <cit.> proposed a method of characterizing the output of LSTM as a five-state statement (onset, offset, re-onset, activate, and inactivate).
Hawthorne et al. <cit.> proposed a method of estimating a sequence of note events, such as note pitch, velocity, and time, from another sequence of input audio spectrograms using a Transformer encoder-decoder.
This method performs well in multiple instruments with the same model <cit.>.
Yan et al. <cit.> proposed a note-wise transcription method for estimating the interval between onset and offset.
This method shows state-of-the-art performance in estimating Note with Offset and Note with Offset and Velocity.
However, the performance in estimating Frame and Note is worse than that of Wei et al.'s method <cit.>.
§ METHOD
§.§ Configuration
Our proposed method aims to transcribe N frames of the input spectrogram into N frames of the output piano rolls (frame, onset, offset, and velocity) as shown in Figure <ref>, where N is the number of frames in each processing chunk.
Each input frame is composed of a log-mel spectrogram having size (F, M+1+M), where F is the number of frequency bins, and M is the size of the forward margin and that of the backward margin.
To obtain the log-mel spectrogram, we first downmix the input waveform into one channel and resample it to 16 kHz.
Then, the resampled waveform is transformed into a mel spectrogram <cit.>.
For the transformation, we use a Hann window, setting the window size as 2048, the fast-Fourier-transform size as 2048, F as 256, the padding mode as constant, and the hop-size as 16 ms.
The magnitude of the mel spectrogram is then compressed with a log function.
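The feature extraction can be sketched as follows. The specific library used by the authors is not preserved in this text, so the use of torchaudio here is an assumption; the parameter values follow the description above (16 kHz audio, 2048-point window and FFT, 256 mel bins, constant padding, 16 ms hop).

```python
# Hedged sketch of the input log-mel feature extraction described above.
import torch
import torchaudio

def log_mel(path, sr=16000, n_fft=2048, win=2048, hop_ms=16, n_mels=256):
    wav, orig_sr = torchaudio.load(path)
    wav = wav.mean(dim=0, keepdim=True)                      # downmix to mono
    wav = torchaudio.transforms.Resample(orig_sr, sr)(wav)   # resample to 16 kHz
    hop = int(sr * hop_ms / 1000)                            # 16 ms -> 256 samples
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sr, n_fft=n_fft, win_length=win, hop_length=hop,
        n_mels=n_mels, window_fn=torch.hann_window, pad_mode="constant")(wav)
    return torch.log(torch.clamp(mel, min=1e-8))             # log compression
```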
§.§ Model Architecture and Loss Functions
The model architecture of our proposed method is shown in Figure <ref>.
We first apply a convolutional block to the input log-mel spectrogram, the size of which is (B, N, F, M+1+M) where B is the batch size.
In the convolutional block, we apply a 1-D convolution in the M+1+M dimension.
After this process, the data are embedded with a linear module.
The embedded vector is then processed with the first Transformer encoder in the frequency axis.
The self-attention is processed to analyze the dependency between spectral features.
The positional information is designated as [0, 1, ..., F-1].
These positional values are then embedded with a trainable embedding.
These are processed in the frequency axis only, thus completely independent of the time axis (N dimension).
Next, we convert the frequency dimension from F to the number of pitches (P).
A Transformer decoder with cross-attention is used as the converter.
The Transformer decoder calculates the cross-attention between the output vectors of the first Transformer encoder and another trainable positional embedding made from [0, 1, ..., P-1].
The decoded vectors are then converted to the outputs of the first hierarchy with a linear module and a sigmoid function (hereafter, we call these outputs output_1st).
Regarding the loss calculation for the outputs, frame, onset, and offset are calculated with binary cross-entropy, and velocity is calculated with 128-category cross-entropy.
The losses can be summarized as the following equations:
L_bce^<m> =∑_n=0^N-1∑_p=0^P-1l_bce(y_n,p^<m>,ŷ_n,p^<m>),
L_cce^velocity =∑_n=0^N-1∑_p=0^P-1l_cce(y_n,p^velocity,ŷ_n,p^velocity),
L =L_bce^frame+L_bce^onset+L_bce^offset+L_cce^velocity,
where <m> is the placeholder for each output (frame, onset, and offset), l_bce and l_cce denote the loss function for binary cross-entropy and categorical cross-entropy, respectively, and y and ŷ denote the ground truth and predicted values of each output (frame, onset, offset, and velocity), respectively.
Although it is intuitive to apply the mean squared error (MSE) for velocity, we found in a preliminary experiment that using the categorical cross-entropy yields much better performance than the MSE.
Finally, the output of the converter is processed with another Transformer encoder in the time axis.
The self-attention is used to analyze the temporal dependency of features in each time-processing frame.
A third positional embedding made from [0, 1, ..., N-1] is used here.
Then, similar to the first hierarchy, the outputs of the second hierarchy are obtained through a linear module and a sigmoid function.
We refer to these outputs of the second hierarchy as output_2nd hereafter.
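For concreteness, the flow of tensor shapes through the two hierarchies can be sketched as below. This is only a shape-level illustration under simplifying assumptions (a single sigmoid head instead of the four frame/onset/offset/velocity heads, shared layer hyper-parameters, no dropout), not the authors' implementation; all module and variable names are ours.
import torch
import torch.nn as nn

class HFTSketch(nn.Module):
    def __init__(self, F=256, P=88, N=128, M=32, Z=256, C=4, K=5):
        super().__init__()
        self.P = P
        self.conv = nn.Conv1d(1, C, K, padding=K // 2)             # 1-D conv over the margin axis
        self.embed = nn.Linear(C * (2 * M + 1), Z)
        self.pos_f = nn.Embedding(F, Z)                            # positions [0, ..., F-1]
        self.pos_p = nn.Embedding(P, Z)                            # positions [0, ..., P-1]
        self.pos_t = nn.Embedding(N, Z)                            # positions [0, ..., N-1]
        enc = nn.TransformerEncoderLayer(Z, nhead=4, dim_feedforward=512, batch_first=True)
        dec = nn.TransformerDecoderLayer(Z, nhead=4, dim_feedforward=512, batch_first=True)
        self.enc_freq = nn.TransformerEncoder(enc, num_layers=3)   # first hierarchy (frequency axis)
        self.converter = nn.TransformerDecoder(dec, num_layers=3)  # F -> P conversion
        self.enc_time = nn.TransformerEncoder(enc, num_layers=3)   # second hierarchy (time axis)
        self.head_1st = nn.Linear(Z, 1)
        self.head_2nd = nn.Linear(Z, 1)

    def forward(self, x):                                          # x: (B, N, F, 2M+1)
        B, N, F, W = x.shape
        h = self.conv(x.reshape(B * N * F, 1, W)).reshape(B * N, F, -1)
        h = self.embed(h) + self.pos_f(torch.arange(F, device=x.device))
        h = self.enc_freq(h)                                       # self-attention along frequency
        q = self.pos_p(torch.arange(self.P, device=x.device)).unsqueeze(0).expand(B * N, -1, -1)
        z = self.converter(q, h)                                   # cross-attention: (B*N, P, Z)
        out_1st = torch.sigmoid(self.head_1st(z)).reshape(B, N, self.P)
        t = z.reshape(B, N, self.P, -1).permute(0, 2, 1, 3).reshape(B * self.P, N, -1)
        t = self.enc_time(t + self.pos_t(torch.arange(N, device=x.device)))
        out_2nd = torch.sigmoid(self.head_2nd(t)).reshape(B, self.P, N).permute(0, 2, 1)
        return out_1st, out_2nd                                    # only out_2nd is used at inference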
The losses for the output_2nd are evaluated in the same way as those for output_1st.
These losses are summed with the coefficients α_1st and α_2nd as follows:
L_all=α_1stL_1st+α_2ndL_2nd.
Although both outputs are used for computing losses during training, only output_2nd is used in inference.
Chen et al. <cit.> reported that calculating multiple losses outperformed using a single loss; this suggests that utilizing both output_1st and output_2nd in training has the potential to achieve better performance.
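A compact sketch of this training objective is given below. It assumes sigmoid outputs for frame/onset/offset, velocity logits of shape (B, N, P, 128) with integer velocity labels, and dictionaries as containers; the tensor names are ours.
import torch.nn.functional as Fnn

def hierarchy_loss(out, target):
    """out: dict with 'frame', 'onset', 'offset' in [0, 1] of shape (B, N, P) and
    'velocity' logits of shape (B, N, P, 128); target holds the corresponding labels."""
    loss = sum(Fnn.binary_cross_entropy(out[k], target[k]) for k in ("frame", "onset", "offset"))
    loss = loss + Fnn.cross_entropy(out["velocity"].flatten(0, 2), target["velocity"].flatten())
    return loss

def total_loss(out_1st, out_2nd, target, alpha_1st=1.0, alpha_2nd=1.0):
    # Both hierarchies contribute to the training loss; only out_2nd is used at inference.
    return alpha_1st * hierarchy_loss(out_1st, target) + alpha_2nd * hierarchy_loss(out_2nd, target)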
§.§ Inference Stride
As mentioned in Section <ref>, chunk-based processing is required because the input length is limited due to system limitations, such as memory size and acceptable processing delay.
We found that the estimation error tends to increase at certain positions within each processing chunk.
This can be demonstrated by evaluating the error for each instance of time n within the chunks:
𝑒𝑟𝑟𝑜𝑟_n^<m>=1/IP∑_i=0^I-1∑_p=0^P-1(y_i,n,p^<m>-ŷ_i,n,p^<m>)^2,
where <m> is the placeholder for each output (frame, onset, offset, and velocity), and I is the number of processing chunks over the test set.
The result using our proposed model trained using the MAESTRO training set (described in Section <ref>) is shown in Figure <ref>.
Here, the error 𝑒𝑟𝑟𝑜𝑟_n^<m> is calculated using the MAESTRO test set.
In the figure, we observe a monotonic decrease for frame and a similar but much weaker trend for onset and offset. However, for velocity, no such trend can be observed.
This motivates using only the middle portion of each processing chunk as the output to reduce the error rate. We call this the half-stride strategy, since a 50% overlap between processing chunks is required, as shown in Figure <ref> (B).
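A sketch of this inference scheme is shown below; it assumes a model callable that maps one chunk of N input frames to (N, P) predictions, and it ignores the first and last quarter-chunk at the sequence boundaries for brevity.
import numpy as np

def infer_half_stride(frames, model, N=128):
    """frames: (T, ...) per-frame inputs; returns predictions built from the middle
    N/2 frames of each chunk, using a 50% overlap between consecutive chunks."""
    T, q, outputs = frames.shape[0], N // 4, []
    for start in range(0, T - N + 1, N // 2):         # half-stride: 50% chunk overlap
        pred = model(frames[start:start + N])          # (N, P)
        outputs.append(pred[q:q + N // 2])              # keep only the middle portion
    return np.concatenate(outputs, axis=0)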
§ EXPERIMENTS
§.§ Datasets
We use two well-known piano datasets for the evaluation.
The MAPS dataset <cit.> consists of CD-quality recordings and corresponding annotations of isolated notes, chords, and complete piano pieces.
We use the full musical pieces and the train/validation/test split as stated in <cit.>.
The number of recordings and the total duration in hours in each split are 139/71/60 and 8.3/4.4/5.5, respectively.
The MAESTRO v3.0.0 dataset <cit.> includes about 200 hours of paired audio and MIDI recordings from ten years of the International Piano-e-Competition.
We used the train/validation/test split configuration as provided.
In each split, the number of recordings and total duration in hours are 962/137/177 and 159.2/19.4/20.0, respectively.
For both datasets, the MIDI data have been collected by Yamaha Disklaviers concert-quality acoustic grand pianos integrated with a high-precision MIDI capture and playback system.
§.§ Model Configuration
Regarding our model architecture depicted in Figure <ref>, we set N as 128, M as 32, F as 256, P as 88, the CNN channels (C) as 4, size of the CNN kernel (K) as 5, and embedding vector size (Z) as 256.
For the Transformers, we set the feed-forward network vector size as 512, the number of heads as 4, and the number of layers as 3.
For training, we used the following settings: a batch size of 8, learning rate of 0.0001 with Adam optimizer<cit.>, dropout rate of 0.1, and clip norm of 1.0.
A scheduler from the library is used for learning-rate scheduling with default parameters.
We set α_1st and α_2nd as 1.0, which were derived from a preliminary experiment (see Section <ref>).
We trained our models for 50 epochs on the MAPS dataset and 20 epochs on the MAESTRO dataset using one NVIDIA A100 GPU.
It took roughly 140 minutes and 43.5 hours to train one epoch with our model for MAPS and MAESTRO, respectively.
The best model is determined by choosing the one with the highest F1 score in the validation stage.
In order to obtain high-resolution ground truth for onset and offset, we followed the method in Kong et al. <cit.>.
We set J, the hyper-parameter to control the sharpness of the targets, to 3.
Also, the label of velocity is set only when an onset is present.
We set the threshold to 0.5, meaning that if the onset value is smaller than 0.5, the velocity is set to 0.
§.§ Inference
At inference time, we use output_2nd as the final output.
We set the threshold for frame as 0.5.
For note-wise events (onset, offset, and velocity), the outputs in each pitch-frame grid are converted to a set containing note-wise onset, offset, and velocity following Kong et al.'s Algorithm 1 <cit.> in five steps shown below:
Step 1. onset detection: find a local maximum in onset with a value of at least 0.5. Then calculate the precise onset time using the values of the adjacent three frames <cit.>.
Step 2. velocity: if an onset is detected in Step 1, extract the velocity value at that frame. If the value is zero, discard both the onset and the velocity at this frame.
Step 3. offset detection with offset: find a local maximum in offset with a value of at least 0.5. Then calculate the precise offset time using the values of the adjacent three frames <cit.>.
Step 4. offset detection with frame: choose the frame nearest to the detected onset whose frame value is below 0.5.
Step 5. offset decision: choose the smaller of the values obtained in Steps 3 and 4.
An example is shown in Figure <ref>.
The onset is 4.003, and the velocity is 61.
For offset, the direct estimation from offset is 4.043, and that estimated via frame is 4.064.
Thus, we choose 4.043 as offset.
Finally, we obtain a note with {onset: 4.003, offset: 4.043, velocity: 61} in the output.
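The five steps above can be sketched as follows. The precise-time interpolation from adjacent frames <cit.> is omitted, and the array layout and variable names are our assumptions; velocity is taken as already decoded per frame (e.g., the argmax of the 128-way output).
def decode_notes(frame, onset, offset, velocity, hop=0.016, thr=0.5):
    """frame/onset/offset: (N, P) arrays in [0, 1]; velocity: (N, P) integer values."""
    N, P = onset.shape
    notes = []
    for p in range(P):
        for n in range(1, N - 1):
            # Step 1: onset = local maximum with a value of at least thr
            if not (onset[n, p] >= thr and onset[n - 1, p] <= onset[n, p] >= onset[n + 1, p]):
                continue
            vel = int(velocity[n, p])                      # Step 2: velocity at the onset frame
            if vel == 0:
                continue
            # Step 3: next local maximum of offset with a value of at least thr
            off_a = next((m * hop for m in range(n + 1, N - 1)
                          if offset[m, p] >= thr
                          and offset[m - 1, p] <= offset[m, p] >= offset[m + 1, p]), None)
            # Step 4: first frame after the onset whose frame activation falls below thr
            off_b = next((m * hop for m in range(n + 1, N) if frame[m, p] < thr), None)
            candidates = [t for t in (off_a, off_b) if t is not None]   # Step 5: keep the earlier one
            if candidates:
                notes.append({"pitch": p, "onset": n * hop,
                              "offset": min(candidates), "velocity": vel})
    return notes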
§.§ Metrics
We evaluate the performance of our proposed method with frame-level metrics (Frame) and note-level metrics (Note, Note with Offset, and Note with Offset & Velocity) with the standard precision, recall, and F1 scores.
We calculated these scores using library <cit.> with its default settings.
The scores were calculated per recording, and the mean of these per-recording scores was presented as the final metric for a given collection of pieces, as explained in Hawthorne et al. <cit.>.
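For reference, the note-level scores can be computed with the library cited above along the following lines; velocity-aware scoring via its transcription_velocity module is omitted, and the note-list format is an assumption on our part.
import numpy as np
import mir_eval.transcription

def note_scores(ref_notes, est_notes):
    """Each note is (onset_s, offset_s, midi_pitch). Returns (precision, recall, F1)
    for the Note (onset-only) and Note with Offset metrics, with default tolerances."""
    iv = lambda notes: np.array([[n[0], n[1]] for n in notes])
    hz = lambda notes: np.array([440.0 * 2.0 ** ((n[2] - 69) / 12.0) for n in notes])
    ri, rp, ei, ep = iv(ref_notes), hz(ref_notes), iv(est_notes), hz(est_notes)
    note = mir_eval.transcription.precision_recall_f1_overlap(ri, rp, ei, ep, offset_ratio=None)
    note_off = mir_eval.transcription.precision_recall_f1_overlap(ri, rp, ei, ep)
    return {"Note": note[:3], "Note with Offset": note_off[:3]}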
§.§ Results
Tables <ref> and <ref> show the scores on the test sets of MAPS and MAESTRO datasets.
The numbers of parameters in these tables are taken from <cit.>.
For the MAPS dataset, our proposed method outperformed the other methods in F1 score for all metrics.
For the MAESTRO dataset, our proposed method outperformed the other methods in F1 score for Note, Note with Offset, and Note with Offset & Velocity.
Furthermore, our method with the half-stride strategy described in Section <ref> outperformed the other methods in all metrics.
In contrast, the two state-of-the-art methods for MAESTRO, which are Semi-CRFs <cit.> and HPPNet-sp <cit.>, performed well only on a subset of the metrics.
The results suggest that the proposed two-level hierarchical frequency-time Transformer structure is promising for AMT.
§.§ Ablation Study
To investigate the effectiveness of each module in our proposed method, we trained various combinations of those modules using the MAPS training set and evaluated them using the MAPS validation set.
The variations are shown in Table <ref>.
In this study, we call our proposed method 1-F-D-T, which means it consists of the 1-D convolution block, the first Transformer encoder in the Frequency axis, the Transformer Decoder, and the second Transformer encoder in the Time axis.
Table <ref> shows evaluation results for each variation.
Second Transformer encoder in time axis.
To verify the effectiveness of the second Transformer encoder, we compared the 1-F-D-T and the model without the second Transformer encoder (1-F-D-N).
For the 1-F-D-N model, we use output_1st in both training and inference stages as the final output.
The result indicates that the second Transformer encoder improved Note with Offset performance, in which the F1 score is 84.42 for 1-F-D-T and 80.23 for 1-F-D-N.
This shows the effectiveness of the second Transformer encoder as it provides an extra pass to model the temporal dependency of acoustic features, which is presumably helpful in offset estimation.
Complexity of the convolutional block.
To investigate how the complexity of the convolutional block affects the AMT performance, we compared the 1-F-D-T model and the model that replaces the 1-D convolutional block with a 2-D convolutional block (2-F-D-T).
Surprisingly, the result shows that the performance of the 2-F-D-T model is significantly worse than that of the 1-F-D-T model.
This is probably because the two modules working on the spectral dependency do not cohere with each other.
The 2-D convolutional block may over-aggregate the spectral information, resulting in an effectively lower frequency resolution. The Transformer encoder can then only evaluate the spectral dependency over an over-simplified feature space, causing the performance degradation.
Converter.
We used a Transformer decoder to convert the dimension in the frequency axis from F to P.
In contrast, almost all of the existing methods used a linear module to achieve this.
We compared the performance of the 1-F-D-T model to a model with the Transformer decoder replaced by a linear converter (1-F-L-T).
The result indicates that the 1-F-D-T model outperformed the 1-F-L-T model in F1 score for all four metrics.
Especially, the difference in Note with Offset and Velocity is large (75.95 for the 1-F-D-T model and 69.34 for the 1-F-L-T model in F1 score).
This suggests that using a Transformer decoder as the converter is an effective way of improving the performance, although the side effect is an increase in model size.
We also investigated how the coefficients of the loss functions, α_1st and α_2nd in Eqn (<ref>), affect the performance. We evaluated six pairs of coefficients (α_1st, α_2nd), i.e., (1.8, 0.2), (1.4, 0.6), (1.0, 1.0), (0.6, 1.4), (0.2, 1.8), and (0.0, 2.0), for the 1-F-D-T model.
Figure <ref> shows the F1 scores of frame, onset, offset, and velocity evaluated on the MAPS validation set in each epoch.
These results indicate that the (1.0, 1.0) pair yields the best score.
It also shows that the training converges faster when α_1st is larger than α_2nd.
Importantly, if we omit output_1st, which is the case when training with the pair (0.0, 2.0), the training loss does not decrease much.
As a result, the F1 score stays around 0% and therefore cannot be seen in Figure <ref>.
This suggests that it is crucial to use both losses, output_1st and output_2nd in our proposed method.
§ CONCLUSION
In this work, we proposed hFT-Transformer, an automatic piano transcription method that uses a two-level hierarchical frequency-time Transformer architecture.
The first hierarchy consists of a 1-D convolutional block in the time axis, a Transformer encoder and a Transformer decoder in the frequency axis, and the second hierarchy consists of a Transformer encoder in the time axis.
The experimental results on two well-known piano datasets, MAPS and MAESTRO, revealed that our two-level hierarchical architecture works effectively and outperforms other state-of-the-art methods in F1 score for frame-level and note-level transcription metrics.
For future work, we would like to extend our method to other instruments and multi-instrument settings.
§ ACKNOWLEDGMENTS
We would like to thank Giorgio Fabbro and Stefan Uhlich for their valuable comments while preparing this manuscript.
We are grateful to Kin Wai Cheuk for his dedicated support in preparing our github repository.
|
http://arxiv.org/abs/2307.03948v1 | 20230708101129 | Reading Between the Lanes: Text VideoQA on the Road | [
"George Tom",
"Minesh Mathew",
"Sergi Garcia",
"Dimosthenis Karatzas",
"C. V. Jawahar"
] | cs.CV | [
"cs.CV"
] |
G. Tom et al.
Center for Visual Information Technology (CVIT), IIIT Hyderabad, India
{george.tom,minesh.mathew}@research.iiit.ac.in, [email protected]
Computer Vision Center (CVC), UAB, Spain
{sergi.garcia,dimos}@cvc.uab.cat
AllRead Machine Learning Technologies
Reading Between the Lanes: Text VideoQA on the Road
George Tom1 0009-0002-7343-1680 Minesh Mathew1 0000-0002-0809-2590 Sergi Garcia-Bordils2,3 0000-0002-4222-8367 Dimosthenis Karatzas2 0000-0001-8762-4454 C.V. Jawahar10000-0001-6767-7057
August 12, 2023
=============================================================================================================================================================================================
Text and signs around roads provide crucial information for drivers, vital for safe navigation and situational awareness.
Scene text recognition in motion is a challenging problem, while textual cues typically appear for a short time span, and early detection at a distance is necessary. Systems that exploit such information to assist the driver should not only extract and incorporate visual and textual cues from the video stream but also reason over time.
To address this issue, we introduce RoadTextVQA, a new dataset for the task of video question answering (VideoQA) in the context of driver assistance. RoadTextVQA consists of 3,222 driving videos collected from multiple countries, annotated with 10,500 questions, all based on text or road signs present in the driving videos. We assess the performance of state-of-the-art video question answering models on our RoadTextVQA dataset, highlighting the significant potential for improvement in this domain and the usefulness of the dataset in advancing research on in-vehicle support systems and text-aware multimodal question answering. The dataset is available at http://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqahttp://cvit.iiit.ac.in/research/projects/cvit-projects/roadtextvqa
§ INTRODUCTION
In this work, we propose a new dataset for Visual Question Answering (VQA) on driving videos, with a focus on questions that require reading text seen on the roads and understanding road signs. Text and road signs provide important information to the driver or a driver assistance system and help to make informed decisions about their route, including how to reach their destination safely and efficiently. Text on roads can also provide directions, such as turn-by-turn directions or the distance to a destination. Road signs can indicate the location of exits, rest stops, and potential hazards, such as road construction or detours. Reading text and understanding road signs is also important for following traffic laws and regulations. Speed limit signs, yield signs, and stop signs provide important information that drivers must follow to ensure their own safety and the safety of others on the road.
VQA is often dubbed as the Turing test for image/video understanding. The early datasets for VQA on images and videos <cit.> largely ignored the need for reading and comprehending text on images and videos, and questions were mostly focus on the visual aspects of the given image or video. For example, questions focused on the type, attributes and names of objects, things or people. However, the text is ubiquitous in outdoor scenes, and this is evident from the fact that nearly 50% of the images in the MS-COCO dataset have text in them <cit.>.
Realizing the importance of reading text in understanding visual scenes, two datasets—Scene text VQA <cit.> and Text VQA <cit.> were introduced that focus exclusively on VQA involving scene text in natural images.
Two recent works called NewsVideoQA<cit.>, and M4-ViteVQA<cit.> extend text-based VQA works to videos by proposing VQA tasks that exclusively focus on question-answers that require systems to read the text in the videos.
Similar to these works that focus on text VQA on videos, our work proposes a new dataset where all the questions need to be answered by watching driving videos and reading the text in them. However, in contrast to NewsVideoQA which contains news videos where question-answer pairs are based on video text (born-digital embedded text) appearing on news tickers and headlines, the text in videos in our dataset are scene text. The text in the road or driving videos are subjected to blur, poor contrast, lighting conditions and distortions. Text while driving goes by fast and tends to be heavily occluded. Often, multiple frames needs to be combined to reconstruct the full text, or a good frame with readable text needs to be retrieved. These difficulties made researchers focus on road-text recognition exclusively, and there have been works that focus exclusively on the detection, recognition and tracking of road text videos <cit.>. On the other hand M4-ViteVQA contains varied type of videos such as sports videos, outdoor videos and movie clips. A subset of these videos are driving videos. In contrast, our dataset is exclusively for VQA on driving videos and contains at least three times more questions than in the driving subset of M4-ViteQA. Additionally, questions in our dataset require both reading road text and understanding road signs, while M4-ViteVQA's focus is purely on text-based VQA.
Specifically our contributions are the following:
* We introduce the first large scale dataset for road text and road sign VQA containing 10K+ questions and 3K+ videos.
* We provide a thorough analysis of the dataset and present detailed statistics of videos, questions and answers. We also establish heuristic baselines and upper bounds that help to estimate the difficulty of the problem.
* We evaluate an existing popular VQA model and two SoTA VideoQA models on our dataset and demonstrate that these models fail to perform well on the new dataset since they are not designed to read and reason about text and road signs.
§ RELATED WORK
§.§ VideoQA
In video question answering(VideoQA), the goal is to answer the question in the context of the video. Earlier approaches to VideoQA use LSTM to encode the question and videos<cit.>.
Several datasets have been created in recent years to assist research in the field of video question answering (VideoQA). Large datasets such as MSRVTT-QA<cit.> contain synthetic generated questions and answers where the questions require only an understanding of the visual scenes. MOVIE-QA<cit.> and TVQA<cit.> are based on scenes in movies and TV shows. Castro et al.<cit.> introduced a dataset with videos from the outside world for video understanding through VideoQA and Video Evidence Selection for interpretability. MOVIE-QA<cit.>, TVQA<cit.>, HowtoVQA69M<cit.>
provide explicit text in the form of subtitles. Multiple-Choice datasets<cit.> consist of a pre-defined set of options for answers. When compared to open-ended datasets, they can be considered limiting in the context of real-world applications. Synthetically generated datasets<cit.> contain questions that are generated through processing video descriptions, narration and template questions. MSRVTT-QA<cit.> exploits the video descriptions for QA creation. HowToVQA69M<cit.> uses cross-modal supervision and language models to generate question-answer pairs from narrated videos, whereas ActivityNetQA<cit.> uses template questions to generate the QA pairs. Xu et al. introduced the SUTD-TrafficQA<cit.> dataset and the Eclipse model for testing systems' ability to reason over complex traffic scenarios. The SUTD-TrafficQA<cit.> dataset contains multiple-choice questions that are based on different traffic events. RoadTextVQA is an open-ended dataset that deals with questions related to the text information found in road videos or the signs posted along roads. Recent studies<cit.> on pretraining transformers on other vision and language tasks have shown excellent results for the VideoQA task. Lei et al. <cit.>, in their study, uncovered the bias present in many video question-answering datasets, which only require information from a single frame to answer, and introduced new tasks aimed at training models to answer questions that necessitate the use of temporal information.
§.§ VideoQA involving video text
NewsVideoQA<cit.> and M4-ViteVQA<cit.> are two recently introduced datasets that include videos with embedded born-digital text and scene text, respectively. Both datasets require an understanding of the text in videos to answer the questions.
Embedded text, sometimes called video text in news videos, is
often displayed with good contrast and in an easy-to-read style.
Scene text in the RoadTextVQA dataset can be challenging to read due to factors such as occlusion, blur, and perspective distortion. M4-ViteVQA contains videos from different domains, a few of them being shopping, driving, sports, movies and vlogs. RoadTextVQA is more than three times the size of the driving subset of M4-ViteVQA. Additionally, a subset of questions in RoadTextVQA requires domain knowledge to answer questions related to road signs. A few recent works<cit.> on vision and language transformers have been shown to work well on text-based VQA tasks. Kil et al.<cit.> introduced PreSTU, a pretraining method that improves text recognition and connects the recognized text with the rest of the image. GIT (GenerativeImage2Text)<cit.> is a transformer-based model for vision and language tasks with a simple architecture that does not depend on external OCR or object detectors.
§.§ Scene Text VQA
Our work, which focuses on VQA requiring text comprehension within videos, shares similarities with other studies dealing with text in natural images, commonly known as Scene Text VQA. The ST-VQA<cit.> and TextVQA<cit.> datasets were the first to incorporate questions requiring understanding textual information from natural images. LoRRa<cit.> and M4C<cit.> utilized pointer networks<cit.> that generate answers from a fixed vocabulary and OCR tokens. In addition, M4C used a multimodal transformer<cit.> to integrate different modalities. TAP<cit.> employed a similar architecture to M4C and incorporated a pretraining task based on scene text, improving the model's alignment among the three modalities. Another study, LaTr<cit.>, focused on pretraining on text and layout information from document images and found that incorporating layout information from scanned documents improves the model's understanding of scene text.
§ ROADTEXTVQA DATASET
This section looks at the data collection and annotation procedure, data analysis, and statistics.
§.§ Data Collection
The videos used in the dataset are taken from the RoadText-3K<cit.> dataset and YouTube. The RoadText-3K dataset includes 3,000 ten-second road videos that are well-suited for annotation because they have a considerable quantity of text.
The RoadText-3K dataset includes videos recorded in the USA, Europe, and India and features text in various languages such as English, Spanish, Catalan, Telugu and Hindi. Each video contains an average of 31 tracks. However, the European subset is excluded from the annotation process for RoadTextVQA as it is dominated by texts in Spanish/Catalan, and the RoadTextVQA is designed specifically for English road-text.
In addition to the videos from RoadText-3K, additional dashcam videos were sourced from the YouTube channel J Utah[ <https://www.youtube.com/@jutah>]. 252 videos from the USA and the UK were selected, and clips with a substantial amount of text were further selected by running a text detector over the video frames. We chose EasyOCR<cit.> as the text detector, since it is a free, open-source detector that is popular for scene text detection. The RoadText-3K videos have a resolution of 1280x720 with a frame rate of 30 frames per second; to keep the data consistent, the YouTube clips were downsampled to the same resolution and frame rate.
Individuals who are proficient in the English language were hired to create the question-answer pairs. To ensure the quality of the applicants, an initial training session was conducted, followed by a filtering mechanism in the form of a comprehensive quiz. The quiz was designed to ensure that the question-answer pairs were created by individuals who had a solid grasp of the English language and a good understanding of the task, thereby enabling us to maintain a high standard of quality in the annotations.
The annotation process involved two stages, and a specifically designed web-based annotation tool was used. In the initial stage, annotators add the question, answers and timestamp triads for videos shown to them.
All the questions have to be based either on some text present in the video or on a road sign. In cases where a question could have multiple answers in a non-ambiguous way, the annotators were given the option to enter several answers. The timestamp is an additional data point that is collected; it marks the most apt point in the video at which the question becomes answerable. The annotators were instructed to limit the number of questions to no more than ten per video and to avoid asking any questions related to vehicle license plate numbers. If no questions could be asked about a video, the annotators were given the option to reject it.
In the verification stage, the video and the questions are shown, and the annotators had to add the answers and the timestamps. We made sure that verification is done by an annotator different from the one who has annotated it in the first stage.
If the question is incorrect or does not follow the annotation guidelines, it is flagged and rejected. If for a question, there are common answers in the annotation stage and verification stage, then that question is considered valid. All the common answers are considered valid answers to the question.
In the verification stage, additional data regarding the question-answers are also collected. The questions are categorically tagged into two distinct classes. Firstly, based on the type of question— text-based or traffic sign-based.
The second classification captures whether the answer for a question, i.e., the text that makes up the answer, is present in the video or not.
§.§ Data Statistics and Analysis
The RoadTextVQA dataset contains 3,222 videos and 10,500 question-answer pairs.
Among the 3,222 videos, 1,532 videos are taken from the RoadText-3K dataset and the rest are from YouTube.
The data is randomly split into 2,557 videos and 8,393 questions in the train set, 329 videos and 1,052 questions in the test, and 336 videos and 1,055 questions in the validation set.
The videos for the test and validation sets were randomly chosen from the RoadText-3K split, as it has ground truth annotations for text tracking. Methods that use OCR data can take advantage of the accurate annotations provided by RoadText-3K.
We present statistics related to the questions in RoadTextVQA through <ref>, and <ref>. <ref> shows the most frequent questions and their frequencies. “What is written on the road with white block letters?" is the most recurrent, followed by questions regarding the speed limits on the roads.
<ref> provides a comprehensive overview of the question distribution in RoadTextVQA, with the majority of the questions being centred around details of shops located along the road. <ref> depicts the word count in the questions and answers, respectively. The average number of words in the questions in RoadTextVQA is 10.8, while the average number in the answers is 1.45. The average number of words in questions is much higher when compared to other text-based VideoQA datasets, as seen in <ref>. The percentage of unique questions stands at 86.6%, while the percentage of unique answers is 40.7%. <ref> shows the top 30 answers and the number of occurrences. <ref>, in the form of a word cloud, illustrates the most frequently occurring answers and OCR tokens. The most popular answers are “right", “left", “yes", and “no". The most prevalent OCR tokens in the videos are “stop", “only", and “one way".
The distribution of the videos in the dataset based on the geographic location where it was captured is shown in <ref>.
More than two-thirds of the videos in the dataset are captured from roads in the USA.
The majority of questions are grounded on text seen in the video (61.8%), and the rest are based on road signs. Road signs can also contain text, such as speed limit signs or interchange exit signs. 68% of questions have answers that can be found within the text present in the video, while the remaining 32% of questions require an answer that is not a text present in the video.
§ BASELINES
This section presents details of the baselines we evaluate on the proposed RoadTextVQA dataset.
§.§ Heuristic Baselines and Upper Bounds
We evaluate several heuristic baselines and upper bounds on the dataset. These heuristics and upper bounds are similar to those used in other VQA benchmarks, such as TextVQA<cit.> and DocVQA<cit.>. The following heuristic baselines are evaluated:
(i) Random Answer: performance when answers to questions are randomly selected from the train split.
(ii) Random OCR token: performance when a random OCR token from the video is picked as the answer.
(iii) Majority Answer: performance when the most common answer in the train split is considered as the answer for all the questions.
The following upper bounds are evaluated
(i) Vocab UB: the upper bound on predicting the correct answer if it is present in the vocabulary of all the answers from the train split.
(ii) OCR UB: the upper bound on performance if the answer corresponds to an OCR token present in the video.
(iii) Vocab UB + OCR UB: this metric reflects the proportion of questions for which answers can be found in the vocabulary or the OCR transcriptions of the video.
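These quantities reduce to simple set-membership checks; a sketch is given below, where each question is assumed to carry its list of accepted answers and the OCR tokens of its video (the field names are illustrative, not the dataset's actual schema).
def upper_bounds(questions, train_answers):
    """questions: iterable of dicts with 'answers' and 'ocr_tokens' lists of strings."""
    norm = lambda s: s.strip().lower()
    vocab = {norm(a) for a in train_answers}
    n = len(questions)
    in_vocab = in_ocr = in_either = 0
    for q in questions:
        answers = {norm(a) for a in q["answers"]}
        ocr = {norm(t) for t in q["ocr_tokens"]}
        in_vocab += bool(answers & vocab)
        in_ocr += bool(answers & ocr)
        in_either += bool(answers & (vocab | ocr))
    return in_vocab / n, in_ocr / n, in_either / n   # Vocab UB, OCR UB, Vocab + OCR UB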
§.§ M4C
The M4C<cit.> model uses a transformer-based architecture to integrate representations of the image, question and OCR tokens. The question is embedded using a pretrained BERT<cit.> model. Faster R-CNN<cit.> visual features are extracted for the objects detected and the OCR tokens in the image.
The representation of an OCR token is formed from the FastText<cit.> vector, PHOC<cit.> vector, bounding box location feature, and Faster R-CNN feature of the token. A multi-head self-attention mechanism in transformers is employed, enabling all entities to interact with each other and model inter- and intra-modal relationships uniformly using the same set of transformer parameters. During answer prediction, the M4C model employs an iterative, auto-regressive decoder that predicts one word at a time. The decoder can use either a fixed vocabulary or the OCR tokens detected in the image to generate the answer.
§.§ SINGULARITY
The architecture of SINGULARITY<cit.> is made up of three major components: a vision encoder using ViT<cit.>, a language encoder utilizing BERT<cit.>, and a multi-modal encoder using a transformer encoder<cit.>. The multi-modal encoder uses cross-attention to collect information from visual representations using text as the key. Each video or image is paired with its corresponding caption during the pretraining phase, and the model is trained to align the vision and text representations using three losses (i) Vision-Text Contrastive: a contrastive loss which aligns the representations of vision and language encoders, (ii) Masked Language Modeling<cit.>: masked tokens are predicted (iii) Vision-Text Matching: using the multi-modal encoder, predict the matching score of a vision-text pair.
We use the SINGULARITY-temporal model, which is pretrained on 17M vision caption pairs<cit.>.
The SINGULARITY-temporal model contains a two-layer temporal encoder that feeds its outputs into the multi-modal encoder. SINGULARITY-temporal makes use of two new datasets named SSv2-Template Retrieval, and SSv2-Label Retrieval created from the action recognition dataset Something-Something v2 (SSv2)<cit.>. The pretraining is a video retrieval task using text queries. An additional multi-modal decoder is added for open-ended QA tasks and is initialised from the pretrained multi-modal encoder, which takes the multi-modal encoder's output as input and generates answer text with [CLS] as the start token.
§.§ GenerativeImage2Text
GIT(GenerativeImage2Text)<cit.> is a transformer-based architecture aimed at unifying all vision-language tasks using a simple architecture pretrained on 0.8 billion image text pairs. GIT consists of an image encoder and a text decoder and is pretrained on a large dataset of image text pairs. The image encoder is a Swin-like<cit.> transformer based on the contrastive pretrained model, which eliminates the need for other object detectors or OCR. As for the text decoder, the GIT uses a transformer with a self-attention and feed-forward layer to generate text output. The visual features and the text embeddings are concatenated and used as inputs to the decoder. GIT is able to gradually learn how to read the scene text with large-scale pretraining and hence achieves SoTA performance on scene-text-related VQA tasks such as ST-VQA. For video question answering, GIT employs a method of selecting multiple frames from the video and separately embeds each frame with a learnable temporal embedding which is initialized as zeros, and the image features are concatenated and used similarly to the image representation. The question and the correct answer are combined and used in the sense of a special caption, and the language model loss is computed solely on the answer and the [EOS] token.
§ EXPERIMENTS AND RESULTS
This section covers the evaluation metrics, the experimental setup, and the experiment results.
§.§ Experimental Setup
Evaluation metrics. We use two evaluation metrics to evaluate the model's performance: Average Normalized Levenshtein Similarity (ANLS)<cit.> and Accuracy (Acc. (%)). The Accuracy metric calculates the percentage of questions where the predicted answer exactly matches any of the target answers.
ANLS, on the other hand, does not award a zero score for all predictions that do not match the ground truth string exactly.
The score was originally proposed to act softly on cases where the predicted answer differs slightly from the actual.
ANLS measures a similarity(based on the Levenshtein distance) between the prediction and ground truth and normalizes it as a score in the range [0,1]. If the score is less than 0.5, the final ANLS score for the prediction is set to zero.
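For clarity, the per-question ANLS computation can be sketched as follows; the dataset-level score is the mean over all questions, and the lower-casing and whitespace stripping are our assumptions about answer normalization.
def anls(prediction: str, ground_truths: list[str], tau: float = 0.5) -> float:
    """Best normalized Levenshtein similarity over all target answers, zeroed below tau."""
    def lev(a, b):
        if not a or not b:
            return max(len(a), len(b))
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]
    best = 0.0
    for gt in ground_truths:
        p, g = prediction.strip().lower(), gt.strip().lower()
        best = max(best, 1.0 - lev(p, g) / max(len(p), len(g), 1))
    return best if best >= tau else 0.0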
OCR transcriptions. The ground truth annotations were utilized for the videos in the RoadText-3K set, while for the remaining videos, the OCR transcriptions were sourced using the Google Cloud Video Intelligence API. Both RoadText-3K ground truth annotations, and the Google API provide text transcriptions at the line level.
We use the line-level text transcriptions as the OCR tokens for the calculation of the OCR upper bounds and OCR-based heuristics given in <ref>. When a text track gets cut off by the frame boundary or partially occluded by other objects in a video, the Google Cloud Video Intelligence API treats it as a new track, whereas the RoadText-3K annotations ignore partially occluded tracks. This is why, in <ref>, the number of tracks relative to the number of videos is somewhat inflated for the YouTube clips compared to the RoadText-3K clips.
Experimental setup for M4C.
The M4C<cit.> model is trained using the official implementation, and the training parameters and implementation details remain consistent with those used in the original paper. We used a fixed vocabulary of size 3926 generated from the train set.
The training data consists of image question-answer pairs where the image selected for training is the one on which the questions are based, specifically the timestamp frame. After training, the model is evaluated using two approaches. Firstly, it is tested on the timestamp QA pairs of the test set, and secondly, it is evaluated on the video level by sampling ten frames from the respective video for each QA pair and obtaining the model prediction for every frame individually. The final answer is determined by taking the most common answer from the ten individual frame predictions.
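The video-level evaluation described above amounts to a simple majority vote over per-frame predictions; a sketch is shown below, where model.predict is a hypothetical image-level M4C inference call.
from collections import Counter

def video_level_answer(model, video_frames, question, num_frames=10):
    """Sample num_frames frames, predict an answer for each, and return the most
    common prediction (ties are broken by first occurrence)."""
    step = max(len(video_frames) // num_frames, 1)
    sampled = video_frames[::step][:num_frames]
    predictions = [model.predict(frame, question) for frame in sampled]
    return Counter(predictions).most_common(1)[0][0]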
Experimental setup for SINGULARITY.
We fine-tuned the pretrained SINGULARITY-temporal 17M model on four NVIDIA Geforce RTX 2080 Ti. The fine-tuning process was run for 20 epochs with a batch size of 16, starting with an initial learning rate of 1e-5 and increasing linearly in the first half epoch, followed by cosine decay<cit.> to 1e-6. The other parameters used for training are the same as the official implementation. The video frames were resized to 224x224, and a single frame with random resize, crop and flip augmentations was utilised during training, whereas 12 frames were used during testing. Additionally, we fine-tuned the SINGULARITY model, which has been pretrained on the MSRVTT-QA<cit.> dataset.
Experimental setup for GIT.
The training process for GIT was carried out using a single Tesla T4 GPU for 20 epochs with a batch size of 2.
We use an Adam<cit.> optimizer with an initial learning rate starting at 1e-5 and gradually decreasing to 1e-6 through the use of cosine decay.
The GIT model was trained using the official VideoQA configuration used for MSRVTT-QA training. We fine-tuned the pretrained GIT-large model on our dataset, using six frames that were evenly spaced as inputs during both training and testing. In addition, we further fine-tuned the GIT model that was pretrained on the MSRVTT-QA<cit.> dataset.
§.§ Results
Heuristic baselines and upper bound results are presented in <ref>. The heuristic baselines yield very low accuracy, which indicates the absence of any bias due to the repetition of answers.
The Random OCR heuristic gives an accuracy of close to 2%, which shows that the videos contain enough text that selecting a random OCR token does not yield high accuracy. The OCR upper bound is 36.6%, which is low compared to the percentage of questions whose answers are present in the video. The low OCR UB can be attributed to how the text detection and ground truth annotation are done. The response to a question may be split into multiple lines within the video, leading to the representation of the answer as separate tokens in the OCR output. This happens because the OCR annotations were carried out at the line level. From the Vocab + OCR UB result, we can see that more than three-quarters of the answers are present either in the vocabulary or in the OCR tokens of the video.
The results for M4C are shown in <ref>. The frame-level results, where we evaluate on the timestamp frame, show an accuracy of 38.20%, and the video-level results, where we evaluate on ten frames, give an accuracy of 28.92%. The results show that answering the questions is still a challenging task, even when we reduce the complexity of the problem by providing the most apt frame for answering the question and ground-truth OCR tokens.
We show the results after fine-tuning on SINGULARITY and GIT in <ref>. The accuracy of the questions requiring answers to be extracted from the video (AP) is comparatively lower, while the accuracy of the questions where the answer is not present in the video is comparatively higher.
Compared to AP, ANP is less complex to answer because it involves a fixed set of answers. In contrast, AP requires dynamic extraction from OCR tokens, resulting in the ANP set having better accuracy than AP.
Additionally, fine-tuning the model that was pretrained on the MSRVTT-QA dataset shows improved accuracy across all categories (TB, RSB, AP, and ANP).
Fine-tuning GIT results in better performance compared to SINGULARITY. GIT also shows a similar trend when fine-tuned from the MSRVTT-QA-pretrained checkpoint. The “answer is present in the video (AP)" subset shows an improvement of 3.9% in accuracy compared with SINGULARITY, whereas the “answer is not present (ANP)" subset shows a gain of 6.3%. M4C tested on a single frame shows better results than the VideoQA models. This can be attributed to the fact that we explicitly provide the model with the OCR tokens and the correct frame on which the question is based. M4C tested on ten frames gives results comparable to GIT.
We show some of the qualitative results in <ref>. As the complexity of the scene and the obscurity of the scene text increase, it becomes more and more difficult for the model to predict the correct answer. VideoQA baselines achieve better results on questions that do not require the extraction of answers from the video.
§ CONCLUSIONS
We introduce RoadTextVQA, a new Video Question Answering dataset where the questions are grounded on the text and road signs present in the road videos. Our findings from the baseline models' performance indicate a need for improvement in existing VideoQA approaches for text-aware multimodal question answering.
Future work can involve augmenting the dataset by incorporating videos obtained from diverse global locales. Currently, there are recurrent questions and answers due to repeating elements in the videos.
Including videos from various locations broadens the diversity of the dataset by providing a more comprehensive range of questions and answers and minimizes any biases within the dataset. To our best knowledge, currently, there are no Visual Question Answering models that explicitly incorporate road signs. Models can integrate road signs as an additional input or pretrain on road sign-description pairs to enhance their ability to respond to questions that require domain knowledge.
We believe this work will encourage researchers to develop better models that incorporate scene text and road signs and are resilient to the challenges posed by driving videos. Additionally, we hope it will drive further research in the area of scene text VideoQA and the development of advanced in-vehicle support systems.
§ ACKNOWLEDGEMENTS
This work has been supported by IHub-Data at IIIT-Hyderabad, and grants PDC2021-121512-I00, and PID2020-116298GB-I00 funded by MCIN/AEI/
10.13039/501100011033 and the European Union NextGenerationEU/PRTR.
|
http://arxiv.org/abs/2307.04262v1 | 20230709203407 | Quantum random walks on a beam splitter array | [
"Mario Ivan Estrada Delgado",
"Zurika Iveth Blanco Garcia"
] | quant-ph | [
"quant-ph",
"81R99"
] |
APS/123-QED
[email protected]
Tecnológico de Monterrey, Escuela de Ingeniería y Ciencias, Carr. Lago de Guadalupe Km. 3.5, CP. 52926, Estado de México, Mexico
[email protected]
Tecnológico de Monterrey, Escuela de Ingeniería y Ciencias, Carr. Lago de Guadalupe Km. 3.5, CP. 52926, Estado de México, Mexico
The general matrix representation of a beam splitter array is presented. Each beam splitter has a transmission/reflection coefficient that determines the behavior of these individual devices and, in consequence, the whole system response. The general matrix representation of each beam splitter is given as rotations of a 2n-th dimensional space. With these operators, the matrix that describes the entire array and, consequently, the final probability distribution of an input photon state can be calculated.
Quantum random walks on a beam splitter array
Z. Blanco-Garcia 0000-0002-4612-7934
August 12, 2023
=============================================
§ INTRODUCTION
Aharonov et al. introduced the concept of quantum random walks in their 1993 paper <cit.>, in analogy to classical walks. If we consider the evolution of a quantum state in a beam splitter array, we can observe a manifestation of quantum phenomena. For this reason, it is of great interest to the quantum community.
In recent years, investigations around this subject have increased <cit.>. One reason for this increase is the potential technological application in quantum computer <cit.>, cryptography, quantum information and other related fields <cit.>.
In 2019, Sarkar et al. implemented the quantum random walk to build a quantum random number generator that can be used, for example, in cryptography protocols <cit.>. Other groups are carrying out experimental implementations of quantum random walk algorithms <cit.>. For instance, Travaglione et al. implemented the quantum random walk on an ion-trap quantum computer to compare it with its classical counterpart <cit.>.
Also, Jiangfeng Du et al. implemented quantum random walk algorithms on a nuclear magnetic resonance quantum computer <cit.>.
The community hopes that quantum algorithms can reduce the computing time of certain problems compared with classical computers<cit.>.
In this work, we utilize the properties of the beam splitters <cit.> and draw inspiration from the Elitzur and Vaidman model <cit.>, to propose a new set of operators associated with them. These operators facilitate the construction of the general beam splitter array operator. Moreover, we put forth a general expression for such operator in a certain Hilbert space that depends on the dimension of the array. Once such an operator is built, it enables the calculation of the probability evolution of a photon in several particular cases.
The paper is organized as follows: In Sec. <ref>, we review the matrix beam splitter representation.
In Sec. <ref>, we introduce our model of a 2×2 beam splitter array and the general operator of an n× n array in the corresponding Hilbert spaces. In Sec. <ref>, we apply our model to some interesting problems. Finally, the conclusions are included in Sec. <ref>.
§ BEAM SPLITTER
Consider a beam splitter described by reflection and transmission coefficients R and T, respectively. When a photon passes through this optical device, it can be sent through the horizontal arm as state | 1 ⟩ or the vertical arm as state | 2 ⟩, as shown in Figure <ref>. The matrix representations of these two states are as follows:
| 1 ⟩ = [ 1; 0 ], 2ex
| 2 ⟩ = [ 0; 1 ].
The eigenstates | 1 ⟩ and | 2 ⟩ represent the basis states of the positional Hilbert Space.
According to <cit.> and <cit.>, these optical devices can be described by a scattering matrix operator defined as
B=[ cosθ isinθ; i sinθ cosθ ]
such that, if we send an individual photon through the horizontal arm, the evolution of the quantum state of this photon is described as follows:
B | 1 ⟩ = [ cosθ isinθ; i sinθ cosθ ][ 1; 0 ] = cosθ| 1 ⟩ + i sinθ| 2 ⟩
Then, the transmittance probability is given by P_D1 = |⟨ 1| B | 1 ⟩|^2= cos^2θ. This corresponds to the photon lying in the horizontal arm. On the other hand, the probability of finding the photon in the vertical arm is given by P_D2 = |⟨ 2| B | 1 ⟩|^2=sin^2θ; therefore, R:T = sin^2θ: cos^2θ. Note that the perfect beam splitter is recovered when θ = π/4.
Using this matrix description of a beam splitter, it is possible to analyze the state evolution of a photon in a Mach-Zehnder interferometer (see Figure <ref>).
The Mach-Zehnder interferometer has two beam splitters and two mirrors. The mirrors can be described using equation <ref> by considering θ = π/2. When we send an individual photon through the horizontal arm with an initial state |ψ_ini⟩ = | 1 ⟩, it has a certain probability of taking either the vertical or the horizontal path. The mirrors act on this state, and finally, the amplitudes are recombined at the second beam splitter.
BMB| 1⟩ = [ cosθ isinθ; i sinθ cosθ ][ 0 i; i 0 ][ cosθ isinθ; i sinθ cosθ ]| 1 ⟩
=
-2 sinθcosθ| 1 ⟩ + i(cos^2θ - sin^2 θ) | 2 ⟩
If we consider perfect beam splitters, the final state is simplified as |ψ_fin⟩ = - | 1 ⟩. The final interference is constructive in the horizontal arm, and completely destructive in the vertical arm.
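This calculation is easy to verify numerically; the short sketch below reproduces the perfect-beam-splitter case using numpy, with two-component vectors in the |1⟩, |2⟩ basis defined above.
import numpy as np

def bs(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 1j * s], [1j * s, c]])

B, M = bs(np.pi / 4), bs(np.pi / 2)                  # perfect beam splitter and mirror
psi = B @ M @ B @ np.array([1, 0], dtype=complex)    # photon entering the horizontal arm
print(np.abs(psi) ** 2)                              # -> [1. 0.]: constructive only in the horizontal arm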
§ MULTIPORT BEAM SPLITTER ARRAY
Inspired by this optical configuration, we propose the replacement of the two mirrors in the Mach-Zehnder interferometer by two beam splitters (See Figure <ref>) and the incorporation of the new exits and entrances to play a role in the photon evolution.
The label of each beam splitter in the array follows matrix index notation B_ij, where i stands for the row and j for the column position of the beam splitter in the array. In this network there are four possible entrances and an equal number of exits. As a result, the Hilbert space is expanded, and the positional states | i⟩, where i=1,2,3,4, describe the potential channels through which the photon can travel. These vectors also form a basis of the state space. The matrix representation of this basis is provided in <ref>.
| 1 ⟩ =
[ 1; 0; 0; 0 ], 1ex
| 2 ⟩ =
[ 0; 1; 0; 0 ], 1ex
| 3 ⟩ =
[ 0; 0; 1; 0 ], 1ex
| 4 ⟩ =
[ 0; 0; 0; 1; ].
The action of each of these beam splitters over the possible arriving states should be as follows:
B_11| 1 ⟩ = cosθ| 1 ⟩ + i sinθ| 2 ⟩
B_11| 2 ⟩ = cosθ| 2 ⟩ + i sinθ| 1 ⟩
B_12| 1 ⟩ = cosθ| 1 ⟩ + i sinθ| 4 ⟩
B_12| 4 ⟩ = cosθ| 4 ⟩ + i sinθ| 1 ⟩
B_21| 2 ⟩ = cosθ| 2 ⟩ + i sinθ| 3 ⟩
B_21| 3 ⟩ = cosθ| 3 ⟩ + i sinθ| 2 ⟩
B_22| 3 ⟩ = cosθ| 3 ⟩ + i sinθ| 4 ⟩
B_22| 4 ⟩ = cosθ| 4 ⟩ + i sinθ| 3 ⟩
Moreover, if the arriving state is traveling through a channel or channels that are not part of any of the paths associated with a specific beam splitter, that beam splitter should act as the identity operator.
Thus, the beam splitters can be described by the following 4× 4 rotational matrices,
B_11 =
[ cosθ isinθ 0 0; isinθ cosθ 0 0; 0 0 1 0; 0 0 0 1; ],
B_12 =
[ cosθ 0 0 isinθ; 0 1 0 0; 0 0 1 0; isinθ 0 0 cosθ ],
B_21 =
[ 1 0 0 0; 0 cosθ isinθ 0; 0 isinθ cosθ 0; 0 0 0 1 ],
B_22 =
[ 1 0 0 0; 0 1 0 0; 0 0 cosθ isinθ; 0 0 isinθ cosθ ].
For instance, let us consider the initial state of an individual photon as |ψ_ini⟩ = | 1 ⟩. The resulting output state of the photon after interacting with the beam splitter network can be obtained by performing the following operations: B_22B_21B_12B_11| 1 ⟩. It is important to highlight that the order in which the beam splitters are applied to the input state to obtain the output state is significant. Moreover, it is worth noting that beam splitters commute along the same ascending diagonal. Finally, through further analysis, we can derive the final expression for the state (see equation <ref>).
|ψ_fin⟩ = cos^2θ| 1 ⟩ + isinθcosθ| 2 ⟩ - 2 sin^2θcosθ| 3 ⟩ + (i sinθcos^2θ - isin^3θ) | 4 ⟩
Consequently, the photon has different probabilities of being detected at each detector, and they depend on the beam splitters used in the array. For example, if we consider perfect beam splitters, i.e., θ=π/4, the only detector with zero probability is D4; however, the other three detectors have a probability different from zero (see figure <ref>).
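The probabilities quoted above can be checked numerically with the four matrices given in the previous equations; the following sketch (channel indices follow the |1⟩,...,|4⟩ basis) reproduces the θ=π/4 case.
import numpy as np

def bs4(i, j, theta):
    """4x4 operator coupling channels |i> and |j> (1-indexed), identity elsewhere."""
    B = np.eye(4, dtype=complex)
    c, s = np.cos(theta), np.sin(theta)
    B[i - 1, i - 1] = B[j - 1, j - 1] = c
    B[i - 1, j - 1] = B[j - 1, i - 1] = 1j * s
    return B

theta = np.pi / 4
B11, B12, B21, B22 = bs4(1, 2, theta), bs4(1, 4, theta), bs4(2, 3, theta), bs4(3, 4, theta)
psi = B22 @ B21 @ B12 @ B11 @ np.array([1, 0, 0, 0], dtype=complex)
print(np.round(np.abs(psi) ** 2, 3))   # -> [0.25 0.25 0.5  0.  ]: only D4 has zero probability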
Note that in this example the parameter θ has been set to the same value for all beam splitters. Different instantiations can easily be obtained by setting a distinct value of θ for each beam splitter. Therefore, it is possible to control the final state of a photon in a four-beam-splitter array, as the probability of detection at each detector varies with the θ parameter, and the conventional Mach-Zehnder interferometer is recovered as a particular case of this example problem.
§ GENERALIZING THE MULTIPORT BEAM SPLITTER ARRAY
In this section, we present a generalization of the previous array, as depicted in Figure <ref>. The system is described using a parameter p∈ℕ, indicating that we have a square array consisting of p^2 beam splitters and 2p channels. Consequently, the Hilbert space dimension is 2p, with basis H = {| n ⟩| n∈ℕ, n ≤ 2p }. For this reason, the beam splitter representations are 2p× 2p matrices.
The beam splitters in this larger array must adhere to a similar set of rules. They should split signals that arrive at them and act as identity operators for signals that do not. Additionally, there should be sets of beam splitter operators that commute with each other as long as they belong to the same ascending diagonal. Consequently, the operator describing the action of each beam splitter is given as follows (See equation <ref>).
B_m,n= 1_2p× 2p + (cosθ -1)[ | 2n ⟩⟨ 2n | + | 2m-1 ⟩⟨ 2m-1 |] + i sinθ[| 2n ⟩⟨ 2m -1 | + | 2m-1 ⟩⟨ 2n |]
On the other hand, it is advantageous to introduce the general operator MZ, which takes into account the sequential application of the beam splitter operators from the top-left to the bottom-right of the array. Consequently, the complete array operator is defined as follows (See equation <ref>).
MZ=[∏_r=1^p-1( ∏_s=1^f(r) B_p-s+1,p+s-r)] [∏_r=p^2p-1( ∏_s=1^f(r) B_2p-r-s+1,s)]
where f(r) is the number of beam splitters on the r-th ascending diagonal of the array,
f(r) = {[ r, if 1 ≤ r ≤ p-1;; 2p-r, if p ≤ r ≤ 2p-1. ].
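A possible numerical realization of B_{m,n} and of the array operator is sketched below. It builds each beam splitter from the equation above and applies the splitters diagonal by diagonal (operators on the same ascending diagonal commute), which reproduces the ordering used in the 2×2 example; the index bookkeeping of the product formula is not copied literally, and a single θ is assumed for simplicity.
import numpy as np

def B(p, m, n, theta):
    """2p x 2p operator for B_{m,n}: couples channels |2n> and |2m-1> (1-indexed)
    and acts as the identity on every other channel."""
    i, j = 2 * n - 1, 2 * m - 2                  # 0-indexed matrix positions
    U = np.eye(2 * p, dtype=complex)
    U[i, i] = U[j, j] = np.cos(theta)
    U[i, j] = U[j, i] = 1j * np.sin(theta)
    return U

def MZ(p, theta):
    """Apply the p^2 beam splitters from the top-left B_11 to the bottom-right B_pp,
    one ascending diagonal (m + n = const) at a time."""
    U = np.eye(2 * p, dtype=complex)
    for d in range(2, 2 * p + 1):                # ascending diagonals m + n = d
        for m in range(max(1, d - p), min(p, d - 1) + 1):
            U = B(p, m, d - m, theta) @ U        # later splitters act on the left
    return U

probs = np.abs(MZ(3, np.pi / 4) @ np.eye(6)[0]) ** 2   # p = 3, photon entering channel |1>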
§ INSTANCES OF OUR MODEL
In this section, we explore specific instances that exemplify the versatility and robustness of our model. Firstly, we will examine the simplest examples where the output is known based on reported experimental and theoretical realizations, see for example <cit.>. Then, we will proceed to more complex systems that, to the best of our knowledge, have not been experimentally realized yet.
As a first example, we consider the output of a Mach-Zehnder interferometer (See Figure <ref>) when an incoming photon enters the system through port 1. In other words, the initial state is | 1 ⟩.
By applying the operator defined in equation <ref> to the initial state |ψ_ini⟩ = | 1 ⟩, we can obtain the same probabilities that are reported in section <ref>. As shown, in Figure <ref>, when the state interacts with the first beam splitter, the photon has an equal probability of 0.5 to reach each of the mirrors. Since the mirrors are fully reflective, there is a probability of 0 for the signal to continue to ch 2 or ch1. Therefore, as the state reaches the next beam splitter, it exhibits total destructive interference to continue along ch 4 and total constructive interference to ch3. Consequently, the photon has a probability of 0 to be detected in ch4 (corresponding to detector D_2 according to Figure <ref>), and a probability of 1 to be detected in ch 3 (corresponding to detector D_1). This information is summarized in Figure <ref>, which presents the complete probability evolution in the Mach-Zehnder array.
When the two perfect mirrors in the Mach-Zehnder interferometer, each corresponding to θ=π/2, are replaced by perfect beam splitters with θ=π/4 (see Figure <ref>), the probability evolution undergoes a change. The complete evolution of the probability can be seen in Figure <ref>.
Once again, the probabilities at each detector can be observed at the end of their respective paths. In Figure <ref>, the green rectangle on the left represents the probability at detector 2, as indicated in Figure <ref>. Similarly, the blue rectangle corresponds to the probability at D_4, the red rectangle represents D_3, and finally, the green rectangle on the right represents D_1. The first two red rectangles, from top to bottom, indicate the probabilities after interacting with the first beam splitter but before reaching the subsequent components. Figure <ref> provides a comprehensive view of the probability evolution in the beam splitter array.
Furthermore, we can analyze the evolution of a single photon in larger arrays by exploring different values of the parameter p. For instance, let's consider the case where p=3, which corresponds to a system with 9 beam splitters. In this scenario, we will also set the value of θ to π/4 for each of the beam splitters. The complete probability evolution is presented in Figure <ref>.
In Figure <ref>, we observe the evolution of the system with p=3 and a similar set of perfect beam splitters. The response begins with two red squares in the top left, representing the first beam splitter interaction. Subsequently, we see a sequence of green, blue, and red rectangles, followed by another green rectangle consistent with the result obtained for p=2. Finally, the front row and right column display the probabilities of photon detection at detectors D_2, D_4, D_6, and D_5, D_3, D_1, respectively.
In a similar vein to the previous example, we investigated the system's evolution with p=50, which corresponds to an array of 2500 beam splitters, each set to θ=π/4. The probability evolution is depicted in Figure <ref>.
Perfect beam splitters do not exist in real life, and the transmission coefficients of individual beam splitters can vary due to various factors such as imperfections in manufacturing, temperature variations, or aging effects. To account for this variability, we introduce randomness in the transmission coefficients of each beam splitter in our model. Specifically, we generate the transmission coefficient for each beam splitter using a normal distribution centered at 50 with a standard deviation of 10. By incorporating this randomness, we can observe the system's response under more realistic conditions. The corresponding results, considering the variability in the transmission coefficients, are shown in Figure <ref>.
It is evident from Figure <ref> that the probability evolution follows an unordered path, in contrast to the pattern observed in Figure <ref>.
Based on the results depicted in Figure <ref>, the output probabilities deviate from a balanced distribution despite the well-balanced transmission and reflection coefficients of each beam splitter with θ=π/4. The photon demonstrates a clear tendency to continue its path towards the odd-numbered detectors. This observation leads us to explore, as our penultimate example, the behavior when the input state is a linear combination of | 1⟩ and | 2⟩. The system's response to this scenario is illustrated in Figure <ref>.
As our last example, we explore the system's response when the input state is a linear combination of | 49⟩ and | 50⟩; see Figure <ref>.
In addition to these specific instances, the general beam splitter array model presented in Section <ref> allows for the exploration of a wide range of configurations and scenarios, including non-square configurations, which can be achieved by simply setting θ=0 for selected beam splitters so that they perform no operation on the incoming states, which then arrive directly at the detectors. By varying the parameters p and θ and analyzing the probability evolution, one can investigate the behavior of photons in different network architectures and study their potential applications in quantum information processing, communication, and other fields.
§ CONCLUSION
We have presented a general matrix representation of a beam splitter array. These beam splitters are described by rotation matrices, and the specific configuration of the array determines the final probabilities at different detectors. In our study, we have showcased various example problems, ranging from simple setups like the Mach-Zehnder interferometer to more complex instances.
Future research will focus on exploring different architectures, including non-square arrays, which can be achieved by designating certain beam splitters as identity operators. Additionally, we will investigate the incorporation of Fock states into the system.
|
http://arxiv.org/abs/2307.04884v1 | 20230710201235 | A $q$-Chaundy representation for the product of two nonterminating basic hypergeometric series and its symmetric generating functions | [
"Howard S. Cohl",
"Roberto S. Costas-Santos"
] | math.CA | [
"math.CA",
"33D45, 05A15, 42C05, 33D15"
] |
A q-Chaundy representation for the product of two nonterminating basic hypergeometric series and its symmetric generating functions

Howard S. Cohl^∗ and Roberto S. Costas-Santos^†

^∗ Applied and Computational Mathematics Division, National Institute of Standards and Technology, Mission Viejo, CA 92694, USA
http://www.nist.gov/itl/math/msg/howard-s-cohl.cfm
[email protected]

^† Department of Quantitative Methods, Universidad Loyola Andalucía, Sevilla, Spain
http://www.rscosan.com
[email protected]

Received August 12, 2023 in final form ????; Published online ????
We derive double product representations of nonterminating basic hypergeometric series using diagonalization, a method introduced by Theo William Chaundy in 1943. We also present some generating functions that arise from it in the q and q-inverse Askey schemes. Using this q-Chaundy theorem, which expresses a product of two nonterminating basic hypergeometric series as a sum over terminating basic hypergeometric series, we study generating functions for the symmetric families of orthogonal polynomials in the q and q-inverse Askey scheme. By applying the q-Chaundy theorem to q-exponential generating functions due to Ismail, we are able to derive alternative expansions of these generating functions and, from these, new representations for the continuous q-Hermite and q-inverse Hermite polynomials which are connected by a quadratic transformation for the terminating basic hypergeometric series representations.
Keywords: basic hypergeometric functions; generating functions; orthogonal polynomials; q-Askey scheme; nonterminating representations; terminating representations

Mathematics Subject Classification: 33D45, 05A15, 42C05, 33D15
§ INTRODUCTION
In this paper we exploit a method introduced by
Theo William Chaundy in 1943 (see
<cit.>) for re-expressing
double summation nonterminating expressions in terms of an infinite sum of terminating expressions. This method is sometimes referred
to as diagonal summation or simply diagonalization.
Chaundy applied this method to re-write products of
generalized hypergeometric series. Sometimes the formulas which result from this method for a product of two generalized hypergeometric functions
lead to very beautiful representations
in terms of a single generalized hypergeometric series. For several nice examples, see for instance Clausen's formula <cit.> (see also <cit.>)
\left[{}_2F_1(a, b; a+b+\tfrac{1}{2}; z)\right]^2 = {}_3F_2(2a, 2b, a+b; a+b+\tfrac{1}{2}, 2a+2b; z),

and Bailey's formula <cit.>

{}_1F_1(a; 2a; z)\, {}_1F_1(b; 2b; -z) = {}_2F_3\left(\tfrac{1}{2}(a+b), \tfrac{1}{2}(a+b+1); a+\tfrac{1}{2}, b+\tfrac{1}{2}, a+b; \tfrac{1}{4}z^2\right).
Other nice examples can be found
in <cit.>.
The goal of this paper is to extend the general results which were conceived of by Chaundy to the q-realm and to investigate some of their applications.
§ PRELIMINARIES
We adopt the following set
notations: ℕ_0:={0}∪={0, 1, 2,…}, and we
use the sets ℤ, ℝ,
ℂ which represent
the integers, real numbers and
complex numbers respectively,
:=∖{0}, and
:={z∈: |z|<1}.
We adopt the following conventions for succinctly
writing elements of sets. To indicate
sequential positive and negative
elements, we write
± a:={a,-a}.
We also adopt an analogous notation
z^±:={z,z^-1}.
Consider q∈, n∈ℕ_0.
Define the sets
Ω_q^n:={q^-k:
k∈ℕ_0, 0≤ k≤ n-1},
Ω_q:=Ω_q^∞
={q^-k:k∈ℕ_0}.
We will use <cit.>
\binom{n+k}{2} = \binom{n}{2} + \binom{k}{2} + kn,   \binom{n-k}{2} = \binom{n}{2} + \binom{k}{2} + k(1-n).
We also require the q-shifted factorial
(a;q)_n=(1-a)(1-qa)⋯(1-q^n-1a),
n∈_0.
One may also define
(a;q)_∞:=∏_n=0^∞
(1-aq^n),
where |q|<1.
Furthermore, define
(a;q)_b:=(a;q)_∞/(a q^b;q)_∞.
where a q^b∉Ω_q.
We will also use the common
notational product conventions
(a_1,...,a_k)_b:=
(a_1)_b⋯(a_k)_b,
(a_1,...,a_k;q)_b:=
(a_1;q)_b⋯(a_k;q)_b,
where b∈ℂ∪{∞}.
The q-shifted factorial
also has the
following useful properties
<cit.>:
(a;q^-1)_n=q^-n2(-a)^n(a^-1;q)_n,
(a;q)_n+k=(a;q)_k(aq^k;q)_n
= (a;q)_n(aq^n;q)_k,
(a;q)_n
=(q^1-n/a;q)_n(-a)^nq^n2,
(a;q)_n-k/(b;q)_n-k=
(b/a)^k
(a;q)_n(q^1-n/b;q)_k/(b;q)_n(q^1-n/a;q)_k,
a, b ≠ 0,   k = 0, 1, 2, …, n,
(q^-n-k;q)_k=(q;q)_k(q^1+k;q)_n/(q;q)_n
(-1)^k q^k2-k^2-nk.
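As a small numerical aid (not part of the original paper), the q-shifted factorial and a couple of the identities above can be spot-checked with a few lines of Python; the sample values of a, q, n, k below are arbitrary.

from functools import reduce

def qpoch(a, q, n):
    """q-shifted factorial (a; q)_n = (1 - a)(1 - aq)...(1 - aq^(n-1))."""
    return reduce(lambda acc, k: acc * (1 - a * q**k), range(n), 1.0)

a, q, n, k = 0.3, 0.6, 4, 3
# (a; q)_{n+k} = (a; q)_n (aq^n; q)_k
print(qpoch(a, q, n + k), qpoch(a, q, n) * qpoch(a * q**n, q, k))
# (a; q)_n = (q^{1-n}/a; q)_n (-a)^n q^{binom(n,2)}
print(qpoch(a, q, n), qpoch(q**(1 - n) / a, q, n) * (-a)**n * q**(n*(n-1)//2))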
We note that an equivalent representation of (<ref>), which is
very useful for obtaining limits which we
often need, is
a^n(x/a;q)_n=
q^n2(-x)^n(a/x;q^-1)_n,
therefore
lim_a→0 a^n(x/a;q)_n
=
lim_b→∞ 1/b^n(xb;q)_n
=
q^n2(-x)^n.
From (<ref>), another useful limit representation is
lim_λ→∞(aλ;q)_n/(bλ;q)_n
=(a/b)^n.
Furthermore, one has the following
identities
<cit.>
(a^2;q)_∞=(± a,± q^1/2 a;q)_∞,
(a;q^1/2)_∞=(a,q^1/2 a;q)_∞.
§.§ Basic hypergeometric series
The basic hypergeometric series, which we
will often use, is defined for
q,z∈ such that |q|,|z|<1, s,r∈ℕ_0,
b_j∉Ω_q, j=1,...,s, as
<cit.>
{}_r\phi_s(a_1, ..., a_r; b_1, ..., b_s; q, z) := ∑_{k=0}^∞ \frac{(a_1, ..., a_r; q)_k}{(q, b_1, ..., b_s; q)_k} \left((-1)^k q^{\binom{k}{2}}\right)^{1+s-r} z^k.
For s+1>r, _rϕ_s is an entire
function of z, for s+1=r then
_rϕ_s is convergent for |z|<1, and
for s+1<r the series
is divergent unless it is terminating.
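For readers who wish to experiment numerically, the following illustrative Python sketch evaluates a truncated {}_r\phi_s directly from the definition above; the truncation length and the sample parameters are arbitrary choices, and a numerator parameter q^{-n} makes the series terminate.

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def rphis(tops, bots, q, z, nterms=100):
    """Truncated r phi s from the definition above, including the
    ((-1)^k q^binom(k,2))^(1+s-r) factor; a numerator q^(-n) terminates the sum."""
    r, s = len(tops), len(bots)
    total = 0.0
    for k in range(nterms):
        num = 1.0
        for a in tops:
            num *= qpoch(a, q, k)
        den = qpoch(q, q, k)
        for b in bots:
            den *= qpoch(b, q, k)
        total += num / den * ((-1)**k * q**(k*(k-1)//2))**(1 + s - r) * z**k
    return total

# A terminating example: with the numerator parameter q^(-3), only k = 0,...,3
# contribute, so truncating at four terms already gives the exact value.
q = 0.5
print(rphis([q**-3, 0.2], [0.4], q, 0.7))
print(rphis([q**-3, 0.2], [0.4], q, 0.7, nterms=4))   # identical: the series terminates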
Note that when we refer to a basic hypergeometric
function with arbitrary argument z, we
mean that the argument does not necessarily
depend on the other parameters, namely the a_j's,
b_j's nor q. However, for the arbitrary
argument z, it very-well may be that the domain
of the argument is restricted, such as for |z|<1.
We refer to a basic hypergeometric
series as ℓ-balanced if
q^ℓ a_1⋯ a_r=b_1⋯ b_s,
and balanced if ℓ=1.
A basic hypergeometric series _r+1ϕ_r is
well-poised if
the parameters satisfy the relations
qa_1 = b_1 a_2 = b_2 a_3 = ⋯ = b_r a_{r+1}.
It is very-well poised if in addition,
{a_2,a_3}=± qa_1^1/2.
Terminating basic hypergeometric series
which appear in the definitions of basic hypergeometric orthogonal
polynomials are defined as
{}_r\phi_s(q^{-n}, a_1, ..., a_{r-1}; b_1, ..., b_s; q, z) := ∑_{k=0}^n \frac{(q^{-n}, a_1, ..., a_{r-1}; q)_k}{(q, b_1, ..., b_s; q)_k} \left((-1)^k q^{\binom{k}{2}}\right)^{1+s-r} z^k,
where b_j∉Ω_q^n, j=1,...,s.
In the sequel, we will use the following notation _r+1ϕ_s^m, m∈ℤ
(originally due to van de Bult & Rains
<cit.>),
for basic hypergeometric series with
zero parameter entries.
Consider p ∈ ℕ_0. Then define

{}_{r+1}\phi_s^{-p}(a_1, …, a_{r+1}; b_1, …, b_s; q, z) := {}_{r+p+1}\phi_s(a_1, a_2, …, a_{r+1}, \overbrace{0, …, 0}^{p}; b_1, b_2, …, b_s; q, z),

{}_{r+1}\phi_s^{p}(a_1, …, a_{r+1}; b_1, …, b_s; q, z) := {}_{r+1}\phi_{s+p}(a_1, a_2, …, a_{r+1}; b_1, b_2, …, b_s, \underbrace{0, …, 0}_{p}; q, z),

where b_1, …, b_s ∉ Ω_q ∪ {0}, and {}_{r+1}\phi_s^0 := {}_{r+1}\phi_s.
The nonterminating basic hypergeometric series _r+1ϕ_s^m( a; b;q,z), a:={a_1,…,a_r+1},
b:={b_1,…,b_s}, is well-defined for s-r+m≥ 0. In particular _r+1ϕ_s^m is an entire function of z for s-r+m>0, convergent for |z|<1 for s-r+m=0 and divergent if s-r+m<0.
Note that we will move interchangeably between the
van de Bult & Rains notation and the alternative
notation with vanishing numerator and denominator parameters
which are used on the right-hand sides of (<ref>) and (<ref>).
We will often use (frequently without mentioning) the
following limit transition
formulas which can be found in
<cit.>
lim_{λ→∞} {}_r\phi_s(a_1, …, a_{r-1}, λa_r; b_1, …, b_s; q, z/λ) = {}_{r-1}\phi_s(a_1, …, a_{r-1}; b_1, …, b_s; q, a_r z),

lim_{λ→∞} {}_r\phi_s(a_1, …, a_r; b_1, …, b_{s-1}, λb_s; q, λz) = {}_r\phi_{s-1}(a_1, …, a_r; b_1, …, b_{s-1}; q, z/b_s),

lim_{λ→∞} {}_r\phi_s(a_1, …, a_{r-1}, λa_r; b_1, …, b_{s-1}, λb_s; q, z) = {}_{r-1}\phi_{s-1}(a_1, …, a_{r-1}; b_1, …, b_{s-1}; q, (a_r/b_s)z).
The q-binomial theorem is
<cit.>
{}_1\phi_0(a; -; q, z) = \frac{(az; q)_∞}{(z; q)_∞},   q ∈ 𝔻, |z| < 1.
Also, one has two q-analogues of the exponential function which are due to Euler <cit.> (see also <cit.>).
Let q ∈ 𝔻, z ∈ ℂ. Then

e_q(z) := {}_0\phi_0^{-1}(-; -; q, z) = \frac{1}{(z; q)_∞},   |z| < 1,

E_q(-z) = {}_0\phi_0(-; -; q, z) = (z; q)_∞.
See proof of <cit.>.
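A short numerical check of the q-binomial theorem and of Euler's q-exponentials can be carried out as follows; this is only a sketch with arbitrary sample values, in which (·; q)_∞ is approximated by a long finite product.

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def qpoch_inf(a, q, nmax=400):
    return qpoch(a, q, nmax)   # converges rapidly for |q| < 1

q, a, z = 0.5, 0.3, 0.4

# q-binomial theorem: sum_k (a;q)_k z^k / (q;q)_k = (az;q)_inf / (z;q)_inf
lhs = sum(qpoch(a, q, k) * z**k / qpoch(q, q, k) for k in range(200))
print(lhs, qpoch_inf(a * z, q) / qpoch_inf(z, q))

# Euler: e_q(z) = sum_k z^k/(q;q)_k = 1/(z;q)_inf,
#        E_q(z) = sum_k q^binom(k,2) z^k/(q;q)_k = (-z;q)_inf
e_q = sum(z**k / qpoch(q, q, k) for k in range(200))
E_q = sum(q**(k*(k-1)//2) * z**k / qpoch(q, q, k) for k in range(200))
print(e_q, 1 / qpoch_inf(z, q))
print(E_q, qpoch_inf(-z, q))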
One has the following relation between _2ϕ_2
and _2ϕ_1 cf. <cit.>
{}_2\phi_2(a, b; c, abz/c; q, z) = \frac{(bz/c; q)_∞}{(abz/c; q)_∞}\, {}_2\phi_1(a, c/b; c; q, bz/c).
One also has the following
nonterminating transformations
<cit.>
{}_1\phi_0^{1}(a; -; q, z) = (a, z; q)_∞\, {}_0\phi_1^{-2}(-; z; q, a) = (z; q)_∞\, {}_0\phi_1(-; z; q, az).
In <cit.>, one
finds the inversion formula for
terminating basic hypergeometric series.
Let m, n, k, r, s ∈ ℕ_0, 0 ≤ k ≤ r, 0 ≤ m ≤ s, a_k, b_m ∉ Ω_q^n ∪ {0}, q ∈ ℂ^∗ such that |q| ≠ 1. Then,

{}_{r+1}\phi_s(q^{-n}, a_1, ..., a_r; b_1, ..., b_s; q, z) = \frac{(a_1, ..., a_r; q)_n}{(b_1, ..., b_s; q)_n} \left(\frac{z}{q}\right)^n \left((-1)^n q^{\binom{n}{2}}\right)^{s-r-1}
× ∑_{k=0}^n \frac{(q^{-n}, q^{1-n}/b_1, ..., q^{1-n}/b_s; q)_k}{(q, q^{1-n}/a_1, ..., q^{1-n}/a_r; q)_k} \left(\frac{b_1 ⋯ b_s}{a_1 ⋯ a_r}\,\frac{q^{n+1}}{z}\right)^k.
From the above inversion formula (<ref>),
one may derive the following useful terminating
basic hypergeometric transformation lemma.
Let p∈, n,r,s∈ℕ_0,
a_k, b_m∉Ω^n_q∪{0},
z, q∈ℂ^∗ such that |q| 1.
Then
r+1spq^-n,a_1,...,a_rb_1,...,b_sq,z=(a_1,...,a_r; q)_n/(b_1,...,b_s;q)_n(z/q)^n
((-1)^nq^n2)^s-r+p-1
×s+1rs-r+pq^-n,q^1-n/b_1,...,q^1-n/b_sq^1-n/a_1,...,q^1-n/a_rq,b_1⋯ b_s/a_1⋯ a_rq^(1-p)n+p+1/z.
In a straightforward calculation, if we write
(<ref>) and we apply (<ref>)
assuming all the parameters are nonzero, and then
we apply identities (<ref>) and (<ref>)
one obtains (<ref>).
This completes the proof.
Let n,r∈ℕ_0, q∈ℂ^∗ such that |q| 1, and
for 0≤ k≤ r, let a_k, b_k∉Ω^n_q∪{0}.
Then,
r+1rq^-n,a_1,…,a_rb_1,…,b_rq,z
=
q^-n2
(-1)^n
(a_1,…,a_r;q)_n/(b_1,…,b_r;q)_n
(z/q)^nr+1rq^-n,
q^1-n/b_1,…,
q^1-n/b_rq^1-n/a_1,…,
q^1-n/a_rq,
q^n+1/zb_1⋯ b_r/a_1⋯ a_r.
Take r=s, p=0 in
(<ref>), which completes
the proof.
Note that in Corollary <ref>
if the terminating basic hypergeometric
series on the left-hand side is balanced
then the argument of the terminating basic
hypergeometric series on the right-hand side
is q^2/z.
Another equality we can use is the following
connecting relation between terminating
basic hypergeometric series with base q, and
with base q^-1:
r+1rq^-n,a_1,...,a_rb_1,...,b_rq,z=
r+1rq^n, a^-1_1,..., a^-1_rb^-1_1, ..., b^-1_rq^-1,
a_1 a_2⋯ a_rb_1 b_2⋯ b_rzq^n+1
=
q^-n2(-z/q)^n
(a_1,…,a_r;q)_n/(b_1,…,b_r;q)_nr+1rq^-n,
q^1-n/b_1,...,
q^1-n/b_rq^1-n/a_1,...,
q^1-n/a_rq,b_1⋯ b_r/a_1⋯ a_rq^n+1/z.
In order to understand the procedure for obtaining
the q-inverse analogues of the basic
hypergeometric orthogonal polynomials studied
in this manuscript, let us consider a special
case in detail.
Let n∈_0,
f_n,r(q):=f_n,r(q;z(q); a(q), b(q)):=g_r(q)
r+1rq^-n, a(q) b(q)q,z(q),
where
.
[ a(q):={a_1(q),…,a_r(q)}; b(q):={b_1(q),…,b_r(q)} ]},
which will suffice, for instance, for
the study of the
terminating basic hypergeometric
representations for the
Askey-Wilson polynomials.
In order to obtain the corresponding q-inverse hypergeometric representations
of f_n,r(q), one only needs to consider the corresponding q-inverted function:
f_n,r(q^-1)=g_r(q^-1)
r+1rq^n, a(q^-1) b(q^-1)q^-1,z(q^-1).
Let r,k∈ℕ_0, 0≤ k≤ r, a_k(q)∈ℂ, b_k(q)∈Ω_q,
q∈ℂ^∗ such that |q| 1, z(q)∈ℂ.
Define a(q):=(a_1(q),…,a_r(q)), b(q):=(b_1(q),…,b_r(q))
and a multiplier function g_r(q):=g_r(q;z(q); a(q); b(q)) which is not
of basic hypergeometric type (some multiplicative combination of powers and q-Pochhammer symbols),
and z(q):=z(q; a(q); b(q)). Then defining f_n,r(q) as in (<ref>), one has
f_n,r(q^-1)
=g_r(q^-1)r+1rq^-n, a^-1(q^-1) b^-1(q^-1)q,q^n+1a_1(q^-1)⋯ a_r(q^-1) z(q^-1)/b_1(q^-1)⋯ b_r(q^-1).
By using (<ref>) repeatedly with
the definition (<ref>) in (<ref>),
one obtains the q-inverted terminating
representation (<ref>), which
corresponds to the original terminating basic
hypergeometric representation (<ref>). This completes the proof.
Now consider the more general case.
Let r,s∈ℕ_0, 0≤ t≤ r, 0≤ u≤ s, and let
.
[ a(q):={a_1(q),…,a_r-t(q),0,…,0^t}; b(q):={b_1(q),…,b_s-u(q),0,…,0_u} ]},
where either t>0, u=0, or u>0, t=0, or t=u=0, and as above,
a multiplier function g_r,s,t,u(q):=g_r,s,t,u(q;z(q); a(q); b(q))
and z(q):=z(q; a(q); b(q)).
Define
f_r,s,t,u(q):=
g_r,s,t,u(q)
r+1sq^-n, a(q) b(q)q,z(q).
In order to obtain the q-inverted representation of
f_r,s,t,u, one must again compute
f_r,s,t,u(q^-1)=g_r,s,t,u(q^-1)
r+1sq^n, a(q^-1) b(q^-1)q^-1,z(q^-1).
This can be obtained by repeated use of
(<ref>) using the definition (<ref>)
and various combinations of (<ref>)–(<ref>).
§ CONTINUOUS BASIC HYPERGEOMETRIC ORTHOGONAL POLYNOMIALS
We will study a subset of basic hypergeometric orthogonal polynomials
in the q-Askey scheme which we refer to as continuous basic
hypergeometric orthogonal polynomials. These are basic hypergeometric
orthogonal polynomials whose orthogonality relation is given by
an integral over an interval on the real line.
In the remainder of the paper we will be examining orthogonal polynomials in x=1/2(z+z^-1).
Note that in this case, x=x(z) is invariant under
the map z↦ z^-1, so all functions (including polynomials) in x will also satisfy this invariance.
§.§ The continuous q and q-inverse symmetric families
The Askey–Wilson polynomials are at the top of
the symmetric
family of basic hypergeometric orthogonal polynomials.
The continuous dual q-Hahn p_n(x;a,b,c|q),
Al-Salam–Chihara p_n(x;a,b|q),
continuous big q-Hermite H_n(x;a|q) and
continuous q-Hermite H_n(x|q) polynomials
are the d→ c→ b→ a→ 0 limit cases
of the Askey–Wilson polynomials, namely
p_n(x;a,b,c|q)=lim_d→ 0p_n(x;a,b,c,d|q),
p_n(x;a,b|q)=lim_c→ 0p_n(x;a,b,c|q),
H_n(x;a|q)=lim_b→ 0p_n(x;a,b|q),
H_n(x|q)=lim_a→ 0H_n(x;a|q).
The continuous dual q-Hahn and Al-Salam–Chihara polynomials
are symmetric in the variables a,b,c, and a,b respectively.
By starting with representations of the Askey–Wilson
polynomials (<ref>), we can obtain terminating basic
hypergeometric series representations of the
symmetric family.
Furthermore, the q-inverse symmetric family are also a
set of symmetric polynomials in their parameters a,b,c.
These polynomials can also be obtained as
c→ b→ a→ 0 limit cases
p_n(x;a,b|q^-1)=lim_c→ 0p_n(x;a,b,c|q^-1),
H_n(x;a|q^-1)=lim_b→ 0p_n(x;a,b|q^-1),
H_n(x|q^-1)=lim_a→ 0H_n(x;a|q^-1).
§.§ The Askey–Wilson polynomials
The Askey–Wilson polynomials have the following
terminating _4ϕ_3 basic hypergeometric series
representations
<cit.>.
Let n ∈ ℕ_0, q ∈ 𝔻, x = (z + z^{-1})/2, z ∈ ℂ^∗, a, b, c, d ∈ ℂ^∗. Then
p_n(x; a, b, c, d | q) := a^{-n}(ab, ac, ad; q)_n\, {}_4\phi_3(q^{-n}, q^{n-1}abcd, az^{±}; ab, ac, ad; q, q)

= q^{-\binom{n}{2}}(-a)^{-n}(abcd/q; q)_{2n}\,\frac{(az^{±}; q)_n}{(abcd/q; q)_n}\, {}_4\phi_3(q^{-n}, q^{1-n}/(ab), q^{1-n}/(ac), q^{1-n}/(ad); q^{2-2n}/(abcd), q^{1-n}/(az^{±}); q, q)

= z^n(ab, cz^{-1}, dz^{-1}; q)_n\, {}_4\phi_3(q^{-n}, az, bz, q^{1-n}/(cd); ab, q^{1-n}/(cz^{-1}), q^{1-n}/(dz^{-1}); q, q).
See proof of <cit.>.
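The first representation above is straightforward to evaluate numerically. The Python sketch below (illustrative only, with arbitrary sample parameters) does so and checks the symmetry of p_n in a, b, c, d; choosing z as one root of x = (z+z^{-1})/2 is an implementation detail, since p_n is invariant under z ↦ z^{-1}.

import numpy as np

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def phi(top, bot, q, z, kmax):
    """Terminating r phi s with r = s + 1 (so no extra sign factor)."""
    total = 0.0
    for k in range(kmax + 1):
        num = np.prod([qpoch(a, q, k) for a in top])
        den = qpoch(q, q, k) * np.prod([qpoch(b, q, k) for b in bot])
        total += num / den * z**k
    return total

def askey_wilson(n, x, a, b, c, d, q):
    """p_n(x; a,b,c,d | q) from the first 4phi3 representation, x = (z + 1/z)/2."""
    z = x + np.sqrt(complex(x*x - 1))   # one root; p_n does not depend on the choice
    pref = a**(-n) * qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
    return pref * phi([q**(-n), q**(n-1)*a*b*c*d, a*z, a/z], [a*b, a*c, a*d], q, q, n)

# p_n is symmetric in (a, b, c, d); compare two orderings of the parameters.
n, x, q = 3, 0.3, 0.5
print(askey_wilson(n, x, 0.1, 0.2, 0.3, 0.4, q))
print(askey_wilson(n, x, 0.4, 0.1, 0.3, 0.2, q))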
The q-inverse Askey-Wilson polynomials
p_n(x;a,b,c,d|q^-1) are
given by
p_n(x;a,b,c,d|q^-1)
=q^-3n2(-abcd)^np_n(x;1a,1b,1c,1d|q),
which follows from Theorem <ref>, Proposition
<ref>, and Remark
<ref>.
§.§ The continuous dual q-Hahn and dual q-inverse Hahn polynomials
The continuous dual q-Hahn polynomials are symmetric
in three parameters a,b,c.
One has the following basic hypergeometric representations
of the continuous dual q-Hahn polynomials
Let n∈ℕ_0,
x=1/2(z+z^-1), z∈,
q∈, a,b,c∈.
Then, the continuous dual q-Hahn polynomials can be given by:
p_n(x;a,b,c|q)
:= a^-n (ab,ac;q)_n
32q^-n, az^±ab,acq,q
=q^-n2(-a)^-n
(az^±;q)_n32q^-n,
q^1-n/ab,
q^1-n/ac
q^1-n/az^±q,q^n bc
=z^n
(ab,cz^-1;q)_n32q^-n, az, bzab,q^1-n/czq,q/cz
=z^n (az^-1,bz^-1;q)_n
32q^-n,cz,q^1-n/abq^1-n/az,q^1-n/bzq,q.
The representation (<ref>) is derived by starting with (<ref>) and replacing b, c, or d→ 0
(see also <cit.>);
(<ref>) is derived using (<ref>)
and taking, for instance d→ 0;
(<ref>) is derived by using (<ref>)
and taking d→ 0;
(<ref>) is derived by using (<ref>) and
taking b→ 0 and replacing d→ b. This completes the proof.
The continuous dual q-inverse Hahn polynomials can
be obtained from the Askey–Wilson polynomials as follows
p_n(x;a,b,c|q^-1)=q^-3n2(-abc)^n
lim_d→0d^n p_n(x;1a,1b,
1c,1d|q).
Now we give the basic hypergeometric
representations
of the continuous dual q-inverse Hahn polynomials.
Let p_n(x;a,b,c|q) and all the respective parameters be defined as previously. Then, the continuous dual
q-inverse Hahn polynomials are given by:
p_n(x;a,b,c|q^-1)=
q^-2n2
(abc)^n (
1/ab,1/ac
;q)_n32q^-n,z^±/a1/ab,1/ac
q,q^n/bc
=q^-n2(-a)^n(z^±/a;q)_n
32q^-n,
q^1-nab,
q^1-nac
q^1-naz^±q,q
=q^-2n2 (abc)^n(1/ab,z/c;q)_n
32q^-n,
1/az,
1/bzq^1-nc/z,
1/abq,q
=q^-2n2(ab/z)^n
(z/a,z/b;q)_n32q^-n,1/cz,q^1-nabq^1-na/z,q^1-nb/zq,qc/z.
Each inverse representation is derived from the
corresponding representation by applying the
map q↦ 1/q and using (<ref>).
§.§ The Al-Salam–Chihara and q-inverse Al-Salam–Chihara polynomials
Let n∈ℕ_0,
x=1/2(z+z^-1), z∈,
q∈, a,b∈.
Then, the Al-Salam-Chihara polynomials are given by:
p_n(x; a, b | q) := a^{-n}(ab; q)_n\, {}_3\phi_1^{1}(q^{-n}, az^{±}; ab; q, q)
= q^-n2
(-a)^-n (az^±;q)_n22q^-n,q^1-n/abq^1-n/az^±q,qb/a
= z^n (ab;q)_n31q^-n,
az,bzabq,q^n/z^2
=z^n(a z^-1;q)_n
21q^-n,bzq^1-n/azq,q/az
=z^n(
az^-1,bz^-1
;q)_n
22-1q^-n,q^1-n/abq^1-n/az,
q^1-n/bz
q,q.
The representation (<ref>) is derived by taking (<ref>)
and replacing c↦ 0 (see also
<cit.>); (<ref>) is derived
by taking (<ref>)
and replacing c↦ 0; (<ref>) is derived by
taking (<ref>) and replacing c↦ 0;
(<ref>) is derived by taking (<ref>)
replacing b↦ 0 (see also <cit.>)
and interchanging c and a;
(<ref>) is derived by taking (<ref>) and replacing c↦ 0,
Using the Al-Salam-Chihara polynomial representations,
we can compute their q-inverse analogs.
Let p_n(x;a,b|q) and the respective parameters be defined as previously.
Then, the q-inverse Al-Salam-Chihara polynomials are given by:
p_n(x;a,b|q^-1)=
q^-n2
(-b)^n
(1/ab
;q)_n31q^-n,
z^±/a1/abq,q^na/b
=q^-n2(-a)^n(z^±/a;q)_n
22-1q^-n,q^1-nabq^1-naz^±q,q
=q^-n2(-ab
z)^n(1/ab;q)_n
311q^-n,
1/az,
1/bz
1/abq,q
=
q^-()0ptn2 (-a)^n
(1/az;q)_n21q^-n,
z/bq^1-nazq, qbz
=
q^-2n2(abz)^n
(
1/az,
1/bz
;q)_n
22q^-n, q^1-nab
q^1-naz,q^1-nbz
q,qz^2.
Each inverse representation is derived from the
corresponding representation by applying the
map q↦ 1/q and using (<ref>).
§.§ The continuous big q-Hermite and big q-inverse Hermite polynomials
Let
n∈ℕ_0,
q∈,
a∈,
x=1/2(z+z^-1), z∈.
The continuous big q-Hermite polynomials are given by:
H_n(x;a|q)
:=a^-n302q^-n, az^±-q,q
=
q^-n2 (-a)^-n
(az^±;q)_n
12q^-nq^1-nz^±/aq,q^2-n/a^2
=z^n(az^-1;q)_n
11-1q^-nq^1-n/azq,q/az
=z^n
(az^-1;q)_n/(q/az;q)_∞11qz/aq^1-n/azq,q^1-n/az
=z^n20q^-n, az -q,q^n/z^2.
The representation (<ref>) is derived by taking (<ref>) and replacing a_2↦ 0
(see also <cit.>);
(<ref>) is derived by taking (<ref>) and replacing a_2↦ 0;
(<ref>) is derived by taking (<ref>) and replacing a_2↦ 0;
(<ref>) is derived from
(<ref>) using
<cit.>;
(<ref>) is derived by taking (<ref>) or (<ref>) and
replacing b↦ 0
(see also <cit.>).
Using the continuous big q-Hermite polynomials, we can compute
their q-inverse representations.
Let H_n(x;a|q) and the respective parameters be defined as previously.
Then, the continuous big q-inverse Hermite polynomials are given by:
H_n(x;a|q^-1) =a^-n30q^-n,z^±/a-q,q^na^2
=q^-n2(-a)^n(z^±/a;q)_n12-2q^-nq^1-naz^±q,q
=
q^-()0ptn2(-a)^n
(1/az;q)_n11q^-nq^1-nazq,qz^2
=z^n201q^-n,1/az-q,qa/z.
Each inverse representation is derived from the corresponding representation
by applying the map q↦ 1/q and using (<ref>).
§.§ The continuous q-Hermite and q-inverse Hermite polynomials
Let n∈_0, q∈,
x=1/2(z+z^-1), z∈. Then, one has the
following terminating basic hypergeometric representation
for the continuous q-Hermite polynomials:
H_n(x | q) := z^n\, {}_1\phi_0^{-1}(q^{-n}; -; q, q^n/z^2).
Start with (<ref>) and take the limit as d→ c→ b→ a→ 0 sequentially.
Similarly, we can compute the basic hypergeometric representation of the continuous q-inverse Hermite polynomials.
Let H_n(x|q) and the respective parameters be defined as previously.
The continuous q-inverse Hermite polynomials are given by:
H_n(x|q^-1)
=z^n101q^-n-q,q/z^2
=z^n(qz^2;q)_∞01-q/z^2q,q^1-n/z^2.
The inverse representation (<ref>)
is derived from (<ref>) by applying the map q↦
1/q and using (<ref>).
The representation (<ref>) follows from (<ref>) by applying the
transformation (<ref>).
Note that there exist the connection relations between the continuous q-Hermite polynomials and the continuous q-inverse Hermite polynomials <cit.>
H_n(x|q)=(q;q)_n∑_k=0^⌊n/2⌋(-1)^kq^1/2 k(3k-2n-1)/(q;q)_k(q;q)_n-2kH_n-2k(x|q^-1),
H_n(x|q^-1)=(q;q)_n∑_k=0^⌊n/2⌋q^-k(n-k)/(q;q)_k(q;q)_n-2kH_n-2k(x|q).
§ Q-CHAUNDY NONTERMINATING DOUBLE PRODUCT REPRESENTATIONS
We derive two equivalent q-Chaundy infinite
series representations for a product of two
nonterminating basic hypergeometric series.
These representations
are given by sums over terminating basic
hypergeometric series using the
van de Bult & Rains notation (<ref>),
(<ref>).
Let r,s∈_0∪{-1}, u,v∈_0,
p,ℓ∈ such that p≥ r-u and ℓ≥ s-v,
a∈^r+1, b∈^u,
c∈^s+1, d∈^v,
q∈. Then
r+1up a bq, Xs+1vℓ c dq, Y
=∑_n=0^∞( a;q)_n X^n/(q, b;q)_n((-1)^nq^n2)^u-r+p
×s+u+2r+v+1u-r+p+ℓq^-n,
c,
q^1-n/ b
d,
q^1-n/ aq,
q^1+p(1-n)b_1⋯ b_u Y/a_1⋯ a_r+1 X
=∑_n=0^∞( c;q)_n Y^n/(q, d;q)_n((-1)^nq^n2)^v-s+ℓ
×r+v+2s+u+1v-s+p+ℓq^-n,
a,
q^1-n/ d
b,
q^1-n/ c
q,
q^1+ℓ(1-n)d_1⋯ d_v X/c_1⋯ c_s+1 Y,
where X, Y are given
such that the left-hand side is well-defined.
First consider the restriction p,ℓ∈ such that p≥ r-u and ℓ≥ s-v so that both nonterminating basic hypergeometric series are convergent.
Then starting with the left-hand side of (<ref>) one writes out the double product of two nonterminating
basic hypergeometric
series as two sums multiplied together using
(<ref>), (<ref>) with
X↦ g X,
Y↦ h X, for
some h X,g X∈, namely
r+1up a bq,g Xs+1vℓ c dq,h X
=∑_n=0^∞( a;q)_n/(q, b;q)_n((-1)^nq^n2)^u-r+p∑_k=0^∞( c;q)_k/(q, d;q)_k((-1)^kq^k2)^v-s+ℓ
(g X)^n(h X)^k.
Now make a double-index replacement (n',k')=(n+k,k)
or equivalently (n,k)=(n'-k',k'). This is referred
to as diagonal summation (see
<cit.>), and upon
replacement n'↦ n, and k'↦ k, we
have
r+1up a bq,g Xs+1vℓ c dq,h X
=∑_n=0^∞(g X)^n
∑_k=0^n( c;q)_k/(q, d;q)_k((-1)^kq^k2)^v-s+ℓ( a;q)_n-k/(q, b;q)_n-k((-1)^n-kq^n-k2)^u-r+p(h/g)^k.
Now we use
(<ref>),
(<ref>),
(<ref>),
collecting terms using (<ref>), (<ref>),
and replacing g X↦ X, h X↦ Y produces
(<ref>).
Without loss of generality interchanging the two
basic hypergeometric series on the left-hand side
of (<ref>) produces (<ref>).
This completes the proof.
The product representations (<ref>), (<ref>)
are clearly, term by term, inverses of each other with
regard to Lemma <ref>.
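The diagonalization step at the heart of the proof — re-summing the double series along the diagonals n' = n + k — can be checked numerically. The sketch below uses coefficients of {}_1\phi_1-type series with arbitrary sample parameters; it verifies the rearrangement itself rather than any particular closed form.

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

q, X, Y, N = 0.5, 0.4, 0.3, 80   # sample values; N is the truncation order

# coefficients of two 1phi1-type series (one numerator, one denominator parameter each)
A = lambda n: qpoch(0.3, q, n) / (qpoch(q, q, n) * qpoch(0.7, q, n)) * (-1)**n * q**(n*(n-1)//2)
C = lambda n: qpoch(0.2, q, n) / (qpoch(q, q, n) * qpoch(0.6, q, n)) * (-1)**n * q**(n*(n-1)//2)

product = sum(A(n) * X**n for n in range(N)) * sum(C(k) * Y**k for k in range(N))
diagonal = sum(X**n * sum(A(n - k) * C(k) * (Y/X)**k for k in range(n + 1)) for n in range(N))
print(product, diagonal)    # the diagonal (re-summed) series reproduces the product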
§ APPLICATIONS TO GENERATING FUNCTIONS
In this section we will treat generating functions of orthogonal polynomials in the q-Askey scheme, and also, in the q-inverse Askey scheme. A generating function for a basic hypergeometric orthogonal polynomial p_n(x; a|q), where a is a multiset of parameters with base q, is given by
f(x,t; a|q)=∑_n=0^∞ t^n h_n( a|q) p_n(x; a|q),
where h_n is a coefficient defined such that the infinite series is convergent. Unless otherwise stated, we assume throughout the manuscript that |t|<1. Sometimes other conditions on the parameters might be required in order for the expressions to be well-defined, and also, in some cases the generating functions might be entire functions of t.
§.§ Askey–Wilson polynomials
The above formulas are quite general. Nonetheless, they can be used to prove some classical generating functions for basic hypergeometric orthogonal polynomials in the q-Askey scheme. The Askey–Wilson polynomials <cit.> are the basic hypergeometric orthogonal polynomials which are at the top of the q-Askey scheme and are symmetric in four parameters a,b,c,d∈.
Let q∈, x,a,b,c,d∈, x=1/2(z+z^-1), z∈, t∈ such that |tz^±|<1. Then
{}_2\phi_1(az, bz; ab; q, t/z)\; {}_2\phi_1(cz^{-1}, dz^{-1}; cd; q, tz) = ∑_{n=0}^∞ \frac{p_n(x; a, b, c, d | q)\, t^n}{(q, ab, cd; q)_n}.
Starting with (<ref>) using
p=ℓ=0, r=s=u=v=1, a={az,bz},
b={ab},
c={cz^-1,dz^-1}, d={cd},
X=tz^-1, Y=tz, the terminating basic
hypergeometric series reduces to an Askey–Wilson
polynomial through (<ref>). After
simplification, the result follows.
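This product generating function can also be sanity-checked numerically. The hedged sketch below evaluates both sides with arbitrary small parameters; the modest truncation orders are deliberate, since the terminating {}_4\phi_3 at argument q suffers floating-point cancellation for large n.

import numpy as np

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def phi(top, bot, q, z, nterms):
    """Truncated r phi s with r = s + 1 (no extra sign factor)."""
    total = 0.0
    for k in range(nterms):
        num = np.prod([qpoch(a, q, k) for a in top])
        den = qpoch(q, q, k) * np.prod([qpoch(b, q, k) for b in bot])
        total += num / den * z**k
    return total

def askey_wilson(n, z, a, b, c, d, q):
    pref = a**(-n) * qpoch(a*b, q, n) * qpoch(a*c, q, n) * qpoch(a*d, q, n)
    return pref * phi([q**(-n), q**(n-1)*a*b*c*d, a*z, a/z], [a*b, a*c, a*d], q, q, n + 1)

q, a, b, c, d = 0.8, 0.6, 0.45, 0.35, 0.25
z, t = np.exp(1.2j), 0.15            # x = cos(1.2), |t| small for fast convergence

lhs = phi([a*z, b*z], [a*b], q, t/z, 200) * phi([c/z, d/z], [c*d], q, t*z, 200)
rhs = sum(askey_wilson(n, z, a, b, c, d, q) * t**n /
          (qpoch(q, q, n) * qpoch(a*b, q, n) * qpoch(c*d, q, n)) for n in range(16))
print(lhs, rhs)                      # the two sides should agree to many digits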
As mentioned in <ref>, one can specifically
start with (<ref>) and take the sequential limit
d→ c→ b→ a→ 0 symmetric
subfamilies.
Starting from (<ref>), we can also use
these sequential limits to obtain the following
generating functions.
Alternatively, one can use the
representations in Corollary <ref>
with Theorem <ref> to verify the following
generating functions.
We can also do the same thing with the q-inverse symmetric families. We now proceed in a systematic way to complete this task.
§.§ The continuous dual q-Hahn
Using the q-Chaundy result one
can obtain generating functions for the
continuous dual q-Hahn polynomials.
Let q∈, a,b,c∈, t∈ℂ,
|t|<|z^±|. Then,
one has the following generating function for
continuous dual q-Hahn polynomials, namely
∑_n=0^∞p_n(x;a,b,c|q) t^n/(q,ab;q)_n
=(ct;q)_∞/(tz;q)_∞21az,bzabq,tz^-1.
The generating function (<ref>) can be derived
using Theorem <ref> with r=u=1, p=s=v=ℓ=0,
a={az,bz}, b={ab},
c={cz^-1}, d=∅,
X=tz^-1, Y=tz along with the representation
of the continuous dual q-Hahn polynomials
(<ref>).
For the constraint note also Remark <ref>. We will not mention this again. This completes the proof.
Note that the generating function (<ref>)
can also be derived by using the representation
for continuous dual q-Hahn polynomials
(<ref>) with s=v=1, p=r=u=ℓ=0,
a={cz^-1}, b=∅,
c={az,bz}, d={ab}, X=tz,
Y=tz^-1. This is because the representations
(<ref>), (<ref>) are related
by the inversion transformation.
There is a similar equivalence for Theorem <ref>
using the representation of continuous dual q-Hahn
polynomials (<ref>).
Let q∈, a,b,c∈,
t∈ℂ, x=1/2(z+z^-1),
z∈, |t|<|a|.
Then
∑_n=0^∞t^n q^n2 p_n(x;a,b,c|q)/(q,ab,ac;q)_n=(-ta;q)_∞22-1az^±ab,acq,-t/a.
This follows by setting u=2, r=1, v=l=0,
s=-1, a={az^±}, b={ab,ac},
c= d=∅ along with the
representation
of the continuous dual q-Hahn polynomials
(<ref>). Finally replacing
t↦ -ta^-1 completes the proof.
Another example can be generated by the
non-standard generating function due to
Atakishiyeva and Atakishiyev
<cit.>
P(x,t;a,b,c|q):=∑_n=0^∞t^n p_n(x;a,b,c|q)/(q,tabc;q)_n
=(ta,tb,tc;q)_∞/(tabc,tz^±;q)_∞.
The q-Chaundy theorem produces the
alternative expansions of this
non-standard generating function.
Let q∈, x=1/2(z+z^-1),
z∈, a, b, c, t∈, |t|<1.
Then
P(x,t;a,b,c|q)=∑_n=0^∞(ac,bc;q)_n (t/c)^n/(q,abct;q)_n43q^-n,zc^±,q^1-n/abcttz,q^1-n/ac,q^1-n/bcq,qt/z
=∑_n=0^∞(zc^±;q)_n (t/z)^n/(q,tz;q)_n43q^-n,ac,bc,q^1-n/tzabct,
q^1-n/zc^±q,qt/c.
One can use the q-Chaundy Theorem <ref> with the product generating function (<ref>) and identify
a={ac,bc}, b={abct}, c={zc^±}, d={tz}, r=s=u=v=1, ℓ=p=0, X=t/c, Y=t/z, which upon insertion completes the proof.
§.§ The continuous dual q-inverse Hahn polynomials
We can also use the q-Chaundy result to obtain generating functions for the continuous dual q-inverse Hahn polynomials.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,c,t∈, |t|<1. Then
∑_n=0^∞t^n q^2n2p_n(x;a,b,c|q^-1)/(q,1/ab,1/ac;q)_n
=1/(abct;q)_∞22z^±/a1/ab,1/acq,at.
Start with (<ref>) and identify
c={z^±/a}, d={1/ab,1/ac}, a= b=∅, v=2, s=1, u=ℓ=0, r=p=-1, X=bct, Y=t in Theorem <ref>
with (<ref>). Finally, replacing t↦ at, completes the proof.
Starting with the representation of the continuous
dual q-inverse polynomials (<ref>) combined
with Theorem <ref> produces Theorem <ref>.
The following generating function has been previously discovered by Ismail, Zhang and Zhou in
<cit.>. However, we are able to prove it alternatively using the q-Chaundy Theorem <ref> as follows.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,c,t∈, |t|<1. Then
∑_n=0^∞t^n q^2n2 p_n(x;a,b,c|q^-1)/(q,1/ab;q)_n
=(bt;q)_∞/(abct;q)_∞22z^±/a1/ab,btq,at
=(tab/z;q)_∞/(abct;q)_∞21z/a,z/b1/abq,tab/z,
where |tab|<|z^±| in the second representation.
Start with (<ref>) and identify
a={z/a,z/b}, b={1/ab}, c={1/cz}, d=∅, u=r=1, s=v=p=ℓ=0, X=t, Y=czt in Theorem <ref>
with (<ref>). Finally replacing t↦ tab/z completes the proof.
Starting with the representation of the continuous dual
q-inverse polynomials (<ref>) combined with
Theorem <ref> produces a generating function which is
equivalent to Theorem <ref>.
Now we present the following _3ϕ_3 product generating
function for continuous dual q-inverse Hahn polynomials.
Let q∈, x=1/2(z+z^-1),
z∈, γ,a,b,c,t∈, |t|<1. Then
G_γ(x,t;a,b,c|q):=
∑_n=0^∞t^n (γ;q)_n q^2n2
p_n(x;a,b,c|q^-1)/(q,1/ab,1/ac;q)_n
=(γ abct;q)_∞/(abct;q)_∞33γ,z^±/a1/ab,1/ac,γ abctq,at.
Start with the definition of G_γ in (<ref>) and
insert the representation of the continuous dual q-inverse Hahn
polynomials (<ref>), which then is a double sum over n,k.
Then reverse the order of summation and shift the n index
n↦ n+k. This converts the outer sum to the form of a
q-binomial and the result follows.
Using the q-Chaundy product representations
we can obtain the following double sum representations of the _3ϕ_3 in
G_γ(x,t;a,b,c|q).
Let q∈, x=1/2(z+z^-1),
z∈, γ,a,b,c,t∈, |t|<1. Then
G_γ(x,t;a,b,c|q)
=∑_n=0^∞(γ;q)_n (abct)^n/(q;q)_n44q^-n,γ,z^±/a1/ab,1/ac,γ abct,q^1-n/γq,q/γ bc
=∑_n=0^∞(γ,z^±/a;q)_n (-at)^n q^n2/(q,1/ab,1/ac,γ abct;q)_n531q^-n,γ,q^1-nab,q^1-nac,q^1-n/γ abctq^1-n/γ,q^1-naz^±q,qabct.
Applying the q-Chaundy product representation
Theorem <ref> and in particular (<ref>)
and (<ref>) respectively produces the double
sum representations of the _3ϕ_3 in
Theorem <ref>.
If one takes the limit γ→ 0 then the
representations of the generating function
G_γ produces the Theorem <ref>.
Taking the limit as γ→ 0 in Corollary <ref> produces representations of G_0 using (<ref>), (<ref>)
respectively.
Replacing γ=1/ac in
(<ref>) and using (<ref>)
produces Theorem <ref>.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,c,t∈, |t|<1. Then
∑_n=0^∞t^n q^2n2 p_n(x;a,b,c|q^-1)/(1/ab,1/ac;q)_n=(q abct;q)_∞/(abct;q)_∞33q,z^±/a1/ab,1/ac,q abctq,at.
Setting γ=q in Theorem <ref> completes the proof.
§.§ The Al-Salam–Chihara polynomials
The Al-Salam–Chihara polynomials
have three standard (well-known) generating functions <cit.> which all follow easily using the q-Chaundy
Theorem <ref>.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,t∈, |t|<1. Then
∑_{n=0}^∞ \frac{t^n p_n(x; a, b | q)}{(q; q)_n} = \frac{(at, bt; q)_∞}{(tz^{±}; q)_∞},

∑_{n=0}^∞ \frac{t^n p_n(x; a, b | q)}{(q, ab; q)_n} = \frac{1}{(tz; q)_∞}\, {}_2\phi_1(az, bz; ab; q, t/z),

∑_{n=0}^∞ \frac{t^n q^{\binom{n}{2}} p_n(x; a, b | q)}{(q, ab; q)_n} = (-ta; q)_∞\, {}_2\phi_1(az^{±}; ab; q, -t/a),

where |t| < |z^{±}| and |t| < |a| in the second and third generating functions respectively, so that the nonterminating Gauss basic hypergeometric series are convergent.
The generating function (<ref>) follows from the representation
(<ref>) using the q-Chaundy Theorem <ref>
with r=s=u=v=p=ℓ=0, a={az^-1}, c={bz}, b= d=∅, X=zt, Y=tz^-1, and the q-binomial theorem twice.
The generating function (<ref>) follows from the representation (<ref>) (or (<ref>)) with r=p=-1, u=ℓ=0, s=v=1, c={az,bz}, d={ab}, a= b=∅, X=zt, Y=tz^-1, and the application of Euler's Theorem <ref> once.
The generating function (<ref>) follows from the representation (<ref>) (or (<ref>)) with r=-1, u=p=ℓ=0, s=v=1, c={az^±}, d={ab}, a= b=∅, X= Y=t, and the application of Euler's Theorem <ref> once.
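The first of these generating functions is easy to test numerically using the terminating {}_3\phi_1^1 representation of the Al-Salam–Chihara polynomials stated earlier. The sketch below uses arbitrary sample parameters and a moderate truncation order so that the terminating sums stay well conditioned in floating point.

import numpy as np

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def phi(top, bot, q, z, nterms):
    # r phi s with r = s + 1, so no extra sign factor is needed
    total = 0.0
    for k in range(nterms):
        num = 1.0
        for a in top:
            num *= qpoch(a, q, k)
        den = qpoch(q, q, k)
        for b in bot:
            den *= qpoch(b, q, k)
        total += num / den * z**k
    return total

def al_salam_chihara(n, z, a, b, q):
    # p_n(x; a, b | q) = a^(-n) (ab; q)_n 3phi2(q^(-n), az, a/z; ab, 0; q, q)
    return a**(-n) * qpoch(a*b, q, n) * phi([q**(-n), a*z, a/z], [a*b, 0.0], q, q, n + 1)

q, a, b, t = 0.8, 0.6, 0.45, 0.15
z = np.exp(1.0j)                      # x = cos(1)

lhs = sum(t**n * al_salam_chihara(n, z, a, b, q) / qpoch(q, q, n) for n in range(16))
rhs = qpoch(a*t, q, 300) * qpoch(b*t, q, 300) / (qpoch(t*z, q, 300) * qpoch(t/z, q, 300))
print(lhs, rhs)                       # infinite products approximated by 300 factors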
There's another generating function for Al-Salam–Chihara polynomials <cit.>
L_γ(x,t;a,b|q):=∑_n=0^∞t^n (γ;q)_n p_n(x;a,b|q)/(q,ab;q)_n=(γ tz;q)_∞/(tz;q)_∞32γ,az,bzab,γ tzq,t/z,
where |t|<|z^±|.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,t∈, |t|<1. Then
L_γ(x,t;a,b|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n43q^-n,γ,az,bzab,γ tz,q^1-n/γq,q/γ z^2
=∑_n=0^∞(γ,az,bz;q)_n (t/z)^n/(q,ab,γ tz;q)_n43q^-n,γ,q^1-n/ab,q^1-n/γ tzq^1-n/γ,q^1-n/az,q^1-n/bzq,qtz.
Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using r=u=p=ℓ=0, s=v=2, X=tz, Y=t/z, a={γ}, c={γ,az,bz}, d={ab,γ tz}, b=∅, completes the proof.
§.§ The q-inverse Al-Salam–Chihara polynomials
One has the following generating function for q-inverse Al-Salam–Chihara polynomials which come from the representations (<ref>)-(<ref>).
Let q∈, x=1/2(z+z^-1),
z∈, a,b,t∈, |tab|<|z^±|.
Then
∑_n=0^∞t^n q^2n2p_n(x;a,b|q^-1)/(q,1/ab;q)_n
=(tab/z;q)_∞21z/a,z/b1/abq,tab/z.
This generating function can be obtained by
starting with the q-Chaundy Theorem <ref> with representation (<ref>) (or (<ref>)), s=-1, v=p=ℓ=0, u=r=1, a={1/az,1/bz}, b={1/ab}, c= d=∅, X= Y=t, replacing t↦ tabz and then z↦ z^-1. Similarly, one can take (<ref>) and take the limit as c→ 0. This completes the proof.
Similarly, from (<ref>), we obtain the following infinite product generating function which was originally obtained in <cit.> (see also <cit.>, <cit.>).
Let q∈, x=1/2(z+z^-1),
z∈, a,b,t∈, |t|<1. Then
∑_n=0^∞t^n q^n2 p_n(x;a,b|q^-1)/(q;q)_n
=(-tz^±;q)_∞/(-ta,-tb;q)_∞.
This generating function can be obtained by
starting with the q-Chaundy Theorem
<ref> with representation (<ref>)
and s=-1, r=u=s=v=p=ℓ=0,
a={1/az},
c={z/b},
b= d=∅, X=at,
Y=bt, and replacing t↦ -t.
This completes the proof.
By starting with (<ref>) (or
(<ref>)) and the q-Chaundy Theorem
<ref>, we can obtain another generating
function.
Let q∈, x=1/2(z+z^-1),
z∈, a,b,t∈, |ta|<1.
Then
∑_n=0^∞t^n q^n2 p_n(x;a,b|q^-1)/(q,1/ab;q)_n=
1/(-bt;q)_∞21z^±/a1/abq,-ta.
This generating function can be obtained by
starting with the q-Chaundy Theorem
<ref> with representation (<ref>)
(or (<ref>)) and s=ℓ=-1,
u=v=p=0, r=u=1,
a={z^±/a},
b={1/ab},
c= d=∅, X=at,
Y=bt, and replacing t↦ -t.
This completes the proof.
One also has the following interesting
generating function for q-inverse
Al-Salam–Chihara polynomials with
arbitrary parameter γ.
Let q∈, x=1/2(z+z^-1),
z∈, γ, a,b,t∈,
|at|<1.
Then
H_γ(x,t;a,b|q)
:=∑_n=0^∞t^n(γ;q)_n
q^n2p_n(x;a,b|q^-1)/(q,1/ab;q)_n=(-γ bt;q)_∞/(-bt;q)_∞32γ,z^±/a1/ab,-γ btq,-at.
Start with the definition of H_γ in
(<ref>) and insert the representation of
the q-inverse Al-Salam–Chihara polynomials
(<ref>), which then is a double sum
over n,k, then reverse the order of summation
and shift the n index n↦ n+k.
This converts the outer sum to the form of a
q-binomial and the result follows.
Using the q-Chaundy product representations
we can obtain the following double sum representations
of the _3ϕ_2 in
H_γ(x,t;a,b|q).
Let q∈, x=1/2(z+z^-1),
z∈, γ, a,b,t∈, |t|<1. Then
H_γ(x,t;a,b|q)
=∑_n=0^∞(γ;q)_n (-bt)^n/(q;q)_n43q^-n,γ,z^±/a1/ab,-γ bt,q^1-n/γq,qa/γ b
=∑_n=0^∞(γ,z^±/a;q)_n (-at)^n/(q,1/ab,-γ bt;q)_n43q^-n,γ,q^1-nab,-q^1-n/γ btq^1-n/γ,q^1-naz^±q,-qbt.
Applying the q-Chaundy product representation
Theorem <ref> and, in particular, (<ref>) and
(<ref>) respectively produces the double
sum representations of the _3ϕ_2 in
Theorem <ref>.
Inserting γ=1/ab in Theorem
<ref> produces Theorem <ref>.
§.§ The continuous big q-Hermite polynomials
The continuous big q-Hermite polynomials
have three standard (well-known) generating
functions <cit.> which all
follow easily using the q-Chaundy Theorem <ref>.
Let q∈, x=1/2(z+z^-1),
z∈, a,t∈, |t|<1. Then
∑_n=0^∞t^n H_n(x;a|q)/(q;q)_n=(at;q)_∞/(tz^±;q)_∞,
∑_n=0^∞t^n q^n2 H_n(x;a|q)/(q;q)_n=(-ta;q)_∞201az^±-q,-t/a,
where |t|<|a|, in the second generating function, so that the nonterminating Gauss basic hypergeometric series is convergent.
One can use the q-Chaundy Theorem <ref> with the representations (<ref>)–(<ref>).
For instance, the generating function (<ref>) follows
with (<ref>), (<ref>), p=v=0,
r=u=1, s=ℓ=-1,
a={az},
b= c= d=∅, X=tz^-1,
Y=tz along with the representation
of the continuous big q-Hermite polynomials
(<ref>).
However, it is easier to just take the limit
as b→ 0 in Theorem
<ref>. Note that the limit as b→ 0 in both (<ref>), (<ref>) produce (<ref>). This completes the proof.
Another product generating
function for continuous q-Hermite polynomials
is <cit.>
M_γ(x,t;a,b|q):=
∑_n=0^∞t^n (γ;q)_n H_n(x;a|q)/(q;q)_n=(γ tz;q)_∞/(tz;q)_∞21γ,azγ tzq,t/z.
One may use the q-Chaundy Theorem <ref>
to produce alternative expansions of this
generating function which we reproduce in
the following theorem.
Let q∈, x=1/2(z+z^-1),
z∈, γ, a,t∈, |t|<1.
Then
M_γ(x,t;a|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n32q^-n,γ,azγ tz,q^1-n/γq,q/γ z^2,
=∑_n=0^∞(γ,az;q)_n (t/z)^n/(q,γ tz;q)_n32q^-n,γ,q^1-n/γ tzq^1-n/γ,q^1-n/azq,qtz^2/a.
One can use the q-Chaundy Theorem <ref> with the product
generating function (<ref>). However, it is easier
to take the limit as b→ 0 in Theorem <ref>.
This completes the proof.
§.§ The continuous big q-inverse Hermite polynomials
From (<ref>), we can obtain the following generating function for continuous big q-inverse Hermite polynomials.
Let q∈, x=1/2(z+z^-1),
z∈, a,t∈, |t|<1. Then
∑_n=0^∞t^n q^n2H_n(x;a|q^-1)/(q;q)_n
=(-tz^±;q)_∞/(-ta;q)_∞.
Equivalently, replacing t↦ t/a, taking the limit as a→ 0 in (<ref>), and then replacing b↦ a yields this generating function.
If one starts with Theorem <ref> and takes the limit b→ 0, one arrives at this result. Also, if one uses the q-Chaundy Theorem <ref> with representations (<ref>) or (<ref>), one arrives at the same generating function.
Note that for the continuous big q-Hermite polynomials there exists a generating function with arbitrary numerator dependence given by (γ;q)_n, i.e., (<ref>). We have not, as of yet, been able to derive an analogous generating function for the continuous big q-inverse Hermite polynomials.
§.§ The continuous q-Hermite polynomials
The standard generating function for the continuous
q-Hermite polynomials <cit.>
can be easily obtained using the q-Chaundy
Theorem <ref>.
Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then
∑_n=0^∞t^n H_n(x|q)/(q;q)_n=1/(tz^±;q)_∞.
The generating function (<ref>) follows easily from the representation
(<ref>) using the q-Chaundy Theorem <ref>
with r=s=-1, u=v=0, p=ℓ=-1, a= b= c= d=∅, X=zt, Y=tz^-1, and Euler's Theorem <ref> twice.
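Since H_n(cos θ|q) = ∑_{k=0}^n \binom{n}{k}_q e^{i(n-2k)θ} (the standard q-binomial sum for the continuous q-Hermite polynomials), this generating function is also easy to verify numerically; the sketch below uses arbitrary sample values and approximates the infinite products by long finite ones.

import numpy as np

def qpoch(a, q, n):
    out = 1.0
    for k in range(n):
        out *= 1 - a * q**k
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def q_hermite(n, theta, q):
    # H_n(cos(theta) | q) = sum_k [n choose k]_q e^{i(n-2k) theta}, which is real
    return sum(qbinom(n, k, q) * np.exp(1j*(n - 2*k)*theta) for k in range(n + 1)).real

q, theta, t = 0.6, 0.9, 0.35
z = np.exp(1j*theta)

lhs = sum(t**n * q_hermite(n, theta, q) / qpoch(q, q, n) for n in range(80))
rhs = 1.0 / (qpoch(t*z, q, 300) * qpoch(t/z, q, 300))
print(lhs, rhs.real)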
Another generating function for
continuous q-Hermite polynomials is given by
<cit.>
J(x,t|q):=∑_n=0^∞q^n2 t^n H_n(x|q)/(q;q)_n=(-tz;q)_∞01-1--tzq,-t/z.
By applying the q-Chaundy Theorem <ref>, we can obtain the following results.
Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then
J(x,t|q)=∑_n=0^∞q^n2(tz)^n/(q;q)_n11q^-n-tzq,q/z^2,
=∑_n=0^∞q^n2(t/z)^n/(q,-tz;q)_n201q^-n,-q^1-n/tz-q,-q^ntz^3.
Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using r=s=ℓ=-1, u=p=0, v=1, X=-tz, Y=-t/z, d={-tz}, a= b= c=∅ completes the proof.
A third generating function for continuous q-Hermite polynomials is given by <cit.>
K(x,t|q):=∑_n=0^∞t^n (γ;q)_n H_n(x|q)/(q;q)_n=(γ tz;q)_∞/(tz;q)_∞11-1γγ tzq,t/z.
Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then
K(x,t|q)=∑_n=0^∞(γ;q)_n (tz)^n/(q;q)_n22-1q^-n,γγ tz,q^1-n/γq,q/γ z^2,
=∑_n=0^∞(γ;q)_n (t/z)^n/(q,γ tz;q)_n31q^-n,γ,-q^1-n/γ tzq^1-n/γq,q^ntz^3.
Starting with the generating function (<ref>), and applying both expansions of the q-Chaundy Theorem <ref> using ℓ=-1, r=s=u=p=0, v=1, X=tz, Y=t/z, a= c={γ}, d={γ tz}, b=∅, completes the proof.
One also has the following interesting generating function due to Ismail for continuous q-Hermite polynomials cf. <cit.>.
Let q∈, x=1/2(z+z^-1),
z∈, t∈,
|t|<1. Then
O(x,t|q):=∑_n=0^∞t^n q^1/4n^2 H_n(x|q)/(q;q)_n=(-t;q^1/2)_∞21q^1/4z^±-q^1/2q^1/2,-t.
One should see the proof of <cit.>.
Using the q-Chaundy Theorem <ref>,
one is able to derive alternate expressions for
the generating function for continuous q-Hermite
polynomials O(x,t|q).
Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then
O(x,t|q)=∑_n=0^∞q^1/2n2 t^n/(q^1/2;q^1/2)_n311q^-1/2n,q^1/4z^±-q^1/2q^1/2,q^1/2,
=∑_n=0^∞(q^1/4z^±;q^1/2)_n (-t)^n/(± q^1/2;q^1/2)_n22± q^-1/2nq^1/4-1/2nz^±q^1/2,-q^1/2.
Starting with the generating function (<ref>),
and replacing q↦ q^2 converts the right-hand
side to a form where the q-Chaundy Theorem <ref>
can be used.
Using r=-1, u=p=ℓ=0, s=v=1, X= Y=-t,
c={q^1/2z^±}, d={-q},
a= b=∅, and then replacing
q↦ q^1/2 completes the proof.
One should observe the surprising fact that the alternate expressions
for O(x,t|q) in Theorem <ref>, have the property that the
terminating basic hypergeometric series are only a
function of x, q and n. Comparing these expressions
with the original generating function, it can be seen
that these terminating basic hypergeometric
series must represent alternative basic
hypergeometric representations for the continuous
q-Hermite polynomials!
Remark <ref> leads us to the following important result.
Let q∈, x=1/2(z+z^-1), z∈. Then
H_n(x|q)=q^-1/4n
(-q^1/2;q^1/2)_n311q^-1/2n,q^1/4z^±-q^1/2q^1/2,q^1/2
=(-1)^nq^-1/4n^2
(q^1/4z^±;q^1/2)_n22± q^-1/2nq^1/4-1/2nz^±q^1/2,-q^1/2.
Comparing the terms of the series of the
alternate expressions for the generating function
O(x,t|q) in Theorem <ref> completes
the proof.
This then leads us to the following quadratic transformations for terminating basic
hypergeometric series.
Let q∈, x=1/2(z+z^-1), z∈. Then, one has
the following terminating quadratic transformation:
101q^-2n-q^2,q^2/z^2=(±qz;q)_∞01-q^2/z^2q^2,q^2-2n/z^2
=q^-1/2n^2/z^n(-q;q)_n31q^-n,q^1/2z^±-qq,-q^n
=q^-1/2n^2/(-z)^n(q^1/2z^±;q)_n22-1± q^-nq^1/2-nz^±q,q.
Comparing (<ref>), (<ref>),
(<ref>) and making the replacement
q↦ q^2 completes the proof.
The above terminating quadratic
transformation formula leads to an interesting
summation formula.
Let n∈_0, q∈. Then, one
has the following summation formula
10-1q^-2n-q^2,q^2n∓ 1=q^-n(1/2±1/2)(-q;q)_n.
Setting z=q^±1/2 in Theorem <ref> completes the proof.
§.§ The continuous q-inverse Hermite polynomials
The following result can be found in
<cit.>. The result
can be found using the q-Chaundy Theorem
<ref>, but we provide a slightly
different proof.
This infinite product generating function
was originally found in
<cit.>.
Let q∈, x=1/2(z+z^-1),
z∈, |t|<1. Then
∑_n=0^∞t^nq^n2 H_n(x|q^-1)/(q;q)_n=(-tz^±;q)_∞.
First start with the left-hand side of
(<ref>) and use the terminating representation of the
continuous q-inverse Hermite polynomials (<ref>). Then reversing the order of the summation followed by evaluating the outer sum using
Euler's Theorem <ref>, the inner sum can be evaluated using the q-binomial theorem.
This completes the proof.
If one starts with the representation (<ref>) (or (<ref>)) and utilize the q-Chaundy Theorem <ref>, one arrives at a nonterminating product representation of the corresponding generating function, which happens to be divergent (it is proportional to a _2ϕ_0).
One also has the following interesting generating function for continuous q-inverse Hermite polynomials cf. <cit.>.
Let q∈, x=1/2(z+z^-1), z∈, |t|<1.
Then
N(x,t|q):=
∑_n=0^∞t^n q^1/4n^2 H_n(x|q^-1)/(q;q)_n
=1/(t;q^1/2)_∞21q^1/4z^±-q^1/2q^1/2,-t.
One should see the proof of <cit.>.
Using the q-Chaundy Theorem <ref>,
one is able to derive alternate expressions for
the generating function for continuous q-inverse
Hermite polynomials N(x,t|q).
Let q∈, x=1/2(z+z^-1), z∈, t∈, |t|<1. Then
N(x,t|q)=∑_n=0^∞t^n/(q^1/2;q^1/2)_n31q^-1/2n,q^1/4z^±-q^1/2q^1/2,-q^1/2n
=∑_n=0^∞(q^1/4z^±;q^1/2)_n (-t)^n/(± q^1/2;q^1/2)_n22-1± q^-1/2nq^1/4-1/2nz^±q^1/2,q^1/2.
Starting with the generating function (<ref>),
and replacing q↦ q^2 converts the right-hand
side to a form where the q-Chaundy Theorem <ref>
can be used.
Using r=p=-1, u=ℓ=0, s=v=1, X=t, Y=-t,
c={q^1/2z^±}, d={-q},
a= b=∅, and then replacing
q↦ q^1/2 completes the proof.
Observe surprisingly that the alternate expressions for
N(x,t|q) have the property that the
terminating basic hypergeometric series are only
functions of x, q and n. Comparing these expressions
with the original generating function, we realize that these terminating basic hypergeometric series must represent alternative basic hypergeometric
representations for the continuous q-inverse Hermite
polynomials!
Remark <ref> leads us to the next important result.
Let n∈_0, q∈, x=1/2(z+z^-1), z∈. Then
H_n(x|q^-1)=q^-1/4n^2(q;q)_n/(q^1/2;q^1/2)_n31q^-
1/2n,q^1/4z^±-q^1/2q^1/2,-q^1/2n
=(-1)^nq^-1/4n^2
(q^1/4z^±;q^1/2)_n22-1± q^-1/2nq^1/4-1/2nz^±q^1/2,q^1/2.
Comparing the terms of the series of the alternate
expressions for the generating function N(x,t|q)
in Theorem <ref> completes the proof.
This then leads us to quadratic transformations for terminating
basic hypergeometric series.
Let n∈_0, q∈, x=1/2(z+z^-1), z∈. Then, one
has the following terminating quadratic transformation:
101q^-2n-q^2,q^2/z^2=(±qz;q)_∞01-q^2/z^2q^2,q^2-2n/z^2
=q^-1/2n^2/z^n(-q;q)_n31q^-n,q^1/2z^±-qq,-q^n
=q^-1/2n^2/(-z)^n(q^1/2z^±;q)_n
22-1± q^-nq^1/2-nz^±q,q.
Comparing (<ref>), (<ref>), (<ref>),
(<ref>)
and making the replacement q↦ q^2 completes the proof.
The above terminating quadratic transformation formula
leads to an interesting summation formula.
Let n∈_0, q∈. Then, one has the
following summation formula:
101q^-2n-q^2,q^2∓1=
(± q^1∓1/2;q)_∞01-q^2∓1q^2,q^-2n+2∓1
=
q^-1/2n^2∓1/2 n (-q;q)_n.
Setting z=q^±1/2 in Theorem <ref>
completes the proof.
Note that for the continuous q-Hermite polynomials there exists a generating function with arbitrary numerator dependence given by (γ;q)_n, i.e., (<ref>). We have not, as of yet, been able to derive an analogous generating function for the continuous q-inverse Hermite polynomials.
AskeyIsmail84
R. Askey and M. E. H. Ismail.
Recurrence relations, continued fractions, and orthogonal
polynomials.
Memoirs of the American Mathematical Society, 49(300):iv+108,
1984.
AtakishiyevaAtakishiyev11
M. Atakishiyeva and N. Atakishiyev.
A non-standard generating function for continuous dual q-hahn
polynomials.
Revista de Matemática: Teoría y Applicaciones,
18(1):111–120, 2011.
Bailey1928
W. N. Bailey.
Products of Generalized Hypergeometric Series.
Proceedings of the London Mathematical Society. Second Series,
28(4):242–254, 1928.
Chaundy43
T. W. Chaundy.
An extension of hypergeometric functions. I.
The Quarterly Journal of Mathematics. Oxford Series, 14:55–78,
1943.
ChristansenIsmail2006
J. S. Christiansen and M. E. H. Ismail.
A moment problem and a family of integral evaluations.
Transactions of the American Mathematical Society,
358(9):4071–4097, 2006.
Clausen1828
T. Clausen.
Über die Fälle, wenn die Reihe von der Form
y=1+α/1·β/γ
x+α·α+1/1· 2·β·β+1/γ·γ+1x^2 +etc.
ein Quadrat von
der Form
z=
1+α'/1·β'/γ'·δ'/ε'x+α'
·α'+1/1· 2·β'·β'+1/γ'·γ'+1·δ' δ'+1/ε' ε'+1x^2 +
etc.
hat.
Journal für die Reine und Angewandte Mathematik, 3:89–91,
1828.
CohlCostasSantos20b
H. S. Cohl and R. S. Costas-Santos.
Symmetry of terminating basic hypergeometric representations of the
Askey-Wilson polynomials.
Journal of Mathematical Analysis and Applications,
517(1):126583, 2023.
CohlIsmail20
H. S. Cohl and M. E. H. Ismail, editors.
Lectures on orthogonal polynomials and special functions,
volume 464 of London Mathematical Society Lecture Note Series.
Cambridge University Press, Cambridge, 2021.
Sixth Summer School, Maryland, 2016.
ErdelyiHTF
A. Erdélyi, W. Magnus, F. Oberhettinger, and F. G. Tricomi.
Higher Transcendental Functions. Vols. 1-3.
Robert E. Krieger Publishing Co. Inc., Melbourne, Fla., 1981.
GaspRah
G. Gasper and M. Rahman.
Basic hypergeometric series, volume 96 of Encyclopedia of
Mathematics and its Applications.
Cambridge University Press, Cambridge, second edition, 2004.
With a foreword by Richard Askey.
Ismail:2009:CQO
M. E. H. Ismail.
Classical and Quantum Orthogonal Polynomials in One Variable,
volume 98 of Encyclopedia of Mathematics and its Applications.
Cambridge University Press, Cambridge, 2009.
With two chapters by Walter Van Assche, With a foreword by Richard A.
Askey, Corrected reprint of the 2005 original.
IsmailMasson1994
M. E. H. Ismail and D. R. Masson.
q-Hermite polynomials, biorthogonal rational functions, and
q-beta integrals.
Transactions of the American Mathematical Society,
346(1):63–116, 1994.
IsmailZhang2022
M. E. H. Ismail, R. Zhang, and K. Zhou.
Orthogonal polynomials of Askey–Wilson type.
https://arxiv.org/abs/2205.05280
arXiv:2205.05280, 2022.
Ismailetal2022
M. E. H. Ismail, R. Zhang, and K. Zhou.
q-fractional Askey-Wilson integrals and related semigroups of
operators.
Physica D. Nonlinear Phenomena, 442:Paper No. 133534, 15, 2022.
Koekoeketal
R. Koekoek, P. A. Lesky, and R. F. Swarttouw.
Hypergeometric orthogonal polynomials and their
q-analogues.
Springer Monographs in Mathematics. Springer-Verlag, Berlin, 2010.
With a foreword by Tom H. Koornwinder.
vandeBultRains09
F. J. van de Bult and E. M. Rains.
Basic hypergeometric functions as limits of elliptic hypergeometric
functions.
Symmetry, Integrability and Geometry: Methods and Applications,
5(059), 2009.
|
http://arxiv.org/abs/2307.05687v1 | 20230711180057 | Spectral Analysis of ionospheric density variations measured with the large radio telescope in the Low-Latitude region | [
"Sarvesh Mangla",
"Abhirup Datta"
] | physics.space-ph | [
"physics.space-ph",
"astro-ph.EP",
"astro-ph.IM",
"physics.geo-ph",
"physics.plasm-ph"
] |
Sarvesh Mangla, Abhirup Datta
Department of Astronomy, Astrophysics and Space Engineering, Indian Institute of Technology Indore, Madhya Pradesh, 453552, India
Sarvesh [email protected]
* GMRT can demonstrate an order of magnitude better sensitivity than GNSS-based TEC measurements in characterizing ionospheric fluctuations.
* The spectral analysis technique used with GMRT can detect multiple MSTIDs and smaller-scale structures simultaneously.
* GMRT can detect ionospheric variations as small as 10 km. The study also showed waves changing direction unexpectedly during sunrise time.
The low-latitude ionosphere is a dynamic region with a wide range of disturbances in temporal and spatial scales. The Giant Metrewave Radio Telescope (GMRT), situated in the low-latitude region, has demonstrated its ability to detect various ionospheric phenomena. It can detect total electron content (TEC) variations with a precision of 1 mTECU and can also measure the TEC gradient with an accuracy of about 7× 10^-4 TECU km^-1. This paper describes the spectral analysis of previously calculated TEC gradient measurements and validates them by comparing their properties using two bands. The analysis tracked individual waves associated with medium-scale traveling ionospheric disturbances (MSTIDs) and smaller waves down to wavelengths of ∼ 10 km. The ionosphere is found to undergo unanticipated changes during sunrise hours, with waves changing propagation direction as the sun approached the zenith. Equatorial spread F disturbances are observed during sunrise hours, along with smaller structures moving in the same direction.
§ PLAIN LANGUAGE SUMMARY
The Earth's ionosphere can limit observations of the sky at sub-GHz frequencies and introduces an extra phase term that is difficult to calibrate. The same calibration data can be used to study the Earth's ionosphere more precisely than conventional probes. Radio interferometry is a technique for studying astronomical sources and the Earth's ionosphere by measuring the spatial coherence function of multiple elements. The GMRT is a unique instrument for exploring the equatorial ionosphere region. This study used dual-band observations of a bright radio source with the GMRT to explore the Equatorial Ionization Anomaly region. The GMRT can detect variations in total electron content and measure the TEC gradient with high accuracy. Spectral analysis was performed on TEC gradient measurements to track individual waves associated with medium-scale traveling ionospheric disturbances and smaller waves down to wavelengths of about 10 km.
The results showed unexpected changes in the ionosphere during sunrise hours, with large plasma irregularities and smaller structures moving in the same direction.
§ INTRODUCTION
Radio-frequency arrays, especially those that operate in the low-frequency range (≲ 1 GHz), are a potent but comparatively underutilised tool for remote sensing. They are mostly used to observe cosmic sources and were built as synthesis telescopes. To mitigate the effects of the ionosphere, radio interferometers need detailed calibration schemes. However, the same calibration data are seldom used to detect and study the Earth's ionosphere.
Radio interferometers measure the `Spatial Coherence Function' <cit.>, where an additional phase term is introduced because of the ionosphere. This phase is proportional to the difference in TEC along the line of sight between two array elements. Thus, this extra phase term can be easily converted to differential TEC with an accuracy of 10^-3 TECU <cit.> or better <cit.>. Interferometers are effective tools for studying ionospheric dynamics due to their increased sensitivity to TEC gradients compared to traditional probes such as radars, ionosondes, and the Global Navigation Satellite System (GNSS). Recently, they have also been used to study Travelling Ionospheric Disturbances (TIDs; Jac1992A A...257..401J) and even ionospheric scintillations <cit.>.
Using the Very Large Array (VLA), [ and references within]Jac1992A A...257..401J measured phases from individual antennas by observing a single bright source and further performed spectral analysis to detect TIDs primarily associated with gravity waves. Additionally, Helm2012RaSc...47.0L02H_spectral utilized a similar spectral analysis on TEC gradient observations with the VLA to detect several small-scale structures and medium-scale TIDs (MSTIDs). It was observed that the smaller-scale disturbances propagated in the same direction as the MSTIDs. Later, using the VLA Low-frequency Sky Survey, Helmboldt2012RaSc...47.5008H detected many wavelike disturbances at 74 MHz by measuring the positional shifts of several bright radio galaxies to obtain fluctuation spectra for the TEC gradient. Studying these spectra gave a detailed account of behaviour ranging from ionospheric intraday variation to seasonal variation, where turbulent activity appears to be significant during the summer nighttime, the winter daytime, and around sunset in the spring near the mid-latitude region.
By utilising the night-time observations of the wide fields of view of the Murchison Widefield Array (MWA), Loi_2015 conducted a power spectrum analysis of ionospheric fluctuations (measured by positional offsets of several radio sources) to reveal field-aligned irregularities (large electron irregularities elongated along the geomagnetic field lines; Loi2015GeoRL..42.3707L, Loi2016RaSc...51..659L) and other wave-like phenomena including TIDs <cit.>. Later, Helm2020RaSc...5507106H utilized the GLEAM Survey <cit.> of the MWA to generate images of ionospheric structures present during the observations. Furthermore, spectral analysis of these images provided evidence of distinct features of ionospheric activity, including the generation of MSTIDs in conjunction with sporadic E layer (Es) events during nighttime. A recent analysis with the LOw Frequency ARray (LOFAR), GNSS and ionosondes by Rich2020JSWSC..10...10F revealed the breakdown of large-scale ionospheric structure into smaller scales, giving rise to scintillation.
Although the above-mentioned studies were done with radio telescopes, they are limited to the mid-latitude region due to geographical location constraints. This makes the Giant Metrewave Radio Telescope (GMRT; geographic latitude and longitude: 19^∘ 05' N and 74^∘ 03' E; geomagnetic latitude: 10^∘ 40' N) a unique instrument to study the ionosphere in the low-latitude region. It is important to note that the GMRT is located in the geophysically sensitive region between the magnetic equator and the northern crest of the Equatorial Ionization Anomaly (EIA; Appl1946Natur.157..691A) in the Indian longitude sector.
In our most recent study <cit.>, we explored the techniques for deriving ionospheric data from calibration of the GMRT data. We successfully achieved a typical precision of 10^-3 TECU while monitoring a bright radio source. We also demonstrated methods for measuring the TEC gradient with an accuracy of about 7× 10^-4 TECU km^-1 since arrays like this are essentially only sensitive to the TEC gradient. Here, we propose spectral analysis methods for these TEC gradient observations in an effort to further this project. We will show that these techniques can find and describe a number of medium- to small-scale phenomena, as well as give a general statistical description of the spectrum of TEC variations.
The present paper is outlined as follows: in Sec. <ref> we summarise our GMRT observations and the derived TEC gradient measurements. In Sec. <ref>, we outline the spectral analysis method used to estimate the properties of ionospheric structures from GMRT observations and discuss the limit on how small a wave can be detected with this method. Finally, in Sec. <ref>, we present our results and conclusions.
§ OBSERVATION AND TEC GRADIENT MEASUREMENTS
A nearly 9-hour GMRT observation of a cosmic radio source (3C 68.2) served as the basis for this analysis. The observations were taken simultaneously at 235 and 610 MHz on the night of August 5-6, 2012. The observation consisted of `scans' (blocks of time) of nearly one hour each, except scan number four, which is a nearly five-hour continuous observation spanning midnight to post-sunrise hours, with a temporal sampling of 0.5 s. Furthermore, a moderate level of geomagnetic activity (K_p index ∼ 1-3) and a moderate level of solar activity (F10.7 = 137.9 SFU; 1 SFU = 10^-22 W m^-2 Hz^-1) were both present throughout the observation.
The TEC gradients measured with the GMRT are described in detail in Mangla2022MNRAS.513..964M. In brief, the ionosphere introduces an extra phase term proportional to the difference in TEC measured along the lines of sight of a pair of antennas. This makes radio interferometers sensitive to TEC gradients rather than to the absolute TEC value. However, due to the Y-shape of the GMRT (consisting of northeastern, northwestern, and southern arms), it is challenging to accurately measure the full TEC gradient and perform a Fourier inversion on it. Mangla2022MNRAS.513..964M utilised two methods (previously discussed by Helm2012RaSc...47.0K02H_temporal) to address this issue. The first method estimates the full 2-D TEC gradient over the array using second- and third-order polynomials (Taylor series) at each time step. This method can recover the large, long-period disturbances present during the observation.
The second approach computes the projection of the TEC gradient along each arm of the GMRT at each time step. This method provides limited directional information, whereas the polynomial-based method tends to miss or dampen small-scale fluctuations. For the purposes of this paper, we only conduct spectral analysis of the TEC gradients computed with the polynomial-based method. The following section outlines the procedure used to spectrally analyse the time series of TEC gradients derived through this method.
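A rough sketch of the per-time-step polynomial fit is given below; the data structures (projected antenna positions and a dictionary of per-baseline differential TEC values) and the function names are our assumptions, not the authors' code.

```python
import numpy as np

def poly_basis(x, y):
    # Third-order 2-D polynomial terms matching p_0..p_8 in the next section
    # (no constant term, since only TEC differences between antennas are measured)
    return np.stack([x, y, x**2, y**2, x*y, x**3, y**3, x**2*y, x*y**2], axis=-1)

def fit_tec_gradient(ant_xy, dtec_pairs):
    """Least-squares fit of the coefficients p_0..p_8 at one time step.

    ant_xy     : (n_ant, 2) projected antenna positions (north-south, east-west)
    dtec_pairs : dict {(i, j): differential TEC between antennas i and j}
    """
    basis = poly_basis(ant_xy[:, 0], ant_xy[:, 1])        # (n_ant, 9)
    rows = [basis[i] - basis[j] for (i, j) in dtec_pairs]
    obs = [dtec_pairs[pair] for pair in dtec_pairs]
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(obs), rcond=None)
    return coeffs                                         # p_0..p_8

# Repeating the fit at every 0.5 s time step yields the time series p_n(t)
# that are Fourier transformed in the next section.
```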
§ SPECTRAL ANALYSIS
§.§ Methodology
A Fourier-based approach using the TEC gradient measurements obtained from the polynomial method is used to detect and analyse individual waves or groups of waves passing over the array. This method was previously utilised by Helm2012RaSc...47.0L02H_spectral to account for wavefront distortions and to detect multiple wavefronts simultaneously.
The primary objective is to improve the array's effective spatial coverage by taking into account the source's apparent position and the movements of ionospheric disturbances, which have different speeds, wavelengths, and directions. To estimate the properties of each structure, the TEC fluctuations are approximated as a combination of multiple oscillating modes, each having the specific form
TEC(t) = τ_ω exp[i (ω t-k_xx-k_yy)]
where τ_ω is the amplitude, which is allowed to vary in order to account for wavefront distortions, ω is the oscillation frequency, and k_x and k_y are the wave numbers along x (north-south) and y (east-west), respectively. For a single Fourier mode, the Fourier transforms of the partial derivatives with respect to x and y are:
D_x(ω;x,y) = [∂τ_ω/∂ x - ik_xτ_ω] × exp[-i(k_xx+k_yy)]
D_y(ω;x,y) = [∂τ_ω/∂ y - ik_yτ_ω] × exp[-i(k_xx+k_yy)]
Mangla2022MNRAS.513..964M employed a polynomial series method to estimate the TEC in the plane transverse to the line of sight. This method assumes that the TEC can be represented by a two-dimensional, third-order polynomial, such that the difference in TEC between two antennas i and j can be represented as
Δ_ij = p_0 (x_i-x_j) + p_1 (y_i-y_j) + p_2 (x_i^2-x_j^2) + p_3 (y_i^2-y_j^2) + p_4 (x_i y_i-x_j y_j)
      + p_5 (x_i^3-x_j^3) + p_6 (y_i^3-y_j^3) + p_7 (x_i^2 y_i-x_j^2 y_j) + p_8 (x_i y_i^2-x_j y_j^2)
where x and y are the antenna positions projected onto the transverse plane in the north-south and east-west directions, respectively. At any given time, this method can replicate most of the observed structure. The time series of polynomial coefficients can then be Fourier transformed to decompose the transverse TEC gradient into Fourier modes <cit.>, such that
D_x(ω;x,y) = P_0(ω) + 2P_2(ω)x + P_4(ω)y + 3P_5(ω)x^2 + 2P_7(ω)xy + P_8(ω)y^2
D_y(ω;x,y) = P_1(ω) + 2P_3(ω)y + P_4(ω)x + 3P_6(ω)y^2 + P_7(ω)x^2 + 2P_8(ω)xy
where D_x and D_y represent the Fourier transforms of the north-south and east-west components of the TEC gradient, respectively, while P_n(ω) corresponds to the Fourier transform of p_n(t). By expanding equation <ref> in a Taylor series and comparing the coefficients of the terms (x, y, x^2, y^2, xy) with those in equation <ref> for D_x and D_y, the wave-number components (k_x and k_y) of each mode can be approximated as
k_x(ω) ≃ - Im {2P_2/P_0(ω) }≃ - Im {P_4/P_1(ω) }
k_y(ω) ≃ - Im {2P_3/P_1(ω) }≃ - Im {P_4/P_0(ω) }
Since there are two estimators for both k_x and k_y, a weighted average is utilised, with the weights being |P_0|^2 and |P_1|^2.
Note that P_5 to P_8 are not used in the equations above, as they would introduce higher-order effects (explained in Appendix <ref>). We also estimate the spectral power, wave speed, and azimuth (the propagation direction of the wave), respectively, as
𝒫≡|ℱ{Δ}|^2
V ≃ω/√(k^2_x + k^2_y)
Az ≃ tan^-1(k_y/k_x)
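A compact illustration of these estimators is sketched below; we assume each estimator is weighted by the squared magnitude of the coefficient appearing in its denominator (|P_0|^2 or |P_1|^2), which is one natural reading of the weighting described above, and the function name is ours.

```python
import numpy as np

def wave_parameters(P, omega):
    """Estimate k_x, k_y, speed and azimuth for one Fourier mode.

    P     : complex Fourier-transformed coefficients P_0..P_8 at frequency omega
    omega : angular frequency (rad/s); if positions are in km, speeds come out in km/s
    """
    # Two estimators each for k_x and k_y; assumed weights are the squared
    # magnitudes of the coefficients in the denominators (|P_0|^2, |P_1|^2).
    w0, w1 = abs(P[0])**2, abs(P[1])**2
    kx = (w0 * -np.imag(2 * P[2] / P[0]) + w1 * -np.imag(P[4] / P[1])) / (w0 + w1)
    ky = (w1 * -np.imag(2 * P[3] / P[1]) + w0 * -np.imag(P[4] / P[0])) / (w0 + w1)
    # P_5..P_8 are deliberately not used here (higher-order effects, see Appendix).
    speed = omega / np.hypot(kx, ky)                    # V = omega / |k|
    azimuth = np.degrees(np.arctan2(ky, kx))            # measured from North through East
    return kx, ky, speed, azimuth
```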
§.§ Derived Wave Properties from GMRT data
The polynomial-based TEC gradient fits are used to compute the power as a function of time and frequency in the two frequency bands (235 and 610 MHz) by performing discrete Fourier transforms (DFTs) on the time series of polynomial coefficients. To enable the use of DFTs, which work best with evenly sampled data, the missing time steps of about 10 min between scans were filled with zeros (`zero-padding'). The DFTs are performed with a sliding (Hamming) window of one hour for frequencies up to 10 hr^-1 (i.e., periods > 6 min), since the same one-hour interval is used to de-trend the data <cit.>. The Hamming window diminishes the ringing effects that might arise from the zero-padding. The power is computed for a period extending 30 min before and after the observing run.
The DFTs of each polynomial coefficient are therefore used to calculate the power as a function of temporal frequency and time for the two bands. Fig. <ref>(a) displays the normalised power for the 235 and 610 MHz bands. Although random fluctuations are commonly observed, significant wave detections above the background are seen in multiple cases. Most of these waves are observable at frequencies ranging from 0.5 to 4 hr^-1, i.e., periods between 15 and 120 min, which is typical of MSTIDs. To isolate such detections, a mask is produced by determining the median and median absolute deviation (MAD) within elliptical `annuli' around each pixel, with a size of 1.5 hr in local time and 2 hr^-1 in frequency. A pixel exceeding the median by more than twice the MAD for its annulus is considered a detection above the background.
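The sliding-window DFT and the MAD-based detection mask can be sketched as follows; for brevity the mask uses a single global median/MAD rather than the elliptical annuli described above, so it is only a simplified stand-in, and the hop size is an assumption.

```python
import numpy as np

def dynamic_spectrum(coeff, dt, win_len):
    """Sliding-window (Hamming) DFT power of one zero-padded coefficient series.

    coeff   : real array with gaps between scans filled by zeros
    dt      : sampling interval in seconds
    win_len : window length in samples (one hour in the text)
    """
    win = np.hamming(win_len)
    hop = win_len // 4                                   # assumed window step
    starts = range(0, len(coeff) - win_len + 1, hop)
    power = np.array([np.abs(np.fft.rfft(coeff[s:s + win_len] * win))**2
                      for s in starts])
    freqs = np.fft.rfftfreq(win_len, d=dt) * 3600.0      # cycles per hour
    return power, freqs

def detection_mask(power, thresh=2.0):
    """Flag pixels more than thresh * MAD above the median (simplified, global)."""
    med = np.median(power)
    mad = np.median(np.abs(power - med))
    return power > med + thresh * mad
```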
The wave speed and propagation direction, or azimuth angle (measured clockwise from North through East), are calculated as functions of local time and temporal frequency using equations <ref> and <ref>. The same mask is used to display both the wave speed and the azimuth angle for the detections, which are shown in Figs. <ref>(b) and <ref>(c), respectively, for the 235 and 610 MHz bands.
Wave parameters derived from GMRT measurements at 235 and 610 MHz

Event | Period | Wave speed (m/s) 610 MHz | Wave speed (m/s) 235 MHz | Azimuth (degree) 610 MHz | Azimuth (degree) 235 MHz | Wavelength (km) 610 MHz | Wavelength (km) 235 MHz
A | 1 hr 10 min | 46.55±5.16 | 48.02±6.54 | -81.78±0.67 (W) | 135.09±0.59 (SE) | 195.51±21.67 | 201.68±27.48
B | 1 hr 33 min | 54.91±3.45 | 56.78±5.23 | 101.52±3.36 (E) | 103.13±6.83 (E) | 263.57±16.56 | 272.54±25.10
C | 24 min | 95.47±9.20 | 106.20±8.02 | 28.07±10.42 (N) | 13.18±1.49 (N) | 137.19±13.25 | 152.93±11.55
To determine whether the wave structures in the two frequency bands are similar, we computed the Pearson correlation coefficient. Fig. <ref>(d) shows the correlation coefficient computed independently at every time stamp, with grey vertical lines indicating the time stamps where zero-padding is used. The normalised power is highly correlated, especially for the longest scan (scan no. 4; nearly 5 hr), emphasizing the importance of continuous data for spectral analysis in ionospheric studies. More detections are visible in the 235 MHz band than in the 610 MHz band, reflecting the frequency dependence of ionospheric refraction. The correlation coefficient time series is only partially correlated, or decorrelated, where a scan is very short or a detection is only marginally above the background. A substantial number of detections, including small structures, are highly correlated between local times of 05:00 and 06:00 hours, suggesting that the ionosphere varies before dawn. Additionally, several small-scale structures are detected in the direction of a pre-existing MSTID, which propagates towards the east.
Based on the highly correlated coefficients observed in Fig. <ref>(d), we have selected three events, labelled A, B, and C in Fig. <ref>(a). We calculated the properties of these events using the method described in Section <ref> and present the results in Table <ref>. Events B and C show agreement in wave speed and azimuth angle between the two bands, while event A only partially agrees. The wave speed in the two bands for event A is approximately the same, but the propagation direction is out of phase by approximately 180^∘, which may be due to scintillation observed in the higher-order terms of the polynomial fit to the 235 MHz data <cit.>.
§.§.§ Mean Power Spectra
A more statistical description of the observed TEC variations can be produced with the polynomial-based methodology. While Figure <ref> provides useful information for locating and evaluating specific instances of waves or groups of waves, in Figure <ref> we present the average power as a function of projected wave number, inside bins of wave number (k), for each one-hour block of time. The shaded region represents the average power, and the solid line marks the median in that region. Additionally, we include the corresponding `noise-equivalent' spectra (dashed lines) for 235 and 610 MHz in blue and green, respectively. The noise-equivalent spectra are computed using the following method (a minimal code sketch is given after the list):
* First, the 2-D polynomial fitting is performed on the measurements in each of the two bands as well as on the mean measurement; this process is described in detail in Mangla2022MNRAS.513..964M.
* Next, DFTs are performed on the difference between the individual band fits and the fit to the mean time series.
* Lastly, the same spectral analysis detailed in Sect. <ref> is applied to these residual DFTs to obtain the noise-equivalent spectra.
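A minimal sketch of this residual-based estimate is given below (as referenced above the list); it assumes the per-band coefficient fit and the fit to the mean measurement are already available as time series, and it evaluates a single one-hour window for simplicity.

```python
import numpy as np

def noise_equivalent_power(coeff_band, coeff_mean, dt, win_len):
    """Power spectrum of the residual between a single-band coefficient fit and
    the fit to the mean (combined) measurement, for one window."""
    resid = np.asarray(coeff_band[:win_len]) - np.asarray(coeff_mean[:win_len])
    win = np.hamming(win_len)
    power = np.abs(np.fft.rfft(resid * win))**2
    freqs = np.fft.rfftfreq(win_len, d=dt)
    return freqs, power
```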
In Figure <ref>, the resulting `noise-equivalent' spectra are shown; they are the average spectra computed within each one-hour block. The average power spectra (shaded regions) intersect the noise-equivalent spectra of the corresponding band at around ∼ 0.6 km^-1 (dashed grey line), which is clearly seen in the panels for 03:00-04:00 and 06:00-07:00 local time. This intersection suggests that the method is capable of detecting substantial power from structures smaller than half the size of the array, approximately 10 km, or k ∼ 0.6 km^-1. Many structures are also visible in Fig. <ref>(a) as noticeable bumps above the background at wave numbers ranging from 0.015 to 0.12 km^-1, corresponding to wavelengths of about 50 to 500 km, in the majority of the panels, consistent with the known properties of MSTIDs.
Summary of observed phenomena at different geophysical locations, obtained with different instruments and during different seasons

Location | Instrument | Period (min) | Wavelength (km) | Wave speed (m/s) | Direction or comments | References
Low-latitude | GMRT | 10-20 | 100-220 | 50-200 | Northeast/eastward (during dusk) | this work
Low-latitude | GMRT | 60-90 | 150-250 | 40-100 | Summer: westward (nighttime) | this work
Low-latitude | GMRT | >90 | 250-350 | 50-100 | Eastward (during dusk) | this work
Low-latitude | GPS | 15-30 | 50-300 | 50-250 | Fall: southeast (daytime), northeast/westward (nighttime); Winter: southeast (daytime), westward (nighttime); Spring: westward (nighttime); Summer: northeast/westward (nighttime) | <cit.>
Low-latitude | Airglow imager | 30-90 | 100-400 | 30-120 | All seasons: southwestward (nighttime) | <cit.>
Low-latitude | Airglow imager | 15-20 | 100-175 | 130-140 | Winter: southwestward (nighttime) | <cit.>
Mid-latitude | VLA | 15-50 | 160-300 | 110-170 | Summer: southwest | <cit.>
Mid-latitude | VLA | >20 | 100-200 | | West/northwest (before midnight); northeast (after midnight) | <cit.>
Mid-latitude | LOFAR | | 200 | | Fall: southwest to northeast | <cit.>
Mid-latitude | LOFAR | | 200-700 | 20-40 | Northwest to southeast | <cit.>
Mid-latitude | LOFAR | ∼11 | 60-70 | ∼100 | Northeast to southwest; two TIDs (nighttime) | <cit.>
Mid-latitude | MWA | 60 | 700 | 200 | Spring: northeast | <cit.>
Mid-latitude | GPS | 15-27 | 50-250 | 50-250 | Fall: southeast (daytime), northwest/westward (nighttime); Winter: southeast (daytime), westward (nighttime); Spring: southeast/eastward (daytime), westward (nighttime); Summer: eastward (daytime), westward (nighttime) | <cit.>
Mid-latitude | GPS | 10-60 | 300-1000 | 100-200 | Winter: southeast (daytime) | <cit.>
Mid-latitude | GPS | 10-60 | 200-500 | 100-150 | Summer: southwest (nighttime) | <cit.>
Mid-latitude | Airglow imager | 30-90 | 100-300 | 50-100 | All seasons: southwest (nighttime) | <cit.>
High-latitude | GPS | 10-25 | 100-300 | 100-300 | Fall: southeast (daytime), northeast/southwest (nighttime); Winter: southeast (daytime), northward/westward (nighttime); Spring: southwest (nighttime); Summer: northwest (nighttime), southeast (dusk) | <cit.>
§ RESULTS AND CONCLUSIONS
Our work shows that the GMRT can efficiently study ionospheric fluctuations using a long observation of a bright radio source. For the aforementioned observations, the 1σ uncertainty in the measurements is 1 mTECU in both the 235 and 610 MHz bands [see Mangla2022JoAA and Mangla2022MNRAS.513..964M, respectively]. This level of sensitivity achieved by the GMRT is ten times better than that of GNSS measurements <cit.>.
The spectral analysis technique can detect multiple MSTIDs simultaneously, and the wave structures in the two bands are similar, as shown in Fig. <ref>(a), with the high correlation confirmed by the Pearson correlation coefficient computed at each time step. The normalised power is highly correlated, particularly in scan no. 4, which lasted over 4 hours. Thus, ionospheric studies using spectral analysis would provide better results if continuous observations of a bright radio source are taken. Furthermore, Fig. <ref>(a) suggests possible MSTID candidates with periods between 10 and 30 min, or longer, and estimated speeds between 50 and 150 ms^-1, similar to nighttime MSTIDs in the Northern Hemisphere <cit.>. During summer, nighttime MSTIDs primarily propagate southwestward <cit.>, likely generated by the ionosphere's electrodynamics. Mid-latitude daytime MSTIDs mostly propagate southeastward and are likely generated by acoustic gravity waves <cit.>.
This work also identified a group of waves between local times of 05:00 and 06:00, indicating unexpected ionospheric changes during sunrise, similar to those anticipated during sunset <cit.>. Additionally, the polynomial-based method can detect structures as small as 10 km, demonstrating the GMRT's ability to detect ionospheric variations at such small wavelengths. Among the many waves detected intermittently throughout the night and the turbulent fluctuations seen at all times, the waves appear to change direction after sunrise. This behaviour is visible in both frequency bands for waves with oscillation frequencies between 3 hr^-1 and 9 hr^-1.
Our key findings are summarized in Table <ref>, together with other studies that have examined MSTID propagation characteristics using different instruments, methods, locations, and time periods. The wave speeds and wavelengths obtained with the GMRT are comparable with the trends observed in GPS studies <cit.> at similar geophysical latitudes. To explore further, we have also chosen three events on the basis of the correlation coefficient, which are shown in Fig. <ref>(a) and labelled events A, B, and C. The properties of these events are as follows:
§.§ Event A
A dominant pattern with a period of 1 hr 10 min, propagating at around 50 ms^-1, is briefly observed in both bands near 03:40 local time (UTC + 05:30), as shown in Fig. <ref>(d). This pattern exhibits MSTID-like properties, resembling what has been observed during nighttime in the summer season at low latitudes in GPS studies <cit.>. However, during the same interval, ionospheric scintillation was likely present at 235 MHz in the higher-order terms of the polynomial fit <cit.>, which highlights a limitation of this method in accurately determining the propagation parameters, as interferometric phase measurements are sensitive to phase errors induced by ionospheric scintillation or other sources. Furthermore, the Fresnel scale at an altitude of 300 km for 235 MHz is about 620 m, so these observations are sensitive to structures at these scales. Helmboldt_2022 also noted a similar correlation between TID activity and scintillation at 35 MHz (Fresnel scale ∼ 1.5 km).
§.§ Event B
During dusk-time (05:15 to 06:15 local time), an eastward-directed wave is observed, along with many smaller waves, in both bands. It is unusual to observe an ionospheric wave during sunrise, as the ionosphere is in a state of transition and its dynamics are more complex. This may explain the formation of the many small structures detected in both bands in a similar direction. It may also be that these smaller waves are present inside the detected ionospheric structure and propagate in the same direction. Furthermore, from Fig. <ref> one can also notice the two to three orders of magnitude lower noise in the panels covering 05:00-07:00 local time.
However, it is possible that these ionospheric disturbances are cases of equatorial spread F (ESF), which occurs at low latitudes, typically around sunrise and sunset <cit.>. Furthermore, the observed ionospheric structure has a period of 1 hr 33 min and a wavelength of ∼ 300 km, propagating with a speed between 50 and 60 ms^-1, which falls within the typical range of ESF. A more comprehensive, statistical analysis of these phenomena appears to be necessary. Besides, it is important to note that ESF is a complex phenomenon that is still not fully understood, but it is known to have a significant impact on the ionosphere and on the propagation of radio signals, especially in equatorial regions. The study of ESF is important for understanding the dynamics of the ionosphere and for mitigating the impact of ionospheric disturbances on communication and navigation systems.
§.§ Event C
We observed a northward-moving MSTID briefly during nighttime (02:15 local time) with a period of 24 min and a wave speed of about 100 ms^-1 in both bands. Nighttime northward-propagating MSTIDs are rare events in the Northern Hemisphere, and Ichihara201342 suggested that the northward propagation of MSTIDs is caused by gravity waves. A similar event has been observed by Bhat2021AdSpR..68.3806B using an airglow imager in the Indian longitude sector.
Furthermore, Shiokawa_2005JA011406 discovered quasi-periodic southward-moving MSTIDs during nighttime, likely caused by gravity waves in the thermosphere. Similar studies by <cit.> also observed southward propagation of nighttime MSTIDs and hypothesized that deep convection in the troposphere generates gravity waves that contribute to the formation of MSTIDs. Both of these studies were conducted using 630.0 nm airglow observations in the Kototabang region of Indonesia, which is located at equatorial latitudes but in the Southern Hemisphere. These studies are consistent with our work, which shows that the propagation direction is towards the poles.
The northward-moving TIDs during nighttime (event C) and the eastward-moving TIDs around sunrise (event B) may be attributed to gravity waves (GWs) that have been filtered by the wind. Wind-filtered gravity waves tend to propagate more easily against the wind, while facing difficulty propagating in the same direction as the wind. Based on the Horizontal Wind Model (HWM) climatology <cit.>, the winds within the F-region would have been moving south by southwest around midnight and transitioning to predominantly westward flow by 05:00 local time. Therefore, it is plausible that the detected TIDs can be attributed to wind-filtered GWs. A similar phenomenon was observed by Mukherjee_2010 during summer, using an all-sky imager in the Indian longitude region.
This work is just the beginning of using the GMRT to detect MSTIDs. More work is needed to automate the detection process and to improve the localization and propagation estimates. Simultaneous observations from GPS stations would be particularly valuable in unravelling the microphysical nature of TIDs, as GNSS data primarily capture large- to mid-scale behaviour. Radio interferometers, in contrast, can provide insight into the small-scale behaviour, bridging the gap between these two scales and enabling a more comprehensive understanding of the microphysical processes driving TID formation and the associated directional behaviour.
Additionally, owing to the greater spatial and temporal resolution of radio telescopes, they can provide high-quality measurements of ionospheric parameters that can be used to develop more accurate and sophisticated models of the ionosphere, which will in turn help to correct for ionospheric delays in interferometric synthetic aperture radar (InSAR) and geodetic instrument data. This is important because ionospheric delays can cause errors in InSAR measurements of ground deformation, particularly for long-wavelength signals. In addition, radio telescopes can provide information on ionospheric irregularities and scintillations, which can affect the propagation of radio signals and cause errors in navigation and communication systems. By combining data from InSAR and radio telescopes, researchers can improve their understanding of the ionosphere and its effects on geodetic and communication applications.
§ OPEN RESEARCH
All the radio observation data used in this study are available in the GMRT Online Archive (<https://naps.ncra.tifr.res.in/goa/data/search>) with proposal code 22_064. Solar and geomagnetic indices are obtained from the OMNIWeb services (<https://omniweb.gsfc.nasa.gov/ow.html>).
We thank the staff of the GMRT who have made these observations possible. GMRT is run by the National Centre for Radio Astrophysics of the Tata Institute of Fundamental Research. SM is grateful for the financial assistance from the University Grants Commission. We thank the two anonymous reviewers whose comments/suggestions helped improve and clarify this manuscript.
This work made use of astropy, a community-developed core Python package for Astronomy <cit.>, <cit.>, <cit.>. This research also made use of the <cit.> and <cit.> open-source plotting packages for Python.
§ HIGHER ORDER EFFECTS
By comparing equations <ref> and <ref>, we also obtain estimators for k_x and k_y from the higher-order terms, which are as follows
k_x^2(ω) ≃ - ( 6P_5/P_0(ω) ) ≃ - ( 2P_7/P_1(ω) )
k_y^2(ω) ≃ - ( 6P_6/P_1(ω) ) ≃ - ( 2P_8/P_0(ω) )
k_x (ω) k_y(ω) ≃ -( 2P_7/P_0(ω) ) ≃ -( 2P_8/P_1(ω) )
Assuming k_p^2 = A e^iϕ gives

k_p = √(A) e^i(ϕ/2 + nπ)     for n even
k_p = √(A) e^i(ϕ/2 + nπ/2)   for n odd
For each of k_x and k_y there are two possible values, one positive and one negative, corresponding to the two directions of each wave-vector component. This creates an ambiguity when calculating the actual propagation direction from equation <ref>. To avoid these higher-order effects, these equations are not used to estimate the wave vectors.
|
http://arxiv.org/abs/2307.04401v1 | 20230710080341 | Ethicist: Targeted Training Data Extraction Through Loss Smoothed Soft Prompting and Calibrated Confidence Estimation | [
"Zhexin Zhang",
"Jiaxin Wen",
"Minlie Huang"
] | cs.CL | [
"cs.CL",
"cs.AI"
] |
Large pre-trained language models achieve impressive results across many tasks. However, recent works point out that pre-trained language models may memorize a considerable fraction of their training data, leading to the privacy risk of information leakage. In this paper, we propose a method named Ethicist for targeted training data Extraction THrough loss smoothed soft prompting and calIbrated ConfIdence eSTimation, investigating how to recover the suffix in the training data when given a prefix. To elicit memorization in the attacked model, we tune soft prompt embeddings while keeping the model fixed. We further propose a smoothing loss that smooths the loss distribution of the suffix tokens to make it easier to sample the correct suffix. In order to select the most probable suffix from a collection of sampled suffixes and estimate the prediction confidence, we propose a calibrated confidence estimation method, which normalizes the confidence of the generated suffixes with a local estimation. We show that Ethicist significantly improves the extraction performance on a recently proposed public benchmark. We also investigate several factors influencing the data extraction performance, including decoding strategy, model scale, prefix length, and suffix length. Our code is available at <https://github.com/thu-coai/Targeted-Data-Extraction>.
§ INTRODUCTION
Large pre-trained language models have achieved impressive results on various natural language processing tasks <cit.>.
Model sizes have rapidly increased from millions to trillions of parameters and keep growing to achieve better performance and even obtain some emergent abilities <cit.>. Despite the success of large-scale pre-trained language models, recent works point out that they may memorize a considerable fraction of their training data, leading to the privacy risk of information leakage <cit.>. Furthermore, researchers find that memorization scales with model size <cit.>. Therefore, this privacy risk becomes more and more critical in the era of large-scale pre-training, and attacking language models to extract their training data is attracting increasing attention.
There are currently two main settings for extracting training data. One is membership inference attack, which infers whether a given example is contained in the model's training data <cit.>. The other is untargeted training data extraction <cit.>, which aims to extract training data from scratch (i.e., without a given prefix). However, neither setting is suitable for extracting targeted training data. For example,
attackers may feed the model a prefix indicating the beginning of an email and try to extract the following private email content from the training dataset, as shown in Figure <ref>. In such cases, we do not have complete examples for membership inference, and we have specific goals instead of performing untargeted extraction. Therefore, we focus on targeted training data extraction in this paper, which requires recovering the suffix when given a prefix according to the training data. Compared with untargeted training data extraction, this task matters more because attackers can recover specific types of training data instead of any possible training data, which might be harmless. What's more, it is easier to evaluate targeted training data extraction because we just need to compare the prediction with the ground truth suffix. For untargeted training data extraction, in contrast, we need to search over the whole massive pre-training dataset (e.g., The Pile dataset <cit.>, which has 800GB of text data) to check whether it contains the generated sample, which is very slow and costly.
The general process for targeted training data extraction can be divided into two steps: (1) generating one or more possible suffixes based on the given prefix, and (2) choosing the most likely suffix as the prediction result based on a confidence estimation method. We summarize two challenges of this task: (1) how to increase the generation likelihood of the ground truth suffix, and (2) how to estimate the confidence accurately so that the confidence score can be meaningfully interpreted as the probability that the output suffix is correct. To tackle these challenges, we propose a method named Ethicist for targeted training data Extraction THrough loss smoothed soft prompting and calIbrated ConfIdence eSTimation.
For the first challenge, we propose loss smoothed soft prompting. It uses soft prompts to elicit memorization in the attacked model and adds an additional loss besides the maximum likelihood estimation (MLE) loss to smooth the loss distribution of the suffix tokens. Through the loss smoothing, we hope to ensure that the probability of the ground truth token at each time step is not low, which makes it more likely to sample the ground truth suffix. With the two loss functions, we tune the prepended soft prompt tokens on an extracted training set that contains pairs of prefixes and ground truth suffixes. The existence of such a training set is reasonable because large-scale pre-training data generally contain public data (e.g., Common Crawl) [Similar setting is adopted in <cit.>.]. For the second challenge, we propose a calibrated confidence estimation method. We find that the model's perplexity cannot accurately represent the probability that the generated suffix is correct, because the prediction probabilities for different prefixes are inherently different and incomparable. We thus normalize the confidence of the generated suffixes with a local estimation, which mitigates the problems caused by intrinsic differences in the difficulty of distinct samples. We verify Ethicist on a recently proposed public benchmark containing 15,000 pairs of prefixes and suffixes derived from The Pile dataset <cit.>.
Experiments show that Ethicist can significantly improve the extraction performance, which suggests that existing large language models are at significant risk of leaking training data. We also discuss and analyze several factors influencing the data extraction performance, including decoding strategy, model scale, prefix length, and suffix length.
Our contributions can be summarized as follows:
* We propose loss smoothed soft prompting to reduce the difficulties of sampling the ground truth suffixes.
* We propose a calibrated confidence estimation method that enables the confidence score to be meaningfully interpreted as the probability that the output suffix is correct.
* Experiments on a recently proposed benchmark demonstrate that Ethicist can consistently and significantly improve the data extraction performance across various model sizes. We further investigate several factors influencing the data extraction performance.
§ RELATED WORK
§.§ Training Data Extraction
Existing works on training data extraction mainly focus on membership inference attack or untargeted training data extraction. For membership inference attack, adversaries need to judge whether a given example is contained in the training data of the attacked model. <cit.> train several shadow models that mimic the attacked models' behaviors to help train an auditing model that can predict whether an example is contained in the training dataset. <cit.> perform membership inference attacks on machine translation systems. They find it is harder to attack sequence generation models than classification models. <cit.> show that the encoded dense representations can leak information under membership inference attack. <cit.> focuses on attacking masked language models that are pre-trained on possibly sensitive data (e.g., clinical notes). They introduce an additional reference masked language model besides the original attacked model and compute the ratio of the likelihood measured by the attacked model and the reference model, which is better than solely relying on the attacked model.
For untargeted training data extraction, adversaries first generate various samples using the attacked model and then predict whether they are contained in its training set. <cit.> extract hundreds of verbatim sequences from the popular pre-trained language model GPT-2 <cit.>. And there is privacy information such as names, phone numbers, and email addresses in the extracted sequences. <cit.> try to extract sensitive information from BERT <cit.> pre-trained on clinical notes. However, they are mostly unable to meaningfully expose Personal Health Information by simply using templates. Different from the existing works, we focus on targeted training data extraction that aims to recover the suffix when given a prefix, which is more security-critical and easier to evaluate.
§.§ Memorization
We generally expect models to gain generalization ability from the training process. However, recent works point out that models may unintentionally memorize the training data even without overfitting <cit.>. One possible method to mitigate this problem is to deduplicate the training data <cit.>. However, <cit.> also show that it is possible to recover samples appearing only once in the training dataset. Surprisingly, <cit.> find that there is a forgetting baseline during the pre-training of causal language models (e.g., the model can memorize at least 40% of the data that appear only once, even after being trained on other data for many epochs afterward). These findings further emphasize the difficulty of avoiding memorization and the potential threats of unintended memorization in large-scale pre-trained language models. Another line of work uses differential privacy to avoid the memorization problem <cit.>, but the mechanism can reduce accuracy <cit.>. Differential privacy also increases the training time, which can further affect the accuracy within the same budget. Therefore, there is still no effective and practical way to avoid unintended memorization. Our work further verifies the existence of unintended memorization and makes it more necessary to develop practical defense methods.
§ METHODOLOGY
We formulate the targeted training data extraction task as follows: given a source prefix S=(s_1,s_2,⋯,s_|S|) with |S| tokens, the attacker should predict the target suffix T=(t_1,t_2,⋯,t_|T|) with |T| tokens and its confidence. The pair of the given prefix and the predicted suffix (S,T) should be contained in the pre-training dataset D_pretrain={(S_i,T_i)}, on which the attacked model M is trained. Predicting the confidence score is necessary for picking out the most probable suffixes when the ground truth suffix is unknown in realistic attack scenarios (i.e., we need to pick out the most probable pairs of prefixes and extracted suffixes based on their confidence scores among all predictions). We assume the attacker can obtain some pairs of ground truth prefixes and suffixes D_train={(S_i,T_i) |(S_i,T_i)∈ D_pretrain, 1≤ i≤ |D_train|} before attacking, which is reasonable because large-scale pre-training data generally contain public data (e.g., Common Crawl) [Similar setting is adopted in <cit.>.]. The attacker can utilize D_train to train their attacking models, and their goal is to predict suffixes for the prefixes in the test set D_test={S_i | 1≤ i≤ |D_test|}. Note that the prefixes S_i in D_test are included in D_pretrain but are not part of D_train.
§.§ Method Overview
An overview of Ethicist is shown in Figure <ref>. We first tune the soft prompt embeddings during training to elicit memorization in the attacked model M with the MLE loss and the additional smoothing loss. The smoothing loss aims to increase the probability of sampling the ground truth suffix. After prompt tuning, we repeatedly sample K suffixes using the attacked model M conditioned on one given prefix and reorder them with our calibrated confidence estimation. Our calibrated confidence estimation can not only select the most probable suffix but also provide a more accurate confidence score that represents how likely the predicted suffix is to be correct. Finally, the suffix with the highest confidence is selected as the final prediction.
§.§ Prompt Tuning with Smoothing Loss
We adopt prompt tuning to train the soft prompt tokens on D, which prepends |X| soft tokens X=(x_1,x_2,⋯,x_|X|) before the original input sequence. Then we feed the input to the attacked model M to compute the MLE loss:
ℒ_MLE = -1/|T|∑_i=1^|T|logP_M(t_i|X,S,t_<i).
Note that we only tune the parameters of the soft prompt tokens; the parameters of the attacked model M are fixed. We use prompt tuning for two reasons: (1) we do not want to change the original parameters of the attacked model M, because the main goal is to elicit memorization in M, and (2) prompt tuning improves training efficiency when M is very large, making Ethicist able to adapt efficiently to larger language models, which generally memorize more training data.
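The following is a minimal PyTorch sketch (not the authors' released code) of prompt tuning against a frozen causal LM; the GPT-Neo 1.3B checkpoint name, the 100-token prompt length, the learning rate, and the helper name `suffix_token_losses` are assumptions taken from or modeled on the implementation details reported later.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
for p in model.parameters():                      # the attacked model M stays frozen
    p.requires_grad_(False)

embed = model.get_input_embeddings()
prompt_len = 100                                  # |X| soft prompt tokens (assumed)
soft_prompt = torch.nn.Parameter(0.02 * torch.randn(prompt_len, embed.embedding_dim))
opt = torch.optim.AdamW([soft_prompt], lr=1e-3)   # only the soft prompt is trained

def suffix_token_losses(prefix_ids, suffix_ids):
    """Per-token NLL of the suffix conditioned on [soft prompt; prefix; suffix_<t]."""
    ids = torch.cat([prefix_ids, suffix_ids], dim=-1)                  # (1, |S|+|T|)
    inputs = torch.cat([soft_prompt.unsqueeze(0), embed(ids)], dim=1)  # prepend soft tokens
    logits = model(inputs_embeds=inputs).logits
    start = prompt_len + prefix_ids.size(1)       # position of the first suffix token
    pred = logits[:, start - 1:start - 1 + suffix_ids.size(1), :]      # shift by one
    return F.cross_entropy(pred.transpose(1, 2), suffix_ids, reduction="none")  # (1, |T|)

# Illustrative update on one (prefix_ids, suffix_ids) pair:
#   nll = suffix_token_losses(prefix_ids, suffix_ids)[0]
#   loss = nll.mean()            # L_MLE; the smoothing term is added in the next sketch
#   loss.backward(); opt.step(); opt.zero_grad()
```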
The MLE loss aims to increase the total generation probability of the target suffix T. However, when using popular sampling methods such as top-k sampling <cit.> and top-p (nucleus) sampling <cit.> to generate multiple candidate suffixes, we want to ensure that the probability of the ground truth suffix token at each time step is not low. Suppose the total probability of the ground truth suffix is high while one token in the sequence has a low generation probability; in this case, it is still hard to generate the correct suffix using auto-regressive sampling. Therefore, we propose a smoothing loss to make the loss distribution over the suffix sequence smoother. More specifically, we pick out the top-N tokens with the highest loss values in the whole sequence T. Then we additionally optimize the generation probabilities for these N tokens as follows:
ℒ_Smooth = -1/N∑_i=1^NlogP_M(t_σ(i)|X,S,t_<σ(i)),
where t_σ(i) represents the token with the i-th highest loss in T. Note that t_σ(i) is dynamically computed during training. The smoothing loss can also be seen as assigning higher weights to the tokens with higher loss values. Finally, we derive the overall loss function as follows:
ℒ_Total=ℒ_MLE+αℒ_Smooth,
where the coefficient α is a hyperparameter to control the strength of the smoothing loss.
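As a concrete illustration of the total loss above, the sketch below averages the N highest per-token losses and adds the result to the MLE loss; N=5 and α=0.7 follow the implementation details reported in the appendix, and the input is assumed to be the per-token NLL vector from the previous sketch.

```python
import torch

def total_loss(token_nll, n_top=5, alpha=0.7):
    """Combine the MLE loss with the smoothing loss.

    token_nll : (|T|,) per-token negative log-likelihoods of the ground-truth suffix.
    """
    mle = token_nll.mean()                                          # L_MLE
    top_n = torch.topk(token_nll, k=min(n_top, token_nll.numel())).values
    smooth = top_n.mean()                  # extra weight on the hardest tokens
    return mle + alpha * smooth            # L_Total = L_MLE + alpha * L_Smooth
```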
§.§ Calibrated Confidence Estimation
After predicting the suffix, we also need to give a confidence score for the prediction, which can be meaningfully interpreted as the probability that the output suffix is correct. A naive method is to use the generation likelihood P_T=exp(-|T|ℒ_MLE) as the confidence score. This naive method is reasonable for picking out the most probable suffix T_i from a collection of sampled suffixes {T_1,T_2,⋯,T_M} for one given prefix. However, it is unsuitable for comparing the confidence of different predicted suffixes corresponding to different prefixes. As the language model is essentially a statistical model, frequencies of tokens and n-grams in the prefixes can greatly influence the absolute generation likelihood of the suffixes. For example, consider two predicted suffixes T_A and T_B conditioned on two different prefixes S_A and S_B, where S_A and T_A contain tokens and n-grams with much higher frequencies. The absolute generation likelihood of T_A may be significantly higher than T_B, even if they are both ground truth suffixes. Therefore, to eliminate the intrinsic differences in scales of generation likelihood across different suffixes, we propose a novel calibrated confidence estimation method. To calibrate the confidence estimation, we have two considerations: (1) different generated suffixes conditioned on one given prefix should have comparable scales of generation likelihood, and (2) the memorized ground truth suffix is expected to be generated more frequently during multiple generations, which is also validated in Section <ref>.
Suppose the sampled distinct suffixes are {T_1,T_2,⋯,T_M} for one given prefix, the repeated generation times for these suffixes are {r_1,r_2,⋯,r_M} (i.e., r_i denotes how many times T_i is generated among K repeated sampling outputs), and the MLE loss values for these suffixes are {ℒ_MLE^1,ℒ_MLE^2,⋯,ℒ_MLE^M}. Then we assign the calibrated confidence score to T_i as:
C(T_i)=r_i×exp(-|T_i|ℒ_MLE^i)/∑_j=1^M r_j×exp(-|T_j|ℒ_MLE^j).
Through the proposed confidence estimation method, we obtain the confidence score of T_i by comparing it with other sampled suffixes with comparable scales of generation likelihood. In this way, we avoid the scale problem brought by different prefixes and make it practical to compare the predicted suffixes conditioned on different prefixes. Moreover, we leverage the repetition time r_i as a valuable signal since memorized suffix is expected to be generated more frequently. Finally, we select the suffix T_best with the highest confidence score C(T_best) among {C(T_1),C(T_2),⋯,C(T_M)} as the predicted suffix and C(T_best) as its confidence estimation.
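A small self-contained sketch of the calibrated confidence defined above is given below; the data structures (a list of sampled suffix strings and a dict of per-suffix average NLL and length) are our own assumptions about how the quantities would be stored.

```python
import math
from collections import Counter

def calibrated_confidence(samples, stats):
    """Calibrated confidence for one prefix.

    samples : list of K sampled suffix strings (with repeats)
    stats   : dict {suffix: (avg_nll, length)} giving L_MLE and |T| per distinct suffix
    """
    counts = Counter(samples)
    # r_i * exp(-|T_i| * L_MLE^i); in practice a log-sum-exp would avoid underflow
    scores = {s: r * math.exp(-stats[s][1] * stats[s][0]) for s, r in counts.items()}
    z = sum(scores.values())
    conf = {s: v / z for s, v in scores.items()}
    best = max(conf, key=conf.get)
    return best, conf[best]        # predicted suffix and its calibrated confidence
```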
§ EXPERIMENTS
§.§ Benchmark
We evaluate Ethicist on the LM-Extraction benchmark[<https://github.com/google-research/lm-extraction-benchmark/>], which is designed for benchmarking targeted training data extraction attacks. It consists of a subset contained in The Pile dataset <cit.>. Both the prefix and the suffix are 50 tokens long. All examples are well-specified, meaning that there is only one 50-token suffix in The Pile dataset given the 50-token prefix. What's more, these examples are all chosen to meet the property that there exists a prefix length (maybe longer than 50) that causes the model to generate the suffix string exactly, which implies that the extraction performance on this benchmark may be higher than that on randomly selected prefixes. We randomly split the dataset into training, validation and test sets. The detailed statistics of the LM-Extraction benchmark are shown in Table <ref>.
§.§ Baselines
We compare Ethicist with the following baselines. All the compared baselines first sample K suffixes {T_1,T_2,⋯,T_K} conditioned on one given prefix S and then pick out one suffix as the prediction.
Perplexity It leverages the perplexity (PPL) measured by the attacked language model M as the metric to sort the candidate suffixes and finally chooses the one with the lowest PPL as the predicted suffix T:
T=max_T_iC(T_i)=max_T_i1/PPL_M(T_i|S)
Comparing (LM) It takes another language model M' and leverages the ratio of the perplexity measured by theses two language models as the metric <cit.>:
T=max_T_iC(T_i)=max_T_iPPL_M'(T_i|S)/PPL_M(T_i|S)
The language model M' could be a much smaller model trained on the same dataset with M or trained on a different dataset.
Comparing (zlib) Different from Comparing (LM), it uses the zlib <cit.> entropy of the text (i.e., the number of bits after compression with zlib) for comparison <cit.>:
T=max_T_iC(T_i)=max_T_ilen(zlib(T_i))/PPL_M(T_i|S)
Comparing (lowercase) It compares the perplexity of the original text and the lower-cased text measured by the same language model M <cit.>:
T =max_T_iC(T_i)
=max_T_iPPL_M(lowercased(T_i)|S)/PPL_M(T_i|S)
Furthermore, we conduct ablation tests by removing the proposed components respectively to investigate the influence of each component.
§.§ Metrics
We adopt the following automatic metrics for evaluation.
Recall The metric computes the percentage of the suffixes that are predicted verbatim over the whole test set. A higher recall score indicates better data extraction ability, which can also be understood as a higher attacking success rate.
Recall_Early stop The metric first sorts the predictions according to their confidence scores and then evaluates the correctness of each prediction one by one. It then computes the Recall score obtained while making at most x incorrect predictions. We set x to 100 in our experiments, following the LM-Extraction benchmark. A better confidence estimation method gives the correct predictions higher confidence scores and thus leads to a higher Recall_Early stop score.
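One possible reading of this metric is sketched below: predictions are sorted by confidence and recall is accumulated until x incorrect predictions have been made; the exact benchmark implementation may differ in details such as tie-breaking.

```python
def recall_early_stop(predictions, max_errors=100):
    """Recall accumulated while making at most max_errors incorrect predictions.

    predictions : list of (confidence, is_correct) pairs over the whole test set
    """
    ordered = sorted(predictions, key=lambda p: p[0], reverse=True)
    correct = errors = 0
    for _, is_correct in ordered:
        if is_correct:
            correct += 1
        else:
            errors += 1
            if errors >= max_errors:
                break
    return correct / len(predictions)
```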
§.§ Main Results
Table <ref> shows the automatic evaluation results with GPT-Neo 1.3B as the backbone. Ethicist achieves an impressive Recall score of 62.8% and outperforms all the baselines by a large margin, indicating its better ability to extract training data from language models. Moreover, Ethicist has better confidence estimation performance after calibration as shown by a significantly higher Recall_Early stop score.
To further investigate the influence of each component, we run an ablation study. From the results shown in Table <ref>, it can be seen that both the smoothing loss and the calibrated confidence estimation are important to enhance the ability to extract training data, and combining both of them achieves the best performance. Furthermore, we draw the following conclusions: (1) With prompt tuning and extra training data, we can better induce large-scale language models to generate their memorized training data and successfully achieves a 9.5% performance improvement on Recall and a 12.4% performance improvement on Recall_Early stop. (2) The proposed smoothing loss can further enhance the ability to extract training data, boosting the Recall score from 60.8% to 62.3%. (3) The calibrated confidence provides a 6.3% improvement on Recall_Early stop as expected, demonstrating the importance of calibrating confidence estimation for this task. (4) The smoothing loss is more effective in predicting exact suffixes while the calibrated confidence is more beneficial for identifying highly confident predictions, according to the significant drop in Recall without smoothing and the substantial decrease in Recall_Early stop without calibration. (5) The calibrated confidence estimation is effective regardless of whether using prompt tuning. And it demonstrates greater advantages compared to the comparing (LM) baseline in recognizing predictions with higher confidence when using prompt tuning, indicated by increasing Recall_Early stop (from 48.7 to 52.4).
§.§ Analysis: Decoding Strategy
In our experiments, we use top-p sampling to sample multiple candidate suffixes conditioned on one given prefix. However, there are also other popular decoding methods, including greedy search, beam search, and top-k sampling. We therefore compare these popular decoding methods in this section. Table <ref> shows the results. Not surprisingly, greedy search performs worst on both Recall and Recall_Early stop, which suggests that some tokens in the ground truth suffix do not have the highest probability at the corresponding positions. Beam search outperforms top-p sampling on Recall, indicating that searching for the suffix with the lowest loss works well for finding the ground truth suffix. However, beam search performs significantly worse than top-p sampling on Recall_Early stop, because it cannot use our calibrated confidence. Compared with beam search, top-p sampling can generate multiple candidates, which substantially increases the accuracy of confidence estimation with our proposed calibration. Moreover, top-k sampling performs worse than top-p sampling on Recall_Early stop, which may be because top-k sampling samples low-probability tokens more easily and thus reduces the confidence of the ground truth suffixes. We finally select top-p sampling as our decoding method due to its balance between Recall and Recall_Early stop.
§.§ Analysis: Model Scale
Previous works on scaling laws find that larger language models can memorize more training data <cit.>. Therefore, we are interested in how targeted data extraction performance varies across different model scales.
Figure <ref> shows the results. We can see that the targeted training data extraction performance continuously increases as the model scale increases from 125 million to 6 billion. Ethicist shows impressive results as it consistently and significantly outperforms baselines across different model scales. Thanks to prompt tuning, Ethicist is efficient in terms of computation time and particularly memory consumption. Therefore, Ethicist can also be adapted to larger language models for efficient targeted training data extraction.
§.§ Analysis: Prefix Length and Suffix Length
All prefixes and suffixes in the LM-Extraction benchmark are 50 tokens long, making it an interesting question how the length of prefixes and suffixes would affect the extraction performance.
We show the effect of the given prefix length in Figure <ref>. We can observe that the extraction performance grows approximately linearly with the prefix length for all evaluated methods, and Ethicist performs best for all prefix lengths. Although all methods have similar growth speed on Recall, Ethicist has the highest growth speed on Recall_Early stop. It is also interesting that Comparing (LM) only outperforms Perplexity when given prefixes that are long enough.
We show the effect of the predicted suffix length in Figure <ref>. For all three methods, the extraction performance decreases when the suffix length increases. Different from the approximately linear relationship between the prefix length and the extraction performance, the performance degradation tends to become progressively slower as the suffix length increases. This suggests that the model can still memorize a considerable proportion of suffixes (rather than quickly decreasing to zero) even if the predicted suffix length continues to increase. What's more, we observe that Ethicist has a significantly slower speed of performance degradation compared with the two baselines, which suggests Ethicist is effective for eliciting deeper memorization of longer suffixes of the attacked model.
§.§ Analysis: Sampling Time
Due to space limitations, we put the analysis of sampling time in Appendix <ref>.
§ DISCUSSION
We further show some statistical features in Table <ref>. We can see that the memorized suffixes are sampled significantly more frequently, with a high average repeat time of 85.38, validating that the repeat time is a valuable signal for confidence estimation. What's more, the memorized suffixes have significantly higher confidence. One interesting phenomenon we observe is that if the ground truth suffix can be generated, it mostly has one of the top 3 highest confidence scores (Recall@3 ≈ Recall@100). We also find that for more than 30% of the prefixes, the model cannot generate the correct suffix even when given 100 chances. Therefore, an important future direction is to design better methods to elicit memorization in the attacked model. Considering the non-negligible gap between Recall@1 and Recall@100 (0.63 vs. 0.69), another important future direction is to design better confidence estimation methods (possibly trainable) that can pick out the ground truth suffix among the collection of candidate suffixes for one prefix.
We show a case in Figure <ref>. Although the first predicted suffix has higher loss than the second predicted suffix, it is sampled far more times than the latter. Therefore, we assign higher confidence to the first suffix using our calibrated confidence estimation method. We further show the probability of generating each token during the sampling process in Figure <ref>. We can observe that although the correct prediction has higher loss as a whole, it keeps a high sampling probability across the generation process. The minimum probability of generating one token in the correct suffix is about 0.45, which is significantly higher than 0.1 for the wrong suffix. Therefore it is easier to generate the correct suffix, which leads to a higher confidence score. This is also in line with our motivation for designing the extra smoothing loss, which can increase the probability of sampling the correct suffix.
§ CONCLUSION
In this work, we propose Ethicist, an effective method for targeted training data extraction attack. Ethicist uses soft prompt to elicit memorization in the attacked model. To ensure the probability of the ground truth suffix token at each time step is not low, we propose a smoothing loss besides the standard MLE loss.
We also propose a calibrated confidence estimation method to calibrate the scale of confidence across different samples.
Experiments on the LM-Extraction benchmark demonstrate that Ethicist significantly improves the extraction performance. We further conduct extensive experiments to investigate several critical factors influencing the extraction performance, including decoding strategy, model scale, prefix length, and suffix length.
We hope our work can promote future research on better attack methods and practical defense methods for the training data extraction problem.
§ ACKNOWLEDGEMENT
This work was supported by the NSFC projects (Key project with No. 61936010). This work was also supported by the Guoqiang Institute of Tsinghua University, with Grant No. 2020GQG0005.
§ LIMITATIONS
Although we conduct experiments across various model scales ranging from 125M to 6B parameters, there are still larger language models that we do not test, either because their training data is not publicly released or because of our limited resources.
Moreover, the examples in the LM-Extraction benchmark are all chosen to meet the property that there exists a prefix length (maybe longer than 50) that causes the model to generate the suffix string exactly, which makes the extraction performance on this benchmark higher than that on randomly selected prefixes.
§ ETHICS STATEMENT
Ethicist is a powerful method for eliciting memorization in large pre-trained language models, which makes it a useful tool to expose the privacy risks of large language models. However, it also carries the risk of being abused by attackers to extract private information from pre-trained language models. Thus, large language models should be carefully examined before being made publicly available. What's more, it is necessary to develop defense methods against training data extraction attacks without sacrificing language modeling ability.
The LM-Extraction benchmark is derived from the Pile dataset, and thus covers many domains including books, code, emails, etc. This suggests the effectiveness of targeted training data extraction across different domains.
§ IMPLEMENTATION DETAILS
As the benchmark is derived from The Pile <cit.> dataset, we conduct experiments only on the models that are pre-trained on The Pile dataset. They are GPT-Neo 125M, GPT-Neo 1.3B, GPT-Neo 2.7B, and GPT-J 6B <cit.>. We set the prompt length to 100, the batch size to 32, the learning rate of AdamW optimizer to 1e-3, the warmup step to 500, the learning rate decay strategy to linear, N in Equation <ref> to 5, α in Equation <ref> to 0.7, and the maximum training epoch to 20 with an early stopping mechanism. In our main experiments, we generate the suffix using top-p sampling <cit.> with p=0.7 and temperature=0.8. For other decoding methods, we set beam size to 10 for beam search, and k to 10 for top-k sampling (temperature=0.8). Our code is based on Huggingface Transformers <cit.>.
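For reference, candidate suffixes can be drawn with Hugging Face's generate method using the sampling settings quoted above; this sketch omits the tuned soft prompt for brevity, and the placeholder prefix string is hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

prefix = "..."  # hypothetical 50-token prefix from the benchmark
ids = tok(prefix, return_tensors="pt").input_ids
with torch.no_grad():
    out = model.generate(
        ids,
        do_sample=True, top_p=0.7, temperature=0.8,   # sampling settings quoted above
        max_new_tokens=50,                            # 50-token suffixes
        num_return_sequences=100,                     # K = 100 candidates
        pad_token_id=tok.eos_token_id,
    )
suffixes = tok.batch_decode(out[:, ids.size(1):], skip_special_tokens=True)
```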
§ COMPUTING INFRASTRUCTURE
All experiments are carried out on a single Tesla V100 GPU with 32GB memory. Each experiment can be completed in less than 20 hours.
§ EFFECT OF SAMPLING TIME
In our main experiments, we sample 100 candidate suffixes for one given prefix. We show the effect of the sampling time in Figure <ref>. We can see that all methods' performance increases quickly when the sampling time increases from 1 to 10. However, Ethicist's performance still improves slowly as the sampling time increases from 10 to 100, which we attribute to the consideration of the repeat time in our calibrated confidence estimation. What's more, although we report results for sampling 100 times in our main experiments, Ethicist already achieves satisfactory performance when sampling only 10 times, which demonstrates its efficiency.
|
http://arxiv.org/abs/2307.03872v1 | 20230708012336 | Domain Adaptation using Silver Standard Labels for Ki-67 Scoring in Digital Pathology: A Step Closer to Widescale Deployment | [
"Amanda Dy",
"Ngoc-Nhu Jennifer Nguyen",
"Seyed Hossein Mirjahanmardi",
"Melanie Dawe",
"Anthony Fyles",
"Wei Shi",
"Fei-Fei Liu",
"Dimitrios Androutsos",
"Susan Done",
"April Khademi"
] | eess.IV | [
"eess.IV",
"cs.AI",
"cs.CV",
"cs.LG"
] |
Domain Adaptation using Silver Standard Labels for Ki-67 Scoring in Digital Pathology: A Step Closer to Widescale Deployment
Amanda Dy, Ngoc-Nhu Jennifer Nguyen, Seyed Hossein Mirjahanmardi, Melanie Dawe, Anthony Fyles, Wei Shi, Fei-Fei Liu, Dimitrios Androutsos, Susan Done, April Khademi
======================================================================
Deep learning systems have been proposed to improve the objectivity and efficiency of Ki-67 PI scoring. The challenge is that while very accurate, deep learning techniques suffer from reduced performance when applied to out-of-domain data. This is a critical challenge for clinical translation, as models are typically trained using data available to the vendor, which is not from the target domain. To address this challenge, this study proposes a domain adaptation pipeline that employs an unsupervised framework to generate silver standard (pseudo) labels in the target domain, which are used to augment the gold standard (GS) source domain data. Five training regimes were tested on two validated Ki-67 scoring architectures (UV-Net and piNET): (1) SS Only: trained on target silver standard (SS) labels, (2) GS Only: trained on source GS labels, (3) Mixed: trained on target SS and source GS labels, (4) GS+SS: trained on source GS labels and fine-tuned on target SS labels, and our proposed method (5) SS+GS: trained on target SS labels and fine-tuned on source GS labels. The SS+GS method yielded significantly (p<0.05) higher PI accuracy (95.9%) and more consistent results compared to the GS Only model on target data. Analysis of t-SNE plots showed features learned by the SS+GS models are more aligned for source and target data, resulting in improved generalization. The proposed pipeline provides an efficient method for learning the target distribution without manual annotations, which are time-consuming and costly to generate for medical images. This framework can be applied to any target site as a per-laboratory calibration method, for widescale deployment.
Ki-67, proliferation index, domain adaptation, self-supervised learning
§ INTRODUCTION
Breast cancer is the most diagnosed cancer and the leading cause of cancer-related death in women worldwide <cit.>. Ki-67 immunohistochemistry (IHC) biomarker is gaining traction for evaluating the proliferation rate of invasive breast cancers <cit.>. Ki-67 expression is related to prognosis and can identify high-risk early-stage breast cancers <cit.> and determine treatment modalities <cit.>. The Ki-67 proliferation index (PI) is the score associated with the proportion of Ki-67^+ tumour cells to the total number of tumour cells in a breast tissue section <cit.>. However, quantifying this biomarker is labour-intensive, time-consuming, and subject to poor visual estimation concordance <cit.>.
Fortunately, Ki-67 PI can be calculated with deep learning nuclei detection algorithms for more efficient and objective quantification. There have been a few deep learning tools addressing automated Ki-67 PI scoring in the literature, such as piNET <cit.> and UV-Net <cit.>, which were specifically developed for Ki-67 PI quantification in breast cancer. As automated artificial intelligence (AI) tools become more robust, there is a chance for translation and deployment. However, a challenge with widescale adoption is performance degradation at deployed target sites that results when the target data comes from a center not included in the (source) training set. This is especially evident in digital pathology given the variation in patient factors, specimen processing, staining protocols and acquisition devices across pathology laboratories. Annotations from target sites could be included in training sets, but generating gold standard (GS) ground truths is laborious and expensive for medical imaging.
Mitigating domain shift has become a topic of extensive research <cit.> and unsupervised domain adaptation (UDA) is gaining considerable attention for this task. UDA methods seek to overcome the domain gap without the need for labelled target data. Self-training (pseudo label-based methods) has emerged as a promising UDA solution <cit.>. Self-training generates a set of pseudo labels in the target domain and re-trains a network based on these pseudo labels. Self-training loss encourages cross-domain feature alignment by learning from the labelled source data and pseudo-labelled target data. Pseudo labels can be quickly generated for any number of datasets, which is cost-effective and reduces development time. However, perfect accuracy cannot be guaranteed, which can lead to propagated errors when fine-tuning. Because pseudo labels do not capture detailed features as well as clean labels, we hypothesize that pre-training a network on pseudo labels from the target domain will allow the network to first learn dataset-specific characteristics and low-level features that are task-dependent, thereby providing optimal parameter initialization. Fine-tuning with GS (clean) labels from the source domain can then allow more detailed features to be captured by the network. This work proposes a pipeline that (1) uses an unsupervised Ki-67 PI quantification algorithm to generate pseudo labels, which we call silver standard (SS) labels, in the unlabeled target domain, (2) pre-trains a network on SS labels, and (3) fine-tunes the network on GS labels from the source domain. This pipeline can be used to calibrate automated deep learning-based medical imaging tools on a per-dataset basis, in an easy and unsupervised manner. We validate our method on 325 clinical tissue microarrays (TMAs) (20800 patches) from the target domain. Experimental results show the proposed approach achieves superior performance at the pixel and patient levels, thereby providing a DA training method for robust and accurate Ki-67 PI estimation.
§ METHODS
§.§ Deep Learning Models
Two deep learning architectures are used for experiments: UV-Net and piNET, both developed for Ki-67 PI quantification in breast cancer and validated on large multi-institutional datasets. piNET was built using the U-NET architecture with an extra layer <cit.> and UV-Net was designed to preserve nuclear features of clustering or overlapping nuclei through dense 'V' blocks to retain the high-resolution details <cit.>. The output of piNET and UV-Net is a multi-channel probability map, with center locations of tumour nuclei detected for two classes: Ki-67^- and Ki-67^+ cells.
§.§ Transfer Learning
Transfer learning (TL) <cit.> has proven to be effective for many real-world applications by exploiting knowledge in labelled training data from a source domain. TL has made major contributions to medical image analysis as it overcomes the data scarcity problem and saves time and hardware resources. In this study, we introduce a TL approach that uses an unsupervised Ki-67 nuclei detection scheme to generate SS labels in the target domain for pre-training the model. This enables the model to learn the low-level nuclei features and attain optimal parameter initialization. We will then fine-tune the model using GS labels from the source domain to capture more precise details and improve the accuracy of the learned features. We compare the performance of two network architectures, UV-Net and piNET, in the following scenarios: (1) pre-training with GS labels and fine-tuning with SS labels, and (2) pre-training with SS labels and fine-tuning with GS labels. The results are compared against training methods without TL.
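As an illustration, the SS+GS regime boils down to running the same supervised training loop twice, first on the target-domain SS patches and then on the source-domain GS patches. The sketch below is a simplified assumption of how this could look in PyTorch; `model`, `ss_target_loader`, and `gs_source_loader` are placeholders for UV-Net/piNET and the respective patch datasets, and validation-based early stopping and data augmentation are omitted. It is not the authors' released code.

```python
# Sketch of the two-phase SS+GS schedule: pre-train on noisy target SS labels,
# then fine-tune on clean source GS labels.
import torch

def train_phase(model, loader, epochs=100, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.HuberLoss()
    for _ in range(epochs):
        for patches, centre_maps in loader:   # image patches and Gaussian centre maps
            opt.zero_grad()
            loss = loss_fn(model(patches), centre_maps)
            loss.backward()
            opt.step()
    return model

# Phase 1: pre-train on target-domain silver-standard (SS) pseudo labels.
# model = train_phase(model, ss_target_loader)
# Phase 2: fine-tune on source-domain gold-standard (GS) labels.
# model = train_phase(model, gs_source_loader)
```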
§.§ Pseudo Label Generation: Silver Standards
In UDA settings, there are no labels for the target domain. Our goal is to improve performance on the target, so we train the model with the target SS labels generated by a previously developed and validated unsupervised Ki-67 nuclei detection method called the immunohistochemical colour histogram (IHCCH). The process includes vector median filtering, background subtraction, an unsupervised colour separation method that separates blue and brown objects automatically based on the histogram of the b* channel, and adaptive radius nuclei detection. More details can be found in <cit.>.
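To make the colour-separation idea concrete, the sketch below shows a strongly simplified stand-in for the b*-channel split between blue (haematoxylin, Ki-67^-) and brown (DAB, Ki-67^+) pixels. The median filtering, background subtraction, and adaptive-radius nuclei detection of the full IHCCH method are omitted, and the Otsu thresholds are our own simplifying assumption rather than the method's histogram procedure.

```python
# Simplified illustration of unsupervised blue/brown separation in CIELAB space.
import numpy as np
from skimage import color, filters

def split_ki67_pixels(rgb_patch):
    lab = color.rgb2lab(rgb_patch)              # channels: L*, a*, b*
    L, b = lab[..., 0], lab[..., 2]
    nuclei = L < filters.threshold_otsu(L)      # stained (dark) pixels, crude proxy
    split = filters.threshold_otsu(b[nuclei])   # data-driven split of the b* histogram
    brown = nuclei & (b > split)                # DAB-positive (Ki-67+) pixels
    blue = nuclei & (b <= split)                # haematoxylin-only (Ki-67-) pixels
    return brown, blue
```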
§.§ Dataset
This study uses Ki-67 stained invasive breast cancer images obtained from three institutions. Table <ref> summarizes the Ki-67 datasets used for each training method.
Source Dataset:
510 patches of 256×256 pixels in size are extracted from whole slide images provided by St. Michael's Hospital (SMH) in Toronto and an open-source database, Deepslide <cit.>. The ×20 Aperio AT Turbo and ×40 Aperio ScanScope scanners were used, respectively. Deepslide images are down-sampled to ×20 for compatibility. Images were annotated by marking Ki-67^- and Ki-67^+ centroids <cit.>. Centroid annotations were recast into a Gaussian kernel to allow the system to learn contextual information from the
nuclei and help the classifier discover more robust features. Artifacts including overstaining, background, folds, blur, and dust are common in tissue slides; therefore, 15% of the training dataset includes patches with artifacts and non-tumorous areas to reduce false positives. This dataset represents our source domain and contains GS labels. Each patch contains 58 tumourous cells on average for a total of 29571 cells.
Target Dataset:
The target dataset was provided by the University Health Network (UHN) and contains 411 tissue microarrays (TMA) from 175 patients. Each patient has 1 to 3 corresponding TMAs of 2000 × 2000 pixels in size and an expert PI estimate is available for each patient. 24 TMAs from 24 patients were used to create the SS labels. These 24 TMAs were tiled into patches of size 256x256 pixels and 345 patches which contained ≥ 80 % tumorous tissue were extracted and the remaining patches were discarded. The TMAs from patients used for SS label generation were removed from our target dataset to prevent patient data leakage. 10 TMAs were randomly selected from the remaining pool and annotated by an anatomical pathology resident (N.N.J.N) and verified by a breast pathologist (S.D.) to produce pixel-wise nuclei annotations for testing in the target domain. Each annotated TMA contains 2093 tumourous cells on average for a total of 20930 cells. Accordingly, the target domain test set contains 325 TMAs from 151 patients with patient-level PI scores and 10 TMAs with nuclei annotations.
§.§ Evaluation Metrics
Nuclei detection is evaluated by comparing the Ki-67^- and Ki-67^+ centroids between the AI prediction and GS ground truths through the F1 score. The F1 score is the harmonic mean of precision and recall, which depends on the number of true positives (TP), false positives (FP), and false negatives (FN). A TP is detected whenever the Euclidean distance between an annotation centroid and a detected centroid is less than 6 µm. This value corresponds to the average radius of tumourous cells from the source dataset. All detected cells not within 6 µm of a ground truth annotation are considered FP. Multiple detections of an already counted cell are also counted as FP. All ground truth cells without a detection within 6 µm proximity are considered FN. The F1 scores report raw nuclei detection performance; therefore, if a model is operating on an image with a low tumour nuclei count, a single missed nucleus can greatly skew the overall F1 score. Thus, different metrics, such as the proliferation index (PI) error, should also be used. Tumour proliferation is measured by:
PI=# Ki-67^+ tumour cells/#(Ki-67^+ + Ki-67^-) tumour cells
which is computed over the whole TMA based on the detected nuclei. The PI difference is used to investigate the error between predicted and actual PI values: Δ PI= |PI_actual - PI_predicted|. Pairwise one-way ANOVA is used to compare model performance.
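The following sketch illustrates how these metrics can be computed from centroid lists; the greedy nearest-match assignment and the assumption that coordinates are already expressed in micrometres are simplifications, not the exact evaluation code.

```python
# Centroid matching within 6 micrometres (greedy), F1 score, and PI error.
import numpy as np
from scipy.spatial import cKDTree

def detection_f1(pred_xy, gt_xy, radius=6.0):
    tree = cKDTree(gt_xy)
    matched, tp = set(), 0
    for p in np.asarray(pred_xy):
        for j in tree.query_ball_point(p, radius):
            if j not in matched:          # first hit of an unmatched GT centroid
                matched.add(j)
                tp += 1
                break
    fp, fn = len(pred_xy) - tp, len(gt_xy) - tp   # duplicate detections count as FP
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0

def pi_error(n_pos, n_neg, pi_actual):
    return abs(pi_actual - n_pos / (n_pos + n_neg))
```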
§.§ Experimental Setup
Five training methods are used to study Ki-67 nuclei detection and PI estimation accuracy. The first configuration is GS only, which uses only the GS data from the source domain. The second configuration, SS only, uses the SS data generated by the unsupervised IHCCH algorithm from the target domain. The third configuration, Mixed, includes both GS and SS in the training pool. The fourth configuration, GS+SS, uses GS for pre-training and SS for fine-tuning and the final configuration is our proposed method, SS+GS, which uses SS for pre-training and GS for fine-tuning. All methods that use SS are trained with increments of 100 where each increment contains SS from previous increments. Table <ref> summarizes the configurations of each training method. The IHCCH (unsupervised) method is also evaluated to verify the stand-alone performance of the tool.
To ensure robustness to training variations, we use a 3-fold cross-validation protocol for all experiments. We divide our 510 source patches with GS annotations into 3 subsets. For each fold, we select one subset as the held-out patches and the other 340 patches are used in the training pool. An Adam optimizer was used with a learning rate of 1e-3, a batch size of 4 with 100 epochs, and a Huber loss function; the epoch with the lowest validation loss was saved. Data augmentations were applied for rotation and scaling. All experiments were run using a GeForce RTX 3070 Ti.
§ RESULTS
Quantitative results are summarized in Table <ref>. Nuclei predictions are shown in Figure <ref>. Reproducibility (standard deviation between 3-fold cross-validation models when predicting on the same target distribution data) is shown in Table <ref>.
§.§ Source Domain: Nuclei Detection
170 unseen patches from the source domain with pixel-level Ki-67^- and Ki-67^+ centroid annotations are used to test nuclei detection performance. The distributions of the F1 scores are shown in Figure <ref> and summarized in Table <ref>. The proposed SS+GS method yielded superior or competitive F1 performance on the source domain when compared to the baseline method, GS Only, whereas IHCCH, SS Only, Mixed and GS+SS methods performed generally worse. Nuclei detection performance on the source domain serves as our model verification step. Our findings indicate that including SS data from the target domain does not degrade model performance on the source domain.
§.§ Target Domain: Nuclei Detection
We next test our method on an adaptation task as we shift from source domain to target domain pixel-level assessments. 10 TMAs from the target domain with pixel-level Ki-67^- and Ki-67^+ expert annotations were used to test nuclei detection performance. The distribution of the F1 scores on the target domain test set is shown in Figure <ref> and summarized in Table <ref>. The GS+SS method achieves superior performance exceeding all other methods and significantly higher performance than the baseline method regardless of the SS increment.
§.§ Target Domain: PI Computation
We extend the use of our approach to another adaptation task involving a change in the level of assessment, specifically from patch-level to patient-level. ΔPI is assessed on 151 patients (325 TMAs) from the target domain. The distributions of the ΔPI are shown in Figure <ref> and summarized in Table <ref>. SS+GS achieves superior PI prediction performance exceeding all other methods and achieving significantly lower PI error (p<0.05) compared to the baseline method, GS Only, regardless of the SS increment.
The ΔPI for GS only methods is ∼ 7.5%, but using the SS+GS method leads to a decrease in error by ∼ 3.5%, which is a significantly greater improvement compared to other methods. SS+GS methods also yielded the lowest ΔPI standard deviation signifying less variability and more consistent and reliable predictions. As some PI intervals have greater clinical significance, the patient-level PI performance was evaluated in intervals of 10% as depicted in Figure <ref>. SS+GS methods maintain the lowest ΔPI across all intervals (excluding 30% to 40% for UV-Net) which demonstrates optimal performance in clinically relevant ranges.
§.§ Qualitative Evaluation: t-SNE
We analyze the effects of the models on source and target domains further with t-SNE, a popular method to visualize high-dimensional data in 2D <cit.>. Figure <ref> illustrates such feature visualizations from source and target images obtained from GS Only, GS+SS and SS+GS models. The features learned for the source and target domains in the GS only and GS+SS models are diffuse and mostly non-overlapping, which likely causes reduced generalization. However, features from the SS+GS model are similar across source and target domains, which likely resulted in improved generalization and top performance on target domain data.
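A feature-space comparison of this kind can be reproduced along the following lines; the feature arrays are stand-ins for penultimate-layer activations extracted from source and target patches, and the random data here only makes the sketch self-contained.

```python
# Sketch of the t-SNE comparison of source vs. target features.
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
features_src = rng.normal(size=(200, 64))            # placeholder source features
features_tgt = rng.normal(1.0, 1.0, size=(200, 64))  # placeholder target features

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(
    np.vstack([features_src, features_tgt]))
plt.scatter(*emb[:len(features_src)].T, s=5, label="source")
plt.scatter(*emb[len(features_src):].T, s=5, label="target")
plt.legend()
plt.show()
```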
§ DISCUSSION
Ki-67 PI is visually assessed by pathologists to estimate prognosis <cit.> and decide whether adjuvant chemotherapy should be added to a patient's treatment plan <cit.>. A high Ki-67 proliferation index is associated with a poor prognosis <cit.> and better eligibility for adjuvant chemotherapy <cit.>. The monarchE Phase 3 <cit.>—establishes >20% Ki-67 PI as a clinically relevant threshold to stratify patients with estrogen receptor-positive early breast cancer eligible for adjuvant chemotherapy. However, various preanalytical, analytical and interpretation factors affect the scoring of Ki-67 by pathologists and lead to high inter-rater variability. Automated tools, such as deep learning can be used to bring objectivity and efficiency, thus improving the clinical utility of Ki-67 scoring.
While more accurate than other tools, deep learning methods experience a reduction in performance when applied to out-of-domain data. Covariate shifts between source and target domains are common in digital pathology due to different staining protocols and scanning equipment/software. This presents a significant challenge for clinical translation, as the current industry standard is to train models using data only available to the vendor. To address this issue and move closer to widespread deployment, this work presents an unsupervised domain adaptation method for Ki-67 quantification to focus on creating models that generalize to target data. The proposed pipeline learns the target distribution without manual annotations, which would be time-consuming and costly to obtain for medical images. Pseudo labels (SS labels) are extracted from the target domain in an unsupervised manner using the IHCCH method, and this data is used to supplement training datasets to learn domain- and problem-specific features. This framework can be easily implemented at any target site as a laboratory-specific calibration method, which can simplify deployment not only for Ki-67 quantification but also for a wide range of medical imaging applications.
We evaluated five training configurations (GS Only, SS Only, Mixed, GS+SS, SS+GS) on two Ki67 architectures (piNET and UV-Net) and found improved performance, particularly for the SS+GS configuration compared to the baseline, GS only. This suggests that although the SS labels may be slightly noisy (F1 score of 0.53 on source and 0.57 on target), incorporating data from the target domain can help the models learn domain-specific features. This was evident from the t-SNE plots, which showed a clear overlap in features learned for the target and source distributions in the SS+GS models. On the other hand, the GS+SS models did not perform as well, despite being the standard practice in the community. We believe that fine-tuning with the noisy SS labels forces the model to remember the noise more prominently. However, in the SS+GS configuration, the model was first trained with the noisy SS labels and then refined with clean GS labels, leading to better performance and an overall PI accuracy of 95.9% achieved using piNET. Furthermore, across clinically relevant PI ranges, the SS+GS models exhibited the best performance and demonstrated consistency (low standard deviation across multiple training runs).
We recognize there is ample opportunity to enhance performance and gain a deeper understanding of the impact of SS labels. Our strategy includes enhancing pseudo label generation, refining patch selection, diversifying patient cohorts, and assessing SS label source domain effects. We'll also compare our approach to domain adversarial learning and self-supervised model distillation. Future studies will explore per-site calibration in other datasets and benchmark against state-of-the-art methods.
§ CONCLUSION
In this study, we address the problem of domain adaptation for automated Ki-67 quantification in invasive breast cancer. We present a novel self-supervised approach that shows that using target domain pseudo labels (SS) for pre-training and fine-tuning with ground truth (GS) data from the source domain leads to improved performance on both source and target domains. The proposed method enhances the robustness of AI models to domain variations and improves adaptation to unseen data distributions. The training pipeline overcomes the difficulties of scarce labelled data and costly manual annotations; a challenge in medical imaging applications. These findings can drive widespread clinical utilization of automated quantification tools in digital pathology.
We acknowledge the Canadian Cancer Society, and MITACs Canada for funding this research.
|
http://arxiv.org/abs/2307.05547v1 | 20230709055046 | Robust Routing Made Easy: Reinforcing Networks Against Non-Benign Faults | [
"Christoph Lenzen",
"Moti Medina",
"Mehrdad Saberi",
"Stefan Schmid"
] | cs.DC | [
"cs.DC"
] |
Robust Routing Made Easy:
Reinforcing Networks Against Non-Benign Faults
Research supported by the Federal Ministry of Education and Research (BMBF), grant 16KISK020K, 2021-2025.
This article extends work presented at SSS 2017 <cit.>.
Christoph Lenzen^1 Moti Medina^2 Mehrdad Saberi^3 Stefan Schmid^4
^1CISPA Helmholtz Center for Information Security, Germany ^2Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
^3University of Maryland, College Park, USA ^4TU Berlin, Germany
August 12, 2023
===============================================================================================================================================================================================================================================================================
With the increasing scale of communication networks,
the likelihood of failures grows as well.
Since these networks form a critical backbone
of our digital society, it is important that they rely on
robust routing algorithms which ensure connectivity
despite such failures. While most modern communication
networks feature robust routing mechanisms, these mechanisms
are often fairly complex to design and verify, as they
need to account for the effects of failures and rerouting
on communication.
This paper conceptualizes the design of robust routing mechanisms,
with the aim to avoid such complexity. In particular,
we showcase simple and generic blackbox transformations that increase the resilience of routing against independently distributed failures,
making it possible to simulate the routing scheme on the original network, even in the presence
of non-benign node failures (henceforth called faults). This is attractive
as the system specification and routing policy can simply be preserved.
We present a scheme for constructing such a reinforced network, given
an existing (synchronous) network and a routing scheme. We prove that
this algorithm comes with small constant overheads, and only requires a minimal
amount of additional node and edge resources;
in fact, if the failure probability is smaller than 1/n,
the algorithm can come without any overhead at all.
At the same time,
it can tolerate a large number of
independent random (node) faults,
asymptotically almost surely.
We complement our analytical results with simulations on different real-world topologies.
§ INTRODUCTION
Communication networks have become a critical backbone
of our digital society. For example, many datacentric applications
related to entertainment, social networking, or health, among others,
are distributed and rely on the high availability and
dependability of the interconnecting network (e.g., a
datacenter network or a wide-area network).
At the same time, with the increasing scale of
today's distributed and networked systems (often relying
on commodity hardware as a design choice
<cit.>), the number of
failures is likely to increase as well
<cit.>.
It is hence important that communication networks can tolerate
such failures and
remain operational despite the failure of some of their
components.
Robust routing mechanisms aim to provide such guarantees:
by rerouting traffic quickly upon failures,
reachability is preserved. Most communication
networks readily feature robust routing mechanisms,
in the control plane (e.g.
<cit.>), in
the data plane (e.g. <cit.>), as well as on higher
layers (e.g. <cit.>).
However, the design of such robust routing mechanisms is
still challenging and comes with tradeoffs, especially if
resilience should extend to multiple failures <cit.>.
Besides a fast reaction time and re-establishing connectivity, the
resulting routes typically need to fulfill certain additional properties,
related to the network specification and policy.
Ensuring such properties however can be fairly complex,
as packets inevitably follow different paths after failures.
Interestingly, while the problem of how to re-establish reachability
after failures is well explored,
the problem of providing specific properties on the failover
paths is much less understood.
This paper conceptualizes the design of robust routing, presenting a new approach that differs
significantly from the existing literature by relying on proactive reinforcement (rather than reaction to failures).
In particular, our approach aims to overcome the complexities involved in designing
robust routing algorithms, by simply sticking to the original
network and routing specification.
To achieve this, our approach is to mask the effects of failures
using redundancy: in the spirit of error correction,
we proactively reinforce networks by adding a minimal number of
additional nodes and links, rather than
coping with failed components when they occur.
The latter is crucial
for practicability: significant refactoring of existing systems
and/or accommodating substantial design constraints is rarely
affordable.
In this paper, to ensure robustness while maintaining
the network and routing specification, we aim to
provide a high degree of fault-tolerance,
which goes beyond simple equipment and failstop failures,
but accounts for more general faults which include non-benign
failures of entire nodes.
While our approach presented in this paper will be general
and applies to any network topology, we are particularly
interested in datacenter networks (e.g., based on low-dimensional
hypercubes or d-dimensional tori <cit.>)
as well as in wide-area
networks (which are typically sparse <cit.>).
We will show that our approach works especially well for these networks.
§.§ The Challenge
More specifically,
we are given a network G=(V,E) and a routing scheme, i.e.,
a set of routes in G.
We seek to reinforce the network G by
allocating additional resources, in terms of nodes and edges,
and to provide a corresponding routing strategy to simulate the routing scheme
on the original network despite non-benign node failures.
The main goal is to maximize the probability that the network withstands
failures (in particular, random failures of entire nodes),
while minimizing the resource overhead.
Furthermore, we want to ensure that the network transformation is simple
to implement, and that it interferes as little as possible with the existing system design and operation, e.g., it
does not change the reinforced system's specification.
Toward this goal, in this paper, we make a number of simplifying assumptions.
First and most notably, we assume independent failures,
that is, we aim at masking faults with little or no correlation among each other.
Theoretically, this is motivated by the fact that
guaranteeing full functionality despite having f adversarially placed faults trivially requires redundancy (e.g., node degrees) larger than f.
There is also practical motivation to consider independent faults:
many distributed systems proactively avoid fault clusters
<cit.> and there is also empirical
evidence that in certain scenarios, failures are only weakly correlated <cit.>.
Second, we treat nodes and their outgoing links as fault-containment regions (according to <cit.>), i.e., they are the basic components our systems are comprised of.
This choice is made for the sake of concreteness;
similar results could be obtained when considering, e.g., edge failures, without changing the gist of results or techniques.
With these considerations in mind, the probability of uniformly random
node failures that the reinforced system can tolerate is a canonical choice for measuring resilience.
Third, we focus on synchronous networks, for
several reasons:
synchrony not only helps in handling faults, both on the theoretical level (as illustrated by the famous FLP theorem <cit.>) and for ensuring correct implementation, but it also
simplifies presentation, making it easier to focus on the proposed concepts.
In this sense, we believe
that our approach is of particular interest in the context of real-time systems,
where the requirement of meeting hard deadlines makes synchrony an especially attractive choice.
§.§ Contributions and Techniques
This paper proposes a novel and simple approach to robust routing,
which decouples the task of designing a reinforced network from the task of
designing a routing scheme over the input network. By virtue of this decoupling,
our approach supports arbitrary routing schemes and objectives,
from load minimization to throughput maximization and beyond,
in various models of computation, e.g., centralized or distributed, randomized
or deterministic, online or offline, or oblivious.
We first consider a trivial approach:
we simply replace each node by ℓ∈ copies
and for each edge we connect each pair of copies of its endpoints,
where ℓ is a constant.[Choosing concreteness over generality,
we focus on the, in our view, most interesting case of constant ℓ. It is straightforward to generalize the analysis.]
Whenever a message would be sent over an edge in the original graph,
it should be sent over each copy of the edge in the reinforced graph.
If not too many copies of a given node fail, this enables each receiving copy to recover the correct message.
Thus, each non-faulty copy of a node can run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph.
When analyzing this approach,
we observe that asymptotically almost surely (a.a.s., with probability 1-o(1)) and with ℓ=2f+1, this reinforcement can sustain Byzantine node failures <cit.> occurring with independent probability p, for any p∈ o(n^-1/(f+1)); that is, faulty nodes may violate the protocol in any arbitrary way (and may hence also collude).
This threshold is sharp up to (small) constant factors: for p∈ω(n^-1/(f+1)), a.a.s. there is some node for which all of its copies fail.
If we restrict the fault model to omission faults
(faulty nodes may skip sending some messages but otherwise act according to the protocol), ℓ=f+1 suffices.
The cost of this reinforcement is that the number of nodes and edges increase by factors of ℓ and ℓ^2, respectively.
Therefore, already this simplistic solution can support non-crash faults of probability p∈ o(1/√(n)) at a factor-4 overhead.
We note that the simulation introduces no large computational overhead and
does not change the way the system works, enabling to use it as a blackbox.
Also randomized algorithms can be simulated in a similar fashion,
provided that all copies of a node have access to a shared source of randomness.
Note that this requirement is much weaker than globally shared randomness:
it makes sense to place the copies of a node in physical proximity to approximately preserve the geometrical layout of the physical realization of the network topology.
Our approach above raises the question whether
we can reduce the involved overhead further.
In this paper, we will answer this question positively:
We propose to apply the above strategy only to a small
subset E' of the edge set.
Denoting by v_1,…,v_ℓ the copies of node v∈ V, for
any remaining edge {v,w}∈ E∖ E' we add only edges
{v_i,w_i}, i∈ [ℓ], to the reinforced graph.
The idea is to choose E' in a way such that the connected components
induced by E∖ E' are of constant size, yet |E'|=ε |E|.
This results in the same asymptotic threshold for p, while the number of edges of the reinforced graph drops to ((1-ε)ℓ+εℓ^2)|E|.
For any constant choice of ε, we give constructions with this property for grids or tori of constant dimension and minor-free graphs of bounded degree.
Again, we consider the case of f=1 of particular interest:
in many typical network topologies, we can reinforce the network to boost the failure probability that can be tolerated from Θ(1/n) to Ω(1/√(n)) by roughly doubling (omission faults) or tripling (Byzantine faults) the number of nodes and edges.
The redundancy in this second construction is near-optimal under the constraint that we want to simulate an arbitrary routing scheme in a blackbox fashion,
as it entails that we need a surviving copy of each edge, and thus in particular each node.
In many cases, the paid price will be smaller than the price for making each individual component sufficiently reliable to avoid this overhead.
Furthermore, we will argue that the simplicity of our constructions enables us to re-purpose the redundant resources in applications with less strict reliability requirements.
Our results show that while approach is general and can be applied to any
existing network topology (we will describe and analyze valid reinforcements for
our faults models on general graphs), it can be refined and is particularly
interesting in the context of networks that
admit suitable partitionings. Such networks include
sparse, minor-free graphs, which are practically relevant topologies in
wide-area networks, as well as torus graphs and low-dimensional
hypercubes, which arise in datacenters and parallel architectures.
To complement our theoretical findings and investigate the reinforcement
cost in real networks, we conducted experiments on the Internet Topology Zoo <cit.>.
We find that our approach achieves robustness at significantly lower cost compared to
the naive replication strategy often employed in dependable networks.
§.§ Putting Things Into Perspective
In contrast to much existing robust routing literature on reactive
approaches to link failures <cit.> (which come with a delay),
we consider a proactive approach by enhancing the network with redundancy.
Our proactive approach also allows us to replicate the routing scheme (and hence the network policy) on the new network.
In particular, we show that if the failure probability is smaller than 1/n, there is a good probability that our approach works even without any overhead at all.
Furthermore, there are two ways in which our system can be used. One approach is to replicate the entire node (including the compute part), and then forward the traffic to its two associated peers. Alternatively, traffic can also simply be replicated to multiple NICs, without additional compute requirements, depending on the failure model. More generally, our contribution can also be viewed more abstractly, with the robust routing happening on a logical level, depending on the failure scenario.
Also, we show that a missing or invalid message can simply be ignored, as the rest of the system continues to operate.
The most closely related work to ours is NetCo <cit.>,
which also relies on network reinforcement and can handle malicious behavior.
NetCo is based on a robust
combiner concept known from cryptography, and complements each router with two additional routers.
Using software-defined networking, traffic is replicated across the three (untrusted) devices and then merged again, using a consensus algorithm. While a high degree of robustness is achieved, the three-fold overhead is significant. More importantly, however, in contrast to our approach, Netco requires special hardware for splitting and merging the traffic; while the functionality of this hardware can be simple, it still needs to be trusted. The consensus requirement dramatically reduces the throughput, as shown in the empirical evaluation of NetCo in <cit.>.
Our solution does not require such components and is hence not only more practical but also significantly more performant.
§.§ Organization
In <ref>, we sketch the properties of our approach and state a number of potential applications. In <ref>, we formalize the fault models that we tackle in this article alongside the notion of a valid reinforcement and its complexity measures. In <ref> and <ref>, we study valid reinforcements on general graphs, and in <ref>, we study more efficient reinforcements for specific graphs.
We complement our analytical results with an empirical simulation study in
<ref>.
In <ref> we raise a number of points in favor of the reinforcement approach. We review related work in
<ref>, and we conclude and present a number of interesting
follow-up questions in <ref>.
§ HIGH-LEVEL OVERVIEW: REINFORCING NETWORKS
Let us first give an informal overview of our blackbox transformation
for reinforcing networks (for formal specification see <ref>), as well as its guarantees and preconditions.
Assumptions on the Input Network
We have two main assumptions on the network at hand: (1) We consider synchronous routing networks, and (2) each node in the network (alongside its outgoing links) is a fault-containment region, i.e., it fails independently from other nodes.
We do not make any assumptions on the network topology, but will provide specific
optimizations for practically relevant topologies (such as sparse, minor-free networks
or hypercubes) in <ref>.
Valid Reinforcement Simulation Guarantees
Our reinforcements create a number of copies of each node. We have each non-faulty copy of a node run the routing algorithm as if it were the original node, guaranteeing that it has the same view of the system state as its original in the corresponding fault-free execution of the routing scheme on the original graph. Moreover, the simulation fully preserves all guarantees of the schedule, including its timing, and introduces no big computational overhead.
This assumption is simple to meet in stateless networks, while it requires synchronization primitives in case of stateful network functions.
Unaffected Complexity and Cost Measures
Routing schemes usually revolve around objective functions such as load minimization, maximizing the throughput, minimizing the latency, etc., while aiming to minimize complexity related to, e.g., the running time for centralized algorithms, the number of rounds for distributed algorithms, the message size, etc. Moreover, there is the degree of uncertainty that can be sustained, e.g., whether the input to the algorithm is fully available at the beginning of the computation (offline computation) or revealed over time (online computation). Our reinforcements preserve all of these properties, as they operate in a blackbox fashion. For example, our machinery readily yields various fault-tolerant packet routing algorithms in the Synchronous Store-and-Forward model by Aiello et al. <cit.>. More specifically, from <cit.> we obtain a centralized deterministic online algorithm on unidirectional grids of constant dimension that achieves a competitive ratio which is polylogarithmic in the number of nodes of the input network w.r.t. throughput maximization. Using <cit.> instead, we get a centralized randomized offline algorithm on the unidirectional line with a constant approximation ratio w.r.t. throughput maximization. In the case that deadlines need to be met, the approximation ratio is roughly O(log^* n) <cit.>. As a final example, one can obtain from <cit.> various online distributed algorithms with sublinear competitive ratios w.r.t. throughput maximization.
Cost and Gains of the Reinforcement
The price of adding fault-tolerance is given by the increase in the network size, i.e., the number of nodes and edges of the reinforced network in comparison to the original one. Due to the assumed independence of node failures, it is straightforward to see that the (uniform) probability of sustainable node faults increases roughly like n^-1/(f+1) in return for (i) a linear-in-f increase in the number of nodes and (ii) an increase in the number of edges that is quadratic in f. We then proceed to improve the construction for grids and minor-free constant-degree graphs to reduce the increase in the number of edges to being roughly linear in f. Based on this information, one can then assess the effort in terms of these additional resources that is beneficial, as less reliable nodes in turn are cheaper to build, maintain, and operate. We also note that, due to the ability of the reinforced network to ensure ongoing unrestricted operability in the presence of some faulty nodes, faulty nodes can be replaced or repaired before communication is impaired or breaks down.
Preprocessing
Preprocessing is used, e.g., in computing routing tables in Oblivious Routing <cit.>.
The reinforcement simply uses the output of such a preprocessing stage in the same manner as the original algorithm. In other words, the preprocessing is done on the input network and its output determines the input routing scheme. In particular, the preprocessing may be randomized and does not need to be modified in any way.
Randomization
Randomized routing algorithms can be simulated as well, provided that all copies of a node have access to a shared source of randomness. We remark that, as our scheme locally duplicates the network topology, it is natural to preserve the physical realization of the network topology in the sense that all (non-faulty) copies of a node are placed in physical proximity. This implies that this constraint is much easier to satisfy than globally shared randomness.
§ PRELIMINARIES
We consider synchronous routing networks.
Formally, the network is modeled as a directed graph G=(V,E), where V is the set of n≜ |V| vertices, and E is the set of m≜ |E| edges (or links).
Each node maintains a state, based on which it decides in each round for each of its outgoing links which message to transmit.
We are not concerned with the inner workings of the node, i.e., how the state is updated;
rather, we assume that we are given a scheduling algorithm performing the task of updating this state and use it in our blackbox transformations.
In particular, we allow for online, distributed, and randomized algorithms.
Probability-p Byzantine Faults Byz(p)
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol in arbitrary ways, including delaying, dropping, or forging messages, etc.
Probability-p Omission Faults Om(p)
The set of faulty nodes F⊆ V is determined by sampling each v∈ V into F with independent probability p. Nodes in F may deviate from the protocol by not sending a message over an outgoing link when they should. We note that it is sufficient for this fault model to be satisfied logically. That is, as long as a correct node can identify incorrect messages, it may simply drop them, resulting in the same behavior of the system at all correct nodes as if the message was never sent.
Simulations and Reinforcement
For a given network G=(V,E) and a scheduling algorithm A, we will seek to reinforce (G,A) by constructing G'=(V',E') and scheduling algorithm A' such that the original algorithm A is simulated by A' on G', where G' is subject to random node failures. We now formalize these notions. First, we require that there is a surjective mapping P:V'→ V; fix G' and P, and choose F'⊆ V' randomly as specified above.
Assume that in each round r∈ℕ, each v'∈ V'∖ F' is given the same input by the environment as P(v'). A' is a simulation of A under Byz(p), if for each v∈ V, a strict majority of the nodes v'∈ V' with P(v')=v computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if not only for each v∈ V there is a strict majority doing so, but all v'∈ V'∖ F' compute the state of P(v') in each round.
Assume that in each round r∈ℕ, each v'∈ V' is given the same input by the environment as P(v'). A' is a simulation of A under Om(p), if for each v∈ V, there is v'∈ V' with P(v')=v that computes in each round r∈ℕ the state of v in A in this round. The simulation is strong, if each v'∈ V' computes the state of P(v') in each round.
A (strong) reinforcement of a graph G=(V,E) is a graph G'=(V',E'), a surjective mapping P: V'→ V, and a way of determining a scheduling algorithm A' for G' out of a scheduling algorithm A for G. The reinforcement is valid under the given fault model (Byz(p) or Om(p)) if A' is a (strong) simulation of A a.a.s.
*Resources and Performance Measures.
We use the following performance measures.
* The probability p of independent node failures that can be sustained a.a.s.
* The ratio ν≜ |V'|/|V|, i.e., the relative increase in the number of nodes.
* The ratio η≜|E'|/|E|, i.e., the relative increase in the number of edges.
We now briefly discuss, from a practical point of view, why we do not explicitly consider further metrics that are of interest.
§.§ Other Performance Measures
* Latency:
As our reinforcements require (time-preserving) simulation relations, in terms of rounds, there is no increase in latency whatsoever.
However, we note that (i) we require all copies of a node to have access to the input (i.e., routing requests) of the simulated node and (ii) our simulations require to map received messages in G' to received messages of the simulated node in G.
Regarding (i), recall that it is beneficial to place all copies of a node in physical vicinity, implying that the induced additional latency is small.
Moreover, our constructions naturally lend themselves to support redundancy in computations as well, by having each copy of a node perform the tasks of its original;
in this case, (i) comes for free.
Concerning (ii), we remark that the respective operations are extremely simple;
implementing them directly in hardware is straightforward and will have limited impact on latency in most systems.
* Bandwidth/link capacities.
We consider the uniform setting in this work.
Taking into account how our simulations operate, one may use the ratio η as a proxy for this value.
* Energy consumption.
Regarding the energy consumption of links, the same applies as for bandwidth.
The energy nodes use for routing computations is the same as in the original system, except for the overhead induced by Point (ii) we discussed for latency.
Neglecting the latter, the energy overhead is in the range [min{ν,η},max{ν,η}].
* Hardware cost.
Again, neglecting the computational overhead of the simulation, the relative overhead lies in the range [min{ν,η},max{ν,η}]
In light of these considerations, we focus on p, ν, and η as key metrics for evaluating the performance of our reinforcement strategies.
§ STRONG REINFORCEMENT UNDER BYZ(P)
We now present and analyze valid reinforcements under Byz(p) on general graphs.
Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and set ℓ = 2f+1.
Reinforced Network G'
We set V'≜ V× [ℓ], where [ℓ]≜{1,…,ℓ}, and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
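For illustration, the reinforced graph can be generated mechanically from G; the sketch below uses networkx and is not tied to any particular routing scheme.

```python
# Build G' for the Byzantine construction: ell = 2f+1 copies per node, and every
# original edge (v, w) is replaced by all ell*ell edges between copies of v and w.
import networkx as nx

def reinforce(G: nx.DiGraph, f: int) -> nx.DiGraph:
    ell = 2 * f + 1
    Gp = nx.DiGraph()
    Gp.add_nodes_from((v, i) for v in G.nodes for i in range(ell))
    Gp.add_edges_from(((v, i), (w, j))
                      for v, w in G.edges
                      for i in range(ell)
                      for j in range(ell))
    return Gp  # the projection P((v, i)) = v maps G' back onto G
```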
Strong Simulation A' of A
Consider node v'∈ V'∖ F'. We want to maintain the invariant that in each round, each such node has a copy of the state of v=P(v') in A. To this end, v'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the message that has been sent to v' by at least f+1 of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step requires such a majority to exist; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
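The receive rule in the last step above is a plain majority filter; a minimal sketch (assuming hashable messages and `None` marking a missing message) could look as follows.

```python
# A copy accepts the unique message delivered by at least f+1 copies of the
# sending node in this round; otherwise no message is accepted.
from collections import Counter

def decode_majority(received, f):
    counts = Counter(m for m in received if m is not None)
    if not counts:
        return None
    msg, count = counts.most_common(1)[0]
    return msg if count >= f + 1 else None
```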
If for each v∈ V, |{v_i∈ F'}|≤ f, then A' strongly simulates A.
We show the claim by induction on the round number r∈ℕ, where we consider the initialization to anchor the induction at r=0. For the step from r to r+1, observe that because all v'∈ V'∖ F' have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Accordingly, each v'∈ V'∖ F' receives
the message A would send over (w,v) ∈ E
from each w'∈ V'∖ F' with P(w')=w (via the link (w',v')). By the assumption of the lemma, we have at least ℓ-f=f+1 such nodes, implying that v' updates the local copy of the state of A as if it received the same messages as when executing A in round r+1. Thus, the induction step succeeds and the proof is complete.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
If p ∈ o(n^-1/(f+1)), the above construction is a valid strong reinforcement for the fault model Byz(p). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f. If p ∈ o(n^-1/(f+1)), using ℓ=2f+1 and a union bound we see that the probability of this event is at least
1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j(1-p)^{2f+1-j}
≥ 1-n∑_{j=f+1}^{2f+1}\binom{2f+1}{j}p^j
≥ 1-n\binom{2f+1}{f+1}p^{f+1}∑_{j=0}^{f}p^j
≥ 1-n(2e)^f·p^{f+1}/(1-p) = 1-o(1).
Here, the second-to-last step uses that \binom{a}{b}≤ (ae/b)^b, and the final step exploits that p∈ o(n^{-1/(f+1)}).
For the second claim, assume w.l.o.g. p≤ 1/3, as increasing p further certainly increases the probability of the system to fail. For any v∈ V, the probability that |{v_i∈ F'}|> f is independent of the same event for other nodes and larger than
\binom{2f+1}{f+1}p^{f+1}(1-p)^f ≥ (3/2)^f p^{f+1}(1-p)^f ≥ p^{f+1},
since \binom{a}{b}≥ (a/b)^b and 1-p≥ 2/3. Hence, if G contains Ω(n) nodes v with non-zero outdegree, p∈ω(n^{-1/(f+1)}) implies that the probability that there is such a node v for which |{v_i∈ F'}|> f is at least
1-(1-p^{f+1})^{Ω(n)} ⊆ 1-(1-ω(1/n))^{Ω(n)} = 1-o(1).
If there is such a node v, there are algorithms A and inputs so that A sends a message across some edge (v,w) in some round. If faulty nodes do not send messages in this round, the nodes w_i∈ V'∖ F' do not receive the correct message from more than f nodes v_i and the simulation fails. Hence, the reinforcement cannot be valid.
For constant p, one can determine suitable values of f∈Θ(log n) using Chernoff's bound. However, as our focus is on small (constant) overhead factors, we refrain from presenting the calculation here.
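As a quick sanity check, the union bound from the proof can be evaluated numerically; the parameter values below are arbitrary examples, not taken from the paper.

```python
# Probability bound that some node loses more than f of its 2f+1 copies.
from math import comb

def failure_bound(n, p, f):
    ell = 2 * f + 1
    per_node = sum(comb(ell, j) * p**j * (1 - p)**(ell - j)
                   for j in range(f + 1, ell + 1))
    return min(1.0, n * per_node)   # union bound over all n nodes

print(failure_bound(n=10_000, p=1e-3, f=1))   # ~0.03, so the simulation succeeds w.h.p.
```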
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = ℓ^2 = 4f^2 + 4f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes.
However, η = 9, i.e., while the number of edges also increases only by a constant, it seems too large in systems where the limiting factor is the amount of links that can be afforded.
§ STRONG REINFORCEMENT UNDER OM(P)
The strong reinforcement from the previous section is, trivially, also a strong reinforcement under Om(p). However, we can reduce the number of copies per node for the weaker fault model. Given are the input network G=(V,E) and scheduling algorithm A. Fix a parameter f∈ℕ and, this time, set ℓ = f+1.
Reinforced Network G'
We set V'≜ V× [ℓ] and denote v_i≜ (v,i). Accordingly, P(v_i)≜ v. We define E'≜{(v',w')∈ V'× V' | (P(v'),P(w'))∈ E}.
Strong Simulation A' of A
Each node[Nodes suffering omission failures still can simulate A correctly.] v'∈ V'
[(1)]
* initializes local copies of all state variables of v as in A,
* sends on each link (v',w')∈ E' in each round the message v would send on (P(v'),P(w')) when executing A, and
* for each neighbor w of P(v') and each round r, updates the local copy of the state of A as if v received the (unique) message that has been sent to v' by some of the nodes w' with P(w')=w (each one using edge (w',v')).
Naturally, the last step assumes that some such neighbor sends a message and all w' with P(w')=w send the same such message; otherwise, the simulation fails. We show that A' can be executed and simulates A provided that for each v∈ V, no more than f of its copies are in F'.
If for each v∈ V, |{v_i∈ F'}|≤ f, A' strongly simulates A.
Analogous to the one of Lemma <ref>, with the difference that faulty nodes may only omit sending messages and thus a single correct copy per node is sufficient.
Resilience of the Reinforcement
We now examine how large the probability p can be for the precondition of Lemma <ref> to be satisfied a.a.s.
The above construction is a valid strong reinforcement for the fault model Om(p) if p ∈ o(n^-1/(f+1)). If G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' strongly simulates A if for each v∈ V, |{v_i∈ F'}|≤ f = ℓ -1. For v∈ V,
Pr[|{v_i | i∈ [ℓ]}∩ F'| = ℓ] = p^{f+1}.
By a union bound, A' thus simulates A with probability 1-o(1) if p∈ o(n^-1/(f+1)).
Conversely, if there are Ω(n) nodes with non-zero outdegree and p∈ω(n^-1/(f+1)), with probability 1-o(1) all copies of at least one such node v are faulty. If v sends a message under A, but all corresponding messages of copies of v are not sent, the simulation fails. This shows that in this case the reinforcement is not valid.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = ℓ^2 = f^2 + 2f + 1, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and quadrupling the number of edges.
§ MORE EFFICIENT REINFORCEMENT
In this section, we reduce the overhead in terms of edges at the expense of obtaining reinforcements that are not strong. We stress that the obtained trade-off between redundancy (ν and η) and the sustainable probability of faults p is asymptotically optimal: since we require arbitrary routing schemes to be preserved in a blackbox fashion, we need sufficient redundancy on the link level to directly simulate communication. From this observation, both for Byz(p) and Om(p) we can readily derive trivial lower bounds on redundancy that match the constructions below up to lower-order terms.
§.§ A Toy Example
Before we give the construction, we give some intuition on how we can reduce the number of required edges. Consider the following simple case. G is a single path of n vertices (v_1,…, v_n), and the schedule requires that in round i, a message is sent from v_i to v_i+1. We would like to use a “budget” of only n additional vertices and an additional (1+ε) m=(1+ε) (n-1) links, assuming the fault model Om(p). One approach is to duplicate the path and extend the routing scheme accordingly. We already used our entire budget apart from ε m links! This reinforcement is valid as long as one of the paths succeeds in delivering the message all the way.
The probability that one of the paths “survives” is
1-(1-(1-p)^n)^2 ≤ 1-(1-e^-pn)^2 ≤ 2e^-pn,
where we used that 1-x≤ e^-x for any x∈ℝ.
Hence, for any p = ω(1/n), the survival probability is o(1). In contrast, the strong reinforcement with ℓ=2 (i.e., f=1) given in <ref> sustains any p∈ o(1/√(n)) with probability 1-o(1); however, while it adds n nodes only, it requires 3m additional edges.
We need to add some additional edges to avoid that the likelihood of the message reaching its destination drops too quickly. To this end, we use the remaining ε m edges to “cross” between the two paths every h≜ 2/ε hops (assume h is an integer), cf. Figure <ref>.
This splits the path into segments of h nodes each. As long as, for each such segment, in one of its copies all nodes survive, the message is delivered. For a given segment, this occurs with probability 1-(1-(1-p)^h)^2≥ 1-(ph)^2. Overall, the message is thus delivered with probability at least (1-(ph)^2)^n/h≥ 1-nhp^2.
As for any constant ε, h is a constant, this means that the message is delivered a.a.s. granted that p∈ o(1/√(n))!
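For concreteness, the following short sketch evaluates these expressions numerically; the values of n, p, and h are arbitrary illustrative choices, not taken from the paper.

# Survival probabilities in the duplicated-path example.
def survival_two_disjoint_paths(n, p):
    # probability that at least one of two independent copies of an
    # n-node path contains no faulty node
    return 1.0 - (1.0 - (1.0 - p) ** n) ** 2

def survival_with_crossings(n, p, h):
    # copies are cross-connected every h hops, so each of the n/h
    # segments only needs to survive in one of its two copies
    segment_ok = 1.0 - (1.0 - (1.0 - p) ** h) ** 2
    return segment_ok ** (n // h)

n, p, h = 10_000, 1e-3, 10                 # here p is omega(1/n)
print(survival_two_disjoint_paths(n, p))   # close to 0
print(survival_with_crossings(n, p, h))    # close to 1
print(1 - n * h * p ** 2)                  # the 1 - nhp^2 lower bound from the text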
The reader is cautioned not to conclude from this example that random sampling of edges is sufficient for our purposes in more involved graphs. Since we want to handle arbitrary routing schemes, we have no control over the number of utilized routing paths. As the latter is exponential in n, the probability that a fixed path is “broken” by F would have to be exponentially small in n. Moreover, trying to leverage the Lovász Local Lemma for a deterministic result runs into the problem that there is no (reasonable) bound on the number of routing paths that pass through a single node, i.e., the relevant random variables (i.e., whether a path “survives”) exhibit many dependencies.
§.§ Partitioning the Graph
To apply the above strategy to other graphs, we must take into account that there can be multiple intertwined routing paths. However, the key point in the above example was not that we had path segments, but rather that we partitioned the nodes into constant-size regions and added few edges inside these regions, while fully connecting the copies of nodes at the boundary of the regions.
In general, it is not possible to partition the nodes into constant-sized subsets such that only a very small fraction of the edges connects different subsets; any graph with good expansion is a counter-example. Fortunately, many network topologies used in practice are good candidates for our approach. In the following, we will discuss grid networks and minor free graphs, and show how to apply the above strategy in each of these families of graphs.
Grid Networks
We can generalize the above strategy to hypercubes of dimension d>1.
A q-ary d-dimensional hypercube has node set [q]^d and two nodes are adjacent if they agree on all but one index i∈ [d], for which |v_i-w_i|=1.
For any h,d∈ℕ, assume that h divides q∈ℕ and set ε=1/h. Then the q-ary d-dimensional hypercube can be partitioned into (q/h)^d regions of h^d nodes such that at most an ε-fraction of the edges connects nodes from different regions.
We subdivide the node set into h-ary d-dimensional subcubes; for an example of the subdivision of the node set of a 6-ary 2-dimensional hypercube into 2-ary 2-dimensional subcubes see Figure <ref>. There are (q/h)^d such subcubes. The edges crossing the regions are those connecting the faces of adjacent subcubes. For each subcube, we attribute for each dimension one face to each subcube (the opposite face being accounted for by the adjacent subcube in that direction). Thus, we have at most dh^d-1 crossing edges per subcube. The total number of edges per subcube is these crossing edges plus the d(h-1)h^d-1 edges within the subcube. Overall, the fraction of crossing edges is thus at most 1/(1+(h-1))=1/h, as claimed.
Note that the above result and proof extend to tori, which also include the “wrap-around” edges connecting the first and last nodes in any given dimension.
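As a quick sanity check of the lemma, the following sketch enumerates the edges of the q-ary d-dimensional hypercube and measures the fraction crossing between the h-ary subcubes; the values of q, d, and h are small arbitrary choices.

from itertools import product

def crossing_edge_fraction(q, d, h):
    # Partition the q-ary d-dimensional hypercube into h-ary subcubes and
    # return the fraction of edges joining different subcubes.
    assert q % h == 0
    def region(v):
        return tuple(c // h for c in v)
    total, crossing = 0, 0
    for v in product(range(q), repeat=d):
        for i in range(d):
            if v[i] + 1 < q:                      # each edge counted once
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                total += 1
                crossing += (region(v) != region(w))
    return crossing / total

print(crossing_edge_fraction(6, 2, 2))   # 0.4  <= 1/h = 0.5
print(crossing_edge_fraction(9, 3, 3))   # 0.25 <= 1/h = 1/3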
Minor free Graphs
Another general class of graphs that can be partitioned in a similar fashion is that of minor-free bounded-degree graphs.
For a fixed graph H, H is a minor of G if H is isomorphic to a graph that can be obtained by zero or more
edge contractions on a subgraph of G. We say that a graph G is H-minor free if H is not a minor of G.
For any such graph, we can apply a corollary from <cit.>, which is based on <cit.>, to construct a suitable partition.
Let H be a fixed graph. There is a constant c(H) > 1 such that for every ε∈ (0, 1] and
every H-minor free graph G = (V, E) with degree bounded by Δ a partition R_1,…,R_k⊆ V with the following properties can be found in time O(|V|^3/2):
* ∀ i : |R_i|≤c(H)Δ^2/ε^2,
* ∀ i the subgraph induced by R_i in G is connected.
* |{(u,v) | u ∈ R_i, v ∈ R_j, i≠ j}|≤ε· |V|.
Grids and tori of dimension d>2 are not minor-free.
We note that this construction is not satisfactory, as it involves large constants. It demonstrates that a large class of graphs is amenable to the suggested approach, but it is advisable to search for optimized constructions for more specialized graph families before applying the scheme.
§.§ Reinforcement
Equipped with a suitable partition of the original graph G=(V,E) into disjoint regions R_1,…,R_k⊆ V, we reinforce as follows.
As before, we set V'≜ V× [ℓ], denote v_i≜ (v,i), define P(v_i)≜ v, and set ℓ≜ f+1. However, the edge set of G' differs. For e=(v,w)∈ E,
E_e'≜{(v_i,w_i) | i∈ [ℓ]} if v and w lie in the same region, and
E_e'≜{(v_i,w_j) | i,j∈ [ℓ]} if they lie in different regions,
and we set E'≜⋃_e∈ E E_e'.
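A minimal sketch of this construction, assuming the original graph is given as a list of edges and the partition as a map from nodes to region indices (all names are illustrative):

def reinforced_edges(edges, region, ell):
    # Build E' from the original edge list, the region map, and the number
    # of copies per node (ell = f+1 here, ell = 2f+1 for Byzantine faults).
    e_prime = set()
    for v, w in edges:
        if region[v] == region[w]:
            # edge inside a region: connect matching copies only
            e_prime.update(((v, i), (w, i)) for i in range(ell))
        else:
            # edge crossing two regions: fully connect the copies
            e_prime.update(((v, i), (w, j))
                           for i in range(ell) for j in range(ell))
    return e_prime

# toy usage: path 0-1-2-3, regions {0,1} and {2,3}, f = 1
edges = [(0, 1), (1, 2), (2, 3)]
region = {0: 0, 1: 0, 2: 1, 3: 1}
print(len(reinforced_edges(edges, region, ell=2)))   # 2 + 4 + 2 = 8 edges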
Simulation under Om(p)
Consider v∈ V. We want to maintain the invariant that in each round, some v_i has a copy of the state of v in A. To this end, v'∈ V'
* initializes local copies of all state variables of v as in A and marks itself as active;
* sends on each link (v',w')∈ E' in each round
* message M, if P(v') would send M via (P(v'),P(w')) when executing A and v' is active,
* a special dummy symbol if v' is active, but P(v') would not send a message via (P(v'),P(w')) according to A, or
* no message if v' is inactive;
* if, in a given round, v' is active and receives for each neighbor w of P(v') a message from some w_j∈ V' with P(w_j)=w, it updates the local copy of the state of v in A as if P(v') received this message (interpreting the dummy symbol as no message); and
* if this is not the case, v' marks itself as inactive.
We claim that as long as v' is active, it indeed has a copy of the state of P(v') in the corresponding execution of A; therefore, it can send the right messages and update its state variables correctly.
Suppose that for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], some i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. As P(C)=V, it suffices to show that each v'∈ C successfully maintains a copy of the state of P(v') under A. However, we also need to make sure that all messages, not only the ones sent by nodes in C, are “correct,” in the sense that a message sent over edge (v',w')∈ E' in round r would be sent by A over (P(v'),P(w')) (where the dummy symbol means that no message is sent). Therefore, we will argue that the nodes in T_r≜{v'∈ V' | v' is active in round r} know the state of their counterpart P(v') under A up to and including round r∈ℕ. As inactive nodes do not send any messages, this invariant guarantees that all sent messages are correct in the above sense.
We now show by induction on the round number r∈ℕ that (i) each v'∈ T_r knows the state of P(v') under A and (ii) C⊆ T_r. Due to initialization, this is correct initially, i.e., in “round 0;” we use this to anchor the induction at r=0, setting T_0≜ V'.
For the step from r to r+1, note that because all v'∈ T_r have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. Recall that v'∈ T_r+1 if and only if v'∈ T_r and for each (w,P(v'))∈ E there is at least one w'∈ V' with P(w')=w from which v' receives a message. Since under Om(p) nodes in F' may only omit sending messages, it follows that v'∈ T_r+1 correctly updates the state variables of P(v'), just as P(v') would in round r+1 of A.
It remains to show that C⊆ T_r+1. Consider v_i∈ C and (w,v)∈ E. If v,w∈ R_k' for some k'∈ [k], then w_i∈ C by definition of C. Hence, by the induction hypothesis, w_i∈ T_r, and w_i will send the message w would send in round r+1 of A over (w,v)∈ E to v_i, using the edge (w_i,v_i)∈ E'. If this is not the case, then there is some j∈ [ℓ] such that w_j∈ C and we have that (w_j,v_i)∈ E'. Again, v_i will receive the message w would send in round r+1 of A from w_j. We conclude that v_i receives at least one copy of the message from w for each (w,v)∈ E, implying that v_i∈ T_r+1 as claimed. Thus, the induction step succeeds and the proof is complete.
Figure <ref> provides an example of a comparison between a network, a naive duplication of that network, and its reinforcement. The simulation process of sending a message in the same sample network is shown in Figure <ref>.
Resilience of the Reinforcement
We denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for Om(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree and R∈ O(1), p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there is some i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^|R_k'|≥ 1-Rp.
Accordingly, the probability that for a given k' the precondition of the lemma is violated is at most (Rp)^f+1. As k≤ n/r, taking a union bound over all k' yields that with probability at least 1-n/r· (Rp)^f+1, A' simulates A. Therefore, the reinforcement is valid if p ∈ o((n/r)^-1/(f+1)/R).
Now assume that r≤ R∈ O(1) and also that p∈ω(n^-1/(f+1))⊆ω((n/r)^-1/(f+1)/R). Thus, for each v∈ V, all v'∈ V' with P(v')=v simultaneously end up in F' with probability ω(1/n). Therefore, if Ω(n) nodes have non-zero outdegree, with a probability in 1-(1-ω(1/n))^Ω(n)=1-o(1) for at least one such node v all its copies end up in F'. In this case, the simulation fails if v sends a message under A, but all copies of v' suffer omission failures in the respective round.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(1+ε)f+ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by doubling the number of nodes and multiplying the number of edges by 2.4.
For hypercubes and tori, the asymptotic notation for p does not hide huge constants.
Lemma <ref> shows that h enters the threshold in Theorem <ref> as h^-d+1/2.
For the cases of d=2 and d=3, which are the most typical (for d>3 grids and tori suffer from large distortion when embedding them into 3-dimensional space), the threshold on p degrades by factors of 11.2 and 55.9, respectively.
§.§ Simulation under Byz(p)
The same strategy can be applied for the stronger fault model p, if we switch back to having ℓ=2f+1 copies and nodes accepting the majority message among all messages from copies of a neighbor in the original graph.
Consider node v∈ V. We want to maintain the invariant that in each round, a majority among the nodes v_i, i∈ [ℓ], has a copy of the state of v in A. For v'∈ V' and (w,P(v'))∈ E, set N_v'(w)≜{w'∈ V' | (w',v')∈ E'}. With this notation, v' behaves as follows.
* It initializes local copies of all state variables of v as in A.
* It sends in each round on each link (v',w')∈ E' the message v would send on (P(v'),P(w')) when executing A (if v' cannot compute this correctly, it may send an arbitrary message).
* It updates its state in round r as if it received, for each (w,P(v'))∈ E, the message the majority of nodes in N_v'(w) sent.
Suppose for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. Then A' simulates A.
Select for each R_k', k'∈ [k], f+1 indices i such that {v_i | v∈ R_k'}∩ F'=∅ and denote by C the union of all these nodes. We claim that each v'∈ C successfully maintains a copy of the state of P(v') under A. We show this by induction on the round number r∈, anchored at r=0 due to initialization.
For the step from r to r+1, observe that because all v'∈ C have a copy of the state of P(v') at the end of round r by the induction hypothesis, each of them can correctly determine the message P(v') would send over link (v,w)∈ E in round r+1 and send it over each (v',w')∈ E' with P(w')=w. For each v'∈ C and each (w,P(v')), we distinguish two cases. If P(v') and w are in the same region, let i be such that v'=v_i. In this case, N_v'(w)={w_i} and, by definition of C, w_i∈ C. Thus, by the induction hypothesis, w_i sends the correct message in round r+1 over the link (w_i,v'). On the other hand, if P(v') and w are in different regions, N_v'(w)={w_i | i∈ [ℓ]}. By the definition of C and the induction hypothesis, the majority of these nodes (i.e., at least f+1 of them) sends the correct message w would send over (w,P(v')) in round r+1 when executing A. We conclude that v' correctly updates its state, completing the proof.
Resilience of the Reinforcement
As before, denote R≜max_k'∈ [k]{|R_k'|} and r≜min_k'∈ [k]{|R_k'|}.
The above construction is a valid reinforcement for the fault model Byz(p) if p ∈ o((n/r)^-1/(f+1)/R). Moreover, if G contains Ω(n) nodes with non-zero outdegree, p∈ω(n^-1/(f+1)) implies that the reinforcement is not valid.
By Lemma <ref>, A' simulates A if for each k'∈ [k], there are at least f+1 indices i∈ [ℓ] so that {v_i | v∈ R_k'}∩ F'=∅. For fixed k' and i∈ [ℓ],
Pr[{v_i | v∈ R_k'}∩ F'=∅]=(1-p)^|R_k'|≥ 1-Rp.
Thus, analogous to the proof of Theorem <ref>, the probability that for a given k' the condition is violated is at most
∑_j=f+1^2f+1\binom{2f+1}{j}(Rp)^j(1-Rp)^2f+1-j = (2e)^f(Rp)^f+1(1+o(1)).
By a union bound over the at most n/r regions, we conclude that the precondition p ∈ o((n/r)^-1/(f+1)/R) guarantees that the simulation succeeds a.a.s.
For the second statement, observe that for each node v∈ V of non-zero outdegree,
Pr[|{v_i | i∈ [ℓ]}∩ F'|≥ f+1]≥ p^f+1= ω(1/n).
Thus, a.a.s. there is such a node v. Let (v,w)∈ E and assume that A sends a message over (v,w) in some round. If v and w are in the same region, the faulty nodes sending an incorrect message will result in a majority of the 2f+1=|{w'∈ V' | P(w')=w}| copies of w attaining an incorrect state (of the simulation), i.e., the simulation fails. Similarly, if w is in a different region than v, for each copy of w the majority message received from N_w'(v) will be incorrect, resulting in an incorrect state.
Note that the probability bounds in Theorem <ref> are essentially tight in case R∈ O(1). A more careful analysis establishes similar results for r∈Θ(R)∩ω(1), by considering w.l.o.g. the case that all regions are connected and analyzing the probability that within a region, there is some path so that for at least f+1 copies of the path in G', some node on the path is faulty. However, as again we consider the case R∈ O(1) to be the most interesting one, we refrain from generalizing the analysis.
Efficiency of the Reinforcement
For f∈ℕ, we have that ν = ℓ = 2f+1 and η = (1-ε)ℓ + εℓ^2 = 1+(2+2ε)f+4ε f^2, while we can sustain p∈ o(n^-1/(f+1)).
In the special case of f=1 and ε=1/5, we improve from p∈ o(1/n) for the original network to p∈ o(1/√(n)) by tripling the number of nodes and multiplying the number of edges by 4.2.
§ EMPIRICAL EVALUATION
We have shown that our approach from <ref> works particularly well
for graphs that admit a certain partitioning, such as
sparse graphs (e.g., minor-free graphs) or low-dimensional
hypercubes. To provide some empirical motivation for the relevance
of these examples, we note that the topologies collected
in the Rocketfuel <cit.> and Internet Topology Zoo <cit.> projects
are all sparse: almost a third (namely 32%) of the topologies even belong to the family of
cactus graphs, and roughly half of the graphs (49%) are outerplanar <cit.>.
To complement our analytical results and study the reinforcement cost
of our approach in realistic networks, we conducted simulations on
the around 250 networks from the Internet Topology Zoo.
While we have a fairly good understanding of the different network topologies
deployed in practice, unfortunately, little is known about the state-of-the-art protection mechanisms used by network operators today. Network operators are typically reluctant to share details about their infrastructure for security reasons, rendering a comparative evaluation difficult. That said, it seems relatively safe to assume that the most robust solutions rely on a one-by-one (“A/B”) replication strategy which allows traffic to be completely rerouted to a backup network; this baseline requires doubling resources and can hence be fairly costly.
In the following, we will report on our main insights.
Due to space constraints, we focus on the case of omission faults;
the results for Byzantine faults follow the same general trends.
Recall that we replace each node by f+1 of its copies, and each edge with endpoints in
different regions of the partition with (f+1)^2 copies; every other edge is replaced by f+1 copies.
Our goal is to do this partitioning such that it minimizes the edge overhead of the new network and
maximizes the probability of the network's resilience.
The fault probability of the network for given p, f and partitions with l_1, l_2, ..., l_k nodes is calculated as
1 - ∏_i=1^k [1-(1-(1-p)^l_i)^f+1].
In the following, as a case study, we fix a target network failure probability of at most 0.01.
That is, the reinforced network is guaranteed to operate correctly with a probability of 99%, and we aim to maximize the probability p with which nodes independently fail subject to this constraint.
For this fixed target resilience of the network, we determine the value of p matching it using the above formula.
We remark that the qualitative behavior for smaller probabilities of network failure is the same, where the more stringent requirement means that our scheme outperforms naive approaches for even smaller network sizes.
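A small sketch of this computation; it evaluates the failure-probability formula above and bisects for the largest sustainable p (the example partition sizes and the bisection tolerance are arbitrary choices):

def network_failure_probability(p, f, sizes):
    # 1 - prod_i [ 1 - (1 - (1-p)^l_i)^(f+1) ] for regions of sizes l_1,...,l_k
    ok = 1.0
    for l in sizes:
        ok *= 1.0 - (1.0 - (1.0 - p) ** l) ** (f + 1)
    return 1.0 - ok

def max_sustainable_p(f, sizes, target=0.01, tol=1e-9):
    # largest node-failure probability p keeping the network failure
    # probability below the target (the formula is increasing in p)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if network_failure_probability(mid, f, sizes) <= target:
            lo = mid
        else:
            hi = mid
    return lo

print(max_sustainable_p(0, [33]))                      # unreinforced 33-node network, ~0.0003
print(max_sustainable_p(1, [4, 4, 4, 4, 4, 4, 4, 5]))  # f = 1 with an example partition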
For the examined topologies, it turned out that no specialized tools were needed to find good partitionings.
We considered a Spectral Graph Partitioning tool <cit.> and Metis <cit.>,
a partitioning algorithm from a python library.
For small networks (less than 14 nodes), we further implemented a brute-force algorithm,
which provides an optimal baseline.
Figure <ref> shows the resulting edge overheads for the different partitioning algorithms
as a function of p and for f=3, at hand of a specific example.
For reference, we added the value of p for the original graph (f=0) to the plot, which has an overhead factor of 1 (no redundancy).
As to be expected, for each algorithm and the fixed value of f=3, as the number of components in partitionings increases, the edge overhead and p
increase as well.
The “Singleton partition” point for f=3 indicates the extreme case where the size of the components is equal to 1 and the approach becomes identical to strong reinforcement (see <ref>);
hence, it has an edge overhead of (f+1)^2=16.
The leftmost points of the f=3 curves correspond to the other extreme of “partitioning” the nodes into a single set, resulting in naive replication of the original graph, at an edge overhead of f+1=4.
We observed this general behavior for networks of all sizes under varying f, where the spectral partitioning consistently outperformed Metis, and both performed very close to the brute force algorithm on networks to which it was applicable.
We concluded that the spectral partitioning algorithm is sufficient to obtain results that are close to optimal for the considered graphs, most of which have fewer than 100 nodes, with only a handful of examples with size between 100 and 200.
Accordingly, in the following we confine the presentation to the results obtained using the spectral partitioning algorithm.
In Figure <ref>, we take a closer look on how the edge overhead
depends on f, at hand of a network of 33 nodes. Note that the partitionings do not depend on f, causing the 10 curves to have similar shape.
As f increases, the node overhead, edge overhead, and p for the reinforced networks increase.
We can see that it is advisable to use larger values of f only if the strong reinforcement approach for smaller f cannot push p to the desired value.
We also see that f=1 is sufficient to drive p up to more than 6%, improving by almost two orders of magnitude over the roughly 0.01/33≈ 0.03% the unmodified network can tolerate with probability 99%.
While increasing f further does increase resilience, the relative gains are much smaller, suggesting that f=1 is the most interesting case.
Following up on this, in Figure <ref> we plot p for all existing networks in the Topology Zoo using the spectral graph partitioning algorithm and f=1.
Specifically, for each network, we calculated the value of p on a set of reinforced networks with different node and edge overheads. Naturally, with increasing network size, the value of p that can be sustained at a given overhead becomes smaller. Note, however, that naive replication quickly loses ground as n becomes larger. In particular, already for about 20 nodes, an edge overhead of 3 with our approach is better than adding two redundant copies of the original network, resulting in more nodes, but the same number of edges. Beyond roughly 50 nodes, our approach outperforms two independent copies of the network using fewer edges, i.e., an edge overhead of 2.5.
To show more clearly when our approach outperforms naive network replication, Figure <ref> plots the relative gain in the probability p of node failure that can be sustained compared to the original network.
This plot is similar to the previous one. The y-axis now represents p divided by the value of p for the original graph. We now see that naive replication provides an almost constant improvement across the board. This is due to the fact that under this simple scheme, the reinforcement fails as soon as in each copy of the graph at least one node fails, as it is possible that a routing path in the original graph involves all nodes corresponding to failed copies.
Denote by p_k the probability of node failure that can be sustained with 99% reliability when simply using k copies of the original graph (in particular p_1≈ 0.01/n). For small k, the probability (1-p_k)^n that a single copy of the original graph is fault-free needs to be close to 1. Hence, we can approximate (1-p_k)^n≈ 1-p_k n. The probability that all copies contain a failing node is hence approximately (p_kn)^k. Thus, p_1 n ≈ 0.01≈ (p_k n)^k, yielding that
p_k/p_1=p_k n/p_1 n≈0.01^1/k/0.01=100^1-1/k.
In particular, we can expect ratios of roughly 10 for k=2 and 21.5 for k=3, respectively. The small discrepancy to the actual numbers is due to the approximation error, which would be smaller for higher target resilience.
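This back-of-the-envelope relation is easy to check directly:

# ratio p_k / p_1 ~ 100^(1 - 1/k) for naive k-fold replication at a 99% target
for k in (2, 3, 4):
    print(k, round(100 ** (1 - 1 / k), 1))   # 10.0, 21.5, 31.6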
As the plot clearly shows, our method achieves a relative improvement that increases with n, as predicted by Theorem <ref>.
In conclusion, we see that our approach promises substantial improvements over the naive replication strategy,
which is commonly employed in mission-critical networks
(e.g., using dual planes as in RFC 7855 <cit.>).
§ DISCUSSION
In the previous sections, we have established that constant-factor redundancy can significantly increase reliability of the communication network in a blackbox fashion. Our constructions in <ref> are close to optimal. Naturally, one might argue that the costs are still too high. However, apart from pointing out that the costs of using sufficiently reliable components may be even higher, we would like to raise a number of additional points in favor of the approach.
Node Redundancy
When building reliable large-scale systems, fault-tolerance needs to be considered on all system levels. Unless nodes are sufficiently reliable, node replication is mandatory, regardless of the communication network. In other words, the node redundancy required by our construction may not be an actual overhead to begin with. When taking this point of view, the salient question becomes whether the increase in links is acceptable. Here, the first observation is that any system employing node redundancy will need to handle the arising additional communication, incurring the respective burden on the communication network. Apart from still having to handle the additional traffic, however, the system designer now needs to make sure that the network is sufficiently reliable for the node redundancy to matter. Our simple schemes then provide a means to provide the necessary communication infrastructure without risking to introduce, e.g., a single point of failure during the design of the communication network; at the same time, the design process is simplified and modularized.
Dynamic Faults
Because of the introduced fault-tolerance, faulty components do not impede the system as a whole, so long as the simulation of the routing scheme can still be carried out. Hence, one may repair faulty nodes at runtime. If T is the time for detecting and fixing a fault, we can discretize time in units of T and denote by p_T the (assumed to be independent) probability that a node is faulty in a given time slot, which can be bounded by twice the probability to fail within T time. Then the failure probabilities we computed in our analysis directly translate to an upper bound on the expected fraction of time during which the system is not (fully) operational.
Adaptivity
The employed node- and link-level redundancy may be required for mission-critical applications only, or the system may run into capacity issues. In this case, we can exploit that the reinforced network has a very simple structure, making various adaptive strategies straightforward to implement.
* One might use a subnetwork only, deactivating the remaining nodes and links, such that a reinforced network for smaller f (or a copy of the original network, if f=0) remains. This saves energy.
* One might subdivide the network into several smaller reinforced networks, each of which can perform different tasks.
* One might leverage the redundant links to increase the overall bandwidth between (copies of) nodes, at the expense of reliability.
* The above operations can be applied locally; e.g., in a congested region of the network, the link redundancy could be used for additional bandwidth. Note that if only a small part of the network is congested, the overall system reliability will not deteriorate significantly.
Note that the above strategies can be refined and combined according to the profile of requirements of the system.
§ RELATED WORK
Robust routing is an essential feature of dependable
communication networks, and has been explored
intensively in the literature already.
*Resilient Routing on the Network Layer
In contrast to our approach,
existing resilient routing mechanisms on the network layer
are typically reactive.
They
can be categorized
according to whether they are supported in the
control plane, e.g.,
<cit.>,
or in the data plane, e.g., <cit.>,
see also the recent survey <cit.>.
These mechanisms are usually designed to cope with link failures.
Resilient routing algorithms in the control plane
typically rely on a global recomputation of paths
(either
centralized <cit.>,
distributed <cit.>
or both <cit.>),
or on techniques based on link reversal <cit.>, and can
hence re-establish policies relatively easily;
however, they come at the price of a relatively high restoration time
<cit.>.
Resilient routing algorithms in the dataplane can react to failures
significantly faster <cit.>; however,
due to the local nature of the failover, it is challenging to
maintain network policies or even a high degree of resilience <cit.>.
In this line of literature,
the network is usually given and the goal is to re-establish
routing paths quickly, ideally as long as the underlying physical
network is connected (known as perfect resilience <cit.>).
In contrast, in this paper we ask the question of how to proactively enhance the
network in order to tolerate failures, rather than reacting to them. In particular, we consider more general failures,
beyond link failures and benign faults.
We argue that such a re-enforced
network simplifies routing as it is not necessary to compute new paths.
The resulting problems are very different in nature, also in terms
of the required algorithmic techniques.
*Local Faults
In this paper, we consider more general failure models
than typically studied in the resilient routing literature above,
as our model is essentially a local fault model.
Byzantine faults were studied in <cit.> in the context of broadcast and consensus problems. Unlike its global classical counterpart, the f-local Byzantine adversary can control at most f neighbors of each vertex. This more restricted adversary gives rise to more scalable solutions, as the problems can be solved in networks of degree O(f); without this restriction, degrees need to be proportional to the total number of faults in the network.
We also limit our adversary in its selection of Byzantine nodes, by requiring that the faulty nodes are chosen independently at random. As illustrated, e.g., by Lemma <ref> and Theorem <ref>, there is a close connection between the two settings. Informally, we show that certain values of p correspond, asymptotically almost surely (a.a.s), to an f-local Byzantine adversary. However, we diverge from the approach in <cit.> in that we require a fully time-preserving simulation of a fault-free routing schedule, as opposed to solving the routing task in the reinforced network from scratch.
*Fault-Tolerant Logical Network Structures
Our work is reminiscent of literature on
the design fault-tolerant network structures.
In this area (see <cit.> for a survey), the goal is to compute a sub-network that has a predefined property, e.g., containing minimum spanning tree. More specifically, the sub-network should sustain adversarial omission faults without losing the property. Hence, the sub-network is usually augmented (with edges) from the input network in comparison to its corresponding non-fault-tolerant counterpart. Naturally, an additional goal is to compute a small such sub-network. In contrast, we design a network that is reinforced (or augmented) by additional edges and nodes so that a given routing scheme can be simulated while facing randomized Byzantine faults. As we ask for being able to “reproduce” an arbitrary routing scheme (in the sense of a simulation relation), we cannot rely on a sub-network.
The literature also considered random fault models.
In the network reliability problem, the goal is to compute the probability that the (connected) input network becomes disconnected under random independent edge failures. The reliability of a network is the probability that the network remains connected after this random process.
Karger <cit.> gave a fully polynomial randomized approximation scheme for the network reliability problem.
Chechik et. al <cit.> studied a variant of the task, in which the goal is to compute a sparse sub-network that approximates the reliability of the input network.
We, on the other hand, construct a reinforced network that increases the reliability of the input network;
note also that our requirements are much stricter than merely preserving connectivity.
*Self-healing systems
In the context of self-healing routing (e.g., Castañeda et al. <cit.>), researchers have studied a model where an adversary removes nodes in an online fashion, one node in each time step (at most n such steps). In turn, the distributed algorithm adds links and sends at most O(Δ) additional messages to overcome the inflicted omission fault.
Ideally, the algorithm is “compact”: each node's storage is limited to o(n) bits.
A nice property of the algorithm in <cit.> is that the degrees are increased by at most 3. For our purposes, an issue is that the diameter is increased by a logarithmic factor of the maximum initial degree, and hence the same holds for the latency of the routing scheme. Instead, we design a network that is “oblivious” to faults in the sense that the network is “ready” for independent random faults up to a certain probability, without the need to reroute messages or any other reconfiguration. Moreover, our reinforcements tolerate Byzantine faults and work for arbitrary routing schemes. We remark that compact self-healing routing schemes also deal with the update time of the local data structures following the deletion of a node; no such update is required in our approach.
*Robust Peer-to-Peer Systems
Peer-to-peer systems are often particularly dynamic and the development
of robust algorithms hence crucial.
Kuhn et. al <cit.> study faults in peer-to-peer systems in which an adversary adds and removes nodes from the network within a short period of time (this process is also called churn). In this setting, the goal is to maintain functionality of the network in spite of this adversarial process. Kuhn et al. <cit.> considered hypercube and pancake topologies, with a powerful adversary that cannot be “fooled” by randomness. However, it is limited to at most O(Δ) nodes, where Δ is the (maximum) node degree, which it can add or remove within any constant amount of time. The main idea in <cit.> is to maintain a balanced partition of the nodes, where each part plays the role of a supernode in the network topology. This is done by rebalancing the nodes after several adversarial acts, and increasing the dimensionality of the hypercube in case the parts become too big.
Hypercubes were also of particular interest in this paper. We employ two partitioning techniques to make sure that: (1) the size of each part is constant and (2) the number of links in the cut between the parts is at most · n, where n is the number of nodes. These partitioning techniques help us dial down the overheads within each part, and avoid a failure of each part due to its small size. However, we note that our motivation for considering these topologies is that they are used as communication topologies, for which we can provide good reinforcements, rather than choosing them to exploit their structure for constructing efficient and/or reliable routing schemes (which is of course one, but not the only reason for them being used in practice).
§ CONCLUSION
In this paper, we proposed simple replication strategies for improving network reliability. Despite being simple and general, both in terms of their application and analysis, our strategies can substantially reduce the required reliability on the component level to maintain network functionality compared to the baseline, without losing messages or increasing latencies.
The presented transformations allow us to directly reuse non-fault-tolerant routing schemes as a blackbox,
and hence avoid the need to refactor working solutions.
We consider this property highly useful in general and essential in real-time systems.
Hence, being prepared for non-benign faults can be simple, affordable, and practical, and therefore enables building larger reliable networks. Interestingly, while our basic schemes may hardly surprise, we are not aware of any work systematically exploring and analyzing this perspective.
We understand our work as a first step and believe that it opens
several interesting avenues for future research.
For example:
* Which network topologies allow for good partitions as utilized in <ref>? Small constants here result in highly efficient reinforcement schemes, which are key to practical solutions.
* Is it possible to guarantee strong simulations at smaller overheads?
* Can constructions akin to the one given in <ref> be applied to a larger class of graphs?
On the practical side, while
our simulations indicate that our approach
can be significantly more efficient than a naive one-by-one replication strategy
to provision
dependable ISP networks,
it will be interesting to extend these empirical studies and also consider
practical aspects such as the incremental deployment
in specific networks.
Acknowledgments.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement 716562) and from the Vienna Science and Technology Fund (WWTF), under grant number ICT19-045 (project WHATIF).
This research was supported by the Israel Science Foundation under Grant 867/19.
Christoph Lenzen
received a diploma degree in mathematics from the University of Bonn in 2007 and a
Ph. D. degree from ETH Zurich in 2011. After postdoc positions at the Hebrew University of Jerusalem,
the Weizmann Institute of Science, and MIT, he became group leader at MPI for Informatics in 2014.
In 2021 he became faculty member at CISPA.
He received the best paper award at PODC 2009, the ETH medal for his dissertation, and in 2017 an ERC starting grant.
Moti Medina
is a faculty member at the Engineering Faculty at Bar-Ilan University since 2021. Previously, he was a faculty member at the Ben-Gurion University of the Negev and a post-doc
researcher in MPI for Informatics and in the Algorithms and Complexity group at
LIAFA (Paris 7). He graduated his Ph. D., M. Sc., and B. Sc. studies at the
School of Electrical Engineering at Tel-Aviv University, in 2014, 2009, and 2007
respectively. Moti is also a co-author of a text-book on logic design
“Digital Logic Design: A Rigorous Approach”, Cambridge Univ. Press, Oct.
2012.
Mehrdad Saberi
is an undergraduate student in Computer Engineering at Sharif University of Technology, Tehran, Iran. He achieved a silver medal in International Olympiad in Informatics (2018, Japan) during high school and is currently interested in studying and doing research in Theoretical Computer Science.
Stefan Schmid
is a Professor at TU Berlin, Germany.
He received his MSc (2004) and PhD
(2008) from ETH Zurich, Switzerland. Subsequently, Stefan Schmid
worked as postdoc at TU Munich and the University of Paderborn (2009).
From 2009 to 2015, he was a senior research scientist at the Telekom Innovations Laboratories (T-Labs) in Berlin, Germany, from 2015 to 2018 an Associate
Professor at Aalborg University, Denmark, and from 2018 to 2021 a Professor
at the University of Vienna, Austria.
His research interests revolve around algorithmic problems of networked and distributed systems,
currently with a focus on self-adjusting networks
(related to his ERC project AdjustNet) and resilient networks (related to his WWTF project
WhatIf).
|
http://arxiv.org/abs/2307.04527v1 | 20230710125059 | Automatic Debiased Machine Learning for Covariate Shifts | [
"Michael Newey",
"Whitney K. Newey"
] | stat.ME | [
"stat.ME"
] |
Automatic Debiased Machine Learning for Covariate Shifts
Michael Newey and Whitney K Newey Research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the United States Air Force or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. This research was supported by NSF Grant 1757140
August 12, 2023
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper we address the problem of bias in machine learning of parameters following covariate shifts. Covariate shift occurs when the distribution of input features changes between the training and deployment stages. Regularization and model selection associated with machine learning biases many parameter estimates. We propose an automatic debiased machine learning approach to correct for this bias under covariate shifts. The proposed approach leverages state-of-the-art techniques in debiased machine learning to debias estimators of policy and causal parameters when covariate shift is present. The debiasing is automatic in only relying on the parameter of interest and not requiring the form of the bias. We show that our estimator is asymptotically normal as the sample size grows. Finally, we evaluate the method using a simulation.
§ INTRODUCTION
In applications, machine learners trained on one data set may be used to estimate parameters of interest in another data set that
has a different distribution of predictors.
For example, training data could be a sub-population of a larger population or the training and estimation could take place at different times where the distribution of predictors varies between times.
A case we consider in this work is a neural network trained to predict on one data set and then used to learn average outcomes from previously unseen data on predictor variables.
Another case, from causal statistics, is a Lasso regression trained on outcome, treatment, and covariate data that is used to estimate counterfactual averages in another data set with a different distribution of covariates.
This is an important case of distribution shift that is known as covariate shift <cit.>.
Such covariate shifts are of interest in a wide variety of settings, including
estimation of
counterfactual averages and causal effects with shifted covariates
<cit.>.
Additionally, covariate shifts are interesting for classification where the training data may differ from
the field data <cit.>. There are many important
parameters that depend on covariate shifts, including average outcomes or average potential outcomes learned from shifted covariate data.
This paper concerns machine learning of parameters of interest in field data
that depend on regressions in training data. An important problem with
estimation is the bias that can result from regularization and/or model
selection in machine learning on training data. In this paper we address this
problem by giving automatic debiased machine learners of parameters of
interest. The debiasing is automatic in only requiring the object of interest
and in not requiring a full theoretical formula for the bias correction.
The debiased estimators given are obtained by plugging a training data
regression into the formula of interest in the field data and adding a debiasing
term. The debiasing term consists of an average product in the training data
of a debiasing function and regression residuals. The debiasing function is
estimated by linking the field data with certain features of the training data
in a way that only uses the formula for the parameter. The debiased estimators
have a form similar to those of <cit.>. The estimators here
differ from previous estimators in that training and field data come from
different sources, with field data being statistically independent of training data and the distribution of covariates being different in the field data than the training data.
In this paper we will describe the estimator, provide the underlying theory,
and report results of a simulation on artificial data.
§ PARAMETERS OF INTEREST AND DEBIASED ESTIMATING EQUATIONS
The parameters of interest we consider depend on a regression where Y is an
outcome variable of interest, X is a vector of regressors, and
γ_0(X)=E(Y|X).
Here γ_0(X) is the conditional expectation of the outcome variable
Y given regressors X. The parameter of interest will also depend on a
vector of random variables Z, that will often be a shifted vector of
regressors, with the same dimension as X but a different distribution than
X. This setup is meant to apply to settings where γ_0(X) is a
nonparametric regression for training data and the parameter depends on field
data Z. The parameter θ_0 we consider will be the expectation of a
functional m(Z,γ) of a vector of variables Z and a possible
regression γ, given by
θ_0=E[m(Z,γ_0)].
We focus on parameters where m(Z,γ) depends linearly on the
function γ.
An example of this type of parameter has m(Z,γ
)=γ(Z) so that
θ_0=E[γ_0(Z)].
In this example, θ_0 is the expectation of Y when the regressor
distribution is shifted from that of X to Z and the regression function. Here θ_0 quantifies what the
expectation of Y would be if the covariate distribution shifted from that of
X to that of Z. For example, if Y is a binary variable for classification
into one of two groups, then θ_0 is the classification probability
that would be produced by the field data.
Another example, from causal statistics, is an average potential outcome in
field data using a conditional mean from training data. Here X=(D,X_2) for
a discrete treatment D and covariates X_2 and Y=Y(D) where each
potential outcome Y(d) is independent of D conditional on X_2 in the
training data. Let Z be random vector in field data with the same dimension
as X_2 but with possibly a different distribution than X_2. The object
of interest is
θ_0=E[γ_0(d,Z)].
In this example, when treatment is independent of the potential outcomes
conditional on X_2, the θ_0 is an average of potential outcome
Y(d) when covariates X_2 have been shifted to Z. This object can be an
average potential outcome in the field data. Here m(z,γ)=γ(d,z)
for some specific possible treatment d.
For estimation we explicitly consider the case with distinct training and
field data, as in the motivating examples. Here Z and (Y,X) are from
distinct data sets that do not overlap. We will consider a regression learner
γ̂(X) that is computed from training data with T samples
(Y_1,X_1),...,(Y_T,X_T).
Estimators of the object of interest also make use of N samples of field
data
Z_1,...,Z_N.
A plug-in estimator of the parameter of interest can be constructed from a
learner γ̂ of the conditional mean based on the training data.
Replacing the expectation in equation (<ref>) with a sample
average over the field data Z_i and the true conditional expectation
γ_0 with an estimator γ̂ gives
θ̃=1/N∑_i=1^Nm(Z_i,γ̂).
This plug-in estimator of the parameter of interest is known to suffer from
biases resulting from regularization and/or model selection in γ̂,
as discussed in <cit.>. We will now describe an
alternative, debiased estimator that reduces the bias in θ̃
that comes from γ̂.
The debiased estimator is obtained by using the training data to form a bias
correction for the plug-in estimator. The bias correction is based on a
function of the training data given by
ϕ(Y,X,γ,α)=α(X)[Y-γ(X)],
where α(X) is a function that helps lower bias. We will assume that
there is some some true, unknown debiasing function α_0(X) with
finite second moment such that
E[m(Z,Δ)]=E[α_0(X)Δ(X)], for all Δ(X) with
E[Δ(X)^2]<∞.
From the Riesz representation theorem it is known that existence of such an
α_0 is equivalent to mean square continuity of E[m(Z,Δ)] in
Δ, meaning that there is a constant C with | E[m(W,Δ
)]| ^2≤ CE[Δ(X)^2] for all Δ with finite second
moment. To see how such an α_0(X) helps in debiasing, note that for any
γ(X), Δ(X)=γ(X)-γ_0(X), and α(X) taking
expectations we have
E[m(Z,γ)]-θ_0+E[ϕ(Y,X,γ,α)] =E[m(W,Δ)]+E[α(X)(Y-γ(X))]
=E[α_0(X)Δ(X)]-E[α(X)Δ(X)]
=E[(α_0(X)-α(X))Δ(X)],
where the first equality follows by linearity of m(Z,γ) in γ and
the second equality by iterated expectations. In this way adding the training
data term E[ϕ(Y,X,γ,α)] to the identifying term E[m(Z,γ
)]-θ_0 makes the expectation differ from θ_0 by only the
expected product term following the last equality. In particular, when
α=α_0 the expression preceding the first equality is zero so
that adding E[ϕ(Y,X,γ,α)] to E[m(Z,γ)]-θ_0
exactly cancels the effect of γ. Also, when γ(X)=γ_0(X)
the expression preceding the first equality is zero, even when
α(X)≠α_0. Thus, the presence of the bias correction term
ϕ(Y,X,γ_0,α) does not affect the expectation even though
α≠α_0. The fact that the expectation preceding the first
equality is zero when either γ or α is not equal to its true
value (but one of them is), is a double robustness property shown by <cit.> for linear functions of a regression.
Using this bias correction to estimate the parameter of interest depends
crucially on being able to estimate the α_0(X) of equation
(<ref>). This α_0(X) can be identified as the
minimizing value of the expectation of a known function of α,
α_0=min_αE[-2m(Z,α)+α(X)^2].
To see that α_0 is identified in this way, we note that by adding and
subtracting C=E[α_0(X)^2] and completing the square we obtain
E[-2m(Z,α)+α(X)^2] =-C+E[α_0(X)^2-2α_0(X)α(X)+α(X)^2]
=-C+E[(α_0(X)-α(X))^2].
This justification of of equation (<ref>) is similar to that in <cit.> where Z is understood to come from field data and Y and X from training data.
Here we see that the objective function of equation (<ref>) does
indeed have a unique minimum at the α_0(X) of equation
(<ref>). Consequently an estimator of α_0 can be
constructed by minimizing a sample version of the objective function in
equation (<ref>). Thus α̂ can be used to construct a
bias correction by adding a training sample average of ϕ(Y,X,γ̂,α̂) to the plug-in estimator. We describe this debiased machine
learner in the next section.
§.§ Estimation with Cross-Fitting
One kind of debiased machine learner can be based on cross-fitting, a form of
sample splitting. Cross-fitting is known to further reduce bias for some
estimators and to help obtain large sample inference results for a variety of
regression learners as in <cit.>. The cross-fitting will
average over different data than used in the construction of γ̂.
To describe the cross-fitting let I_ℓ,(ℓ=1,...,L) denote a partition
of the training set sample indices into L distinct subsets of about equal
size and let T_ℓ be the number of observations in I_ℓ. In
practice L=5 (5-fold) or L=10 (10-fold) cross-fitting is often used. Also
let γ̂_ℓ(x) and α̂_ℓ(x) respectively be
estimators of γ_0 and α_0 computed from all observations not
in I_ℓ, where α̂_ℓ(x) will be described in what
follows. For each fold ℓof the cross-fitting a debiased machine
learner can be constructed as the sum of a plug-in estimator and a bias correction
θ̂_ℓ=1/N∑_i=1^Nm(Z_i,γ̂_ℓ)+1/T_ℓ∑_t∈ I_ℓα̂_ℓ(X_t)[Y_t-γ̂_ℓ(X_t)].
This estimator is the sum of a plug-in term and a bias correction term that is
motivated by the bias correction described above.
To estimate the asymptotic variance for each θ̂_ℓ we trim the estimator of the debiasing function to obtain α̃_ℓ(X)=τ_n(α̂_ℓ(X)) where
τ_n(a)=1(|a|<τ̅_n)a+1(|a|≥τ̅_n)sgn(a)τ̅_n
and τ̅_n is a large positive constant that grows with n.
The purpose of this trimming is guarantee consistency of the asymptotic variance estimator when we only have mean square convergence rates for the estimators of the regression and the debiasing function.
The asymptotic variance of √(N)(θ̂_ℓ-θ_0) can be estimated as
V̂_ℓ =ŝ_ℓ m^2+ŝ_ℓα^2,
ŝ_ℓ m^2 =1/N∑_i=1^N{m(Z_i,γ̂_ℓ)-m̅_ℓ}^2,
m̅_ℓ =1/N∑_i=1^Nm(Z_i,γ̂_ℓ)
ŝ_ℓα^2 =1/T_ℓ∑_t∈ I_ℓα̃_ℓ(X_t)^2[Y_t-γ̂_ℓ(X_t)]^2.
A single bias corrected estimator and asymptotic variance estimator can then
be obtained by a weighted average of the estimators across the sample splits,
θ̂=∑_ℓ=1^LT_ℓ/Tθ̂_ℓ,V̂=∑_ℓ=1^LT_ℓ/TV̂_ℓ.
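The following schematic sketch assembles these quantities. It is only an illustration: fit_gamma and fit_alpha stand for any regression and debiasing-function learners trained on the out-of-fold training data, m is the (vectorized) linear functional, and the trimming constant is an arbitrary placeholder.

import numpy as np

def cross_fit_debiased(Y, X, Z, m, fit_gamma, fit_alpha, L=5, trim=1e3):
    # Debiased estimate of theta_0 = E[m(Z, gamma_0)] with L-fold cross-fitting.
    # Y, X: training outcomes and regressors; Z: field data.
    T, N = len(Y), len(Z)
    folds = np.array_split(np.random.permutation(T), L)
    theta, V = 0.0, 0.0
    for idx in folds:
        out = np.setdiff1d(np.arange(T), idx)        # observations not in I_l
        gamma = fit_gamma(Y[out], X[out])            # regression learner
        alpha = fit_alpha(X[out], Z)                 # debiasing-function learner
        m_vals = m(Z, gamma)                         # plug-in term from field data
        resid = Y[idx] - gamma(X[idx])
        a = alpha(X[idx])
        theta_l = m_vals.mean() + (a * resid).mean() # plug-in + bias correction
        a_trim = np.clip(a, -trim, trim)             # trimming tau_n
        V_l = m_vals.var() + ((a_trim * resid) ** 2).mean()
        w = len(idx) / T                             # weight T_l / T
        theta += w * theta_l
        V += w * V_l
    return theta, V, np.sqrt(V / N)                  # estimate, variance, std. error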
An estimator α̂_ℓ(x) of the α_0(x) from equation
(<ref>) is needed for each θ̂_ℓ. Estimators
can be constructed by replacing the expectations of -2m(Z,α) and
α(X)^2 in equation (<ref>) by respective sample averages
over field and training data and minimizing over α in some feasible,
approximating class of functions. A penalty term or terms can be added to
regularize when the feasible class of functions is high dimensional.
We construct α̂_ℓ(x) using a feasible class of functions that
are linear combinations of a dictionary b(x)=(b_1(x),...,b_J(x))^' of approximating functions. We replace the expectation in equation
(<ref>) by sample averages over the test and training data
respectively and minimize a penalized version of the objective function over
all linear combinations of b(x). To describe this α̂_ℓ(x)
let
M̂_j=1/N∑_i=1^Nm(Z_i,b_j), M̂=(M̂_1,...,M̂_J)^', Q̂_ℓ=1/(T-T_ℓ)∑_t∉ I_ℓb(X_t)b(X_t)^'.
We consider α̂_ℓ(x)=b(x)^'ρ̂_ℓ where
ρ̂_ℓ =min_ρ{-2/N∑_i=1^N m(Z_i,b^'ρ)+1/(T-T_ℓ)∑_t∉ I_ℓ[b(X_t)^'ρ]^2+r∑_j=1^J|ρ_j|}
=min_ρ{-2M̂^'ρ+ρ^'Q̂_ℓρ+r∑_j=1^J|ρ_j|}.
Here ρ̂_ℓ minimizes an objective function where the linear term
is obtained from the test data, the quadratic term from the training data, and
an absolute value penalty is included. This estimator differs from that of
<cit.> in the linear and quadratic terms coming
from different data.
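One simple way to compute ρ̂_ℓ in practice is proximal gradient descent (ISTA) applied to the penalized objective above. The sketch below is illustrative only: the step size, the iteration count, and the assembly of the dictionary matrices are hypothetical choices, shown here for the covariate-shift functional m(z,γ)=γ(z).

import numpy as np

def fit_rho(M_hat, Q_hat, r, n_iter=5000):
    # minimize -2 M'rho + rho'Q rho + r * ||rho||_1 by proximal gradient (ISTA)
    rho = np.zeros(len(M_hat))
    step = 1.0 / (2.0 * np.linalg.eigvalsh(Q_hat)[-1] + 1e-12)
    for _ in range(n_iter):
        grad = -2.0 * M_hat + 2.0 * Q_hat @ rho              # smooth part
        z = rho - step * grad
        rho = np.sign(z) * np.maximum(np.abs(z) - step * r, 0.0)  # soft-threshold
    return rho

def fit_alpha_shift(B, BZ, r):
    # B: (T - T_l) x J dictionary matrix on out-of-fold training data,
    # BZ: N x J dictionary matrix on field data, r: penalty level.
    Q_hat = B.T @ B / B.shape[0]
    M_hat = BZ.mean(axis=0)          # M_hat_j = (1/N) sum_i b_j(Z_i) for m(z,gamma)=gamma(z)
    rho = fit_rho(M_hat, Q_hat, r)
    return lambda Bx: Bx @ rho       # alpha_hat evaluated on rows of b(x)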
§.§ Estimation with Lasso Regression and No Cross-fitting
The cross fitting used to construct the estimator θ̂ requires
repeatedly reusing the training data. In particular the debiasing requires
reusing the training data to estimate γ̂_ℓ and α̂_ℓ from all observations not in I_ℓ and to average α̂_ℓ(X_i)[Y_i-γ̂_ℓ(X_i)] over observations in
I_ℓ for each split ℓ. When a single Lasso regression estimator
γ̂ is used, based on all the training data, it is possible to do
the bias correction without sample splitting and so reduce greatly the
requirement to reuse the training data. The only training data items needed
for the bias correction will be the second moment matrix of the Lasso
dictionary and the average product of the Lasso dictionary with the Lasso
residuals Y_i-γ̂(X_i).
To describe the estimator let b(x) be the same J×1
dictionary of functions used for construction of α̂ and let r>0 denote a penalty degree. The Lasso regression estimator
from the training data is
γ̂(x)=b(x)^'β̂, β̂=min_β1/T∑_t=1^T[Y_t-b(X_t)^'β]^2+2r∑_j=1^J|β_j|.
A corresponding α̂ can be constructed using the Lasso estimator
describe in Section <ref> without cross-fitting. Let
M̂_j=1/N∑_i=1^Nm(Z_i,b_j), M̂=(M̂_1,...,M̂_J)^', Q̂=1/T∑_t=1^Tb(X_t)b(X_t)^'.
A Lasso estimator α̂ can be obtained as α̂(x)=b(x)^'ρ̂ where
ρ̂=min_ρ{-2M̂^'ρ+ρ^'Q̂ρ+2r_α∑_j=1^J|ρ_j|}.
A debiased machine learner without cross-fitting is
θ̂ =1/N∑_i=1^Nm(Z_i,γ̂)+1/T∑_t=1^Tα̂(X_t)[Y_t-γ̂(X_t)]
=1/N∑_i=1^Nm(Z_i,γ̂)+ρ̂^'[1/T∑_t=1^Tb(X_t){Y_t-γ̂(X_t)}].
From equations (<ref>) and (<ref>) we see that the only features of the training data needed to
construct this estimator are the second moment matrix Q̂ of the
dictionary and the cross product ∑_t=1^Tb(X_t){ Y_t
-γ̂(X_t)} between the observations on the dictionary
b(X_t) and the Lasso residuals Y_t-γ̂(X_t).
An asymptotic variance estimator without cross-fitting is
V̂=1/N∑_i=1^N{m(Z_i,γ̂)-m̅}^2+1/T∑_t=1^Tα̃(X_t)^2[Y_t-γ̂(X_t)]^2,m̅=1/N∑_i=1^Nm(Z_i,γ̂),
where α̃(X_t)=τ_n(α̂(X_t)).
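A sketch of the resulting computation, emphasizing that only training-data summaries enter the bias correction; the matrices B and BZ of dictionary values and the coefficient vectors are hypothetical inputs, shown for the covariate-shift functional m(z,γ)=γ(z).

import numpy as np

def debiased_no_crossfit(B, Y, BZ, beta_hat, rho_hat):
    # B, Y: training dictionary matrix (T x J) and outcomes
    # BZ: field-data dictionary matrix (N x J)
    # beta_hat: Lasso coefficients, gamma_hat(x) = b(x)'beta_hat
    # rho_hat: debiasing coefficients, alpha_hat(x) = b(x)'rho_hat
    resid = Y - B @ beta_hat
    cross = B.T @ resid / len(Y)       # (1/T) sum_t b(X_t)(Y_t - gamma_hat(X_t))
    plug_in = (BZ @ beta_hat).mean()   # (1/N) sum_i gamma_hat(Z_i)
    return plug_in + rho_hat @ cross   # plug-in + bias correction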
§ ASYMPTOTIC THEORY
In this Section we show asymptotic normality of the cross-fit estimator under weak regularity conditions and state a result on asymptotic normality of the Lasso estimator without cross-fitting.
The first condition is the following one:
a) m(Z,γ) is linear in γ; b) E[m(Z,γ)^2]≤
C‖γ‖ _2^2; c) α_0(X) is bounded and
Var(Y|X) is bounded, d) (Z_1,...,Z_N) and W_1,...,W_T are i.i.d.
and mutually independent.
This condition requires that a) the object of interest be a linear functional of a regression; b) the functional m(Z,γ) be mean square continuous in γ; c) that the debiasing function α_0(X) and the conditional variance Var(Y|X) are bounded; and d) the training and test samples are i.i.d. and mutually independent.
For each ℓ the learners γ̂_ℓ and α̂_ℓ satisfy
a) ‖γ̂_ℓ-γ_0‖_2 =o_p(1) and ‖α̂_ℓ-α_0‖_2=o_p(1),
b) ‖α̂_ℓ-α_0‖_2‖γ̂_ℓ-γ_0‖_2=o_p(1/√(T)).
This condition requires that both γ̂ and α̂ are mean square consistent and that the product of their convergence rates is faster than 1/√(min(N,T)). Under these conditions the estimation of γ_0 and α_0 will not affect the large sample properties of the estimator, as shown by the following result.
If Assumptions 1 and 2 are satisfied then
θ̂=θ_0+1/N∑_i=1^N[m(Z_i,γ_0)-θ_0]+1/T∑_t=1^Tα_0(X_t)[Y_t-γ_0(X_t)]+o_p(min{N,T}^-1/2).
Proof of Lemma 1: For notational convenience we will drop the ℓ subscript
and replace the average over I_ℓ with the average over the entire
training sample while maintaining independence of α̂ and the
training data and γ̂ and the field and training data. Algebra
gives θ̂=1/N∑_i=1^Nm(Z_i,γ_0)+1/T∑_t=1^Tα_0(X_t)[Y_t-γ_0(X_t)]+R_1+R_2
+R_3,
R_1 =1/N∑_i=1^Nm(Z_i,γ̂-γ_0)+1/T∑_t=1^Tα_0(X_t)[γ_0(X_t)-γ̂
(X_t)],
R_2 =1/T∑_t=1^T[α̂(X_t)-α_0(X_t
)][Y_t-γ_0(X_t)],
R_3 =1/T∑_t=1^T[α̂(X_t)-α_0
(X_t)][γ_0(X_t)-γ̂(X_t)].
Define Δ̂:=∫ m(z,γ̂-γ_0)F_0(dz) and note
that by Assumption <ref> we have
Δ̂=∫α_0(x)[γ̂(x)-γ_0(x)]F_0(dx).
Then
R̂_̂1̂ =R̂_̂1̂-Δ̂+Δ̂=R_11+R_12,R_11=1/N∑_i=1^N[m(Z_i,γ̂-γ_0)-Δ̂]
R_12 =1/T∑_t=1^T[α_0(X_t){γ_0
(X_t)-γ̂(X_t)}+Δ̂].
Note that by γ̂ independent of training and field data,
E[α_0(X_t){γ_0(X_t)-γ̂(X_t)}|γ̂]=Δ̂,E[m(Z,γ̂-γ_0)|γ̂
]=Δ̂.
Also by Assumption 1 and α_0(X_t) bounded, |Δ̂|≤ C‖γ̂-γ_0‖
_2p⟶0,
E[m(Z_i,γ̂-γ_0)^2|γ̂] ≤ C‖γ̂-γ_0‖ _2^2p⟶0,
E[α_0(X_t)^2{γ_0(X_t)-γ̂(X_t)}^2
|γ̂] ≤ C‖γ̂-γ_0‖
_2^2p⟶0.
Then taking conditional expectations,
E[R_11^2|γ̂]≤C/N‖γ̂-γ_0‖ _2^2=o_p(1/N),E[R_12^2|γ̂]≤C/T‖γ̂-γ_0‖ _2^2=o_p(1/T).
It follows by the conditional Markov inequality that
R_11=o_p(1/√(N)),R_12=o_p(1/√(T)).
Next, note that
E[{α̂(X)-α_0(X)}{Y-γ_0(X)}|α̂]
=0,
E[{α̂(X)-α_0(X)}^2{Y-γ_0(X)}^2|α̂] =E[{α̂(X)-α_0(X)}^2Var(Y|X)]
≤ C‖α̂-α_0‖ _2^2
p⟶0.
It then follows, similarly to R_12=o_p(1/√(T)), that R_2
=o_p(1/√(T)). Also,
E[|R_3||α̂,γ̂] ≤ E[(α̂(X)-α
_0(X))(γ_0(X)-γ̂(X))]
≤‖α̂-α_0‖_2‖γ̂-γ_0
‖_2=o_p(1/√(T)),
so R_3=o_p(1/√(T)) by the conditional Markov inequality. The
conclusion then follows by the triangle inequality. Q.E.D.
Asymptotic normality of the cross-fit estimator follows from Lemma 1 and the central limit theorem.
We next state a result for the Lasso estimator without cross-fitting that follows similarly to Corollary 9 of Bradic et al. <cit.>. For brevity we omit here a detailed statement of the conditions and the proof.
If Assumptions 1 and 2 are satisfied, N/T converges to ξ with 0<ξ<1, and the assumptions of Corollary 9 of Bradic et al. <cit.> are satisfied, then √(N)(θ̂-θ_0) converges in distribution to N(0,V), where V=Var(m(Z_i,γ_0)) + ξE[α_0(X_t)^2Var(Y_t|X_t)].
§ SIMULATION AND RESULTS
We evaluate the proposed method on a regression problem using a Monte-Carlo simulation. Along with providing useful analysis, this provides a simple case study of how the bias correction could be used in practice.
§.§ Simulation Description
We use a Monte Carlo simulation described by a high-dimensional, high-order multivariate polynomial as follows.
g(𝐮) = α_0 + α_1 ∑_k=1^Kβ_1k u_k + α_2 (∑_k=1^Kβ_2k u_k )^2 + ... + α_Q (∑_k=1^Kβ_Qk u_k )^Q
The polynomial is of order Q in K dimensions with cross terms, and defined by α and β, where 𝐮 is a K dimensional vector. The training data is denoted by the random variable X_t, which is also a K dimensional vector. The output of the simulation for an observation from the training data is then described by:
Y_t = g(X_t) + ϵ_t
Here ϵ is zero-centered normally distributed noise with standard deviation σ. Additionally, we can describe the true data curve simply as γ_0(X) = g(X).
For this simulation, both training data (denoted X) and validation data (denoted V) are drawn from a normal distribution with mean zero and standard deviation of one in each dimension. They are combined with a low weight uniform distribution that extends from -5 to +5 in each dimension. In order to represent the effects of distribution shifts, the test data (denoted Z) is drawn from similar distributions but with a normal distribution with a shifted mean.
The simulation outputs are scaled by a constant so that they stay near the range of -1 to 1. The standard deviation, σ, of the noise term ϵ is set to 0.1. The simulation uses a 3rd order polynomial with 6 dimensions. The distribution shift for z used here is 1.1 times σ.
Multiple specifications of the simulation are created by randomizing the simulation parameters (β), with some of the elements set to be near zero, resulting in about 60% sparsity. Additionally, for ease of visualization, the β parameters in the first dimension of the simulation are multiplied by about 1.71 relative to the rest of the parameters, and the last dimension is multiplied by about 0.29.
We evaluate over many samples and also vary the simulation specification. For each retraining of our network, a new sample of 10,000 observations of x and 10,000 observations of z is generated. For each simulation specification, the network is retrained over 60 different samples. In total, 30 specifications are used, resulting in 1,800 iterations of the simulation.
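To make the setup concrete, the following sketch (our own; the exact randomization of α and β, the near-zero scaling, and the weight of the uniform component are illustrative assumptions rather than the paper's precise choices) generates data of this form.

import numpy as np

def make_simulation(K=6, Q=3, T=10_000, N=10_000, sigma=0.1, shift=1.1 * 0.1, seed=0):
    rng = np.random.default_rng(seed)
    alpha = rng.normal(size=Q + 1)
    beta = rng.normal(size=(Q, K))
    beta[:, rng.random(K) < 0.6] *= 0.01     # roughly 60% of coefficients near zero
    beta[:, 0] *= 1.71                       # heavier first dimension
    beta[:, -1] *= 0.29                      # lighter last dimension

    def g(U):                                # the polynomial g of order Q with cross terms
        out = np.full(U.shape[0], alpha[0])
        for q in range(1, Q + 1):
            out += alpha[q] * (U @ beta[q - 1]) ** q
        return out / 10.0                    # crude scaling towards the range [-1, 1]

    def draw(size, mean=0.0):                # normal draws plus a low-weight uniform on [-5, 5]
        U = rng.normal(loc=mean, scale=1.0, size=(size, K))
        mask = rng.random(size) < 0.05
        U[mask] = rng.uniform(-5.0, 5.0, size=(mask.sum(), K))
        return U

    X, Z = draw(T), draw(N, mean=shift)      # the test data z has a shifted mean (1.1 sigma)
    Y = g(X) + sigma * rng.normal(size=T)
    return X, Y, Z, g

X, Y, Z, g = make_simulation()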
§.§ Bias Correction Applied To This Problem
Our aim is to estimate γ_0(X_t) with γ̂(X_t) obtained by minimizing the sum of squares of γ(X_t) - Y_t using least squares regression with regularization. We will construct the regression estimator, γ̂, using a neural network.
We then take the expected value of the outputs over a large number of values for both X and Z.
We use a relatively simple network model utilizing a fully connected neural network, sometimes called an MLP (multilayer perceptron) or ANN (artificial neural network) <cit.>. The network has 4 hidden layers, each with 32 nodes, and ReLU activation. In total, we have about 3,000 learnable parameters.
For this processing we used Matlab's trainNetwork with the featureInputLayer and fullyConnectedLayer functions. We used a learning rate of 0.01, a mini-batch size of 1024, and up to 500 epochs of training time. Additionally, the neural network is L2 (i.e., Tikhonov) <cit.> regularized with a regularization parameter of 0.0002.
The bias correction can be applied to this simulation. We utilize the dictionary vector b(x) (from (<ref>)) that is J polynomial functions of order 2 described by
b(x_t) = [1, x_1t, x_2t, ..., x_1t x_1t, x_1t x_2t, ..., x_Kt x_Kt]'
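A short sketch of this order-2 dictionary (the column ordering and the function name are arbitrary choices of ours):

import numpy as np

def dictionary(U):
    """Map an (n, K) sample to the (n, J) matrix whose rows are b(u)'."""
    n, K = U.shape
    cols = [np.ones(n)]                                   # constant term
    cols += [U[:, k] for k in range(K)]                   # x_1, ..., x_K
    cols += [U[:, j] * U[:, k]                            # all products x_j x_k, j <= k
             for j in range(K) for k in range(j, K)]
    return np.column_stack(cols)

B = dictionary(np.random.default_rng(1).normal(size=(5, 6)))
print(B.shape)    # (5, 28) for K = 6: 1 + 6 + 21 columns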
Using (<ref>), we first calculate ρ̂ from:
ρ̂ =min_ρ{-2M̂^'ρ+ρ^'Q̂ρ+r∑_j=1^J|ρ_j|}
We take m from (<ref>) to be the mean of the data Z so that:
M̂_j=1/N∑_i=1^N b_j(Z_i),M̂=(M̂_1,...,M̂_J)^',Q̂=1/T
∑_t=1^Tb(X_t)b(X_t)^'.
For the cross-fitting data, we simply use a second sample from the simulation and create a second polynomial expansion, which we denote V here. The bias-corrected mean of γ̂(Z) is therefore:
θ̂ =1/N∑_i=1^Nγ̂(Z_i)+ρ̂^'[
1/T_v∑_t=1^T_vb(V_t){ Y_t-γ̂(V_t)}]
where T_v is the number of observations in the second sample.
§.§ Results
In order to evaluate algorithm effectiveness, we estimate the algorithm bias for each simulation replication by averaging, over the 60 samples (with network retraining), the difference between the estimated function γ̂(Z) and the average of a large set of 1 million independently drawn observations of γ_0(Z) + ϵ. The true bias changes with each specification; therefore, to measure the bias, we calculate the root mean square of the biases across specifications. Additionally, we estimate the root mean square error between γ̂(Z) and γ_0(Z) + ϵ.
In Figure <ref>, we plot the result as a function of training time. We can see right away how the bias correction lowers the expected average bias consistently across the data. The bias correction has the largest effect at lower training times. This can be explained by the increased bias due to increased regularization, since lower training times correspond to early-stopping regularization.
Figure <ref> also includes error bars which show the estimated standard deviation of the root mean square of the bias estimate and the standard deviation of the RMSE. As the two curves in each plot are everywhere about 3 or more standard deviations apart, the difference between the bias-corrected estimate and the original estimate is generally significant at a confidence level above 99.5%.
Inspecting the plots closely, we see that the bias-corrected estimate provides essentially the same RMSE at 100 epochs as at 500 epochs, and much more consistency across all of the epochs. This indicates both that the required training time can be lowered and that less precision is required in determining the best training time.
We also note that the bias correction is shown to provide benefit on average across specifications. For a given specification, the benefit may be negligible or even negative or may be much larger than is shown.
§.§ Discussion
While a variety of possible cross fitting methods can work, the cross-fitting we implemented was done for ease of use in the following manner: The cross-fit data which we will call v is an entirely new sample from the same distribution as x. This represents well the case where we have enough data to have both very large training and validation sets.
It also is interesting to note that we observed during our processing that cross-fitting was generally not necessary for this specific problem. Therefore, if we ignore the need for cross-fitting, we can get an alternative description of the debiasing function that applies under the simple assumptions used in the bias correction for this simulation.
Consider that we have a set of functions b(x) for which we are finding the weights ρ given a list of residuals.
Then given that Q̂ in equation <ref> is symmetric, the first order conditions to equation <ref> are a regularized solution of the relationship:
M̂ = Q̂ρ̂
Another solution technique for M̂ = Q̂ρ̂ would be to solve for ρ̂ using a generalized inverse of Q̂. We can re-write our variables in matrix notation:
𝐁=[b(X_1),...,b(X_T)]'
𝐲 = [Y_1,..., Y_T]'
γ̂ = [γ̂(X_1),..., γ̂(X_T)]'
Thus we can write this generalized inverse as 1/TQ̂^+ = [ B' B]^+. Without cross-fitting, we then can rewrite (<ref>), using α̂(x)=b(x)^'ρ̂, as
1/N∑_i = 1^Nγ̂(Z_i) + [1/N∑_i = 1^Nb(Z_i)]'[ B' B]^+ B' [ 𝐲 - γ̂]
Inspecting this equation, we can set ϕ̂ = [ B'B]^+ B' [ 𝐲 - γ̂] to be the coefficients of a least squares regression of the residuals on B, with the caveat that Q̂ is likely not invertible and thus must be solved using regularization (e.g. a pseudo-inverse on Q̂ or Lasso regression on the whole equation). Then the debiased estimator is 1/N∑_i = 1^N [γ̂(Z_i) + b(Z_i)' ϕ̂], where the bias correction is now the coefficients ϕ̂ on the residuals applied to the test data dictionary b(Z_i).
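As a small illustration, this shortcut takes only a few lines (our own sketch; the rcond truncation of the pseudo-inverse stands in for the regularization mentioned above):

import numpy as np

def debiased_mean_pinv(B_train, resid_train, B_test, gamma_hat_test, rcond=1e-6):
    # phi_hat = (B'B)^+ B'(y - gamma_hat), then average gamma_hat(Z_i) + b(Z_i)' phi_hat
    phi = np.linalg.pinv(B_train.T @ B_train, rcond=rcond) @ (B_train.T @ resid_train)
    return np.mean(gamma_hat_test + B_test @ phi)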
The test provided here was evaluated with a few different algorithms and, perhaps surprisingly, we found that with appropriately tuned regularization parameters the debias function worked equally well regardless of the method used.
§ CONCLUSIONS
In this paper we have provided debiased machine learning estimators of parameters when there is a covariate shift. We developed a method to debias functionals of trained machine learning algorithms under covariate shift. With cross-fitting the methods were shown to be asymptotically normal as the sample size grows for a variety of regression learners, including neural nets and Lasso.
For Lasso regression it was shown that cross-fitting was not needed for these results.
We evaluated the cross-fit method in a relatively simple simulation generated using polynomial coefficients and a neural network with explicit regularization for fitting. The results strongly indicated that the debiased machine learner described here effectively removes the bias.
A significant caveat to consider is that the proposed methodology requires averaging over a reasonably large z sample, and a substantial amount of z data may be required for accurate results. Importantly, without enough averaging, the noise in the bias correction would likely make the results look worse. Because of this, the proposed methodology requires additional computation, which may become significant depending on the dataset sizes.
We believe these results show promise, and in future work these methods could be extended to other settings, for example 'counterfactual averages' or data classification.
|
http://arxiv.org/abs/2307.05345v1 | 20230711153403 | A generalization of Floater--Hormann interpolants | [
"Woula Themistoclakis",
"Marc Van Barel"
] | math.NA | [
"math.NA",
"cs.NA",
"41A05 (Primary) 41A20, 41A25 (Secondary)"
] |
A generalization of Floater–Hormann interpolants
Woula ThemistoclakisC.N.R. National
Research Council of Italy,
IAC Institute for Applied Computing “Mauro Picone”, via P. Castellino, 111, 80131 Napoli, Italy.
[email protected]
Marc Van BarelKU Leuven, Department of Computer Science, KU Leuven,
Celestijnenlaan 200A,
B-3001 Leuven (Heverlee), Belgium. [email protected].
This author was supported by the Research Council KU Leuven, C1-project C14/17/073 and by the Fund for Scientific Research–Flanders (Belgium), EOS Project no 30468160 and project G0B0123N.
August 12, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper the interpolating rational functions introduced by Floater and Hormann are generalized leading to a whole new family of rational functions depending on γ, an additional positive integer parameter. For γ = 1, the original Floater–Hormann interpolants are obtained. When γ>1 we prove that the new rational functions share a lot of the nice properties of the original Floater–Hormann functions. Indeed, for any configuration of nodes, they have no real poles, interpolate the given data, preserve the polynomials
up to a certain fixed degree, and have a barycentric-type representation.
Moreover, we estimate the associated Lebesgue constants in terms of the minimum (h^*) and maximum (h) distance between two consecutive nodes. It turns out that,
in contrast to the original Floater-Hormann interpolants, for all γ > 1 we get
uniformly bounded Lebesgue constants in the case of equidistant and quasi-equidistant nodes configurations (i.e., when h∼ h^*). In such cases, we also estimate the uniform and the pointwise approximation errors for functions having different degree of smoothness.
Numerical experiments illustrate the theoretical results and show a better error profile
for less smooth functions compared to the original Floater-Hormann interpolants.
§ INTRODUCTION
In this paper we consider the problem of interpolating a function f(x) on a finite interval [a,b], given its values f(x_i) in
n+1 nodes a = x_0 < x_1< ⋯ < x_n-1 < x_n=b.
Without loss of generalization the interval [-1,1] can be taken.
If one can choose the position of the nodes x_i in the interval, an analytic function can be approximated by polynomials interpolating
at the Chebyshev points of the first or second kind, leading to exponential convergence.
The speed of convergence is determined by the largest Bernstein ellipse that can be taken inside the domain in which the function is analytic.
For differentiable functions the convergence is algebraic where the speed of convergence is determined by the smoothness
of the function. For further details we refer the reader to the book of Trefethen <cit.>.
If the nodes can not be freely chosen, the problem becomes much harder. E.g., when the nodes are equidistant in the interval
[-1,+1] and we want to approximate the function 1/(1+25x^2), the Runge phenomenon is exhibited and the approximation
error becomes very large in the neighbourhood of the endpoints of the interval when the number of nodes increases.
In the recent paper <cit.> Huybrechs and Trefethen compare several methods to approximate a function when the nodes are equidistant.
One of these methods uses the Floater–Hormann (briefly FH) interpolating rational functions <cit.>.
This method is valid for any configuration of the nodes but turns out to be very useful for equidistant configurations.
Generalizing Berrut interpolants <cit.> Floater and Hormann introduced a blended form of interpolating polynomials of fixed degree 1≤ d≤ n, leading to a rational function of the form <cit.>
r(x) = ∑_i=0^n-dλ_i(x) p_i(x)/∑_i=0^n-dλ_i(x)
where, for all i=0,…, (n-d),
λ_i(x) = (-1)^i/(x-x_i)(x-x_i+1)… (x-x_i+d),
and p_i(x) denotes the unique polynomial of degree ≤ d interpolating f at the (d+1) points x_i<x_i+1<…<x_i+d.
Floater and Hormann proved that the approximant r(x) has no poles on the real line, it coincides with f on the set of nodes
X_n={x_0,x_1,…,x_n}, and, at any x∉X_n, it admits a barycentric representation allowing efficient and stable computations. Moreover, regardless of the distribution of nodes, the FH approximation error behaves as follows <cit.>
r-f _∞ = Ø(h^d+1), ∀ f ∈^d+2([a,b])
with h = max_1≤ i≤ n (x_i+1-x_i).
Therefore, the FH interpolants, which for d=0 reduce to Berrut interpolants, are able to produce arbitrarily high approximation orders provided that the parameter d is large enough. However, in the important case of equidistant or quasi–equidistant nodes, it has been proved that the Lebesgue constants of FH interpolants grow logarithmically with n but exponentially with d <cit.>. Hence, increasing d too much is not always advisable.
For an overview of linear barycentric rational interpolation, we refer the interested reader to the paper of Berrut and Klein
<cit.>. This overview also describes a generalization of the FH interpolant developed
by Klein <cit.> in the case of equidistant nodes.
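Before describing the generalization, the following sketch (our own, not code from the cited works) evaluates the FH interpolant directly in the blended form above, building each p_i by a local polynomial fit; the random evaluation points are an assumption that keeps x off the nodes.

import numpy as np

def fh_interpolant(x_nodes, f_vals, d, x):
    """Evaluate the FH interpolant with parameter d at points x assumed off the nodes."""
    n = len(x_nodes) - 1
    num, den = np.zeros_like(x), np.zeros_like(x)
    for i in range(n - d + 1):
        xs, fs = x_nodes[i:i + d + 1], f_vals[i:i + d + 1]
        p_i = np.polyval(np.polyfit(xs, fs, d), x)               # local interpolating polynomial
        lam_i = (-1.0) ** i / np.prod([x - xj for xj in xs], axis=0)
        num, den = num + lam_i * p_i, den + lam_i
    return num / den

x_nodes = np.linspace(-1.0, 1.0, 21)                             # equidistant nodes
runge = lambda t: 1.0 / (1.0 + 25.0 * t ** 2)
x = np.sort(np.random.default_rng(0).uniform(-1.0, 1.0, 1000))
print(np.max(np.abs(fh_interpolant(x_nodes, runge(x_nodes), d=3, x=x) - runge(x))))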
In this paper we are also going to generalize the method of Floater and Hormann but in a different way.
For any distribution of nodes, we define a whole family of linear rational approximants, denoted by r̃(x), that depend, besides d, on an additional parameter γ∈ = {1,2,3,…}. When γ = 1, r̃(x) reduces to the original FH interpolant r(x).
Similarly to the latter, we show that any r̃(x) also has no real poles, interpolates the data, and has a barycentric-type
expression. However, in the case of equidistant or quasi-equidistant configurations of nodes, we find some novelty by taking γ>1.
In particular, we prove that for equidistant or quasi-equidistant nodes, the Lebesgue constants corresponding to any 1≤ d≤ n and γ>1 are uniformly bounded w.r.t. both n and γ, even if always exponentially growing with d.
As concerning the approximation rate, for equidistant or quasi-equidistant nodes, we show that
r̃-f _∞ = Ø(h^d), ∀ f ∈^d([a,b]),
holds for all integer parameters 1≤ d≤ n and γ>d+1. Hence, generalized FH interpolants also provide arbitrarily high convergence rates. Moreover, making the comparison with (<ref>), we prove that the generalized FH functions r̃(x), with parameters 1≤ d≤ n and γ>d+2, can also reach the convergence order Ø(h^d+1), but assuming only that f∈ C^d+1([a,b]) instead of f∈ C^d+2([a,b]).
In addition, we estimate the error of generalized FH interpolants also for functions with a non integer smoothness degree. More precisely, in the class Lip_α ([a,b]) of Hölder continuous functions with exponent 0<α≤ 1, we prove that
r̃-f _∞ = Ø(h^α), ∀ f ∈ Lip_α([a,b]), 1≤ d≤ n, γ>α +1
Moreover, denoted by C^s,α([a,b]) the class of functions that are s-times continuously differentiable and have the s–th derivative Hölder continuous with index 0<α≤ 1, the generalized FH function r̃(x), for all integers 1≤ d≤ n, satisfies
r̃-f _∞ = Ø(h^d+α), ∀ f ∈ C^d,α([a,b]), ∀γ>α+d +1
r̃-f _∞ = Ø(h^d+1+α), ∀ f ∈ C^d+1,α([a,b]), ∀γ>α+d +2
Finally, we consider the pointwise error and show that, even in the case of almost everywhere continuous functions f with some isolated discontinuities, the approximation orders displayed in (<ref>)–(<ref>) are locally preserved and continue to hold in all compact subinterval I⊂ [a,b] where f has the described smoothness degree (i.e., f ∈ C^d(I), f ∈ Lip_α(I), f ∈ C^d,α(I), and f ∈ C^d+1,α(I), resp.)
The numerical experiments confirm the theoretical estimates and show that the generalized FH interpolants, in comparison with the original ones,
exhibit a much better error profile when the function is less smooth.
The paper is organized as follows. In Section <ref> the generalized FH rational interpolants are presented.
In Section <ref> we state and prove several properties of these new interpolants which are similar to those of the original
FH interpolants.
In Section <ref> we focus on the case of equidistant or quasi-equidistant nodes and estimate the behaviour of the associated Lebesgue constants. This section concludes with two subsections: in the former the proof and the necessary technical lemmas are given, in the latter a useful related result is stated. In Section <ref> all the error estimates are given.
In Section <ref> we illustrate several numerical examples comparing generalized and original FH approximants. Finally,
Section <ref> gives the conclusion of our paper.
§ GENERALIZED FLOATER–HORMANN INTERPOLANTS
Let
a=x_0<x_1<… < x_n-1<x_n=b, n∈,
be any sequence of nodes where we assume the function f:[a,b]→ has been sampled.
The generalized FH approximation of f is defined very similarly to (<ref>) and (<ref>).
For any integer 0≤ d≤ n, it is also a blended form of the polynomial interpolants p_i of degree at most d.
However, the blending functions depend on an additional parameter γ∈ and are defined as follows
_i(x) = (-1)^i γ/(x-x_i)^γ(x-x_i+1)^γ… (x-x_i+d)^γ, i=0,…, n-d.
Hence, for arbitrarily fixed γ∈ and d∈{0,…, n}, the generalized FH approximation of f is given by
r̃(x) = ∑_i=0^n-d_i(x) p_i(x)/∑_i=0^n-d_i(x), x∈
where _i(x) is defined in (<ref>) and p_i(x) is the polynomial of degree ≤ d interpolating f at the (d+1) nodes
x_i<x_i+1<…<x_i+d.
We point out that the function r̃(x) depends on f, on n, on the nodes (<ref>), and on two integer parameters: 0≤ d≤ n and γ≥ 1. Sometimes we also use the notation r̃(f,x)=r̃(x) and r̃_d(f,x)=r̃(x) in order to highlight the dependence on f and d.
Note that the original FH interpolants are a special case of the generalized ones with parameter γ = 1 (compare (<ref>) and (<ref>)).
If we multiply both the numerator and denominator of (<ref>) by the polynomial
π(x)=(-1)^γ (n-d)∏_k=0^n(x-x_k)^γ,
then we get
r̃(x)=∑_i=0^n-dμ̃_i(x)p_i(x)/∑_i=0^n-dμ̃_i(x)
where we set
μ̃_i(x)=_i(x)π(x)=∏_k=0^i-1(x-x_k)^γ∏_k=i+d+1^n (x_k-x)^γ
being understood that ∏_k=n_1^n_2a_k=1 whenever the product is empty, i.e., if n_1>n_2.
Equation (<ref>) yields the generalized FH approximant r̃(x) as a quotient of two polynomials
P(x)=∑_i=0^n-dμ̃_i(x)p_i(x)
Q(x)=∑_i=0^n-dμ̃_i(x)
where the maximum degree is γ (n-d)+d at the numerator and γ(n-d) at the denominator, i.e., for all γ∈, r̃(x) is a rational function of type (γ n- (γ -1)d, γ n-γ d).
§ PROPERTIES
In this section, we are going to prove that, for any choice of the integer parameter γ>1, the generalized FH function r̃(x) has similar properties to the original r(x).
§.§ No poles on the real line
In the case γ=1 the polynomials μ̃_i(x) defined in (<ref>) were investigated by Floater and Hormann in <cit.>. Using their result, the following lemma can be easily deduced:
Let n∈, d∈{0,1,…, n}, and i∈{0,1,…,n-d} be arbitrarily fixed.
For any γ∈ that is even, and ∀ x∈, we have
μ̃_i(x){[ =0 x∈{x_k : 0≤ k< i ⋁ i+d<k≤ n}; >0 ].
If γ∈ is odd, we distinguish the following cases:
* In the case x∈{x_0,…, x_n} we have
μ̃_i(x_k){[ >0 (k-d)≤ i≤ k; =0 ].
* In the case x∈ [a,b]-{x_0,…, x_n}, if x_ℓ<x<x_ℓ +1 for any ℓ=0,…, n-1, then we have the following
(A) If (ℓ-d+1)≤ i≤ℓ then μ̃_i(x)>0
(B) Suppose ℓ-d≥ 0, for i=0,1,…,(ℓ-d), the sequence μ̃_i(x) has alternate signs, ending with μ̃_ℓ-d(x)>0, and it has increasing absolute values, i.e.,
μ̃_ℓ-d-k(x)> -μ̃_ℓ-d-k-1(x)>0, k=0,2,4,…
(C) For i=(ℓ+1),…, n, the sequence μ̃_i(x) has alternate signs, starting with μ̃_ℓ+1(x)>0, and it has decreasing absolute values, i.e.,
μ̃_ℓ+k(x)> -μ̃_ℓ+k+1(x)>0, k=1,3,5,…
* In the case x<a, the sequence μ̃_i(x), for i=0,…, (n-d), has alternate sign, starting with μ̃_0(x)>0, and it has decreasing absolute values, i.e.,
μ̃_k(x)> -μ̃_k+1(x)>0, k=0,2,4,…
* In the case x>b, the sequence μ̃_i(x), for i=0,…, (n-d), has alternate sign, ending with μ̃_n-d(x)>0, and it has increasing absolute values, i.e.,
μ̃_n-d-k(x)> -μ̃_n-d-k-1(x)>0, k=0,2,4,…
From the previous lemma we deduce the following result that generalizes <cit.>.
For all integers n,γ∈ and 0≤ d≤ n, the generalized FH rational function r̃(x) has no real poles.
Proof of Theorem <ref>. Recalling (<ref>), it is sufficient to prove that
Q(x)=∑_i=0^n-dμ̃_i(x)>0, ∀ x∈.
This follows from (<ref>) in the case that γ∈ is even. If γ∈ is odd, by (<ref>) we get
Q(x_k)=∑_i=max{0, (k-d)}^k μ̃_i(x_k)>0, k=0,1,…,n,
while (<ref>) and (<ref>) imply
Q(x)={[ ∑_k=0,2,4,..., k< n-d[μ̃_k(x)+μ̃_k+1(x)]>0, ∀ x<a; ∑_k=0,2,4,..., k< n-d[μ̃_n-d-k(x)+μ̃_n-d-k-1(x)]>0, ∀ x>b, ].
in the case that (n-d) is odd, and
Q(x)={[ ∑_k=0,2,4,..., k< n-d[μ̃_k(x)+μ̃_k+1(x)]+ μ̃_n-d(x)>0, ∀ x<a; ∑_k=0,2,4,..., k< n-d[μ̃_n-d-k(x)+μ̃_n-d-k-1(x)]+μ̃_0(x)>0, ∀ x>b, ].
when (n-d) is even.
Hence, it remains to prove (<ref>) only in the case that γ∈ is odd and x_ℓ<x<x_ℓ+1 for some ℓ=0,1,…, n-1. In such a case, we write
Q(x) = ∑_0≤ i≤ℓ-dμ̃_i(x)+∑_ℓ-d+1≤ i≤ℓμ̃_i(x)+∑_ℓ+1≤ i≤ nμ̃_i(x)
[.1in]
=: Q_1(x)+Q_2(x)+Q_3(x)
being understood that ∑_n_1≤ i≤ n_2a_i=0 in case of empty summation, i.e., if n_1>n_2.
Finally, we observe that whenever the previous summation Q_i(x) are non empty, they are positive by virtue of Lemma <ref> (cf. (A)–(C)). Since at least one term of the summation Q_i(x) is not empty, we have proven the theorem.
We remark that the previous proof also states that for all n,γ∈, and 1≤ d≤ n, if x∈ ]x_ℓ, x_ℓ+1[, with ℓ =0,…, n-1, then we have
|Q(x)|=|∑_i=0^n-dμ̃_i(x)|≥∑_i∈ I_ℓμ̃_i(x)>μ̃_j(x)>0, ∀ j∈ I_ℓ
where
I_ℓ={i∈{0,…, (n-d)} : ℓ-d+1≤ i≤ℓ}
Note that I_ℓ is a non empty set since d≥1.
Moreover, if we divide all terms in (<ref>) by |π(x)|, with π(x) defined in (<ref>), then, by virtue of (<ref>), we obtain
|∑_i=0^n-d_i(x)|≥∑_i∈ I_ℓ_i(x)>_j(x)>0, ∀ j∈ I_ℓ, ∀ x∈ ]x_ℓ, x_ℓ +1[
§.§ Interpolation of the data
In the case γ=1 it is known that the FH rational function r(x) is equal to f(x) if x is one of the nodes (<ref>). Such interpolation property remains valid for the generalized FH approximation.
For all n,γ∈ and any integer 0≤ d≤ n, we have
r̃(f,x_k)=f(x_k), k=0,1,…, n
Proof of Theorem <ref>
Set
J_k={i∈{0,…, (n-d)} : k-d≤ i≤ k},
k=0,…,n
we note that
p_i(x_k)=f(x_k), ∀ i∈ J_k.
Moreover, by Lemma <ref> we have
μ̃_i(x_k)=0, ∀ i∉J_k
Consequently, we get
r̃(f,x_k)=∑_i=0^n-dμ̃_i(x_k)p_i(x_k)/∑_i=0^n-dμ̃_i(x_k)=
∑_i∈ J_kμ̃_i(x_k)p_i(x_k)/∑_i∈ J_kμ̃_i(x_k)=f(x_k).
§.§ Preservation of polynomials
Similarly to the classical FH interpolants, also the generalized ones reduce to the identity on the set _d of all polynomials of degree at most d.
Indeed, for all f∈_d we have
p_i(x)=f(x), i=0,…, n-d, ∀ x∈ [a,b].
Consequently, from (<ref>) we deduce that
r̃(f,x)=f(x), ∀ x∈ [a,b], ∀ f∈_d,
holds for any n,γ∈, and d∈{0,…, n}.
§.§ Barycentric-type representation
We recall that the classical FH interpolants are rational functions of type (n, n-d), hence they can be expressed in barycentric form.
In this subsection, we give a barycentric-type expression for the generalized FH interpolants defined by
(<ref>) and (<ref>), for any parameter γ∈.
The Lagrange representation for the interpolating polynomial p_i is
p_i(x) = ∑_k=i^i+d f(x_k)∏_s=i,s≠ k^i+dx-x_s/x_k-x_s .
Combining this with (<ref>), we get
_i(x) p_i(x) = (-1)^i γ∏_s=i^i+d1/(x-x_s)^γ[∑_k=i^i+d f(x_k)
∏_s=i,s≠ k^i+d
x-x_s/x_k-x_s]
= (-1)^i γ∑_k=i^i+df(x_k)/(x-x_k)^γ∏_s=i,s≠ k^i+d1/(x_k-x_s)(x-x_s)^γ-1.
Hence, we obtain
∑_i=0^n-d_i(x) p_i(x) = ∑_i=0^n-d (-1)^i γ∑_k=i^i+df(x_k)/(x-x_k)^γ∏_s=i,s≠ k^i+d1/(x_k-x_s)(x-x_s)^γ -1,
and changing the order of the summations, we get
∑_i=0^n-d_i(x) p_i(x) = ∑_k=0^n f(x_k)/(x-x_k)^γ[∑_i ∈ J_k (-1)^iγ∏_j=i,j≠ k^i+d1/(x_k-x_j)(x-x_j)^γ -1]
where we recall J_k = { i ∈{0,1,…,n-d} : k-d ≤ i ≤ k } has been introduced in (<ref>).
Hence, defining
w_k(x) = ∑_i ∈ J_k (-1)^iγ∏_s=i,s≠ k^i+d1/(x_k-x_s)(x-x_s)^γ -1, γ∈
we can write
∑_i=0^n-d_i(x) p_i(x) = ∑_k=0^n f(x_k)/(x-x_k)^γw_k(x).
Similarly, we can derive
∑_i=0^n-d_i(x) = ∑_k=0^n 1/(x-x_k)^γ w_k(x),
that is (<ref>) in the case of the unit function f(x)=1, x∈ [a,b].
In conclusion, the generalized FH interpolant (<ref>) can be expressed in the following barycentric-type form
r̃(x) = ∑_k=0^n w_k(x)/(x-x_k)^γ f(x_k)/∑_k=0^n w_k(x)/(x-x_k)^γ, γ∈
Note that in the case γ=1, this yields the classical barycentric form
r(x) = ∑_k=0^n w_k/(x-x_k)f(x_k)/∑_k=0^n w_k/(x-x_k),
w_k= ∑_i ∈ J_k (-1)^i∏_s=i,s≠ k^i+d1/(x_k-x_s),
where the weights w_k can be computed in advance and where, in the denominators,
we have a factor (x-x_k) instead of (x-x_k)^γ.
To evaluate the new interpolant, in m x-values Ø(m n d^2) FLOPS are needed while for the classical barycentric form
the weights can be computed beforehand using Ø(n d^2) FLOPS and evaluating in m x-values costs an additional
Ø(mn) FLOPS.
Hence, it is more efficient to evaluate the classical FH interpolant in comparison to the new one.
However, the new approximant exhibits better performance with respect to the error, especially for less smooth functions, as
will be shown in the numerical examples.
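The following sketch (our own illustration, not the authors' implementation) evaluates r̃ through this barycentric-type form with the x-dependent weights w_k(x); random evaluation points are used so that x stays off the nodes.

import numpy as np

def gen_fh_eval(x_nodes, f_vals, d, gamma, x):
    """Evaluate the generalized FH interpolant at points x assumed off the nodes."""
    n = len(x_nodes) - 1
    num, den = np.zeros_like(x), np.zeros_like(x)
    for k in range(n + 1):
        w_k = np.zeros_like(x)
        for i in range(max(0, k - d), min(k, n - d) + 1):        # i in J_k
            prod = np.ones_like(x)
            for s in range(i, i + d + 1):
                if s != k:
                    prod /= (x_nodes[k] - x_nodes[s]) * (x - x_nodes[s]) ** (gamma - 1)
            w_k += (-1.0) ** (i * gamma) * prod
        num += w_k / (x - x_nodes[k]) ** gamma * f_vals[k]
        den += w_k / (x - x_nodes[k]) ** gamma
    return num / den

x_nodes = np.linspace(-1.0, 1.0, 33)
f = lambda t: np.abs(t) ** 0.5
x = np.sort(np.random.default_rng(1).uniform(-1.0, 1.0, 2000))
for gamma in (1, 2):                                             # gamma = 1 is the original FH interpolant
    err = np.max(np.abs(gen_fh_eval(x_nodes, f(x_nodes), d=2, gamma=gamma, x=x) - f(x)))
    print(gamma, err)

For γ=1 the weights reduce to the constant w_k of the classical barycentric form, while for γ>1 they depend on x, which is the source of the extra Ø(mnd^2) evaluation cost noted above.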
§ ON THE LEBESGUE CONSTANTS BEHAVIOR
Set for brevity
D(x)=∑_i=0^n-d_i(x)
by (<ref>) and (<ref>) we get
r̃(f, x)=∑_k=0^n f(x_k)w_k(x)/(x-x_k)^γ D(x)
i.e., defining
b_k(x)={[ 1 x=x_k; 0 x∈{x_0,…, x_n}-{x_k}; w_k(x)/(x-x_k)^γ D(x) x∉{x_0,…, x_n} ]. k=0,…, n,
we can write
r̃(f,x)=∑_k=0^n f(x_k) b_k(x), x∈
Hence, the Lebesgue constant and function of the generalized FH interpolants at the nodes (<ref>) are given by
Λ_n=sup_x∈[a,b]|Λ_n(x)|, Λ_n(x)=∑_k=0^n |b_k(x)|.
Their behaviour as n→∞ is an important measure for the conditioning of the problem, being well–known that
|r̃(f,x)-r̃(F,x)|≤Λ_n(x) ϵ, ϵ= max_0≤ k≤ n|f(x_k)-F(x_k)| .
Moreover, using the polynomials preservation (<ref>), it is easy to prove that the Lebesgue constants are also involved in the error estimate
|r̃(f,x)-f(x)|≤ [1 + Λ_n] E_d(f), x∈ [a,b]
where E_d(f) denotes the error of best approximation of f in _d w.r.t. the uniform norm f_∞=sup_x∈ [a,b]|f(x)|, namely
E_d(f)=inf_P∈_df-P_∞.
For classical FH interpolation (γ=1) the Lebesgue constants have been estimated in <cit.> for equidistant nodes and in <cit.> for quasi–equidistant nodes. In both cases, they turn out to grow as log n with n and as 2^d with d.
Here we show that, introducing the additional parameter γ>1, for the generalized FH interpolation we succeed in getting Lebesgue constants uniformly bounded w.r.t. n.
More precisely, setting
h=max_0≤ k<n(x_k+1-x_k), h^*=min_0≤ k<n(x_k+1-x_k)
we have the following
For all n,d,γ∈, with d∈ [1,n] and γ>1, we have
Λ_n(x)≤ d 2^d (h/h^*)^γ+d, ∀ x∈ [a,b],
where >0 is a constant independent of n,h,x,d and γ.
Similarly to the classic FH interpolation, we note that the dependence on d of the Lebesgue constants is exponential for all γ>1 too. Indeed, we conjecture the linear factor d in (<ref>) can be removed, but we were not able to prove it.
An immediate consequence of Th. <ref> is the following
Let n,d,γ∈, with d∈ [1,n] and γ>1. In the case of equidistant nodes (i.e., if h=h^*) and, more generally, in the case of quasi–equidistant nodes (i.e., if h/h^*≤ρ holds with ρ∈ independent of n), we get
sup_nΛ_n <∞
§.§ Proof of Theorem <ref>
In order to prove Th. <ref> let us first state three preliminary lemmas
Let n,d,γ∈ with 1≤ d≤ n and γ>1. If x_ℓ<x<x_ℓ +1, with 0≤ℓ<n, then for all i∈{0,…, n-d} and any k∈{i,…, i+d}, we have
1/|D(x)|∏_s=i, s≠ k^i+d1/|x-x_s|^γ-1≤{[ |x-x_ℓ+1|^γ∏_s=ℓ -d+1^ℓ|x-x_s| i≤ℓ-d; |x-x_k|^γ∏_s=i, s≠ k^i+d|x-x_s| i∈ I_ℓ; |x-x_ℓ|^γ∏_s=ℓ+1^ℓ +d|x-x_s| i≥ℓ+1 ].
where D(x) and I_ℓ are defined in (<ref>) and (<ref>), respectively.
Proof of Lemma <ref>. Firstly, note that the following inequality
∏_s=i, s≠ k^i+d1/|x-x_s|^γ-1≤{[ ∏_s=ℓ -d+1^ℓ1/|x-x_s|^γ-1 i≤ℓ-d; ∏_s=ℓ+1^ℓ +d1/|x-x_s|^γ-1 i≥ℓ+1 ].
can be proved by taking into account that
|x-x_i+j|≥{[ |x-x_ℓ-d+1+j| 0≤ j< k-i; [.1in]
|x-x_ℓ-d+j| k-i<j≤ d ]. ∀ i≤ℓ-d
|x-x_i+j|≥{[ |x-x_ℓ+1+j| 0≤ j< k-i; [.1in]
|x-x_ℓ+j| k-i<j≤ d ]. ∀ i≥ℓ+1
On the other hand, from (<ref>) we deduce
|D(x)| ≥{[ |_ℓ-d+1(x)|=∏_s=ℓ -d+1^ℓ+11/|x-x_s|^γ i≤ℓ-d; |_i(x)|=∏_s=i^i+d1/|x-x_s|^γ ℓ -d+1≤ i≤ℓ; |_ℓ(x)|=∏_s=ℓ^ℓ +d1/|x-x_s|^γ i≥ℓ+1 ].
Hence, by collecting (<ref>) and (<ref>), we obtain (<ref>).
Let n,d,γ∈ with 1≤ d≤ n and γ>1. If x_ℓ<x<x_ℓ +1, with 0≤ℓ<n, then for all i∈{0,…, n-d} and any k∈{i,…, i+d}, we have
1/|D(x)|∏_s=i, s≠ k^i+d1/|x-x_s|^γ-1≤{[ h^d+γd! i ∈I̅_ℓ; h^d|x-x_k|^γ (ℓ+1-i)!(d+i-ℓ)! i∈ I_ℓ ].
where I̅_ℓ={i∈{0,…, n-d} : i≤ℓ -d ∨ i≥ℓ+1} is the complementary set of I_ℓ defined in (<ref>).
Proof of Lemma <ref>. Taking into account that
|x_k-x_s|≤ h|k-s|, ∀ k,s∈{0,…, n}
we get
∏_s=l-d+1^ℓ|x-x_s| ≤ ∏_s=l-d+1^ℓ|x_ℓ+1-x_s|≤ h^d∏_s=l-d+1^ℓ|l+1-s|=h^d d!
∏_s=l+1^ℓ+d|x-x_s| ≤ ∏_s=l+1^ℓ+d|x_ℓ-x_s|≤ h^d∏_s=l+1^ℓ+d(ℓ-s)=h^d d!
Also, by (<ref>), ∀ i∈ I_ℓ we have
∏_s=i, s k^i+d|x-x_s| ≤ ∏_s=i, s k^ℓ|x_ℓ+1-x_s|∏_s=ℓ+1, s k^i+d|x_ℓ-x_s|
≤ h^d (ℓ+1-i)! (d+i-ℓ)!
Hence, the statement follows by applying (<ref>)–(<ref>) to the result of Lemma <ref>.
For all n∈ and i∈ J_k={i∈{0,…, (n-d)} : k-d≤ i≤ k}, with k=0,…, n, we have
∏_s=i, s k^i+d1/|x_k-x_s|≤(1/h^*)^d1/(k-i)! (d+i-k)!
Proof of Lemma <ref>. Since
|x_k-x_s| ≥ h^*|k-s|, ∀ k, s∈{0,…,n}
we get
∏_s=i, s k^i+d1/|x_k-x_s|≤(1/h^*)^d
∏_s=i, s k^i+d1/|k-s|= (1/h^*)^d1/(k-i)! (d+i-k)!
Proof of Theorem <ref>
First of all, note that if x∈{x_0,…, x_n} then the statement is trivial since, by (<ref>), we have
Λ_n(x_k)=1, k=0,1,…,n.
Hence, let us assume x_ℓ<x<x_ℓ +1 with ℓ=0,…, (n-1), and prove
(<ref>).
By (<ref>) and (<ref>) we get
Λ_n(x) = ∑_k=0^n|b_k(x)|=∑_k=0^n
|w_k(x)|/|x-x_k|^γ |D(x)|
≤ ∑_k=0^n∑_i∈ J_k1/|x-x_k|^γ|D(x)|∏_s=i, s k^i+d1/|x_k-x_s| |x-x_s|^γ-1
i.e., set for brevity
A_i,k=∏_s=i, s k^i+d1/|x_k-x_s| ,
B_i,k(x)=∏_s=i, s k^i+d1/|x-x_s|^γ-1
we have
Λ_n(x)≤∑_k=0^n∑_i∈ J_kA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|
Now we note that
J_k=(J_k ∩I_ℓ)∪ (J_k∩ I_ℓ), k=0.…, n,
where I_ℓ is given by (<ref>) and I_ℓ is the complementary set.
Hence, we decompose the summation in (<ref>)
Λ_n(x)
≤ ∑_k=0^n{∑_i∈ J_k∩I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|+ ∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|}
=: ∑_k=0^n {σ_k(x) + μ_k(x)}=:S_1(x)+S_2(x)
where, as usual, we mean that ∑_i∈ Ia_i=0 whenever I is the empty set.
Let us first estimate
S_1(x)= ∑_k=0^nσ_k(x)=∑_k=0^n(∑_i∈ J_k∩I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|)
by distinguishing the following cases:
* Case k=ℓ. Let us estimate the term σ_ℓ(x)
Note that when k=ℓ we have J_k∩I_ℓ={ℓ-d} if ℓ≥ d, otherwise it is J_k∩I_ℓ=∅. Hence, suppose ℓ≥ d (otherwise σ_ℓ(x)=0 holds), we have
σ_ℓ(x):=∑_i∈ J_ℓ∩I_ℓA_i,ℓ B_i,ℓ(x)/|x-x_ℓ|^γ|D(x)|=
A_ℓ-d,ℓ B_ℓ-d,ℓ(x)/|x-x_ℓ|^γ|D(x)|
In this case, we use (<ref>) as follows
|D(x)|≥ |_ℓ-d+1|=∏_s=ℓ-d+1^ℓ +11/|x-x_s|^γ
and by Lemma <ref>, we have
A_ℓ-d,ℓ≤(1/h^*)^d1/d!
Consequently, we get
σ_ℓ(x) = A_ℓ-d,ℓ B_ℓ-d,ℓ(x)/|x-x_ℓ|^γ|D(x)|
≤ 1/d!(h^*)^d1/|x-x_ℓ|^γ |_ℓ-d+1(x)|∏_s=ℓ-d^ℓ-11/|x-x_s|^γ-1
= 1/d!(h^*)^d|x-x_ℓ+1|^γ/|x-x_ℓ-d|^γ∏_s=ℓ-d^ℓ-1|x-x_s|
≤ 1/d!(h^*)^d|x_ℓ-x_ℓ+1|^γ/|x_ℓ-d+1-x_ℓ-d|^γ∏_s=ℓ-d^ℓ-1|x_ℓ +1-x_s|
and by (<ref>), (<ref>), we conclude
σ_ℓ(x)≤(h/h^*)^d+γ1/d!∏_s=ℓ-d^ℓ-1|ℓ+1-s|
= (h/h^*)^d+γ(d+1) .
* Case k=ℓ+1. Let us estimate the term σ_ℓ+1(x)
When k=ℓ+1 we have J_k∩I_ℓ={ℓ+1} and σ_ℓ+1(x) can be estimated by means of (<ref>), (<ref>), (<ref>), and (<ref>) similarly to the previous case, getting
σ_ℓ+1(x) := ∑_i∈ J_ℓ+1∩I_ℓA_i,ℓ+1 B_i,ℓ+1(x)/|x-x_ℓ+1|^γ|D(x)|=
A_ℓ+1,ℓ+1 B_ℓ+1,ℓ+1(x)/|x-x_ℓ+1|^γ|D(x)|
≤ 1/d!(h^*)^d1/|x-x_ℓ+1|^γ |_ℓ(x)|∏_s=ℓ+2^ℓ+1+d1/|x-x_s|^γ-1
= 1/d!(h^*)^d|x-x_ℓ|^γ/|x-x_ℓ+1+d|^γ∏_s=ℓ+2^ℓ+1+d|x-x_s|
≤ 1/d!(h^*)^d|x_ℓ-x_ℓ+1|^γ/|x_ℓ+d-x_ℓ+1+d|^γ∏_s=ℓ+2^ℓ+1+d|x_ℓ-x_s|
≤ (h/h^*)^d+γ1/d!∏_s=ℓ+2^ℓ+1+d|ℓ-s|
= (h/h^*)^d+γ (d+1).
* Case k∉{ℓ,ℓ+1}. Let us estimate the summation of the remaining terms σ_k(x).
By applying Lemma <ref> and Lemma <ref> we have
∑_[ k=0; k∉{ℓ,ℓ+1} ]^n σ_k(x) = ∑_[ k=0; k∉{ℓ,ℓ+1} ]^n
∑_i∈ J_k∩I_ℓ(A_i,k B_i,k(x)/|x-x_k|^γ|D(x)|)
≤ ∑_[ k=0; k∉{ℓ,ℓ+1} ]^n ∑_i∈ J_k∩I_ℓh^γ/|x-x_k|^γ(h/h^*)^d
([ d; k-i ])
and taking into account that
∑_i∈ J_k∩I_ℓ([ d; k-i ])≤∑_i∈ J_k([ d; k-i ])≤∑_j=0^d ([ d; j ])=2^d
we continue the estimate as follows
∑_[ k=0; k∉{ℓ,ℓ+1} ]^n σ_k(x)
≤ h^γ(h/h^*)^d ∑_[ k=0; k∉{ℓ,ℓ+1} ]^n 1/|x-x_k|^γ∑_i∈ J_k∩I_ℓ([ d; k-i ])
≤ 2^d h^γ(h/h^*)^d (
∑_k=0^ℓ-11/|x_ℓ-x_k|^γ
+∑_k=ℓ+2^n 1/|x_ℓ+1-x_k|^γ)
≤ 2^d (h/h^*)^γ+d(
∑_k=0^ℓ-11/|ℓ-k|^γ
+∑_k=ℓ+2^n 1/|ℓ+1-k|^γ)
≤ 2^d+1(h/h^*)^γ+d∑_j=1^n1/j^γ≤ 2^d (h/h^*)^γ+d .
having used ∑_j=1^n 1/j^γ≤∑_j=1^∞1/j^2 <∞ in the last inequality.
Summing up, by (<ref>)–(<ref>) we have proved that S_1(x) satisfies the bound in (<ref>).
Now let us prove that the same holds for
S_2(x)= ∑_k=0^nμ_k(x)=∑_k=0^n( ∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|) .
We note that if k≤ℓ-d or k>ℓ+d then we certainly have J_k∩ I_ℓ=∅. Thus, S_2(x) is, indeed, given by the following sum
S_2(x)= ∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ +d(∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|)
Hence, by using Lemma <ref> and Lemma <ref>, we get
S_2(x)
≤ (h/h^*)^d∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ +d∑_i∈ J_k∩ I_ℓ(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)! .
Now we distinguish the following cases:
* If k< ℓ then we have (ℓ+1-i)>(k-i) and (i+d-ℓ)<(i+d-k). Hence we can write
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!= (ℓ+1-i)(ℓ+1-i-1)⋯ (k-i+1)/(i+d-k)(i+d-k-1)⋯ (i+d-ℓ +1))
and taking into account that, for i∈ I_ℓ, the right hand side term takes its maximum value when i=ℓ+1-d, we get
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!≤d (d-1)⋯ (d-(ℓ -k))/(ℓ+1-k)(ℓ-k)⋯ 2=([ d; ℓ+1-k ]) .
* If k=ℓ then we have
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!=(ℓ+1-i)≤ d .
* If k=ℓ+1 then we have
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!=(i+d-ℓ)≤ d .
* If k> ℓ +1 then we have (ℓ+1-i)<(k-i) and (i+d-ℓ)>(i+d-k). Hence we can write
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!= (i+d-ℓ)(i+d-ℓ-1)⋯ (i+d-k +1))/(k-i)(k-i-1)⋯ (ℓ +2-i)
and since, for i∈ I_ℓ, the maximum value of the right hand side term is achieved when i=ℓ, we get
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!≤d (d-1)⋯ (d-(k-ℓ-1))/(k-ℓ)(k-ℓ-1)⋯ 2=([ d; k-ℓ ]) .
Thus, by virtue of (<ref>)–(<ref>), we have proved
(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!≤{[ ([ d; ℓ+1-k ]) k≤ℓ; ([ d; k-ℓ ]) k≥ℓ+1 ].
Consequently, taking into account that the set J_k∩ I_ℓ has at most d elements, the estimate of S_2(x) continues as follows
S_2(x) ≤ (h/h^*)^d
∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ +d∑_i∈ J_k∩ I_ℓ(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!
≤ (h/h^*)^d
[∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ∑_i∈ J_k∩ I_ℓ([ d; ℓ+1-k ])+
∑_k=ℓ +1^ℓ +d∑_i∈ J_k∩ I_ℓ([ d; k-ℓ ])
]
≤ (h/h^*)^d d 2 ∑_j=1^d([ d; j ])=2 (h/h^*)^d d 2^d .
§.§ A related result
Going along the same lines as the proof of Th. <ref>, we get the following result which will be useful in the next section.
For all α>0 and n,γ,d∈, with 1≤ d≤ n and γ>α+1, we have
Σ(x):=∑_k=0^n |x-x_k|^α |b_k(x)|≤ h^α(h/h^*)^γ+d, ∀ x∈ [a,b],
where >0 is a constant independent of n, h and x.
Proof of Theorem <ref>.
Recalling that b_k(x_j)=δ_k,j (cf. (<ref>)), if x∈{x_0,…, x_n} then the statement is trivial since Σ(x)=0. Hence, let us fix x_ℓ<x<x_ℓ+1 with ℓ=0,…,n-1.
Following the same lines as in the proof of Th. <ref> and using the notations therein introduced, we note that
Σ(x) : =∑_k=0^n |x-x_k|^α |b_k(x)|
≤ ∑_k=0^n |x-x_k|^α∑_i∈ J_kA_i,k B_i,k(x)/|x-x_k|^γ |D(x)|
= ∑_k=0^n |x-x_k|^α{∑_i∈ J_k∩I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|+ ∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|}
= ∑_k=0^n |x-x_k|^ασ_k(x) +∑_k=0^n |x-x_k|^αμ_k(x)=:Σ_1(x)+Σ_2(x)
Hence, we estimate Σ_1(x) and Σ_2(x) using the results obtained in the proof of Th. <ref> for S_1(x) and S_2(x), respectively.
For simplicity, in the sequel all positive constants independent of n,h,x are denoted by even if they have different values.
As regards Σ_1(x), similarly to S_1(x) in the proof of Th. <ref>, we deduce
Σ_1(x) = |x-x_ℓ|^ασ_ℓ(x)+ |x-x_ℓ+1|^ασ_ℓ+1(x)+
∑_[ k=0; k∉{ℓ,ℓ+1} ]^n
|x-x_k|^ασ_k(x)
≤ 2(d+1)(h/h^*)^d h^α +
(h/h^*)^d h^γ∑_[ k=0; k∉{ℓ,ℓ+1} ]^n
∑_i∈ J_k∩I_ℓ|x-x_k|^α/|x-x_k|^γ([ d; k-i ])
≤ 2(d+1)(h/h^*)^d h^α +
2^d (h/h^*)^d h^γ(
∑_k=0^ℓ-1|x_ℓ +1-x_k|^α/|x_ℓ-x_k|^γ
+∑_k=ℓ+2^n |x_ℓ-x_k|^α/|x_ℓ+1-x_k|^γ)
≤ 2(d+1)(h/h^*)^d h^α +
2^d (h/h^*)^d+γ h^α(
∑_k=0^ℓ-1|ℓ +1-k|^α/|ℓ -k|^γ
+∑_k=ℓ+2^n |ℓ -k|^α/|ℓ+1-k|^γ)
≤ (h/h^*)^d+γ h^α(
∑_j=1^n1/j^γ-α)≤(h/h^*)^d+γ h^α
having used in the last inequality the hypothesis γ>α+1 to get ∑_j=1^∞1/j^γ-α<∞.
As regards Σ_2(x), from the results achieved for S_2(x) in the proof of Th. <ref>, we deduce
Σ_2(x) = ∑_k=0^n|x-x_k|^α( ∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|)
= ∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ +d|x-x_k|^α(∑_i∈ J_k∩ I_ℓA_i,k B_i,k(x)/|x-x_k|^γ|D(x)|)
≤ (h/h^*)^d ∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ +d |x-x_k|^α∑_i∈ J_k∩ I_ℓ(ℓ+1-i)!(i+d-ℓ)!/(k-i)!(i+d-k)!
≤ d (h/h^*)^d
[∑_[ k=ℓ +1-d; [-.08cm]
k≥ 0 ]^ℓ |x_ℓ+1-x_k|^α([ d; ℓ+1-k ])
+
∑_[ k=ℓ +1; [-.08cm]
k≤ n ]^ℓ +d|x_ℓ-x_k|^α([ d; k-ℓ ])
]
≤ d h^α(h/h^*)^d
[∑_[ k=ℓ +1-d; [-.08cm]
0≤ k≤ n ]^ℓ |ℓ+1-k|^α([ d; ℓ+1-k ])+
∑_[ k=ℓ +1; [-.08cm]
0≤ k≤ n ]^ℓ +d|ℓ-k|^α([ d; k-ℓ ])
]
≤ 2d h^α(h/h^*)^d[∑_j=1^d
j^α([ d; j ])
]≤ h^α(h/h^*)^d
§ ERROR ESTIMATES
Let us start by providing an error estimate in the case that f is a Hölder continuous function satisfying
|f(x)-f(y)|≤ M |x-y|^α, ∀ x,y∈ [a,b]
with 0<α≤ 1 and M>0 independent of x and y.
Denoted by Lip_α([a,b]) the class of all such functions, we state the following
Let n, d,γ∈, 1≤ d≤ n, and let the nodes (<ref>) be equidistant or quasi-equidistant.
If f∈ Lip_α([a,b]) for some 0<α≤ 1 then, for all integers γ>α+1, the associated generalized FH interpolant satisfies
|f(x)-r̃_d(f,x)|≤ h^α, ∀ x∈ [a,b],
with >0 independent of x, n, and h.
Proof of Theorem <ref>.
By the interpolation property (<ref>), it is sufficient to prove (<ref>) for arbitrarily fixed x∈ ]x_ℓ, x_ℓ+1[, with ℓ=0,…, n-1.
By (<ref>), taking into account that r̃(x) certainly preserves the constant functions (since d≥ 1), we note that
1=∑_k=0^n b_k(x).
Consequently, from (<ref>) and (<ref>) we deduce
|f(x)-r̃(f,x)|
= |∑_k=0^n (f(x)-f(x_k)) b_k(x)| ≤∑_k=0^n |f(x)-f(x_k)| |b_k(x)|
≤ M ∑_k=0^n |x-x_k|^α |b_k(x)|,
and since γ>α +1, we can apply Th. <ref> obtaining
|f(x)-r̃(f,x)|≤ M∑_k=0^n |x-x_k|^α |b_k(x)|
≤(h/h^*)^γ+d h^α
with >0 independent of x and n,h.
Hence, the statement follows from the assumption on the nodes distribution that ensures the uniform boundedness of h/h^*.
Now, let us estimate the error in the case of continuously differentiable functions.
Let r̃_d(f,x) be the generalized FH interpolant corresponding to equidistant or quasi-equidistant distribution of nodes and parameters n,d,γ∈ with 1≤ d≤ n and γ>d+1. If f∈ C^d([a,b]) then we have
|f(x)-r̃_d(f,x)|≤ h^d, ∀ x∈ [a,b]
where >0 is a constant independent of x and n,h.
Proof of Theorem <ref>.
Due to the interpolation property (<ref>), it is sufficient to prove (<ref>) in the case that
x∈ ]x_ℓ, x_ℓ+1[, for a given ℓ∈{0,…, n-1}.
Since f∈ C^d([a,b]) with d≥ 1, we can certainly consider the Taylor polynomial
T_d(x)=∑_r=0^df^(r)(x_ℓ)/r!(x-x_ℓ)^r,
that satisfies the following error bound
|f(x)-T_d(x)|≤ |x-x_ℓ|^d , ∀ x∈ [a,b]
where here and in the following >0 denotes any constant independent of x,n,h taking also different values at different occurrences.
Hence, recalling the polynomial preservation property (<ref>), we observe that
f(x)-r̃(f,x)=f(x)-T_d(x)+r̃(T_d-f, x)
and, by (<ref>), we get
|f(x)-r̃(f,x)|
≤ |f(x)-T_d(x)|+∑_k=0^n |T_d(x_k)-f(x_k)| |b_k(x)|
≤ |x-x_ℓ|^d + ∑_k=0^n |x_k-x_ℓ|^d |b_k(x)|
≤ h^d+∑_k=0^n |x_k-x_ℓ|^d |b_k(x)|
On the other hand, taking into account that
(u+v)^d≤ 2^d-1(u^d+v^d), ∀ u,v∈^+,
we note that
|x_k-x_ℓ|^d=|(x_k-x)+(x-x_ℓ)|^d≤(|x_k-x|+|x-x_ℓ|)^d
≤(|x_k-x|^d+|x-x_ℓ|^d)
and applying Th. <ref> and Th. <ref>, we obtain
∑_k=0^n |x_k-x_ℓ|^d |b_k(x)|≤∑_k=0^n |x-x_k|^d |b_k(x)|+ |x-x_ℓ|^d∑_k=0^n |b_k(x)|≤ h^d
By the next theorem, we state that for the generalized FH interpolants with parameter 1≤ d≤ n and γ>d+2, we can get the analogous of (<ref>) under weaker assumption on the function.
For any n∈, let r̃_d(f,x) be the generalized FH interpolant corresponding to equidistant or quasi-equidistant nodes (<ref>) and parameters d,γ∈ with 1≤ d≤ n and γ>d+2. If f∈ C^d+1([a,b]) then we have
|f(x)-r̃_d(f,x)|≤ h^d+1, ∀ x∈ [a,b]
where >0 is a constant independent of x and n,h.
Proof of Theorem <ref>.
Similarly to the proof of Th. <ref>, we consider the polynomial T_d(x) given by (<ref>). Hence, taking into account that f∈ C^d+1([a,b]) implies
|f(x)-T_d(x)|≤ |x-x_ℓ|^d+1 , ∀ x∈ [a,b]
By (<ref>), Th. <ref> and Th. <ref>, we get
|f(x)-r̃(f,x)|
= |f(x)-T_d(x)+r̃(T_d-f, x)|
≤ |f(x)-T_d(x)|+∑_k=0^n |T_d(x_k)-f(x_k)| |b_k(x)|
≤ |x-x_ℓ|^d+1 + ∑_k=0^n |x_k-x_ℓ|^d+1 |b_k(x)|
≤ h^d+1+∑_k=0^n |x_k-x|^d+1 |b_k(x)|+
|x-x_ℓ|^d+1∑_k=0^n |b_k(x)|
≤ h^d+1
Now we estimate the error in the class C^s,α([a,b]) of all functions that are s–times continuously differentiable with f^(s)∈ Lip_α([a,b]), 0<α≤ 1.
For any n∈, let r̃_d(f,x) be the generalized FH interpolant corresponding to equidistant or quasi-equidistant nodes (<ref>) and parameters d,γ∈ with 1≤ d≤ n. For any 0<α≤ 1, we have
γ>d+α+1 ⟹ |f(x)-r̃_d(f,x)|≤ h^d+α ∀ f∈ C^d,α([a,b]),
∀ x∈ [a,b],
γ>d+α+2 ⟹ |f(x)-r̃_d(f,x)|≤ h^d+1+α ∀ f∈ C^d+1,α([a,b])
∀ x∈ [a,b],
with >0 a constant independent of x and n,h.
Proof of Theorem <ref>.
The proof follows similarly to that of Th. <ref> and Th. <ref> by using T_d(x), the Taylor polynomial of f centered at x_ℓ (with x∈ [x_ℓ,x_ℓ+1]) whose error, now, can be estimated as follows
|f(x)-T_d(x)| ≤ |x-x_ℓ|^d+α , ∀ x∈ [a,b], ∀ f∈ C^d,α([a,b])
|f(x)-T_d(x)| ≤ |x-x_ℓ|^d+1+α , ∀ x∈ [a,b], ∀ f∈ C^d+1,α([a,b])
with >0 independent of x.
Note that the bounds (<ref>)–(<ref>) can be easily proved by induction on d. For instance, in the following we prove (<ref>). Indeed, for d=1 (<ref>) holds since by the Mean Value Theorem
f(x)-f(x_ℓ)=f^'(ξ)(x-x_ℓ), x_ℓ≤ξ≤ x
and consequently ∀ f∈ C^1,α([a,b]) we have
|f(x)-T_1(x)| = |f(x)-f(x_ℓ)-f^'(x_ℓ)(x-x_ℓ)|
= |f^'(ξ)-f^'(x_ℓ)||x-x_ℓ|
≤ |ξ-x_ℓ|^α|x-x_ℓ|≤ |x-x_ℓ|^α+1
Moreover, setting
F(x)=f(x)-T_d(x), G(x)=(x-x_ℓ)^d+α,
we observe that, by the Cauchy Theorem, there exists ξ∈ ]x_ℓ, x[ such that
|f(x)-T_d(x)|/(x-x_ℓ)^d+α = |F(x)-F(x_ℓ)|/|G(x)-G(x_ℓ)|=|F^'(ξ)|/|G^'(ξ)|
= |f^'(ξ)-∑_r=0^d-1(f^')^(r)(x_ℓ)/r! (ξ-x_ℓ)^r|/(d+α)|ξ-x_ℓ|^d+α-1
Hence, suppose that (<ref>) holds for d-1 and applying it to f^'∈ C^d-1,α([a,b]), we get that (<ref>) also holds for d since
|f(x)-T_d(x)|/(x-x_ℓ)^d+α=
|f^'(ξ)-∑_r=0^d-1(f^')^(r)(x_ℓ)/r! (ξ-x_ℓ)^r|/(d+α)|ξ-x_ℓ|^d+α-1≤ |ξ-x_ℓ|^d-1+α/(d+α)|ξ-x_ℓ|^d+α-1≤
with independent of x.
Finally, we focus on the general case that f:[a,b]→ is a function of bounded variation (f∈ BV([a,b])) and show that all the previous error estimates continue to hold locally, in all compact subintervals I where we have f∈ Lip_α(I), or f∈ C^s(I) or f∈ C^s,α(I), with 0<α≤ 1 and s∈.
Let f∈ BV([a,b]) and suppose there exists a subinterval, I=[a^', b^']⊂ [a,b], where we have f∈ C^s,α(I) for some integer s≥ 0 and some 0≤α≤ 1 not simultaneously null (meaning that C^0,α=Lip_α and C^s,0=C^s). Moreover, let r̃(f,x) be the generalized FH interpolant of f corresponding to n,d,γ∈ and equidistant or quasi-equidistant distribution of nodes. If we choose γ>s+α+1 and d such that
[ (i) 1≤ d≤ n if s=0 (i.e., f∈ Lip_α(I)); (ii) s≤ d≤ n if s=1; (iii) s-1≤ d≤ n if 2≤ s< n ]
then, as h→ 0, we get
|f(x)-r̃(f,x)|≤ h^s+α, ∀ x∈ [a^', b^']
where >0 is a constant independent of x and n,h.
Proof of Theorem <ref>.
As usual, we consider the case where x∈ [a^', b^']-{x_0,…, x_n} is arbitrarily fixed, and denote by a positive constant independent of x,n,h that can take different values at different occurrences. As h→ 0, we can suppose that x_ℓ<x<x_ℓ+1 with [x_ℓ,x_ℓ+1]⊂ [a^', b^']. In this way, we can take the Taylor polynomial of f centered at x_ℓ and having degree:
{[ s ; s-1 ].
Denote by T(x) such polynomial, since f∈ C^s,α([a^',b^']), we have
|f(x)-T(x)|≤ |x-x_ℓ|^s+α , ∀ x∈ [a^',b^'],
and since (T)≤ d, by (<ref>), we also have
r̃(T,x)=T(x), ∀ x∈ [a,b].
Consequently, we get
|f(x)-r̃(f,x)| ≤ |f(x)-T(x)|+|r̃(T-f,x)|
≤ |x-x_ℓ|^s+α+ |∑_k=0^n (T(x_k)-f(x_k))b_k(x)|
≤ h^s+α+ ∑_k=0^n |T(x_k)-f(x_k)| |b_k(x)|
and setting
I_1 = {k∈{0,1,…, n} : |x-x_k|≤ (b^'- a^')},
I_2 = {k∈{0,1,…, n} : |x-x_k|> (b^'- a^')}
we obtain
|f(x)-r̃(f,x)| ≤ h^s+α+ {∑_k∈ I_1 + ∑_k∈ I_2} |T(x_k)-f(x_k)| |b_k(x)|
=: h^s+α + s_1(x)+s_2(x)
As regards s_1(x), we note that k∈ I_1 implies x_k∈ [a^', b^'] and hence the bound (<ref>) can be applied. Consequently, by Th. <ref> and Th.<ref>, we get
s_1(x) = ∑_k∈ I_1 |T_s(x_k)-f(x_k)| |b_k(x)|
≤ ∑_k∈ I_1 |x_k-x_ℓ|^s+α |b_k(x)|
≤ ∑_k=0^n (|x_k-x|^s+α+ |x-x_ℓ|^s+α) |b_k(x)|≤ h^s+α
Finally, concerning s_2(x), we note that
sup_k∈ I_2|T(x_k)-f(x_k)|/|x-x_k|^s+α≤
and consequently, by Th. <ref>, we conclude
s_2(x) = ∑_k∈ I_2 |T(x_k)-f(x_k)| |b_k(x)|≤∑_k∈ I_2 |x-x_k|^s+α |b_k(x)|
≤ ∑_k=0^n |x_k-x|^s+α |b_k(x)|≤ h^s+α
§ SOME NUMERICAL EXPERIMENTS
In this section we compare the error function
e_n,d,γ(x) = f(x) - r(x)
for different functions f and different values of n, d and γ.
We denote the maximum error by
E_n,d,γ = max_x ∈ [-1,+1] | e_n,d,γ |.
As interpolation nodes, we take equidistant points in the interval [-1,+1].
Note that the generalized FH interpolants are defined for a general configuration of the interpolation points.
Experiment 1: Let us consider the function f(x) = |x|^0.5∈ Lip_α ([-1,1]) with α = 0.5 to illustrate
Th. <ref>. According to the theory, γ should be taken greater than α+1 = 1.5.
The numerical experiment shows that in this case it is also valid when γ = 1.
We get that for n = 2^k, k=1,2,…,10, the maximum error E_2^k,d,γ is divided by a factor approximately equal to √(2)
when k is increased by one. So the maximum error behaves as Ø(h^0.5).
This is true for γ = 1,2,… and d = 0,1,2,….
Figure <ref> illustrates this by plotting E_2^k,d,γ for k=1,2,…,10 and γ = 1,…,5 with d = 2.
The curve for γ is above the one for γ-1.
So, one would think that increasing the value of γ is not a good idea.
However, the factors between the errors for γ=5 and γ=1 are small and around 1.08.
But more importantly the error function e_n,d,γ for this function f(x) = |x|^0.5
behaves much better for γ=2 compared to γ=1.
Figure <ref> shows the error function e_n,d,γ for n = 1024, d=2 and γ = 1,2.
The blue dots represent the error function for γ = 1 (original FH interpolants), the red dots for γ =2.
Although the peak increases slightly for increasing values of γ, the behaviour away from the origin is
much better. We did not show the behaviour of the error function for γ = 3,4,5.
In this case the behaviour is between the behaviour for γ = 1 and
γ=2.
Experiment 2: Let us consider the function f(x) = |x| ∈ Lip_α ([-1,1]) with α = 1.
The conditions of Th. <ref> require that γ should be greater than α+1 = 2.
However, the numerical experiments indicate that the theorem is also valid for γ =1 and 2.
We get that for n = 2^k, k=1,2,…,10, the maximum error E_2^k,d,γ is divided by a factor approximately equal to 2
when k is increased by one. So the maximum error behaves as Ø(h).
This is true for γ = 1,2,… and d = 0,1,2,….
Figure <ref> illustrates this by plotting E_2^k,d,γ for k=1,2,…,10 and γ = 1,…,5 with d = 1.
The curve for γ is above the one for γ-1.
Also here, one would think that increasing the value of γ is not a good idea.
However, the factors between the errors for γ=5 and γ=1 are also small and around 1.55.
But more importantly the error functions e_n,d,γ for this function f(x) = |x|
also behave much better for increasing values of γ.
Figure <ref> shows the error function e_n,d,γ for n = 1024, d=1 and γ = 1,2,…,5.
The blue dots represent the error function for γ = 1 (original FH interpolants), the red dots for γ =2
and so on.
Although the peak increases slightly for increasing values of γ, the behaviour away from the origin is
much better.
Experiment 3: Let us consider the analytic function f(x) = e^-x^2∈^∞([-1,1]).
Figure <ref> shows the error function e_n,d,γ for n = 1024, d=2 and γ = 1,2.
The blue dots represent the error function for γ = 1 (original FH interpolants), the red dots for γ =2.
Following the theory, the maximum error for the interpolants behaves as Ø(h^d+1) and this is also observed in practice.
The behaviour in between the peaks is much better when γ =2 compared to γ=1.
Experiment 4: Let us consider the analytic Runge-function f(x) = 1/(1+25x^2) ∈^∞([-1,1]).
Figure <ref> shows the error function e_n,d,γ for n = 1024, d=2 and γ = 1,2.
The blue dots represent the error function for γ = 1 (original FH interpolants), the red dots for γ =2.
Following the theory, the maximum error for the interpolants behaves as Ø(h^d+1) and this is also observed in practice.
The behaviour in between the peaks is much better when γ =2 compared to γ=1.
However, if we take the similar figure for n = 2^6 = 64 we obtain Figure <ref>.
In this case, γ=2 performs worse compared to γ=1.
Experiment 5: In <cit.> an upper bound is derived for the Lebesgue constant in case γ=1:
Λ_n ≤ 2^d-1 (2+2log n).
To illustrate this, in Figure <ref> the Lebesgue constant is plotted in function of d where the different lines correspond to values of n = 2^k, k=4,5,…,10
(left) and in function of n for different values of d=1,2,…,10 (right).
The left figure clearly demonstrates the factor 2^d-1 in (<ref>) while the right figure illustrates the log n dependency.
Plotting similar figures for γ=2 we obtain Figure <ref>.
This figure shows that for γ=2 the Lebesgue constant is independent of n.
For γ=3 we obtain similar figures.
In Figure <ref> we plot the Lebesgue constant for d=10:10:50, n=2^k, k=10 and for γ=1,2,3.
We compare this behaviour with the plot 2^d.
This figure shows that the Lebesgue constant behaves as 2^d with independent of d.
In Figure <ref> the Lebesgue function is plotted for n = 2^6, d = 5 and γ =1 and 2.
§ CONCLUSION
In this paper we have defined a whole family of generalized Floater–Hormann interpolants depending on a parameter γ∈.
For γ=1 we obtain the original Floater–Hormann interpolants.
The numerical examples show that this family has potential to approximate non-smooth as well as smooth functions.
In future work, the (sub-)optimal choice of the parameters d and γ could be investigated when the n interpolation points are given.
For the original Floater–Hormann interpolants Güttel and Klein <cit.> have developed a heuristic method to determine the parameter d.
To remedy the bad behaviour of the error function at the endpoints, Klein <cit.> designed a method adding some interpolation points at the two endpoints.
A similar technique could be applied to the generalized Floater–Hormann approximants here introduced. Finally, the following questions are left for future research: (A) Refine the theoretical bound of the Lebesgue constant in (<ref>) and also state a lower bound according to the numerical experiments; (B) Investigate the Lebesgue constants and the error for other configurations of nodes too; (C) Deeper explore the role of γ in view of the numerical experiments that suggest larger theoretical bounds on γ.
|
http://arxiv.org/abs/2307.06093v1 | 20230712113627 | Online Laplace Model Selection Revisited | [
"Jihao Andreas Lin",
"Javier Antorán",
"José Miguel Hernández-Lobato"
] | cs.LG | [
"cs.LG",
"stat.ML"
] |
Online Laplace Model Selection Revisited
Jihao Andreas Lin
Javier Antorán
José Miguel Hernández-Lobato
====================================================================================================================================================================================================================================================
The Laplace approximation provides a closed form model selection objective for neural networks (NN).
Online variants, which optimise NN parameters jointly with hyperparameters, like weight decay strength, have seen renewed interest in the Bayesian deep learning community.
However, these methods violate Laplace's method's critical assumption that the approximation is performed around a mode of the loss, calling into question their soundness.
This work re-derives online Laplace methods, showing them to target a variational bound on a variant of the Laplace evidence which does not make stationarity assumptions, i.e, a mode-corrected method.
Online Laplace and its mode-corrected counterpart share stationary points where 1. the NN parameters are a maximum a posteriori, satisfying Laplace's method's assumption, and 2. the hyperparameters maximise the Laplace evidence, motivating online methods.
We demonstrate that these optima are roughly attained in practise by online algorithms using full-batch gradient descent on UCI regression datasets. Here, online model selection prevents overfitting and outperforms validation-based early stopping.
§ INTRODUCTION
Online model selection holds the promise of automatically tuning large numbers of neural network (NN) hyperparameters during a single training run, obsoleting cross validation <cit.>.
Online Laplace methods (OL) interleave regular NN optimisation steps with steps of hyperparameter optimisation with respect to the Laplace-approximated model evidence <cit.>. The method has been used to learn layer-wise weight decay strengths, data noise scales, and even data augmentation hyperparameters <cit.>.
Laplace's method constructs a second-order Taylor expansion around a mode of the posterior. Here, the first-order term vanishes, leaving a pure quadratic approximation to the log-density, that is, a Gaussian. The Laplace evidence is this distribution's normalising constant.
The online setting implies non-convergence of the NN parameters, violating the stationarity assumption. <cit.> show that applying Laplace's method to pre-trained NNs which do not attain zero loss gradient leads to severe deterioration in model selection performance.
<cit.> experiment with inclusion of the first-order Taylor term in the OL procedure but find that it leads to training instability.
This work revisits online Laplace methods, taking steps towards reconciling their seeming unsoundness with their satisfactory empirical performance.
* We show that a variant of OL that includes the first-order Taylor term (and thus does not assume stationarity) corresponds to NN hyperparameter optimisation with the evidence of a tangent linear model. We re-derive the standard OL objective as a variational lower bound on the evidence of this tangent linear model, motivating its use.
* We show OL shares fixed points, where the ELBO is tight, with the above first-order-corrected procedure. Here, the linear model's maximum a posteriori (MAP) parameters match the NN parameters, for which they are also a MAP, and the linear model's evidence matches the NN's Laplace evidence, further motivating its use.
* We show that these fixed points are roughly attained in practice by small NNs trained via full-batch gradient descent with OL hyperparameter tuning on UCI regression datasets, but they are not attained without online tuning.
The hyperparameters found through online Laplace lead to NNs that outperform their non-online counterparts.
§ PRELIMINARIES: LAPLACE APPROXIMATION AND ONLINE VARIANTS
We consider a supervised problem where n inputs, stacked as X ∈ ℝ^n × d_x, are paired with scalar targets stacked as y ∈ ℝ^n. We introduce a NN regressor f: ℝ^d_w × ℝ^d_x → ℝ with w ∈ ℝ^d_w as parameters. Omitting dependence on the inputs from our notation, as these are fixed, and working with arrays of stacked predictions f(w) ∈ ℝ^n, we model the targets as
y = f(w) + ϵ with w ∼ 𝒩(0, α^-1 I_d_w) and ϵ ∼ 𝒩(0, β^-1 I_n),
where α is the scalar precision of our prior over NN parameters and β is the scalar homoscedastic noise precision. The set of both hyperparameters is {α, β}. Our results readily generalise to vector-valued hyperparameters and outputs <cit.>.
We train the NN to minimise the loss ℓ_f(w; ) = β/2 ‖y − f(w)‖^2 + α/2 ‖w‖^2.
This objective matches the unnormalised, negative log-joint density of our parameters and targets
e^-ℓ_f(w; ) (β/2π)^n/2 (α/2π)^d_w/2 = 𝒩(y; f(w), β^-1 I_n) 𝒩(w; 0, α^-1 I_d_w) ≜ p_f(y, w; ).
Laplace's method <cit.> takes a second-order Taylor expansion of log p_f(y, w; ) around w_0, assumed to be a mode (w_0 ∈ argmin_w ℓ_f(w; )), and approximates the NN model evidence p_f(y; ) by integrating the expanded expression as
∫ p_f(y, w; ) d w ≈ ∫ exp( log p_f(y, w_0; ) − 1/2 (w − w_0)^⊤ H (w − w_0) ) d w
= exp( d_w/2 log α + n/2 log β − β/2 ‖y − f(w_0)‖_2^2 − α/2 ‖w_0‖_2^2 − 1/2 log det H − n/2 log 2π ) ≜ exp _f(; w_0),
where the first-order term vanishes since we assume ∂_wℓ_f(w_0; ) = 0 and H is ∂^2_w ℓ_f(w_0; ).
In practice, the loss Hessian is replaced by the Generalised Gauss–Newton matrix (GGN), that is, H = β J^T J + α I_d_w ∈ ℝ^d_w × d_w for J = ∂_w f(w_0) ∈ ℝ^n × d_w the NN's Jacobian evaluated at the training inputs.
This choice provides ease of computation and numerical stability <cit.>.
We hereon assume H to be the GGN and suppress its dependence on the hyperparameters and the expansion point from our notation.
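As a concrete reference for the quantities above, the sketch below assembles the GGN and evaluates the corresponding log Laplace evidence for a toy model. It is a minimal NumPy illustration under the paper's notation, not the authors' implementation; the linear "network" and its Jacobian are placeholders for a real NN.

```python
import numpy as np

def laplace_evidence(w0, X, y, f, jac, alpha, beta):
    """Log Laplace evidence at expansion point w0 with a GGN Hessian.

    f(w, X)   -> predictions of shape (n,)
    jac(w, X) -> Jacobian of the predictions w.r.t. w, shape (n, d_w)
    """
    n, d_w = len(y), len(w0)
    resid = y - f(w0, X)
    J = jac(w0, X)
    H = beta * J.T @ J + alpha * np.eye(d_w)   # GGN, not the exact Hessian
    _, logdet_H = np.linalg.slogdet(H)
    return (0.5 * d_w * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * resid @ resid
            - 0.5 * alpha * w0 @ w0
            - 0.5 * logdet_H
            - 0.5 * n * np.log(2 * np.pi))

# Toy linear "network" f(w, X) = X @ w, so the Jacobian is simply X.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=20)
f = lambda w, X: X @ w
jac = lambda w, X: X
print(laplace_evidence(w_true, X, y, f, jac, alpha=1.0, beta=100.0))
```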
§.§ Online hyperparameter tuning
We may tune the hyperparameters for a NN by choosing them to maximise _f(; w^⋆), where w^⋆ ∈ argmin_w ℓ_f(w; ) is a true optimum of the loss. This will not change our NN's predictions, however, as its parameters are held fixed at w^⋆. <cit.> proposes to re-train the NN from scratch using the updated hyperparameters. This procedure terminates when a joint stationary point of the parameters and hyperparameters is found:
w^⋆ ∈ argmin_w ℓ_f(w; ^⋆) and ^⋆ ∈ argmax _f(; w^⋆).
[Figure: Second-order Taylor approximations of a function f (black) around x_0 (star), with (orange) and without (blue) the first-order term.]
The size of NN parameter spaces and datasets has grown dramatically since 1992 and re-training the NN is now impractical.
This motivates online Laplace (OL) approaches which, at timestep t with parameters w_t, perform a step of NN parameter optimisation to minimise ℓ_f(w_t; ), followed by a hyperparameter update to maximise _f(; w_t) <cit.>[<cit.> refers to the described online Laplace procedure as Variational Laplace.].
Importantly, the Taylor expansion point is chosen to match the current NN parameter setting w_t.
Since optimisation has not converged, w_t ∉ argmin_w ℓ_f(w; ), the objective _f(; w_t), which discards the first-order expansion term, does not provide a local approximation to the evidence. <Ref> illustrates that discarding this first-order term can result in a vastly different approximation.
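To make the interleaving concrete, the sketch below runs an assumed, simplified version of the OL loop on a toy problem with a linear "network", so that the loss gradient is available in closed form; the hyperparameter step uses the evidence-maximising update recalled in the appendix on MacKay's update. Learning rates, iteration counts, and data are illustrative, not the paper's experimental settings.

```python
import numpy as np

# Toy setting: a linear "network" f(w) = X @ w stands in for the NN,
# so the Jacobian is X and the loss gradient is available in closed form.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([0.5, -1.0, 2.0, 0.0]) + 0.5 * rng.normal(size=50)
f = lambda w: X @ w
n, d_w = X.shape

w = np.zeros(d_w)
alpha, beta = 1.0, 1.0          # prior precision, noise precision
lr = 1e-3

for t in range(5000):
    # NN parameter step: one gradient-descent update on the loss l_f(w; alpha, beta).
    grad = beta * X.T @ (f(w) - y) + alpha * w
    w -= lr * grad

    # Hyperparameter step: evidence-maximising (MacKay) update at the current
    # expansion point w_t, applied every few iterations after a short warm-up.
    if t >= 200 and t % 10 == 0:
        H = beta * X.T @ X + alpha * np.eye(d_w)
        gamma = d_w - alpha * np.trace(np.linalg.inv(H))   # effective dimensions
        alpha = gamma / (w @ w)
        beta = (n - gamma) / np.square(y - f(w)).sum()

print({"alpha": round(alpha, 3), "beta": round(beta, 3)})
```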
§ UNDERSTANDING ONLINE LAPLACE THROUGH THE TANGENT MODEL
This section re-derives online Laplace methods for NNs as empirical Bayesian inference in a tangent linear model. This analysis provides a non-heuristic motivation for these methods.
Online hyperparameter optimisation including the first-order term
To start, we integrate the Taylor expansion of log p_f(y, w; ) at w_t without discarding the first-order term
∫ exp( log p_f(y, w_t; ) − ∂_w ℓ_f(w_t; ) (w − w_t) − 1/2 (w − w_t)^⊤ H (w − w_t) ) d w
= exp( d_w/2 log α + n/2 log β − β/2 ‖y − (f(w_t) + J(v^⋆ − w_t))‖_2^2 − α/2 ‖v^⋆‖_2^2 − 1/2 log det H − n/2 log 2π ) ≜ exp _h(; w_t),
where the optimum of the quadratic expansion v^⋆ is obtained by performing a Gauss–Newton update on the NN loss at w_t, that is, v^⋆ = w_t − H^-1(β J^T(f(w_t) − y) + α w_t). Gauss–Newton optimisation is common in the second-order NN optimisation literature; it makes more progress per step than gradient descent <cit.>. Since _h(; w_t) represents a local evidence approximation, mode-corrected by a Gauss–Newton step, it may more closely match the Laplace evidence computed at an optimum _f(; w^⋆) than the Laplace evidence computed at the online expansion point _f(; w_t).
We may use _h(; w_t) to construct an algorithm where each step of NN parameter optimisation minimising ℓ_f(w_t; ) is interleaved with a hyperparameter update maximising _h( ; w_t).
Online hyperparameter optimisation with the tangent linear model
The mode-corrected evidence approximation _h(; w_t) matches the exact evidence of a tangent model with regressor h: ℝ^d_w × ℝ^d_x → ℝ given by a first-order Taylor approximation to f at w_t
y = h(v) + ϵ with h(v) ≜ f(w_t) + J (v − w_t) and v ∼ 𝒩(0, α^-1 I_d_w), ϵ ∼ 𝒩(0, β^-1 I_n).
The tangent model's loss is ℓ_h(v; ) = β/2 ‖y − h(v)‖^2 + α/2 ‖v‖^2. It is minimised by the MAP v^⋆ ≜ argmin_v ℓ_h(v; ), which matches the optimum of the quadratic expansion in <ref>, and its constant curvature matches the GGN ∂^2_v ℓ_h(v^⋆; ) = H. Thus, the above-described mode-corrected algorithm selects hyperparameters that improve the tangent model's evidence _h(; w_t) at each step. We hereon refer to this procedure as online Linear Model (LM).
Online Laplace as a variational bound on the tangent model's evidence
Finally, we connect the LM procedure to the OL procedure described in <ref>. For this, we apply <cit.>'s lower bound on the linear model's evidence, i.e., the ELBO,
_h(; w_t) ≥ 𝔼_𝒩(v; μ, H^-1)[log p_h(y, v; )] + ℍ(𝒩(μ, H^-1)) ≜ (; μ, w_t)
= d_w/2 log α + n/2 log β − β/2 ‖y − (f(w_t) + J(μ − w_t))‖_2^2 − α/2 ‖μ‖_2^2 − 1/2 log det H − n/2 log 2π,
where ℍ denotes the differential entropy and p_h(y, v; ) ≜ e^-ℓ_h(v; ) (β/2π)^n/2 (α/2π)^d_w/2.
The bound is tight when the variational posterior mean μ matches the linear model's MAP v^⋆, that is, ( ; v^⋆, w_t) = _h( ; w_t). This recovers the LM objective. On the other hand, by choosing the variational mean to match the linearisation point, μ = w_t, we recover the OL evidence (; w_t, w_t) = _f(; w_t). Thus, discarding the first-order term in the Taylor expansion of log p_f(y, w; ), which seemed unjustified, actually results in a lower bound on a variant of the Laplace evidence which corrects for the expansion point not matching the mode, that is, _f(; w_t) ≤_h(; w_t).
This is illustrated in <ref> (left).
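A small numerical check of this relationship, again with a linear stand-in for the network: evaluating the bound at μ = w_t gives the OL objective, evaluating it at μ = v^⋆ gives the tangent model's evidence, and the former never exceeds the latter. The function below is an illustrative transcription of the bound, not the authors' code.

```python
import numpy as np

def bound(mu, w_t, f_wt, J, y, alpha, beta):
    """ELBO on the tangent model's log evidence, evaluated at variational mean mu."""
    n, d_w = len(y), len(w_t)
    resid = y - (f_wt + J @ (mu - w_t))
    H = beta * J.T @ J + alpha * np.eye(d_w)
    _, logdet_H = np.linalg.slogdet(H)
    return (0.5 * d_w * np.log(alpha) + 0.5 * n * np.log(beta)
            - 0.5 * beta * resid @ resid - 0.5 * alpha * mu @ mu
            - 0.5 * logdet_H - 0.5 * n * np.log(2 * np.pi))

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, 0.0, -1.0]) + 0.1 * rng.normal(size=30)
w_t = rng.normal(size=3)                         # not a mode of the loss
alpha, beta = 1.0, 100.0
H = beta * X.T @ X + alpha * np.eye(3)
v_star = w_t - np.linalg.solve(H, beta * X.T @ (X @ w_t - y) + alpha * w_t)

ol = bound(w_t, w_t, X @ w_t, X, y, alpha, beta)     # online Laplace objective
lm = bound(v_star, w_t, X @ w_t, X, y, alpha, beta)  # tangent-model evidence
print(ol <= lm)   # True: the OL objective lower-bounds the LM evidence
```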
Convergence behaviour
Both the OL and LM procedures reach a shared fixed point when a simultaneous optimum of the NN weights and linear model hyperparameters is found, that is, w^⋆ = argmin_w ℓ_f(w; ^⋆) and ^⋆ = argmax _h(; w^⋆) = argmax (; w^⋆, w^⋆). Since the first-order term vanishes here, ∂_w ℓ_f(w^⋆; ) = 0, the ELBO is tight and the linear model evidence _h matches the regular Laplace evidence _f, satisfying the termination condition of <cit.>'s NN re-training algorithm.
At convergence, we have ∂_w ℓ_f(w^⋆; ) = ∂_v ℓ_h(w^⋆; ) = 0. We may thus use similarity between w^⋆ and v^⋆ to check for convergence of the OL and LM procedures. <ref> shows the LM procedure results in the NN and tangent model converging to a shared MAP. This does not occur when performing standard NN training with fixed hyperparameters.
Since the NN and linear model share hyperparameters , their loss gradients match at the linearisation point ∂_w ℓ_f(w_t; ) = ∂_v ℓ_h(w_t; ). Thus, during training, each NN optimisation step brings its parameters closer to the MAP of the current tangent model while also changing the linearisation point, which in turn changes the tangent model. This is not guaranteed to result in a tangent linear model with increased evidence. Indeed, <ref> (right) shows both the OL and LM evidences exhibit non-monotonic training curves. Interestingly, the slack in the OL bound results in smaller hyperparameter steps, as shown in <ref>. These may counteract training instability coming from a rapidly changing tangent model evidence. We find OL updates often lead to more stable convergence to a shared MAP for the NN and tangent model than LM updates. This can be seen in <ref> (left).
§ EXPERIMENTS
This section demonstrates the OL and LM online hyperparameter tuning algorithms discussed in the previous section and contrasts them with standard NN training, that is, with fixed hyperparameters (labelled offline). To this end, we train single hidden layer, 50 hidden unit MLPs on 5 UCI datasets <cit.>: housing, concrete, energy, wine, and yacht.
We train our NNs with full-batch Adam and an exponentially decaying learning rate. For online methods, we train until convergence, updating the hyperparameters after every NN parameter step. The hyperparameter update step is detailed in <ref>. For offline methods, we select the stopping point that minimises RMSE on a validation set consisting of 10% of training data points. We do not use normalisation layers, as these interfere with the Laplace approximation <cit.>.
Convergence of NN and linear model posteriors <ref> (left) shows the Euclidean distance between the NN parameters and tangent model MAP decreasing throughout training for both the LM and OL procedures. Although the distance does not exactly reach 0, the NN and linear model parameter vectors present nearly identical distributions, as shown in <ref>. This does not occur in the offline setting. The LM procedure takes larger hyperparameter steps than OL resulting in some training instability and overall slower convergence, as shown in <ref>. <ref> (right) shows that both the LM and OL evidence estimates attain their maxima very early in training, before the lowest test RMSE is reached. As discussed in <ref>, these metrics target the evidence of a tangent model and may only transfer to the NN once online training has fully converged.
Predictive performance
We obtain predictive distributions using the linearised Laplace method <cit.>. For NNs trained offline, linearised Laplace hyperparameters are chosen post hoc by maximising either the _f or _h objectives. <ref> reports log-likelihood means and standard errors obtained across 10 train-test splits where the test set size is 10% of the whole dataset size.
We find the OL procedure to consistently provide the best performance or be a close second best. The LM procedure performs very similarly to OL on all datasets except energy and yacht, where it performs the worst of all methods. We attribute this to training instability. Offline methods only perform competitively on yacht, despite the use of validation-based stopping. The online methods' improvement over their offline counterparts can mostly be explained by improved RMSE (see <ref>), suggesting online hyperparameter optimisation helps find NN parameters that effectively navigate the bias-variance trade-off.
The full set of experimental results, including hyperparameter trajectories and evidence estimates, is provided in <ref>.
§ ACKNOWLEDGEMENTS
JAL was supported by the University of Cambridge Harding Distinguished Postgraduate Scholars Programme. JA acknowledges support from Microsoft Research, through its PhD Scholarship Programme, and from the EPSRC. JMHL acknowledges support from a Turing AI Fellowship under grant EP/V023756/1.
§ EVOLUTION OF NEURAL NETWORK TRAINING
§ ADDITIONAL PREDICTIVE METRICS
§ RELATED WORK
This work builds on the rich literature of Bayesian linear regression, which was first introduced by <cit.>.
We recommend chapter 3 of <cit.> and chapter 2 of <cit.> for an introduction.
The use of both the linear model evidence and Laplace model evidence for hyperparameter selection are introduced in <cit.>. These techniques are analysed and extended by <cit.> for linear model and by <cit.> for NNs.
Despite their analytic tractability, linear models are held back by a cost of inference cubic in the number of parameters when expressed in primal form, or cubic in the number of observations for the dual (i.e., kernelised or Gaussian process) form. Although we do not deal with this computational intractability in this work, there exist a number of approximations which aim to make inference in linearised NNs scalable <cit.>.
§ MACKAY'S HYPERPARAMETER UPDATE
Given mean μ and predictions ŷ, the hyperparameters α and β can be updated by
α = γ / ‖μ‖_2^2, β = (n − γ) / ‖y − ŷ‖_2^2, γ = d_w − α Tr(H^-1),
to maximise the linear model's evidence <cit.>, where γ is the effective number of dimensions.
For OL, μ = w and ŷ = f(w), and in the case of LM, μ = v^⋆ and ŷ = h(v^⋆).
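In code, this is a pair of one-line updates; the sketch below is an assumed transcription in which μ and ŷ are passed in so that the same function serves both the OL (μ = w, ŷ = f(w)) and LM (μ = v^⋆, ŷ = h(v^⋆)) variants.

```python
import numpy as np

def mackay_update(mu, y, y_hat, H, alpha):
    """One evidence-maximising update of (alpha, beta) given mean mu and predictions y_hat."""
    n, d_w = len(y), len(mu)
    gamma = d_w - alpha * np.trace(np.linalg.inv(H))   # effective number of dimensions
    new_alpha = gamma / (mu @ mu)
    new_beta = (n - gamma) / np.square(y - y_hat).sum()
    return new_alpha, new_beta

# Tiny numerical example with an arbitrary diagonal curvature.
H = np.array([[5.0, 0.0], [0.0, 2.0]])
print(mackay_update(np.array([1.0, 2.0]), np.array([1.0, 0.0, 2.0]),
                    np.array([0.9, 0.1, 1.8]), H, alpha=1.0))
```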
§ IMPLEMENTATION DETAILS
The initial learning rate used for Adam was set to 0.01 and the exponential learning rate decay factor was set to 0.9999.
In the online setting, housing and wine were trained for 1000, concrete for 2000, energy for 30000 and yacht for 10000 steps until convergence.
In the offline setting, the validation RMSE was tracked for a maximum of 30000 epochs.
|
http://arxiv.org/abs/2307.03987v1 | 20230708142557 | A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation | [
"Neeraj Varshney",
"Wenlin Yao",
"Hongming Zhang",
"Jianshu Chen",
"Dong Yu"
] | cs.CL | [
"cs.CL"
] |
Recently developed large language models have achieved remarkable success in generating fluent and coherent text. However, these models often tend to `hallucinate' which critically hampers their reliability.
In this work, we address this crucial problem and propose an approach that actively detects and mitigates hallucinations during the generation process.
Specifically,
we first identify the candidates of potential hallucination leveraging the model's logit output values, check their correctness through a validation procedure, mitigate the detected hallucinations, and then continue
with the generation process.
Through extensive experiments with the `article generation task', we first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
In summary, our work contributes to improving the reliability and trustworthiness of large language models, a crucial step en route to enabling their widespread adoption in real-world applications.
§ INTRODUCTION
Recently developed large language models such as GPT-3 <cit.>, InstructGPT <cit.>, PaLM <cit.>, LLaMA <cit.>, and several others <cit.>
have achieved remarkable performance on a wide range of language understanding tasks.
Furthermore, they have been shown to possess an impressive ability to generate fluent and coherent text.
Despite all these abilities, their tendency to `hallucinate' critically hampers their reliability and limits their widespread adoption in real-world applications.
Hallucination in the context of language refers to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input <cit.>.
These hallucinations can lead to serious consequences such as spreading of misinformation and violation of privacy.
Thus, in this work, we focus on the crucial problem of `addressing' hallucinations of the large language models.
We propose to actively `detect' and `mitigate' hallucinations during the generation process.
This is crucial as we show that a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input.
Thus, actively detecting and mitigating hallucinations is also important to prevent the propagation of hallucinations in the subsequently generated sentences. We divide our approach into two stages, Detection and Mitigation.
In the hallucination detection stage, we first identify the candidates of potential hallucination, i.e., the key `concepts' of the generated sentence.
Next, leveraging the logit output values of the model, we calculate model's `uncertainty' on the identified concepts.
We demonstrate that this uncertainty provides a signal for hallucination.
However, we note that this is an additional signal and not a necessary requirement for our approach.
Then, we check the correctness of the
`uncertain' concepts through a validation procedure where we:
(a) create a query that tests the correctness of the information pertaining to the concept,
(b) retrieve knowledge relevant to the validation question, (c) answer the validation question leveraging the retrieved knowledge, and verify the corresponding information in the generated sentence to detect hallucinations.
This is followed by the hallucination mitigation stage in which we
`repair' the potentially hallucinated sentence using the retrieved knowledge as evidence.
Figure <ref> illustrates the key steps of our approach.
Furthermore, we conduct a systematic and wide study exploring multiple techniques to achieve the objective of each of the steps.
We design an experimental setup where
we prompt the model to write about topics from diverse domains such as sports, politics, music, literature, etc.
Then, we annotate the correctness of the first five generated sentences for each topic.
We first demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, the detection technique achieves a recall of ∼88% and the mitigation technique successfully mitigates 57.6% of the correctly detected hallucinations.
Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average (Figure <ref>).
We conduct a thorough analysis that further
results in several interesting and important findings.
Lastly, we release our code and correctness annotations that will also facilitate a systematic future research in addressing hallucinations.
§ APPROACH
§.§ Overview
We propose to actively detect hallucinations and mitigate them during the generation process.
This is crucial as we show that
a generated sentence is hallucinated more often
when the model has already hallucinated in its previously generated sentences for the input (Section <ref>).
Similarly, a generated sentence is relatively less often hallucinated when the model has not hallucinated in its previously generated sentences.
Thus, actively detecting hallucinations and mitigating them is also important to prevent the propagation of further hallucinations in subsequently generated sentences.
To this end, we iteratively generate sentences through the model and actively detect and mitigate hallucinations.
Figure <ref> illustrates the key steps of our approach.
In section <ref>, we detail the steps of our hallucination detection approach, i.e., identifying the important `concepts' of the generated sentence, i.e., the candidates of potential hallucination (<ref>), calculating model's uncertainty on the concepts using the logit output values (<ref>), and checking the correctness by creating validation query (<ref>), finding relevant knowledge (<ref>), and verifying information leveraging the retrieved knowledge (<ref>).
We describe various techniques to achieve the objective of each of these steps and also elaborate on several important points such as
using a `self-inquiry' method to answer validation questions without using an external knowledge source and trade-off between executing the validation procedure in parallel for all the concepts and in sequential order based on their `uncertainty'.
For each step, we also indicate the most preferred technique with (*) and provide our justification.
In section <ref>, we detail our hallucination mitigation approach.
Specifically, we `repair' the hallucinated sentence by removing or substituting the hallucinated information leveraging the retrieved knowledge as evidence and also utilize the retrieved knowledge as context (prepended to the input) to generate the next sentence.
§.§ Hallucination Detection
§.§.§ Identify Key Concepts
In the first step, we identify the important concepts from the generated sentence.
We identify these concepts because validating the correctness of the entire sentence at once is infeasible, since a sentence may contain several distinct facets that cannot all be validated together.
On the other hand, individually validating the correctness corresponding to the concepts provides opportunities for accurately detecting hallucinations.
Thus, the objective of this step is to identify the candidates of potential hallucination.
We note that a concept or keyphrase is essentially a span of text consisting of one or more words.
We study the following techniques to identify the concepts:
Entity Extraction:
Entities are usually an important part of a sentence, thus, we use an off-the-shelf entity extraction model
to identify the concepts.
A limitation of this method is that a concept need not necessarily be an entity and can be a non-entity span also.
We address this limitation with a keyword extraction model.
Keyword Extraction:
To also identify the non-entity concepts, we explore an off-the-shelf keyword extraction model[https://huggingface.co./ml6team/keyphrase-extraction-kbir-kpcrowd].
This model uses Keyphrase Boundary Infilling with Replacement (KBIR) as its base model and fine-tunes it on the KPCrowd dataset <cit.>.
*Instructing the Model*:
Since state-of-the-art language models perform remarkably well on a wide range of tasks, in this technique, we directly instruct the model to identify the important concepts from the generated sentence.
An important characteristic of this technique is that it doesn't require calling a task-specific tool (entity or keyword extraction model) for this task.
Table <ref> (in Appendix <ref>) illustrates examples of concepts identified using the three techniques.
It shows that the entity extraction model misses many important concepts, while the keyword extraction model also identifies many insignificant ones.
In contrast, the instruction technique successfully identifies all the important concepts.
Moreover, it doesn't require calling a task-specific tool.
Thus, we represent this technique with (*), our preferred technique for this step.
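For illustration, the snippet below sketches how such an instruction could be issued programmatically; the `llm` callable and the prompt wording are placeholders rather than the exact prompts used in this work (those are listed in the appendix).

```python
def identify_concepts(llm, sentence):
    """Ask the model for the important concepts of a generated sentence.

    `llm` is any text-completion callable (prompt -> string); the prompt
    wording here is illustrative, not the exact prompt used in the paper.
    """
    prompt = (
        "Identify all the important keyphrases in the given sentence "
        "and return a comma-separated list.\n"
        f"Sentence: {sentence}\n"
        "Keyphrases:"
    )
    response = llm(prompt)
    return [c.strip() for c in response.split(",") if c.strip()]

# Example with a stubbed model standing in for the real LLM call:
fake_llm = lambda p: "John Russell Reynolds, English physician, neurologist"
print(identify_concepts(fake_llm,
      "John Russell Reynolds was an English physician and neurologist."))
```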
§.§.§ Calculate Model's Uncertainty
GPT-3 <cit.> and several other publicly available models also provide logit output values in their prediction response.
Thus, we study if these logit output values can be utilized to
detect hallucinations.
However, we note that this is an additional source of information and not a necessary requirement for our hallucination detection method as some models that are available only via API calls do not provide these logit output values.
Recall that a concept can consist of more than one token also (note that the model provides logit output values at the level of tokens); thus, we study three different techniques for calculating a probability score for a concept.
Consider a concept consisting of n tokens and having the maximum softmax probabilities as p_1, p_2, p_3, ..., p_n for the n token positions respectively.
We obtain these probabilities by applying the softmax function over the logit values for each token position.
We study the following techniques:
Average of Token Probabilities:
In this technique, we simply take the average of the probabilities of the tokens corresponding to the concept:
score = AVG (p_1, p_2, ..., p_n)
Normalized Product of Token Probabilities:
Here, we take a normalized product of the probabilities of the tokens:
score = (p_1 × p_2 × ... × p_n)^1/n
*Minimum of Token Probabilities*:
Here, we take the minimum of probabilities as the score.
score = MIN (p_1, p_2, ..., p_n)
This is our preferred technique for this step: the other techniques average out the effect of the model's uncertainty across tokens, whereas a low probability on even one token of the concept provides strong evidence that the model is uncertain.
For example, if the model is uncertain on the name of the USA president then its uncertainty on the first token (`Joe') would be high but on the next token (`Biden') would be very low as the token `Joe' is frequently followed by the token `Biden'.
Thus, averaging or normalizing the probabilities will have a limited capability to capture this signal.
Through our experiments (Section <ref>), we show that this score (especially `MIN') indeed provides a signal for hallucination, i.e., the more uncertain a model is on a concept (low probability score), the more likely it is to be hallucinating about that concept.
However, we note that this score is just a signal for hallucination and in no way provides a guarantee for presence of hallucinations.
We utilize this signal and check for hallucinations with respect to the uncertain concepts using our validation procedure (<ref>-<ref>).
In the absence of logit output values:
For models that do not provide the logit output values, all or some heuristically selected concepts (depending on the computational and latency budget of the system) can be passed to the validation stage for detecting hallucinations.
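The three scores are simple reductions over the per-token maximum softmax probabilities of a concept; a minimal sketch follows, where the token probabilities are assumed to have already been extracted from the model's logit output.

```python
import numpy as np

def concept_score(token_probs, method="min"):
    """Score a concept from the max-softmax probabilities of its tokens."""
    p = np.asarray(token_probs, dtype=float)
    if method == "avg":            # average of token probabilities
        return p.mean()
    if method == "norm_prod":      # normalised product of token probabilities
        return np.exp(np.log(p).mean())
    return p.min()                 # minimum of token probabilities (preferred)

probs = [0.31, 0.98]   # e.g. tokens "Joe", "Biden" when the model is unsure of the name
print(concept_score(probs, "avg"), concept_score(probs, "norm_prod"), concept_score(probs, "min"))
```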
§.§.§ Create Validation Question
We start the validation procedure for a concept by creating a question that tests the correctness of the information (in the generated sentence) pertaining to the concept.
We create Yes/No Questions, i.e., questions for which the answer is either a `Yes' or a `No'.
Table <ref> shows examples of validation questions.
For creating these questions, we explore the following two techniques:
Question Generation Tool:
Here, we use an off-the-shelf answer-aware question generation model.
*Instructing the Model*:
Here, we directly instruct the model to create a validation question checking the correctness of the information about the selected concept.
For the same reason as in the concept identification step, this is our preferred technique as it does not require calling a task-specific tool.
We note that instead of Yes/No questions, Wh-questions can also be used for validation.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§.§.§ Find Relevant Knowledge
*Web Search*:
In order to answer the validation question, we retrieve knowledge relevant to it which serves as additional context.
For generality and wide coverage, we use web search (via Bing search API) for retrieving this knowledge.
However, we note that any other search API or knowledge corpus can also be utilized for this purpose.
Self-Inquiry:
We also explore a self-inquiry technique where we directly prompt the model to answer the validation question.
In this technique, the model relies on its parametric knowledge to answer the validation question.
This technique has several drawbacks as compared to web search such as lack of a reliable strategy to extract the parametric knowledge from the model and staleness of the parametric knowledge.
§.§.§ Answer Validation Question
In this step, we prompt the model to answer the validation question (leveraging the retrieved knowledge as context) and verify its response.
If the validation procedure succeeds for all the uncertain concepts then we continue generating the next sentence; otherwise, we interrupt the generation process, mitigate the potential hallucination in the sentence, and then continue generation.
Order of Validation of Concepts:
Validation of different concepts can be done in a sequence (in ascending order of their calculated probability score) or in parallel.
However, running this in parallel would require starting multiple threads which may not be supported by all machines.
Thus, in this work we study only the sequential validation strategy but note that it can be made more efficient by running it in parallel.
We regard this sequential validation as a greedy exiting strategy as we proceed to the mitigation stage on detection of the first potential hallucination.
§.§ Hallucination Mitigation
For mitigating the hallucination in the generated sentence, we instruct the model to repair the generated sentence by either removing or substituting the hallucinated information using the retrieved knowledge as evidence.
Table <ref> shows the instructional prompts for different steps of our approach.
Note: We note that the result of the validation procedure is contingent on the retrieved knowledge and the model's ability to leverage that knowledge in answering the validation question.
Thus, a case is plausible in which the validation procedure reports hallucination even though the sentence is actually not hallucinated.
However, in Section <ref>, we show that our approach performs fairly well on this task.
Moreover, it achieves a very high recall demonstrating its efficacy at detecting hallucinations.
Moreover, in Section <ref>, we show that our mitigation approach does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
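Putting the two stages together, the overall procedure can be sketched as below. Every callable passed in (`generate_sentence`, `identify_concepts`, `validate`, `repair`, ...) stands in for the corresponding step described above; the stubs at the end exist only to show that the control flow runs and are not the prompts or tools used in this work.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Tools:
    """Callables standing in for the pipeline steps (all assumed, not the paper's exact prompts)."""
    generate_sentence: Callable[[str], Tuple[str, Dict[str, float]]]  # context -> (sentence, concept probs)
    identify_concepts: Callable[[str], List[str]]
    concept_prob: Callable[[str, Dict[str, float]], float]            # min token probability of a concept
    validate: Callable[[str, str], bool]                               # supported by retrieved knowledge?
    repair: Callable[[str, str], str]                                  # mitigate the hallucinated sentence

def active_generation(tools: Tools, topic: str, n_sentences: int = 5, threshold: float = 0.6) -> str:
    context = f"Write an article about {topic}.\n"
    sentences: List[str] = []
    for _ in range(n_sentences):
        sentence, probs = tools.generate_sentence(context + " ".join(sentences))
        # Validate uncertain concepts in ascending order of probability (greedy exit).
        for concept in sorted(tools.identify_concepts(sentence),
                              key=lambda c: tools.concept_prob(c, probs)):
            if tools.concept_prob(concept, probs) >= threshold:
                break
            if not tools.validate(sentence, concept):
                sentence = tools.repair(sentence, concept)   # mitigate before continuing
                break
        sentences.append(sentence)
    return " ".join(sentences)

# Trivial stubs just to show the control flow end to end.
stub = Tools(
    generate_sentence=lambda ctx: ("Paris is the capital of Germany.", {"Paris": 0.9, "Germany": 0.2}),
    identify_concepts=lambda s: ["Paris", "Germany"],
    concept_prob=lambda c, probs: probs.get(c, 1.0),
    validate=lambda s, c: c != "Germany",
    repair=lambda s, c: "Paris is the capital of France.",
)
print(active_generation(stub, "Paris", n_sentences=1))
```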
§ EXPERIMENTS AND RESULTS
In this section, we first demonstrate the two findings that motivate our approach (<ref> and <ref>).
Then, we show the individual efficacy of our hallucination detection and mitigation techniques in <ref> and <ref>, respectively.
Finally, in <ref>, we show the effectiveness of the proposed active detection and mitigation approach in addressing hallucinations.
Data and Annotation:
In our experimental setup, we prompt the large language model (GPT-3: text-davinci-003) to write about various topics.
Specifically, we use a total of 150 topics from diverse domains.
Figure <ref> shows the distribution of different domains in our topic set.
In each domain, we include different kinds of topics; for instance, Sports domain consists of sports persons, administrators, teams, and games, Music consists of musicians, songs, music labels, and bands, Politics includes politicians, political parties, and elections, Film & TV includes actors, TV personalities, shows, and movies, History includes historians and events, etc.
For selecting the names of people, we use randomly sampled names from the top 20% of longest articles in WikiBio dataset <cit.> as done in <cit.>.
Similarly, for the other topics, we randomly sample from the longest Wikipedia articles.
This is done to ensure that no obscure or ambiguous concept is selected.
Equipped with the list of topics, we give the following input prompt to the model:
for each topic.
Following this, we (the authors) manually annotate the correctness of the first five sentences generated by the model for each topic.
For annotating the correctness, we look at search results from the web to find the relevant knowledge that either supports or contradicts the information present in the generated sentence.
In some cases, multiple web searches were required to check the correctness of different facets of a sentence.
Furthermore, in a small number of cases, we could not find information supporting or contradicting the information in the generated sentence, we mark it as a case of extrinsic hallucination.
We opt for this expert annotation strategy because, although the annotation task is a simple binary classification, checking the correctness of a given sentence requires considerable effort and cannot reliably be done via crowdsourcing.
In addition to this sentence-level annotation, we also annotate correctness at the concept-level that we will detail in <ref>.
We release both sentence-level and concept-level hallucination annotations that will also facilitate a systematic future research in this direction.
§.§ Motivating Findings
§.§.§ Hallucination Causes Further Hallucination
Recall that we consider the first five sentences generated by the model for each topic and annotate their correctness.
Since the sentences are sequentially generated, we investigate the
relationship between `hallucination in a generated sentence' and `hallucination in the previously generated sentences' for an input.
Since there are two binary variables, there exist four possibilities in this relationship, i.e.,
a sentence is hallucinated and there was hallucination in the previously generated sentences (A), the sentence is not hallucinated and there was hallucination in the previously generated sentences (B), the sentence is hallucinated and there was no hallucination in the previously generated sentences (C), the sentence is not hallucinated and there was no hallucination in the previously generated sentences (D).
For illustration, consider a sample case for sentence 3: the two binary variables are whether sentence 3 is hallucinated and whether there was hallucination in the previously generated sentences (i.e., in sentence 1 OR sentence 2).
Figure <ref> demonstrates this relationship for sentences 2, 3, 4 and 5 aggregated over all the topics in our data.
We do not show this for sentence 1 as there is no previously generated sentence for it.
From this figure, we draw the following inferences:
(a) A > B: Cases A and B correspond to the scenario when there is hallucination in the previously generated sentences. It can be observed that A is considerably greater than B which implies that when there is hallucination in the previously generated sentences, a sentence is hallucinated more often.
Moreover, the gap keeps increasing as the sentence number increases.
(b) A > C: Cases A and C correspond to the scenario when a generated sentence is hallucinated. It can be observed that A is greater than C which implies that a generated sentence is hallucinated more when there is hallucination in the previously generated sentences as compared to when there is no previous hallucination.
(c) D > C: Cases C and D correspond to the scenario when there is no hallucination in the previously generated sentences. Here, D is greater than C which implies that when there is no hallucination in the previously generated sentences, a generated sentence is more often not hallucinated.
(d) D > B: Cases B and D correspond to the scenario when a generated sentence is not hallucinated. D is greater than B which implies that a generated sentence is not hallucinated more when there is no previous hallucination as compared to when there is previous hallucination.
This shows that hallucination in a sentence often results in further hallucinations in the subsequently generated sentences and thus actively detecting and mitigating hallucinations can not only fix the current hallucination but can also prevent its propagation in the subsequently generated sentences.
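For reference, the four cases can be tallied directly from sentence-level annotations as in the sketch below; the `annotations` structure is assumed, mapping each topic to its per-sentence hallucination labels.

```python
from collections import Counter

def tally_cases(annotations):
    """annotations: {topic: [bool, ...]} where True marks a hallucinated sentence.
    Returns counts of cases A-D for every sentence after the first one."""
    counts = Counter()
    case = {(True, True): "A", (False, True): "B", (True, False): "C", (False, False): "D"}
    for labels in annotations.values():
        for i in range(1, len(labels)):
            current, previous = labels[i], any(labels[:i])
            counts[case[(current, previous)]] += 1
    return counts

# Two hypothetical topics with five annotated sentences each.
print(tally_cases({"topic-1": [False, True, True, False, True],
                   "topic-2": [False, False, False, True, False]}))
```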
Next, we demonstrate the utility of logit output values in detecting hallucinations.
§.§.§ Logit Output Values Provide a Signal for Hallucination
In this subsection, we first show the trend of hallucination with the probability score.
Note that this score is calculated using the logit output values.
Then, we demonstrate the benefit of identifying concepts from the generated sentence in detecting hallucinations.
Finally, we compare the efficacy of different probability calculation techniques in detecting hallucinations.
Hallucination vs Probability Score:
In order to study the relationship between logit output values and hallucination, we annotate correctness at concept-level also (in addition to sentence-level annotations described earlier).
Specifically, for each identified concept, we mark whether the information about it in the generated sentence is hallucinated or not.
This can be different from sentence-level annotation as it focuses only on the correctness of the information about the concept in the sentence.
Table <ref> shows examples of both sentence-level and concept-level annotations.
Figure <ref> shows the trend of hallucination with our calculated probability scores at both sentence and concept levels.
For a sentence, we use the minimum across tokens of all its identified concepts as the probability score and for a concept, we use the minimum across all its tokens as the probability score.
It can be observed that as the probability score increases (or uncertainty decreases), the tendency to hallucinate decreases.
This shows that these probability values can be utilized as a signal for hallucination, i.e., the low probability concepts in a generated sentence can be considered as candidates of potential hallucination and their correctness in the generated sentence can be validated for detecting hallucinations.
On average, we observe an absolute difference of ∼0.15 between the probabilities of concepts when the model is hallucinating vs when it is not hallucinating.
Benefit of Identifying Concepts from a Sentence:
Now, we demonstrate the benefit of identifying concepts from a sentence and leveraging the logit output values corresponding to their tokens for detecting hallucinations.
To this end, we plot precision-recall curves for the hallucination detection task corresponding to two methods that use the probabilities calculated from the logit output values.
The blue curve corresponds to the technique in which we use the minimum probability across all tokens of the sentence and the orange curve is for the technique in which we use the minimum over only the tokens of the identified concepts.
Figure <ref> shows the two curves.
The orange curve achieves higher area under the precision-recall curve implying that utilizing the probabilities of the concept tokens provides a stronger signal for hallucination as compared to the probabilities corresponding to all the tokens.
Comparing Probability Calculation Techniques:
Figure <ref> shows the Precision-Recall curves for the hallucination detection task (at concept-level) using the three probability calculation techniques, i.e., Minimum, Average, and Normalized (described in <ref>).
The `Minimum' technique achieves the highest area under the curve and hence is better at the hallucination detection task.
§.§ Hallucination Detection Performance
In this subsection, we demonstrate the hallucination detection performance of various techniques at both sentence and concept-levels.
Self-Inquiry vs Web Search:
Tables <ref> and <ref> show the hallucination detection performance of the self-inquiry and web-search techniques at the sentence level and concept level, respectively.
For sentence-level results, we predict the sentence to be hallucinated if the validation procedure fails on any identified concept.
Note that in these results, we do not leverage the uncertainty score to select concepts for validation, instead we validate all the identified concepts.
We study the relationship of recall with probability thresholds in Figure <ref> (in Appendix).
From the tables, it can be observed that the web-search technique achieves considerably high recall in detecting hallucinations.
Here, we emphasize the high `recall' of the web-search technique because we show that our mitigation approach does not introduce any new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives (<ref>).
Figure <ref> plots the recall of hallucination detection against the probability threshold for the self-inquiry and web-search techniques at both the sentence and concept levels.
Web-search is consistently and considerably better than self-inquiry.
§.§ Hallucination Mitigation Performance
On sentences where our validation procedure (using Web search) reports hallucinations, we apply our mitigation technique.
We note that a sentence which is reported as hallucination can either be actually hallucinated or not hallucinated, i.e., it could also be a false positive.
Table <ref> shows the result of our method.
It successfully mitigates the hallucination on 57.6% of the correctly detected hallucinations (True Positives); we refer to this metric as `success'.
Furthermore, it achieves this with minimal `deterioration' (3.06%), i.e., it incorrectly converts only 3.06% of the non-hallucinated instances to hallucinated ones.
§.§ Active Detection and Mitigation
The two findings in Section <ref> motivate our approach of addressing hallucinations in which we actively detect hallucinations leveraging the logit output values and mitigate them during the generation process to prevent their propagation.
Specifically, using the calculated probability scores, we identify the uncertain concepts and check their correctness using our validation procedure.
We generate one sentence at a time and when our detection method reports hallucination, we fix it using our mitigation approach and continue generating the next sentence.
We demonstrated separate detection and mitigation efficacy in <ref> and <ref>, respectively.
Figure <ref> compares the percentage of hallucination in the output of GPT-3 model and our active detection and mitigation approach.
Our approach reduces the percentage of hallucinations from 47.4% to 14.53%.
In Figure <ref>, we demonstrate this comparison for different categories of hallucination.
It shows that our approach reduces hallucinations for all categories.
§ RELATED WORK
Advancements in the field of natural language processing led to the development of models that possess an impressive ability to generate fluent and coherent text. However, these models are vulnerable to a phenomenon called text hallucination.
Prior work <cit.> has categorized text hallucinations into two classes: Intrinsic (when the generated output contradicts the source content) and Extrinsic (when the generated output cannot be verified from the source content, i.e., it can neither be supported nor contradicted by the source).
One thread of research pertaining to hallucinations has focused on studying different causes of this phenomenon such as training data quality <cit.>, source-target divergence <cit.>, ill-suited modeling <cit.>, and randomness during inference <cit.>.
The other thread focuses on addressing the hallucination problem <cit.>.
<cit.> propose a sampling-based hallucination detection approach in which they first sample multiple responses from the model and then measure the information consistency between the different responses. They posit that when a language model knows a given concept well, the sampled responses are likely to be similar and contain consistent facts; on the other hand, for hallucinated facts, stochastically sampled responses are likely to diverge and may completely contradict one another.
Another recent work <cit.> leverages the LLM's internal state to identify the truthfulness of a statement: using an annotated dataset, they train a separate classifier that takes the LLM's activation values as input and predicts the statement's truthfulness.
<cit.> hypothesize that the randomness of sampling is more harmful to factuality when it is used to generate the latter part of a sentence than the beginning of a sentence and propose a new sampling algorithm named factual-nucleus sampling that dynamically adapts the `nucleus' p along the generation of each sentence.
<cit.> propose an approach motivated by The Society of Mind and multi-agent settings in which multiple models individually propose and jointly debate their responses and reasoning processes to arrive at a common answer.
In our approach, we leverage the logit output values and web search to actively detect and mitigate hallucinations.
§ CONCLUSION
In this work, we proposed an approach that actively `detects' and `mitigates' hallucinations of the large language models.
Through systematic and extensive experiments, we show that our approach successfully reduces the hallucinations of the GPT-3 model from 47.5% to 14.5% on average.
We also demonstrate the individual efficacy of our detection and mitigation techniques.
Specifically, our detection technique achieves a high recall and our mitigation technique successfully mitigates majority of the correctly detected hallucinations.
Notably, the mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
Overall, our work contributes to improving the reliability and trustworthiness of text generation systems, a crucial step en route to enabling their widespread adoption in real-world applications.
§ APPENDIX
§ APPROACH
Table <ref> shows the instructional prompts used for different steps of our approach.
We note that these techniques are the preferred techniques as they do not require calling an external task-specific tool to achieve the corresponding objectives.
§.§ Identify Key Concepts
Table <ref> shows examples of concepts identified using the three methods, i.e., Entity Extraction, Keyword Extraction, and Instructing the Model.
It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also.
In contrast, the instruction technique successfully identifies the majority of the important concepts.
§.§ Create Validation Question
Table <ref> shows examples of validation questions corresponding to each concept created via instructing the model technique.
It shows examples of both the question types, i.e., Yes/No and Wh questions.
We prefer Yes/No questions as it is relatively easier to check the answer for these questions.
We leave exploring Wh-questions for validation for future work.
§ EVALUATION DATA
Table <ref> shows the statistics of the sentences generated by the GPT-3 (text-davinci-003 with temperature 0) model.
A sentence has ∼18 words on average, and each sentence contains ∼3.2 key concepts identified by our instruction technique.
Table <ref> shows examples of sentence-level and concept-level hallucination annotations.
§ RECALL OF HALLUCINATION DETECTION VS PROBABILITY THRESHOLD
Figure <ref> compares recall of hallucination detection for self-inquiry and web search techniques at different probability thresholds.
Web search considerably outperforms self-inquiry at all thresholds.
|
http://arxiv.org/abs/2307.05973v1 | 20230712074048 | VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models | [
"Wenlong Huang",
"Chen Wang",
"Ruohan Zhang",
"Yunzhu Li",
"Jiajun Wu",
"Li Fei-Fei"
] | cs.RO | [
"cs.RO",
"cs.AI",
"cs.CL",
"cs.CV",
"cs.LG"
] |
Large language models (LLMs) are shown to possess a wealth of actionable knowledge that can be extracted for robot manipulation in the form of reasoning and planning. Despite the progress, most approaches still rely on pre-defined motion primitives to carry out the physical interactions with the environment, which remains a major bottleneck.
In this work, we aim to synthesize robot trajectories, i.e., a dense sequence of 6-DoF end-effector waypoints, for a large variety of manipulation tasks given an open-set of instructions and an open-set of objects.
We achieve this by first observing that LLMs excel at inferring affordances and constraints given a free-form language instruction. More importantly, by leveraging their code-writing capabilities, they can interact with a visual-language model (VLM) to compose 3D value maps to ground the knowledge into the observation space of the agent.
The composed value maps are then used in a model-based planning framework to zero-shot synthesize closed-loop robot trajectories with robustness to dynamic perturbations.
We further demonstrate how the proposed framework can benefit from online experiences by efficiently learning a dynamics model for scenes that involve contact-rich interactions.
We present a large-scale study of the proposed method in both simulated and real-robot environments, showcasing the ability to perform a large variety of everyday manipulation tasks specified in free-form natural language.
Project website: https://voxposer.github.iovoxposer.github.io.
Correspondence to Wenlong Huang [email protected].
§ INTRODUCTION
Language is a compressed medium through which humans distill and communicate their knowledge and experience of the world. Large language models (LLMs) have emerged as a promising approach to capture this abstraction, learning to represent the world through projection into language space <cit.>. While these models are believed to internalize generalizable knowledge in the text form, it remains a question about how to use such generalizable knowledge to enable embodied agents to physically act in the real world.
We look at the problem of grounding abstract language instructions (e.g., “set up the table”) in the robot actions <cit.>. Prior works have leveraged lexical analysis to parse the instructions <cit.>, while more recently language models have been used to decompose the instructions into a textual sequence of steps <cit.>.
However, to enable physical interactions with the environment, existing approaches typically rely on a repertoire of manually-designed or pre-trained motion primitives (i.e., skills) that may be invoked by an LLM or a planner, and this reliance on individual skill acquisition is often considered a major bottleneck of the system due to the lack of large-scale robotic data. The question then arises: how can we leverage the wealth of internalized knowledge of LLMs at the even fine-grained action level for robots, without requiring laborious data collection or manual designs for each individual primitive?
In addressing this challenge, we first note that it is infeasible for LLMs to directly output control actions in text, which are typically driven by high-frequency control signals in high-dimensional space.
However, we find that LLMs excel at inferring language-conditioned affordances and constraints, and by leveraging their code-writing capabilities, they can compose dense 3D voxel maps that ground them in the visual space by orchestrating perception calls (e.g., via CLIP <cit.> or open-vocabulary detectors <cit.>) and array operations (e.g., via NumPy <cit.>).
For example, given an instruction “open the top drawer and watch out for the vase”, LLMs can be prompted to infer: 1) the top drawer handle should be grasped, 2) the handle needs to be translated outwards, and 3) the robot should stay away from the vase. While these are expressed in the text form, LLMs can generate Python code to invoke perception APIs to obtain spatial-geometric information of relevant objects or parts (e.g., “handle”) and then manipulate the 3D voxels to prescribe reward or cost at relevant locations in observation space (e.g., the target location of the handle is assigned a high value while the surrounding of the vase is assigned low values).
Finally, the composed value maps can serve as objective functions for motion planners to directly synthesize robot trajectories that achieve the given instruction
[The approach also bears resemblance and connections to potential field methods in path planning <cit.> and constrained optimization methods in manipulation planning <cit.>.]
, without requiring additional training data for each task or for the LLM. An illustration diagram and a subset of tasks we considered are shown in Fig. <ref>.
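To ground the drawer-and-vase example, the snippet below sketches the kind of program an LLM might compose: perception calls return object locations, and NumPy operations write a target (affordance) map and an avoidance (constraint) map over a voxel grid. The `detect` helper, grid size, and offsets are illustrative placeholders, not VoxPoser's actual APIs.

```python
import numpy as np

GRID = (100, 100, 100)                      # workspace discretised into voxels

def detect(name):
    """Placeholder for an open-vocabulary perception call; returns a voxel index."""
    return {"top drawer handle": (60, 50, 70), "vase": (30, 40, 55)}[name]

def sphere_mask(center, radius):
    idx = np.indices(GRID)
    return np.linalg.norm(idx - np.array(center).reshape(3, 1, 1, 1), axis=0) <= radius

# Affordance map: high value at the handle position translated outwards (+x, 20 voxels here).
affordance = np.zeros(GRID)
hx, hy, hz = detect("top drawer handle")
affordance[min(hx + 20, GRID[0] - 1), hy, hz] = 1.0

# Constraint map: low value (cost) in the region surrounding the vase.
constraint = np.zeros(GRID)
constraint[sphere_mask(detect("vase"), radius=10)] = -1.0

value_map = affordance + constraint          # composed 3D value map handed to the planner
print(value_map.max(), value_map.min())
```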
We term this approach VoxPoser, a formulation that extracts affordances and constraints from LLMs to compose voxel value maps in 3D observation space for guiding robots to interact with the environment. In particular, the method leverages LLMs to compose the key aspects for generating robot trajectories rather than attempting to train policies on robotic data that are often of limited amount or variability, effectively achieving generalization for open-set instructions in a zero-shot manner.
By integrating it into a model-based planning framework, we demonstrate closed-loop executions via model predictive control (MPC) that is robust to external disturbances. We further showcase how VoxPoser can also benefit from limited online interactions to efficiently learn a dynamics model that involves contact-rich interactions.
Our contributions are summarized as follows:
* We present VoxPoser, a method for extracting affordances and constraints for robotic manipulation from pre-trained language models, which requires no additional training and generalizes to open-set instructions.
* Using VoxPoser to represent task objectives, we demonstrate that synthesized trajectories can be robustly executed in closed-loop via MPC for a large variety of manipulation tasks in both simulated and real-world environments.
* We show the applicability of VoxPoser to benefit from only a limited amount of online interactions by efficiently learning a dynamics model, e.g., learning to open a door with a lever handle in under 3 minutes.
§ RELATED WORKS
Grounding Language Instructions.
Language grounding has been studied extensively both in terms of intelligent agents <cit.> and of robotics <cit.>, where language can be used as a tool for compositional goal specification <cit.>, semantic anchor for training multi-modal representation <cit.>, or as an intermediate substrate for planning and reasoning <cit.>. Prior works have looked at using classical tools such as lexical analysis, formal logic, and graphical models to interpret language instructions <cit.>. More recently, end-to-end approaches, popularized by successful applications to offline domains <cit.>, have been applied to directly ground language instructions in robot interactions by learning from data with language annotations, spanning from model learning <cit.>, imitation learning <cit.>, to reinforcement learning <cit.>.
Most closely related to our work is <cit.>, where an end-to-end cost predictor is optimized via supervised learning to map language instructions to 2D costmaps, which are used to steer a motion planner to generate preferred trajectories in a collision-free manner. In contrast, we rely on pre-trained language models for their open-world knowledge and tackle the more challenging robotic manipulation in 3D.
Language Models for Robotics.
Leveraging pre-trained language models for embodied applications is an active area of research, where a large body of works focus on planning and reasoning with language models <cit.>. To allow language models to perceive the physical environments, textual descriptions of the scene <cit.> or perception APIs <cit.> can be given, vision can be used during decoding <cit.> or can be directly taken as input by multi-modal language models <cit.>. In addition to perception, to truly bridge the perception-action loop, an embodied language model must also know how to act, which typically is achieved by a library of pre-defined primitives. <cit.> showed that LLMs exhibit behavioral commonsense that can be useful for low-level control.
Despite the promising signs, hand-designed motion primitives are still required, and while LLMs are shown to be capable of composing sequential policy logic, it remains unclear whether composition can happen at the spatial level.
A related line of work has also explored using LLMs for reward specification in the context of reward design <cit.>, exploration in reinforcement learning <cit.>, and human preference learning <cit.>. In contrast, we focus exclusively on grounding the reward generated by LLMs in the 3D observation space of the robot, which we identify as most useful for manipulation tasks.
Learning-based Trajectory Optimization.
Many works have explored leveraging learning-based approaches for trajectory optimization.
While the literature is vast, these approaches can be broadly categorized into those that learn the models <cit.> and those that learn the cost/reward or constraints <cit.>, where data are typically collected from in-domain interactions.
To enable generalization in the wild, a parallel line of works has explored learning task specification from large-scale offline data <cit.>, particularly egocentric videos <cit.>, or leveraging pre-trained foundation models <cit.>.
The learned cost functions are then used by reinforcement learning <cit.>, imitation learning <cit.>, or trajectory optimization <cit.> to generate robot actions.
In this work, we leverage LLMs for in-the-wild cost specification without requiring in-domain interaction data and with better generalization.
Compared to prior works that leverage foundation models, we ground the cost directly in the 3D observation space with real-time visual feedback, which makes VoxPoser amenable to closed-loop MPC that is robust during execution.
§ METHOD
We first provide the formulation of VoxPoser as an optimization problem (Sec. <ref>). Then we describe how VoxPoser can be used as a general zero-shot framework to map language instructions to 3D value maps (Sec. <ref>). We subsequently demonstrate how trajectories can be synthesized in closed-loop for robotic manipulation (Sec. <ref>). While zero-shot in nature, we also demonstrate how VoxPoser can learn from online interactions to efficiently solve contact-rich tasks (Sec. <ref>).
§.§ Problem Formulation
Consider a manipulation problem given as a free-form language instruction ℒ (e.g., “open the top drawer”). However, generating robot trajectories according to ℒ can be difficult because ℒ may be arbitrarily long-horizon or under-specified (i.e., requires contextual understanding). Instead, we focus on individual phases (sub-tasks) of the problem ℓ_i that distinctively specify a manipulation task (e.g., “grasp the drawer handle”, “pull open the drawer”), where the decomposition 𝒯→ (ℓ_1, ℓ_2, …, ℓ_n) is given by a high-level planner (e.g., an LLM or a search-based planner)
[Note that the decomposition and sequencing of these sub-tasks are also done by LLMs in this work, though we do not investigate this aspect extensively as it is not the focus of our contributions.].
The central problem investigated in this work is to generate a motion trajectory τ_i^𝐫 for robot 𝐫 and each manipulation phase described by instruction ℓ_i. We represent τ_i^𝐫 as a sequence of dense end-effector waypoints to be executed by an Operational Space Controller <cit.>, where each waypoint consists of a desired 6-DoF end-effector pose, end-effector velocity, and gripper action. However, it is worth noting that other representations of trajectories, such as joint space trajectories, can also be used. Given each sub-task ℓ_i, we formulate this as an optimization problem defined as follows:
min_τ_i^𝐫{ℱ_task(𝐓_i, ℓ_i) + ℱ_control(τ_i^𝐫) } subject to 𝒞(𝐓_i)
where 𝐓_i is the evolution of the environment state, τ_i^𝐫⊆𝐓_i is the robot trajectory, and 𝒞(𝐓_i) denotes the relevant dynamics and kinematics constraints. ℱ_task scores the extent to which 𝐓_i completes the instruction ℓ_i, while ℱ_control specifies the control costs, e.g., encouraging τ_i^𝐫 to minimize total control effort or time.
By solving this optimization problem for each sub-task ℓ_i, we obtain a sequence of robot trajectories that collectively achieve the overall task specified by instruction ℒ.
§.§ Grounding Language Instruction via VoxPoser
Calculating ℱ_task with respect to free-form language instructions is extremely challenging, not only because of the rich space of semantics language can convey but also because of the lack of robot data labeled with 𝐓 and ℓ. However, we provide a critical observation that a large number of tasks can be characterized by a voxel value map
𝐕∈ℝ^w × h × d
in the observation space of the robot, which guides the motion of an “entity of interest” in the scene, such as the robot end-effector, an object, or an object part.
For example, consider the task “open the top drawer” and its first sub-task “grasp the top drawer handle” (inferred by LLMs) in Fig. <ref>. The “entity of interest” is the robot end-effector, and the voxel value map should reflect the attraction toward the drawer handle. By further commanding “watch out for the vase”, the map can also be updated to reflect the repulsion from the vase.
We denote the “entity of interest” as 𝐞 and its trajectory as τ^𝐞.
Using this voxel value map for a given instruction ℓ_i, ℱ_task can be approximated by accumulating the values of 𝐞 traversing through 𝐕_i, formally calculated as ℱ_task = -∑_j=1^|τ_i^𝐞|𝐕(p^𝐞_j), where p^𝐞_j∈ℕ^3 is the discretized (x,y,z) position of 𝐞 at step j.
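To make this concrete, the following minimal sketch (our own illustration, not the released implementation) accumulates voxel values along a discretized entity trajectory to approximate ℱ_task:

```python
import numpy as np

def task_cost(value_map: np.ndarray, waypoints: np.ndarray) -> float:
    """Approximate F_task = -sum_j V(p_j) for a discretized entity trajectory.

    value_map : (w, h, d) array of composed voxel values.
    waypoints : (T, 3) integer array of (x, y, z) voxel positions of the
                entity of interest at each step j.
    """
    idx = waypoints.astype(int)
    values = value_map[idx[:, 0], idx[:, 1], idx[:, 2]]
    return -float(values.sum())

# Toy example: a 100^3 map with a single attractive voxel.
V = np.zeros((100, 100, 100))
V[50, 50, 50] = 1.0
path = np.array([[48, 50, 50], [49, 50, 50], [50, 50, 50]])
print(task_cost(V, path))  # -1.0: cost decreases as the path reaches high-value voxels
```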
Notably, we observe that large language models, by being pre-trained on Internet-scale data, exhibit capabilities not only to identify the “entity of interest” but also to compose value maps that accurately reflect the task instruction by writing Python programs. Specifically, when an instruction is given as a comment in the code, LLMs can be prompted to 1) call perception APIs (which invoke vision-language models (VLMs) such as an open-vocabulary detector <cit.>) to obtain spatial-geometrical information about relevant objects, 2) generate NumPy operations to manipulate 3D arrays, and 3) prescribe precise values at relevant locations.
We term this approach VoxPoser. Concretely, we aim to obtain a voxel value map 𝐕_i^t = VoxPoser(𝐨^t, ℓ_i) by prompting an LLM and executing the generated code via a Python interpreter, where 𝐨^t is the RGB-D observation at time t and ℓ_i is the current instruction.
Additionally, because 𝐕 is often sparse, we densify the voxel maps via smoothing operations, as they encourage smoother trajectories optimized by motion planners.
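As an illustration of the kind of program an LLM might emit for a sub-task such as “grasp the top drawer handle” with the added constraint “watch out for the vase”, the sketch below composes an affordance and an avoidance map and densifies them with a Gaussian filter; the detect call and the smoothing parameters are illustrative assumptions rather than the exact interface exposed to the LLM.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (100, 100, 100)  # voxel grid resolution of the observation space

def compose_value_maps(detect):
    """detect(name) -> dict with a 'position' key in voxel coordinates (assumed interface)."""
    affordance_map = np.zeros(GRID)
    avoidance_map = np.zeros(GRID)

    # Attract the end-effector toward the top drawer handle.
    hx, hy, hz = detect("top drawer handle")["position"]
    affordance_map[hx, hy, hz] = 1.0

    # Repulse from the vase mentioned in the instruction.
    vx, vy, vz = detect("vase")["position"]
    avoidance_map[vx, vy, vz] = 1.0

    # Densify the sparse maps so the motion planner sees smooth gradients.
    affordance_map = gaussian_filter(affordance_map, sigma=3.0)
    avoidance_map = gaussian_filter(avoidance_map, sigma=5.0)
    return affordance_map, avoidance_map
```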
Additional Trajectory Parametrization.
The above formulation of VoxPoser uses LLMs to compose 𝐕: ℕ^3 →ℝ, mapping discretized coordinates in voxel space to a real-valued “cost”, which we can use to optimize a path consisting only of the positional terms.
To extend to SE(3) poses, we can also use LLMs to compose rotation maps 𝐕_r: ℕ^3 →SO(3) at coordinates relevant to the task objectives (e.g., “end-effector should face the support normal of the handle”). Similarly, we further compose gripper maps 𝐕_g: ℕ^3 →{0, 1} to control gripper open/close and velocity maps 𝐕_v: ℕ^3 →ℝ to specify target velocities. Note that while these additional trajectory parametrizations are not mapped to a real-valued “cost”, they can also be factored in the optimization procedure (Equation <ref>) to parametrize the trajectories.
§.§ Zero-Shot Trajectory Synthesis with VoxPoser
After obtaining the task cost ℱ_task, we can now approach the full problem defined in Equation <ref> to plan a motion trajectory.
We use simple zeroth-order optimization by randomly sampling trajectories and scoring them with the proposed objective.
The optimization is further implemented in a model predictive control framework that iteratively replans the trajectory at every step using the current observation to robustly execute the trajectories even under dynamic disturbances
[Although involving an LLM in the loop, closed-loop execution is possible because the generated code remains the same throughout task ℓ_i, which allows us to cache its output for the current task.]
, where either a learned or a physics-based model can be used. However, because VoxPoser effectively provides “dense rewards” in the observation space and we are able to replan at every step, we surprisingly find that the overall system can already achieve a large variety of the manipulation tasks considered in this work even with simple heuristics-based models.
Since some value maps are defined over the “entity of interest”, which may not necessarily be the robot, we also use the dynamics model to find the robot trajectory needed to minimize the task cost (i.e., what interactions between the robot and the environment achieve the desired object motions).
Because our method is agnostic to specific instantiations of motion planning, we leave extended discussion of our implementation to Sec. <ref>.
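A minimal sketch of this closed-loop procedure is given below; the samplers and the controller call are placeholders, and the control-cost weighting is an arbitrary illustrative choice:

```python
import numpy as np

def plan_trajectory(value_map, start_pos, sample_trajectories, n_samples=256):
    """Zeroth-order optimization: sample candidate trajectories and keep the best-scoring one."""
    best_traj, best_cost = None, np.inf
    for traj in sample_trajectories(start_pos, n_samples):   # each traj: (T, 3) voxel positions
        idx = traj.astype(int)
        cost = -value_map[idx[:, 0], idx[:, 1], idx[:, 2]].sum()            # F_task
        cost += 0.01 * np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()  # F_control (path length)
        if cost < best_cost:
            best_traj, best_cost = traj, cost
    return best_traj

def run_mpc(get_observation, compose_map, sample_trajectories, execute_waypoint, n_steps=50):
    """Model predictive control: replan from the latest observation and execute one waypoint."""
    for _ in range(n_steps):
        obs = get_observation()
        value_map = compose_map(obs)   # cached LLM-generated code re-evaluated on the new observation
        traj = plan_trajectory(value_map, obs["ee_voxel_pos"], sample_trajectories)
        execute_waypoint(traj[0])
```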
§.§ Efficient Dynamics Learning with Online Experiences
While Sec. <ref> presents a zero-shot framework for synthesizing trajectories for robot manipulation, herein we demonstrate how VoxPoser can also benefit from online experiences by efficiently learning a dynamics model.
Specifically, consider the standard setup where a robot interleaves between 1) collecting environment transition data (𝐨_t , 𝐚_t , 𝐨_t+1), where 𝐨_t is the environment observation at time t and 𝐚_t = MPC(𝐨_t), and 2) training a dynamics model 𝐠_θ parametrized by θ by minimizing the L2 loss between the predicted next observation 𝐨̂_t+1 and 𝐨_t+1. A critical component that determines the learning efficiency of the dynamics model is the action sampling distribution P(𝐚_t | 𝐨_t) in MPC, which typically is a random distribution over the full action space 𝐀. This is often inefficient when the goal is to solve a particular task, such as opening a door, because most actions do not interact with the relevant objects in the scene (i.e., the door handle) nor do they necessarily interact with the objects in a meaningful way (i.e., pressing down the door handle). Since VoxPoser synthesizes robot trajectories with LLMs, which have a wealth of commonsense knowledge, the zero-shot synthesized trajectory τ^𝐫_0 can serve as a useful prior to bias the action sampling distribution P(𝐚_t | 𝐨_t, τ^𝐫_0), which can significantly speed up the learning process.
In practice, this can be implemented by only sampling actions in the vicinity of τ^𝐫_0 by adding small noise ε to encourage local exploration instead of exploring in the full action space 𝐀.
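The sketch below illustrates the idea: actions are sampled by perturbing the zero-shot waypoints with small Gaussian noise, and the collected transitions are used to fit the dynamics model. The environment interface, noise scale, and model fitter are placeholders.

```python
import numpy as np

def sample_action(prior_waypoint, sigma=0.01):
    """Sample near the zero-shot trajectory tau_0 rather than over the full action space A."""
    return prior_waypoint + np.random.normal(0.0, sigma, size=prior_waypoint.shape)

def learn_dynamics(env, tau_0, fit_dynamics, n_rounds=10):
    """Alternate between biased data collection and dynamics-model fitting (L2 regression)."""
    transitions, model = [], None
    for _ in range(n_rounds):
        obs = env.reset()
        for waypoint in tau_0:
            action = sample_action(waypoint)
            next_obs = env.step(action)            # placeholder: returns the next observation
            transitions.append((obs, action, next_obs))
            obs = next_obs
        model = fit_dynamics(transitions)          # e.g. an MLP trained with an L2 loss
    return model
```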
§ EXPERIMENTS AND ANALYSIS
We first discuss our design choices for implementing VoxPoser in Sec. <ref>. Then we validate directly on real-world systems whether VoxPoser can perform everyday manipulation tasks in Sec. <ref>. We also present a detailed quantitative study of the generalization performance of VoxPoser compared to learned and LLM-based baselines in simulation in Sec. <ref>. We further demonstrate how VoxPoser can benefit from only limited online experience to learn a dynamics model for contact-rich tasks in Sec. <ref>. Finally, we study the sources of error in the overall system and discuss how improvements can be made in Sec. <ref>.
§.§ Implementation of VoxPoser
Herein we discuss our instantiation of VoxPoser. We focus our discussion on design choices shared between the simulated and real-world domains. More details about the environment setup in each domain can be found in the Appendix.
LLMs and Prompting.
We follow the prompting structure of <cit.>, which recursively calls LLMs using their own generated code, where each language model program (LMP) is responsible for a unique functionality (e.g., processing perception calls). We use GPT-4 <cit.> via the https://openai.com/api/OpenAI API. Prompts are listed in the Appendix.
VLMs and Perception.
Given an object/part query from the LLM, we first invoke the open-vocabulary detector OWL-ViT <cit.> to obtain a bounding box, feed it into Segment Anything <cit.> to obtain a mask, and track the mask using the video tracker XMEM <cit.>. The tracked mask is used with the RGB-D observation to reconstruct the object/part point cloud.
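The pipeline can be summarised by the sketch below; detect_box, segment_mask, and track_mask are hypothetical wrappers standing in for the OWL-ViT, Segment Anything, and XMEM calls, whose actual interfaces are defined by the respective libraries.

```python
import numpy as np

def object_point_cloud(query, rgb, depth, intrinsics,
                       detect_box, segment_mask, track_mask):
    """Hypothetical wrapper chain: detection -> segmentation -> tracking -> 3D point cloud."""
    box = detect_box(rgb, query)      # stand-in for the OWL-ViT open-vocabulary detector
    mask = segment_mask(rgb, box)     # stand-in for Segment Anything
    mask = track_mask(mask)           # stand-in for XMEM tracking across video frames

    # Back-project the masked depth pixels into a 3D point cloud.
    v, u = np.nonzero(mask)
    z = depth[v, u]
    fx, fy, cx, cy = intrinsics
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)
```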
Value Map Composition.
We define the following types of value maps: affordance, avoidance, end-effector velocity, end-effector rotation, and gripper action. Each type uses a different LMP, which takes in an instruction and outputs a voxel map of shape (100, 100, 100, k), where k differs for each value map (e.g., k=1 for affordance and avoidance as they specify cost, and k=4 for rotation as it specifies SO(3)). We apply a Euclidean distance transform to affordance maps and Gaussian filters to avoidance maps. On top of the value map LMPs, we define two high-level LMPs to orchestrate their behaviors: the planner takes user instruction ℒ as input (e.g., “open drawer”) and outputs a sequence of sub-tasks ℓ_1:N, and the composer takes in a sub-task ℓ_i and invokes the relevant value map LMPs with detailed language parameterization.
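Schematically, the orchestration reduces to the control flow below; the dictionary keys and the execute call are illustrative stand-ins for the actual LMP outputs and the trajectory-execution API.

```python
def run_instruction(user_instruction, planner, composer, execute):
    """planner and composer are LLM-backed language model programs (LMPs)."""
    for sub_task in planner(user_instruction):     # e.g. "grasp the drawer handle"
        maps = composer(sub_task)                  # invokes the relevant value map LMPs
        execute(entity=maps["entity_of_interest"],
                affordance_map=maps["affordance"],
                avoidance_map=maps.get("avoidance"),
                rotation_map=maps.get("rotation"),
                gripper_map=maps.get("gripper"),
                velocity_map=maps.get("velocity"))
```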
Motion Planner.
We consider only affordance and avoidance maps in the planner optimization, which finds a sequence of collision-free end-effector positions p_1:N∈ℝ^3 using greedy search. Then we enforce other parametrization at each p by the remaining value maps (e.g., rotation map, velocity map). The cost map used by the motion planner is computed as the negative of the weighted sum of normalized affordance and avoidance maps with weights 2 and 1. After a 6-DoF trajectory is synthesized, the first waypoint is executed, and then a new trajectory is re-planned at 5 Hz.
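A sketch of the cost-map composition and a single greedy search step is shown below; the min–max normalisation and the sign handling of the avoidance term are assumptions about conventions, with high cost marking regions to avoid.

```python
import numpy as np

def planner_cost_map(affordance_map, avoidance_map, w_aff=2.0, w_avoid=1.0):
    """Combine normalised maps with weights 2 and 1; high affordance lowers the cost,
    high avoidance raises it (assumed sign convention)."""
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-9)
    return -w_aff * norm(affordance_map) + w_avoid * norm(avoidance_map)

def greedy_step(cost_map, current_idx):
    """Move to the 26-connected neighbouring voxel with the lowest cost."""
    x, y, z = current_idx
    best, best_cost = current_idx, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                if dx == dy == dz == 0:
                    continue
                n = (x + dx, y + dy, z + dz)
                if all(0 <= n[i] < cost_map.shape[i] for i in range(3)) and cost_map[n] < best_cost:
                    best, best_cost = n, cost_map[n]
    return best
```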
Environment Dynamics Model.
For tasks in which the specified “entity of interest” is the robot, we assume identity environment dynamics while replanning at every step to account for the latest observation. For tasks in which the “entity of interest” is an object, we study only a planar pushing model parametrized by contact point, push direction, and push distance. The heuristic-based dynamics model translates an input point cloud along the push direction by the push distance. We use MPC with random shooting to optimize for the action parameters. Then a pre-defined pushing primitive is executed based on the action parameters. However, we note that a primitive is not necessary when action parameters are defined over the end-effector or joint space of the robot, which would likely yield smoother trajectories but takes more time for optimization.
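A sketch of this heuristic model and the random-shooting search over its action parameters is given below; the score function, sampling ranges, and planar-direction assumption are illustrative choices.

```python
import numpy as np

def push_dynamics(point_cloud, push_dir, push_dist):
    """Heuristic model: translate the whole object point cloud along the push direction."""
    direction = push_dir / (np.linalg.norm(push_dir) + 1e-9)
    return point_cloud + push_dist * direction

def random_shooting(point_cloud, score_fn, n_samples=512, max_dist=0.10):
    """Sample (contact point, direction, distance), roll each through the model, keep the best."""
    best_params, best_score = None, -np.inf
    for _ in range(n_samples):
        contact = point_cloud[np.random.randint(len(point_cloud))]   # candidate contact point
        theta = np.random.uniform(0.0, 2.0 * np.pi)
        direction = np.array([np.cos(theta), np.sin(theta), 0.0])    # planar push direction
        dist = np.random.uniform(0.0, max_dist)
        predicted = push_dynamics(point_cloud, direction, dist)
        score = score_fn(predicted)   # e.g. accumulated value-map score of the pushed object
        if score > best_score:
            best_params, best_score = (contact, direction, dist), score
    return best_params   # used to parametrize the pre-defined pushing primitive
```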
§.§ VoxPoser for Everyday Manipulation Tasks
We study whether VoxPoser can zero-shot synthesize robot trajectories to perform everyday manipulation tasks. We set up a real-world tabletop environment with a Franka Emika Panda robot. More details can be found in Appendix <ref>. While the proposed method can generalize to an open set of instructions and an open set of objects as shown in Fig. <ref>, we pick 5 representative tasks to provide quantitative evaluations. We also show additional qualitative results in Fig. <ref> that include environment rollouts and value map visualizations. As shown in Table <ref>, we find that VoxPoser can effectively synthesize robot trajectories for everyday manipulation tasks with a high average success rate. In particular, by leveraging the rich world knowledge in LLMs, we can extract language-conditioned affordances for diverse scenes and objects. For example, LLMs can infer that a bottle can be opened by turning counter-clockwise around the z-axis. Subsequently, VoxPoser can ground this knowledge in the observation space, which directly guides motion planners to complete the tasks. We further compare to a variant of Code as Policies <cit.> that uses LLMs to parameterize a pre-defined list of simple primitives. We find that, compared to chaining sequential policy logic, the ability to compose value maps spatially while considering other constraints under a joint optimization scheme is a more flexible formulation, unlocking the possibility for more manipulation tasks and leading to more robust execution. In particular, by leveraging the composed spatial maps in MPC, VoxPoser can effectively recover from external disturbances, such as moving targets/obstacles and the drawer being pulled open after it has been closed by the robot.
§.§ Generalization to Unseen Instructions and Attributes
Herein we investigate the generalization capabilities of VoxPoser. To provide rigorous quantitative results, we set up a simulated environment that mirrors our real-world setup <cit.>, but with a fixed list of objects (consisting of a cabinet along with 10 colored blocks and lines) and a fixed list of 13 templated instructions (e.g., “push [obj] to [pos]”), where [obj] and [pos] are attributes that are randomized over a pre-defined list. The instructions and attributes are divided into seen and unseen groups, where seen instructions/attributes may appear in the prompt (or in the training data for supervised baselines). We further group them into 2 categories, where “Object Interactions” refers to tasks that require interactions with objects (instead of collision-free path planning), and “Spatial Composition” refers to tasks that require robots to account for spatial constraints in the environment in their trajectories (e.g., moving slower near a particular object).
For baselines, we ablate the two components of VoxPoser, the LLM and the motion planner, by comparing to a variant of <cit.> that combines an LLM with primitives and to a variant of <cit.> that learns a U-Net <cit.> to synthesize costmaps for motion planning. Table <ref> shows the success rates averaged across 20 episodes per task.
VoxPoser outperforms both baselines in both test categories, especially on unseen instructions or attributes. Compared to cost specification using a U-Net trained via supervised learning, LLMs generalize better by explicitly reasoning about language-conditioned affordances and constraints. On the other hand, grounding the LLM knowledge in the observation space through value map composition, rather than directly specifying primitive parameters, offers consistent performance that generalizes beyond the examples given in the prompt.
§.§ Efficient Dynamics Learning with Online Experiences
Despite having zero-shot generalization to unseen instructions, we investigate how VoxPoser can also benefit from online interactions in tasks that involve more challenging contact-rich dynamics, as many behavioral nuances may not be present in the LLMs.
To this end, we investigate a suite of simulated tasks involving interactions with common articulated objects, including opening doors, fridges, and windows.
We hypothesize that while these are challenging tasks for autonomous agents due to difficult exploration, trajectories zero-shot synthesized by VoxPoser would provide useful hints for exploration (e.g., “the handle needs to be pressed down first in order to open a door”).
Specifically, we first synthesize k different trajectories using VoxPoser, each represented as a sequence of end-effector waypoints.
Then we learn an MLP dynamics model that predicts 𝐨_t+1 from 𝐨_t and 𝐚_t through an iterative procedure where the agent alternates between data collection and model learning.
The initial synthesized trajectories are used as prior in the action sampling distribution of MPC, where we add ε∼𝒩(0, σ^2) to each waypoint in τ^𝐫_0 to encourage local exploration.
Results are shown in Table <ref>. For these tasks involving complex interactions with articulated objects, we find that the trajectories zero-shot synthesized by VoxPoser are meaningful but insufficient. However, we can learn an effective dynamics model with less than 3 minutes of online interactions by using these trajectories as an exploration prior, leading to high eventual success rates. In comparison, without the prior, it is extremely difficult to learn a dynamics model because most actions do not lead to meaningful environment changes; in all cases, the experiments without the prior exceed the maximum 12-hour limit.
§.§ Error Breakdown
Since VoxPoser involves multiple components working jointly to synthesize trajectories for various manipulation tasks, herein we analyze the errors resulting from each component and how the overall system can be further improved. We conduct experiments in simulation where we have access to ground-truth perception and a ground-truth dynamics model (i.e., the simulator). Results are shown in Fig. <ref>. U-Net + MP <cit.> trains a U-Net <cit.> to directly map RGB-D observations to value maps which are then used by a motion planner (MP), and thus does not have an independent perception module.
“Specification error” here refers to the error by U-Net, such as noisy predictions that are difficult for optimization.
LLM + Primitives <cit.> uses LLMs to sequentially compose primitives, thus not having a dynamics module.
For this baseline and VoxPoser, “specification error” refers to errors made by LLMs either in composing policy logic or in composing value maps.
In comparison, although VoxPoser uses multiple components, by formulating the problem as a joint model-based optimization, it achieves the lowest overall error, with the perception module being its biggest source of error. We also observe that having access to a better dynamics model (rather than a heuristics-based model), such as a learned model or a physics-based model, could contribute to better overall performance.
§ EMERGENT BEHAVIORAL CAPABILITIES
Emergent behavioral capabilities of VoxPoser, including estimating physical properties of objects (top left), behavioral commonsense reasoning (top right), fine-grained language correction (bottom left), and multi-step visual program (bottom right).
Emergent capabilities refer to unpredictable phenomena that are only present in large models <cit.>. As VoxPoser uses pre-trained LLMs as its backbone, we observe similar embodied emergent capabilities driven by the rich world knowledge of the LLMs. In particular, we focus our study on the behavioral capabilities that are unique to VoxPoser.
We observe the following capabilities: 1) Estimating Physical Properties: given two blocks of unknown mass, the robot is tasked to conduct physics experiments using the available tools to determine which block is heavier. 2) Behavioral Commonsense Reasoning: during a task where the robot is setting the table, the user can specify behavioral preferences such as “I am left-handed”, which requires the robot to comprehend its meaning in the context of the task. 3) Fine-grained Language Correction: for tasks that require high precision, such as “covering the teapot with the lid”, the user can give precise instructions to the robot such as “you're off by 1cm”. 4) Multi-step Visual Program <cit.>: given a task “open the drawer precisely by half”, where there is insufficient information because object models are not available, VoxPoser can come up with a multi-step manipulation strategy based on visual feedback that first opens the drawer fully while recording the handle displacement, then closes it back to the mid-point to satisfy the requirement.
§ CONCLUSION, LIMITATIONS, & FUTURE WORKS
In this work, we present VoxPoser, a general robot manipulation framework that extracts affordances and constraints from LLMs, offering significant generalization advantages for open-set instructions and objects. In particular, we use code-writing LLMs to interact with VLMs to compose 3D value maps grounded in the observation space, which are used to synthesize trajectories for everyday manipulation tasks. Furthermore, we show that VoxPoser can benefit from online interactions by efficiently learning a dynamics model for contact-rich tasks.
VoxPoser has several limitations. First, it relies on external perception modules, which is limiting for tasks that require holistic visual reasoning or an understanding of fine-grained object geometries.
Second, while VoxPoser supports efficient dynamics learning, a general-purpose dynamics model is still required to achieve contact-rich tasks with the same level of generalization.
Third, our motion planner considers only end-effector trajectories while whole-arm planning is also feasible and likely a better design choice <cit.>.
Finally, manual prompt engineering is required for LLMs.
We also see several exciting avenues for future work. For instance, the recent success of multi-modal LLMs <cit.> can be directly translated into VoxPoser for direct visual grounding. Methods developed for alignment <cit.> and prompting <cit.> can also be used to improve the quality of the synthesized value maps and alleviate prompt engineering effort. Finally, while we use greedy search in trajectory optimization, more advanced optimization methods can be developed that best interface with the value maps synthesized by VoxPoser.
We would like to thank Andy Zeng, Igor Mordatch, and the members of the Stanford Vision and Learning Lab for the fruitful discussions. This work was in part supported by AFOSR YIP FA9550-23-1-0127, ONR MURI N00014-22-1-2740, ONR MURI N00014-21-1-2801, the Stanford Institute for Human-Centered AI (HAI), JPMC, and Analog Devices. Wenlong Huang is partially supported by Stanford School of Engineering Fellowship. Ruohan Zhang is partially supported by Wu Tsai Human Performance Alliance Fellowship.
§ APPENDIX
§.§ APIs for VoxPoser
Central to VoxPoser is an LLM generating Python code that is executed by a Python interpreter. Besides exposing NumPy <cit.> and the http://matthew-brett.github.io/transforms3d/Transforms3d library to the LLM, we provide the following environment APIs that the LLM can choose to invoke (a hypothetical usage sketch follows the list):
: Takes in an object name and returns a list of dictionaries, where each dictionary corresponds to one instance of the matching object, containing center position, occupancy grid, and mean normal vector.
: Takes in an “entity of interest” as “movable” (a dictionary returned by ) and (optionally) a list of value maps and invokes the motion planner to execute the trajectory. Note that in MPC settings, “movable” and the input value maps are functions that can be re-evaluated to reflect the latest environment observation.
: Takes in a desired offset distance in centimeters along direction and returns 3-dim vector reflecting displacement in voxel coordinates.
: Inverse of . Takes in an integer “index” and a “direction” vector and returns the distance in centimeters in world coordinates displaced by the “integer” in voxel coordinates.
: Takes in a desired pointing direction for the end-effector and returns a satisfying target quaternion.
: Assigns “value” to voxels within “radious_cm” from “voxel_xyz” in “voxel_map”.
: Returns a default affordance map initialized with 0, where a high value attracts the entity.
: Returns a default avoidance map initialized with 0, where a high value repulses the entity.
: Returns a default rotation map initialized with current end-effector quaternion.
: Returns a default gripper map initialized with current gripper action, where 1 indicates “closed” and 0 indicates “open”.
: Returns a default affordance map initialized with 1, where the number represents scale factor (e.g., 0.5 for half of the default velocity).
: Reset to robot rest pose.
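Putting the APIs together, a composed program for a sub-task such as “grasp the drawer handle while staying away from the vase” might look like the sketch below. The identifier names are hypothetical stand-ins for the APIs described above (detection, default map getters, radius-based assignment, and execution), not necessarily those used in the released code.

```python
# All function names below are hypothetical stand-ins for the APIs described above.
def grasp_handle_avoiding_vase(detect_objects, get_empty_affordance_map,
                               get_empty_avoidance_map, set_voxel_by_radius, execute):
    handles = detect_objects("drawer handle")    # list of dicts: center position, occupancy, normal
    vases = detect_objects("vase")

    affordance_map = get_empty_affordance_map()  # zeros; high values attract the entity
    set_voxel_by_radius(affordance_map, handles[0]["position"], radius_cm=2, value=1.0)

    avoidance_map = get_empty_avoidance_map()    # zeros; high values repulse the entity
    set_voxel_by_radius(avoidance_map, vases[0]["position"], radius_cm=10, value=1.0)

    # The "entity of interest" here is the robot end-effector.
    execute("end-effector", affordance_map=affordance_map, avoidance_map=avoidance_map)
```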
§.§ Real-World Environment Setup
We use a Franka Emika Panda robot with a tabletop setup. We use Operational Space Controller with impedance from Deoxys <cit.>.
We mount two RGB-D cameras (Azure Kinect) at two opposite ends of the table: bottom right and top left from the top down view. At the start of each rollout, both cameras start recording and return the real-time RGB-D observations at 20 Hz.
For each task, we evaluate each method in two settings: without and with disturbances. For tasks with disturbances, we apply three kinds of disturbances to the environment, of which we pre-select a sequence at the start of the evaluation: 1) random forces applied to the robot, 2) random displacement of task-relevant and distractor objects, and 3) reverting task progress (e.g., pulling the drawer open while it is being closed by the robot). We only apply the third kind of disturbance to tasks where the “entity of interest” is an object or object part.
We compare to a variant of Code as Policies <cit.> as a baseline that uses an LLM with a set of five pre-defined action primitives. We do not provide primitives such as pick-and-place, as they would be tailored to a particular suite of tasks that we do not constrain our study to (similar to the control APIs for VoxPoser specified in Sec. <ref>).
§.§.§ Tasks
Move & Avoid: “Move to the top of [obj1] while staying away from [obj2]”, where [obj1] and [obj2] are randomized everyday objects selected from the list: apple, banana, yellow bowl, headphones, mug, wood block.
Set Up Table: “Please set up the table by placing utensils for my pasta”.
Close Drawer: “Close the [deixis] drawer”, where [deixis] can be “top” or “bottom”.
Open Bottle: “Turn open the vitamin bottle”.
Sweep Trash: “Please sweep the paper trash into the blue dustpan”.
§.§ Simulated Environment Setup
We implement a tabletop manipulation environment with a Franka Emika Panda robot in SAPIEN <cit.>. The controller takes as input a desired end-effector 6-DoF pose, calculates a sequence of interpolated waypoints using inverse kinematics, and finally follows the waypoints using a PD controller. We use a set of 10 colored blocks and 10 colored lines in addition to an articulated cabinet with 3 drawers. They are initialized differently depending on the specific task. The lines are used as visual landmarks and are not interactable. For perception, a total of 4 RGB-D cameras are mounted at each end of the table pointing at the center of the workspace.
§.§.§ Tasks
We create a custom suite of 13 tasks shown in Table <ref>. Each task comes with a templated instruction (shown in Table <ref>) where one or more attributes are randomized from the pre-defined lists below. At reset time, a number of objects are selected (depending on the specific task) and randomized across the workspace, while making sure that the task is not already completed at reset and that task completion is feasible. A complete list of attributes can be found below, divided into “seen” and “unseen” categories:
Seen Attributes:
* : [“back left corner of the table”, “front right corner of the table”, “right side of the table”, “back side of the table”]
* : [“blue block”, “green block”, “yellow block”, “pink block”, “brown block”]
* : [“left of”, “front side of”, “top of”]
* : [“topmost”, “second to the bottom”]
* : [3, 5, 7, 9, 11]
* : [“right side of the table”, “back side of the table”]
* : [“faster speed”, “a quarter of the speed”]
* : [“blue line”, “green line”, “yellow line”, “pink line”, “brown line”]
Unseen Attributes:
* : [“back right corner of the table”, “front left corner of the table”, “left side of the table”, “front side of the table”]
* : [“red block”, “orange block”, “purple block”, “cyan block”, “gray block”]
* : [“right of”, “back side of”]
* : [“bottommost”, “second to the top”]
* : [4, 6, 8, 10]
* : [“left side of the table”, “front side of the table”]
* : [“slower speed”, “3x speed”]
* : [“red line”, “orange line”, “purple line”, “cyan line”, “gray line”]
§.§.§ Full Results on Simulated Environments
§.§ Prompts
Prompts used in Sec. <ref> and Sec. <ref> can be found below.
planner: Takes in a user instruction ℒ and generates a sequence of sub-tasks ℓ_i, which are fed into the composer (note that the planner is not used in simulation as the evaluated tasks consist of a single manipulation phase).
real-world: https://voxposer.github.io/prompts/real_planner_prompt.txtvoxposer.github.io/prompts/real_planner_prompt.txt.
composer: Takes in a sub-task instruction ℓ_i and invokes the necessary value map LMPs to compose affordance maps and constraint maps.
simulation: https://voxposer.github.io/prompts/sim_composer_prompt.txtvoxposer.github.io/prompts/sim_composer_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_composer_prompt.txtvoxposer.github.io/prompts/real_composer_prompt.txt.
parse_query_obj: Takes in a text query of an object/part name and returns a list of dictionaries, where each dictionary corresponds to one instance of the matching object, containing center position, occupancy grid, and mean normal vector.
simulation: https://voxposer.github.io/prompts/sim_parse_query_obj_prompt.txtvoxposer.github.io/prompts/sim_parse_query_obj_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_parse_query_obj_prompt.txtvoxposer.github.io/prompts/real_parse_query_obj_prompt.txt.
get_affordance_map: Takes in natural language parametrization from the composer and returns a NumPy array for the task affordance map.
simulation: https://voxposer.github.io/prompts/sim_get_affordance_map_prompt.txtvoxposer.github.io/prompts/sim_get_affordance_map_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_get_affordance_map_prompt.txtvoxposer.github.io/prompts/real_get_affordance_map_prompt.txt.
get_avoidance_map: Takes in natural language parametrization from the composer and returns a NumPy array for the task avoidance map.
simulation: https://voxposer.github.io/prompts/sim_get_avoidance_map_prompt.txtvoxposer.github.io/prompts/sim_get_avoidance_map_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_get_avoidance_map_prompt.txtvoxposer.github.io/prompts/real_get_avoidance_map_prompt.txt.
get_rotation_map: Takes in natural language parametrization from the composer and returns a NumPy array for the end-effector rotation map.
simulation: https://voxposer.github.io/prompts/sim_get_rotation_map_prompt.txtvoxposer.github.io/prompts/sim_get_rotation_map_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_get_rotation_map_prompt.txtvoxposer.github.io/prompts/real_get_rotation_map_prompt.txt.
get_gripper_map: Takes in natural language parametrization from the composer and returns a NumPy array for the gripper action map.
simulation: https://voxposer.github.io/prompts/sim_get_gripper_map_prompt.txtvoxposer.github.io/prompts/sim_get_gripper_map_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_get_gripper_map_prompt.txtvoxposer.github.io/prompts/real_get_gripper_map_prompt.txt.
get_velocity_map: Takes in natural language parametrization from the composer and returns a NumPy array for the end-effector velocity map.
simulation: https://voxposer.github.io/prompts/sim_get_velocity_map_prompt.txtvoxposer.github.io/prompts/sim_get_velocity_map_prompt.txt.
real-world: https://voxposer.github.io/prompts/real_get_velocity_map_prompt.txtvoxposer.github.io/prompts/real_get_velocity_map_prompt.txt.
|
http://arxiv.org/abs/2307.04357v1 | 20230710055929 | Survey-scale discovery-based research processes: Evaluating a bespoke visualisation environment for astronomical survey data | ["C. J. Fluke", "D. Vohl", "V. A. Kilborn", "C. Murugeshan"] | astro-ph.IM | ["astro-ph.IM", "astro-ph.GA"] |
Next generation astronomical surveys naturally pose challenges for human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. While a significant fraction of the data preparation and analysis will be taken care of by automated pipelines, crucial steps of knowledge discovery can still only be achieved through various levels of human interpretation. As the number of sources in a survey grows, there is a need to both modify and simplify repetitive visualisation processes that need to be completed for each source. As tasks such as per-source quality control, candidate rejection, and morphological classification all share a single instruction, multiple data (SIMD) work pattern, they are amenable to a parallel solution. Selecting extragalactic neutral hydrogen (Hi) surveys as a representative example, we use system performance benchmarking and the visual data analysis and reasoning (VDAR) methodology from the field of information visualisation to evaluate a bespoke comparative visualisation environment: the encube visual analytics framework deployed on the 83 Megapixel Swinburne Discovery Wall. Through benchmarking using spectral cube data from existing Hi surveys, we are able to perform interactive comparative visualisation via texture-based volume rendering of 180 three-dimensional (3D) data cubes at a time. The time to load a configuration of spectral cubes scales linearly with the number of voxels, with independent samples of 180 cubes (8.4 Gigavoxels or 34 Gigabytes) each loading in under 5 minutes. We show that parallel comparative inspection is a productive and time-saving technique which can reduce the time taken to complete SIMD-style visual tasks currently performed at the desktop by at least two orders of magnitude, potentially rendering some labour-intensive desktop-based workflows obsolete.
§ INTRODUCTION
Next generation astronomical surveys will pose challenges for a range of human-centred visualisation and analysis workflows that currently rely on the use of standard desktop display environments. Knowledge discovery activities that were, or perhaps still are, feasible for a human to perform when the quantity (i.e. volume) or rate (i.e. velocity) of data available was low are becoming more reliant on automated or autonomous solutions.
While desktop computing has already been augmented through the adoption of supercomputing and cloud-style remote services, the visualisation and display of astronomical data is still strongly dependent on the utilisation of laptop screens or monitors located in the astronomer's office.
To address the specific needs of individual astronomers, and astronomical research teams, a collection of data analysis and visualisation tools are required. This includes continuing to take full advantage of existing, well-established options that are able to be scaled-up effectively, along with developing and assessing the potential of novel solutions or systems that either provide extra functionalities, or that can be connected into extensible workflows (e.g. virtual observatory model).
§.§ Comparative visualisation
Seeing many sources together – comparative visualisation – is an approach that naturally supports pattern-finding (“those galaxies all show similar kinematic properties”) and anomaly detection (“why is that one source so different to everything else?”).
Such multi-object comparisons might include quality control activities (e.g. assessing whether a source finder or automated calibration pipeline is functioning as expected by selecting a sample of sources for assessment, which might include fine-tuning to check or verify a machine learning algorithm), investigating outcomes of model-fitting (e.g. examining the residual signal once different types of kinematic models are applied), or any of a range of standard analysis tasks that can be performed based on morphological or environmental selection criteria (e.g. field compared with cluster galaxies, dwarf galaxies versus grand design spirals, or the discovery of novel classes of objects when a new discovery space is opened). We will refer to all such activities as survey-scale discovery-based research processes, as the purpose is to explore data in order to make sense of it [see the model of “sensemaking” presented by <cit.>, and applied in Section <ref>].
Limited scope for comparative visualisation can occur by either loading data into several independent instances of a visualisation tool (usually on the same computing platform) or by switching between individual views of multiple objects, requiring loading and unloading of data.
When working with large-scale survey data, desktop-based visualisation strategies may lead to a reduction in the ability for an individual to see patterns across a sizeable portion of the survey.
In practice, effective comparative visualisation cannot be achieved by moving between visualisations of one or two objects at a time. At each stage, there is a loss of time to input/output, and a strong reliance on the visual recall abilities of the astronomer [see <cit.> for a related discussion]. Individual instances are unlikely to have linked camera actions (e.g. panning, rotation, zoom, scaling), requiring the use of repetitive interaction processes.
Moreover, if performed at the desktop, the small physical display space of a standard monitor is not always conducive to real-time, collaborative inspection for those researchers who prefer, or find it more productive, to work this way.
§.§ Single instruction, multiple data work patterns
Survey-scale discovery-based research processes, such as those described above, are all highly repetitive, and may need to be completed for each individual source. Many repetitive research processes share a single instruction, multiple data (SIMD) work pattern, and so are amenable to a parallel solution.
One approach to the parallelisation of human-centred visualisation and analysis tasks is to share the work out amongst multiple team members [e.g. as occurred while preparing catalogues for the Hi Parkes All Sky Survey – see <cit.> and <cit.>], or further afield via crowd-sourcing of citizen scientists <cit.>.
A limitation to these distributed processes is one of consistency in decision-making between team members with diverse skill levels [see, for example, <cit.>]. An investment in training may be required, or a complex task must be abstracted to one of group-consensus classification. Furthermore, while serendipitous discoveries do occur in citizen science activities, that is not the norm.
An alternative is to change the viewing paradigm, so that a more suitable mode of parallel inspection by a single researcher, or co-located team, can be achieved. This is the approach we investigate in this work using encube[Long term access to open source software described by <cit.>.]: a visual analytics framework for collaborative and comparative visualisation, designed to work on a multi-monitor tiled display wall and dedicated compute nodes <cit.>. Figure <ref> shows encube operating on the Swinburne Discovery Wall (see Section <ref>), providing simultaneous display of 80 spectral cubes sampled from three extragalactic neutral hydrogen (Hi) surveys (described in more detail in Section <ref>).
§.§ The visual data analysis and reasoning methodology
In order to best utilise non-standard or novel visualisation systems, it is important to understand their strengths and weaknesses. The suitability of any visualisation approach or environment – software or hardware, standard or bespoke – should be examined or evaluated using appropriate methodologies.
Looking to the broader field of information visualisation, such evaluations can include investigation of either the process of visualisation or the nature of visualisation systems and algorithms <cit.>. For our investigation of survey-scale discovery-based research processes, we select the empirical visual data analysis and reasoning (VDAR) methodology.
A VDAR evaluation is usually approached via a case study: a cohort of experts assess their ability to derive knowledge about a relevant dataset while using a new visualisation system, software or strategy to perform domain-specific tasks <cit.>.
As our relevant dataset, we utilise existing extragalactic Hi survey data (see Section <ref>), available as an ensemble of spectral cubes (two spatial dimensions and one spectral dimension). We consider three representative survey-scale discovery-based research processes that can occur in the preparation and analysis of large-scale extragalactic Hi surveys:
* Quality control of individual sources, ensuring that calibrations have been applied correctly and bad channels (e.g. impacted by interference or instrumental features) have been flagged or removed;
* Candidate rejection, whereby false-positive detections from automated source finders are identified and removed from the catalogue. This can also help to improve training sets of “non-source” examples for use with machine learning and related automated methods; and
* Morphological classification, identifying and sorting sources into categories based on observed structural, kinematic or environmental properties. The classification process may also include anomaly detection, wherein unexpected discoveries are made based on the observed structural properties.
Through a mix of visual analytic functionalities, including interactive three-dimensional (3D) volume rendering methods, encube provides ways to explore both spatial and spectral features, which can be matched to other observed or derived parameters. A 3D approach can help to reveal complex kinematic structures or system artefacts that might otherwise appear only in projection using moment maps or position-velocity diagrams.
We choose to perform our evaluation with 3D methods as they: (1) are the current defaults within the public encube code; (2) present an upper bound in terms of the computation required for benchmarking purposes; and (3) provide the VDAR user cohort with access to novel comparative sensemaking strategies via the Swinburne Discovery Wall. For other applications, alternative data visualisation modes such as moment maps[A camera projection parallel to any axis of a spectral cube can be used to generate a two-dimensional (2D) projection of the data <cit.>, and hence can be used to generate 2D solution space representations while still retaining access to the full representation of the data in memory for fast calculations using graphics shaders.] or scatter plots could be utilised as they are supported by the underlying visualisation framework.
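For example, a zeroth-moment style projection reduces to a sum of the spectral cube along its spectral axis; a minimal NumPy sketch, assuming the spectral axis is the first array axis and ignoring masking of noisy channels:

```python
import numpy as np

def moment0(cube: np.ndarray, channel_width: float = 1.0) -> np.ndarray:
    """Collapse a spectral cube with shape (channels, y, x) along the spectral axis.

    Multiplying by the channel width gives an integrated-intensity style map;
    flagging of bad channels and noise masking are omitted for brevity.
    """
    return cube.sum(axis=0) * channel_width

cube = np.random.normal(size=(64, 128, 128)).astype(np.float32)  # synthetic stand-in data
print(moment0(cube).shape)  # (128, 128)
```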
§.§ Overview
In this paper, we consider a specific visualisation problem that is not feasible to address using a desktop-based visualisation solution: interactive, comparative visualisation of ≥100 data instances. We evaluate the practicality of using a bespoke visualisation environment (viz. encube and the Swinburne Discovery Wall) for survey-scale discovery-based research processes through: (1) system benchmarking, which provides quantitative information on system performance and scalability; and (2) a visual data analysis and reasoning study.
For five different display configurations, supporting simultaneous visualisation of 20, 40, 80, 120 or 180 spectral cubes, selected from representative extragalactic Hi survey datasets, we report benchmarking in terms of the two most critical factors: (1) the time taken to load an ensemble of spectral cubes; and (2) the typical minimum interactive frame rate.
Together, these values allow us to estimate the visualisation throughput, V_tp (sources/hour), that might be achieved by a single user when undertaking SIMD tasks such as quality control, candidate rejection or morphological classification.
Compared to the serial case of viewing one data instance at a time on a standard desktop monitor, encube and the Swinburne Discovery Wall could decrease the time taken to complete survey-scale comparative visualisation workflows by a factor of 100 or more.
In Section <ref>, we explain the main technical elements of the bespoke visualisation environment. In Section <ref>, we provide background on the extragalactic Hi case study.
We evaluate the visualisation environment through system benchmarking (Section <ref>) and via the VDAR evaluation (Section <ref>), which considers three typical discovery-based SIMD activities: quality control, candidate rejection and morphological classification.
We present a discussion of our finding in Section <ref>, and present our conclusions in Section <ref>.
Further technical and implementation notes can be found in <ref>.
Our approach can be generalised to any survey datasets comprising more individual observations or instances than can be comfortably analysed or scrutinised by one investigator on a standard desktop display. This might include two-dimensional images or moment-map projections, optical/infrared spectral cubes (e.g. from integral field spectroscopy), or simulation data products. The comparative visualisation strategies demonstrated here are applicable to any similar SIMD-style activity, and are not restricted to the specific use of encube with the Swinburne Discovery Wall. As an open source solution, users are encouraged to modify the functionality of encube (e.g. in order to provide alternative 2D or 3D visualisation modes or to handle domain-specific data formats) or reconfigure the arrangement of the display environment to suit their own survey-scale discovery-based research needs.
§ A BESPOKE COMPARATIVE VISUALISATION ENVIRONMENT
In this section, we provide a technical overview of the two main components of the bespoke comparative visualisation environment used in this work: (1) the encube framework, which enables visualisation of multiple data instance (in the form of spectral cubes for our case study); and (2) the Swinburne Discovery Wall, a specific instance of a large-area tiled display wall.
Encube was conceptualised and developed specifically to support SIMD visualisation and analysis tasks, with an aim to accelerate data-intensive comparative visualisation and discovery workflows. Encube displays multiple individual data visualisations across single or multiple display devices, with interaction coordinated through a user interface on the master node. For related approaches, see the virtual reality implementation of BentoBox <cit.> and the “shelves” metaphor for small-multiples that considers utilisation of immersive space <cit.>.
§.§ The encube framework
The encube framework <cit.> supports comparative visualisation and analysis of survey data (also referred to as an ensemble in other domains). The primary development emphasis was for structured 3D data: spectral cube data from astronomy and magnetic resonance imaging data from medical imaging. Encube provides an interactive data exploration and analysis experience, employing a strategic mixture of software (data processing, management, visualisation, analysis) and hardware (graphics processing units, computer cluster, displays).
Encube is a modular and open-source code base <cit.>, where each module targets a specific set of tasks within a visual analytics workflow: (1) processing and visualisation of data; (2) workflow and communication management; and (3) user interactions. Similar to a microservices-style architecture, the modular design allows individual components to be connected, enhanced or replaced as required, so that encube can be kept compatible with, and scalable to, the requirements of future science operations. For instance, customisable code for 3D visualisation is currently created using the C/C++ languages for good performance with the S2PLOT interactive programming library <cit.>, which builds on the OpenGL[<http://www.opengl.org>] graphics library.
From a system architecture standpoint, encube comprises a process layer and an input/output (I/O) layer. The process layer performs data processing tasks (load data, compute statistics, render visualisation), and the I/O layer responds to user inputs and generates visual outputs. Each layer contains units where specified tasks are performed. Depending on the task, a unit can be instantiated once, or multiple times for parallel operation (generally on different compute hardware). In its current form, the encube process layer comprises a single manager unit and one or more process and render units, while the I/O layer contains an interaction unit and one or more display units.
Units can communicate between each other in order to pass workflow information across the architecture. The communication pathway between units can be represented as a directed graph [see Figures 2 and 4 of <cit.>]:
interaction unit
↕
manager unit
↕
process and render unit
↓
display unit
where the arrows indicate the information flow direction between two unit vertices on the graph. Based on the number of instances of a unit, communication can include serial or parallel messages. We note that peer-to-peer communication within a unit type is not currently implemented (e.g. direct message passing between two interaction units).
The manager unit orchestrates the overall software workflow. It first reads a configuration file containing network information about the available compute nodes, characteristics of the tiled visualisation output, along with system metadata and the location of the dataset. This unit also schedules and synchronises the workflow, sharing metadata as well as commands with other neighbouring units. Here, the manager unit acts as a messenger between an interaction unit and a process and render unit. Moreover, given that all commands pass through the manager unit, the workflow history and system state can be recorded (if requested) so that actions can be revised, replicated, or continued later.
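As an illustration only, the kind of information carried by this configuration can be pictured as the structure below; the keys and values are assumptions about its content, not encube's actual schema.

```python
# Illustrative only: an assumed picture of the information in encube's configuration,
# not the actual file format or key names used by the framework.
example_configuration = {
    "nodes": [
        {"host": "column-1", "gpu": "GTX1080", "displays": ["top", "bottom"]},
        {"host": "column-2", "gpu": "GTX1080", "displays": ["top", "bottom"]},
        # ... one entry per process and render workstation
    ],
    "tiled_display": {"columns": 5, "rows": 2, "panel_resolution": [3840, 2160]},
    "dataset": {"location": "/data/hi_cubelets", "format": "FITS"},
    "metadata": "/data/hi_cubelets/catalogue.csv",
}
```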
The interaction unit is where a user interacts with the dataset. In particular, the user can specify which data files to load and visualise, change visualisation parameters (e.g. ray-tracing method), select and organise individual visualisations, and request diagnostic plots. The interaction unit provides a “world in miniature” view of the display setup, mapping regions within the user interface to the physical display.
Metadata is presented in a table, which can be sorted by categories. Visualisations are generated after selecting rows of the table, either individually or by ordered batch (e.g. sorted by parameters such as distance, size, etc.). Once data is loaded into memory on a process and render unit, visualisation parameters (e.g. histogram thresholds, spatial cropping, colourmap selection) can be updated in real time to modify one or more visualisations. Global or partial statistical values can also be computed on request for selected data files and gathered to summarise properties of a subset.
The process and render unit provides functionalities such as loading data files to GPU memory, computing statistics (e.g. mean, standard deviation, histogram), creating visualisation callbacks (e.g. including responses to input via keyboard, mouse, or the remote user interface), and generating the visualisations through texture-based volume rendering.
Finally, a visualisation rendered by a process and render unit is displayed on screen via the display unit. A display unit provides a mapping to one or more physical screens via the configuration file read by the manager unit.
§.§ The Swinburne Discovery Wall
From its inception, encube was designed for use in high-end visualisation environments comprising multiple off-the-shelf displays, i.e. a tiled display wall (TDW). See <cit.> and <cit.> for detailed investigations of the role of TDWs in astronomy. A TDW provides several advantages over a standalone workstation monitor: many more pixels, a greater display area, and, in some cases, access to additional co-located computing power.
Initial deployment and testing of encube was undertaken with the CAVE2™ hybrid high-performance computing and visualisation space at Monash University [as reported in <cit.>]. The Monash CAVE2™ <cit.> comprised 80 stereoscopic-capable displays, with a cylindrical configuration (330 degrees to allow entry and exit from the physical space) of four rows and 20 columns. Collectively, the environment provided 84 million pixels for two-dimensional display and 42 million pixels in stereoscopic mode. The Monash CAVE2™ was linked to a real-time compute cluster with a peak of 100 Tflop/s and 240 GB of GPU memory.
Additional development, and the activities presented in this work, utilised the Discovery Wall (Figure <ref>) operated at Swinburne University of Technology. The Swinburne Discovery Wall is a TDW comprising ten Philips BDM4350UC 4K ultra high-definition (4K-UHD) monitors
arranged in a matrix of two rows and five columns. The total pixel count is approximately 83 Megapixels and the accessible screen area is just under 5.0 m^2 (see Table <ref>).
Each column of the Discovery Wall is connected to a Lenovo ThinkStation P410 Mini Tower (2.8 GHz, 16 GB RAM) with an NVIDIA GTX1080 graphics card (8 GB). The workstations operate with the CentOS[<http://www.centos.org>] Linux operating system (Version 7.4.1708), noting that we use the version of CentOS that was installed on the Discovery Wall when it was commissioned in 2018.
The original iteration of the Swinburne Discovery Wall, which operated until November 2021, had one additional column of two 4K-UHD monitors such that the total screen area was 6.0 m^2 and the pixel count was closer to 100 million pixels. In December 2021, the Discovery Wall hardware was transferred to a new location, but with insufficient wall-space to accommodate all six columns. Reconfiguration of encube to work on the relocated and reduced-scale Discovery Wall in February 2022 required approximately two minutes to remove references to the sixth Lenovo MiniTower workstation from the encube source and scripts.
§ CASE STUDY: EXTRAGALACTIC HI ATRONOMY
Consider the specific case of extragalactic Hi astronomy, which is based on observations of the 21 cm (1420.40576 MHz) hyperfine spin flip transition of the hydrogen atom. Theoretically predicted by <cit.>, and first detected by <cit.>, <cit.> and <cit.>, the 21 cm line provides a valuable signature of the neutral gas content of galaxies.
Apart from being the primary component from which stars are eventually formed, the Hi gas in galaxies is also typically much more extended than their stellar discs [see <cit.>], making it an important tracer of the effects of both internal properties of galaxies, such as feedback and angular momentum <cit.>, and environmental processes, such as ram pressure and tidal stripping [see <cit.> and <cit.>]. For these reasons, high spatial and spectral resolution studies of the Hi gas distribution in galaxies are paramount for our understanding of galaxy evolution.
Historically, extragalactic Hi surveys fall into three broad categories: (1) spectral line observations, using single-dish radio telescopes; (2) spatial mapping with multi-beam receivers <cit.>, whereby it became feasible to undertake spectral-line surveys at a large scale <cit.>;
and (3) high-resolution spectral cube observations, utilising aperture synthesis.
§.§ Extragalactic neutral hydrogen surveys
The number of sources available from Hi surveys is undergoing a step-change. New wide-field and deep surveys have been enabled through instruments and facilities including:
* The APERture Tile In Focus (APERTIF) upgrade to the Westerbork Synthesis Radio Telescope (WSRT) – see <cit.>, with Hi survey descriptions in <cit.>, <cit.> and <cit.>;
* The Australian Square Kilometre Array Pathfinder (ASKAP) – see <cit.> and Hi survey descriptions for the Widefield ASKAP L-band Legacy All-sky Blind SurveY (WALLABY) in <cit.> and <cit.>; and
* MeerKAT <cit.>, with local <cit.> and ultra-deep <cit.> Hi surveys planned.
The scale and rate of data collection from these programs provide a first opportunity to prepare for the future of Hi astronomy that will occur with the Square Kilometre Array (SKA).
Using WALLABY as an example, these surveys will produce three main categories of data:
* Large-scale survey cubes. Over a period of five years, WALLABY is expected to cover up to 1.4π sr of the sky with ∼ 550 full-resolution spectral cubes. Each cube is anticipated to have 4200 × 4200 spatial pixels and 7776 spectral channels, requiring ∼ 600 Gigabytes (GB) per cube (a short arithmetic check of this figure is sketched below, after this list). The total data storage required for WALLABY will exceed 1 Petabyte.
* Small-scale source cubelets. By running the Source Finding Application <cit.> on the survey cubes, candidate source cubelets can be extracted and stored separately, or simply have the coordinates of their bounding boxes within the survey cubes stored [see <cit.> for an overview, and <cit.> for a comparison of Hi source finders]. As source cubelets take up only a small fraction of the survey cubes, this is a much more manageable data volume to work with. Estimates of the number of Hi detections from WALLABY exceed 200,000 sources. Approximately 15–20 % of these sources are expected to be spatially resolved (i.e. where the spatial distribution of Hi is visible, which is anticipated to require at least 3-4 resolution elements or synthesised beams across the source).
* Catalogues of derived data products. Along with the key parameters (e.g. position, velocity dispersion, Hi flux) generated by source finders such as SoFiA and Selavy <cit.>, further automated processing and analysis tasks can provide additional data. This includes activities such as disk-based model fitting [e.g. TiRiFiC <cit.>, ^ 3DBAROLO <cit.>, or 2DBAT, <cit.>, and see also the description of the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) in <cit.>], computation of integral properties (e.g. total Hi mass, star formation rates), or cross-matching with optical/infrared catalogues.
Each of these data products will aid the development of insight and improved understanding of Hi's role in galaxy formation and evolution.
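As a quick plausibility check of the quoted per-cube storage, and assuming 32-bit (four-byte) voxels while ignoring FITS header overhead and any auxiliary planes, the cube dimensions above imply roughly 550 GB per full-resolution cube, broadly consistent with the ∼600 GB figure. A minimal sketch:

```python
# Rough size estimate for one full-resolution WALLABY cube, assuming
# 32-bit (4-byte) voxels and ignoring FITS header overhead.
nx, ny, nchan = 4200, 4200, 7776
bytes_per_voxel = 4

n_vox = nx * ny * nchan                   # ~137 Gigavoxels
size_gb = n_vox * bytes_per_voxel / 1e9   # decimal gigabytes

print(f"{n_vox / 1e9:.1f} Gvox, ~{size_gb:.0f} GB per cube")
```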
§.§ Visualisation-dominated workflows
The data-intensive demands of new Hi surveys have motivated the development of a number of customised tools for interactive qualitative and quantitative spectral cube visualisation <cit.>.
Moving beyond the well-established and widely-utilised solutions such as Karma[https://www.atnf.csiro.au/computing/software/karma] <cit.> and CASA[https://casa.nrao.edu] [the Common Astronomy Software Applications package; <cit.>],
alternatives for desktop-based visualisation and analysis include AstroVis <cit.>, SlicerAstro <cit.>, FRELLED [<cit.> using the free, open-source Blender animation software], FITS3D <cit.>, Shwirl <cit.>, and CARTA[https://cartavis.org/] <cit.>.
<cit.> prototyped a solution using the Unity[https://unity.com] real-time 3D engine, which can be deployed on a desktop or operate with a variety of advanced display technologies. With their iDAVIE solution, <cit.> have successfully moved spectral cube visualisation and analysis into interactive and immersive virtual reality environments.
Finally, targeting data products that greatly exceed the processing capabilities of standard desktop computers, <cit.> achieved real-time interactive visualisation of Terabyte-scale spectral cubes using a high-performance solution with graphics processing units (GPUs) and the GraphTIVA framework.
For most of these examples, the workflow for visualisation and analysis of the gas in galaxies emphasises the study of one galaxy at a time. When the data volume is low and the data rate is slow, a great deal of human time can be dedicated to examining individual data cubes or source cubelets. While highly appropriate in an era of small surveys, this serial processing presents a bottleneck for knowledge discovery once the ASKAP and MeerKAT surveys scale up to include many thousands of spatially resolved sources.
The transformation of a survey cube to a subset of source cubelets, and ultimately, a reliable, science-ready catalogue of data products can be encapsulated as a workflow. Parts of the workflow are expected to be fully automated [e.g. the Apercal calibration pipeline for Apertif surveys <cit.> or ASKAPSoft for ASKAP <cit.>]. Other stages will rely on some level of human intervention, either through computational steering (selecting parameters for the workflow, setting thresholds on source finders, etc.) or data visualisation for analysis and discovery.
§.§ Survey data
While future applications of the comparative visualisation strategies examined here may include the Hi surveys to be conducted with ASKAP and MeerKAT, we perform the benchmarking and VDAR evaluations using data from three extant Hi surveys that targeted nearby spiral and irregular galaxies:
* WHISP: Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies[http://wow.astron.nl], undertaken with the Westerbork Synthesis Radio Telescope <cit.>;
* THINGS: The Hi Nearby Galaxy Survey[https://www2.mpia-hd.mpg.de/THINGS/Data.html] comprising high-spectral and high-spatial resolution data from the National Radio Astronomy Observatory Very Large Array <cit.>; and
* LVHIS: The Local Volume Hi Survey[https://www.atnf.csiro.au/research/LVHIS/LVHIS-database.html], which obtained deep Hi line and 20-cm radio continuum observations with the Australia Telescope Compact Array <cit.>.
We categorise the survey data products in terms of: (1) the number of sources (N_ s) in each survey catalogue; (2) the typical dimensionality of the data cubes (measured as spatial or spectral pixels); (3) the number of voxels (in Megavoxels or Mvox); and (4) the storage size (in Megabytes or MB) for an individual cube. For all three datasets, the spectral cubes were stored (and loaded into encube) using the Flexible Image Transport System (FITS) format <cit.>. See Table <ref> for further details, where we present the minimum, maximum and median values for the dimensions, voxel counts and storage sizes for the WHISP, THINGS and LVHIS catalogues.
To simplify both the benchmarking investigation and VDAR evaluation, we make several minor modifications to the datasets in their published forms:
* WHISP: Initial inspection of a sub-set of WHISP galaxies revealed that many of the spectral cubes have high levels of flux (relative to the peak source flux) at either end of the spectral band. Rapid identification of such systematic effects is an example of the type of SIMD quality control activity that comparative visualisation can address (see Section <ref>). For all of the WHISP cubes, we created new FITS files where we set the data values in the first eight and last eight spectral channels to zero. This does not change the load times for the mock surveys but does improve the default visualisation via texture-based volume rendering.
* THINGS: We did not use the spectral cube for NGC 3031 (M81) in our benchmarking. As NGC 3031 is a nearby grand design spiral in Ursa Major, its spectral cube is much larger than those of the other galaxies in the sample, with 2201 × 2201 spatial pixels and 178 spectral channels. The file size of 3.45 GB is approximately half of the available memory on a GTX1080 GPU. Such a large source would not be typical of new extragalactic sources discovered with blind surveys.
* LVHIS: A spectral cube for NGC 5128 (LVHIS 048) was not available from the survey website, and we note a replication of data between sources LVHIS 014 and LVHIS 016, which are both identified as the dwarf irregular galaxy AM 0319-662. Removing LVHIS 016 and LVHIS 048 from the samples leaves us with N_ s = 80.
§ BENCHMARKING COMPARATIVE WORKFLOWS
In this section, we report on benchmarking activities undertaken with the implementation of encube on the Swinburne Discovery Wall.
§.§ Benchmarks
Previous system benchmarks reported in <cit.> were performed with the Monash CAVE2^ TM. For deployment on the Swinburne Discovery Wall, we report: (1) the total (i.e. parallel) load time, T_ Load, for a configuration displaying N_ cube spectral cubes; and (2) the steady-state minimum frame rate, F_ rate, in frames/second. We consider both the frame rate per column, looking for variations in performance, along with the overall mean, standard deviation, and median of F_ rate.
Frame rate quantities are calculated from the S2PLOT displays on columns 2 to 5 (see Figure <ref>). Column 1 is used for additional management and coordination tasks, and in order to access the user interface in the web browser, the S2PLOT display is not resized over both 4K-UHD monitors. The higher F_ rate values reported for column 1 show the overall reduced graphics workload when data is visualised on one 4K-UHD monitor instead of two.
We obtained a total of 54 independent benchmarks for five different configurations (Sets A–E), displaying N_ cube = 20, 40, 80, 120 or 180 spectral cubes in total using the per-column configurations summarised in Table <ref>. The main limiting factors on N_ cube are the available GPU memory (8 GB/GPU for each of the five NVIDIA GTX1080 GPUs of the Swinburne Discovery Wall) and the number of columns of monitors. A simple upgrade path to improve performance is to replace these five older-generation GPUs with higher-memory alternatives.
The benchmark configurations were generated comprising either spectral cubes from a single survey (denoted as [W]HISP, [T]HINGS or [L]VHIS) or from the combination of the three input surveys (denoted as [C]ombination). For scenarios where N_ cube exceeds the survey size, N_ s (see Table <ref>), random sampling with replacement is used to generate an appropriately-sized data set. For the combination survey, random sampling with replacement is used to generate a mock survey that is roughly equally split between the three input catalogues.
Figure <ref> demonstrates the use of the two different colour-mapping methods for a mock LVHIS survey with 180 spectral cubes. The top panel uses a heat-style colour map, while the bottom panel colours voxels based on the relative velocity with respect to the middle spectral channel, which is assumed to be the kinematic centre.
To mitigate the impact of memory caching on measurements of T_ Load, we generated three independent combinations of spectral cubes for each of the W, T, L and C configurations. A single benchmark value of T_ Load was obtained for each of the three alternatives, along with the measurements of F_ rate. For the 80-cube instance, we note that all LVHIS cubes are used, but they are randomly assigned between the five columns of the Discovery Wall for each benchmark instance.
We did not generate configurations with N_ T > 80 as these data volumes exceed the memory capacity of the GPUs. The THINGS galaxies are the highest-resolution spectral cubes considered in this study, and are not as representative of the typical resolved or partially-resolved new detections that will arise from ASKAP or MeerKAT Hi surveys.
Due to the presence of differing numbers of key-value pairs in the FITS headers, there is slight variation (see Table <ref>) in the ratio between V_ Store (the total data volume in GB) and N_ vox (the total number of voxels in Gigavoxels) for the 54 independent survey configurations. The result of a least-squares fit to these two quantities was:
V_ Store = 4.07 N_ vox- 0.084 ,
with the mean and sample standard deviation between measured and modelled values for V_ Store calculated to be -9.4 × 10^-6 GB and 0.13 GB respectively. For simplicity, we can approximate V_ Store∼ 4 N_ vox as expected for a data format using four bytes per voxel.
§.§ Procedure
All of the spectral cubes are stored on the workstation associated with column 1 of the Swinburne Discovery Wall (the Master Node - see Figure <ref>), and the other workstations access this data through a network file system (NFS) mount (see <ref>). Consequently, we expect that the limiting factors on T_ Load are: (1) the network bandwidth between each Process and Render workstation and the Master; (2) the read time from the NFS-mounted drive; and (3) the processing overheads due to pre-computation of statistical parameters, as noted at the end of <ref>.
The following procedure was used to conduct each of the benchmark trials:
* The set of spectral cubes is randomly selected either without replacement (when N_ cube≤ N_ s) or with replacement, and a database file is generated in the comma-separated variable (CSV) format required by encube.
* Symbolic links are generated to each of the N_ cube spectral cubes, to minimise the duplication of data on the Master workstation.
* Modifications to the encube configuration file (keyword-value pairs using JavaScript Object Notation[JSON: https://www.json.org/json-en.html]) are made, specifically the number of rows and columns of S2PLOT panels per column of the Discovery Wall, the total number of panels per workstation, and the names of the workstations (an illustrative configuration sketch is shown after this list).
* Encube is launched from the Master workstation using the JSON configuration file, with calls to start the software on the Process and Render nodes. Socket connections are established between the Master and the Process and Render nodes, and a port is opened for connection to the user interface (UI).
* The encube UI is activated as a web-page in the Firefox browser on the Master machine. The UI displays the database of spectral cube files. The required files are selected and timing for T_ Load commences on mouse-clicking the Load button.
* Timing ends when all spectral cubes are displayed. As timing is performed by hand, all times are rounded up to the nearest whole second to account for the timekeeper's reaction time.
* For the subset of configurations where frame rates are also recorded on a per-column basis, an autospin signal is triggered from the UI which causes all of the spectral cubes to rotate around the vertical axis. At each of the five keyboards attached to the columns (see Figure <ref>), the d key is pressed, activating the S2PLOT graphics debug mode, which reports the instantaneous frame rate (measured over a moving window of 5 seconds duration). After each spectral cube has completed several full rotations, the lowest measured frame rate is recorded. This represents the worst-case scenario, as the frame rate is a strong function of both the viewing angle of a spectral cube and the fraction of the screen that is mapped to data voxels.
* Once benchmark quantities have been recorded, a signal to stop the encube instances is initiated from the UI, and all of the processes are stopped from the Master workstation. It takes approximately 60 seconds for all nodes to release their socket connections ready for the next full iteration of the procedure.
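For illustration only, the sketch below shows the kind of keyword-value content described in step 3. The key names and workstation names are hypothetical placeholders, not the actual encube configuration schema:

```python
# Hypothetical illustration of a benchmark configuration file; the key names
# and workstation names are placeholders, not the real encube schema.
import json

config = {
    "panel_rows": 4,          # S2PLOT panel rows per Discovery Wall column
    "panel_columns": 2,       # S2PLOT panel columns per Discovery Wall column
    "panels_per_node": 8,     # total panels handled by each workstation
    "nodes": ["node01", "node02", "node03", "node04", "node05"],
}

with open("benchmark_config.json", "w") as handle:
    json.dump(config, handle, indent=2)
```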
The outcomes of the benchmarks are reported as follows:
* A statistical summary (mean, sample standard deviation, and median) of T_ Load for the three independent instances of each survey configuration is presented in the final two columns of Table <ref>.
* The survey load time is plotted as a function of the storage volume in the left-hand panel of Figure <ref>. All 54 independent benchmarks for T_ Load are presented, with symbols for WHISP (squares), THINGS (circles), LVHIS (triangles) and the Combination survey (diamonds).
* Individual values and a statistical characterisation of F_ rate are presented in Table <ref>. A subset of 21 configurations was considered here: Set A, with N_ cube = 20 and Set E, with N_ cube = 180.
* The minimum frame rates for each of columns 2-5 for Set A (circles) and Set E (triangles) are plotted in the right-hand panel of Figure <ref> as a function of the mean memory per GPU on the Discovery Wall.
A linear relationship exists between T_ Load (s) and V_ Store (GB), with a least squares fit result:
T_ Load = 8.07 V_ Store + 4.58 .
The mean and sample standard deviation between measured and modelled values for T_ Load were calculated to be 5.6 × 10^-4 seconds and 13.9 seconds respectively. The Pearson correlation coefficient between T_ Load and V_ Store was r = 0.98. For completeness, we find:
T_ Load = 32.83 N_ vox + 4.063
with N_ vox in Gigavoxels.
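The fitting procedure itself is straightforward; a minimal sketch of a first-order least-squares fit and Pearson correlation, using placeholder values rather than the actual 54 benchmark measurements, is:

```python
# First-order least-squares fit of load time against storage volume, plus the
# Pearson correlation coefficient. The arrays below are placeholders only.
import numpy as np

v_store = np.array([1.2, 4.8, 9.6, 18.1, 34.7])       # GB (illustrative)
t_load = np.array([14.0, 44.0, 82.0, 150.0, 285.0])   # s (illustrative)

slope, intercept = np.polyfit(v_store, t_load, 1)
r = np.corrcoef(v_store, t_load)[0, 1]
print(f"T_Load ~ {slope:.2f} V_Store + {intercept:.2f}   (r = {r:.3f})")
```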
We discuss the implications of our benchmarking activities in Sections <ref> to <ref>. In the next section, we provide details of our VDAR evaluation.
§ VISUAL DATA ANALYSIS AND REASONING STUDY
<cit.> and <cit.> proposed a taxonomy for understanding and evaluating visualisation methods. We select the VDAR approach to examine typical survey-scale discovery-based research processes, relevant for current and future extragalactic Hi surveys.
VDAR includes methodologies for evaluating the effectiveness or efficacy by which a visualisation tool helps to generate domain-specific actionable knowledge or understanding. VDAR methods, which often are based on case studies, investigate “the tool used in its intended environment with realistic tasks undertaken by domain experts” <cit.>, with an emphasis on the process rather than measurements of outcomes.
Our user group for the VDAR study comprises only the authors of this work. This cohort includes domain experts (i.e. Hi astronomers with relevant experience in the observation, analysis and visualisation of spectral cubes), as required with the VDAR methodology. We assert that these experiences are representative of the broader Hi research community.
Alternative evaluation methodologies for visualisations and visualisation systems <cit.> that we did not pursue include Evaluating Collaborative Data Analysis (CDA), which focuses on the process of collaboration and how it is supported by a visualisation solution, and User Performance (UP), which uses controlled experiments to measure, for example, the time taken for different users to complete tasks. As a point of comparison, <cit.> used the UP methodology to measure task performance when novice and expert participants completed an object identification activity using either a standard desktop monitor or a TDW.
To provide relevant scenarios for the VDAR study, we
consider three important SIMD processes that may be required when analysing extragalactic Hi survey data: (1) quality control of individual candidate spectral cubes; (2) candidate rejection, whereby false-positive detections from automated source finders are rejected; and (3) morphological classification, identifying and sorting sources into categories based on observed structural or kinematic properties. These three processes currently require some level of visual inspection [which may include the use of either projected moment maps or 3D visualisation methods, depending on the workflow preferences of the researcher(s) involved] in order to produce reliable, science-ready catalogues from large-scale, next-generation surveys.
It is important to note that our VDAR study does not intend to demonstrate new knowledge about any of the three input Hi surveys – WHISP, THINGS, and LVHIS – as all have been well-studied in many other contexts. They stand in as proxies for future Hi survey data products that are, potentially, being viewed for the very first time by members of the research team. As such, there may be unexpected, or unexplained, features that are present in the data products, necessitating appropriate follow-up actions once they have been identified.
Alternatively, the comparative visualisation stage may reveal that all is well with automated calibration or processing steps (e.g. model-fitting) at an early stage of science operations, thus serving its purpose. For a related example where the use of an alternative display technology evolves throughout the lifetime of an astronomical research project, see Section <ref>.
§.§ Quality control
When an Hi source finding pipeline is applied to a large-scale survey cube, the output is a set of individual source cubelets. Prior to their use in further analysis, there is value in performing by-eye quality control, to ensure that there are no significant issues with the data quality. This step would be expected to include looking for: (1) bad channels; (2) calibration errors such as poor continuum subtraction; (3) objects that have not been correctly extracted, such as extended sources that exceed the boundaries of the extracted cubelet; and (4) radio frequency interference.
The VDAR study we performed to understand the quality control process relates to our observation when first visualising a sub-set of WHISP galaxies with encube. As noted in Section <ref>, spectral channels at both ends of the band-pass contain excess flux. We illustrate this issue in the top panel of Figure <ref>, using an 80-cube configuration. The excess flux is visible in 77 of the cubes displayed. This is seen as the strong blue and red features in each cube, making it difficult to see the WHISP galaxies themselves.
With encube, it is immediately clear that a quality control issue is present and is impacting a sizeable portion of the survey. From Table <ref>, it takes less than 90 seconds to load the 80 WHISP cubes, and then less than 60 seconds to identify the 3 cases that do not appear to be affected. Performing this task in a serial fashion would require individual loading and inspection of spectral cubes: it would take much longer than 150 seconds to determine the extent of the quality control issue in order to take an appropriate action.
Our solution was to replace data values in the first eight and last eight channels of each WHISP spectral cube. This has the desired effect, revealing the kinematic structures of the sources (see the lower panel of Figure <ref>).
Additional time will be required to resolve any quality control issue. In this case, we needed to write and execute a C-language program using the CFITSIO[https://heasarc.gsfc.nasa.gov/fitsio/] <cit.> library to create modified FITS-format data cubes for the WHISP galaxies. For a future Hi survey, it may require modification or re-tuning of an automated calibration pipeline. However, this time is independent of whether the quality control visualisation is approached in a serial or parallel fashion. Indeed, comparative visualisation provides a more rapid demonstration that the intervention had the desired effect.
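The operation itself is simple. As an illustration, the sketch below performs an equivalent fix in Python with astropy.io.fits rather than the C/CFITSIO program used here; it assumes a three-dimensional cube whose spectral axis maps to the first axis of the in-memory array (real WHISP cubes may carry extra degenerate axes that need handling):

```python
# Equivalent channel-blanking fix using astropy (illustrative only; the
# original was a C program using CFITSIO). Assumes a 3D cube with the
# spectral axis as the first axis of the NumPy array.
from astropy.io import fits

def blank_edge_channels(path_in, path_out, n_edge=8):
    data, header = fits.getdata(path_in, header=True)
    data[:n_edge, ...] = 0.0     # first eight spectral channels
    data[-n_edge:, ...] = 0.0    # last eight spectral channels
    fits.writeto(path_out, data, header, overwrite=True)
```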
Our approach to comparative quality control with encube is consistent with the model of sensemaking presented by <cit.>. Here, our use of the Discovery Wall has two dimensions: (1) a foraging loop, organising data, searching for relations, and gathering evidence; and (2) a sensemaking loop, where alternative hypotheses are posed and examined, leading to a presentation of the outcomes.
In the foraging loop, we determine that a quality control issue exists, as the initial volume renderings are not consistent with the expected profiles of Hi-detected sources. This issue impacts a significant number of spectral cubes in the sample (77 out of 80). Through physical navigation (i.e. moving to different locations near the Discovery Wall), the viewer can change their attention from a single object to an ensemble in order to gather evidence regarding the possible cause of the failed visualisations.
In the sensemaking phase, we decide that a first course of action is to remove the impact of the excess flux in all spectral cubes, and visualise the outcomes. Further investigation could include selecting the subset of those spectral cubes most strongly impacted, in order to determine the cause(s) of the excess flux.
§.§ Candidate rejection
An unwanted outcome of automated source finders is the generation of false-positive detections. This is particularly true in the early phases of operation of new survey programs, when source finders may not yet have been tuned optimally to the specific characteristics of the data, but false-positives may persist throughout the lifetime of a survey.
One way to improve the accuracy of source-finders is to raise the acceptance threshold, so that fewer candidates make it through the processing pipeline for further inspection and analysis. This approach reduces the discovery space, with many interesting objects remaining undetected. By lowering the acceptance criteria, more false candidates will need to be reviewed and ultimately rejected. This can be a particularly labour intensive phase.
Visual inspection is the simplest way to distinguish between true sources and false detections, but may require an appropriate level of expertise. Here, again, quality control processes will be crucial, as individual cubelets may suffer from anomalies from processing, calibration, or interference.
Our bespoke visualisation environment permits rapid inspection and comparison of many sources at the same time, improving the way that decisions are made regarding the nature of candidates. The VDAR study we performed to understand the candidate rejection process was to:
* Load one of the 80-cube combination surveys (Set C), with T_ Load∼ 150 seconds. The combination survey includes a high proportion of spatially resolved galaxies from the THINGS and LVHIS catalogues.
* Visually inspect every source, looking for the spatially resolved galaxies, and then identifying which of these did not immediately match the expected template of a grand design spiral galaxy.
It took less than three minutes to visually inspect all 80 cubes. While some resolved, non-spiral galaxies were very easy to identify, others require additional time in order to reach a decision. Here, the use of the volume rendering technique allows for individual sources, or sets of sources, to be rotated such that either the spatial or kinematic structure can be used to reach a decision.
Figure <ref> shows columns 2–5 of the Swinburne Discovery Wall, with labels under the image used to identify five sources of interest (A-E):
* Source A (THINGS, NGC 3077) is spatially resolved, but shows a disrupted Hi structure. NGC 3077 is connected to a larger neighbouring spiral galaxy, M81, by an Hi bridge <cit.>;
* Source B (LVHIS, ESO 245-G007) shows a “tube-like” feature (readily apparent when rotating the spectral cube) surrounding a central, somewhat spatially unresolved object;
* For source C (WHISP, UGC01178), there is no visible flux, which is likely due to a poor choice of the default visualisation parameters;
* Source D (LVHIS, AM 0319-662) comprises two Hi detections, with the more prominent source offset from the centre of the cube. The central LVHIS source is a dwarf irregular galaxy, a companion to NGC 1313 at the lower right of the cube <cit.>; and
* Source E (THINGS, NGC 5236) is a spiral galaxy, but the overall blue feature extending across the source indicates that some additional processing may be required. In particular, this feature is explained by the fact that this source, Messier 83, is known to have an Hi diameter much larger than the VLA primary beam with which it was observed in the THINGS project. The overview provided by many small multiples rapidly highlights this source's distinctive feature, which was not present in any of the other 79 sources in this sample.
Identification of these five “anomalous” cases occurs rapidly, when the viewer is able to both see a large sample (i.e. comparative visualisation, by stepping back from the Discovery Wall) and investigate an individual object in more detail (by moving closer to view, or interact with, an object of interest).
To close the loop on candidate rejection, a minor modification to encube would allow each spectral cube to be tagged in real time as a true or false detection, which would then be fed back to the source finder to improve the true detection rate.
§.§ Morphological classification
Once a catalogue of robust detections has been gathered, the nature of the sources must be considered. For previously known objects, a morphological classification has likely already occurred. For new discoveries, an initial classification can be provided.
For future Hi surveys conducted with wide-field interferometric imaging, the extended structure of many sources will be visible. This includes detecting the presence of low column density features such as bridges, tails, etc. Consequently, visual morphological classification of complete, unbiased, sub-populations of sources will be possible. Indeed, with a statistically significant population of Hi galaxies, selected in an unbiased (i.e. blind survey) fashion, it becomes possible to develop new morphological categories – beyond the standard Hubble classification – that may correlate with the local or global environment or integral properties, such as the Hi mass.
The morphological classification process shares many similarities with the candidate rejection phase, and we appeal to the same VDAR study as in Section <ref>. The two features of our bespoke visualisation environment that provide an alternative approach to morphological classification, at scale, are: (1) the use of volume rendering, which allows each spectral cube to be rotated around any axis, providing immediate access to both spatial and kinematic information; and (2) the comparative nature of the display configuration, which makes it easy to go back-and-forth between specific objects in order to reach a decision regarding the classification. This might mean a change in the outcome of an initial or even pre-existing classification, or the recognition that a new sub-class of objects had been identified.
§ DISCUSSION
In this Section, we interpret the benchmarking results obtained with encube on the Swinburne Discovery Wall.
By considering survey sizes, data load times, visualisation configurations and interaction frame rates, we estimate the visualisation throughput, which we present in terms of the number of sources that could be examined in a given period of time. As a reflection on the role for bespoke visualisation environments in astronomy, we also discuss the evolution of advanced visualisation systems when used in astronomical research projects.
§.§ Load times
In order to be a useful adjunct to desktop-based visualisation methods, an alternative display solution needs to provide an appropriate level of computational performance.
Regardless of whether a single spectral cube or multiple cubes are to be visualised, there is an unavoidable overhead while the data is transferred from its storage location into the computer memory. While this latency may not be as noticeable when working with a single cube, there is a cumulative loss of time when working with large surveys. This effect increases if individual cubes are loaded multiple times for comparative tasks. The most important factors in the load time are the network and internal transfer bandwidths and the volume of data.
Our benchmarking results revealed a strong positive correlation between T_ Load and V_ Store across a range of storage volumes from 1.17 GB to 34.73 GB. This is consistent with our expectation that each of: (1) the data access and load phase, where each Process and Render node must transfer data via the NFS mount to the Master node; (2) the pre-computation performed for each spectral cube; and (3) the initial transfer of data to the GPU for texture-based volume rendering have O(N) algorithmic behaviour. If any one of these processes imposed a bottleneck for the increasing total data volume, we would expect to see deviations away from the linear scaling.
With the Swinburne Discovery Wall hardware, we can load 180 spectral cubes drawn from: (1) the LVHIS survey in under 2 minutes; (2) the WHISP survey in under 3 minutes; and (3) combinations of WHISP, THINGS and LVHIS cubes in under 5 minutes.
Using the median T_ Load for WHISP-only surveys in Table <ref>, we can consider alternative configurations that reach the same total number of data cubes, but through multiple loads of smaller quantities at a time. An additional overhead here is that we need to wait T_ Socket = 60 seconds for the Process and Render nodes to release their socket connections before the next configuration can be loaded. Expected total load times (rounded up to the nearest half minute) are as follows:
* Nine sets of 20 WHISP cubes will load in 11.5 minutes
(9 × 21 + 8 × T_ Socket = 669 s);
* Four sets of 40 WHISP cubes plus one set of 20 WHISP cubes will load in 7.0 minutes
(4 × 38 + 1 × 21 + 4 × T_ Socket = 413 s); and
* Two sets of 80 WHISP cubes plus one set of 20 WHISP cubes will load in 5.0 minutes
(2 × 73 + 1 × 21 + 2 × T_ Socket = 287 s).
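These estimates follow directly from the per-batch load times and the socket-release latency; a small sketch using the median WHISP load times quoted above (before the text's rounding up to the nearest half minute):

```python
# Total time for repeated smaller loads: the sum of per-batch load times plus
# a 60 s socket-release wait between consecutive batches.
def total_minutes(batch_load_times_s, t_socket=60.0):
    waits = t_socket * (len(batch_load_times_s) - 1)
    return (sum(batch_load_times_s) + waits) / 60.0

print(total_minutes([21] * 9))          # nine sets of 20 cubes        -> ~11.2 min
print(total_minutes([38] * 4 + [21]))   # four sets of 40 + one of 20  -> ~6.9 min
print(total_minutes([73] * 2 + [21]))   # two sets of 80 + one of 20   -> ~4.8 min
```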
By increasing the total number of cubes displayed on the Discovery Wall, we benefit from parallelisation across the Process and Render nodes during the pre-computation phase and we do not experience the system latency imposed by T_ Socket. The advantage of using the 4K UHD monitors is that we retain a reasonable image resolution per source even when there are 18 spectral cubes per individual monitor (36 cubes per column) of the Discovery Wall.
§.§ Frame rates
Once a configuration of spectral cubes has been loaded and displayed on the Discovery Wall, the most important metric is the frame rate. The higher the frame rate, the smoother the interaction experience when modifying the location of the camera (e.g. when controlling the visualisation of all the spectral cubes simultaneously via the user interface).
For encube, there are several key observations that we make:
* The frame rate depends on the size of the S2PLOT window, such that expanding over both 4K-UHD monitors per Process and Render node decreases the frame rate. This is seen in the per-column frame rates in Table <ref>, where F_1 values (the Master node) are generally higher than those of the other four columns (F_2 to F_5). In order to display the user interface in the web browser on the Master node, we do not extend the S2PLOT window across both monitors.
* There are variations in the frame rate as a function of viewing angle, which depends on the relative number of voxels along each axis of a cube [see, for comparison, Figure 5 of <cit.>]. By reporting the lowest measured frame rates after each cube has undergone several complete rotations, we are presenting worst-case outcomes on interactivity.
* Frame rates can decrease when zooming in on details. The amount of processing work performed by the GPU depends on the fraction of screen pixels that contain visible data. When zoomed out, a larger percentage of each panel comprises non-data (i.e. background) pixels. We did not record the effect on frame rates, as the default configurations for 180 cubes present a comparable ratio of data to total pixels as occurs when zooming in with one of the lower N_ cube configurations.
Setting a target of 10 frames/s as an indicator of reasonable interactivity with the data cubes, we exceed this for all of the 20-cube mock surveys (mean and median frame rates in Table <ref>), and for configurations of 180 sources selected entirely from the WHISP and LVHIS surveys.
For the 180-cube combination configuration, which includes a randomly-selected sample of 60 THINGS cubes, the mean and median frame rates fall below 5 frames/s. Here, the higher frame rates measured for spectral cubes assigned to the fifth column of the Discovery Wall (column F_5 in Table <ref>) occur as only 5-6 out of 36 spectral cubes were randomly selected from the THINGS survey. If we had “perfect” randomness in the construction of the mock survey samples, we would expect 12 THINGS galaxies assigned to each column. Instead, columns two to four are required to perform much more processing than column five per screen refresh (more memory or total voxels per GPU), resulting in the lower frame rates for (F_2 – F_4) when a single GPU is driving two 4K UHD monitors.
§.§ Throughput
One of the key metrics we wish to ascertain is the visualisation throughput, V_ tp, which is the number of source cubelets that can be inspected in a given period of time, measured in units of sources/hour.
For a single user, it is not expected that a peak V_ tp could be sustained throughout an entire day, but it is reasonable to assume that rates of 25-50% of V_ tp might be achievable for extended periods of time. This is compatible with a work pattern for quality control or source-finding candidate rejection where the candidates from the latest large-scale survey cube(s) are assessed daily.
§.§.§ Multi-object workflows
To estimate the throughput for a multi-object workflow, we consider two scenarios using the combination mock survey:
* An 80-cube configuration. The full dataset loads in around T_ Load = 160 seconds (mean load time plus one standard deviation). An initial inspection can occur in T_ Inspect = 180 seconds (see Section <ref>). If we assume 25% of sources require additional action, and the recording of that action takes 60 seconds, then T_ Action = 1200 seconds.
* A 180-cube configuration. The full dataset loads in T_ Load = 300 seconds. The time required for the initial inspection is assumed to scale linearly with the number of sources, such that T_ Inspect∼ 405 seconds. With 25% of sources requiring a 60-second action to be recorded, then T_ Action = 2700 seconds.
The total time required for the completion of a SIMD process with encube is then:
T_ SIMD = T_ Load + T_ Inspect + T_ Action + T_ Socket
where T_ Socket, introduced in Section <ref>, is a system latency. Using the values proposed for these four quantities, we suggest that T_ SIMD(80 ) = 1600 seconds (26.7 minutes) and T_ SIMD(180 ) = 3465 seconds (58 minutes).
Taken together, we estimate that V_ tp = 160-180 sources/hour seems reasonable for the completion of one of the three SIMD tasks we have considered in our VDAR study. Moreover, we have assumed only a single astronomer completing the task, whereas the large-format workspace of the Discovery Wall comfortably accommodates a small group working together.
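A compact restatement of this back-of-envelope estimate, using the T_Load, T_Inspect and T_Action values assumed above and T_Socket = 60 s, reproduces the totals of 1600 s and 3465 s and a throughput broadly consistent with the quoted range:

```python
# Back-of-envelope SIMD timing and throughput for the two configurations,
# using the values assumed in the text.
def t_simd(t_load, t_inspect, t_action, t_socket=60.0):
    return t_load + t_inspect + t_action + t_socket

for n_cube, t_load, t_inspect, t_action in [(80, 160, 180, 1200),
                                            (180, 300, 405, 2700)]:
    total = t_simd(t_load, t_inspect, t_action)
    rate = 3600.0 * n_cube / total
    print(f"{n_cube:3d} cubes: {total:.0f} s  (~{rate:.0f} sources/hour)")
```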
§.§.§ Comparison with single-object workflows
As a point of comparison, we consider a single-object workflow, i.e. one source is loaded and visualised at a time with encube and using the Swinburne Discovery Wall hardware.
A relationship between the single object load time and the FITS filesize was determined using a minimal sample of representative spectral cubes from each of the WHISP, THINGS and LVHIS datasets. We select the cubes with the smallest and largest filesizes, along with a cube that had the median file size (see Table <ref>). We measure load times for visualisation with encube running only on the head node, where the data is stored, and on a remote machine over the network via the NFS mount. We used a manual timing method with a reaction time error of 0.5 seconds.
As shown in Figure <ref>, we find minimal differences in load times from the local disk (filled circles) or via the remote NFS mount (open circles). Performing a least squares fit to the combined data, we obtain:
T_ Load = 37.71 V_ Store - 1.04 seconds
with a Pearson correlation coefficient between T_ Load and V_ Store calculated to be r = 0.997.
Using the average and median sample survey file sizes from Table <ref>, we compare the single-object and multi-object load times for the 80-cube WHISP, THINGS, LVHIS and combination configurations – see Table <ref>. The ratio of the single-to-multi object load times was calculated for each configuration, showing a 4-5 times speed-up in load times using the five compute nodes of the Swinburne Discovery Wall. This is not surprising for the nearly-perfect parallelism expected in this stage of the workflow, but with a slight input/output bottleneck at the head node where all of the data is stored.
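The origin of this speed-up can also be seen directly from the two fitted relations: assuming a survey of N cubes with a common mean cube size v (in GB), the serial single-object total is approximately N(37.71 v - 1.04) seconds, while the parallel multi-object load is approximately 8.07 Nv + 4.58 seconds. A sketch with purely illustrative survey parameters (not the tabulated values):

```python
# Illustrative speed-up estimate from the two fitted load-time relations; the
# survey parameters below are placeholders, not the tabulated file sizes.
def t_single_total(n_cubes, mean_v_gb):
    return n_cubes * (37.71 * mean_v_gb - 1.04)

def t_multi_total(total_v_gb):
    return 8.07 * total_v_gb + 4.58

n_cubes, mean_v = 80, 0.3                       # hypothetical 80-cube survey, 0.3 GB/cube
ratio = t_single_total(n_cubes, mean_v) / t_multi_total(n_cubes * mean_v)
print(f"speed-up ~ {ratio:.1f}x")               # ~4x, in line with the 4-5x reported
```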
§.§.§ Estimates for future extragalactic Hi surveys
In Figure <ref>, we estimate and compare the throughput for multi-object and single-object SIMD workflows. In addition to the LVHIS and WHISP extragalactic Hi surveys, we obtain preliminary results for the APERTIF and WALLABY surveys; these values are indicative only, as the underlying analysis is yet to be completed. We base our throughput predictions on 10,000 APERTIF sources (in the velocity range 1,000 to 10,000 km/s) with a mean storage volume of 0.62 MB/source cubelet[K.Hess, private communication] and 210,000 sources in WALLABY with a mean storage volume of 3 MB/source cubelet.[Analysis by author CM]
The time to inspect each source is highly dependent on the SIMD task. For the candidate rejection VDAR activity (Section <ref>), we performed an initial visual scan across 80 spectral data cubes displayed on the Swinburne Discovery Wall in three minutes, or 2.25 seconds/cube. This is achievable, once all cubes have been loaded, by using physical navigation to move rapidly around the display space. With the continual cognitive set-shifting required for a lone astronomer to load and inspect one cube at a time, regardless of the display and visualisation software used, it may take 10-30 seconds per cube even at peak performance. Moreover, the single-object workflow removes the opportunity to perform comparisons, or rapid revisits to double check that a previously-viewed source had been inspected adequately.
For each survey, we consider three scenarios with different follow-up action times: (1) T_ Action = 0, such that inspection occurs but no additional actions are required for all sources;
(2) T_ Action = 30 s/source for 10% of sources; and (3) T_ Action = 60 s/source for 25% of sources. Symbols are used in Figure <ref> to differentiate between the inspection times, with T_ Inspect = 3 s/source for a multi-object workflow (filled circle) and T_ Inspect = 10 s/source (open triangle) and T_ Inspect = 30 s/source (plus symbol) for single-object workflows. For large survey sizes, N_S, these components of T_ SIMD dominate over T_ Load regardless of whether a single-object or multi-object workflow is used. The minor contribution from T_ Socket has been omitted. In all of the scenarios we considered, the estimated throughput with a multi-object workflow exceeds that of a single-object workflow.
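At survey scale, the total effort is dominated by the per-source inspection and follow-up terms; the sketch below reproduces the structure of these estimates, using the per-source inspection times and action fractions stated above and neglecting T_Load and T_Socket:

```python
# Total inspection effort (hours) for N_S sources under the three follow-up
# scenarios, comparing multi-object (~3 s/source) and single-object
# (>= 10 s/source) inspection times. Load and socket latencies are neglected.
def effort_hours(n_sources, t_inspect, action_fraction, t_action):
    return n_sources * (t_inspect + action_fraction * t_action) / 3600.0

n_sources = 210_000   # anticipated WALLABY detections quoted above
scenarios = [("no follow-up", 0.00, 0.0),
             ("30 s for 10% of sources", 0.10, 30.0),
             ("60 s for 25% of sources", 0.25, 60.0)]

for label, frac, t_act in scenarios:
    multi = effort_hours(n_sources, 3.0, frac, t_act)
    single = effort_hours(n_sources, 10.0, frac, t_act)
    print(f"{label:25s} multi ~{multi:6.0f} h   single ~{single:6.0f} h")
```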
§.§ Evolution of visualisation solutions
Astronomers have developed their craft over centuries by using a combination of singular, bespoke facilities for data gathering (e.g. dedicated observatories and supercomputers) supported by widely-available, general purpose resources for data analysis and visualisation (e.g. desktop and laptop computers in the digital era). We assert that a complementary role exists for dedicated advanced visualisation facilities that can provide a very different experience to that of the everyday.
In the same way that astronomers do not expect to operate their own personal 64-metre radio telescope or 8-metre class optical/infrared telescope, there should not be an expectation, or need, for all astronomical institutions to operate a local advanced visualisation facility. What is more important is that when such facilities are available, there is a community of interested and potential users who are able to take advantage of them.
As astronomical teams prepare themselves for the next phase of petascale and exascale data collection, new visualisation strategies that enable and enhance survey-scale discovery-based research processes will be required. Our VDAR evaluation demonstrates how comparative visualisation (implemented using encube and the Swinburne Discovery Wall) could be applied to SIMD visual analysis tasks that would not otherwise be feasible using a standard desktop configuration.
Until a survey project is underway, the exact configuration of software and hardware that provides the most productive approach to advancing scientific knowledge may not be known. As the projects develop, familiarity with the strengths and weaknesses of the instrumentation and software-pipelines will also grow. The strategies for analysis and visualisation adopted during the first year of data collection may not be the same as those deemed essential in the years that follow.
Some approaches to analysis and visualisation become essential throughout the lifetime of the individual research project where they were first adopted, perhaps spreading further into the discipline to become ubiquitous. Other alternatives may be relevant for a short period of time, or may only need to be accessed by a few members of a research team, but provide a much-needed distinctive perspective that serves to accelerate discovery. By presenting alternatives to current ways of working, astronomers can consider for themselves whether a combination of options will assist them at various stages of their research workflow.
As an illustrative example of the evolution in the use of display environments, we look to the real-time, multi-wavelength Deeper Wider Faster (DWF) fast transient detection program <cit.>, where the Swinburne Discovery Wall – used as a TDW without encube – has also played an important role.
As an international collaboration, DWF operations rely on a core team of co-located human inspectors with access to suitable visualisation software and hardware to support their decision-making processes during high-intensity, real-time observing campaigns. Through identification of potential fast or short-lived transient events, the DWF team determines whether there is a need to trigger immediate follow-up observations (e.g. target of opportunity spectroscopic observations with one of the Keck Observatory telescopes).
Informed by a user performance study that investigated potential roles for TDWs in supporting inspection of very high pixel-count images by individuals or small teams <cit.>, a TDW became a necessary component of the display ecology used in the DWF project. The TDW replaced an initial inefficient visualisation workflow (used during pilot observations in 2015), where the research team used laptop screens and desktop monitors to inspect each of the 60 CCD frames (4096 × 2048 pixels) per field imaged with the Dark Energy Camera [DECam; <cit.>].
Over successive observing campaigns, as reported by <cit.>, the role and configuration of the TDW changed in response to user requirements and feedback.
The visual inspection tasks performed by DWF team members were modified due to improvements in scientific understanding of the categories of fast transients that were being identified in real-time (and by extension those categories that could be analysed after the short-duration observing campaigns had concluded), along with enhancements to the automated pipelines <cit.>.
In turn, improvements of the automated pipeline were directly informed by the knowledge the team acquired through using the TDW.
At the time of writing, while no longer essential in the DWF context, the Swinburne Discovery Wall continues to play a role during real-time DWF campaigns. At critical stages of the development of DWF, however, the TDW was a solution that was “fit for purpose” and supported team-based visual discovery tasks that were not feasible to conduct with a standard desktop-bound approach.
§ CONCLUSIONS
The expected growth in both the volume and velocity of data from future astronomical surveys necessitates a move away from serial workflows.
The comparative visualisation approach we have investigated here via benchmarking and a VDAR evaluation is not intended to replace existing alternatives, but provides a demonstration of a complementary workflow that addresses some existing – and emerging – challenges in the size and scale of astronomical surveys.
Within our case study context of extragalactic Hi surveys, we anticipate that both the short and longer term use of automated pipelines will retain a stage of visual inspection and classification. We suggest that this can be achieved more successfully, and more rapidly, using a method that is not about inspecting one object at a time.
As we have shown here, the encube framework operating on a tiled display wall presents a compelling alternative mode for SIMD activities.
We have considered tasks that are highly repetitive, yet may need to be performed on all sources detected within a survey. Examples here include quality control, candidate rejection, and morphological classification. In all cases, as identified through our VDAR studies, encube
encouraged a sensemaking process <cit.> with a foraging phase and a sensemaking loop. The comparative nature of the display – comfortably visualising 180 spectral cubes at a time, using the Swinburne Discovery Wall configuration of ten 4K-UHD monitors – supports the rapid identification of features affecting multiple source cubelets while also presenting immediate access to both the spatial and spectral data for individual objects (through our use of volume rendering).
A few hours interacting with data with encube on the Discovery Wall could replace weeks to months of work at the desktop – without diminishing the importance of the follow-up detailed analysis that the desktop supports. We estimate a throughput of 160-180 sources/hour could be inspected using the configuration that we assessed.
Both encube and the Swinburne Discovery Wall are easily modifiable and scalable, in the sense that additional columns of monitors plus computers can be added to increase the number of sources displayed at a time. Implementation of our solution at another institution requires access to: the open-source software<cit.>; one or more Linux-based computers; (ideally) multiple monitors; and an appropriate network connection between the process and render nodes and the master node where the data set is stored.
Customised visualisation and analysis approaches will evolve over time as surveys progress. They should be employed during those periods that are particularly labour-intensive, while assisting in the identification of additional processes that can be fully or partly automated. Finding the appropriate balance between human inspection and automated detection may help to maximise the overall discovery potential of a workflow <cit.>.
§ ACKNOWLEDGEMENTS
We acknowledge the Wurundjeri People of the Kulin Nation, who are the Traditional Owners of the land on which the research activities were undertaken.
Christopher Fluke is the SmartSat Cooperative Research Centre (CRC) Professorial Chair of space system real-time data fusion, integration and cognition. SmartSat CRC's activities are funded by the Australian Government's CRC Program. We acknowledge the generous support of the Eric Ormond Baker Charitable fund, which helped to establish the Discovery Wall and the remote observing facility at Swinburne University of Technology. We are extremely grateful to David Barnes and Amr Hassan for their technical advice and encouragement during early phases of this work, and to Kelley Hess for assisting with understanding the preliminary APERTIF Hi survey results. This paper made use of data from: WHISP, Westerbork Observations of Neutral Hydrogen in Irregular and Spiral Galaxies <cit.>; THINGS, The Hi Nearby Galaxy Survey <cit.>; and LVHIS, The Local Volume Hi Survey <cit.>.
§ IMPLEMENTATION NOTES
§.§ Technical matters
In this section, we highlight some additional features of the implementation of encube on the Swinburne Discovery Wall. One workstation is assigned the role of the Master Node, where the manager unit and interaction unit are deployed. All five workstations act as Process and Render nodes. Figure <ref> illustrates the connections and communication pathways between the Master node and each of the Process and Render nodes.
Encube is launched from a Linux terminal on the Master node, which activates the program instance on each of the Process and Render nodes. Each program instance: (1) creates and opens a socket for communication with the Master node; and (2) makes application programming interface (API) calls in C code to the S2PLOT library for interactive graphical elements. Relevant content from the configuration file hosted on the Master node is passed to the Process and Render nodes. Once the socket connections have been established, the user interface is accessed through a Web browser accessing localhost on the Master node (see Figure <ref>).
S2PLOT allows for the creation of independent regions of the graphics display window, referred to as panels. For simplicity, panels are presented in encube as a uniformly tiled matrix of rows and columns. The 3D geometry within an S2PLOT panel can be controlled by selecting the panel and using the attached mouse to rotate the data cube or the keyboard to zoom in or out. As each display column of the Discovery Wall is independent, it is possible to use the keyboard and mouse associated with a column in order to work with a local subset of data (see Figure <ref>). Alternatively, the location, orientation and view direction of the virtual camera can be set for each panel using an API call. This method is used when interacting with the user interface on the Master node, so that the virtual camera is updated simultaneously for all of the panels.
Each Process and Render node requests and loads relevant data files from the Master node, using a drive that is accessible using the network file system (NFS). Once each Process and Render node has loaded the required data, the spectral cube is visualised using 3D texture-based volume rendering. Here, an S2PLOT callback function is associated with each panel, and once per refresh cycle, the volume rendering is generated based on the current virtual camera position.
3D texture-based rendering provides a compromise between lower-fidelity two-dimensional texture image stacks (also implemented in S2PLOT) and computationally-demanding ray-shooting.
For simplicity of operation, two different colour-mapping options are provided: intensity-based, whereby a heat-style colour map is assigned from the minimum to the maximum voxel value for each spectral cube, and velocity-based mapping <cit.>. Here, the velocity data is utilised along with the voxel values, in order to provide cues as to whether neutral Hi gas is blue-shifted or red-shifted along the spectral axis with respect to the centre of the cube (assumed to be equivalent to the centre-of-mass for most systems).
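The velocity-based mapping can be summarised schematically as tinting each voxel towards blue or red according to its channel offset from the assumed kinematic centre, weighted by its intensity. The sketch below is only an illustration of the idea, not the encube/S2PLOT implementation:

```python
# Schematic velocity-based colouring: channels below the central channel are
# tinted blue (approaching), channels above are tinted red (receding),
# weighted by voxel intensity. Not the actual encube/S2PLOT shader.
import numpy as np

def velocity_rgb(cube):                                        # cube shape: (nchan, ny, nx)
    nchan = cube.shape[0]
    offset = (np.arange(nchan) - 0.5 * nchan) / (0.5 * nchan)  # -1 ... +1 along spectral axis
    weight = np.clip(cube / np.nanmax(cube), 0.0, 1.0)         # intensity weight
    red = weight * np.clip(offset, 0.0, 1.0)[:, None, None]    # red-shifted half
    blue = weight * np.clip(-offset, 0.0, 1.0)[:, None, None]  # blue-shifted half
    green = np.zeros_like(weight)
    return np.stack([red, green, blue], axis=-1)               # (nchan, ny, nx, 3) RGB volume
```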
While completing the benchmarking and VDAR evaluation activities (described in Sections <ref> and <ref>), we chose not to invest development time in making cosmetic changes to the encube user interface. In particular, the world in miniature component of the interface (see Figure <ref>) was not ideal when the number of spectral cubes visualised exceeded 40. This temporarily limits the ability to use some of the features of encube, such as the ability to select and swap cubes between any of the displays in real-time. However, the overall functionality and performance of the encube process and render components is not impeded.
In the implementation of encube that we benchmarked, there were some additional processing steps performed that add to the time taken to load each spectral cube. These comprise several independent complete passes through the spectral cube to calculate statistical parameters, compare actual data values with those recorded in the spectral cube metadata, and generate a histogram of data values for each spectral cube. Each of these processes has linear algorithmic scaling, depending only on the number of voxels in the spectral cube. Consequently, they introduce a multiplicative factor on the time to load all of the spectral cubes. Such pre-computation is a design choice that allows the CPU memory to be freed once data is loaded onto a GPU. Accessing these values has O(1) complexity later during interactive analysis.
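For illustration, the kind of O(N_vox) pre-computation described here (global statistics and a per-cube histogram) can be sketched as follows; this is a schematic in Python, not the encube C implementation:

```python
# Schematic of the per-cube pre-computation: global statistics and a
# histogram, each requiring a linear pass over the voxels. Not encube C code.
import numpy as np
from astropy.io import fits

def precompute_stats(path, n_bins=256):
    voxels = fits.getdata(path).astype(np.float32).ravel()
    voxels = voxels[np.isfinite(voxels)]          # skip blanked/NaN voxels
    stats = {"min": float(voxels.min()), "max": float(voxels.max()),
             "mean": float(voxels.mean()), "std": float(voxels.std())}
    hist, edges = np.histogram(voxels, bins=n_bins,
                               range=(stats["min"], stats["max"]))
    return stats, hist, edges
```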
§.§ Future enhancements
While working with encube during the VDAR evaluation, we identified several additional features or enhancements that could extend the framework's suitability for comparative visual analysis of large-scale extragalactic Hi surveys:
* Add an on-screen scale indicator. As all spectral cubes are scaled to a unit cube for convenience, the physical size of individual objects was lost.
* Within the user interface, allow selection or sorting of the source list by any metadata attribute, such as size, total Hi mass, or distance.
* Access and display detailed metadata of a selected object or set of objects. During the present work, a trivial modification was made to toggle visibility of the name of each object within its S2PLOT display panel.
* Improve the creation of the on-screen configuration, allowing more flexibility in how data is assigned to the available display space. For example, a non-uniform arrangement of panels per column, which could allow individual spectral cubes to be visualised at increased levels of detail or cubes with different sizes (e.g. spatial pixel coverage or rest-frame physical dimensions) could be presented at the same scale as demonstrated in Figure <ref>.
* Include support for additional data types to be loaded and displayed, including spectral cubes from different wavelength regimes or observing modes (e.g. optical integral field units), overlay of two-dimensional images, or visualisation of one-dimensional spectra.
* Provide a mechanism by which annotations could be recorded regarding individual sources, preferably through the use of speech-to-text capture and conversion.
* Support interactive masking of channels via the user interface for selected subsets of cubelets, so that the issues identified with the WHISP sample could have been resolved in real-time. Such modifications could then be embedded into the dataset, by exporting the modified spectral cubes for future automated, or human, analysis.
|
http://arxiv.org/abs/2307.04228v3 | 20230709164437 | Efficient Bayesian travel-time tomography with geologically-complex priors using sensitivity-informed polynomial chaos expansion and deep generative networks | [
"Giovanni Angelo Meles",
"Macarena Amaya",
"Shiran Levy",
"Stefano Marelli",
"Niklas Linde"
] | physics.geo-ph | [
"physics.geo-ph",
"cs.LG"
] |
Efficient Bayesian travel-time tomography with geologically-complex priors using sensitivity-informed polynomial chaos expansion and deep generative networks
Giovanni Angelo Meles, Macarena Amaya, Shiran Levy, Stefano Marelli, Niklas Linde
August 12, 2023
===============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Markov chain Monte Carlo (MCMC) methods commonly confront two fundamental challenges: accurate characterization of prior distributions and efficient evaluation of likelihoods.
In the context of Bayesian studies on tomography,
principal component analysis (PCA) can in some cases facilitate the straightforward definition of the prior distribution, while simultaneously enabling the implementation of accurate surrogate models based on polynomial chaos expansion (PCE) to replace computationally intensive full-physics forward solvers.
When faced with scenarios where PCA does not offer a direct means of easily defining the prior distribution, alternative methods like deep generative models (e.g., variational autoencoders (VAEs)) can be employed as viable options.
By sampling a simple prior probability distribution defined in the low-dimensional latent space of a VAE, it becomes possible to efficiently sample the physical domain. This is accomplished by employing a generator that is typically highly non-linear.
Deep generative models therefore offer appealing features for MCMC.
However, accurately producing a surrogate capable of capturing the intricate non-linear relationship between the latent parameters of a VAE and the outputs of forward modeling presents a notable challenge. Indeed, while PCE models provide high accuracy when the input-output relationship can be effectively approximated by relatively low-degree multivariate polynomials, this condition is typically unmet when utilizing latent variables derived from deep generative models.
In this contribution, we present a strategy that combines the excellent reconstruction performances of VAE in terms of prior representation with the accuracy of PCA-PCE surrogate modeling in the context of Bayesian ground penetrating radar (GPR) travel-time tomography.
Within the MCMC process, the parametrization of the VAE is leveraged for prior exploration and sample proposal. Concurrently, modeling is conducted using PCE, which operates on either globally or locally defined principal components of the VAE samples under examination. Our methodology is exemplified using channelized subsurface structures, providing accurate reconstructions and significant speed-ups, particularly in cases where the computation of the full-physics forward model is costly.
§ INTRODUCTION
Bayesian inversion methods can account for data and modeling uncertainties as well as prior knowledge, thus,
representing an attractive approach for tomography.
However, the difficulties in specifying appropriate prior distributions and high computational burden associated with repeated forward model evaluations often hinder proper implementation of Bayesian tomography
<cit.>.
In geophysical settings, prior distributions have traditionally been specified by assuming the subsurface to be represented by a Gaussian random field. As an alternative, in cases where this assumption is invalid, the prior can be informed using training images (TI), that is, large gridded 2-D or 3-D unconditional representations of the expected target spatial field that can be either continuous or categorical <cit.>. Proper parametrization of the prior is essential in Bayesian inversion, and suitable prior parametrizations may differ from what is commonly used for other purposes.
For example, in physics-based modeling and visualization, representations of geophysical media most often rely on pixel-based parametrizations. Physical media can then be associated with points in ℝ^N, where N is the number of elements in the corresponding pixel-based representation. However, while allowing easy implementation of forward modeling schemes (e.g., Finite Difference (FD) based on partial differential equations), pixel-based N-dimensional parametrizations are often not suitable to effectively parametrize the prior distribution, as N can be very large.
When prior knowledge suggests constrained spatial patterns, such as covariance or connected spatial structures, the prior-compatible models populate manifolds embedded in ℝ^N.
If this manifold can be locally identified with a subset of
ℝ^M, with M ≪ N, the prior distribution reduces to a function of M variables only, which leads to lower-dimensional inverse problems.
Various approaches can be employed to achieve manifold identification through dimensionality reduction. Among these techniques, principal component analysis (PCA) and related methods are the most commonly utilized linear dimensionality reduction methods <cit.>.
Based on a data set of prior model realizations and the eigenvalues/eigenvectors of the corresponding covariance matrix, PCA provides optimal M-dimensional representations in terms of uncorrelated variables that retain as much of the sample variance as possible.
PCA has found widespread application in geophysics, utilized in both deterministic and stochastic inversion algorithms, with recent advancements in this field offering the potential to reconstruct even discontinuous structures <cit.>.
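As a minimal sketch of this idea (array sizes and names are illustrative), a data-driven PCA basis can be obtained directly from a set of flattened prior realizations via an SVD:

```python
import numpy as np

def fit_pca(realizations, n_components):
    """realizations: (n_samples, N) array of flattened prior model realizations.
    Returns the mean model, the first n_components principal directions and
    the fraction of sample variance they retain."""
    mean = realizations.mean(axis=0)
    X = realizations - mean
    # SVD of the centred data matrix; rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)
    retained = var[:n_components].sum() / var.sum()
    return mean, Vt[:n_components], retained

def project(model, mean, components):
    return (model - mean) @ components.T       # M-dimensional representation

def reconstruct(coeffs, mean, components):
    return mean + coeffs @ components          # back to the N-dimensional grid

# toy example: 1000 realizations of a 40x40 field
sample = np.random.default_rng(0).random((1000, 1600))
mean, comps, retained = fit_pca(sample, n_components=60)
recon = reconstruct(project(sample[0], mean, comps), mean, comps)
print(f"60 PCs retain {retained:.2%} of the sample variance")
```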
In the context of media characterized by binary, channelized aquifer structures, which has been explored by several authors <cit.> and investigated in this study, PCA-based methods may however encounter challenges in identifying a lower-dimensional manifold that is suitable for easy sampling, which is particularly desirable in MCMC methods.
As an effective alternative, deep generative models such as variational autoencoders (VAEs) or Spatial Generative Adversarial Networks (SGANs) can be employed to achieve a low-dimensional parameterization of complex media distributions, facilitating easy sampling.
VAEs and SGANs are distinct types of deep generative models, both utilizing deep neural networks to learn the underlying input distribution and generate synthetic samples that closely resemble a provided dataset. VAEs capture patterns in data through a compressed latent space for reconstruction and generation, while SGANs use adversarial training to generate realistic synthetic output resembling reference data <cit.>.
In the context of Bayesian inversion, both VAEs and SGANs possess a crucial property: they generate realizations that exhibit patterns consistent with the TI when applied to random realizations drawn from an uncorrelated standard normal or uniform distribution effectively functioning as priors <cit.>. The incorporation of a low-dimensional parameterization representing the prior distribution simplifies the sampling process, thus making both VAEs and SGANs well-suited for MCMC schemes.
For this manuscript, we employ VAEs, as recent research suggests that their lower degree of nonlinearity in the corresponding networks makes them more amenable for modeling and inversion <cit.>.
Employing an effective parametrization for the prior (e.g., via PCs or VAEs) does not however in any way alleviate the challenge of computational load associated with modeling within MCMC methods, which can be substantial in applications like ground-penetrating radar (GPR) travel-time tomography.
Travel-time tomography encompasses diverse imaging techniques that use wave propagation to non-destructively deduce properties of the medium. For shallow subsurface applications, MCMC travel-time tomography can be employed with ground-penetrating radar to infer the distribution of electromagnetic ground wave velocity.
GPR data can be collected in a variety of configurations, with cross-hole designs being particularly well suited for groundwater investigations and deterministic and probabilistic algorithms alike <cit.>.
MCMC schemes for GPR travel-time inversion entail a substantial computational cost due to the numerous required forward model evaluations. This expense can be prohibitive, despite using advanced FD solvers <cit.>. To address this concern, a potential solution is to explore PCE surrogate modeling <cit.>. For Bayesian travel-time tomography,
<cit.> recently
proposed a Bayesian framework operating on data-driven PCs to characterize 2D multi-Gaussian priors, with PCE surrogates to compute GPR travel-times.
Taking these considerations into account, it could be speculated that adopting a VAE parametrization for the prior within PCE modeling could effectively address both issues simultaneously.
However, our contribution contradicts this speculation and demonstrates that utilizing a VAE parametrization for both the prior and PCE modeling can lead to sub-optimal outcomes.
We then present an effective alternative to capitalize on the beneficial aspects of both VAEs and PCE by proposing the separation of input parametrization between the inversion and modeling steps. Our approach involves using the latent representation to define the prior distribution and explore the posterior distribution. Additionally, we employ global or local sensitivity kernel-based PCA decompositions of the input, generated by the underlying VAE decoder, to facilitate the modeling process and address the inherent challenges of PCE in handling high-dimensional input.
§ BAYESIAN INVERSION
Forward models are mathematical tools that quantitatively evaluate the outcome of physical experiments. We refer to the relationship between input parameters and output values as the 'forward problem':
ℱ(𝐦) = 𝐝 + ϵ.
Here, ℱ, 𝐦, 𝐝 and ϵ stand for the physical law or forward operator,
the input parameters (typically representing local media properties), the output and a noise term, respectively.
The goal of the 'inverse problem' associated with the 'forward problem' in Eq. (<ref>) is to infer properties of 𝐦 conditioned by the data 𝐝 while taking into account any available prior information about 𝐦.
A general solution to this problem can be expressed in terms of the posterior distribution defined over the input domain by Bayes' theorem:
P(𝐦|𝐝)= P(𝐝|𝐦)P(𝐦)/P(𝐝)=L(𝐦)P(𝐦)/P(𝐝).
Here, P(𝐦|𝐝) is the posterior distribution of the input parameters 𝐦 given the data 𝐝, P(𝐝|𝐦) (also indicated as L(𝐦) and known as 'the likelihood') is the probability of observing the data 𝐝 given the input parameters 𝐦, while P(𝐦) and P(𝐝) are the prior distribution in the input parameter domain and the marginalized likelihood with respect to the input parameters (also known as evidence), respectively. In practical applications, Eq. (<ref>) is seldom used when solving inverse problems as the computation of the evidence is in most cases very expensive. However, since the numerator of the right-hand side of Eq. (<ref>), that is, L(𝐦)P(𝐦), is proportional to the posterior distribution, one can use Markov chain Monte Carlo (MCMC) methods to draw samples proportionally from P(𝐦|𝐝) without considering the evidence <cit.>.
For practical applications, computation of both P(𝐦) and L(𝐦) is often critical.
Evaluation of P(𝐦) can be expensive for high-dimensional problems. This is the case for tomography, where 𝐦 typically represents the slowness distribution at every point/pixel in space.
Moreover, computing L(𝐦) requires the solution of a forward problem, which can be extremely demanding in Bayesian scenarios as this evaluation needs to be repeated many times. In the following sections we discuss how both these problems can be approached by using a latent representation and surrogate modeling to evaluate P(𝐦) and L(𝐦), respectively.
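To make the last point concrete, a minimal random-walk Metropolis sketch (with a toy one-dimensional problem; the step size and names are illustrative) shows how samples are drawn proportionally to L(𝐦)P(𝐦) without ever computing the evidence:

```python
import numpy as np

def metropolis(log_likelihood, log_prior, m0, n_steps, step=0.1, rng=None):
    """Random-walk Metropolis sampler: draws samples proportionally to
    L(m)P(m) without ever evaluating the evidence P(d)."""
    rng = rng or np.random.default_rng(0)
    m = np.atleast_1d(np.asarray(m0, dtype=float))
    logp = log_likelihood(m) + log_prior(m)
    chain = []
    for _ in range(n_steps):
        prop = m + step * rng.standard_normal(m.shape)
        logp_prop = log_likelihood(prop) + log_prior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept/reject
            m, logp = prop, logp_prop
        chain.append(m.copy())
    return np.array(chain)

# toy problem: 1D Gaussian likelihood and Gaussian prior
chain = metropolis(lambda m: -0.5 * ((m[0] - 2.0) / 0.5) ** 2,
                   lambda m: -0.5 * m[0] ** 2,
                   m0=[0.0], n_steps=5000)
print(chain.mean(), chain.std())
```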
§ DIMENSIONALITY REDUCTION STRATEGIES FOR BAYESIAN INVERSION AND MODELING
Sets of parameters are particularly well suited for Bayesian inversion accelerated by surrogate modeling when they can easily encode the prior distribution and effectively simplify the underlying physics of the investigated problem.
<cit.> used variables defined in terms of principal components to (a) represent the prior distribution related to a random Gaussian field on a low-dimensional manifold and (b) implement an accurate surrogate to compute the forward problem. However, it is not generally granted that a single change of coordinates can achieve both (a) and (b).
§.§ Bayesian inversion in latent spaces
We first concentrate on recently introduced strategies to adequately represent the prior on a low-dimensional manifold. For representing the prior distribution, we can utilize manifold identification extracted from the TI using a VAE. This involves utilizing a VAE to characterize a latent space using a set of coordinates (here indicated as 𝐳) and a statistical distribution defined prior to the training. The VAE can also be used to transform such a latent space to the physical space through a deep learning decoder, denoted here as 𝒢_VAE. In simple terms, for a given random realization of the prior in the latent space, the decoder operation 𝒢_VAE(𝐳) produces an output in the physical space that adheres to the characteristics of the TI. The details of the VAE utilized in this study can be found in <cit.>.
The use of this new set of coordinates casts the inverse problem on the latent manifold as:
P(𝐳|𝐝)= P(𝐝|𝐳)P(𝐳)/P(𝐝).
While formally identical to Eq. (<ref>), Eq. (<ref>) involves significant advantages.
For this class of coordinates, not only is 𝐳 typically low-dimensional (consisting of at most a few tens of variables) but we can also design the corresponding statistical prior distribution P(𝐳) as desired. In this case, we set P(𝐳) during VAE training to be a multivariate standard Gaussian distribution.
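The practical consequence is that sampling the prior reduces to drawing standard Gaussian latent vectors and decoding them. The sketch below uses a hypothetical `decoder` stand-in for the trained 𝒢_VAE (a real decoder is a trained network producing channelized fields; here a random linear map merely produces binary two-facies output), with the two facies velocities of 0.06 and 0.08 m/ns used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 20                       # dimension of z used in this study

def sample_prior(n):
    """P(z) is a multivariate standard Gaussian by construction (VAE training)."""
    return rng.standard_normal((n, latent_dim))

def decoder(z):
    """Hypothetical stand-in for the trained VAE decoder G_VAE: maps latent
    vectors to gridded velocity fields. A random linear map plus threshold
    yields binary two-facies output (no channel structure, unlike a real VAE)."""
    W = rng.standard_normal((latent_dim, 40 * 40))
    field = (z @ W > 0).astype(float)                # two facies: 0 / 1
    return 0.06 + 0.02 * field.reshape(-1, 40, 40)   # 0.06 or 0.08 m/ns

z = sample_prior(5)
models = decoder(z)
print(models.shape, np.unique(models))
```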
§.§ Forward modeling within Bayesian inversion in latent spaces
For the sake of discussing modeling , we now rewrite the forward problem in Eq. (<ref>) using the new coordinates:
ℳ_VAE(𝐳) = 𝐝 + ϵ,
where ℳ_VAE=ℱ∘ 𝒢_VAE, ∘ stands for function composition, and we assume no error induced by the VAE dimensionality reduction.
The complexity and non-linearity of 𝒢_VAE imply that the forward operator ℳ_VAE will exhibit considerable irregularity, making it difficult to replace with a surrogate model.
Hence, when utilizing only a latent parametrization, two key prerequisites for successful and efficient MCMC, namely prior fidelity and the implementation of surrogate modeling, diverge from each other.
Consequently, we investigate alternative approaches for constructing an accurate surrogate that avoids utilizing the latent parametrization for the modeling aspect while retaining it for the prior representation. Building upon the insights presented in <cit.>, we proceed to explore modeling based on PCA.
Without any loss of generality, we consider a complete set of principal components for realizations of the deep generative network (implemented via 𝒢_PCA(ξ_full)= 𝒢_VAE(𝐳) = 𝐦) and rewrite Eq. (<ref>) as:
ℳ_PCA(ξ_full) = 𝐝 + ϵ,
where ℳ_PCA=ℱ∘ 𝒢_PCA.
We show in the following that the relatively simple relationship 𝒢_PCA(ξ_full)= 𝐦 makes it suitable for implementing a surrogate of ℳ_PCA, provided that the input and the model can be faithfully represented as operating on an effective M-dimensional truncated subset ξ of the new coordinates ξ_full, that is: 𝒢_PCA(ξ) ≈ 𝐦
and
ℳ_PCA(ξ) = 𝐝 + ϵ̂,
where ϵ̂ is a term including both observational noise and modeling errors related to the projection on the subset represented by ξ.
§.§ Polynomial chaos expansion modeling within Bayesian inversion in latent spaces
Forward models are typically implemented by potentially expensive schemes (e.g., Finite Element Methods (FEM), Finite-Difference Time-Domain (FDTD)) that solve physical equations to mimic the relationship between input and output data.
As discussed in section <ref>, the function a forward solver has to model depends on the set of coordinates used to represent the input.
For simplicity, we focus here on the derivation of a surrogate for ℳ_PCA(ξ), but identical formal derivations would apply to ℳ_VAE or ℱ, albeit with relevant caveats that will be discussed below.
A surrogate model ℳ̃_PCA is a function that seeks to emulate the behaviour of an expensive forward model at negligible computational cost per run:
ℳ̃_PCA(ξ) ≈ℳ_PCA(ξ).
Among the many available surrogate models, sparse adaptive polynomial chaos expansions are one of the most widely used due to their efficiency, flexibility and ease of deployment <cit.>.
Polynomial chaos expansions are a type of stochastic spectral expansions that can approximate forward operators in terms of linear combinations of orthonormal multivariate polynomials Ψ_α:
ℳ̃_PCA(ξ) = ∑_α∈𝒜 a_αΨ_α(ξ),
where M is the dimension of ξ and 𝒜 is a subset of ℕ^M implementing a truncation scheme to be set based on accuracy requirements and computational resources availability <cit.>.
For a review of more advanced basis truncation and construction schemes, the reader is redirected to <cit.>.
To calculate the set of expansion coefficients a_α,
sparse regression techniques, also known as compressive sensing, have been demonstrated to be highly efficient in the context of surrogate modeling <cit.>, and are adopted in this paper.
Note that the evaluation of the coefficients a_α is computationally unfeasible when the input domain is high-dimensional (the case for a surrogate ℱ̃(𝐦) of ℱ(𝐦)). Moreover, when the truncation pattern cannot fully account for the degree of non-linearity of the underlying model (the case for a surrogate ℳ̃_VAE(𝐳) of ℳ_VAE(𝐳)), the still unbiased PCE predictions are inevitably affected even if the input domain is low-dimensional.
In any case, if a_α is calculated from a finite data set, the surrogate forward modeling predictor can be evaluated at a negligible cost by direct computation of Eq. (<ref>) and its accuracy
estimated using a validation set or cross-validation techniques <cit.>.
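For illustration only, a small PCE of the kind just described can be sketched as follows. It uses a simple total-degree truncation with orthonormal Hermite polynomials and estimates the coefficients by sparse regression (scikit-learn's Lasso as a stand-in for the LARS-type solvers used in UQLab); the input dimension, degree and toy response are assumptions, not the settings of this study:

```python
import numpy as np
from itertools import product
from math import factorial
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lasso   # stand-in sparse solver (assumed available)

def multi_indices(dim, max_degree):
    """All multi-indices alpha in N^dim with total degree <= max_degree."""
    return [a for a in product(range(max_degree + 1), repeat=dim) if sum(a) <= max_degree]

def psi(xi, alpha):
    """Orthonormal multivariate Hermite polynomial Psi_alpha evaluated at xi (n, dim)."""
    out = np.ones(xi.shape[0])
    for d, deg in enumerate(alpha):
        coeffs = np.zeros(deg + 1); coeffs[deg] = 1.0
        out *= hermeval(xi[:, d], coeffs) / np.sqrt(factorial(deg))
    return out

def fit_pce(xi_train, y_train, max_degree=3, alpha_l1=1e-3):
    A = multi_indices(xi_train.shape[1], max_degree)
    Phi = np.column_stack([psi(xi_train, a) for a in A])
    reg = Lasso(alpha=alpha_l1, fit_intercept=False, max_iter=50000).fit(Phi, y_train)
    return A, reg.coef_

def eval_pce(xi, A, coef):
    return np.column_stack([psi(xi, a) for a in A]) @ coef

# toy example: 4 input PCs, smooth scalar response (e.g. one travel-time)
rng = np.random.default_rng(0)
xi = rng.standard_normal((900, 4))
f = lambda x: 1.0 + x[:, 0] - 0.5 * x[:, 1] * x[:, 2] + 0.1 * x[:, 3] ** 2
A, coef = fit_pce(xi, f(xi))
xi_val = rng.standard_normal((100, 4))
rmse = np.sqrt(np.mean((eval_pce(xi_val, A, coef) - f(xi_val)) ** 2))
print(f"validation rmse: {rmse:.3f}")
```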
In this manuscript, our focus is on PCE surrogates operating on latent variables (i.e., ℳ̃_VAE(𝐳)) and principal components (i.e., ℳ̃_PCA(ξ)). We demonstrate how utilizing PC-based parametrizations significantly enhances PCE accuracy and, consequently, improves MCMC performance. The main innovation of our manuscript thus lies in effectively differentiating the Bayesian and modeling domains, allowing them to complement each other's potentials and overcome their respective limitations.
In line with the consolidated literature, we account for the modeling error by considering in the likelihood function
a covariance operator C_D = C_d+C_Tapp, where C_d is the covariance matrix describing data uncertainty and C_Tapp
accounts for the modeling error.
The likelihood incorporating the modeling error is then expressed as
L(ξ)= ( 1/(2π) )^n/2 |C_D|^-1/2 exp[ -1/2 (ℳ̃_PCA(ξ) - 𝐝)^T C_D^-1 (ℳ̃_PCA(ξ) - 𝐝) ] .
where |C_D| is the determinant of the covariance matrix C_D <cit.>.
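A minimal numerical version of this likelihood (in log form, which is what MCMC implementations typically evaluate) can be sketched as follows; the toy covariance matrices below are assumptions, not those estimated in this study:

```python
import numpy as np

def log_likelihood(d_pred, d_obs, C_d, C_Tapp):
    """Gaussian log-likelihood with combined covariance C_D = C_d + C_Tapp,
    so that the surrogate modeling error is treated as additional correlated noise."""
    C_D = C_d + C_Tapp
    r = d_pred - d_obs
    _, logdet = np.linalg.slogdet(C_D)
    n = r.size
    return -0.5 * (n * np.log(2 * np.pi) + logdet + r @ np.linalg.solve(C_D, r))

# toy example: 144 travel-times, 1 ns^2 data variance, weak correlated modeling error
n = 144
C_d = np.eye(n) * 1.0
C_Tapp = 0.2 * np.exp(-np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / 10.0)
d_obs = np.random.default_rng(0).normal(size=n)
print(log_likelihood(d_obs + 0.5, d_obs, C_d, C_Tapp))
```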
§.§ Decoupling inversion and modeling domains in Metropolis-Hastings MCMC
The successful implementation of MCMC methods hinges on effectively addressing two fundamental challenges: precise characterization of priors and efficient modeling. As we have discussed in the preceding sections, relying on a single parametrization may prove inadequate in achieving both objectives. Therefore, we have introduced the innovative concept of potentially decoupling the inversion and modeling domains to tackle these challenges. Once a suitable prior has been defined and an accurate modeling strategy devised, in the scenario discussed here based on VAE and PC parametrizations, respectively, a Metropolis-Hastings MCMC algorithm can be used to sample the posterior distribution P(𝐳|𝐝) <cit.>. A sample of the posterior in the physical space, P(𝒢_VAE(𝐳)|𝐝), is then also available via mere application of 𝒢_VAE to draws from P(𝐳|𝐝).
The latent representation provided by the VAE is used to evaluate P(𝐳) and explore the posterior according to a Gaussian proposal distribution characterized by a diagonal covariance matrix λ𝕀, where λ can be tuned to achieve a proper acceptance rate <cit.>.
For each step in the MCMC process, the surrogate modeling operates on subsets of principal components associated with the physical distribution 𝒢_VAE(𝐳) proposed by the MCMC.
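One such evaluation can be sketched as below: the latent proposal is decoded, each sensitivity sub-domain is projected onto its local PCs, and the corresponding PCE is evaluated. All callables and arrays here are hypothetical stand-ins for the trained components described in the text:

```python
import numpy as np

def forward_local_pca_pce(z, decoder, masks, pc_bases, pc_means, pce_models):
    """Surrogate forward evaluation for a latent proposal z (Local-PCA-PCE style):
    decode to the physical field, project each sensitivity sub-domain onto its
    local PCs, and evaluate the corresponding PCE."""
    m = decoder(z).ravel()                       # G_VAE(z): gridded velocity field
    travel_times = []
    for mask, B, mu, pce in zip(masks, pc_bases, pc_means, pce_models):
        xi = B @ (m * mask - mu)                 # local PC coefficients
        travel_times.append(pce(xi))             # PCE prediction for this pair
    return np.array(travel_times)

# toy usage with random stand-ins (20 latent vars, 40x40 field, 3 source/receiver pairs)
rng = np.random.default_rng(0)
decoder = lambda z: 0.06 + 0.02 * (z @ rng.standard_normal((20, 1600)) > 0).astype(float)
masks = [rng.integers(0, 2, 1600).astype(float) for _ in range(3)]
pc_means = [np.zeros(1600) for _ in range(3)]
pc_bases = [rng.standard_normal((30, 1600)) / 40 for _ in range(3)]
pce_models = [lambda xi: 40.0 + 0.1 * xi.sum() for _ in range(3)]
print(forward_local_pca_pce(rng.standard_normal(20), decoder, masks,
                            pc_bases, pc_means, pce_models))
```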
§ APPLICATION TO GPR CROSSHOLE TOMOGRAPHY
In the previous sections, we briefly covered the basic principles of Bayesian methods and emphasized the significance of dimensionality reduction and surrogate models for their implementation. Now, we integrate these concepts to tackle GPR cross-hole travel-time tomography through MCMC.
Our focus is twofold: exploring the potential of a VAE architecture to parameterize complex channelized structures in the prior distribution, and investigating the challenges of leveraging PCE for accelerated modeling.
§.§ Input and output for MCMC
For the representation of the prior and posterior exploration in the MCMC process, we consider coordinates induced by a VAE deep generative network 𝒢_VAE <cit.>.
As for the relevant physical distributions, we target lossless media represented by binary images with two facies of homogeneous GPR velocity (6· 10^7 and 8· 10^7 m/s) resembling river channels
<cit.>.
As for the output, we consider arrival-times associated with the recording configuration displayed in Fig. <ref>(a-e) with 12 evenly spaced sources and receivers located in two vertically-oriented boreholes. The distance between the boreholes is 4.6 m, while the spacing between sources/receivers is 0.9 m, such that a total of 144 travel-times are collected. We employ both an eikonal and a 2D FDTD solver to simulate noise-free propagation of GPR waves <cit.>. For the FDTD code each source is characterized by a 100 MHz Blackman–Harris, while perfectly matched layers (PML) surrounding the propagation domain are used to prevent spurious reflections from contaminatinating the data, while appropriate space-time grids are employed to avoid instability and dispersion artefacts. Travel-times are picked automatically based on a threshold associated with the relative maximum amplitude of each source-receiver pair.
§.§ Parametrization of the input domain for PCE
The utilization of the VAE parametrization allows to faithfully represent complex priors, as demonstrated in Figure <ref>(a-e). However, the VAE parametrization per se does not alleviate the computational load associated with MCMC methods.
As a result, our objective is to develop an efficient and accurate PCE meta-model that can synergistically collaborate with the VAE parametrization.
We propose three different strategies to use PCE in conjunction with the VAE parametrization for prior-sampling, using the
complex channelized structures in <cit.> as an example. Note that while we specifically reference a particular case, the proposed strategies have broader applicability and can be utilized in various domains and scenarios. For each strategy we build a corresponding PCE to model travel-time arrivals using the Matlab Package UQlab <cit.>.
To offer a fair comparison, we employ the same training and validation datasets for all proposed schemes.
In the first strategy, referred to as VAE-PCE, the inputs for the PCE modeling are the 20-dimensional latent vectors 𝐳 that the VAE deep generative network 𝒢_VAE maps into the physical space, that is: 𝒢_VAE(𝐳) = 𝐦 <cit.>.
The second strategy, in the following Global-PCA-PCE, uses a similar approach to <cit.>, with inputs of the PCE modeling defined in terms of projections on prior-informed PCA components spanning the entire domain. The symbols ξ_GPCA, 𝒢_GPCA, and ℳ_GPCA are used in the following to refer to the input, the physical distribution and the model associated with the Global-PCA-PCE strategy, respectively.
More specifically, in the Global-PCA-PCE approach we randomly create a total of 1000 slowness realizations 𝒢_VAE(𝐳) from the prior and compute the corresponding principal components (see Fig. <ref>).
The inputs for PCE in the Global-PCA-PCE approach are then the projections of 𝒢_VAE(𝐳) onto a to-be-chosen number of such PCs.
Note that, while following <cit.> all PCA processes are defined in terms of slowness, for readability purposes in all of the figures of this manuscript we present velocity fields.
The effective dimensionality of the input with respect to ℳ_GPCA, that is, the number of principal components representing the input, is not a-priori defined. Following a similar approach to <cit.>, the effective dimensionality is here assessed by analysing the convergence to the reference solution in the output domain within the noise level as a function of the number of principal components.
In Fig. <ref>(a) and (e), two velocity distributions are shown next to the approximate representations (Fig. <ref>(b-d) and (f)-(h)) obtained by projecting them on 30, 60 and 90 principal components, respectively. As expected, the reconstruction quality improves as more principal components are included.
To quantify the faithfulness of the various reduced parametrizations in terms of the output, we consider 100 realizations of the generative model, and compute the resulting histograms of the travel-time residuals using the reference forward solver. The root-mean-square errors (in the following, rmse) of the misfit between the data associated with the original distribution and its projections on 30, 60 and 90 principal components, shown in Fig. <ref>(i)-(k), are 1.60, 0.85 and 0.55 ns, respectively, which are to be compared to the expected level of GPR data noise of 1 ns for 100 MHz data <cit.>. The number of principal components required to approximate the forward solver below the expected noise level (i.e., 90 PCs) is larger than discussed in <cit.> (i.e., 50 PCs). Building a PCE on such a large basis can be challenging in terms of computational requirements and efficiency, and could lead to poor accuracy if a small training set is employed. To address this, one approach is to either reduce the number of components, which introduces larger modeling errors, or explore alternative parameterizations that offer improved computational efficiency and accuracy. In this study, the Global-PCA-PCE approach utilizes 60 components, while an alternative strategy is developed based on physical considerations to further enhance the modeling process.
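The convergence check just described can be sketched as follows; the forward operator here is a stand-in linear map (the real check uses the reference travel-time solver) and the 1 ns threshold is the noise level quoted above:

```python
import numpy as np

def rmse_vs_n_components(realizations, forward, mean, components, n_list):
    """For each truncation level k, project the fields onto the first k PCs,
    reconstruct, and compare forward responses against those of the originals."""
    d_ref = np.array([forward(m) for m in realizations])
    out = {}
    for k in n_list:
        B = components[:k]
        recon = mean + ((realizations - mean) @ B.T) @ B
        d_k = np.array([forward(m) for m in recon])
        out[k] = float(np.sqrt(np.mean((d_k - d_ref) ** 2)))
    return out

# toy example: linear 'solver' and data-driven PCs; accept k once rmse < noise level (~1 ns)
rng = np.random.default_rng(0)
fields = rng.random((100, 400))
mean = fields.mean(axis=0)
_, _, Vt = np.linalg.svd(fields - mean, full_matrices=False)
A = rng.standard_normal((144, 400)) / 400
forward = lambda m: A @ m
print(rmse_vs_n_components(fields, forward, mean, Vt, [30, 60, 90]))
```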
A final parametrization can be derived by considering the forward problem's specific characteristics, which are not taken into account in the aforementioned definition of PCs. In fact, the PCs in the Global-PCA-PCE approach refer to the input field in its entirety. However, the actual arrival-time for a given source/receiver combination depends only on a sub-domain of the entire slowness distribution. This leads us to suggest a local approach, in the following referred to as Local-PCA-PCE. Instead of using principal components describing the entire slowness field, we aim to employ output-driven local principal components that characterize only the sub-domains impacting the physics of the problem <cit.>.
We then expect that fewer local PCs are needed than the global ones to achieve the desired input/output accuracy.
In practice, the construction of local PCs involves utilizing fat-ray sensitivity kernels, which capture the sensitivity of arrival-times to small perturbations in the subsurface model, thus providing valuable insights into the regions that have the most significant impact on the observed data.
For a given source/receiver pair, the corresponding sensitivity kernel depends on the slowness field itself, with patterns that can vary significantly (see Fig. <ref>(a)-(j)). The sought local decomposition needs to properly represent any possible slowness field within the prior, thus it is reasonable to define it based on a representative sample of input realizations. To reduce the overall PCE computational cost it is also convenient to limit as much as possible the number of used output-driven decompositions. To achieve these goals, we assume that the prior model is stationary with respect to translation. Instead of focusing on each specific source/receiver pair (total number of decompositions in our scenario: 144), we can then consider source/receiver altitude angles (total number of decompositions given our acquisition geometry: 23). We then use a total of 1000 slowness realizations 𝒢_VAE(𝐳) from the prior and build the corresponding fat-ray sensitivity kernels using the reference eikonal solver for each of the 23 possible angles <cit.>.
For any given angle,
we consider a cumulative kernel consisting of the sum of the absolute values of each kernel (green areas in Fig. <ref>(k)-(t)). Such a cumulative kernel covers an area larger than each individual kernel but is still considerably smaller than the entire slowness field. For any possible additional input model, the corresponding sensitivity kernel is then very likely geometrically included in the area covered by the cumulative kernel (see Fig. <ref>(k)-(t)). Based on this insight, we define principal components spanning only the area covered by such cumulative kernels or relevant parts thereof (e.g., a threshold can be taken into consideration to reduce the size of these sub-domains). For the practical definition of the components, the cumulative kernels are binarised, with values below the threshold set to 0 and values equal to or larger than the threshold set to 1. We then multiply point by point the slowness distributions with the cumulative kernels, and consider the principal components of such products.
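For a single altitude angle, this construction can be sketched as below; the array shapes and the synthetic kernels are illustrative stand-ins for the fat-ray sensitivity kernels computed with the eikonal solver:

```python
import numpy as np

def local_pcs(slowness_fields, kernels, threshold=0.0, n_components=30):
    """Build sensitivity-confined PCs for one source/receiver altitude angle.

    slowness_fields: (n_samples, N) flattened prior realizations
    kernels:         (n_samples, N) fat-ray sensitivity kernels for this angle
    The cumulative kernel (sum of |kernels|) is binarised with `threshold`,
    the fields are multiplied point-by-point by this mask, and PCA is run on
    the masked products."""
    cumulative = np.abs(kernels).sum(axis=0)
    mask = (cumulative >= threshold).astype(float)     # 0/1 cumulative kernel
    masked = slowness_fields * mask                    # confine to the sub-domain
    mean = masked.mean(axis=0)
    _, _, Vt = np.linalg.svd(masked - mean, full_matrices=False)
    return mask, mean, Vt[:n_components]

# toy example: 1000 realizations on a 40x40 grid, synthetic sparse-support kernels
rng = np.random.default_rng(0)
fields = rng.random((1000, 1600))
kernels = rng.random((1000, 1600)) * (rng.random(1600) > 0.7)
mask, mean, comps = local_pcs(fields, kernels, threshold=1e-6, n_components=30)
print(mask.sum(), comps.shape)
```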
In Fig. <ref>(a)-(e) and (f)-(j), for given source/receiver pairs, the first five principal components are shown. Note that the pattern variability is confined within the cumulative kernels, while in the complementary areas the values are 0. Note also that compared to the first five principal components in Fig. <ref>, higher resolution details can be identified in the sensitivity-confined modes. Given the same number of PCs used, we can then expect the input to be better represented in the physically relevant sub-domain when the Local-PCA-PCE rather than the Global-PCA-PCE approach is followed.
For all source/receiver pairs corresponding to the same altitude angle, the same kernels and principal components are used, provided they are shifted to cover the appropriate geometry.
In Fig. <ref>(a)-(g) the two slowness distributions from Fig. <ref> are shown next to the approximate representations obtained by projecting them on 30 local principal components defined specifically for different source-receiver angles. In the areas complementary to the sensitivity kernels, the speed is set to 0.07 m/ns.
Input reconstructions are remarkably improved compared with using the same number of PCs of the entire slowness field (compare Fig. <ref>(a)-(g) and (h)-(l) to Fig. <ref>(b) and (f)). More interestingly, the modeling error obtained by using just 30 sensitivity-based PCs is lower than what was previously provided by 90 standard components (i.e., rms ≈ 0.45 ns).
These considerations underscore the significance of introducing a new category of PCs that are driven by the forward model. By incorporating these tailored PCs, we can attain enhanced output fidelity when utilizing truncated representations of the input. This enhanced fidelity proves particularly advantageous for the implementation of PCE, allowing for more precise and efficient modeling of intricate systems. Consequently, this approach holds substantial promise in achieving superior accuracy and computational efficiency in PCE-based analyses. The symbols ξ_LPCA, 𝒢_LPCA, and ℳ_LPCA are used to refer to the input, the physical distribution and the model associated with the Local-PCA-PCE strategy, respectively.
We have introduced three different parametrizations to be used for PCE. We consider coordinates inherited by the VAE, and principal components derived by considering either entire slowness fields or sensitivity-based sub-domains. We refer to these three parametrizations in what follows as VAE-PCE, Global-PCA-PCE and Local-PCA-PCE, respectively.
§.§ PCE performance
We here analyze the PCE performance of the different parametrizations for surrogate modeling introduced in Section <ref>. In agreement with <cit.>, we consider for each surrogate a training set of 900 samples and a polynomial degree p of five, for all three schemes when applied to eikonal data.
When the VAE parametrization is used as input, the PCE performance is rather poor, with an rmse of 2.01 ns, which is well beyond the noise level and consequently considered unsatisfactory (see Fig. <ref>).
The poor performance is due to the highly non-linear map ℳ_VAE. In such a scenario, PCE does not provide accurate results even if the physical input of the validation set is exactly defined by the deep generative network 𝒢_VAE(𝐳)= 𝐦. In this scheme, note that the evaluation of 𝒢_VAE(𝐳) is not required to compute the corresponding PCE.
Despite the partial reconstruction of the input (e.g., 𝒢_G/LPCA(ξ) ≈ 𝐦) provided by the PCA approaches, the corresponding parametrizations provide good results when used to build PCE surrogates, with the Global-PCA-PCE approach being outperformed by the Local-PCA-PCE scheme in terms of accuracy (rmse of 1.31 and 0.68 ns, respectively, see Fig. <ref>(b) and (c) for the corresponding histograms). In both cases, the PCE operates on more variables (i.e., 60 and 30 for the Global-PCA-PCE and Local-PCA-PCE parametrizations, respectively, versus 20 for the VAE-PCE scheme).
Moreover, the inputs for the Global-PCA-PCE and Local-PCA-PCE schemes are projections onto PCs of images that require the evaluation of 𝒢_VAE(𝐳). As such, evaluation of the corresponding PCEs is computationally more expensive for the PCA-based approaches than for the VAE-PCE case.
In the Local-PCA-PCE approach, for each of the 23 angles considered, training involves randomly chosen source/receiver-pair data associated with identical altitude differences, while the final rmse is computed on the standard 144 travel-time gathers.
For the Local-PCA-PCE scheme we also consider training and validation using FDTD data in addition to the eikonal data discussed above. Results are similar and still well below the noise, with an rmse of 0.65 ns (the corresponding histogram is displayed in Fig. <ref>(d)). All PCE results are unbiased and closely resemble Gaussian distributions.
Depending on the parameterization chosen, PCEs approximate eikonal and FDTD solvers to different degrees.
Figure <ref> represents the corresponding covariance matrices accounting for the modeling error of each surrogate model.
A graphical summary of the proposed parameterizations of the PCE input variables introduced in Section <ref> is depicted in Fig. <ref>.
Despite excellent input representation, the PCE performance associated with the VAE parametrization is poor (left column), regardless of the maximum degree used (here we consider a maximum degree of five, as indicated by α∈ℕ_5^20). The family of orthonormal polynomials, Ψ, is entirely determined by the statistical properties of 𝐳 and does not depend on the source-receiver pair, whereas the coefficients a^s,r_α obviously do. Things improve when the Global-PCA-PCE strategy is used. Also in this case we consider a maximum degree of five (indicated by α∈ℕ_5^60 since we have 60 parameters). Once again, the family of orthonormal polynomials Ψ^G (which is constructed on the global principal components) does not depend on the source-receiver pair, while the (different) coefficients a^s,r_α do.
By far, the best results are achieved when the Local-PCA-PCE strategy is chosen. In this case α∈ℕ_5^30; however, the family of orthonormal polynomials, Ψ^θ, depends on the angle between the source-receiver pair (upon which the local principal components are built), with multiple source/receiver pairs relying on the same polynomial bases, while the coefficients a^s,r_α depend strictly on the source-receiver pair.
We now discuss the computational burden of each strategy when running on a workstation equipped with 16GB DDR4 of RAM and powered by a 3.5GHz Quad-Core processor running Matlab and Python on Linux. We emphasize that our goal with the present manuscript is to propose novel methods to combine VAE and PCE rather than to offer optimized implementations.
There are up to three relevant computational steps in the execution of a forward run for the VAE-PCE, Global-PCA-PCE, Local-PCA-PCE, eikonal and FDTD simulations, namely the evaluation of the physical input, 𝒢_VAE(𝐳), its decomposition on either global or local principal components and the actual evaluation of the forward model. Not all methods require each of these steps. The VAE-PCE model, for example, is not a function of 𝒢_VAE(𝐳) but rather depends on 𝐳 only. Evaluation of the VAE-PCE model is extremely fast both when involving one or 35 (as in the MCMC inversion discussed below, based on 35 chains) simultaneous model runs, taking on average ≈ 0.06 and ≈ 0.08s, respectively.
Evaluation of the Python-based decoder 𝒢_VAE(𝐳), required for all forward models except the VAE-PCE, is actually the bottleneck of the Matlab algorithms used here, requiring ≈ 1.35 and ≈ 1.43s, respectively, when operating on one or 35 inputs when considering the eikonal solver. However, this cost could be decreased
with either an ad hoc implementation of the decoder or PCE in the same environment or with more efficient calls of the Matlab/Python scripts in our codes.
When in its native environment, evaluation of 𝒢_VAE(𝐳) is actually very fast, taking only ≈ 0.005 and ≈ 0.08s when operating on one or 35 inputs, respectively. Still, note that this cost is overall negligible even in our non-optimized setting when considering expensive physics-based forward solvers such as FDTD.
Only the Global-PCA-PCE and Local-PCA-PCE strategies require PCA decompositions.
The Global-PCA-PCE approach is faster, requiring only up to ≈ 0.002s and ≈ 0.05s when applied to one and 35 input elements, respectively, while the Local-PCA-PCE method is slower, taking up to ≈ 0.06s and ≈ 0.23s in the same situation.
For the Global-PCA-PCE method the cost of a single forward run is ≈ 0.52s, which is significantly more than for VAE-PCE. The difference between these two PCE strategies can be attributed to the significantly larger number of input parameters of the Global-PCA-PCE with respect to the VAE-PCE scheme (i.e., 60 vs 20). Note that the PCE model evaluations are vectorized and, therefore, the cost is almost the same when applied to 35 inputs (≈ 0.57s).
Moreover, the computational cost of the Global-PCA-PCE method could be reduced by applying a PCA decomposition of the output, akin to what is proposed in <cit.>.
Despite involving fewer variables than the Global-PCA-PCE approach, the Local-PCA-PCE method is slightly more computationally demanding, with a cost of ≈ 0.64s and ≈ 0.65s when operating on one or 35 inputs, respectively.
The increase in cost compared to the Global-PCA-PCE method depends on the fact that each Local-PCA-PCE has its own family of polynomials Ψ^θ (see <ref>).
Differently from PCE methods, the cost of the reference eikonal solver is basically a linear function of the number of input distributions it operates on. A single run requires ≈ 0.05s, while given 35 velocity distributions the cost increases up to ≈ 1.67s.
As such, its cost are either significantly smaller or slightly larger than what required by the Global/Local-PCA-PCE approaches. Finally, the cost required by the reference FDTD code is ≈ 120s and ≈ 4200s if operating on either one or 35 velocity distributions, which is orders of magnitude longer than for the eikonal or PCE models. These results are summarized in Table <ref>, where we estimate the performance of an ideally-optimized Local-PCA-PCE method benefitting from (a) evaluating 𝒢_VAE() in its native environment and (b) using a single family of polynomials Ψ^θ for all angles. In numerical results not presented herein, we find that choosing any one of the Ψ^θ families for all models provides nearly identical fidelity to what is achieved by using specifically tailored polynomials for each angle at the considerably smaller cost of ≈ 0.06 and 0.16s when applied to either one or 35 input, respectively. While such a result cannot be generalized, it is always possible to test the corresponding PCEs accuracy with a representative evaluation set. The option of relying on a single family of polynomials for the Local-PCA-PCE method is certainly to be taken into account when computationally-optimising algorithms.
§.§ Inversion results
We now explore the performance of the different parametrizations used for PCE-based surrogate modeling, namely VAE-PCE, Global-PCA-PCE and Local-PCA-PCE, when performing probabilistic MCMC inversion.
The inversions were carried out using the UQLab Matlab-based framework <cit.>.
We invert for the distribution shown in Fig. <ref>, which is used to generate a total of 144 travel-times using the reference eikonal and FDTD solvers. Note that this field is not used to train the PCEs. Uncorrelated Gaussian noise characterized by σ^2=1 ns^2 was added to the data used for inversion.
We use a Metropolis-Hastings algorithm, and run 35 Markov chains in parallel for 4×10^5 iterations. A burn-in period defined according to the Geweke method prevents over-sampling regions around starting points, while the scaling factor of the proposal distribution is tuned so that an acceptance rate of about 30% is achieved for each experiment <cit.>.
Finally, outlier chains with respect to the Inter Quartile-Range statistics discussed in <cit.> are considered
aberrant trajectories and are ignored in the MCMC analysis.
We first present the results for training data generated by an eikonal solver. We compare VAE-PCE, Global-PCA-PCE and Local-PCA-PCE inversion results to those achieved by employing the eikonal solver, which represent the reference solution since the full physics solver is used in the entire MCMC process.
Inversion results in terms of mean and standard deviations incorporating the model error (i.e., using the PCE-derived C_Tapp
in Eq. (<ref>))
are shown in Fig. <ref>(a-e).
The mean of the posterior obtained employing the VAE-PCE poorly resembles the reference velocity field, with relevant topological differences between the two (compare Fig. <ref> to Fig. <ref>(a)). Note that the misfit between the observed data used for inversion and the VAE-PCE evaluated on the reference input is particularly large for this input (i.e., 3.1 ns). This poor performance is also obtained when considering other test models (see Appendix A).
The mean of the posterior provided by the Global-PCA-PCE approach shares many features with the reference distribution, but the two images visibly differ in the profile of the lower and upper channelized structures.
The similarity between the posterior mean and the true distribution increases significantly when the Local-PCA-PCE is used (compare Fig. <ref> to Fig. <ref>(c)). These results also show close proximity with the posterior mean solution obtained by the eikonal solver (see Fig. <ref>(d)), that is, without any surrogate modeling.
Also, when the FDTD Local-PCA-PCE is employed, an almost identical solution to what is achieved using the Local-PCA-PCE is obtained (see Fig. <ref>(e)).
For this alternative data set, the use of the FDTD solver in the inversion algorithm would be extremely expensive and is not considered here.
The quality of the solution offered by the surrogate on FDTD data can be heuristically appreciated by noting its similarity to results obtained by the Local-PCA-PCE based on eikonal data, which in turn produces results close to those of the eikonal solver on eikonal data.
Although not strictly consequential, it is then to be expected that the results offered by the surrogate-based on FD data would also be similar to those that would have been obtained if using the FDTD solver on FDTD data.
High standard deviations values are found distributed in wide domains of the image when VAE-PCE is used (see Fig. <ref>f). In contrast, when Global-PCA-PCE (Fig. <ref>g), Local-PCA-PCE (Fig. <ref>h) and eikonal (Fig. <ref>i) solvers are used, high standard deviation values are found mainly only along channel boundaries.
Convergence is assessed using the potential-scale reduction factor R̂, which compares the variance of the individual Markov chains with the variance of all the Markov chains merged together <cit.>, as calculated from the second half of the chains. Convergence is usually assumed if R̂<1.2 for all parameters. In our experiments, full convergence for all of the 20 parameters is achieved when the VAE-PCE, the Global-PCA-PCE and the Local-PCA-PCE approaches are used. Six parameters do not converge when the eikonal solver is employed, but their R̂ values are nevertheless close to 1.2 (see Fig. <ref>).
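A minimal implementation of the R̂ diagnostic for a single parameter (using the second half of each chain, as above) could look as follows; the toy chains are synthetic placeholders:

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor for one parameter.
    chains: (n_chains, n_iterations) array; only the second half of each
    chain is used. Values below ~1.2 are usually taken to indicate convergence."""
    x = chains[:, chains.shape[1] // 2:]
    m, n = x.shape
    chain_means = x.mean(axis=1)
    B = n * chain_means.var(ddof=1)        # between-chain variance
    W = x.var(axis=1, ddof=1).mean()       # within-chain variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# toy example: 35 chains sampling the same Gaussian -> R_hat close to 1
rng = np.random.default_rng(0)
chains = rng.normal(size=(35, 4000))
print(round(float(gelman_rubin(chains)), 3))
```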
Further quantitative assessments can be achieved by comparing the reference input and the corresponding inversion solutions in terms of input domain root-mean-square error (in the following, RMSE), structural similarity (in the following, SSIM) and rmse in the output domain <cit.>.
Among these metrics, SSIM specifically evaluates the structural similarity between images, emphasizing their underlying patterns and details. It assigns a value between 0 and 1, with 0 indicating a notable dissimilarity and 1 denoting a substantial level of similarity.
Again, we see that the VAE-PCE performs poorly, with a low SSIM (0.30) value and large input and output root-mean-square error (8.01· 10^6 m/s and 3.49 ns, respectively). Better results are provided by the Global-PCA-PCE strategy (SSIM, RMSE and rmse are 0.54 and 5.38· 10^6 m/s and 1.49 ns, respectively). The Local-PCA-PCE scheme results are the closest to the reference solutions achieved using the eikonal solver (the corresponding SSIM, RMSE and rmse are 0.73 and 0.87, 2.67· 10^6 m/s and 1.57· 10^6 m/s and 1.15 and 1.01 ns, respectively). Also considering these additional metrics, the FDTD Local-PCA-PCE performs similarly to the Local-PCA-PCE strategy (SSIM: 0.71, RMSE: 3.06· 10^6).
The results are summarized in Table <ref>.
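These metrics can be computed with standard tools; the sketch below assumes scikit-image is available for SSIM and uses synthetic fields and travel-times as placeholders for the inversion outputs:

```python
import numpy as np
from skimage.metrics import structural_similarity  # scikit-image assumed available

def posterior_scores(reference, posterior_mean, d_obs, d_pred):
    """Input-domain RMSE and SSIM plus output-domain rmse."""
    rmse_in = float(np.sqrt(np.mean((posterior_mean - reference) ** 2)))
    ssim = structural_similarity(reference, posterior_mean,
                                 data_range=reference.max() - reference.min())
    rmse_out = float(np.sqrt(np.mean((d_pred - d_obs) ** 2)))
    return {"RMSE": rmse_in, "SSIM": float(ssim), "rmse": rmse_out}

# toy example with synthetic two-facies velocity fields (m/s) and travel-times (ns)
rng = np.random.default_rng(0)
ref = rng.choice([6e7, 8e7], size=(64, 64))
est = ref + rng.normal(scale=5e6, size=ref.shape)
print(posterior_scores(ref, est, rng.normal(size=144), rng.normal(size=144)))
```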
We also consider histograms of SSIM values in the posterior distributions provided by the five inversion schemes considered above. Fig. <ref> shows how the VAE-PCE posterior has low similarity with the reference model, with the maximum SSIM value being below 0.5. Closer proximity is found among samples obtained using the Global-PCA-PCE approach, a trend that is further improved when considering the Local-PCA-PCE scheme, which shows some overlap with the results of the reference eikonal inversion. Note again that the FDTD Local-PCA-PCE algorithm is similar to the Local-PCA-PCE scheme, and thus the eikonal-based strategy, in this analysis as well.
This can be further appreciated by looking at random posterior realizations for each of the inversion strategies discussed above (see Fig. <ref>).
§ DISCUSSION
Deep generative networks, like VAEs, offer a robust framework for characterizing complex priors, enabling the description of intricate input distributions. However, proper characterization of prior distributions alone does not ensure the efficient estimation of posterior distributions when using MCMC methods. In such situations, the use of surrogate models becomes beneficial or even essential for evaluating likelihoods effectively.
Surrogate modeling with PCE has become widespread in many disciplines. The massive decrease of the computational costs associated with PCE is achieved by approximating demanding computational forward models with simple and easy-to-evaluate functions. A key requirement to allow implementation of PCE is that the number of input variables describing the computational model is relatively small (i.e., up to a few tens) and that the target model can be approximated by truncated series of low-degree multivariate polynomials.
The number of coefficients defining the PCE model grows polynomially in both the size of the input and the maximum degree of the expansion. When the reference full-physics model response is highly nonlinear in its input parameters, the problem is typically non-tractable <cit.>.
Since the high-fidelity mapping of complex prior distributions provided by deep generative networks is based on highly non-linear relationships between latent variables and physical domains, replicating the performance of such networks and/or composite functions thereof (e.g., ℳ_VAE=ℱ∘ G_VAE in Eq. <ref>) using PCE is problematic.
To circumvent this challenge, we have explored two PCA-based decompositions that facilitated the straightforward implementation of PCE. One decomposition was designed to encompass the entire input domain, while the other specifically focused on subdomains of particular physical relevance within it. While the latter concept is investigated here in the context of travel-time tomography, the integration of problem-related PCs operating synergistically with latent parametrizations has the potential for broader applications.
In the context of tomography-related problems, other potentially simpler parameterisations could have been considered, ideally not based on the analysis of a large sample of realisations of deep generative models but on properties of the input (including, for example, angle and mean/maximum/minimum velocity along the line connecting source and receiver or a sub-domain similar to the cumulative sensitivity kernels used here), which will be the subject of future research.
Whatever the choice of input coordinates, for example, based on PCA or local properties of the input, the determining criterion for evaluating the quality of the corresponding PCE should always be performance on a representative validation set.
In case of PCA, the lower bound of prediction misfit rmse can be a priori estimated by comparing the accuracy of the reference model acting on the full and the compressed domains, that is, ℳ_G/LPCA(ξ_full) and ℳ_G/LPCA(ξ). In our case, the lower bound for the Global-PCA-PCE approach operating with 60 components is 0.85 ns. Using the Local-PCA-PCE scheme with 30 components only, the rmse drops to 0.55 ns.
However, the accuracy of the corresponding Global/Local-PCA-PCE is worse, with rmse of 1.31 and 0.67 ns, respectively, mainly due to the small size of the training set. Note that while the lower bound of PCA decreases when more PCs are taken into account, the corresponding accuracy of PCE is limited by the size of the training set. Increasing the number of components can actually worsen PCE performance if the training set is not adequate to determine the polynomial coefficients, which, as mentioned, grow significantly with input size. In our case, using 90 components would imply an rmse of 1.39 ns, which is worse than what was obtained by the 60 components PCE. Note that while for the VAE-PCE method the parametrization is inherited by the latent space structure, for the Global/Local-PCA-PCE schemes the number of variables/PCs has to be chosen based on accuracy and computational burden analysis, with the computational cost increasing with the size of the input. In this study using 30 components for both the Global-PCA-PCE and the Local-PCA-PCE strategies would have reduced the accuracy of the Global-PCA-PCE method. On the other hand, using 60 components for the Local-PCA-PCE scheme would have decreased the computational efficiency.
Both the VAE-PCE and Global-PCA-PCE consist of 144 (i.e., the total number of source/receiver combinations) different PCE models operating, for each travel-time simulation, on an identical input, that is, the latent variables or the 60 PCs characterizing the entire physical domain.
On the other hand, the Local-PCA-PCE scheme consists of 23 (i.e., the total number of source/receiver altitude angles) different models operating, for each travel-time simulation, on 144 different inputs, that is, the 30 PCs characterizing each local sub-domain.
Since each of the 23 models operates on specific local PCs, the corresponding families of orthonormal polynomials Ψ^θ are different. This is in contrast with the Global-PCA-PCE method, for which each model operates via a single family of polynomials, namely
Ψ^G.
Thus, the Local-PCA-PCE scheme is computationally more demanding than the Global-PCA-PCE (see Table <ref>). However, the use of a single family of polynomials can also be considered for the Local-PCA-PCE method, resulting in shorter run time.
When considering computational performance, an optimal implementation of 𝒢_VAE(𝐳) should also be sought.
In this study, to determine the minimum number of PCs for constructing an accurate PCE, we assess the lower bound of output prediction misfit rmse as a function of the number of PCs used. We project the input onto subsets of PCs, typically ranging in the tens. This process generates non-binary images, which are then utilized to compute the output using the reference forward modeling.
Alternatively, we could consider re-binarizing the reconstructed images as done in <cit.>. This approach would bring the projected input back into the prior, but this property is not necessarily relevant for the determination of PCE accuracy. However, irrespective of the chosen reconstruction algorithm, the Local approach maintains a significant advantage over the Global method. When using an equal number of components, Local PCs, in fact, consistently yield superior approximations of the relevant input compared to Global PCs.
We have seen that MCMC inversion can be efficiently implemented once a latent parametrization has been found to reduce the effective dimensionality of the input domain and high-fidelity PCEs based on PCA decompositions have been trained.
Relying on PCE rather than advanced deep learning methods
can be advantageous in terms of ease of implementation, as potentially complex training of a neural network is not needed.
Many effective sampling methods, such as Adaptive Metropolis, Hamiltonian Monte Carlo, or the Affine Invariant Ensemble Algorithm <cit.> could be easily used in our workflow, but in the current implementation we have used a standard Metropolis-Hastings sampling algorithm.
Even if we considered borehole GPR applications, adaptation of the relatively effective Global-PCA-PCE strategy presented here could be employed for other imaging problems such as active or passive seismic tomography at different scales <cit.>. On the other hand, implementation of the Local-PCA-PCE schemes would depend on the properties of the corresponding sensitivity kernels, which would require more careful evaluation and problem-specific design.
§ CONCLUSIONS
Low-dimensional latent variables associated with deep generative networks, such as VAE, optimally conform to complex priors, and provide an ideal setting to explore posterior model distributions using MCMC strategies.
MCMC methods can also benefit greatly from surrogate modeling based on PCE, provided the forward model can be approximated by low-degree multivariate polynomials.
Such latent variable models tend to have a highly non-linear relation to data, and are, thus, poorly approximated by low-degree PCEs. As such, performing PCE-accelerated MCMC inversion based on a latent parametrization for both inversion and surrogate modeling leads to large posterior uncertainty due to the need to account for modeling errors in the likelihood function.
Indeed, in the context of GPR travel-time tomography, PCE based on VAE latent variables results in modeling error well beyond noise level.
By separating the parametrization used for inversion and the one for surrogate modeling, we can circumvent this problem and perform MCMC in a latent space while modeling is approximated by PCEs operating on globally or locally-defined principal components.
We find that both globally- and locally- defined PCEs largely outperform surrogate modeling based on VAE parametrizations.
For the channelized structures of interest, modeling errors are comparable to the typical observational errors when PCE are based on globally defined principal components and significantly better performance can be achieved when locally defined principal components are taken into account, with errors typically well below the noise level. Generally speaking, using PCE significantly reduces the computational burden of MCMC, but it can be successfully employed to perform non-linear MCMC inversion only if the corresponding modeling error is not excessively large.
In this manuscript we have shown how PCE based on VAE parametrizations performs poorly in MCMC inversion, whereas PCE based on globally and locally-defined principal components produce results comparable or close to those obtained using full-physics forward solvers.
The methods presented herein are extendable to other problems involving wave-based physics of similar complexity.
§ ACKNOWLEDGMENTS
We acknowledge the feedback of Prof. Andrew Valentine (Durham University) and Dr. Robin Thibaut (Ghent University) in the preparation of this preprint.
Niklas Linde, Macarena Amaya and Shiran Levy acknowledge support by the Swiss National Science Foundation (project number: 184574).
§ DATA AVAILABILITY
The data underlying this article are available upon request to the corresponding author.
§ APPENDIX: OVERVIEW OF THE LIMITATIONS OF THE VAE-PCE APPROACH
For the configuration considered in this manuscript, PCEs based on VAE parameters provide poor accuracy in predicting travel-times. When calculated on a representative validation set, an aggregate rmse of 2.01 ns is observed for the misfit between reference and predicted data.
For the velocity distribution in Fig. <ref>, the travel-time prediction is particularly poor, with an rmse of 3.1 ns.
In Fig. <ref>, we consider six additional reference velocity fields and the corresponding posterior mean images for MCMC inversion when using VAE-PCE surrogate modeling.
In some cases the posterior mean resembles the reference velocity field well (compare <ref>(a) to <ref>(g), or <ref>(f) to <ref>(l)). However, large differences can arise between the reference and the VAE-PCE posterior mean (e.g., compare <ref>(b) to <ref>(h), or <ref>(e) to <ref>(k)). Even though the corresponding modeling error is accounted for in the inversion, implying that the posterior mean models should be unbiased, we find that the modeling error has a severe impact by increasing the posterior model uncertainty.
|
http://arxiv.org/abs/2307.07371v1 | 20230714142529 | Two-Way Quantum Time Transfer: A Method for Daytime Space-Earth Links | [
"Randy Lafler",
"Mark L. Eickhoff",
"Scott C. Newey",
"Yamil Nieves Gonzalez",
"Kurt E. Stoltenburg",
"J. Frank Camacho",
"Mark A. Harris",
"Denis W. Oesch",
"R. Nicholas Lanning"
] | quant-ph | [
"quant-ph"
] |
Air Force Research Laboratory, Directed Energy Directorate, Kirtland AFB, NM, United States
[email protected]
DISTRIBUTION A: Approved for public release; distribution is unlimited.
Public Affairs release approval AFRL-2023-3154
The Boeing Company, Albuquerque NM, United States
The Boeing Company, Albuquerque NM, United States
The Boeing Company, Albuquerque NM, United States
The Boeing Company, Albuquerque NM, United States
Leidos, Albuquerque NM, United States
Leidos, Albuquerque NM, United States
Leidos, Albuquerque NM, United States
Air Force Research Laboratory, Directed Energy Directorate, Kirtland AFB, NM, United States
Remote clock synchronization is crucial for many classical and quantum network applications.
Current state-of-the-art remote clock synchronization techniques achieve femtosecond-scale clock stability utilizing frequency combs, which are supplementary to quantum-networking hardware.
Demonstrating an alternative, we synchronize two remote clocks across our freespace testbed using a method called two-way quantum time transfer (QTT).
In one second we reach picosecond-scale timing precision under very lossy and noisy channel conditions representative of daytime space-Earth links with commercial off-the-shelf quantum-photon sources and detection equipment.
This work demonstrates how QTT is potentially relevant for daytime space-Earth quantum networking and/or providing high-precision secure timing in GPS-denied environments.
Two-Way Quantum Time Transfer: A Method for Daytime Space-Earth Links
R. Nicholas Lanning
August 12, 2023
=====================================================================
Precise synchronization of remote clocks is important for position, navigation, and timing (PNT), high speed transactions, distributed computing, quantum networking, and many other applications.
A common technique used to synchronize remote clocks is based on global positioning system (GPS) public signals, which can achieve nanosecond-scale synchronization <cit.>.
If more precision is desired, or one is operating in a GPS-denied environment, other techniques must be used.
Optical two-way time and frequency transfer (O-TWTFT) utilizes frequency combs to synchronize two remote clocks to femtosecond precision <cit.>.
To date, demonstrations of O-TWTFT have been performed between stationary sites <cit.> and slow-moving drones (<25 m/s).
However, femtosecond-scale synchronization may be excessive for many applications.
As an alternative, the White Rabbit (WR) protocol can achieve picosecond-level precision and has been investigated for quantum-networking synchronization <cit.>.
It was designed for wireline fiber-optic applications, but has been implemented wirelessly <cit.> and considered for space applications <cit.>.
Perhaps the most straightforward optical-time-transfer technique uses pulsed lasers, photodetectors, and software-based correlation methods.
For example, time transfer by laser link (T2L2) demonstrations have achieved picosecond-scale precision between remote ground stations operating in common view with the Jason-2 satellite after a 1000-s acquisition <cit.>.
However, these techniques utilize systems and hardware nonessential to quantum communication.
Another technique consists of utilizing the femtosecond-scale temporal correlations of photon pairs created in spontaneous-parametric-down conversion (SPDC) photon sources.
In this case, the relative time offset between two remote clocks is measured with the following procedure:
(1) a series of photon pairs are separated and transmitted to two remote sites,
(2) the photons are detected and their arrival times are time-tagged based on the respective local clock,
(3) after sufficient detection events are collected, the series of arrival times from each site are combined and correlation methods are used to find the clock offset.
This technique was first proposed in Ref. <cit.> and we refer to it as quantum time transfer (QTT).
One-way QTT enables relative clock synchronization <cit.>, and two-way QTT enables absolute clock synchronization <cit.>.
Subsequently, we investigated the suitability of this method for lossy and noisy channel conditions commensurate with daytime Earth-satellite quantum downlinks <cit.>.
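A minimal sketch of step (3) above is given below (Python; the bin width and search window are assumptions, not the values used in this work): the relative offset is recovered by histogramming the pairwise differences between the two streams of time tags and locating the correlation peak.

import numpy as np

def qtt_offset(tags_local, tags_remote, window=1e-6, bin_width=100e-12):
    # tags_local, tags_remote: sorted arrival times in seconds (numpy arrays)
    edges = np.arange(-window, window + bin_width, bin_width)
    hist = np.zeros(len(edges) - 1)
    for t in tags_local:
        # only remote tags within +/- window of t can contribute to the peak
        j0 = np.searchsorted(tags_remote, t - window)
        j1 = np.searchsorted(tags_remote, t + window)
        hist += np.histogram(tags_remote[j0:j1] - t, bins=edges)[0]
    peak = np.argmax(hist)
    return 0.5 * (edges[peak] + edges[peak + 1])   # centre of the peak bin

The ratio between the height of this peak and the mean accidental background is closely related to the coincidence-to-accidental ratio (CAR) discussed below.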
In this letter, we report remote clock synchronization and ranging with two-way QTT during conditions representative of bi-directional Earth-satellite links.
Our system achieves picosecond-scale timing precision during these challenging conditions with commercial off-the-shelf (COTS) hardware.
We discuss the measured clock offset, the overlapping Allan deviation, the time deviation, the propagation distance between the transmitter sites, and the coincidence-to-accidental ratio (CAR) of the quantum correlation signals relative to the measured atmospheric conditions.
Our freespace quantum-communication testbed is located at the Starfire Optical Range (SOR), Kirtland AFB, NM in the Southwestern United States.
The transceiver sites are the same sites utilized in Ref. <cit.>, but are enhanced for two-way propagation using the arrangement in Fig. <ref>.
We tune the Alice-to-Bob direction to be representative of a low-Earth orbit (LEO) downlink; we set the beam divergence to impose the correct geometric loss, we use a white light source to scatter noise photons into the channel, we use a heat source to add scintillation, and we utilize a closed-loop adaptive-optics (AO) system to monitor and compensate for atmospheric turbulence <cit.>.
In the Bob-to-Alice “uplink” direction, the quantum signal is precompensated by the AO system, and photons arrive at Alice with similar efficiency to the downlink direction.
However, for Earth-satellite uplinks, the quantum signal is transmitted at a point-ahead angle such that it intercepts the LEO satellite.
As a consequence, the quantum signal travels a slightly different path than the downlink beacon that drives the AO system.
This typically results in more loss due to reduced compensation of atmospheric-turbulence effects caused by isoplanatism <cit.>.
Therefore, to make our “uplink" direction more realistic, we apply an additional 2 dB attenuation in software to the true coincidences in the receive channel at the Alice site.
This level of attenuation corresponds to AO compensation with an ideal beacon in the point-ahead direction.
This could be achieved using a boom or a small beacon satellite leading the quantum satellite.
Closing the AO loop on a downlink beacon would result in ∼10 dB more attenuation.
Applying this level of attenuation, we observe temporary loss of synchronization due to the relatively low-performance sources used in this field experiment.
A higher pair rate and heralding efficiency source at the ground station would increase the attenuation tolerance and relax the requirement for the beacon in the point-ahead direction <cit.>.
Figure <ref> gives a schematic of the two-way QTT components.
The sites labeled Alice and Bob each have a Thor Labs SPDC810 bi-photon source and a pair of Excelitas SPCM-AQRH detectors labeled L and R for local and received, respectively.
The clock system is comprised of a Picoquant Hydraharp 400 time tagger, a Stanford Research Systems PRS10 Rubidium frequency standard (RbFS), and a PC.
Alice and Bob create a pair of photons, detect one of the pair with their local detector L, and send the other photon across the 1.6 km freespace channel to the other site where it is detected with a receive detector R.
The detection times t in the Alice-to-Bob direction α are related by
t_A,L = t_B,R-T_prop-δ,
where the subscripts A and B correspond to the local clocks at Alice and Bob, respectively, T_prop is the propagation time, and δ is the absolute clock offset.
Similarly, the detection times in the Bob-to-Alice, β, direction are
t_A,R = t_B,L+T_prop-δ.
One-way QTT is performed independently in both the α and β directions, and the resulting relative clock offsets τ_α and τ_β are
τ_α = t_A,L-t_B,R = -T_prop-δ
τ_β = t_B,L-t_A,R = -T_prop+δ.
Combining Eq. <ref>
we find the absolute clock offset and the propagation time <cit.>
δ = (τ_β - τ_α)/2
T_prop = -(τ_β + τ_α)/2.
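In code, this combination step follows directly from the expressions above (Python sketch):

def two_way_combine(tau_alpha, tau_beta):
    # tau_alpha, tau_beta: one-way relative offsets (s) measured in each direction
    delta = 0.5 * (tau_beta - tau_alpha)    # absolute clock offset
    t_prop = -0.5 * (tau_beta + tau_alpha)  # one-way propagation time
    return delta, t_prop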
In Fig. <ref>(a) we show the measured turbulence parameters [r_0, σ_I^2] over which we performed two-way QTT, where each data point represents the average during a one second acquisition.
The projections onto the Hufnagel Valley 1×, 2×, and 3× HV_5/7 theoretical turbulence profiles <cit.> show that the atmospheric conditions were similar to or worse than those of an Earth-satellite downlink.
We select two continuous 1-hr acquisitions that are representative of daytime (blue) and nighttime (black) atmospheric conditions in order to highlight the performance of the two-way QTT algorithm under each condition.
All other data is shown in gray.
In Fig. <ref>(b) we show the CAR of the QTT correlation signal in the α direction as a function of the background sky radiance H_b during the daytime scenario, where larger CAR corresponds to greater confidence that the QTT algorithm correctly identified the true correlation signal.
As expected, Fig. <ref>(b) shows that the CAR increases as the background sky radiance H_b decreases.
Furthermore, it shows the robustness of the QTT algorithm to high background levels, that is, the QTT algorithm reliably finds the true correlation signal even as the CAR approaches 1.
Next, we consider the timing performance of our two-way QTT clock system.
We do this by utilizing the following two logical or “software” clocks.
One is drifting, that is, the local hardware clocks drift apart and the absolute clock offset δ is simply tracked.
The other is synchronized, that is, we perform the following recursive synchronization algorithm.
First, we estimate the current clock drift Δ U based on prior measurements or estimates.
Next, we predict how far the clocks will drift before the next measurement by adding Δ U × T_a to the most recent offset measurement, where our acquisition time T_a=1 s.
We move the synchronized clock ahead based on the prediction, and thus the next measurement is the difference from the prediction.
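The following Python sketch (illustrative only, not the deployed software; the linear-fit drift estimator and history length are assumptions) replays this logic on a series of offset measurements:

import numpy as np

def synchronize(offsets, t_acq=1.0, history=10):
    # offsets: measured absolute clock offsets (s), one per acquisition
    correction, residuals, past = 0.0, [], []
    for delta in offsets:
        residuals.append(delta - correction)     # measurement relative to the prediction
        past.append(delta)
        if len(past) >= 2:
            # drift rate Delta U estimated by a linear fit over the recent history
            recent = np.asarray(past[-history:])
            times = np.arange(len(recent)) * t_acq
            drift = np.polyfit(times, recent, 1)[0]
        else:
            drift = 0.0
        correction = delta + drift * t_acq       # predicted offset at the next acquisition
    return np.asarray(residuals)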
The result is shown in Fig. <ref>(c) for the daytime and nighttime scenarios with the synchronized (solid curves and left axis) and drifting (dashed curves and right axis) clocks.
The standard deviations are 27.1 and 39.7 ps for the nighttime and daytime scenarios, respectively.
As expected, the timing jitter is smaller at night.
Meanwhile, the drifting clock shows that the local clocks drift from each other at a rate of ∼340 ps per second on average.
Figure <ref>(d) shows the absolute clock offset δ according to the drifting clock after the mean clock drift is removed.
Furthermore, we see that the two-way QTT algorithm can run continuously without losing synchronization for an extended period of time under challenging channel conditions.
The Allan deviation is a standard method to characterize the stability and noise profile of a clock system <cit.>.
In Fig. <ref>(e) we show the overlapping Allan deviation, where the solid and dashed curves are the synchronized and drifting clocks, and the daytime and nighttime scenarios are blue and black, respectively.
The gray curve is the Allan deviation reported in the user manual of the Stanford Research Systems PRS10 RbFS.
The slopes for the drifting and synchronized clocks are approximately -0.6 and -1, respectively.
Furthermore, we measure the time deviation (TDEV) <cit.>, which has a slope of -1/2 according to the synchronized clock.
Combined, these results indicate that the predominant noise is white FM and white PM for the drifting and synchronized clocks, respectively.
Visually, one can see these different noise profiles in Figure <ref>(c) and (d).
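The overlapping Allan deviation itself can be computed directly from the phase (time-offset) data; a minimal Python sketch, assuming uniformly sampled data at interval τ_0, is:

import numpy as np

def overlapping_adev(x, tau0, m):
    # x: phase data (s); tau0: sample interval (s); m: averaging factor (tau = m * tau0)
    x = np.asarray(x, dtype=float)
    n = len(x)
    if n < 2 * m + 1:
        raise ValueError("not enough samples for this averaging factor")
    d2 = x[2 * m:] - 2 * x[m:n - m] + x[:n - 2 * m]     # overlapping second differences
    avar = np.sum(d2 ** 2) / (2.0 * (m * tau0) ** 2 * (n - 2 * m))
    return np.sqrt(avar)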
Assuming a constant speed of light c, one can use Eq. <ref> to find the propagation distance.
In Fig. <ref> we plot the propagation-distance distribution for the nighttime and daytime 1-hour acquisitions with the average propagation distance subtracted.
We find that the mean of both distributions is μ_prop=1.644403 km with standard deviations σ_prop of 1.04 cm and 0.57 cm for the daytime and nighttime scenarios, respectively.
The width of the daytime distribution is larger than the nighttime distribution because the timing jitter is worse during the daytime scenario.
In this letter we report a remote clock synchronization and ranging demonstration over daytime space-Earth channel conditions utilizing a protocol we call two-way quantum time transfer.
Our algorithm synchronizes to 10's of ps after only 1 second of integration with COTS hardware.
Furthermore, we precisely measured the propagation distance of our testbed to a precision better than 1 cm.
We analyze the performance of the protocol using the Allan and time deviations with synchronized and drifting software clocks.
The authors acknowledge program management support from Valerie Knight, Ryan Riley, Ian Blake, and Adrian Lewis (AFRL).
The views expressed are those of the author and do not necessarily reflect the official policy or position of the Department of the Air Force, the Department of Defense, or the U.S. government.
The appearance of external hyperlinks does not constitute endorsement by the U.S. Department of Defense (DoD) of the linked websites, or the information, products, or services contained therein. The DoD does not exercise any editorial, security, or other control over the information you may find at these locations.
Approved for public release; distribution is unlimited. Public Affairs release approval AFRL-2023-3154
|
http://arxiv.org/abs/2307.05106v1 | 20230711083057 | Tree-Based Scenario Classification: A Formal Framework for Coverage Analysis on Test Drives of Autonomous Vehicles | [
"Till Schallau",
"Stefan Naujokat",
"Fiona Kullmann",
"Falk Howar"
] | cs.SE | [
"cs.SE",
"cs.LO"
] |
Tree-Based Scenario Classification: A Formal Framework for Coverage Analysis on Test Drives of Autonomous Vehicles
Till Schallau1 0000-0002-1769-3486, Stefan Naujokat1 0000-0002-6265-6641, Fiona Kullmann1 0000-0001-5858-0659, and Falk Howar12 0000-0002-9524-4459
1TU Dortmund University, Dortmund, Germany
2Fraunhofer ISST, Dortmund, Germany
{till.schallau, stefan.naujokat, fiona.kullmann, falk.howar}@tu-dortmund.de
August 12, 2023
============================================================================================================================================================================================================================================================================================================================
Scenario-based testing is envisioned as a key approach
for the safety assurance of autonomous vehicles.
In scenario-based testing, relevant (driving) scenarios
are the basis of tests. Many recent works focus on
specification, variation, generation and execution of individual
scenarios. In this work, we address the open challenges
of classifying sets of scenarios and measuring coverage of these scenarios in recorded test drives.
Technically, we define logic-based classifiers that
compute features of scenarios on complex data streams
and combine these classifiers into feature trees that
describe sets of scenarios.
We demonstrate the expressiveness and effectiveness of our approach
by defining a scenario classifier for urban driving and evaluating it
on data recorded from simulations.
temporal logic, metric, scenario classification, scenario-based testing, autonomous vehicles
§ INTRODUCTION
One of the open challenges in the development of autonomous driving software is
assuring its safety <cit.>. It has long been established that statistical arguments on
the performance of the complete system (e.g., caused fatalities per million
miles) are not attainable in
practice <cit.>.
The billions of miles that would have to be driven without failures are simply
not feasible for every new vehicle or software update. For several years now,
the focus of research has been on structured approaches to assuring the safety
of autonomous driving functions instead <cit.>.
The recently published ISO 21448 <cit.> norm (Safety
of the Intended Functionality) transfers the conceptual framework of system
safety approaches (e.g., ISO 26262 <cit.>) to the
assurance of a vehicle’s safety under all environmental conditions and possible
faults that are triggered by the environment <cit.>.
Basically, the idea is to identify relevant driving situations and potential
triggers and then use these as a basis for testing the safety of a vehicle or
its driving software. Many recent works focus on defining notions of
safety <cit.>,
formalizing what constitutes
scenarios <cit.>, and on testing safety in specified
scenarios <cit.>.
Recent standardization efforts target the specification of so-called
operational design domains (ODDs) <cit.> that
define the anticipated environmental conditions for autonomous vehicles at a
high level (e.g., weather conditions, road types and parameters, etc.). To
combine works and results on testing individual scenarios into compelling
arguments about the safety of an autonomous vehicle in its operational design
domain, we need tools for describing sets of relevant scenarios in some ODD and
methods for analyzing coverage of these scenarios in driving tests as, e.g.,
stated in the ASAM OpenODD concept <cit.>.
In this paper, we present an approach to the specification of sets of scenarios
through classifiers for features of scenarios in the set. These classifiers can
then be used to identify observed scenarios in recorded test drives. Moreover,
we can compute the set of possible combinations of features from our
specification. This enables us to provide coverage metrics and to identify
counterfactual scenarios, i.e., scenarios that were not observed but could be
observed. Technically, we use logic-based classifiers that identify features of
scenarios on complex data streams and combine these classifiers into feature
trees that describe sets of scenarios, emerging from the combinatorial
combination of features. We extend an existing modal logic to express features
that can be found in ODD standards, in the 6-layer model of driving
scenarios <cit.>, and in the classification of
driving maneuvers (e.g., intersection, traffic light present, light rain,
oncoming traffic, left-turn maneuver, etc.).
We demonstrate the expressiveness and effectiveness of our approach in a case
study: We specify a small set of features and use test drives in a randomized
simulation to analyze the observed scenario classes and the coverage that can be achieved in
this setup. We also show how coverage can be decomposed and only analyzed for
individual features or layers of the 6-layer model.
Summarizing, the contribution of this paper is threefold:
* We present a formal logic for describing properties over recorded
sequences of scenes.
The logic extends upon existing temporal logics in
multiple aspects that are essential for concise specifications that work on
field-recorded data: firstly, the logic allows fuzzy specifications (in the
spirit of: “most of the time”); secondly, it is defined over complex
structured domains for capturing scenes
(cf. Sect. <ref>).
* We present a method for classifying sets of scenarios that is
conceptually inspired by recent works and standardization efforts around
operational design domains (ODDs) and technically inspired by feature models <cit.>,
where features of scenarios are specified using the presented logic over
sequences of scenes. To the best of our knowledge, this is one of
the first approaches that addresses classification and analyses on sets of
different scenarios (cf. Sect. <ref>).
* The specification of features and formal models for sets of scenarios
enable several quantitative and qualitative analyses on recorded driving
data, e.g., scenario coverage, missing scenarios, missing combination of
features, and distribution of combinations of features. In contrast to other
works, these metrics focus on sets of scenarios instead of on parameter
ranges within one scenario (cf. Sect. <ref>).
The ultimate goal of this work is to get a
handle on the task of specifying, selecting, and prioritizing relevant
scenarios and representative combinations of environmental conditions across
all scenarios.
Outline.
The paper is structured as follows. Section <ref> outlines an
example that motivates our approach. The formal logic for defining scenario
properties is introduced in Sect. <ref>.
Section <ref> then introduces the formalism for scenario classifier trees
and the calculation of coverage metrics and analyses
on such trees. The
results of our case study are presented in Sect. <ref>, which
is followed by a discussion on related work in Sect. <ref>. The paper
concludes in Sect. <ref>.
Reproduction Package.
For the experiments conducted in our case study, a reproduction package is available on Zenodo <cit.>.
§ MOTIVATIONAL EXAMPLE
We illustrate our approach for the task of analyzing test drives in
an urban environment. We assume to have a database of recorded test
drives. Recordings consist of sequences of scenes and are
split into meaningful segments, e.g., based on regions of a map.
A single scene is the snapshot of the state and observed
environment of the ego vehicle,
comprising map data, position and velocity of the
ego vehicle, stationary objects, and moving objects around
the ego vehicle. Segments (i.e., sequences of scenes) are recorded
at fixed (e.g., 0.5-second) intervals.
Our task is now to decide if this database contains test drives that
cover enough relevant scenarios (i.e., archetypes of
driving situations), or at least to identify and classify the encountered
scenarios. A scenario, in this case, would be a basic driving task,
like making an unprotected left turn on a three-way intersection,
and it could have variants, (e.g., presence of
oncoming traffic or pedestrians).
Figure <ref> shows an example of a scene from a segment
in which three road users are situated on a T-junction.
The ego vehicle, which is marked by the red box, is planning
to turn left. It is currently stopped at the stop line
of the stop sign on the ego vehicle's lane.
The car marked with the green box is following
its lane, going straight over the junction and crossing the trajectory
of the ego vehicle. The destination lane of ego contains a crosswalk
on which a pedestrian, marked in solid blue, is currently crossing the road.
The pedestrian is also crossing the trajectory of the ego vehicle.
When analyzing the segment, specific maneuvers,
environmental properties, and features can be observed from the
viewpoint of the ego vehicle:
road type is T-junction; ego is turning left;
there is oncoming traffic; a stop sign is present;
the ego vehicle does stop at the stop line,
since it must yield to another vehicle;
a pedestrian is crossing the destination
lane; the weather is sunny during daytime.
These features can be formally described by formulas
in a temporal first-order logic over sequences of scenes.
A set of features can then be used to classify segments:
the combination of features that hold defines the scenario class.
The segment then is one concrete instance of this scenario class.
Assuming that features are not entailed by other features,
we generate 2^n scenario classes with n features.
For the more realistic case that some dependencies exist between
features (e.g., no overtaking without multiple lanes),
we can use trees to model taxonomies of features and still
compute possible scenario classes and check if they exist in
our data.
Possible variations of features in the example could
be: the ego vehicle drives straight instead of turning left,
there is no oncoming traffic, or no pedestrian is crossing the road.
For the sake of simplicity, we neglect the other properties for the following calculation.
Based on these three variations,
a total of 2^3=8 possible scenario classes are observable.
We can use this information to compute missed scenario classes or to
measure scenario class coverage for our database of test drives.
In our example, one scenario class was observed.
Given the eight possible scenario classes, we obtain
a scenario class coverage of 12.5%.
The following two sections formalize these concepts.
§ A TEMPORAL LOGIC FOR PROPERTIES OF SCENARIOS
We base our classifiers for scenarios on the environment representation
that is usually produced by the perception sub-system of an autonomous
vehicle: a map of the road network and typed objects with positions,
velocities, and observed states.
To express properties of recorded sequences of scenes (i.e., momentary
snapshots of the environment of the ego vehicle), we need a formal logic
that can express properties in individual scenes as well properties between
objects in multiple different scenes. Examples are distances between objects in
one scene, or the fraction of scenes in a sequence in which a leading
vehicle is present. We introduce such a logic and then use it
for defining classifier trees that express sets of scenarios in terms
of features in the scenarios (cf. Sect. <ref>).
We use logic structures to describe scenes over a given
signature of domain-specific functions and relations
(e.g., positions, lanes, vehicles, velocities, etc.).
We introduce
CMFTBL (Counting Metric First-Order Temporal Binding Logic),
a metric first-order logic for modeling time that
extends MFOTL (Metric First-Order Temporal
Logic) <cit.>,
while focusing on finite traces of states.
In particular, we extend MFOTL by a
minimum prevalence operator that allows us to express that a property (or
sub-formula) holds for a certain fraction of all future states (within the finite trace).
We also introduce a binding operator
that stores an evaluation of a term into a variable, so that the
result of this evaluation can
be accessed in operator contexts of future states.
While the former extends the expressiveness of MFOTL,
the second one is a shorthand for existentially
quantified formulas of a certain form.
A signature 𝒮 is a tuple
⟨𝒞, ℱ, ℛ, ar⟩,
where 𝒞 is a set of named constants,
ℱ is a set of function symbols,
ℛ is a set of relation symbols, and
ar: (ℱ∪ℛ) ↦ ℕ_0
defines an arity for each
function symbol f ∈ ℱ and
relation symbol r ∈ ℛ.
An 𝒮-structure 𝒟 is a pair
⟨𝔇, I⟩ of a domain 𝔇
and interpretations of constants, functions,
and relations with I(c) ∈ 𝔇
for c∈𝒞,
an ar(f)-ary function I(f): 𝔇^ar(f) → 𝔇
for f ∈ ℱ, and
I(r) ⊆ 𝔇^ar(r)
for r ∈ ℛ.
An interval of the set of non-empty intervals ℐ over ℕ can be written as
[b,b'):={a∈ℕ|b≤ a<b'}, where
b∈ℕ, b'∈ℕ∪{∞} and b<b'.
CMFTBL formulas over the signature 𝒮, intervals ℐ, and the countably
infinite set of variables 𝒱 (assuming 𝒱 ∩ (𝒞∪ℱ∪ℛ) = ∅)
are inductively defined as follows:
* A term t is either a constant c, a variable v, or, for f ∈ ℱ and terms t_1,⋯,t_ar(f), the application f(t_1,⋯, t_ar(f)).
* For r ∈ ℛ and terms t_1,⋯,t_ar(r), the predicate
r(t_1,⋯, t_ar(r)) is a formula.
* For x ∈ 𝒱 and d ∈ 𝔇, if t is a term and φ and ψ are formulas, then (¬φ), (φ ∨ ψ), (∃ x:φ), and (t↓x φ) are formulas,
where t↓x evaluates t in the current state and binds the result to variable x.
* For I ∈ ℐ and p ∈ ℝ, if φ and ψ are formulas, then
next (∘_I φ),
until (φ U_I ψ), and
min. prevalence (∇_I^p ψ) are formulas.
While the semantics of MFOTL is defined over infinite sequences, we restrict our
attention and definitions to finite sequences.
The pair (D̄, τ̄) is a finite temporal
structure over the signature 𝒮, where D̄ = (𝒟_0,
𝒟_1, ⋯, 𝒟_n) is a finite sequence of structures (i.e.,
scenes) over 𝒮 and τ̄ = (τ_0, τ_1, ⋯, τ_n) is a
finite sequence of non-negative rational numbers τ_i ∈ℚ^+ with length n.
The elements in the sequence τ̄ are (increasing) time
stamps. Furthermore, the interpretations of relations
r^𝒟_0, r^𝒟_1,
⋯, r^𝒟_n in a temporal structure (D̄, τ̄) corresponding to a predicate symbol r ∈ℛ may
change over time. The same is true for functions.
Constants c∈𝒞 and the domain 𝔇, on the other hand, do not change over time. More formally, we assume for all 0 ≤ i < n that
τ_i < τ_i+1 and,
for 𝒟_i=⟨𝔇_i, I_i⟩ and 𝒟_i+1=⟨𝔇_i+1, I_i+1⟩, that 𝔇_i = 𝔇_i+1.
Moreover, c^𝒟_i = c^𝒟_i+1 for each constant symbol c∈𝒞.
A valuation is a mapping
v:𝒱→𝔇
from variables to domain elements. We write v [x ↦ d] for the valuation v that maps x to d. All other variables are not affected in the valuation v. We abuse notation by applying a valuation v also to constant symbols c∈𝒞, with v(c)=c^𝒟.
We evaluate term t for valuation v and structure 𝒟, denoted by
β[t,v,𝒟], as follows. For constants and variables x, let
β[x,v,𝒟] = v(x). For function application a=f(t_1,⋯, t_ar(f)),
let
β[a, v, 𝒟] = f^𝒟( β[t_1,v, 𝒟],⋯, β[t_ar(f),v, 𝒟]).
We define the semantics of CMFTBL in terms of the relation (D̄,
τ̄, v, i) ⊨_CMFTBL φ inductively in
Table <ref>, where |τ̄| denotes the count of
time stamps and is mostly used as an upper bound for intervals of temporal
operators.
The temporal structure (D̄, τ̄) satisfies formula φ
iff (D̄, τ̄, ∅, 0) ⊨_CMFTBL φ.
For I∈𝕀 and the common Boolean constant ⊤ (for true), we define the usual syntactic shorthands and non-metric versions of operators as follows.
(φ ∧ ψ) := (¬((¬φ) ∨ (¬ψ))) logical and
(φ ⇒ ψ) := ((¬φ) ∨ ψ) implication
(∀ x:φ) := (¬(∃ x:¬φ)) all quantifier
(◊_Iφ) := (⊤ U_I φ) eventually
(□_Iφ) := (¬(◊_I(¬φ))) always
(Δ^p_Iφ) := (∇_I^1-p¬φ) max. prevalence
We obtain non-metric variants of the temporal operators for interval
[0,∞). The past-time operators from MFOTL (previously, since, once, and
historically) are not required in the study presented in this paper. They could
equally be defined for CMFTBL, but are omitted for brevity.
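As an illustration of the minimum prevalence operator, the following Python sketch evaluates, at position i, whether a point-wise property φ holds in at least a fraction p of the future positions whose time stamps fall into the interval I relative to τ_i (a simplified reading of the semantics in Table <ref>; the strict/non-strict handling of the interval bounds is an assumption):

from typing import Callable, List, Tuple

def min_prevalence(phi: Callable[[int], bool], tau: List[float], i: int,
                   p: float, interval: Tuple[float, float] = (0.0, float("inf"))) -> bool:
    lo, hi = interval
    # future positions whose time stamps fall into the interval relative to tau[i]
    positions = [j for j in range(i, len(tau)) if lo <= tau[j] - tau[i] < hi]
    if not positions:
        return False
    holds = sum(1 for j in positions if phi(j))
    return holds / len(positions) >= p

For example, a check in the spirit of the isInJunction predicate defined later evaluates min_prevalence(lambda j: scenes[j].is_junction, tau, 0, 0.8), where scenes is an illustrative per-scene record.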
To enhance the readability (and also the writing) of CMFTBL formulas, we introduce several notational conventions.
Let isVehicle ∈ℛ be a unary relation.
We define the set of
all vehicles 𝒱 ⊆ 𝔇 as follows:
𝒱 := { d ∈ 𝔇 | isVehicle(d) }
Analogously, we define the set of pedestrians as 𝒫, and the set of
actors 𝒜 := 𝒫∪𝒱 with 𝒫∩𝒱 = ∅.
For our domain elements, we furthermore introduce a notation
reminiscent of object-relational associations in programming
languages. For some vehicle v ∈𝒱 and the relations {isEgo, isLane, onLane}⊆ℛ we use
shorthand notations like:
v.isEgo := isEgo(v)
v.lane := l | l ∈ 𝔇 ∧ isLane(l) ∧ onLane(v,l)
All formulas used for the properties in our tree-based classifier need to be
evaluated for the ego vehicle and usually depend on one unary relation. Thus, for most formulas
φ we can define a pattern for some r ∈ℛ:
φ := ∃ v ∈ 𝒱: □(v.isEgo) ∧ r(v)
In such cases, we just define the relation r. For example, assuming r =
obeyedSpeedLimit, we could validate
that the ego vehicle at all times obeys the speed limit:
obeyedSpeedLimit(v) :=
□(v.speed ≤ v.lane.speedLimitAt(v.pos))
The associations v.speed and v.pos are functions as
introduced before and speedLimitAt is a
function from a position number and a lane to a speed limit number.
For numbers, we assume the relations eq, neq,
lt, gt, leq, geq ∈ ℛ to represent the common mathematical comparators
=, ≠, <, >, ≤, ≥, which we also allow as notation shortcuts.
With these notational conventions, we can quite straightforwardly define traffic
rules and environmental features using CMFTBL formulas and evaluate those
on sequences of scenes. Each predicate of our case study (cf.
Sect. <ref>) is expressed this way. For comparison, consider the obeyedSpeedLimit
formula without these syntactic conventions:
φ_oSL := ∃ v ∈ 𝔇: □(isVehicle(v) ∧ isEgo(v)) ∧
□(∃ l ∈ 𝔇: ∃ p ∈ 𝔇: isLane(l) ∧ onLane(v,l) ∧
leq(speed(v), speedLimitAt(p, l)))
Assumptions on the data can furthermore be validated
using dedicated formulas. For example, that only one ego vehicle may exist at all times and that it does
not change over time could be expressed as:
uniqueEgo :=
∃ v ∈ 𝒱: □(v.isEgo ∧ ∀ v' ∈ 𝒱: v'.isEgo ⇒ v = v')
Other data checks – e.g., that each vehicle can only be on one lane at a time or
that a vehicle's actual position on a lane can not be greater than the lane's length – can be added accordingly to ensure the single
relation nature of the object associations as well as overall data sanity and consistency.
§ CLASSIFIERS FOR SCENARIOS AND METRICS ON SETS OF SCENARIOS
We want to use CMFTBL formulas for expressing
features of scenarios and for classifying recorded
driving data into scenario classes.
Formally, we assume recorded driving data
to be given as temporal structures
(D̄, τ̄) over some
fixed basic signature. This basic signature is
the set of properties that is provided as
information in the data, i.e., objects with
positions and classifications on a road network with
information about lanes, signs, and signals.
For the scope of this paper, we additionally assume
that the recorded data is already segmented into
sequences in a meaningful way. We use
𝔖 to denote a set of segments
of the form (D̄, τ̄).
In practice, segmentation could either be done based
on a map or based on classification, e.g., of driving
maneuvers of the ego vehicle, or by some other sensible
approach.[In our case study, we will segment data with the help of a map into
sequences that contain one main driving situation:
driving through an intersection, driving along
a section of a multilane road between two intersections,
etc.]
We can then define classifiers that identify the scenario class
of some observed data (D̄, τ̄)
and define metrics over observed classes of scenarios.
§.§ Classifiers for Scenarios
Instead of simply using a set of features, we organize
features hierarchically in trees to account for dependencies
between features (a lane change, e.g., can only occur on
a multi-lane road). This will enable us to capture the
taxonomies of features found in the 6-layer model
or in draft standards for specifying operational design
domains.
A tree-based scenario classifier (TSC) is a tuple
𝒯 = ⟨𝒬, q_r, Γ, λ_l, λ_u⟩
with
* set of nodes 𝒬 (i.e., the modeled features)
* root node q_r ∈𝒬
* set of edges Γ of type ⟨q, q', φ⟩ where
* q, q' ∈𝒬 are source and destination,
* CMFTBL formula φ is the edge condition,
* lower bounds for sub-features of nodes λ_l: 𝒬→ℕ_0
* upper bounds for sub-features of nodes λ_u: 𝒬→ℕ_0
We write q →_φ q' for ⟨q, q', φ⟩.
We require 𝒯 to be a tree rooted at q_r.
For q ∈ 𝒬, let c(q) = { q' | ⟨q, q', φ⟩ ∈ Γ }
denote the children of q.
Bounds must be 0 ≤ λ_l(q) ≤ λ_u(q) ≤ |c(q)|.
A path of length k in 𝒯 is a sequence of k transitions q_i-1 →_φ_i q_i with 1 ≤ i ≤ k and q_0 = q_r.
Inspired by feature models, we name certain types of nodes q ∈𝒬
depending on their lower and upper bounds (abbreviated notation with parentheses):
All / (A) λ_l(q) = λ_u(q) = |c(q)|
Exclusive / (X) λ_l(q) = λ_u(q) = 1
Optional / (O) λ_l(q) = 0 λ_u(q) = |c(q)|
a/b-Bounded / (a..b) λ_l(q) = a λ_u(q) = b
Leaf / ( ) λ_l(q) = λ_u(q) = 0
We introduce bounds on sub-features as a means of computing an upper bound
on the number of combinatorial combinations, i.e., the number of observable
scenario classes, in the next section.
A more precise approach to computing feasible scenarios would be to
compute satisfiable combinations of features. Such an approach,
however, does not seem feasible or meaningful.
Even if the satisfiability of some fragment of CMFTBL can be established,
there is no mechanism to constrain acceptable models to realistic segments.
We can now describe individual scenario classes for a scenario classifier.
For a given tree-based scenario classifier
𝒯 = ⟨𝒬^o, q_r^o, Γ^o, λ_l^o, λ_u^o⟩,
a scenario class is a tree T = ⟨𝒬, q_r, Γ⟩
with
* set of nodes 𝒬⊆𝒬^o
* root node q_r = q_r^o
* set of edges Γ of type ⟨q, q'⟩ such that ⟨q, q', φ⟩ ∈ Γ^o for some φ
We require the number of children |c(q)|
for every node q ∈ 𝒬 to be within the lower and upper bounds of q in 𝒯.
Let 𝕋_𝒯 denote the (finite) set of all scenario
classes for tree-based classifier 𝒯,
and let 𝒲 denote the (infinite) set of observable
segments of driving data (D̄, τ̄).
We denote the classification function that maps observed driving data
(D̄, τ̄) to a scenario class T based on
the tree-based scenario classifier 𝒯
by C_𝒯: 𝒲 → 𝕋_𝒯.
For a recorded data segment 𝒮 = (D̄, τ̄),
we compute C_𝒯(𝒮) = ⟨𝒬,
q_r, Γ⟩ by computing the set 𝒬 of nodes,
which uniquely determines the set of transitions.
We initialize 𝒬 as {q_r^o} and then
add every node q' for which
q ∈ 𝒬 and ⟨q, q', φ⟩ ∈ Γ^o
with 𝒮 ⊨ φ, until a fixed point is reached.
We assume that the bounds permit that a valid class
can be computed for every realistic segment 𝒮,
and we lift C_𝒯 to sets of segments by letting
C_𝒯(𝔖) denote the set of observed scenario
classes for 𝔖.
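A minimal Python sketch of this fixed-point computation is given below; it is illustrative only and not the Kotlin implementation used in our experiments (the Edge and TSC containers and the condition callables are placeholders):

from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Edge:
    source: str
    target: str
    condition: Callable[[object], bool]   # CMFTBL formula evaluated on a segment

@dataclass
class TSC:
    root: str
    edges: List[Edge]

def classify(tsc: TSC, segment) -> Set[str]:
    # returns the node set Q of the scenario class of the given segment
    nodes = {tsc.root}
    changed = True
    while changed:                        # iterate until a fixed point is reached
        changed = False
        for e in tsc.edges:
            if e.source in nodes and e.target not in nodes and e.condition(segment):
                nodes.add(e.target)
                changed = True
    return nodes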
§.§ Coverage Metrics for Sets of Scenarios
Given a set 𝔖 of recorded segments and a classifier 𝒯,
we want to analyze and quantify if and to which degree the
recorded data covers possible scenarios.
We start by showing how to compute the number of scenario classes for
a tree-based scenario classifier 𝒯 = ⟨𝒬, q_r, Γ, λ_l, λ_u⟩.
Let Γ_q = { (q,q',φ) ∈ Γ} be the set of edges
originating in q, and
[Γ_q]^λ_l(q)..λ_u(q) =_def ⋃_i = λ_l(q)^λ_u(q) [Γ_q]^i
be the set of all
subsets of these edges with size within lower bound and upper bound of q.
We define the size |𝒯| = |q_r|
recursively as
|q| =_def ∑_G ∈ [Γ_q]^λ_l(q)..λ_u(q) ( ∏_(q,q',φ) ∈ G | q' | )
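This recursion can be implemented directly; the following Python sketch (node identifiers and the children/bounds maps are illustrative) mirrors the definition:

from itertools import combinations
from math import prod

def size(node, children, lower, upper):
    # children[q]: list of child nodes of q; lower[q], upper[q]: bounds of q
    kids = children.get(node, [])
    total = 0
    for k in range(lower[node], upper[node] + 1):
        # every admissible subset of child edges contributes the product of its children's sizes
        for subset in combinations(kids, k):
            total += prod(size(c, children, lower, upper) for c in subset)
    return total    # a leaf (bounds 0..0) contributes exactly 1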
The primary metric we are considering is scenario class coverage (SCC),
expressing the ratio between the number of observed scenario classes and
the number of classes modeled by classifier
𝒯 = ⟨𝒬, q_r, Γ, λ_l, λ_u⟩.
For a set 𝔖 of recorded segments, we define
SCC(𝔖, 𝒯) =_def |C_𝒯(𝔖)| / |𝒯|
It can be expected that gaining high coverage on TSCs with (potentially
multiple combinations of) rare events requires an increasingly high amount of
test scenarios. To measure the individual rarity of the modeled environmental
conditions, from which explanations for coverage gaps might be derived, we
introduce a metric for absolute feature occurrence (afo). It
counts the number of segments
that are classified as scenarios containing a given node (i.e., feature).
afo(𝔖, q) =_def | {⟨𝒬, q_r, Γ⟩ ∈ C_𝒯(𝔖) | q ∈ 𝒬} |
In addition to coverage, which only considers if a scenario class
has been observed, we define scenario instance count (sic) to count
how often a certain class has been encountered in a set of scenarios.
sic(𝔖, t) =_def | {𝒮 ∈ 𝔖 | C_𝒯(𝒮) = t} |
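Both metrics are straightforward to compute once segments have been classified; the following Python sketch reuses the classify and size sketches from above and represents scenario classes by their node sets:

from collections import Counter

def scc(segments, tsc, tsc_size):
    observed = {frozenset(classify(tsc, s)) for s in segments}
    return len(observed) / tsc_size

def sic(segments, tsc):
    # maps each observed scenario class to the number of segments classified as it
    return Counter(frozenset(classify(tsc, s)) for s in segments)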
Similar to calculating the size of a TSC, we can enumerate all
possible scenario classes and use them to identify class instance
missings, i.e., classes as which no segment 𝒮 ∈ 𝔖 is classified.
However, gaining meaningful insights from large sets of missing classes is
difficult. Therefore, we also analyze feature pair misses, i.e., pairs
of TSC nodes that do not exist together in any observed class.
§ EVALUATION
Our evaluation is designed as a single case mechanism experiment <cit.>
that validates the presented approach and our implementation.
We develop a tree-based scenario classifier for an urban
driving environment and use it for analyzing simulated urban
traffic. Features are chosen to model
the types of properties (or labels) that are envisioned for
specifications of operational design domains (ODDs) as
described in BSI 1883 <cit.> or OpenODD <cit.>.
Since this is the first work on scenario class coverage, we aim at answering the following questions —
mostly with qualitative data.
Q1. Can relevant properties of operational design domains
be expressed in CMFTBL?
Q2. Is it computationally feasible to classify scenarios with
a tree-based scenario classifier?
Q3. To which degree can scenario class coverage be achieved and
can scenario class coverage generate useful insights (e.g.,
missing classes)?
The remainder of this section discusses the
classifier developed for our case study, details the experimental setup, presents
results from the simulated experiments, and provides
initial answers to the above questions.
§.§ Tree-Based Scenario Classifier Definition
To construct a tree-based scenario classifier (TSC) for our case study, we
evaluated the 6-layer model of scenario classification by Scholtes
et al. <cit.> and extracted observable
features. We defined logical formulas using CMFTBL
that are capable of identifying these features on segments (i.e., sequences of scenes).
The hierarchic organization of all the features resulted in the TSC visualized in Fig. <ref>.
We additionally define smaller TSCs by
grouping features that we want to analyze together.
This allows us to study coverage
for smaller sets of features.
For the sake of
presentation in this paper, we introduce TSC projections. They combine related
features into subsets of all features of
an original full TSC. The following projections are based on the layers of
information discussed in <cit.>:
* full TSC: the complete TSC that serves as the reference point for the comparison with the other projections
* layer 1+2: driving features in relation to static information (roads, lanes traffic signs, etc.) during the scenario run
* layer 4: driving features in relation to other objects that dynamically change during the scenario run (like other vehicles, pedestrians, etc.)
* layer 1+2+4: combination of static information and dynamically changing objects
* layer (4)+5: environmental features from layer 5 combined with the traffic density from layer 4
* pedestrian: example for a more specialized projection to analyze the coverage of pedestrians crossing the street in all possible environmental situations
We did not include Layer 3 (Temporary Modifications of Layer 1 and Layer 2)
and Layer 6 (Digital Information), as there were no elements
of these layers available in our test environment.
We visualize our projection labels in
Fig. <ref> through colored circles. A filled circle indicates that the complete subtree rooted at this node is included in the
projection.
Half circles indicate that the subtree rooted at this node is
partially (i.e., as labeled) included in the projection.
All edges of the TSC have a corresponding logical condition.
For readability reasons, Fig. <ref> only depicts always-true edge conditions and
formulas explicitly described in this paper.
A more detailed overview of the set of implemented predicates is given in the next section.
There, we discuss the total number of predicates, their definitions using our newly introduced operators, and their
complexity.
§.§ Predicates
For our case study, we defined all scenario class features as CMFTBL predicates and formulas. This section discusses a selection of these
predicates, in particular to demonstrate our newly introduced prevalence and binding CMFTBL operators.
Let 𝒱 be the set of all vehicles and 𝒫 be the set of
all pedestrians. As a basis for our data structure, we use the OpenDrive standard[<https://www.asam.net/standards/detail/opendrive/>].
Therefore, we can reason about static and dynamic relations between vehicles
and other entities (e.g., other vehicles, pedestrians, landmarks, etc.) by
mapping entity positions to their respective OpenDrive lanes. Consequently,
each entity is related to a lane which in itself contains additional information
we use for our predicate definitions.
To decide whether a vehicle v ∈𝒱 was primarily driving through a junction during the analyzed segment,
we require the vehicle's road to be categorized as a junction in at least 80%
of the time stamps.
isInJunction(v) := ∇^0.8(v.lane.road.isJunction)
Similarly, to determine whether a vehicle v ∈𝒱 is driving on a single-lane road,
we require v's road to have only one lane for at least 80% of the observed scenes.
Additionally, we require the road to not be classified as a junction.
onSingleLa neRoad(v) := ¬ isInJunction(v)
∇^0.8(sameDirectionLaneCount(v.lane) = 1)
2
We define the predicate for deciding if a vehicle v ∈𝒱 is on a multi-lane road by
combining the predicates (<ref>) and (<ref>).
onMultiLaneRoad(v) := ¬ onSingleLaneRoad(v) ∧
¬ isInJunction(v)
To be able to detect a lane change for a given vehicle
v ∈𝒱, our binding operator is utilized.
We bind the lane of vehicle v at the first evaluation
time stamp to a new variable l. As the vehicle v progresses in time
and might change its lane, we can compare its lane value to l
to detect a lane change.
changedLane(v) := v.lane↓l (◊(l ≠ v.lane))
Pedestrians crossing a lane are (at some timestamp) identified as being on this lane.
Therefore, we can detect if for a pedestrian p ∈𝒫
and vehicle v ∈𝒱 the predicate onSameLane(v,p) holds.
For the vehicle v, a crossing of pedestrian p is only relevant if it happens `closely in front of v'.
This is defined by inReach(p,v) which checks if p's position on the (same) lane is in front of and at most 10 meters away from v.
pedestrianCrossed(v) :=
◊(∃ p ∈ 𝒫: onSameLane(p, v) ∧ inReach(p, v))
onSameLane(a_0, a_1) := a_0.lane = a_1.lane
inReach(a_0, a_1) := 0 ≤ a_0.pos - a_1.pos ≤ 10
All predicates defined for the tree-based scenario classifier in Fig. <ref> are of similar complexity as the ones presented above.
In total, we defined 51 predicates (including sub-predicates) to completely express the detection of the modeled features for our experiments.
We used the min. prevalence operator in 18 predicates to model that some
feature is present for most of the time covered by a segment.
The binding operator was used once directly, for specifying the change of lanes,
and once indirectly by using the negation of the change of lanes.
§.§ Experimental Setup
As the basis for our experiments, we built a toolchain using the CARLA
simulator <cit.> and an analysis framework written
in the Kotlin programming language[<https://kotlinlang.org>]
that classifies recorded scenario runs according to a tree-based classifier.
As a proof-of-concept implementation, it is not optimized for performance.
However, it is already sufficient
to run our experiments within a few hours. Thus, we left a proper algorithmic
approach to CMFTBL evaluation for future work.
Based on the classifications of recorded runs, a subsequent analysis step calculates the
coverages and analyses introduced in Sect. <ref>. Additionally, by
iteratively analyzing the set of recorded segments, we can measure class
coverage over time by counting newly observed scenario classes.
This provides us with an increasing curve on coverage.
In our toolchain, TSCs are evaluated on an abstract representation of the scene,
i.e., the ego vehicle and its surroundings. This data structure
is designed to be constructed in various ways, and we provide an
implementation for CARLA. Static as well as dynamic data is
exported into JSON files during simulation, read by the Kotlin
framework, and weaved together forming the consistent abstract view of the world.
The data for each simulation run is then segmented to be classified. The
primary factor for this segmentation is the ego vehicle's road.
This results in each simulation run being cut into individual segments of
either driving through a junction or following a (potentially multi-lane) road section
without crossed lanes. After this segmentation, we discard all segments too short for a meaningful analysis, i.e., segments containing only 10 or fewer scenes.
For our experiments, we recorded 100 simulation runs of 5 minutes each. In
every run, a random map, daytime, and weather were chosen and up to 200
vehicles and 30 pedestrians were spawned randomly on the map. For maps that do
not specify enough spawn points, we spawned as many actors as possible.
During the simulation, all vehicles drove around the map using CARLA's autopilot.
We analyzed each simulation run multiple times: once with each vehicle
being considered to be the ego vehicle. This enabled us to increase the
amount of encountered situations (and therefore coverage) without the need to
record about 200 times as many simulation runs. Overall, this resulted in
113,767 analyzed segments representing 1,104 hours of driving data. The
analysis of this data with the classifiers and predicates introduced in
Sects. <ref> and <ref> takes about 118
minutes on a single core of a 2021 Apple M1 Pro SoC. A reproduction package
for our experiments – a virtual machine image that contains our recorded
driving data, the framework, specifications, and analysis code – is
archived on Zenodo <cit.>.
§.§ Experimental Results
In this section, we present the application of our coverage metrics and analyses for scenario classes based on our data set of 113,767 classified segments.
Class Coverage.
We visualize our results for scenario class coverage over the course of analyzed segments in Fig. <ref>. Each
colored curve represents the coverage result of one classifier projection, as defined in Sect. <ref>.
The legend also shows for each projection the final count of observed classes after analysis of all 113,767 segments as well as the number of possible classes.
The layer 1+2 projection covers 100% of scenario classes after 12,233 analyzed segments.
Furthermore, layer (4)+5 almost fully covers the possible scenario classes after around 59,409 segments,
but misses one scenario class and therefore only reaches 97%. The pedestrian projection
covers over 90% of relevant scenario classes. The projections layer 4 and
layer 1+2+4 cover 72% and 48% of relevant scenario classes, respectively.
Finally, the reference projection full TSC reaches a coverage of 26%.
Scenario Instance Count.
In Fig. <ref>, we exemplarily demonstrate the scenario instance count metric
of the 175 observed scenario classes for the layer 1+2+4 projection. The plot
shows a long-tail distribution in which 85,120 segments
of the total 113,767 segments
are classified into only 15 scenario classes.
The remaining 28,647 segments are classified into the remaining
160 scenario classes. The three most common scenario classes are each observed about 11,000 times.
Test Scenario Set Analysis.
Table <ref> gives an overview of the statistical values of our generated test scenario set.
We used three maps shipped with the CARLA simulator with an average lane length of 37.4 meters. Note that
especially Town 01 has some long lane sections with the maximum length being 310 meters. In contrast,
some lane sections are only 4 meters long. In total, we generated 1,104.2 hours of data with a
segment count of 113,767. On average, there are over 1,000 segments per road section of each map, while
the segments have an average length of over 50 scenes with a maximum scene count of 592.
Absolute Feature Occurrence.
Our analysis provides detailed insights into specific scenario classes regarding the underlying features and their combinations. To demonstrate
the results, Fig. <ref> exemplarily shows analyses on the Dynamic Relation features of the Multi-Lane node of our
TSC (cf. Fig. <ref>). For better readability in the figure, we label the observable features as
a=“Oncoming Traffic”, b=“Pedestrian Crossed”, c=“Following Leading Vehicle” and d=“Overtaking”.
We also write (x) or (x̅) if feature x was observed or not observed, respectively.
For example, the combination (a·b·c·d̅) describes the scenario classes in which
Oncoming Traffic, Pedestrian Crossed and Following Leading Vehicle are observed, while
Overtaking is not observed.
Figure <ref> visualizes the individual absolute occurrences of each observable feature for the dynamic relations on multi-lane roads.
The percentages are based on the 19,913 analyzed segments classified as containing the Multi-Lane feature.
Here, Oncoming Traffic (a) appears in 95.41% of the total occurrences. Pedestrian Crossed (b) and Following Leading Vehicle (c) are similarly
present with a coverage of 24.92% and 29.85%, whereas Overtaking (d) is only encountered in 0.25% of the analyzed segments.
Full Combinatorial Analysis.
As defined in our TSC (cf. Sect. <ref>) the Dynamic Relation node is
marked as Optional, i.e., all combinations of the four children form valid scenario classes. Consequently, there are 16 possible combinations of features.
Figure <ref> visualizes the distribution for all combinations of
features. It can be seen that 95.16% of observed scenarios are covered by the following
four feature combinations: (a·b̅·c̅·d̅), (a·b̅·c·d̅), (a·b·c̅·d̅) and (a·b·c·d̅).
Of the five feature combinations that never occurred, each includes feature (d), which directly stems from the overall low occurrence of only 0.25% of feature (d).
Additionally, the three observed combinations that include feature (d) are among the four combinations with lowest occurrence.
Feature Pair Misses.
As discussed before, our method yields precise information on which scenario
classes never occurred. But as the full TSC analysis resulted in 3,702 unseen
classes, a detailed analysis is unfeasible. With the analysis of feature pair
misses, we instead focus on predicate combinations that never occurred. This
results in the information that the following five predicate combinations were
never observed together: (Overtaking & Lane Change), (Overtaking & Has Red
Light), (Has Stop Sign & High Traffic), (Has Yield Sign & High Traffic), (Has
Yield Sign & Middle Traffic).
§.§ Discussion
In the previous section, we visualized and described various methods of analyzing test drives
regarding a given specification. We demonstrated the expressiveness of our approach
with coverage metrics for scenario classes and predicate combinations. This
section discusses these findings in the context of the three questions
formulated in Sect. <ref> and closes with a discussion of threats to validity.
Q1 (Expressivity).
Using the CMFTBL logic, we were able to express many relevant properties for
common driving situations considered in the proposals for operational design
domains <cit.> and approaches like the 6-layer model <cit.>.
In particular, the prevalence operator was helpful to detect properties where
it is natural to formulate `majority of the time' constraints (like
environmental conditions or traffic density). The binding operator
adds an intuitive mechanism for value storage that can be used to include
`remembered information from the past' in the evaluation of a state.
The notational conventions (like dedicated sets for vehicles/pedestrians or
object-relational element associations) furthermore facilitate a mapping from
a more human-readable presentation to CMFTBL formula syntax.
Properties we did not include in our case study were usually left out not
because they were impossible (or even particularly inconvenient) to express in
CMFTBL, but because we were not able to automatically extract – with a reasonable
amount of effort – the required information from our simulation setup with
CARLA (e.g., yield priorities in roundabouts, behavior on highway entries, or
temporary modifications like construction work).
We are confident that the logic can express most of the features
required of a scenario classifier for an ODD.
Q2 (Analysis Time).
We analyzed a total of 1,104 hours of data from simulated test drives, which
took a little over 118 minutes. With a total of 113,767 segments we have on
average 34.93 seconds of driving data per segment and 62.23 milliseconds of
computation time per segment evaluation.
While a more comprehensive scenario classifier would contain
more features, due to the tree-based structure of our classifier,
whole sub-trees get cut off from evaluation if a condition does not hold
(e.g., none of the single-lane features of Fig. <ref> are
evaluated when the segment is recognized as a junction).
The obtained results thus indicate that our approach is
generally feasible with regard to computation time. Even online monitoring of
properties while driving (i.e., after completing a segment) seems possible.
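The following Python sketch illustrates, under simplified assumptions, how such a tree-based classifier short-circuits evaluation: predicates are modelled as plain boolean callables instead of CMFTBL formulas, and the children of a node whose condition fails are never visited.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ClassifierNode:
    name: str
    predicate: Callable[[dict], bool]
    children: List["ClassifierNode"] = field(default_factory=list)

    def evaluate(self, segment: dict) -> List[str]:
        """Return the names of all matched nodes; sub-trees of a failed node are skipped."""
        if not self.predicate(segment):
            return []
        matched = [self.name]
        for child in self.children:
            matched.extend(child.evaluate(segment))
        return matched

# Toy example: single-lane features are only evaluated on non-junction segments.
root = ClassifierNode("road", lambda s: True, [
    ClassifierNode("junction", lambda s: s["is_junction"]),
    ClassifierNode("single-lane", lambda s: not s["is_junction"] and s["lanes"] == 1, [
        ClassifierNode("oncoming traffic", lambda s: s["oncoming"]),
    ]),
])
print(root.evaluate({"is_junction": True, "lanes": 2, "oncoming": True}))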
Q3 (Scenario Coverage).
Our experiments demonstrate that scenario coverage can be achieved with
our concept of hierarchical classifiers. Even though the features evaluated with
our classifiers are limited in scope, they cover a sensible amount of
situations for urban driving.
With our approach, it is possible to
automatically classify test drives
based on a predefined specification.
Our detailed analyses proved particularly useful for the interpretation of the
coverage levels our projections converge to. All five feature combinations not
encountered at all throughout our data combine a layer 1+2 feature with
a layer 4 feature. Due to the combinatorial nature of our classifier
concept, about half of all combinations in the layer 1+2+4 projection
remain undetected. We can utilize this information and investigate why
certain feature pairs are missed. For instance, in the three maps we included
in our experiments, only a single junction on a small
side road has a yield sign. We are less likely to detect middle or high traffic
density on this road.
These insights can be used to plan test drives or as a basis for
analyzing the relevance of specified scenarios in some real environment.
Threats to Validity
Internal Validity.
To test our approach, we generated data with CARLA,
as this allowed us to produce a large set of test drives
using automated scripts.
We have not tested our approach on
a set of ground truth data to check our predicates against pre-labeled data.
However, we manually inspected rendered videos of the generated data set
to verify that the actual driving situations match those addressed by our formulas.
Combined with manually written test cases for each predicate
and all implemented logical operators,
we are confident that the predicates are capable of detecting the scenarios
they are designed for.
Even though the binding
operator was not used pervasively in our case study, it would not have been
possible to express a change of lanes without the operator and we expect that
many more complex predicates pertaining to driving require the operator.
Two examples are change of heading and giving the right of way in an all-way stop situation.
External Validity.
We were able to define all relevant predicates for our experiments, but this is
not fully independent of our selection of maps and the behavior of CARLA's autopilot.
As stated in Sect. <ref>, we focused on analyzing urban driving scenarios,
but the available maps in the CARLA simulator also include interstate traffic.
As other works already formalize interstate traffic (see <cit.>),
we are confident that we are also capable of analyzing new types of maps
and traffic situations using our introduced approach. Furthermore, we
can also record human-controlled driving behavior using the CARLA
simulator and a hardware setup containing a steering wheel, pedals
and multiple screens. These recordings produce the same file format
as our generated scenarios and can therefore easily be analyzed.
Concept Validity.
When analyzing more complex situations, the specification might get too large
for our approach to be practically usable. In particular, since real-world data
can contain errors and deviations, various complex predicates and classification trees might become necessary.
Our experiments use the perfect world perception provided by CARLA,
which removes the fuzziness of sensor data.
Analyzing real-world data requires the intelligent handling of such
fuzzy sensor data streams. Predicates then need to take into consideration that the environmental
perception, such as object tracking, might be incorrect or imprecise.
Previous works show that current research is already addressing certain problems
with regard to environmental perception <cit.>, such as sensor fusion <cit.> or object reference generation <cit.>.
We are confident that with further results and insights,
we can use our formal specifications to include fuzzy perception data.
§ RELATED WORK
Our approach is related to various existing
works on the safety of autonomous vehicles.
Formalizing traffic scenarios.
Previous work formalizes traffic rules using different formal logics to define
specific scenario rules. Esterle et al. formalize traffic rules for highway
situations <cit.> by using Linear Temporal Logic (LTL).
The same logic is also used by
Rizaldi et al. <cit.> to formalize German overtaking rules.
Additionally, they provide verified checkers that are able to calculate the satisfaction
of a specific trace against the defined LTL formulas. Other works formalize similar traffic rules
using the Metric Temporal Logic such as interstate traffic <cit.> or intersections <cit.>.
Additional traffic rules regarding uncontrolled intersections are formally described
by Karimi and Duggirala <cit.> using Answer Set Programming.
Most of their rules specify the expected behavior of traffic participants with regard to
the right of way at unprotected intersections.
Scenario-based Testing.
In the past few years, research has started to focus on scenario-based safety assurance
(mainly testing) of autonomous vehicles <cit.>, exploring definition,
specification, instantiation, execution <cit.>, and generation of scenarios for scenario-driven development <cit.>, for regression testing <cit.>, and for accident scenarios <cit.>, mining
scenarios from data, test automation <cit.>, notions of similarity between scenarios <cit.>, and on
finding critical test scenarios <cit.>. Steimle et al. <cit.> provide a
taxonomy and definitions of terms for scenario-based development and
testing. Ulbrich et al. <cit.> define a scenario
to be a sequence of scenes and a scene to be a snapshot of a
vehicle’s environment, including all actors, observers,
self-representations, and relationships between them.
Klischat and Althoff <cit.> generate critical test scenarios
using evolutionary algorithms by minimizing the solution space of the vehicle under test.
Menzel et al. <cit.> define that scenarios can be functional,
logical, or concrete.
A functional scenario describes the entities
of the domain and their relation at a semantic level
(different levels of abstraction are deemed possible). Logical
scenarios represent entities and relations with the help of
parameter ranges, i.e., provide an interpretation of the semantic
signature on the tempo-spatial structures that represent scenes.
Concrete scenarios, finally, are individual instances of
tempo-spatial structures with their semantic meaning.
These notions are widely accepted in industry and academia today and provide a
framework for formulating goals and challenges <cit.>.
Generating scenarios from semantic primitives and developing adequate semantic
primitives is approached by Zhang et al. <cit.>
and Medrano-Berumen and Akbas <cit.>.
The first work generates collision-free traffic scenarios by describing road shapes using
extracted traffic primitives.
Similarly, the second work generates roadways by connecting building blocks
(i.e., geometric primitives). In contrast to our work, these works describe scenarios
using geometric shapes and calculations to build scenarios, while we describe scenarios
with logically defined higher-level predicates.
Analyzing Real-World Data.
Although scenario-based testing is widely accepted for testing autonomous
systems, evaluations on real-world data are nevertheless mandatory <cit.>.
In particular, real-world data can be used to find relevant or critical
traffic scenarios, which can then be used to develop scenarios for
scenario-based testing <cit.>.
Real-world data can also be utilized to help understand
how human drivers perceive autonomous system failures in real-world
situations <cit.>.
Other applications for real-world data are: decision making <cit.>,
pedestrian intention estimation <cit.> or object
detection <cit.>.
Nevertheless, real-world data should always be accompanied by exhaustive simulation data <cit.>,
as it can model situations that are not feasible to test in the real world (e.g., accidents).
Coverage and Metrics.
To check whether a test set is sufficient, coverage criteria are developed.
For this, Laurent et al. <cit.> introduce a coverage
criterion for the parameters that are utilized in the decision process of autonomous systems.
Langner et al. <cit.> automatically detect novel traffic scenarios using
a machine-learning approach.
Using this, they are able to reduce a given test set to unique test scenarios. Their future work
includes labeling of scenarios to further improve the classification of novel
scenarios.
Hauer et al. <cit.> introduce a test-ending criterion for automated and autonomous
driving systems that should help in arguing about the safety of autonomous vehicles.
Closest to ours are the following two works:
Amersbach and Winner <cit.> introduce a first approach for scenario coverage by calculating the
required number of concrete scenarios regarding specified parameter ranges. They argue that
for validating highly automated vehicles a specification of functional scenarios (e.g., lane-change,
following, etc.) has to be developed.
Li et al. <cit.> generate abstract scenarios while maximizing the coverage of k-way combinatorial testing. Each abstract scenario can be seen as an equivalence class for which a set of concrete scenarios is generated. The categories used for generating the abstract scenarios are similar to the scenario classifiers used in this paper (e.g., weather, road type, ego-action).
§ CONCLUSION
We have presented a temporal logic for expressing features of
driving scenarios and a way of combining feature classifiers
into tree-based scenario classifiers
that structure the operational design domain of an
autonomous vehicle into relevant scenario classes.
Tree-based scenario classifiers enable an analysis of
scenario class coverage for recorded driving data.
We have evaluated our technique in simulated urban driving
experiments. The results show that we are capable of achieving full
coverage for some scenario classifiers and can reason about
the observed features of the analyzed set of recorded test drives.
|
http://arxiv.org/abs/2307.07595v1 | 20230714193805 | Training Discrete Energy-Based Models with Energy Discrepancy | [
"Tobias Schröder",
"Zijing Ou",
"Yingzhen Li",
"Andrew B. Duncan"
] | stat.ML | [
"stat.ML",
"cs.LG"
] |
Training energy-based models (EBMs) on discrete spaces is challenging because sampling over such spaces can be difficult. We propose to train discrete EBMs with energy discrepancy (ED), a novel type of contrastive loss functional which only requires the evaluation of the energy function at data points and their perturbed counter parts, thus not relying on sampling strategies like Markov chain Monte Carlo (MCMC). Energy discrepancy offers theoretical guarantees for a broad class of perturbation processes of which we investigate three types: perturbations based on Bernoulli noise, based on deterministic transforms, and based on neighbourhood structures. We demonstrate their relative performance on lattice Ising models, binary synthetic data, and discrete image data sets.
§ INTRODUCTION
Building large-scale probabilistic models for discrete data is a critical challenge in machine learning because of its broad applicability to inference and generation tasks on images, text, or graphs. Energy-based models (EBMs) are a particularly flexible class of models p_ebm∝exp(-U), where the modelling of the energy function U through a neural network can be tailored to the data set of interest. However, EBMs are notoriously difficult to train due to the intractability of their normalisation.
[Figure: Generated samples from the EBM trained with Energy Discrepancy on static MNIST.]
The most popular paradigm for the training of EBMs is the contrastive divergence (CD) algorithm <cit.>, which performs approximate maximum likelihood estimation by using short-run Markov chain Monte Carlo (MCMC) to approximate intractable expectations with respect to p_ebm. The success of CD has led to rich research results on sampling from discrete distributions to enable fast and accurate estimation of the EBM <cit.>.
However, training EBMs with CD remains challenging: Firstly, discrete probabilistic models often exhibit a large number of spurious modes which are difficult to explore even for the most advanced sampling algorithms. Secondly, CD lacks theoretical guarantees due to short-run MCMC <cit.> and oftentimes leads to malformed energy landscapes <cit.>.
We propose the usage of a new type of loss function called Energy Discrepancy (ED) <cit.> for the training of energy-based models on discrete spaces. The definition of ED only requires the evaluation of the EBM on positive and contrasting, negative samples. Unlike CD, energy discrepancy does not require sampling from the model during training, thus allowing for fast training with theoretical guarantees.
We demonstrate the effectiveness of ED by training Ising models, estimating discrete densities, and modelling discrete images in high-dimensions (see <ref> for an illustration).
§ ENERGY DISCREPANCIES
Energy discrepancies are based on the idea that if information is processed through a channel 𝒬 then information will be lost. Mathematically, this is expressed through the data processing inequality KL(Qp_data ‖ Qp_ebm) ≤ KL(p_data ‖ p_ebm). Consequently, the difference of the two KL divergences forms a valid loss for density estimation <cit.>. Retaining only terms that depend on the energy function U results in the energy discrepancy <cit.>:
[Energy Discrepancy]
Let p_data be a positive density on a measure space (𝒳, d) and let q(y|x) be a conditional probability density. Define the contrastive potential induced by q as
U_q(y) := - log∑_x'∈𝒳 q(y | x') exp(-U(x')).
On discrete spaces, d is assumed to be the counting measure. On continuous spaces 𝒳, the appearing sums and expectations turn into integrals with respect to the Lebesgue measure.
We define the energy discrepancy between p_data and U induced by q as
ED_q(p_data, U) := 𝔼_p_data(x)[U(x)] - 𝔼_p_data(x)𝔼_q(y | x)[U_q(y)].
The validity of this loss functional is given by the following non-parametric estimation result, previously stated in <cit.>:
[]theoremrestatheoremone
Let p_data be a positive probability density on (𝒳, d). Assume that for all x∼ p_data and y∼ q(·|x), Var(x|y)>0. Then, the energy discrepancy ED_q is functionally convex in U and has, up to additive constants, a unique global minimiser U^* = argmin_U ED_q(p_data, U). Furthermore, this minimiser is the Gibbs potential for the data distribution, i.e. p_data∝exp(-U^∗).
We give the proof of <ref> in <ref>. The perturbation q can be chosen quite generally as long as it can be guaranteed that computing y comes at a loss of information, which mathematically is expressed through the variance of recovering x from y∼ q(·|x) being positive. In the next section, we propose some practical choices for q.
§.§ Training Discrete Energy-Based Models with Energy Discrepancy
The perturbation process q needs to be chosen under the following considerations: 1) The contrastive potential U_q() has a numerically tractable approximation. 2) The negative samples obtained through q are informative for training the EBM when only finite amounts of data are available. We propose three categories for constructing perturbative processes:
Bernoulli Perturbation.
For ε∈ (0, 1), let ξ∼Bernoulli(ε)^d. On 𝒳= {0, 1}^d, consider the perturbation y = x + ξ mod(2), which induces a symmetric transition density q(y-x) on {0, 1}^d. Due to the symmetry of q, we can then write the contrastive potential as
U_bernoulli(y) = -log∑_x'∈𝒳 q(y-x') exp(-U(x')) = -log𝔼_x'∼ q(y-x')[exp(-U(x'))]
The expectation on the right-hand side can now be approximated by sampling M Bernoulli random variables ξ^j and taking the remainder of (y+ξ^j)/2. We denote this method as ED-Bern.
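A minimal PyTorch sketch of this negative sampling step is given below; it assumes binary data stored as a float tensor of shape (N, d) and is meant to illustrate the procedure rather than reproduce our implementation.

import torch

def ed_bern_negatives(x_pos: torch.Tensor, eps: float, M: int) -> torch.Tensor:
    """x_pos: (N, d) binary tensor; returns (N, M, d) negative samples."""
    N, d = x_pos.shape
    xi = torch.bernoulli(torch.full((N, d), eps))
    y = (x_pos + xi) % 2                       # forward perturbation y = x XOR xi
    xi_neg = torch.bernoulli(torch.full((N, M, d), eps))
    x_neg = (y.unsqueeze(1) + xi_neg) % 2      # M approximate recoveries per data point
    return x_neg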
Deterministic Transformation.
The perturbation q can also be defined through a deterministic information-losing map g:𝒳→𝒴, where the space 𝒴 may or may not be equal to 𝒳 depending on the choice of g. The contrastive potential can be expressed in terms of the preimage of g, i.e.
U_g(y) = -log∑_x': g(x') = y exp(-U(x')) = -log𝔼_x' ∼𝒰(g^-1(y))[exp(-U(x'))] - c
with c = log|g^-1(y)|. Again, the contrastive potential can be approximated by sampling M instances from the uniform distribution over the set {x': g(x') = y}. In our numerical experiments, we focus on the mean-pooling transform g_pool, whose preimages consist of block-wise permutations. For details, see <ref>. We denote this method as ED-Pool.
Neighbourhood-based Transformation.
Finally, inspired by concrete score matching <cit.>, we may define energy discrepancies based on neighbourhood maps x↦𝒩(x)∈𝒳^K which assign to each point x∈𝒳 a set of K neighbours[We are making the assumption that the number of neighbours is the same for each point. A more general case is discussed in <ref>.]. We define the forward perturbation q(y|x) by selecting neighbours y∼𝒰(𝒩(x)) uniformly at random. Conversely, the contrastive potential can be expressed in terms of the inverse neighbourhood y↦𝒩^-1(y)∈𝒳^K, i.e. the set of points that have y as their neighbour. We then obtain for the contrastive potential
U_𝒩(y) = -log1/K∑_x'∈𝒳: y∈𝒩(x') exp(-U(x')) = -log𝔼_x' ∼𝒰(𝒩^-1(y))[exp(-U(x'))] .
In practice, we choose the grid neighbourhood (<ref>) and denote this method by ED-Grid.
Stabilising Training.
The above schemes permit the approximation of the contrastive potential from M samples, which are generated by first sampling y∼ q(·|x), after which we compute M approximate recoveries x_-^j. The full loss can then be constructed for each data point x_+∼ p_data by calculating log∑_j= 1^M exp(U(x_+) - U(x_-^j))-log(M) using the numerically stabilised logsumexp function. In practice, however, we find that this estimator for energy discrepancy is biased due to the logarithm and can exhibit high variance. To stabilise training, we introduce an offset for the logarithm, which introduces a deterministic lower bound for the loss. This yields the energy discrepancy loss function
ℒ_q, M, w(U) := 1/N∑_i=1^N log(w+ ∑_j= 1^M exp(U(x^i_+) - U(x_-^i,j))) - log(M)
with x^i_+ ∼ p_data. In <ref> we prove that this approximation is consistent for any fixed w:
[]theoremrestatetheoremtwo
For every ε> 0 there exist N, M ∈ℕ such that |ℒ_q, M, w(U) - ED_q(p_data, U)|<ε a.s.
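The following PyTorch sketch shows how the stabilised loss ℒ_q, M, w defined above can be computed with a numerically stable logsumexp; it assumes that the energy network U maps a batch of inputs to a vector of scalar energies and that w>0, and it is an illustration rather than our exact implementation.

import math
import torch

def energy_discrepancy_loss(U, x_pos, x_neg, w=1.0):
    """x_pos: (N, d); x_neg: (N, M, d); w > 0 is the stabilisation constant."""
    N, M, d = x_neg.shape
    u_pos = U(x_pos)                                   # (N,)
    u_neg = U(x_neg.reshape(N * M, d)).reshape(N, M)   # (N, M)
    diff = u_pos.unsqueeze(1) - u_neg                  # U(x_+) - U(x_-)
    # log(w + sum_j exp(diff_j)) via logsumexp with an extra constant entry log(w)
    logw = torch.full((N, 1), math.log(w), dtype=diff.dtype, device=diff.device)
    loss = torch.logsumexp(torch.cat([logw, diff], dim=1), dim=1) - math.log(M)
    return loss.mean()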
§ EXPERIMENTS
[Figure: Experiment results on learning lattice Ising models. Left to right: ground truth, ED-Bern, ED-Pool, ED-Grid.]
Training Ising Models.
We evaluate the proposed methods on the lattice Ising model, which has the form of
p(x) ∝exp(x^⊤ J x), x∈{-1,1}^D,
where J=σ A_D with σ∈ℝ and A_D being the adjacency matrix of a D× D grid.
Following <cit.>, we generate training data through Gibbs sampling and use the generated data to fit a symmetric matrix J via energy discrepancy.
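For illustration, the following NumPy sketch evaluates the unnormalised log-probability x^⊤ J x of the lattice Ising model for a grid adjacency matrix; the function names and grid construction are assumptions of this sketch.

import numpy as np

def grid_adjacency(grid_side: int) -> np.ndarray:
    """Adjacency matrix of a grid_side x grid_side lattice (grid_side**2 spins)."""
    D = grid_side ** 2
    A = np.zeros((D, D))
    for i in range(grid_side):
        for j in range(grid_side):
            v = i * grid_side + j
            if i + 1 < grid_side:
                A[v, v + grid_side] = A[v + grid_side, v] = 1
            if j + 1 < grid_side:
                A[v, v + 1] = A[v + 1, v] = 1
    return A

def ising_log_prob_unnormalised(x: np.ndarray, sigma: float, A: np.ndarray) -> float:
    J = sigma * A
    return float(x @ J @ x)

A = grid_adjacency(10)
x = np.random.choice([-1, 1], size=100)
print(ising_log_prob_unnormalised(x, 0.2, A))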
In <ref>, we consider D=10× 10 grids with σ =0.2 and illustrate the learned matrix J using a heatmap. It can be seen that the variants of energy discrepancy can identify the pattern of the ground truth, confirming the effectiveness of our methods. We defer experimental details and quantitative results comparing with baselines to <ref>.
Discrete Density Estimation.
In this experiment, we follow the experimental setting of <cit.>, which aims to model discrete densities over 32-dimensional binary data that are discretisations of continuous densities on the plane (see <ref>). Specifically, we convert each planar data point ∈ℝ^2 to a binary data point ∈{0,1}^32 via Gray code <cit.>. Consequently, the models face the challenge of modeling data in a discrete space, which is particularly difficult due to the non-linear transformation from to .
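The following NumPy sketch illustrates the conversion of a planar point into a 32-bit Gray-coded vector as described above; the quantisation range is an assumption of this sketch and may differ from the one used in <cit.>.

import numpy as np

def float_to_gray_bits(v: float, lo: float, hi: float, bits: int = 16) -> np.ndarray:
    """Quantise v in [lo, hi] to `bits` levels and return its Gray-coded bit vector."""
    level = int(np.clip((v - lo) / (hi - lo), 0.0, 1.0) * (2 ** bits - 1))
    gray = level ^ (level >> 1)                    # binary-reflected Gray code
    return np.array([(gray >> (bits - 1 - k)) & 1 for k in range(bits)])

def point_to_binary(x: np.ndarray, lo: float = -4.0, hi: float = 4.0) -> np.ndarray:
    return np.concatenate([float_to_gray_bits(x[0], lo, hi),
                           float_to_gray_bits(x[1], lo, hi)])

print(point_to_binary(np.array([0.3, -1.2])))   # 32-dimensional {0,1} vector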
We compare our methods to three baselines: PCD <cit.>, ALOE+ <cit.>, and EB-GFN <cit.>. The experimental details are given in <ref>.
For qualitative evaluation, we visualise the energy landscapes learned by our methods in <ref>. It shows that energy discrepancy is able to faithfully model multi-modal distributions and accurately learn the sharp edges present in the data support. For further qualitative comparisons, we refer to the energy landscapes of baseline methods presented in Figure C.2 of <cit.>.
Moreover, we quantitatively evaluate different methods in <ref> by showing the negative log-likelihood (NLL) and the exponential Hamming MMD <cit.>. Perhaps surprisingly, we find that energy discrepancy outperforms the baselines on most settings, despite not requiring MCMC simulation like PCD or training an additional variational network like ALOE and EB-GFN. A possible explanation for this are biases introduced by short-run MCMC sampling in the case of PCD or non-converged variational proposals in ALOE. By definition, ED transforms the data distribution as well as the energy function which corrects for such biases.
Discrete Image Modelling.
Here, we evaluate our methods in discrete high-dimensional spaces. Following the settings in <cit.>, we conduct experiments on four different binary image datasets. Training details are given in <ref>. After training, we adopt Annealed Importance Sampling <cit.> to estimate the log-likelihoood.
The baselines include persistent contrastive divergence with vanilla Gibbs sampling, Gibbs-With-Gradient <cit.>, Generative-Flow-Network <cit.>, and Discrete-Unadjusted-Langevin-Algorithm <cit.>. The NLLs on the test set are reported in <ref>. We see that energy discrepancy yields comparable performances to the baselines, while ED-Pool is unable to capture the data distribution. We emphasise that energy discrepancy only requires M (here, M=32) evaluations of the energy function per data point in parallel. This is notably fewer than contrastive divergence, which requires simulating multiple MCMC steps without parallelisation.
We also visualise the generated samples in <ref>, which showcase the diversity and high quality of the images generated by ED-Bern and ED-Grid.
However, we observed that ED-Pool suffers from mode collapse.
§ CONCLUSION AND OUTLOOK
In this paper we demonstrate how energy discrepancy can be used for efficient and competitive training of energy-based models on discrete data without MCMC. The loss can be defined based on a large class of perturbative processes, of which we introduce three types: noise, deterministic transform, and neighbourhood-based transform. Our results show that the choice of perturbation matters and motivates further research on effective choices depending on the data structure of interest.
We observe empirically that similarly to other contrastive losses, energy discrepancy shows limitations when the ambient dimension of 𝒳 is significantly larger than the intrinsic dimension of the data. In these cases, training is aided significantly by a base distribution that models the lower-dimensional space populated by data. For this reason, the adoption of ED on new data sets or different data structures may require adjustments to the methodology such as learning appropriate base distributions and finding more informative perturbative transforms.
For future work, we are interested in how this work extends to highly structured data such as graphs or text. These settings may require a deeper understanding of how the perturbation influences the performance of ED and what is gained from gradient information in CD <cit.> or ratio matching <cit.>.
§ ACKNOWLEDGEMENTS
TS would like to thank G.A. Pavliotis for insightful discussions leading up to the presented work. TS was supported by the EPSRC-DTP scholarship partially funded by the Department of Mathematics, Imperial College London. ZO was supported by the Lee Family Scholarship. ABD was supported by Wave 1 of The UKRI Strategic Priorities Fund under the EPSRC Grant EP/T001569/1 and EPSRC Grant EP/W006022/1, particularly the “Ecosystems of Digital Twins” theme within those grants and The Alan Turing Institute. We thank the anonymous reviewer for their comments.
icml2022
Appendix for “Training Discrete EBMs with Energy Discrepancy”
§ ABSTRACT PROOFS AND DERIVATIONS
§.§ Proof of the Non-Parametric Estimation Theorem <ref>
In this subsection we give a formal proof for the uniqueness of minima of _q(p_, U) as a functional in the energy function U. We first reiterate the theorem as stated in the paper:
*
We test energy discrepancy on the first and second order optimality conditions, i.e. we test that the first functional derivative of ED vanishes in U^∗ and that the second functional derivative is positive definite. For uniqueness and well-definedness, we constrain the optimisation domain to the following set:
𝒢 := {U:𝒳↦ℝ such that exp(-U)∈ L^1(𝒳, d) , U∈ L^1(p_) , and min_∈𝒳 U(x) = 0}
and require that there exists a U^∗∈𝒢 such that exp(-U^∗) ∝ p_. We now start with the following lemmata and then complete the proof of <ref> in <ref>.
Let h∈𝒢 be arbitrary. The first variation of _q is given by
. d/dϵ_q (p_, U + ϵ h) |_ϵ = 0 = 𝔼_p_() [h()] - 𝔼_p_()𝔼_q(|)𝔼_p_U( | )[h()]
where p_U( | ) = q(|)exp(-U())/∑_'∈𝒳 q(|') exp(-U(')).
We define the short-hand notation U_ϵ := U + ϵ h. The energy discrepancy at U_ε reads
_q (p_, U_ϵ) = 𝔼_p_() [U_ϵ()] + 𝔼_p_()𝔼_q( | )[ log∑_∈𝒳 q( | ) exp(-U_ϵ()) ] .
For the first functional derivative, we only need to calculate
d/dϵlog∑_∈𝒳 q( | ) exp(-U_ϵ()) = ∑_∈𝒳- q( | ) h() exp(-U_ϵ())/∑_'∈𝒳 q( | ^') exp(-U_ϵ(^')) = -𝔼_p_U_ϵ( | )[h()].
Plugging this expression into _q (p_, U_ϵ) and setting ϵ = 0 yields the first variation of _q.
The second variation of _q is given by
. d^2/dϵ^2_q (p_, U + ϵ h) |_ϵ = 0 = 𝔼_p_()𝔼_q(|)Var_p_U(| )[h()].
For the second order term, we have based on equation <ref> and the quotient rule for derivatives:
d^2/dϵ^2log ∑_∈𝒳 q( | ) exp(-U_ϵ())
= ∑_∈𝒳 q( | ) exp(U_ϵ()) h^2() ∑_^'∈𝒳 q( | ^') exp(-U_ϵ(^')) /( ∑_^'∈𝒳 q( | ^') exp(-U_ϵ(^')) )^2
- ∑_∈𝒳 q( | ) exp(U_ϵ())h() ∑_^'∈𝒳 q( | ^') exp(-U_ϵ(^')) h(^') /( ∑_^'∈𝒳 q( | ^') exp(-U_ϵ(^')) )^2
= 𝔼_p_U_ϵ(| )[h^2()] - 𝔼_p_U_ϵ(| )[h()]^2 = Var_p_U_ϵ(| )[h()] .
We obtain the desired result by interchanging the outer expectations with the derivatives in ϵ.
Let c=min_∈𝒳 (-log p_()). For U^∗ = -log(p_) - c∈𝒢 it holds that
. d/dϵ_q (p_, U^∗ + ϵ h) |_ϵ = 0 = 0
. d^2/dϵ^2_q (p_, U^∗ + ϵ h) |_ϵ = 0 >0 for allh ,
Furthermore, U^∗ is the unique global minimiser of _q(p_, ·) in 𝒢.
By definition, the variance is non-negative, i.e. for every h∈𝒢:
. d^2/dϵ^2_q (p_, U + ϵ h) |_ϵ = 0 = Var_p_U(| )[h()]≥ 0 .
Consequently, the energy discrepancy is convex and an extremal point of _q(p_, ·) is a global minimiser. We are left to show that the minimiser is obtained at U^∗ and unique. First of all, we have for U^∗:
𝔼_p_U^∗( | )[h()]
= ∑_∈𝒳q(|) exp(-U^*())/∑_^'∈𝒳 q(|^') exp(-U^*(^')) h()
= ∑_∈𝒳q(|) p_()/∑_^'∈𝒳q(|^') p_(^') h().
By applying the outer expectations we obtain
𝔼_p_()𝔼_q(|)𝔼_p_U^∗(| )[h()]
= ∑_∈𝒳 p_() ∑_∈𝒴(q(|) ∑_z∈𝒳(q(|) p_()/∑_'∈𝒳 q(|^') p_(^') h() ))
= ∑_∈𝒳∑_∈𝒴 q(|) p_() h()
= 𝔼_p_() [h()],
where we used that the marginal distributions ∑_∈𝒳 p_() q(|) cancel out and the conditional probability density integrates to one. This implies
. d/dϵ_q (p_, U^* + ϵ h) |_ϵ = 0 =𝔼_p_() [h()]-𝔼_p_() [h()] = 0.
for all h∈𝒢. We now show that
. d^2/dϵ^2_q (p_, U^∗ + ϵ h) |_ϵ = 0 = 𝔼_p_()𝔼_q(|)Var_p_(| )[h()] >0 .
Assume that the second variation was zero. Since the perturbed data distribution ∑_∈𝒳 p_()q(|) is positive, the second variation at U^∗ is zero if and only if the conditional variance Var_p_(|)[h()] = 0. Since U^∗+ε h∈𝒢, the function h can not be constant. By definition of the conditional variance, h() must then be a deterministic function of ∼∑_∈𝒳 q(|)p_(). Since h was arbitrary, there exists a measurable map g such that = g() and Var_p_(|)[] = 0 which is a contradiction to our assumptions. Consequently, U^∗ is the unique global minimiser of _q which completes the statement in <ref>.
§ CONNECTIONS TO OTHER METHODS
In this section, we follow <cit.>.
§.§ Connections of Energy Discrepancy with Contrastive Divergence
The contrastive divergence update can be derived from an energy discrepancy when, for E_θ fixed, q satisfies the detailed balance relation
q(y|x)exp(-E_θ(x)) = q(x|y)exp(-E_θ(y)) .
To see this, we calculate the contrastive potential induced by q: We have
-log∑_x'∈𝒳 q(y|x')exp(-E_θ(x')) = -log∑_x'∈𝒳 q(x'|y)exp(-E_θ(y)) = E_θ(y) .
Consequently, the energy discrepancy induced by q is given by
ED_q(p_data, E_θ) = 𝔼_p_data(x)[E_θ(x)] - 𝔼_p_data(x)𝔼_q(y|x)[E_θ(y)] .
Updating θ based on a sample approximation of this loss leads to the contrastive divergence update
Δθ∝1/N∑_i=1^N ∇_θ E_θ(x^i) - 1/N∑_i=1^N ∇_θ E_θ(y^i) , y^i ∼ q(·|x^i)
It is important to notice that the distribution q depends on E_θ and needs to be adjusted in each step of the algorithm. For fixed q, ED_q(p_data, E_θ) satisfies <ref>. This means that each step of contrastive divergence optimises a loss with minimiser E_θ^∗ = -log p_data + c. However, the loss function changes in each step of contrastive divergence. The connection also highlights the importance of the Metropolis-Hastings adjustment to ensure that the implied q distribution satisfies the detailed balance relation.
§.§ Derivation of Energy Discrepancy from KL Contractions
A Kullback-Leibler contraction is the divergence function KL(p_data ‖ p_ebm) - KL(Qp_data ‖ Qp_ebm) <cit.> for the convolution operator Qp(y) = ∑_x'∈𝒳 q(y|x')p(x'). The linearity of the convolution operator retains the normalisation of the measure, i.e. for the energy-based distribution p_ebm we have
Qp_ebm(y) = 1/Z_U∑_x'∈𝒳 q(y|x') exp(-U(x')) with Z_U = ∑_x'∈𝒳 exp(-U(x')) .
The KL divergences then become, with U_q(y) := -log (Q exp(-U))(y),
KL(p_data ‖ p_ebm) = 𝔼_p_data(x)[log p_data(x)] + 𝔼_p_data(x)[U(x)] + log Z_U
KL(Qp_data ‖ Qp_ebm) = 𝔼_Qp_data(y)[log Qp_data(y)] + 𝔼_Qp_data(y)[U_q(y)] + log Z_U
Since the normalisation cancels when subtracting the two terms, we find
KL(p_data ‖ p_ebm) - KL(Qp_data ‖ Qp_ebm) = ED_q(p_data, U) + c
where c is a constant that contains the U-independent entropies of p_data and Qp_data.
§ SAMPLE APPROXIMATIONS OF ENERGY DISCREPANCIES
In this section, we discuss practical implementations of the mean-pooling transform as an information destroying deterministic process and the grid-neighbourhood as a neighbourhood-based transformation.
§.§ General Strategy
As a general strategy, the contrastive potential has to be written as an expectation over an appropriate, to-be-determined distribution p_neg, q, y that depends on the chosen perturbation process and on the point y at which the contrastive potential is evaluated, i.e.
U_q(y) = -log𝔼_p_neg, q, y(x')[exp(-U(x'))]
which allows the evaluation of the contrastive potential via sampling from p_neg, q, y. The energy discrepancy can then be written as
ED_q(p_data, U) = 𝔼_p_data(x)𝔼_q(y|x)[log𝔼_p_neg, q, y(x')[exp(U(x)-U(x'))]]
by using properties of the logarithm and exponential and the fact that U(x) does not depend on the expectations taken in y and x'. The loss can then be approximated via ancestral sampling. We first sample a batch x_+^i∼ p_data, subsequently sample its perturbed counterpart y^i∼ q(·|x_+^i), and finally sample M negative samples x_-^i, j∼ p_neg, q, y^i. Sometimes, the perturbed sample y^i is never explicitly computed in the process. As described in <ref>, the approximation is always stabilised through the tunable hyper-parameter w, which finally yields the loss function
ℒ_q, M, w(U) := 1/N∑_i=1^N log(w+ ∑_j= 1^M exp(U(x^i_+) - U(x_-^i,j))) - log(M)
The justification for the stabilisation is two-fold. Firstly, the logarithm makes the Monte-Carlo approximation of the contrastive potential biased due to Jensens inequality. The bias is negative, given to leading order by the variance of the approximation, and depends on the energy function U. Thus, the optimiser may start to optimise for a high bias and high variance estimator of the contrastive potential rather than learning the data distribution. While this issue can be alleviated by significantly large choices for M, it is much more practical to introduce a deterministic lower bound to the loss-functional through the stabilisation w, which prevents the bias and logarithm from diverging. Secondly, the effect of the stabilisation goes to zero as M increases. Thus, the asymptotic limit for M and N large is retained through the stabilisation. For more details and analogous arguments in the continuous case, see <cit.>.
§.§ Mean Pooling Transform
We describe the mean-pooling transform on the example of image data, which takes values in the space {0, 1}^h× w. We fix a window size s and reshape each data point x into blocks of size s× s, i.e.
{0, 1}^h× w→{0, 1}^s× s ×h/s×w/s
The mean-pooling transform g_pool computes the average over each block x_∙, ∙, i, j for i=1, 2, …, h/s and j=1, 2, …, w/s. The corresponding preimage of the mean-pooling transform is given by the set of points which are identical to x up to block-wise permutation, i.e.
g^-1(g_pool(x))={x'∈𝒳: there exist π_i, j∈ S_s× s s.t. x'_l, k, i, j = x_π_i, j(l, k), i, j for all l, k, i, j}
where S_s× s denotes the permutation group for matrices of size s× s. In practice, the mean-pooled data point never has to be computed; only the block-wise permutations of the data point are required. Consequently, we obtain negative samples through x_-^i, j∼𝒰(g^-1(g_pool(x_+^i))), i.e. via block-wise permutation of the entries of each data point x_+^i.
Strictly speaking, this transformation violates the assumptions of <ref> for data points that only consist of blocks that average to 1 or 0. Since this is only the case for a small set of the state space, we assume this violation to be negligible.
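The following NumPy sketch illustrates this negative sampling step for ED-Pool; the assertion checks that block-wise permutation indeed leaves the mean-pooled image unchanged. Shapes and function names are illustrative assumptions.

import numpy as np

def blockwise_permutation(x: np.ndarray, s: int, rng: np.random.Generator) -> np.ndarray:
    """Independently permute the pixels inside every s x s block of a binary image."""
    h, w = x.shape
    out = x.copy()
    for i in range(0, h, s):
        for j in range(0, w, s):
            block = out[i:i + s, j:j + s].reshape(-1)
            out[i:i + s, j:j + s] = rng.permutation(block).reshape(s, s)
    return out

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(28, 28))
x_neg = blockwise_permutation(x, s=2, rng=rng)
# The mean-pooled images of x and x_neg coincide:
assert np.allclose(x.reshape(14, 2, 14, 2).mean(axis=(1, 3)),
                   x_neg.reshape(14, 2, 14, 2).mean(axis=(1, 3)))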
§.§ Grid Neighborhood
The grid neighbourhood for x∈{0, 1}^d is constructed as
𝒩_grid(x) = {y∈{0, 1}^d : y - x = ±e_k, k = 1, 2, …, d}
where e_k is a vector of zeros with a one in the k-th entry. This neighbourhood structure is symmetric, i.e. 𝒩_grid^-1(x) = 𝒩_grid(x). Consequently, the negative samples are created by sampling from
x_-^i, j∼𝒰(𝒩_grid(y^i)) with y^i∼𝒰(𝒩_grid(x_+^i))
Notice that each negative sample is the second neighbour of the positive sample, and with a small chance the positive sample itself.
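A minimal PyTorch sketch of this two-step sampling scheme is given below; it assumes binary data stored as a float tensor of shape (N, d) and serves as an illustration only.

import torch

def ed_grid_negatives(x_pos: torch.Tensor, M: int) -> torch.Tensor:
    """x_pos: (N, d) binary tensor; returns (N, M, d) negatives."""
    N, d = x_pos.shape
    y = x_pos.clone()
    k = torch.randint(d, (N,))
    y[torch.arange(N), k] = 1 - y[torch.arange(N), k]          # forward step: flip one bit
    x_neg = y.unsqueeze(1).repeat(1, M, 1)
    k2 = torch.randint(d, (N, M))
    rows = torch.arange(N).unsqueeze(1).expand(N, M)
    cols = torch.arange(M).unsqueeze(0).expand(N, M)
    x_neg[rows, cols, k2] = 1 - x_neg[rows, cols, k2]          # recovery step: flip one bit of y
    return x_neg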
§.§ Directed Neighbourhood Structures
More generally, the neighbourhood structure may form a non-symmetric directed graph for which the neighbourhood maps 𝒩^-1 and 𝒩 don't coincide. In this case, an additional weighting-term is introduced. We denote the number of neighbours of as K_ = |𝒩()| and the number of elements of which is a neighbour as K'_ = |𝒩^-1()|. The forward transition density is given by the uniform distribution, i.e.
q(|) = {
1/K_ if ∈𝒩()
0 else .
We then have
U_𝒩() = log∑_'∈𝒳 q(|') exp(-U('))
= log∑_'∈𝒩^-1()1/K_'exp(-U('))
= log1/K'_∑_'∈𝒩^-1()K'_/K_'exp(-U('))
= log𝔼_' ∼𝒰({𝒩^-1()})[ω_'exp(-U('))]
where we introduced the weighting term ω_' = K'_ / K_'.
§.§ Consistency of our Approximation
The following proof is similar to <cit.>. We first restate the consistency result:
*
For N data points _+^i∼ p_ and perturbed points ^i∼ q(·|_+^i) denote the M corresponding negative samples by _-^i, j∼ p_neg, q, ^i. Notice that the distribution of the negative samples depends on ^i. Using the triangle inequality, we can upper bound the difference |_q(p_, U)-ℒ_q, M, w(U)| by upper bounding the following two terms, individually:
|_q(p_, U) - 1/N∑_i=1^N log𝔼[exp(U(_+^i)-U(_-^i, j) | _+^i, ^i]|
+ |1/N∑_i=1^N log𝔼[exp(U(_+^i)-U(_-^i, j) | _+^i, ^i] - ℒ_q, M, w(U)|
The conditioning expresses that the expectation is only taken in _-^i, j∼ p_neg, q, ^i while keeping the values of the random variables _+^i and ^i fixed. The first term can be bounded by a sequence ε_N 0 due to the normal strong law of large numbers. For the second term one needs to consider that the distribution p_neg, q, ^i depends on the random variable ^i. For this reason, we notice that _-^i, j are conditionally indepedent given _+^i, ^i and employ a conditional version of the strong law of large numbers <cit.> to obtain
1/M∑_j=1^M exp(U(_+^i) - U(_-^i, j))𝔼[exp(U(_+^i)-U(_-^i, j) | _+^i, ^i]
Next, we have that the deterministic sequence w/M→ 0. Thus, adding the stabilisation w/M does not change the limit in M. Furthermore, since the logarithm is continuous, the limit also holds after applying the logarithm. Finally, the estimate translates to the sum by another application of the triangle inequality:
For each i = 1, 2, …, N there exists a sequence ε_i, M 0 such that
|1/N∑_i=1^N log𝔼[exp(U(_+^i)-U(_-^i, j) | _+^i, ^i] - ℒ_q, M, w(U)|
≤1/N∑_i=1^N |log𝔼[exp(U(_+^i)-U(_-^i, j) | _+^i, ^i] - log1/M∑_j=1^M exp(U(_+^i) - U(_-^i, j)) |
< 1/N∑_i=1^N ε_i, M≤max(ε_1, M, …, ε_N, M) .
Hence, for each ε>0 there exists an N∈ℕ and an M(N)∈ℕ such that |_q(p_, U)-ℒ_q, M(N), w(U)| < ε almost surely.
§ RELATED WORK
Contrastive loss functions
Our work is based on an unpublished work on energy discrepancies in the continuous case <cit.>. The motivation for such constructed loss functions lies in the data processing inequality. A similar loss has been suggested before as KL contraction divergence <cit.>, however, only for its theoretical properties. Interestingly, the structure of the stabilised energy discrepancy loss shares similarities with other contrastive losses such as <cit.>. This poses the question of possible classification-based interpretations of energy discrepancy and of the w-stabilisation.
Contrastive divergence and Sampling.
Discrete training methods for energy-based models largely rely on contrastive divergence methods, thus motivating a lot of work on discrete sampling and proposal methods. Improvements of the standard Gibbs method were proposed by <cit.> through locally informed proposals. The method was extended to include gradient information <cit.> to drastically reduce the computational complexity of flipping bits of binary valued data and to flipping bits in several places <cit.>. Finally, discrete versions of Langevin sampling have been introduced based on this idea <cit.>. Consequently, most current implementations of contrastive divergence use multiple steps of a gradient based discrete sampler. Alternatively, energy-based models can be trained using generative flow networks which learns a Markov chain to construct data by optimising a given reward function. The Markov chain can be used to obtain samples for contrastive divergence without MCMC from the EBM <cit.>.
Other training methods for discrete EBMs.
There also exist some MCMC free approaches for training discrete EBMs.
Our work is most similar to concrete score matching <cit.>, which uses neighbourhood structures to define a replacement of the continuous score function. Another sampling-free approach for training discrete EBMs is ratio matching <cit.>. However, it has been found that for ratio matching, too, gradient information drastically improves the performance <cit.>. Moreover, <cit.> proposed to apply variational approaches to train discrete EBMs instead of MCMC. <cit.> replaced the widely-used Gibbs algorithms with quasi-rejection sampling to trade off the efficiency and accuracy of the sampling procedure. The perturb-and-map approach <cit.> has also recently been utilised to sample and learn in discrete EBMs <cit.>.
§ MORE ABOUT EXPERIMENTS
[Table] Mean negative log-RMSE (higher is better) between the learned connectivity matrix J_ϕ and the true matrix J for different values of D and σ. The results of the baselines are directly taken from <cit.>.

                    D=10^2                              D=9^2
Method \ σ       0.1    0.2    0.3    0.4    0.5     -0.1   -0.2
Gibbs            4.8    4.7    3.4    2.6    2.3      4.8    4.7
GWG              4.8    4.7    3.4    2.6    2.3      4.8    4.7
EB-GFN           6.1    5.1    3.3    2.6    2.3      5.7    5.1
ED-Bern (ours)   5.1    4.0    2.9    2.5    2.3      5.1    4.3
ED-Pool (ours)   4.9    3.6    3.2    2.6    2.3      4.9    3.6
ED-Grid (ours)   4.6    4.0    3.1    2.6    2.3      4.5    4.0
§.§ Training Ising Models
Experimental Details. As in <cit.>, we train a learnable connectivity matrix J_ϕ to estimate the true matrix J in the Ising model. To generate the training data, we simulate Gibbs sampling with 1,000,000 steps for each instance to construct a dataset of 2,000 samples. For energy discrepancy, we choose w=1,M=32 for all variants, ϵ=0.1 in ED-Bern, and the window side is √(D)×√(D) in ED-Pool. The parameter J_ϕ is learned by the Adam <cit.> optimizer with a learning rate of 0.0001 and a batch size of 256. Following <cit.>, all models are trained with an l_1 regularization with a coefficient in {10, 5, 1, 0.1, 0.01} to encourage sparsity. The other setting is basically the same as Section F.2 in <cit.>. We report the best result for each setting using the same hyperparameter searching protocol for all methods.
Quantitative Results.
We consider D=10× 10 grids with σ = 0.1, 0.2, …, 0.5 and D=9× 9 grids with σ=-0.1, -0.2. The methods are evaluated by computing the negative log-RMSE between the estimated J_ϕ and the true matrix J. As shown in <ref>, our methods demonstrate comparable results to the baselines and, in certain settings, even outperform Gibbs and GWG, indicating that energy discrepancy is able to discover the underlying structure within the data.
§.§ Discrete Density Estimation
Experimental Details.
This experiment keeps a consistent setting with <cit.>. We first generate 2D floating-points from a continuous distribution p̂ which lacks a closed form but can be easily sampled. Then, each sample := [_1, _2] ∈ℝ^2 is converted to a discrete data point ∈{0,1}^32 using Gray code. To be specific, given ∼p̂, we quantise both _1 and _2 into 16-bits binary representations via Gray code <cit.>, and concatenate them together to obtain a 32-bits vector . As a result, the probabilistic mass function in the discrete space is p() ∝p̂( [ GrayToFloat(_1:16), GrayToFloat(_17:32) ] ). It is noteworthy that learning on this discrete space presents challenges due to the highly non-linear nature of the Gray code transformation.
The energy function is parameterised by a 4 layer MLP with 256 hidden dimensions and Swish <cit.> activation. We train the EBM for 10^5 steps and adopt an Adam optimiser with a learning rate of 0.002 and a batch size of 128 to update the parameter. For the energy discrepancy, we choose w=1, M=32 for all variants, ϵ=0.1 in ED-Bern, and the window size is 32× 1 in ED-Pool. After training, we quantitatively evaluate all methods using the negative log-likelihood (NLL) and the maximum mean discrepancy (MMD). To be specific, the NLL metric is computed based on 4,000 samples drawn from the data distribution, and the normalisation constant is estimated using importance sampling with 1,000,000 samples drawn from a variational Bernoulli distribution with p=0.5. For the MMD metric, we follow the setting in <cit.>, which adopts the exponential Hamming kernel with 0.1 bandwidth. Moreover, the reported performances are averaged over 10 repeated estimations, each with 4,000 samples, which are drawn from the learned energy function via Gibbs sampling.
Qualitative Results.
We qualitatively visualise the learned energy functions of our proposed approaches in <ref>. To provide further insights into the oracle energy landscape, we also plot the ground truth samples in Figure <ref>. The results clearly demonstrate that energy discrepancy effectively fits the data distribution, validating the efficacy of our methods.
The Effect of ϵ in Bernoulli Perturbation.
Perhaps surprisingly, we find that the proposed energy discrepancy loss with Bernoulli perturbation is very robust to the noise scalar ϵ.
[Figure: Density estimation results of ED-Bern on the pinwheel with different ϵ and M=32, w=1.]
In <ref>, we visualise the learned energy landscapes with different ϵ.
The results demonstrate that ED-Bern is able to learn faithful energy functions, even with extreme values of ϵ, such as ϵ∈{0.999, 0.001}. This highlights the robustness and effectiveness of our approach. In <ref>, we further show that, with ϵ∈{0.9999, 0.0001}, ED-Bern can still learn a faithful energy landscape using a large value of M. However, when ϵ∈{1, 0}, ED-Bern fails to work. It is noteworthy that the choice of ϵ is highly dependent on the specific structure of the dataset. While ED-Bern exhibits robustness to different values of ϵ in the synthetic data, we have observed that a large value of ϵ (ϵ≥ 0.1) is not effective for discrete image modeling.
[Figure: Density estimation results of ED-Pool on the pinwheel with different window sizes and M, with w=1.]
The Effect of Window Size in Deterministic Transformation.
To investigate the effectiveness of the window size in ED-Pool, we conduct experiments in <ref> with different window sizes. The results indicate that employing a small window size (e.g., 2× 1) does not provide sufficient information for energy discrepancy to effectively learn the underlying data structure. Furthermore, our empirical findings suggest that solely increasing the value of M is not a viable solution to address this issue. Again, the choice of the window size should depend on the underlying data structure. In the discrete image modelling, we find that even with a small window size (i.e., 4 × 4), energy discrepancy yields an energy with low values on the data-support but rapidly diverging values outside of it. Therefore, it fails to learn a faithful energy landscape.
Qualitatively Understanding the Effect of w and M.
The hyperparameters w and M play a crucial role in the estimation of energy discrepancy. Increasing M can reduce the variance of the Monte Carlo estimation of the contrastive potential in (<ref>), while a proper value of w can improve the stabilisation of training. Here, we evaluate the effect of w and M on the variants of energy discrepancy in <ref>. Based on empirical observations, we observe that when w=0 and M is small (e.g., M ≤ 32 for ED-Bern and M ≤ 64 for ED-Pool and ED-Grid), energy discrepancy demonstrates rapid divergence and fails to converge. Additionally, we find that increasing M can address this issue to some extent and introducing a non-zero value for w can significantly stabilize the convergence, even with M=1. Moreover, larger w tends to produce a flatter estimated energy landscapes, which also aligns with the findings in continuous scenarios of energy discrepancy <cit.>.
§.§ Discrete Image Modelling
Experimental Details.
In this experiment, we parametrise the energy function using ResNet <cit.> following the settings in <cit.>, where the network has 8 residual blocks with 64 feature maps. Each residual block has 2 convolutional layers and uses Swish activation function <cit.>. We choose M=32, w=1 for all variants of energy discrepancy, ϵ=0.001 for ED-Bern, and the window size is 2× 2 for ED-Pool. Note that here we choose a relatively small ϵ and window size, since we empirically find that the loss of energy discrepancy converges to a constant rapidly with larger ϵ and window size, which can not provide meaningful gradient information to update the parameters. All models are trained with Adam optimiser with a learning rate of 0.0001 and a batch size of 100 for 50,000 iterations. We perform model evaluation every 5,000 iterations by conducting Annealed Importance Sampling (AIS) with a discrete Langevin sampler for 10,000 steps. The reported results are obtained from the model that achieves the best performance on the validation set. After training, we finally report the negative log-likelihood by running 300,000 iterations of AIS.
Qualitative Results.
We show the generated images in <ref>, which are the samples in the final step of AIS. We see that our methods can generate realistic images on the Omniglot dataset but mediocre images on Caltech Silhouette. We hypothesise that improving the design of the affinity structure in the neighborhood-based transformation can lead to better results. On both the static and dynamic MNIST datasets, ED-Bern and ED-Grid generate diverse and high-quality images. However, ED-Pool experiences mode collapse, resulting in limited variation in the generated samples.
|
http://arxiv.org/abs/2307.04864v1 | 20230710192410 | Hyperelliptic and trigonal modular curves in characteristic $p$ | [
"Maarten Derickx",
"Filip Najman"
] | math.NT | [
"math.NT"
] |
Let X_Δ(N) be an intermediate modular curve of level N, meaning that there exist (possibly trivial) morphisms X_1(N)→ X_Δ(N) → X_0(N).
For all such intermediate modular curves, we give an explicit description of all primes p such that X_Δ(N)__p is either hyperelliptic or trigonal. Furthermore we also determine all primes p such that X_Δ(N)__p is trigonal.
This is done by first using the Castelnuovo-Severi inequality to establish a bound N_0 such that if X_0(N)__p is hyperelliptic or trigonal, then N ≤ N_0. To deal with the remaining small values of N, we develop a method based on the careful study of the canonical ideal to determine, for a fixed curve X_Δ(N), all the primes p such that the X_Δ(N)__p is trigonal or hyperelliptic.
Furthermore, using similar methods, we show that X_Δ(N)__p is not a smooth plane quintic, for any N and any p.
§ INTRODUCTION
Let k be a field and C a curve over k. Throughout the paper we will assume all curves are geometrically integral and proper over k. The gonality gon_k C of C over k is the least degree of a non-constant morphism f:C→ℙ^1_k. Equivalently, it is the least degree of a non-constant function f∈ k(C).
We will study the gonality of modular curves, and in particular of ”intermediate" modular curves X_Δ(N). Let N be a positive integer and let Δ be a subgroup of (/ N)^× /-1; we will say that Δ is of level N. Let X_Δ(N) be the modular curve, defined over , associated to the modular group Γ_Δ=Γ_Δ(N) defined by
Γ_Δ={[ a b; c d ]∈_2() | c ≡ 0 mod N, (a mod N) ∈Δ}.
If Δ is the trivial subgroup of (/ N)^× /-1, then X_Δ(N)=X_1(N) and for Δ=(/ N)^× /-1 we have X_Δ(N)=X_0(N). The morphisms X_1(N)→ X_0(N) factor through X_Δ(N): X_1(N)→ X_Δ(N) → X_0(N).
Gonalities of modular curves over fields of characteristic 0, most commonly ℚ or ℂ, have received considerable attention. The modular curves that have been most studied are X_0(N) and X_1(N). For X_0(N) the results are as follows. Ogg <cit.> determined the hyperelliptic modular curves X_0(N). Hasegawa and Shimura <cit.> determined the X_0(N) that are trigonal over ℚ and over ℂ, and Jeon and Park <cit.> determined the X_0(N) that are tetragonal over ℂ. More recently, Najman and Orlić <cit.> determined the X_0(N) that are tetragonal and pentagonal over ℚ and determined the gonality of X_0(N) for all N<135.
All the curves X_1(N) with _ X_1(N)=d were determined for d=2 by Ishii and Momose <cit.> (they even determined all hyperelliptic X_Δ(N)), for d=3 by Jeon, Kim and Schweizer <cit.> and for d=4 by Jeon, Kim and Park <cit.> and for 5 ≤ d ≤ 8 by Derickx and van Hoeij <cit.>. Derickx and van Hoeij <cit.> also determined _ X_1(N) for all N≤ 40 and gave upper bounds for N≤ 250.
More generally, Abramovich <cit.> gave a lower bound for the gonality of any modular curve over (which is usually not sharp). From this result, it easily follows that there are only finitely many modular curves X with _ X ≤ d for some fixed positive integer d.
In this paper we consider the gonality of the modular curves X_Δ(N) over _p and _p.
Poonen <cit.> showed that if one fixes the prime p, then the set of Γ such that __pX_Γ≤ d is finite, and gave lower bounds on __p^2X_Γ and __pX_Γ, depending on p and the index of Γ in _2() <cit.>.
In <Ref> and <Ref> we will explicitly find all the pairs (Γ_Δ,p), with p not dividing the level of Γ_Δ such that __pX_Γ=2 and __pX_Γ=3.
It is easy to see that, for any curve X, __pX≤_X. It is natural to ask, in the case of (some sets of) modular curves, when does equality hold? A question in this direction, which was also one of the motivations for this paper, was asked by David Zureick-Brown on MathOverflow[<https://mathoverflow.net/questions/132618/hyperelliptic-modular-curves-in-characteristic-p>].
Are there any N such that X_0(N)_ℚ is not hyperelliptic but for some p not dividing N, X_0(N)_𝔽_p is hyperelliptic?
We show in <Ref> below that the answer to this question is negative. We consider the following more general questions.
A) Given some family of modular curves S and a positive integer d, can we determine all X∈ S and primes p of good reduction for X such that
__pX = d?
B)
Given S and d, can we also determine the X and p as above such that
__pX = d?
For d=2 both versions of the question are equivalent if X(𝔽_p) ≠∅ for all X ∈ S, since a curve with an 𝔽_p point is hyperelliptic over 𝔽_p if and only if it is hyperelliptic over 𝔽̄_p.
Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then
2=__pX_Δ(N) < _X_Δ(N)
if and only if N=37, Δ=⟨ 4 ⟩≤ (/ N)^× and p=2.
Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then
3=__pX_Δ(N) < _X_Δ(N)
if and only if X_Δ(N)=X_0(73) and p=2.
To answer <Ref> A) for d=3, in <Ref> we determine the fields of definition of all trigonal maps X_Δ(N) →^1 in characteristic p, for all p not dividing N.
Anni, Assaf and Lorenzo García recently proved <cit.>, among other results, that there are no modular curves X_Δ(N) that have a smooth plane model of degree 5. We prove the same result for all X_Δ(N) over all _p.
Let N be a positive integer, p a prime not dividing N, and X_Δ(N) an intermediate modular curve of level N. Then X_Δ(N)__p is not a smooth plane quintic.
We prove our results in 2 steps. First, we show that the set of N we need to consider is such that the class number h(-4N) is not too large (see <Ref> and <Ref>). The (now proven) Gauss conjecture then reduces the problem to dealing with only finitely many N. This strategy could in principle also be used to classify all hyperelliptic and trigonal modular curves over ℚ and ℂ.
Second, to deal with the remaining N, we develop explicit computational criteria (see <Ref>), based on Petri's theorem, to check whether, for a given N, there exists an X_Δ(N) of level N and a prime p not dividing N such that X_Δ(N)__p is hyperelliptic/trigonal/a smooth plane quintic.
All the code used to obtain our results can be found at <cit.>.
§.§ Acknowledgements
We thank John Voight and David Zureick-Brown for their helpful discussion and pointers to relevant literature.
§ BACKGROUND AND NOTATION
We now set up notation that will be used throughout the paper. By X_0(N) we denote the classical modular curve parametrizing isomorphism classes of pairs (E, C) of a generalized elliptic curve E together with a cyclic subgroup C of order N. We will call a divisor d>1 of N an Atkin-Lehner divisor if (d,N/d)=1. For any Atkin-Lehner divisor d of N the Atkin-Lehner involution w_d acts on (E,C) by sending it to (E/C_d,(E[d]+C)/C_d), where C_d is the subgroup of C of order d; for d=N this is (E/C,E[N]/C). The Atkin-Lehner involutions form a subgroup of Aut(X_0(N)) isomorphic to (/2)^ω(N), where ω(N) is the number of prime divisors of N. The curve X_0(N) and all its Atkin-Lehner involutions are defined over ℚ. The quotient X_0(N)/w_N is denoted by X_0^+(N) and the quotient of X_0(N) by the whole group of Atkin-Lehner involutions is denoted by X_0^*(N).
The number of ramification points ν(d;N) of the map X_0(N)→ X_0(N)/w_d will be given in terms of class numbers of imaginary quadratic fields (see <Ref> and <Ref>). We give the relevant data, which is taken from <cit.>, about imaginary quadratic fields of class number up to 100 in <Ref>.
By X_1(N) we denote the modular curve whose k-rational points parameterize pairs (E,P), where E is a generalized elliptic curve and P∈ E(k) is a point of order N, up to k-isomorphism. For a d∈ (/N)^× /-1, the diamond operator ⟨ d⟩ acts on X_1(N) by sending (E,P) to (E,dP). For a subgroup Δ≤ (/N)^× /-1, X_Δ(N) is the quotient of X_1(N) by the group of diamond operators ⟨ d ⟩ for d∈Δ. We note that for Δ=(/N)^×/-1 we have X_Δ(N) =X_0(N), and for the trivial subgroup Δ we have X_Δ(N)=X_1(N). For any Δ≤ (/N)^× /-1, the curve X_Δ(N) lies "in between" X_0(N) and X_1(N), in the sense that there are (Galois) morphisms
X_1(N)→ X_Δ(N)→ X_0(N);
hence the curves X_Δ(N) are called intermediate modular curves.
Let X/k be a curve over a number field k of genus g≥ 2. The canonical ring (or, as it is often called, the homogeneous coordinate ring) of X is
R(X):=⊕_d=0^∞H^0(X, Ω ^⊗ d_X/k).
Let V:=H^0(X,Ω_X/k) and Sym(V):=⊕_d=0^∞ Sym^d(V). The identity map Sym^1(V)→ R(X)_1 (which just sends V to V) induces a map of graded rings f_can: Sym(V)→ R(X). Hence we obtain the canonical map
X≃ Proj(R(X)) → Proj(Sym(V)) ≃^g-1_k.
The ideal I_can:= ker f_can⊆ Sym(V) is called the canonical ideal.
Recall Petri's theorem (<cit.>, see also <cit.> for a historical overview): for a nonsingular projective curve X of genus ≥ 2 that is neither hyperelliptic, trigonal (possessing a map X→^1 of degree 3), nor a smooth plane curve of degree 5, the canonical ring R(X) is generated in degree 1 and I_can is generated in degree 2. More precisely, we have the following proposition.
Let X/k be a curve of genus g ≥ 3. By V· (I_can)_2 we denote the image of V⊗ (I_can)_2 in (I_can)_3.
a) X is hyperelliptic over k if and only if (I_can)_2 ⊆ Sym^2 (V) is of dimension \binom{g-1}{2}.
b) Suppose X is not hyperelliptic and not a smooth plane quintic. Then X is trigonal over k if and only if the dimension of
(I_can)_3/(V· (I_can)_2)
is g-3.
c) Suppose X is a smooth plane quintic over k. Then g=6 and the dimension of
(I_can)_3/(V· (I_can)_2)
is g-3=3.
This follows directly from <cit.>.
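As a reading aid for part (a), the relevant dimension count can be spelled out; the following display is our own summary of standard facts (Riemann-Roch and the ideal of a rational normal curve), not a statement from the cited sources:
\[
\dim (I_{can})_2 =
\begin{cases}
\binom{g+1}{2} - (3g-3) = \binom{g-2}{2}, & X \text{ not hyperelliptic (then } f^2_{can} \text{ is surjective)},\\[4pt]
\binom{g-1}{2}, & X \text{ hyperelliptic (the quadrics through a rational normal curve of degree } g-1\text{)},
\end{cases}
\]
and \binom{g-1}{2} > \binom{g-2}{2} for g ≥ 3, so the single number dim (I_can)_2 distinguishes the two cases.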
§ BOUNDS FOR HYPERELLIPTIC AND TRIGONAL CURVES
We say that a curve is subhyperelliptic if it is of gonality ≤ 2.
Let k be a perfect field, and let X, Y, Z be curves over k. Let non-constant morphisms π_Y:X→ Y and π_Z:X→ Z over k be given, and let their degrees be m and n, respectively. Assume that there is no morphism X→ X' of degree >1 through which both π_Y and π_Z factor. Then the following inequality holds:
g(X)≤ m · g(Y)+n· g(Z) +(m-1)(n-1).
Let, as before, N be an integer, d an Atkin-Lehner divisor of N, and let f_d:X_0(N)→ X_0(N)/w_d be the quotient map by the Atkin-Lehner involution w_d; denote by ν(d;N) the number of complex ramification points of f_d.
Let N > 4 be an integer. The number of ramification points ν(N;N) of f_N satisfies
ν(N;N)= h(-4N) +h(-N) if N ≡ 3 (mod 4),
h(-4N) otherwise.
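For concreteness, ν(N;N) can be evaluated directly from this lemma by counting reduced primitive binary quadratic forms; the short Python sketch below is our own illustration (it is not the code of <cit.>, and the function names are ours).

from math import gcd

def class_number(D):
    """h(D) for a negative discriminant D (D < 0, D = 0 or 1 mod 4), computed by
    counting reduced primitive forms (a, b, c) with b^2 - 4ac = D."""
    assert D < 0 and D % 4 in (0, 1)
    h, a = 0, 1
    while 3 * a * a <= -D:
        for b in range(-a, a + 1):
            if (b - D) % 2 or (b * b - D) % (4 * a):
                continue
            c = (b * b - D) // (4 * a)
            if c < a or gcd(gcd(a, b), c) != 1:
                continue
            if b < 0 and (-b == a or a == c):
                continue  # these forms are equivalent to ones with b >= 0
            h += 1
        a += 1
    return h

def nu(N):
    """Number of ramification points of X_0(N) -> X_0(N)/w_N for N > 4 (lemma above)."""
    return class_number(-4 * N) + (class_number(-N) if N % 4 == 3 else 0)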
In fact, <cit.> also contains a description of ν(d;N) when d ≠ N. While we used that formula in some explicit calculations, it is not necessary for the main argument.
Let X be a curve over a field k of genus g, let f:X→^1 be a morphism of prime degree p, and suppose G ≤ Aut_k̅(X) is a Galois-stable subgroup such that X/G is of genus 0. If g > (p-1)(#G-1), then f is cyclic and there is an automorphism σ of order p in G such that f is the quotient map by σ.
Since g > (p-1)(#G-1), the Castelnuovo-Severi inequality applied to the quotient map π:X → X/G and to f tells us that f and π factor through a common morphism X→ X' of degree >1. But since f is of prime degree, it follows that π factors through f. Since, by Galois theory, all intermediate curves X→ X' → X/G are of the form X'=X/H for some subgroup H of G, it follows that there exists a subgroup G' of G of order p such that f is the quotient map by G', proving the claim.
Let p be a prime, N an integer, and k an algebraically closed field of characteristic coprime to N. Let Δ be a subgroup of (/N)^×/-1 such that w_d Δ w_d^-1 = Δ. Suppose there is a d | N with (d,N/d)=1 such that X_0(N)/w_d is of genus 0, and that genus(X_Δ(N)) > (p-1)(ϕ(N)/#Δ-1). Then any morphism f:X_Δ(N)→^1 of degree p is cyclic and there is an automorphism σ in the group generated by w_d and the diamond operators such that f is the quotient map by σ.
The condition w_d Δ w_d^-1 = Δ ensures that the map X_1(N)/Δ→ X_0(N)/w_d is Galois. The result now follows by applying <Ref> to the subgroup of Aut(X_Δ(N)) generated by w_d and the diamond operators.
Suppose k is a field whose characteristic does not divide N. Let d>1 be an Atkin-Lehner divisor of N and f_d:X_0(N)→ X_0(N)/w_d the quotient map. The following hold:
* if X_0(N)_k is hyperelliptic, then either (X_0(N)/w_d)_[1/N] is of genus 0 or
ν(d;N) ≤ 4.
* if f: X_0(N)_k →^1_k is a map of odd degree m then ν(d;N) ≤ 2m.
* if f: X_0(N)_k →^1_k is a map of even degree m then either ν(d;N) ≤ 2m or f factors via f_d.
We prove (1); parts (2) and (3) are proven analogously, where in (2) we use that an odd degree map cannot factor via f_d. First observe that since (X_0(N)/w_d)_[1/N] is smooth, the genus is preserved under base change to k. It follows that X_0(N)/w_d has genus 0 over k if and only if it has genus 0 over [1/N]. Hence if its genus is nonzero, then f_d is not the hyperelliptic map.
Note that since the degree of a map does not change under field extensions, the proposition for k follows from the proposition for k̅, so we may assume k is algebraically closed and hence perfect. Applying the Castelnuovo-Severi inequality to f_d and the hyperelliptic map h:X_0(N)→^1, we get
g(X_0(N))≤ 2g(X_0(N)/w_d)+1 .
Applying the Riemann-Hurwitz formula to the map f_d we get
2g(X_0(N))=4 g(X_0(N)/w_d)-2+ν(d;N) .
Combining (<ref>) and (<ref>) yields the claimed result.
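Explicitly, the combination is the following one-line computation (and the analogous computation with a degree-m map in place of h gives parts (2) and (3)):
\[
4\,g(X_0(N)/w_d) - 2 + \nu(d;N) = 2\,g(X_0(N)) \le 4\,g(X_0(N)/w_d) + 2,
\qquad\text{hence}\qquad \nu(d;N) \le 4 .
\]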
Suppose p and N are different primes and that N is such that X_0(N)_[1/N] is not hyperelliptic, while X_0(N)__p is. Then ν(N;N)≤ 4.
Let N and p be such that they satisfy our assumptions. First observe that since X_0^+(N)_[1/N] is smooth, and hence the genus is preserved under reduction modulo p, it follows that X_0^+(N) has genus 0 over _p if and only if it has genus 0 over [1/N]. Hence f_N is not the hyperelliptic map.
Applying the Castelnuovo-Severi inequality to the maps f_N and the hyperelliptic map h:X_0(N)→^1, we get
g(X_0(N))≤ 2g(X_0^+(N))+1 .
Applying the Riemann-Hurwitz formula to the map f_N we get
2g(X_0(N))=4 g(X_0^+(N))-2+ν(N;N) .
Combining (<ref>) and (<ref>) yields the claimed result.
§.§ Hyperelliptic curves
Let N be an integer, Δ≤ ( /N )^×/-1 and p a prime not dividing N. We will say that a pair (Δ,p) is an exceptional hyperelliptic pair if X:=X_Δ(N) is not hyperelliptic over [1/N], but X__p is hyperelliptic.
We define
S_0:={ 34,43,45,52,57,64,67,72,73,85,93,97,163,193},
H:={ 22, 23, 26, 28, 29, 30, 31, 33, 35, 37, 39, 40, 41, 46, 47, 48, 50, 59, 71 },
and
SH:=H ∪{N≤ 32, 36, 49}.
The set H is the set of N such that X_0(N) is hyperelliptic and SH is the set of N such that X_0(N) is subhyperelliptic. The set S_0 consists of the values of N such that X_0(N)_[1/N] is not subhyperelliptic and ν(d;N)≤ 4 for all Atkin-Lehner divisors d of N.
Let Δ=( /N )^×/-1, i.e. X_Δ(N)=X_0(N). If (Δ,p) is an exceptional hyperelliptic pair, then N ∈ S_0.
Suppose X_0(N)__p is hyperelliptic.
By <Ref> (1), it follows that either X_0(N)_[1/N] is hyperelliptic (in which case (Δ,p) is not exceptional) or ν(d;N)≤ 4 for every Atkin-Lehner divisor d of N. Using <Ref> we compute that ν(d;N)≤ 4 for all Atkin-Lehner divisors d of N and X_0(N)_[1/N] is not subhyperelliptic only if N∈ S_0 ∪{88,148,232}.
To rule out the values N∈{88,148,232} we note that for each of these N, there exists a divisor n|N such that X_0(n)_[1/n] is not subhyperelliptic and ν(d;n)> 6 for some Atkin-Lehner divisor d of n. Hence for all primes p not dividing N (and hence not dividing n), it follows by the same argument as above that X_0(n)__p is not hyperelliptic. Now it follows by <cit.> that X_0(N)__p is not hyperelliptic.
§.§ Trigonal curves
Let N be an integer, Δ≤ ( /N )^×/-1 and p a prime not dividing N. We will say that a pair (Δ,p) is an exceptional trigonal pair if X_Δ(N) is not trigonal over
[1/N], but is trigonal over _p. We will say (N,p) is an exceptional pair if X_0(N) is not trigonal over
[1/N], but is trigonal over _p.
Let S_1 be the following set
S_1:={ 34, 37, 38, 40, 43, 44, 45, 48, 50, 52, 53, 54, 57, 58, 61, 64, 67, 72, 73, 76,
81, 85, 88, 93, 97, 106, 108, 109, 121, 157, 162, 163, 169, 193, 277, 397}.
Let X=X_0(N). If (N,p) is an exceptional trigonal pair, then N ∈ S_1.
Suppose X_0(N)__p is trigonal.
By <Ref> (2), it follows that either X_0(N)_[1/N] is trigonal (in which case (N,p) is not exceptional) or ν(d;N)≤ 6 for every Atkin-Lehner divisor d of N. Using <Ref> we compute that ν(d;N)≤ 6 for all Atkin-Lehner divisors d of N and X_0(N)_[1/N] is not trigonal only if N∈ S_1∪{148, 172, 232, 268, 652}.
To rule out the values N∈{148, 172, 232, 268, 652} we note that for each of these N, there exists a divisor n|N such that X_0(n)_[1/n] is of gonality >3 and ν(d;n)> 6 for some Atkin-Lehner divisor d of n. Hence for all primes p not dividing N (and hence not dividing n), it follows by the same argument as above that X_0(n)__p is not trigonal. Now it follows by <cit.> that X_0(N)__p is not trigonal.
In <Ref> we will show that there is exactly one exceptional trigonal pair (N,p), namely (73,2). Using the same arguments as at the end of <Ref>, we see that the proof of <Ref> is now reduced to checking finitely many cases.
If (N,p) is a hyperelliptic pair and q is a divisor of N, then q ≤ 253.
Let p, q and N be such that they satisfy our assumptions and let d:X_0(N)→ X_0(q) be the degeneracy map. Since X_0(N)__p is by assumption hyperelliptic, it follows by <cit.> that __p(X_0(q))≤ 2. Hence either _(X_0(q))≤ 2 or X_0(q) satisfies the assumptions of <Ref>.
The previous lemma does not seem to help at all and trivially follows from the Corollary. Should we remove it?
The set S_2 of primes q such that _(X_0(q))≤ 2 is known by the work of Ogg <cit.>. We get that the set S=S_1∪ S_2 of possible prime divisors q of N such that X_0(N)__p is hyperelliptic, while X_0(N)_[1/N] is not,
is contained in
S:=TODO.
§ CHECKING HYPERELLIPTICITY AND TRIGONALITY OF A GIVEN X_Δ(N) OVER _P FOR ALL P
In <Ref> we showed that to find all exceptional hyperelliptic pairs (Δ,p) we need to consider only finitely many subgroups Δ, i.e. only those for which either the level is in S_0 or such that X_0(N) is subhyperelliptic. Similarly, to determine all exceptional trigonal pairs (Δ,p) we need to consider only the finitely many subgroups Δ for which either the level is in S_1 or such that X_0(N) is of gonality ≤ 3 over .
In this section we explain how to find, for a given Δ, all the p such that (Δ, p) is an exceptional hyperelliptic or trigonal pair.
We will first need the following lemma.
Let R be a discrete valuation ring with residue field k, fraction field K and uniformizer π. Let X be a nice (meaning smooth, projective, and geometrically integral) curve over R and suppose ℒ is a line bundle on X such that ℒ(X_k) and ℒ(X_K) have the same dimension. Then the map ℒ(X)⊗_R k →ℒ(X_k) is an isomorphism.
This is done by taking global sections of the exact sequence
0 →ℒ→ℒ→ℒ/πℒ→ 0,
where the first map is multiplication by π.
Since ℒ/πℒ≅ℒ⊗ k, taking global sections of this sequence gives an injection ℒ(X)⊗_R k →ℒ(X_k). Comparing dimensions shows that it has to be an isomorphism.
Let X be a nice curve over [1/N] of genus g>2. We use the notation set up in <Ref>; in particular, V:=H^0(X,Ω_X_[1/N]). We have that V, Sym^2(V) and R(X)_2=H^0(X, Ω ^⊗ 2_X_[1/N]) are free [1/N]-modules of rank g, \binom{g+1}{2} and 3g-3, respectively.
From the previous lemma we get
V⊗_p ≃ H^0(X__p, Ω_X__p),
Sym^m(V) ⊗_p ≃ Sym^m(V⊗_p),
R(X)_m ⊗_p ≃ R(X__p)_m =H^0(X__p, Ω ^⊗ m_X__p).
The degree m part (I_can, _p)_m of the canonical ideal of X__p can be represented as
(I_can, _p)_m= ker( f^m_can,_p: Sym^m(H^0(X__p, Ω_X__p))→ H^0(X__p, Ω ^⊗ m_X__p) ).
§.§ Hyperelliptic curves
It follows by <Ref> and <cit.> that X__p is hyperelliptic if and only if
dim (I_can, _p)_2= \binom{g-1}{2}.
Now the map f^2_can,_p is the reduction mod p of the map f^2_can: Sym^2(V) → R(X)_2. By putting a matrix representing the map f^2_can into Smith normal form, it is easy to see modulo which primes the dimension of the kernel is \binom{g-1}{2}. So we have a criterion to detect the primes of hyperelliptic reduction.
To explicitly determine H^0(X_Γ, Ω _X_Γ) for a modular curve X_Γ corresponding to the congruence group Γ⊃Γ(N), we use the isomorphism <cit.>
H^0(X_Γ, [1/N], Ω _X_Γ,[1/N]) ≅ S_2(Γ,[1/N]),
where, for a ring R, we denote by S_2(Γ,R) the space of cusp forms of weight 2 with coefficients in R. The map
f^2_can,[1/N]: Sym^2(H^0(X, Ω_X_[1/N]))→ H^0(X_[1/N], Ω ^⊗ 2_X_[1/N])
can be computed on Sym^2 S_2(Γ,[1/N]) by multiplying q-expansions. On the other hand, H^0(X_[1/N], Ω^1_X/[1/N]⊗Ω^1_X/[1/N]) can be identified with the subspace of S_4(Γ,[1/N]) of cusp forms of weight 4 that have a double zero at all cusps.
In particular, after obtaining a matrix representing the [1/N]-module homomorphism f^2_can,[1/N] as explained above, and putting it in Smith normal form, one can easily read out the exact primes p where the rank of this matrix will change upon reduction modulo p. So we have translated everything into ranks of matrices that can easily be computed in terms of cusp forms.
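One concrete way to organize this computation is sketched below in SageMath-flavored Python; this is our own illustration of the procedure just described (the authors' actual code is available at <cit.>), and details such as the precision and the integrality of the chosen basis are only indicated, not treated carefully.

# Sketch: candidate primes p (with p not dividing N) where X_0(N) could become
# hyperelliptic mod p, via the rank of Sym^2 S_2 -> S_4 on q-expansions.
from sage.all import CuspForms, Gamma0, matrix, QQ, ZZ, lcm

def hyperelliptic_reduction_primes(N):
    S2 = CuspForms(Gamma0(N), 2)
    g = S2.dimension()                                  # genus of X_0(N)
    prec = CuspForms(Gamma0(N), 4).sturm_bound() + 10   # enough q-expansion terms
    basis = S2.q_expansion_basis(prec)
    # rows = coefficients of the products f_i * f_j (i <= j), i.e. a matrix of f^2_can
    rows = [(basis[i] * basis[j]).padded_list(prec)
            for i in range(g) for j in range(i, g)]
    M = matrix(QQ, rows)
    M = (M * lcm([c.denominator() for c in M.list()])).change_ring(ZZ)
    # Modulo p the curve is hyperelliptic exactly when the rank of M drops to 2g-1
    # (assuming X_0(N) is not hyperelliptic over Q); by the Smith normal form this
    # happens precisely at the primes dividing the (2g)-th elementary divisor.
    d = M.elementary_divisors()[2 * g - 1]
    return set(p for p, _ in ZZ(d).factor() if N % p != 0)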
§.§ Trigonality of nonhyperelliptic curves
For the entirety of this section, let X be a nice curve over [1/N] of genus g>3 such that X__p is not hyperelliptic for any prime p coprime to N. Then we can also detect whether X__p is either trigonal or a smooth plane quintic. Namely, by <Ref> this happens if and only if
dim((I_can,_p)_3/((V ⊗_p) · (I_can,_p)_2)) = g-3.
The assumption that X__p is not hyperelliptic for all primes p coprime to N implies that R(X__p)_m is generated in degree 1. In particular the map f^m_can,_p in (<ref>) is surjective and the ranks of the matrices associated to f_can,^m and f_can,_p^m are the same, from which it also follows that the dimension of (I_can, _p)_m is the rank of (I_can)_m as a [1/N]-module. As a consequence we have
(I_can)_m ⊗_p ≅ (I_can, _p)_m
The importance of the above isomorphism is that for all primes p one has that the linear map μ__p: (V ⊗_p) ⊗ (I_can,_p)_2 → (I_can,_p)_3 is just the reduction modulo p of the linear map:
μ: V ⊗ (I_can)_2 → (I_can)_3.
From the surjectivity of (<ref>) one can also compute the dimension of (I_can,_p)_m. Indeed,
dim (I_can,_p)_m = dim Sym^m(H^0(X__p, Ω_X__p)) - dim H^0(X__p, Ω ^⊗ m_X__p)
= \binom{g+m-1}{m} - (2m-1)(g-1).
Putting the above equalities together, one has that the primes of trigonal or smooth plane quintic reduction are exactly the primes such that the matrix μ has rank \binom{g+2}{3} - 5(g-1) - (g-3) modulo p. Again, these primes can easily be read off from the matrix μ after one puts μ into Smith normal form.
However, the matrix μ has a domain and codomain whose dimensions grow as a cubic polynomial in g, so computing this matrix and putting it in Smith normal form may become computationally very expensive once g becomes large; in fact it does become too expensive for some of the curves for which we wanted to compute the primes of trigonal reduction. However, if X([1/N]) contains an integral point then the computation can be significantly sped up, as we describe now.
From the discussion in the first paragraph of <cit.> we directly get the following lemma.
Let X_k be a non-hyperelliptic curve of genus ≥ 4 over an algebraically closed field k and X_k,2 be the variety cut out by (I_can,k)_2, i.e. the quadrics vanishing on X_k. Then
(1) X_k is trigonal or a smooth plane quintic if and only if X_k,2 is a surface.
(2) X_k is neither trigonal nor a smooth plane quintic if and only if X_k,2 =X_k.
Now let P ∈ X([1/N]) be a point. From <Ref> it follows that the tangent space T_P X_k,2 is 1-dimensional if and only if X_k is neither trigonal nor a smooth plane quintic. Since X is not hyperelliptic modulo any prime, the canonical embedding allows one to see X as a subvariety of ^g-1_[1/N]. Let x_1,…,x_g-1 be affine coordinates on some affine neighborhood of P in ^g-1_[1/N] and let f_i be generators of (I_can)_2 on this neighborhood. Because of (<ref>) the reductions of the f_i modulo p are also generators of (I_can,_p)_2; we compute T_P__p X__p,2 as the kernel of the Jacobian matrix J=(∂ f_i(P)/∂ x_j)_i,j modulo p. This is a matrix that can itself be written down over [1/N], and as before putting it in Smith normal form allows us to easily read out the possible trigonal or smooth plane quintic primes, by computing the primes such that this matrix has rank < g-2.
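The tangent-space shortcut can be organized as in the following sketch (again SageMath-flavored and purely illustrative; `quadrics` is assumed to be a list of generators of (I_can)_2 with coefficients cleared to ℤ, P an integral point of the chosen affine chart, and g the genus).

# Sketch: candidate primes of trigonal / smooth plane quintic reduction from the
# Jacobian of the quadrics through the canonical model at an integral point P.
from sage.all import matrix, ZZ

def trigonal_candidate_primes(quadrics, P, g, N):
    # assumes g >= 5 (every non-hyperelliptic genus-4 curve is geometrically trigonal,
    # and that case is handled separately in the paper)
    R = quadrics[0].parent()            # a polynomial ring in the affine coordinates
    xs = R.gens()
    J = matrix(ZZ, [[f.derivative(x)(*P) for x in xs] for f in quadrics])
    # The tangent space of the scheme cut out by the quadrics at P has dimension >= 2
    # (i.e. the reduction is trigonal or a smooth plane quintic) exactly when
    # rank(J) < g-2 mod p; such p divide the (g-2)-th elementary divisor of J.
    d = J.elementary_divisors()[g - 3]
    return set(p for p, _ in ZZ(d).factor() if N % p != 0)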
we might want to remove the below since it is now outdated?
dim (I_can)_3= g-3 + dim(V·(I_can)_2).
Check whether the above generalizations are correct.
The isomorphisms (<ref>) - (<ref>) show that (I_can)_m can be obtained by information about the map
M:^m(V)→ R(X)_m.
In the hyperelliptic case, since the dimension of Sym^2(H^0(X_Γ, Ω _X_Γ)) is \binom{g+1}{2}, it follows that if dim ker M = \binom{g-1}{2}, then
rank M = \binom{g+1}{2}-\binom{g-1}{2}= 2g-1.
In the trigonal and plane quintic case, we have
M = g+23-?= .
§.§ Smooth plane quintics
We note that the methods of our paper do not distinguish between smooth plane quintics and trigonal curves, hence the methods of <Ref> are also used to detect possible smooth plane quintics. However, since smooth plane quintics necessarily have genus 6, this greatly reduces the number of curves that need to be considered. It fortunately turns out that no intermediate modular curves of genus 6 satisfy the necessary and sufficient conditions of being either trigonal or a smooth plane quintic.
§ PROOF OF THEOREMS <REF>, <REF> AND <REF>
We first prove that there are no exceptional hyperelliptic pairs (Δ,p) with Δ=(/ N)^×/-1, i.e., with X_Δ(N)=X_0(N); this is done as explained in <Ref>. By <Ref> the values that need to be checked are N∈ S_0. We obtain that for p∤ N, the curve X_0(N)__p is hyperelliptic if and only if X_0(N)_[1/N] is.
It remains to consider the proper subgroups Δ. Suppose Δ is a proper subgroup and (Δ, p) is an exceptional hyperelliptic pair. If there exists a morphism of curves X_k→ Y_k defined over a field k and X_k is hyperelliptic, then it follows that Y_k has to be subhyperelliptic <cit.>. It follows that X_Δ(N) can be hyperelliptic over _p only if X_0(N)__p, and hence X_0(N)_[1/N], is subhyperelliptic. Since for any N there are finitely many Δ of level N, we are reduced to checking the hyperellipticity of finitely many X_Δ(N) to complete the proof of <Ref>.
All the computations to verify this take 116 seconds, and we find the unique exceptional pair Δ:=⟨ 4 ⟩≤ (/ 37)^× and p=2.
We first determine the exceptional trigonal pairs for Δ=(/ N)^×/-1, i.e., X_Δ(N)=X_0(N). By <Ref>, the values that need to be checked are N∈ S_1. We follow the procedure described in <Ref> and get the exceptional trigonal pair X_0(73) and p=2.
Using the same arguments as in the proof of <Ref>, it follows that it remains to consider the proper subgroups Δ, and the ones that need to be considered are those such that X_0(N)_[1/N] is of gonality ≤ 3, and in addition the Δ of level 73 for p=2. We check this, and after 23 minutes of computation obtain that there are no additional exceptional pairs.
Using the arguments as in the proof of <Ref> (and the fact that we need only consider curves of genus 6), it turns out there are no intermediate modular curves that are smooth plane quintics in any characteristic.
§ FIELDS OF DEFINITION OF TRIGONAL MAPS ON NON-HYPERELLIPTIC CURVES
While <Ref> completely solves <Ref> B) for intermediate modular curves and d=3, it remains to consider <Ref> A) for trigonal curves over finite fields _q. Let X/k be a curve of genus g with X(k)≠∅ that is trigonal over k̅ and not subhyperelliptic. Then X is trigonal over k if g=3 (<cit.>) or g>5 <cit.>. Hence the only case of interest here is g=4.
Let X be a curve of genus 4 over [1/N] of non-hyperelliptic reduction at all primes of good reduction with X([1/N])≠∅. Then the canonical model of X is a smooth complete intersection of a cubic and a quadric Q in ^3_[1/N]. Let D be the discriminant of an extension of this quadric Q to ; we assume that D is not a perfect square and let Ø(D) be the order of discriminant D. Then
* X is trigonal over the quadratic field Ø(D) ⊗ℚ, but not over ℚ.
* For all positive odd integers n and all rational primes p coprime to N at which the order Ø(D) is maximal, X is trigonal over _p^n if and only if p splits or ramifies in Ø(D).
* For all positive even integers n and all rational primes p coprime to N, X is trigonal over _p^n.
Let k be a field of characteristic coprime to N with X:=X_k smooth and non-hyperelliptic. Furthermore, let W_d^r(X_k) ⊂ Pic^d X_k denote the Brill-Noether variety corresponding to line bundles that have at least r+1 linearly independent global sections. Note that since X([1/N]) ≠∅ we also have X(k) ≠∅, and hence any point in Pic^d X (k) actually comes from a k-rational line bundle of degree d [on the other hand, if X(k) = ∅ and k is perfect, then a point in Pic^d X (k) only gives a line bundle over k̅ that is isomorphic to all its Galois conjugates]. In particular, every k-rational point on the Brill-Noether variety W_3^1(X_k) comes from a line bundle defined over k of degree 3 with two linearly independent global sections, and hence gives rise to a non-constant k-rational function of degree ≤ 3, which is actually of degree 3 by the non-hyperellipticity assumption. Conversely, every k-rational function of degree 3 gives a rational point on W_3^1(X_k). In conclusion, X_k is trigonal if and only if W_3^1(X_k)(k) ≠∅.
By <cit.>, the Brill-Noether variety W_3^1(X_k) either has 1 or 2 points over k̅. The case of W_3^1(X_k) being isomorphic to X_k mentioned there cannot happen since we assumed X to not be hyperelliptic. If #W_3^1(X_k)(k̅)=1 then #W_3^1(X_k)(k)=1 and X_k is trigonal by the above discussion. On the other hand, if #W_3^1(X_k)(k̅)=2, then there are exactly two possibilities. Namely:
(a) #W_3^1(X_k)(k)=2 and hence X_k is trigonal,
(b) #W_3^1(X_k)(k)=0 and hence X_k is not trigonal but becomes trigonal over the quadratic extension of k over which the two points of W_3^1(X_k) are defined.
We let F_X_k denote the smallest field extension of k for which #W_3^1(X_k)(F_X_k) > 0. So either F_X_k = k, or it is the quadratic field over which the two points of W_3^1(X_k) are defined in case (b) above.
Since X_k is not hyperelliptic, the canonical model of X is a smooth complete intersection of a cubic and a quadric Q_k in ^3_k. Each g_3^1 corresponding to one of the points in W_3^1(X_k)(k̅) is a family of lines in ^3_k intersecting X_k three times, counting multiplicities. Let L denote one of these lines. By Bézout's theorem this line L lies on the quadric Q_k. In particular, if Q_k is nonsingular then the lines of the g_3^1 actually form a ruling of Q (see <cit.> again). In the nonsingular case, the field F_X_k defined above is actually the field of definition of these rulings. If char k > 2, then this field is obtained by adjoining to k the square root of the discriminant of the polynomial defining Q (see e.g. <cit.>).
Part (1) immediately follows from this discussion. Indeed, F_X_ℚ = ℚ(√(D)) = Ø(D) ⊗ℚ. So for a field extension K of ℚ we have # W_3^1(X_ℚ)(K) > 0 if and only if K contains ℚ(√(D)). Both of these equivalent conditions are equivalent to X_K being trigonal.
Part (3) follows similarly since F_X__p is either _p or _p^2. So in particular, _p^n contains F_X__p if n is even.
Now we prove part (2) in the case where p splits or ramifies in Ø(D). Let 𝔭 be a prime of Ø(D) lying over p. Then by the maximality assumption on Ø(D) at p, the ring Ø(D)_𝔭 is a discrete valuation ring and the residue field of Ø(D)_𝔭 is _p. By part (1), the gonality of X_Ø(D) ⊗ℚ is 3. Since the gonality of a curve can only decrease under specialization and field extensions we have
X__p^n≤ X__p≤ X_Ø(D)_𝔭⊗≤ X_Ø(D) ⊗ =3.
Hence X__p^n=3 by the non-hyperellipticity assumption.
What remains is to show part (2) when p is inert. Assume for the moment p>2. Since p is inert in Ø(D) we know that F_X__p = _p(√(D)) is a quadratic extension of _p and hence isomorphic to _p^2. It follows that X is trigonal over _p^n if and only if _p^n contains _p^2, which happens exactly when n is even. So if n is odd, X__p^n is not trigonal.
The case p=2 is slightly more subtle. While it might be possible to deal with this case abstractly, we found it clearer to take a more explicit approach. Let's write the quadric Q in ^3 as
Q = ∑_i=0^3 ∑_j=0^i a_i,jx_ix_j.
For this quadric we will also use the matrix form notation
Q = [ a_0,0 a_0,1 a_0,2 a_0,3; * a_1,1 a_1,2 a_1,3; * * a_2,2 a_2,3; * * * a_3,3 ].
Let P ∈ X([1/N]) ⊆^3([1/N]) be a point. By choosing a suitable set of coordinates on ^3 we may assume P = (0:0:0:1) and hence a_3,3=0. By a further change of coordinates we may assume a_1,3=a_2,3=0 as well, so that Q looks like:
[ a_0,0 a_0,1 a_0,2 a_0,3; * a_1,1 a_1,2 0; * * a_2,2 0; * * * 0 ].
With Q as above, we have D = a_0,3^2(a_1,2^2 - 4a_1,1a_2,2). The assumption that p=2 is inert in Ø(D) is equivalent to D ≡ 5 (mod 8). In particular Q__2 is nonsingular and a_0,3≡ a_1,2≡ a_1,1≡ a_2,2≡ 1 (mod 2).
Since Q__2 is nonsingular it has two rulings. These two rulings both contain a line passing through P. Additionally, starting from P one can let T ⊂ P^3__2 be the tangent space to Q__2 at P__2, and T ∩ Q__2 will be exactly the union of these same two lines. In particular if these two lines are interchanged by the action of Galois, then these two rulings will be interchanged as well.
Now let us compute T on the affine chart where x_3 ≠ 0, and let X_0,X_1,X_2 be the affine coordinates on this chart. Then the tangent space T is given by X_0=0, so that the union of the two lines on Q__2 passing through P__2 can be described by X_0=0 and a_1,1X_1^2 + a_1,2X_1X_2 +a_2,2X_2^2=0. Since a_1,1≡ a_2,2≡ a_1,2≡ 1 (mod 2), the polynomial a_1,1X_1^2 + a_1,2X_1X_2 +a_2,2X_2^2 is irreducible over _2, and hence so is the scheme T ∩ Q__2. This can only happen if the geometric lines generating T ∩ Q__2 are Galois conjugates. In particular, the two rulings of Q__2 are swapped by the action of Galois, W_3^1(X__2) consists of two points whose field of definition is _2^2, and hence X__2^n is not trigonal when n is odd.
In case X_k is of genus 4 with X_k(k)≠∅ and is neither hyperelliptic nor trigonal over k, it is necessarily tetragonal over k, see e.g. <cit.>.
The table below lists all genus 4 intermediate modular curves by specifying their level N, the group Δ(N), and the discriminant D of the corresponding order Ø. Note that Δ(N)=(/N)^×/± 1 means X_Δ(N)=X_0(N), and D a perfect square means that the curve is trigonal over ℚ. We note that the cases X_Δ(N)=X_0(N) had already been previously solved in <cit.>.
§
§.§ Discriminants of class number at most 100
Let D<0 be a fundamental discriminant and f a positive integer. Then the following formula relates class numbers of (not necessarily fundamental) discriminants to those of fundamental discriminants (see <cit.>)
h(Df^2) = h(D)f/w_D,f∏_p | f( 1- (D/p) 1/p),
where w_D,f = 3 if D=-3 and f ≠ 1, w_D,f = 2 if D=-4 and f ≠ 1, and w_D,f=1 otherwise. Using this formula and the list of negative fundamental discriminants of class number ≤ 100 from <cit.>, it is straightforward to compile a list of all negative discriminants of class number ≤ 100. For each class number h ≤ 100 we record the number of discriminants Df^2 < 0 such that h(Df^2)=h in the second column, and the smallest Df^2 such that h(Df^2)=h in the third column.
The full list of all discriminants of class number ≤ 100 can be found at <cit.> in the file.
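For convenience, the formula can be evaluated mechanically; the helpers below are a self-contained Python sketch of ours (the Kronecker symbol is implemented only for prime p, which is all the formula needs, and all function names are hypothetical).

from fractions import Fraction

def kronecker(D, p):
    """Kronecker symbol (D/p) for a prime p."""
    if p == 2:
        if D % 2 == 0:
            return 0
        return 1 if D % 8 in (1, 7) else -1
    r = pow(D % p, (p - 1) // 2, p)
    return 0 if r == 0 else (1 if r == 1 else -1)

def prime_divisors(n):
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def class_number_nonmaximal(hD, D, f):
    """h(D f^2) from h(D) for a fundamental discriminant D < 0, via the formula above."""
    if f == 1:
        return hD
    w = 3 if D == -3 else (2 if D == -4 else 1)
    val = Fraction(hD * f, w)
    for p in prime_divisors(f):
        val *= Fraction(p - kronecker(D, p), p)
    return int(val)

For example, class_number_nonmaximal(1, -3, 2) returns h(-12) = 1.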
§.§ Ramification degrees of X_0(N) → X_0(N)^+ at most 100
Using <Ref> and <Ref>
one can easily compile a list of all integers N such that the ramification degree of X_0(N) → X_0(N)^+ is at most 100. For each degree d ≤ 100 we record the number of integers N such that the ramification degree of X_0(N) → X_0(N)^+ equals d, as well as the maximum of these N. Note that by the Riemann-Hurwitz formula the ramification degree is always even.
The full list of all integers N of ramification degree ≤ 100 can be found at <cit.> in the file.
|
http://arxiv.org/abs/2307.04050v1 | 20230708212820 | Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks | [
"Ritesh Ojha",
"Wenbo Chen",
"Hanyu Zhang",
"Reem Khir",
"Alan Erera",
"Pascal Van Hentenryck"
] | cs.AI | [
"cs.AI",
"cs.LG",
"cs.SY",
"eess.SY"
] |
Optimization-based Learning for Dynamic Load Planning in Trucking Service Networks
===================================================================================
*Co-first authors
The load planning problem is a critical challenge in service network
design for parcel carriers: it decides how many trailers (or loads),
perhaps of different types, to assign for dispatch over time between
pairs of terminals. Another key challenge is to determine a flow plan, which specifies how parcel volumes are assigned to planned loads. This paper considers the Dynamic Load Planning Problem (DLPP) that considers both flow and load planning challenges jointly in order to adjust loads
and flows as the demand forecast changes over time before the
day of operations. The paper aims at developing a decision-support
tool to inform planners making these decisions at terminals across the
network. The paper formulates the DLPP as a MIP and shows that it
admits a large number of symmetries in a network where each commodity
can be routed through primary and alternate paths. As a result, an
optimization solver may return fundamentally different solutions to
closely related problems (i.e., DLPPs with slightly different inputs),
confusing planners and reducing trust in optimization. To remedy this
limitation, the paper proposes a Goal-Directed Optimization (GDO) that
eliminates those symmetries by generating optimal solutions staying
close to a reference plan. The paper also proposes an optimization
proxy to address the computational challenges of the optimization
models. The proxy combines a machine learning model and a
feasibility restoration model and finds solutions that satisfy real-time constraints imposed by planners-in-the-loop. An extensive computational study on industrial instances shows that the optimization proxy is around 10 times faster than the commercial solver in obtaining the same quality solutions and orders of magnitude faster for generating solutions that are consistent with each other. The proposed approach also demonstrates the benefits of the DLPP for load consolidation, and the significant savings obtained from combining machine learning and optimization.
§ INTRODUCTION
The e-commerce market continues to show robust growth and leading
analysts project that today's $3.3 trillion market could grow further
to $5.4 trillion annually by 2026 (<cit.>). Much of e-commerce
relies on home delivery of small packages or parcels and other boxed
freight. Key freight carriers like UPS and FedEx continually seek to
redesign and operate profitable logistic networks that meet e-commerce
customer service expectations. Beyond physical network design
including the location and sizing of various freight processing
terminals, these companies face challenging service network design
problems. A critical service network design challenge for package
carriers is the so-called load planning problem (for
background, see <cit.>). Here, load planning refers to
decisions related to the number of trailers or container loads, perhaps of different types, to plan for dispatch over time between
pairs of terminals. Such planned loads are the transportation capacity
of the network. Flow planning decisions represent another key challenge, where the flow plan specifies how to allocate parcel volumes to planned loads to feasibly and cost-effectively serve network demand. As each package moves from its origin to destination, it is transported by a sequence of planned loads where it is unloaded and sorted at a transfer (hub) terminal between each loaded dispatch. Together, the flow and load plan decisions define a service network that moves package
volume from origins to destinations in order to meet customer service
expectations. The research described in this paper is conducted
directly with a leading global parcel carrier that operates a massive
network moving large volumes of packages each day. Figure <ref> illustrates the load planning operations at an example
terminal. It highlights the planner-in-the-loop environment in which load planning takes place; an important consideration underlying this
research.
Packages at a terminal with the same destination and service class are
referred to as a commodity. A flow plan defines flow
rules for each commodity in the service network; these flow rules
specify how a commodity is routed through the network over time. Since parcel carriers operate massive terminal networks with large numbers of transfer locations, a flow plan may include alternate flow rules that specify loading paths for commodities in addition to the default path specified by the primary flow rules. Both the primary (default) and alternate paths specify how a commodity moves through the network, and these planned paths are service feasible, i.e., they ensure that commodities arrive on time given their service guarantees.
This paper considers the Dynamic Load Planning Problem (DLPP)
faced by the load planner at a terminal as depicted in Figure
<ref> during a short time period (one or two weeks) leading up to the day of operations. The goal of the planner, and thus of the DLPP, is to decide (1) how many loads should be planned for outbound dispatch to other terminals at various times during the day of operations and (2) how to allocate commodity volumes across planned loads respecting the capacity constraints and the primary and alternate flow rules. These two decisions define what is called a load plan in this paper. The objective of the DLPP is to obtain a load plan that
minimizes the number of loads, consolidating the commodities as best
as possible. In practice, the DLPP is solved by planners, who adjust
existing load plans manually to reflect changes in commodity volumes
arriving at the terminal. This process is typically myopic
and creates inefficiencies across the network.
The goal of this research is to develop a decision support tool to
assist planners in solving the DLPP, suggesting load plans that remove
existing inefficiencies. Moreover, for terminals that do not have a
planner, the tool can fully automate the DLPP, bridging the gap
between network design and operations. To develop such a tool, this paper first investigates optimization models for the DLPP. In its general form, the DLPP is strongly NP-hard and its MIP formulation is challenging for state-of-the-art solvers given the size of the instances encountered in practice. Moreover, the natural MIP model exhibits significant symmetries which is highly undesirable for the planner-in-the-loop environment of the industrial partner. Indeed, planners will be extremely confused if small changes in commodities result in completely different load plans. To address this challenge, this paper presents a Goal-Directed Optimization (GDO) that solves a first model to find the optimal solution to the DLPP and uses a second model to find a plan that is as close as possible to a reference plan. GDO is shown to produce consistent plans, i.e., plans that are close for inputs that only differ slightly. Unfortunately, the GDO approach is too time-consuming to be used in planner-in-the-loop environments. To address this final difficulty, this research proposes the use of
optimization proxies that combine a Machine-Learning (ML) model and a
feasibility restoration procedure to obtain near-optimal solutions in
a few seconds, even for the largest terminals. The ML model uses
supervised learning to mimic the GDO approach and predicts the optimal set of planned loads. The feasibility restoration procedure then solves a small MIP model to determine the final allocation of commodity volumes to planned loads, adding extra capacity as needed to ensure feasibility. The proposed approach is practical since it produces high-quality plans that are consistent with each other, where small changes in inputs leads to very similar load plans by virtue of the ML training that mimics the GDO optimization.
The main contributions of the paper can be summarized as follows:
* The paper formalizes the DLPP and develops a natural MIP
formulation to solve it.
* The paper proposes a Goal-Directed Optimization approach to
remedy the limitations of the MIP formulation; it uses a 2-stage
approach to eliminate symmetries and provide optimal load plans that
are close to a reference plan.
* The paper proposes an optimization proxy to address the computational
difficulties of the GDO approach; the optimization proxy uses a
machine learning model to predict the loads and a feasibility
restoration procedure to adjust the predictions to satisfy the problem
constraints and determine the commodity flows. Once trained, the optimization proxy provides high-quality solutions in a few seconds.
* The paper presents extensive computational results on industrial
instances, including some of the largest terminals in the network;
the results demonstrate the significant benefits of optimization and
the ability of the optimization proxy to find high-quality and
consistent solutions in real time. More precisely, the paper shows
that the optimization proxy outperforms a greedy heuristic and the
MIP model solved by a commercial solver both in terms of the
objective function value and consistency metrics. The
optimization proxy is around 10 times faster than the commercial
solver in obtaining solutions with the same objective function value
and orders of magnitude faster in terms of generating solutions that
are consistent with a reference plan. Empirical experiments show
the value of breaking symmetries by GDO, which helps the proxy to
produce high-quality and consistent load plans.
* From a business and sustainability perspective, the experiments
demonstrate the value of having alternate flow paths for the
commodities, in addition to the primary flow paths. The proposed load
plans allocate approximately 17% of the commodity volume to the alternate
flow paths and reduce the required load capacity by 12%-15%.
The rest of this paper is organized as follows. Section
<ref> summarizes related work. Sections
<ref> and <ref> introduces the DLPP
and its modeling. Sections <ref> and <ref>
present the GDO approach and the optimization proxy. Section
<ref> describes a heuristic that mimics human
planners and serves as a baseline. Section <ref>
describes the computational results. Section <ref>
discusses the benefits of the DLPP formulation, optimization, and
machine learning, quantifying the cost and sustainability benefits and
the important factors driving them.
§ RELATED WORK
Service Network Design.
There is abundant research on network design for the Less-than-truckload (LTL) trucking industry (see
<cit.>). Interested readers
can consult Erera et al. <cit.> for a detailed description
of LTL operations. <cit.> present a detailed
description of the mathematical models and heuristics for the problems
arising in trucking service network design. The authors describe the
tactical flow and load planning problem which is solved weeks
in advance for “typical” commodity volume (e.g., average daily
origin-destination commodity volume) for a network of terminals. The
goal of the flow and load planning problem is to determine
effective primary flow paths for the commodity volume and the total
trailer capacity required on each flow path in a network of terminals.
Most of these network design problems are formulated over time-space
networks using integer programming models. The flow and load planning
problem with both primary and alternate flow paths for industry-scale
instances can be modeled as large-scale integer programming models
which, unfortunately, cannot be solved directly by commercial
solvers. Therefore, previous work in this area focused mainly on
finding a single cost-effective primary flow path for the
commodities. Exact approaches to solve these problems have been
proposed by <cit.>, <cit.>,
and <cit.>. However, these approaches can only
solve instances with a few thousand packages. For industry-scale
instances, researchers have resorted to various heuristics including
variants of local search heuristic algorithms
(<cit.>,
<cit.>) and greedy algorithms (<cit.>).
Flow and Load Planning with Alternate Paths.
Tactical flow and load planning is typically based on average daily
estimates of origin-destination commodity volume. However, commodity
volumes differ substantially from day to day and from week to week
(<cit.>). Hence, planners at a terminal locally
modify the load plans on a daily basis, using the latest estimates of
commodity volume until the day of operations. More specifically, the
planners take advantage of both primary and alternate flow paths to
improve trailer consolidation at their respective
terminals. It is worth highlighting that the primary flow paths come from flow and load planning. Once primary options are available, alternate flow paths that are time feasible are identified. To the best of our knowledge, no paper carefully studies the problem of allocating volume across alternate flow paths in operations. Alternate flow paths are useful to reduce the number of trailers when commodity volume can be split across paths. This is especially useful because of demand uncertainty. <cit.> present a study on the value of
having these alternate flow paths to hedge against demand
uncertainty. They show that it is sufficient to have just one
alternate to contain the impact of most of the fluctuations
in demand; the authors refer to such a load plan as a 2-alt load plan. Subsequently, the authors in <cit.> study the operational decisions that LTL carriers need to make to effectively operate a 2-alt load plan when demand changes dynamically on a day-to-day basis. However, the proposed approach cannot be solved for practical sized instances. This paper proposes a ML-based solution approach for the allocation of volume across both primary and multiple alternate flow paths; the proposed approach is shown to be effective for large scale instances experienced in practice.
Dynamic Load Planning.
Network-wide simultaneous optimization of load planning adjustments is
a daunting challenge due to the scale of the network, number of commodities and the number of transfer hubs for the commodities.
Existing research in the literature may be applicable to the problem of selecting a single primary flow path
(non-splittable) for each commodity at each terminal for each sorting
period in order to minimize the cost of the resulting load plan. Splitting commodity volume across alternate flow paths is likely to improve trailer utilization as it introduces more flexibility in the load planning process. This research considers the DLPP problem at a terminal in which the commodity volume can be split (among primary and alternate flow paths) to promote better trailer utilization, lower transportation cost, and increased sustainability. The flexibility to adjust plans enables terminal planners to better manage daily operations while maintaining service guarantees. This problem is mentioned as an
interesting and useful future research direction by <cit.>.
One paper in the literature, <cit.>, does introduce the problem of re-routing freight volume on alternate flow paths to improve on-time performance of load plans on the day-of-operations; this becomes necessary when the actual volume deviates from the forecasted volume on the day-of-operations. In this work, commodity volume is assigned to exactly one flow path (it is not splittable) such that the total (fixed) trailer capacity is respected and the objective is to minimize the total lateness of shipments. The authors develop MIP models for this problem and propose heuristic algorithms to solve them. Note that a key difference between this approach and the approach proposed in the current paper is that we allow volume to be split across multiple flow paths on the day-of-operations. Furthermore, we also adjust the load plan to identify opportunities to reduce outbound capacity (and improve utilization) as demand forecasts are updated.
The DLPP is also similar to the variable-sized bin packing problem
described by <cit.> where the objective is to
minimize the total space used to pack a set of items into bins
(available in different sizes), such that each item is packed into
exactly one bin. In the DLPP, the packages are the items and trailers
are bins but the key difference is that the DLPP allows for the
splitting of the package volume into compatible trailers in order to
further reduce the transportation cost by promoting better
consolidation or packing.
Machine Learning for Optimization.
In recent years, there has been a notable surge of interest among
researchers in the development of ML surrogates for solving MIPs. This
emerging field has attracted attention due to the potential of ML
techniques to provide efficient approximations for computationally
intensive calculations involved in solving MIPs. We refer the reader
to (<cit.>,<cit.>) for a comprehensive
overview on the topic. The techniques fall into one of two
categories. The first category includes methods based on
reinforcement learning
(<cit.>),
where the ML model is trained by interacting with simulation
environments. The second category comprises supervised learning
(<cit.>),
where the ML model imitates the optimization model and replaces
expensive calculations with a quick approximation. This research
focuses on the latter category since the proposed optimization model
could be used as the expert for supervised learning. Optimization
proxies, which combine learning with feasibility restoration, has
emerged from supervised learning. Recent work in this area includes
(<cit.>).
§ PROBLEM DESCRIPTION AND MODELING
Parcel carriers operate massive terminal networks with hundreds of
facilities to move large volumes of parcels each day. Each day
at a terminal is divided into time windows (typically three to four
hours in length), called sort periods or sorts,
during which parcels are sorted. A typical operational day includes
“day”, “twilight”, “night” and “sunrise” sorts that are
non-overlapping in time. All parcels sorted at a terminal
during a given sort with the same service class (e.g., one-day service
or two-day service) and the same destination are referred to as a
commodity. Suppose then that each commodity has a primary flow path and one or more alternate flow paths that each specify a sequence of terminals and sorts that parcels will traverse en route from origin to destination. For a specific commodity at a specific terminal at a specific sort, each flow path will determine the next terminal and sort to which packages will be loaded. Typically, shipments are loaded on trailers moving along the primary flow path for the commodity; however, when there are better consolidation opportunities, commodity volume can be split over primary and alternate flow paths, or completely allocated to alternate flow paths. The rest of this section describes the main concepts underlying the DLPP. Section <ref> describes some key terminology and presents examples to illustrate the operations at terminals. Section <ref> describes the DLPP that includes splitting of commodity volume across
primary and alternate flow paths.
§.§ Definitions
Let 𝒢 = (𝒩,𝒮) denote a time-space network. Each node n ∈𝒩 represents a terminal location at a particular time period and is defined by a tuple, i.e. n=(terminal, sort, day). Each arc s ∈𝒮 represents a directed dispatch of loads from one timed node to another. Henceforth in the paper, we refer to each such an arc as a sort pair. Figure <ref> illustrates an example time-space network for terminal A during a single twilight sort period. In this example, three sort pairs are outbound from terminal A on day 1, namely, (A,Twilight,1)→(X,Twilight,2),
(A,Twilight,1)→(Y,Twilight,2), and
(A,Twilight,1)→(Z,Twilight,3). Figure <ref> illustrates another example of terminal B that operates multiple sort periods, i.e., the day, twilight, night sorts on a given day, and seven sort pairs (b_1,b_2,b_3,b_4,b_5,b_6,b_7) outbound from terminal B.
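To make the notation concrete, the time-space objects can be represented directly; the sketch below uses our own (hypothetical) names and the single-sort example of Figure <ref>.

from collections import namedtuple

# A node is a (terminal, sort, day) triple; a sort pair is a directed arc between nodes.
Node = namedtuple("Node", ["terminal", "sort", "day"])
SortPair = namedtuple("SortPair", ["origin", "dest"])

a_twi_1 = Node("A", "Twilight", 1)
sort_pairs = [
    SortPair(a_twi_1, Node("X", "Twilight", 2)),
    SortPair(a_twi_1, Node("Y", "Twilight", 2)),
    SortPair(a_twi_1, Node("Z", "Twilight", 3)),
]

# A load pair groups consecutive sort pairs sharing the same destination node; here the
# grouping is by destination only, and consecutiveness of origin sorts is not checked.
def load_pairs(pairs):
    groups = {}
    for sp in pairs:
        groups.setdefault(sp.dest, []).append(sp)
    return list(groups.values())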
A key objective in load planning is to determine the number of
trailers (possibly of different types) to operate on each sort pair to containerize the total commodity volume allocated to the sort pair. During a sort, each loading door at a terminal builds/loads trailers for a specific sort pair destination. In a single sort facility, as shown in Figure <ref>, if there is commodity volume allocated on each of the three sort pairs, then at least three trailers (one on each sort pair) should be opened at the loading doors corresponding to the sort pair destinations.
In practice, commodities outbound from an origin terminal that arrive over consecutive sorts and that are heading to the same time-space destination can be consolidated together. For that, the concept of load pairs is introduced, where a load pair represents a set of consecutive sort pairs that share the same destination node. Combining sort pairs into load pairs allows better consolidation and trailer utilization, since trailers can be held partially loaded from one sort to the next prior to dispatch to the destination. Figure <ref> illustrates an example of a load pair that is composed of three different sort pairs.
We now relate primary and alternate flow paths to sort pairs. If we consider volume for commodity k ∈𝒦 at some time-space location n, its primary flow path specifies the next (terminal, sort, day) to which it should be loaded. Thus, the primary path identifies a unique outbound sort pair for k at n. Similarly, each alternate flow path identifies a (possibly different) outbound sort pair for k. Recall that primary and alternate flow paths for each k at n are specified in advance, and we assume that loading outbound on any of these options will lead to volume arriving on-time to its destination.
We will define compatible sort pairs for k at n to be the primary path sort pair (the primary sort pair) and any alternate path sort pair (an alternate sort pair). Furthermore, any sort pair that belongs to a load pair containing a compatible sort pair, but has an earlier origin sort, is also compatible. When volume is assigned to such earlier sort pairs, the decision is to assign volume to trailers that are opened first for loading in those earlier sorts and held for dispatch. Figure <ref> illustrates four compatible sort pairs (outbound from terminal B) for a commodity k sorted in the twilight sort at terminal B.
§.§ Dynamic Load Planning Problem (DLPP)
Parcel carriers typically build a load plan in two phases: (1) the tactical flow and load planning phase specifies an initial plan and provides an input to the scheduling team; and (2) the load plan adjustment allows adjustments to the initial plans up to the day-of-operation. The scheduling and load dispatching teams then execute the
adjusted load plan. Weekly plans that determine the number of loads or
trailers to operate on each sort pair are fixed approximately two
weeks in advance of the operating week. However, due to demand
uncertainty, the volume forecast for commodities may change, and
adjustments to the load plan may be necessary to accommodate actual
volumes. These adjustments may lead to cost decreases when unnecessary
load capacity is removed from the plan.
Consider the following optimization problem during the two weeks
leading into the day-of-operation. Each terminal in the network has
a set of forecasted inbound commodities during some time period (for
example, a single operating day and multiple sorting periods). Each
such commodity arrives during a specific sorting period and has a
destination terminal and service class (specifying a due date at the
destination). Given this information, the fixed flow plan specifies a
primary flow path (next terminal and arriving sorting period) for each
commodity, and possibly also one or more alternate flow paths. Recall
that, if the commodity is assigned to any of these flow paths, then it
will reach its final destination on time according to plan. The
adjustment optimization problem is to assign each commodity to its
primary and/or one of its alternate flow paths while simultaneously
determining how many loads of different types are required for each
proposed flow paths. Note that existing flow and load planning
literature typically assumes that all commodities, arriving
at a terminal during a specific sorting period should be assigned to
the primary flow path. Here, the challenge is different, and is
instead to determine specifically how to split each commodity volume
among its possible compatible flow paths or sort pairs to drive high
load utilization levels and low costs while still meeting service
promises.
Consider the example shown in Figure <ref> with three
commodities (4 units destined to terminal C, 3 units destined to
terminal E, 3 units destined to F) sorted in the twilight sort of day 1 at
terminal B. In this example, we denote each commodity by its
destination terminal name. The commodity destined for terminal F has
three compatible sort pairs: (B,Twilight,1)→ (C,Sunrise,2)
is the primary sort pair, and (B,Twilight,1)→ (E,Sunrise,2)
and (B,Twilight,1)→ (D,Twilight,2) are the alternate sort
pairs. Splitting commodity volume destined to terminal F between the two
alternate sort pairs to C and D yields better consolidation (and
lower transportation cost) as the solution requires one less trailer
on the two arcs: (B,Twilight,1)→ (D,Twilight,2) and
(D,Twilight,2)→ (F,Day,3).
For a
given terminal, define S to be the set of outbound sort pairs and
let K be the set of commodities sorted at the terminal. Each
commodity k ∈ K has a cubic volume of q^k, and a set of
compatible sort pairs S^k. For every outbound sort pair s, there
is a set V_s of trailer types that can be used to containerize the
total commodity volume allocated to the sort pair. Each sort pair can
have a different set of allowed trailer types, i.e., V_s_1 can be
different from V_s_2 for two different sort pairs s_1,s_2 ∈
S. Each trailer type v ∈ V_s has a cubic capacity Q_v and a
per-unit transportation cost c_v. A solution of the DLPP
determines the number of trailers of each type assigned to each sort
pair, as well as the volume of each commodity allocated to each
trailer. A solution must ensure that all the volume is assigned to
trailers and that the capacities of the trailers are not violated. The
goal of the DLPP is to find a solution that minimizes the costs of the
trailers. Appendix <ref> provides the complexity
results. The DLPP is strongly NP-hard. It becomes weakly NP-hard when
each commodity is compatible with exactly one or with all sort
pairs and there are multiple trailer types. It becomes polynomial when each commodity is compatible with exactly one or with all sort
pairs and there is only one type of trailer.
§.§ A Mixed-Integer Programming Formulation
An optimization model for the DLPP can be defined as follows in Model <ref>:
x,yMinimize ∑_s ∈ S∑_v ∈ V_s c_v y_s,v
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_k ∈ K:s ∈ S^k x^k_s,v≤ Q_v y_s,v, ∀ s ∈ S, v ∈ V_s,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
y_s,v∈ℤ_≥ 0 ∀ s ∈ S, v ∈ V.
It uses a non-negative continuous decision variable x^k_s,v to
represent the volume of commodity k allocated to trailer type v
operating on a sort pair s, and an integer decision variable
y_s,v to determine the number of trailers of type v installed on
sort pair s. The objective (<ref>) minimizes the total cost of creating
loads. In the experiments, c_v = Q_v ∀ v ∈ V, i.e., the
model minimizes the total trailer capacity required to containerize
the total commodity volume in the problem instances. Constraints
(<ref>) ensure that the total volume of each commodity
is assigned to its compatible sort pairs. Constraints
(<ref>) ensure that the total volume on a sort pair
respects the installed trailer capacity on it. Constraints
(<ref>)-(<ref>) define the domain and range of
variables.
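To make the formulation concrete, the following sketch shows how Model (<ref>) could be assembled with the Gurobi Python interface, which is the solver interface used later in the computational study. The data containers (K, S, S_k, V_s, q, Q, c) are illustrative placeholders and not the authors' actual implementation.

import gurobipy as gp
from gurobipy import GRB

def build_dlpp_model(K, S, S_k, V_s, q, Q, c):
    # K: commodities, S: sort pairs, S_k[k]: compatible sort pairs of commodity k,
    # V_s[s]: allowed trailer types on sort pair s, q[k]: commodity volume,
    # Q[v]: trailer capacity, c[v]: per-trailer cost (c[v] = Q[v] in the experiments).
    m = gp.Model("dlpp")
    # x[k, s, v]: volume of commodity k loaded into trailers of type v on sort pair s
    x = m.addVars(((k, s, v) for k in K for s in S_k[k] for v in V_s[s]),
                  lb=0.0, name="x")
    # y[s, v]: number of trailers of type v installed on sort pair s
    y = m.addVars(((s, v) for s in S for v in V_s[s]),
                  vtype=GRB.INTEGER, lb=0, name="y")
    # every commodity must be fully assigned to its compatible sort pairs
    m.addConstrs((gp.quicksum(x[k, s, v] for s in S_k[k] for v in V_s[s]) == q[k]
                  for k in K), name="assign")
    # volume on each (sort pair, trailer type) respects the installed capacity
    m.addConstrs((gp.quicksum(x[k, s, v] for k in K if s in S_k[k]) <= Q[v] * y[s, v]
                  for s in S for v in V_s[s]), name="capacity")
    # minimize the total trailer cost
    m.setObjective(gp.quicksum(c[v] * y[s, v] for s in S for v in V_s[s]), GRB.MINIMIZE)
    return m, x, y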
§ GOAL-DIRECTED OPTIMIZATION
The optimization model of the DLPP has a large number of
symmetries. Figure <ref> depicts a simple instance with multiple optimal solutions that are operationally different from one another, yet equivalent from the perspective of Model <ref> as they require the same number of trailers of the same type. This is because, in Model <ref>, commodities are indifferent to the
sort pairs they are assigned to, as the volume allocation decisions
(x-variables) do not incur any cost.
Such symmetries are undesirable for many reasons. Paramount among them
are the realities in the field: the model is intended to be used and
validated by planners. If small variations of inputs produce
fundamentally different solutions, planners are unlikely to trust the
model. Indeed, since the model is used multiple times a day, it is
important to ensure that the successive optimal solutions are as
consistent as possible with each other. Fortunately, in practice, a
reference plan is always available and the DLPP should ideally
produce optimal solutions that are as close as possible to the
reference plan.
This section explores how to refine the model presented earlier to
satisfy this requirement, and presents a Goal-Directed Optimization
(GDO) approach to the DLPP. It uses a reference plan
to eliminate symmetries and ensure that the solution is compatible
with the planner-in-the-loop reality in the field. The use of a
reference plan eliminates many symmetries but not all. To break more
symmetries, the GDO approach also adds a flow diversion cost
that captures the cost of using alternate paths instead of the primary
path. For instance, in the example depicted in Figure <ref>,
only the solution shown in Figure <ref> is optimal following our assumptions. The flow
diversion cost is chosen to be proportional to the distance between
the next alternate terminal and the destination of the commodity, as
there is incentive to move commodities as close as possible to their
destination. For example, suppose a commodity k is in
Atlanta and is destined for Chicago. Let the primary
next terminal be Louisville (with flow diversion cost 0),
alternate 1 be Nashville, and alternate 2 be
Memphis. As Nashville is closer to Chicago than
Memphis, the flow diversion cost of allocating volume to
alternate 1 is lower than that of alternate 2. As a result, the
GDO approach has at its disposal a reference plan γ, where
γ_s,v denotes the number of trailers of type v planned to
operate on sort pair s. It also leverages the flow diversion
cost d^k_s that denotes the cost of allocating a per-unit volume
of commodity k ∈ K to a compatible sort pair s ∈ S^k.
The GDO approach first solves Model <ref>
to obtain the optimal objective value Z^*. It then solves a second
MIP Model to bias the trailer decisions so that they are as close as
possible to the reference plan and minimize diversion costs.
The second-stage model is defined as follows:
x,yMinimize ∑_s ∈ S∑_v ∈ V_s| y_s,v - γ_s,v| + ϵ∑_k ∈ K∑_s ∈ S_k∑_v ∈ V_sd^k_s x^k_s,v
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_k ∈ K:s ∈ S^k x^k_s,v≤ Q_v (y_s,v), ∀ s ∈ S, v ∈ V_s,
∑_s ∈ S∑_v ∈ V_s c_v y_s,v≤ Z^*,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
y_s,v∈ℤ_≥ 0 ∀ s ∈ S, v ∈ V.
The objective function (<ref>) minimizes the weighted sum of the Hamming
distance of the trailer decisions from the reference plan γ and
the flow diversion costs. The weight ϵ for the flow diversion cost is sufficiently small such that the cost does not dominate over the Hamming distance term in the objective function. The purpose of the flow diversion cost in (<ref>) is to break the symmetry between solutions with the same Hamming distance; it biases the solution to have more volume allocated to primary sort pairs than alternate sort pairs. Constraints (<ref>),
(<ref>), (<ref>) and (<ref>) are
the same as in Model <ref>. Constraint
(<ref>) ensures that the optimal solution does not use more
trailer capacity than Z^*. Note that the objective function is
non-linear due to the Hamming distance term. It can be linearized by
replacing | y_s,v - γ_s,v| with new variables
w_s,v≥ 0 (s ∈ S, v ∈ V_s) and imposing the following
constraints
y_s,v - γ_s,v≤ w_s,v ∀ s ∈ S, v ∈ V_s,
γ_s,v - y_s,v≤ w_s,v ∀ s ∈ S, v ∈ V_s.
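A sketch of this second-stage model in the same Gurobi Python style is given below. Here gamma denotes the reference plan, d the flow diversion costs, Z_star the optimal first-stage cost, and eps the small diversion-cost weight; all names and data structures are assumptions for illustration only.

import gurobipy as gp
from gurobipy import GRB

def build_gdo_stage2(K, S, S_k, V_s, q, Q, c, gamma, d, Z_star, eps):
    m = gp.Model("gdo_stage2")
    x = m.addVars(((k, s, v) for k in K for s in S_k[k] for v in V_s[s]), lb=0.0, name="x")
    y = m.addVars(((s, v) for s in S for v in V_s[s]), vtype=GRB.INTEGER, lb=0, name="y")
    # w[s, v] linearizes the Hamming-distance term |y[s, v] - gamma[s, v]|
    w = m.addVars(((s, v) for s in S for v in V_s[s]), lb=0.0, name="w")
    m.addConstrs((gp.quicksum(x[k, s, v] for s in S_k[k] for v in V_s[s]) == q[k] for k in K))
    m.addConstrs((gp.quicksum(x[k, s, v] for k in K if s in S_k[k]) <= Q[v] * y[s, v]
                  for s in S for v in V_s[s]))
    # do not exceed the optimal trailer cost of the first stage
    m.addConstr(gp.quicksum(c[v] * y[s, v] for s in S for v in V_s[s]) <= Z_star)
    # linearization of the absolute value
    m.addConstrs((y[s, v] - gamma[s, v] <= w[s, v] for s in S for v in V_s[s]))
    m.addConstrs((gamma[s, v] - y[s, v] <= w[s, v] for s in S for v in V_s[s]))
    m.setObjective(w.sum() + eps * gp.quicksum(d[k, s] * x[k, s, v]
                                               for k in K for s in S_k[k] for v in V_s[s]),
                   GRB.MINIMIZE)
    return m, x, y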
Figure <ref> illustrates the sensitivity of the
trailer decisions (y-variables) subject to increases in the total
commodity volume (∑_k ∈ Kq^k) (x-axis) for the two models:
Model <ref> (red plot) and the GDO approach (blue
plot). As the total commodity volume increases, Model
<ref> exhibits solutions where the trailer decisions
fluctuate dramatically between 1 and 6 trailers for sort pair 1,
and between 1 and 5 trailers for sort pair 2. However, when
using GDO, the trailer decisions are more consistent: they vary
between 1 and 2 trailers on sort pair 1, and remain constant at 2 trailers on sort pair
2.
§ LEARNING-BASED OPTIMIZATION PROXIES
The GDO approach produces consistent solutions to the DLPP, but it is
too slow to be used with planners in the loop. This section proposes a
Machine Learning (ML) approach to the DLPP. Its goal is to move some
of the optimization burden offline and produce high-quality solutions
in real time. More precisely, the approach uses the concept of optimization proxies to produce high-quality solutions to an
optimization problem by learning its input/output mapping (see, for
instance,
(<cit.>)
for an overview of this concept and its applications).
The overall methodology underlying optimization proxies is depicted in Figure
<ref>. It consists of two stages,
* an offline stage where an ML model learns the input/output
mapping of the optimization problem;
* an online stage which is used in real time: it receives
an instance, applies the ML model to predict a (possibly infeasible) solution
and uses a repair procedure to deliver a feasible solution.
For the DLPP, the ML model learns the mapping between the (input)
commodity volumes and the (output) trailer decisions; in other words, given the
commodity volumes, the ML model predicts trailer decisions for every
sort pair. The trained ML model may sometimes underestimate
the number of trailers on some sort pairs when executed in real time. To circumvent this issue, the feasibility
restoration step projects the predicted trailer decisions back into the
feasible region; in addition, the feasibility restoration also
computes the volume allocation on the sort pairs. A key element in the
ML training is data augmentation that complements historical
data by generating realistic instances through input perturbations. The ML model formulation is introduced and discussed in more detail in what follows.
§.§ The ML Model Formulation
This section defines a machine learning model f, parameterized by
θ, that maps the input parameters, i.e., the commodity volume,
to the optimal trailer decisions:
(<ref>)-(<ref>).
f_θ: ℝ_≥ 0^|K|⟶ℤ_≥ 0^|S| × |V|
𝐩⟼𝐲
The ML inputs are assumed to be taken from a distribution 𝒫
that captures the actual instances.
Given a dataset of input parameters {𝐩_i}_i ∈ N∼𝒫, where N is the set of instances, the parametrization θ^* can be obtained by minimizing
the empirical risk shown in (<ref>), where
(<ref>) denotes the optimization problem
solved by Model <ref>, and l denotes the loss function
that measures the L1-distance of the predicted
(f_θ(𝐩)) and optimal (y^*) trailer decisions.
θMinimize 1/N∑_i∈ N l(f_θ(𝐩_i), 𝐲_i^*)
subject to (𝐱_i^*, 𝐲_i^*) = arg min_𝐱, 𝐲∈𝒞(𝐩_i) c(𝐱, 𝐲),
It is important to highlight that an ML model could be used to predict
commodity volume allocation on the sort pairs (x-variables) instead
of the trailer decisions (y-variables). This may seem to be a good
approach since, after predicting volume allocation, one can easily
recover the trailer decisions and hence a feasible solution, by
setting y_s,v=⌈∑_k ∈ K:s ∈ S^k
x^k_s,v/Q_v⌉ ∀ s ∈ S, v ∈ V_s.
However, this approach has some shortcomings. First, the output
dimension is significantly larger than the input dimension, which makes
it very difficult to develop an effective ML model even for the smallest
instances. Second, recovering trailer decisions is very sensitive to
the predicted volume allocation decisions. Consider an example where
100 cubic volume is allocated to a sort pair which requires two
trailers, each with capacity 50 cubic volume, in the optimal solution.
If the ML model predicts the volume on the sort pair to be 100.5,
then the total number of trailers required is ⌈100.5/50⌉ = 3, which generates a poor solution in terms of the objective function value of Model <ref>.
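The sensitivity of this rounding step can be illustrated with a two-line computation (numbers taken from the example above):

import math
Q_v = 50                              # trailer capacity in cubic volume
print(math.ceil(100.0 / Q_v))         # 2 trailers for the true allocation
print(math.ceil(100.5 / Q_v))         # 3 trailers for a slightly over-predicted allocation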
Experimental results confirmed that it is beneficial to learn the
mapping from input parameters to the trailer decisions rather than the
volume decisions. The trailer decisions 𝐲∈ℤ_≥ 0^|S| × |V| are more aggregated than the
volume allocation decisions ℝ_≥ 0^|K| × |S|
× |V|. The benefits come from the significant reductions in
output dimensionality and variability. In addition, as presented in
Section <ref>, once the trailer
decisions are known, restoring the feasibility of the solution is
relatively easy, as the feasibility restoration MIP has a small number
of binary decision variables and is therefore quick to solve.
The ML model used in this paper is a deep neural network as illustrated in Figure
<ref>. It uses a Multi-Layer Perceptron (MLP),
where each dense layer is followed by batch normalization (<cit.>),
dropout (<cit.>), and a ReLU (Rectified Linear Unit) activation.
It maps the input parameter 𝐩
to the flattened trailer decision 𝐲. The
last ReLU guarantees that the output of the neural network is
non-negative. The compatible trailer decisions 𝐲
are then generated by reshaping the flattened decision
𝐲 and masking it with the compatible
trailer mask 𝐦, where m_s, v = 1 indicates that
equipment type v ∈ V is compatible with sort pair s ∈ S. In the training
phase, the loss function is computed by measuring the distance between the
predicted compatible trailer decisions 𝐲 and the
optimal trailer decisions; specifically, this work uses the smooth l_1 loss.
The loss is used to update the parameters of the MLP using stochastic gradient descent (<cit.>)
with backpropagation (<cit.>). At inference time (i.e., in real time), the
compatible trailer decisions are rounded to an integer value.
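A possible PyTorch realization of this architecture is sketched below. The layer sizes, dropout rate, and tensor shapes are assumptions made for illustration; only the overall structure (dense layers followed by batch normalization, dropout, and ReLU, a final ReLU, the compatibility mask, and the smooth-l_1 loss) follows the description above.

import torch
import torch.nn as nn

class TrailerProxy(nn.Module):
    # Maps commodity volumes p (dimension |K|) to trailer decisions y (shape |S| x |V|).
    def __init__(self, n_commodities, n_sort_pairs, n_trailer_types,
                 hidden_dim=256, n_layers=4, dropout=0.1):
        super().__init__()
        layers, in_dim = [], n_commodities
        for _ in range(n_layers):
            layers += [nn.Linear(in_dim, hidden_dim),
                       nn.BatchNorm1d(hidden_dim),
                       nn.Dropout(dropout),
                       nn.ReLU()]
            in_dim = hidden_dim
        layers += [nn.Linear(in_dim, n_sort_pairs * n_trailer_types),
                   nn.ReLU()]  # the final ReLU keeps the predictions non-negative
        self.net = nn.Sequential(*layers)
        self.out_shape = (n_sort_pairs, n_trailer_types)

    def forward(self, p, mask):
        # mask[s, v] = 1 if trailer type v is compatible with sort pair s, 0 otherwise
        y = self.net(p).view(-1, *self.out_shape)
        return y * mask

# Training step sketch with the smooth-l1 loss against the GDO trailer decisions y_star:
#   loss = nn.SmoothL1Loss()(model(p_batch, mask), y_star_batch)
#   loss.backward(); optimizer.step()
# At inference time, the masked predictions are rounded to integer trailer counts.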
§.§ MIP-based Feasibility Restoration
The proposed ML model predicts the number of trailers
y_s,v for each sort pair s ∈ S and equipment type v
∈ V_s. Let the total trailer capacity installed on each sort pair
s ∈ S be Λ_s = ∑_v ∈ V_sQ_v
(y_s,v). The system of equations
∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s, ∀ s ∈ S,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
is then used to determine the volume of every commodity k ∈ K
allocated to its compatible sort pairs. However, it is possible that
some of the sort pairs do not have sufficient trailer capacity because
the ML model may underestimate the capacity. In that case,
(<ref>) is infeasible. The following linear program
zMinimize ∑_s ∈ S z_s
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v - z_s ≤Λ_s, ∀ s ∈ S,
x^k_s,v,z_s ≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
can be used to determine the sort pairs with trailer capacity violations.
Its objective function (<ref>) minimizes the capacity
violations on the sort pairs. Constraints (<ref>)
ensure that total volume of every commodity is assigned to compatible
sort pairs. Constraints (<ref>) determine the sort
pair capacity violations. Constraints (<ref>) define the
domain and range of variables. When Model <ref>
has an optimal objective value equal to 0, it has recovered a
feasible solution to Model <ref>. Otherwise,
additional trailer capacity is required on sort pairs with capacity
violations.
This paper proposes a two-stage MIP-based feasibility restoration
process. In the first stage, Model <ref> is
solved to obtain an optimal solution z^*. Let the set of sort pairs
with trailer capacity violation be S = {s ∈ S: z^*_s >
0}. The feasibility restoration then identifies the cheapest
equipment v to serve the excess volume on sort pair s ∈S. The extra trailer capacity is given by ξ_s =
( ⌈z_s/Q_v⌉ * Q_v ) and the option to add the extra capacity to sort pair s ∈S is
added using a binary decision variable. The second stage
solves the following MIP model:
uMinimize ∑_s ∈S u_s ξ_s
subject to ∑_s ∈ S^k∑_v ∈ V_s x^k_s,v = q^k, ∀ k ∈ K,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s + u_s ξ_s, ∀ s ∈S,
∑_v ∈ V_s∑_k ∈ K:s ∈ S^k x^k_s,v≤Λ_s, ∀ s ∈ S\S,
x^k_s,v≥ 0 ∀ k ∈ K, s ∈ S^k, v ∈ V_s,
u_s ∈{0,1} ∀ s ∈S,
The objective function in (<ref>) minimizes the total
trailer capacity added on each sort pair. Constraints
(<ref>) ensure that the commodity volume is
assigned to the compatible sort pairs. Constraints
(<ref>) and (<ref>) ensure
that commodity volume allocated to each sort pair respects the trailer
capacity. Constraints (<ref>) and
(<ref>) define the domain and range of variables. The
number of binary variables in this model is at most the number of sort
pairs in the instance, i.e. | S |. The main difference
between Model <ref> and Model
<ref> is that Model <ref> uses
binary variables u_s instead of continuous variable z_s, to denote
the option of adding extra trailer capacity ξ_s on sort pairs s
∈S. When u_s = 1, extra trailer capacity is added to
sort pair s.
After solving Model (<ref>), adding ξ_s
capacity on every sort pair s ∈S yields a feasible
solution to Model (<ref>). However, the goal is to
use Model (<ref>) to obtain a better feasible
solution. Consider an example with a set of commodities, all of which
can be allocated to either of the two sort pairs s_1 and s_2, and a
single trailer type with a capacity of 2 units. Suppose the optimal solution of Model
<ref> is z_s_1=z_s_2=1. In this case, a
feasible solution to Model <ref> can be recovered by
adding two trailers, one on each sort pair. However, Model
(<ref>) (which has two binary variables) yields a
solution with only one trailer on any one of the two sort pairs. Algorithm 1 provides a summary of the feasibility restoration procedure.
§.§ Value of Symmetry-Breaking Data Generation for Learning
The optimization proxies are trained using the solutions provided by
the GDO which uses the same reference plan for all instances of a given terminal. As
a result, the proxies are consistent by design and do not rely on a
reference plan. The GDO approach is not only critical for
environments with planners in the loop, but it also has an additional
benefit: it makes the learning problem easier. This section provides
theoretical insights about why the data generation using GDO results
in better function approximation than data generation from Model
(<ref>) alone.
Observe that the solution trajectory associated with different
instances can often be effectively approximated by piecewise linear
functions, as depicted in Figure <ref>. This
approximation becomes exact in the case of linear programs and mixed
integer programs when the input reflects incremental changes in the
objective coefficients or right-hand sides of the constraints. This
paper utilizes ReLU-based neural networks to approximate the solutions
of optimization problems. These neural networks are capable of
capturing piecewise linear functions, which makes them well-suited for
this purpose. However, the ability to represent a target piecewise
linear function accurately depends on the model capacity. As the
complexity of the function grows with more pieces, a larger model is
required to obtain a high-quality approximation.
(Model Capacity) (<cit.>)
Let f: ℝ^d →ℝ be a piecewise linear function with p pieces.
If f is represented by a ReLU network with depth k+1, then it must have size at least 1/2kp^1/k-1. Conversely, any piecewise linear function f that is represented by a ReLU network of depth k+1 and size at most s, can have at most (2s/k)^k pieces.
Due to the symmetry in optimal solutions of Model
(<ref>), as shown in
Figure <ref>, the solution trajectory varies
dramatically. Theorem <ref> states that the approximation
of a more volatile solution trajectory (i.e., a piecewise linear
function with more pieces) requires a deep neural network with greater
capacity, which makes the learning task more challenging. In other
words, given a fixed-size ReLU network, higher variability of the
solution trajectory typically results in higher approximation
errors. These errors are bounded by the following theorem.
(Approximation Error) (<cit.>)
Suppose a piecewise linear function f_p', with p' pieces each of width h_k for k ∈ [p'], is used to approximate a piecewise linear function f_p with p pieces, where p' ≤ p. Then the approximation error
‖ f_p - f_p'‖_q ≤1/2h^2_max∑_1≤ k ≤ p|L_k+1 - L_k|,
holds, where L_k is the slope of f_p on piece k and h_max is the maximum width of all pieces.
Theorem <ref> relates the approximation error of
a piecewise linear function with the total variation of its slopes.
It implies that the data generated using GDO (which exhibits lower
sensitivity than the data from Model (<ref>))
should facilitate learning and result in lower approximation errors.
§ GREEDY HEURISTIC (GH)
This section proposes a greedy heuristic to construct feasible
solutions for Model (<ref>) and benchmark the quality
of the solution obtained from optimization proxies. This heuristic
iteratively solves linear programs (LP) until all the y-variables
are integers, i.e., they satisfy the integrality tolerance
(10^-5). In each iteration, the algorithm identifies the fractional
variable with the minimum (⌈y_s,v⌉-y_s,v) value, updates the lower bound of variable y_s,v
to ⌈y_s,v⌉, and re-solves the LP as
shown in Algorithm <ref>. The main idea is that
for a given sort pair s ∈ S and trailer type v ∈ V_s, if
y_s,v has a fractional value very close to its ceiling ⌈y_s,v⌉, this indicates that there is
enough commodity volume to have at least ⌈y_s,v⌉ trailers on the sort pair. GH
greedily adjusts the lower bound of one y-variable in each
iteration until all y-variables are integral, in which
case a feasible solution to Model (<ref>) has been
found.
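An outline of GH in Python follows; model is assumed to be the LP relaxation of Model (<ref>) built with Gurobi, and y the dictionary of its (now continuous) trailer variables. The exact bookkeeping is illustrative.

import math

INT_TOL = 1e-5

def greedy_heuristic(model, y):
    while True:
        model.optimize()
        # collect trailer variables whose LP value is still fractional
        frac = {key: var.X for key, var in y.items()
                if min(var.X - math.floor(var.X), math.ceil(var.X) - var.X) > INT_TOL}
        if not frac:
            # all y-variables satisfy the integrality tolerance: feasible solution found
            return {key: round(var.X) for key, var in y.items()}
        # pick the variable closest to its ceiling and raise its lower bound
        key = min(frac, key=lambda k: math.ceil(frac[k]) - frac[k])
        y[key].lb = math.ceil(frac[key])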
§ COMPUTATIONAL STUDY
This section reports a series of experiments conducted on real-life
instances provided by our industry partner. Section <ref>
presents statistics for the problem instances. Section
<ref> discusses the experimental setup for the
optimization models and proxies. Section
<ref> evaluates the computational
performance of the optimization proxies against the greedy heuristic
(GH) and the optimization models (Model (<ref>) and GDO). Section <ref> evaluates the benefits
of GDO for learning.
§.§ Instances
The experiments are based on industrial instances for three different
terminals in the service network of our industry partner: medium (M),
large (L), and extra-large (XL). Each category
has a reference plan for a
terminal on a particular day as provided by our industry partner. Table <ref> reports the
statistics of the instances: #Arcs denoting the total
number of unique outgoing sort pairs or arcs from the terminal,
#Commodities denoting the number of commodities that are sorted at the terminal and loaded into
outbound trailers (rounded to the nearest
multiple of 1,000), and #Loads denoting the number of planned loads in the reference load plan for the
corresponding terminals (rounded to the nearest multiple of 50). Note that, in addition to the planned loads,
small package companies typically operate empty trailers on
the outbound sort pairs for trailer repositioning. This study only
considers trailers that are filled with commodity volume and does not
include empty trailer capacity.
It is worth highlighting that the XL
instance operates more volume and capacity than the M and L instances
combined. Table <ref> reports some statistics for Model
(<ref>) for the three instances: #Integer-Vars and
#Continuous-Vars denoting the number of integer and continuous
decision variables, respectively, and #Constraints denoting the total number
of constraints.
§.§ Experimental Setup
Parameters for GDO
The cost of assigning commodity k to a sort pair s ∈ S^k
(denoted by d^k_s) is defined as
d^k_s =
0, if s is primary flow path for commodity k
(α^k_s + 10*β^k ) otherwise
β_k =
1, if commodity k belongs to one-day service class
2, if commodity k belongs to two-day service class
3, if commodity k belongs to three-day service class
4, otherwise
ϵ = 1/((max_k ∈ K, s ∈ S^k(α^k_s + 10*β^k)) ∑_k ∈ K q^k)
where α^k_s denotes the distance between the alternate next
terminal and the destination of commodity k ∈ K for sort s, and parameter β_k depends on the commodity service level.
Recall that a commodity k ∈ K is defined as all packages with the
same destination and service class. The term α^k_s ensures that
two commodities with different destinations have different flow
diversion costs. However, two commodities with different service classes can
have the same destination; β^k ensures that such commodities have different flow diversion costs for the same
destination. The weight for the flow diversion cost is defined in (<ref>).
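The cost computation can be summarized by the short routine below; alpha, primary, and service_class are assumed inputs, and the encoding of the service class as a number of days is an assumption made for illustration.

def diversion_costs(K, S_k, primary, alpha, service_class, q):
    # beta[k]: 1, 2, or 3 for one-, two-, or three-day service; 4 otherwise (assumed encoding)
    beta = {k: service_class[k] if service_class[k] in (1, 2, 3) else 4 for k in K}
    d = {}
    for k in K:
        for s in S_k[k]:
            d[k, s] = 0.0 if s == primary[k] else alpha[k, s] + 10 * beta[k]
    # eps keeps the diversion-cost term from dominating the Hamming-distance term
    eps = 1.0 / (max(alpha[k, s] + 10 * beta[k] for k in K for s in S_k[k])
                 * sum(q[k] for k in K))
    return d, eps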
Data Generation for ML Model
The dataset is generated by perturbing the input parameters of
real-life instances provided by the industry partner with up to
20,000 commodities. Denote by 𝐩^ref the volume of
different commodities in a given reference plan. The DLPP instances are
generated by perturbing this reference commodity volume. Namely, for
instance i, 𝐩^(i) = γ^(i)×η^(i)×𝐩^ref, where γ^(i)∈ℝ denotes a
global scaling factor and η∈ℝ^|K| is the commodity
level multiplicative white noise. γ is sampled from a uniform
distribution U[80%, 120%], and for every commodity η is
sampled from a normal distribution with mean equal to 1 and standard deviation
of 0.05. For every category, 10,000 instances are generated, and a commercial solver is used to solve the GDO model for each instance. The dataset of
10,000 instances for each category is then split as follows: 80%
for the training set, 10% for the validation set, and 10% for
the test set.
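The perturbation scheme can be reproduced with a few lines of NumPy; the array layout and the seed handling are assumptions.

import numpy as np

def generate_instances(p_ref, n_instances=10_000, seed=0):
    # p_ref: reference commodity volumes (length |K|)
    rng = np.random.default_rng(seed)
    gammas = rng.uniform(0.8, 1.2, size=n_instances)              # global scaling factor
    etas = rng.normal(1.0, 0.05, size=(n_instances, len(p_ref)))  # commodity-level noise
    return gammas[:, None] * etas * np.asarray(p_ref)[None, :]

# Each generated instance is then labeled by solving the GDO model, and the 10,000
# instances per category are split 80%/10%/10% into training, validation, and test sets.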
Performance Metrics
The performance metrics in this study are designed to compare the
total trailer cost and the consistency of the
solutions generated by the optimization proxies against the total
trailer cost from Model (<ref>) and then the consistency
of the solution from Model (<ref>) of the GDO approach. Given
an instance 𝐩 with optimal trailer decision 𝐲^*
and a feasible trailer decision 𝐲̂, the optimality gap
is defined as
Gap = (Ẑ - Z^*)/|Z^*|,
where Z^* is the optimal trailer cost of Model
(<ref>), and Ẑ is the trailer cost computed
from 𝐲̂. Recall that the total trailer cost does not
increase in Model (<ref>) of the GDO approach due to constraint
(<ref>). If Model (<ref>) cannot be solved
to optimality in 30 minutes, then the best lower bound obtained from
the solver run is used to compute the optimality gap instead of Z^*.
This paper proposes two metrics to quantify the consistency. The
first one is a normalized distance (Δ) between the optimized load plan 𝐲̂ and the reference load plan
𝐲, using shifted
geometric means, as given by
Δ_s,v =
|ŷ_s,v - y_s,v| if y_s,v = 0
|ŷ_s,v - y_s,v|/y_s,v, otherwise ∀ s ∈ S, v ∈ V_s
Δ = exp(1/|S||V|∑_s∈ S∑_v ∈ V_slog (Δ_s,v + 0.01) ) - 1%.
From a planner perspective, this metric captures the deviation of the
optimized load plan with respect to the reference load plan. As mentioned
in Section <ref>, load plans that are as
close as possible to the reference plan are highly desirable.
The second metric is the total variation of the set of trailer decisions
across a set of N instances (for each terminal). For simplicity,
instances are ordered such that ∑_k ∈ K q^k_i+1≥∑_k
∈ K q^k_i ∀ i ∈{1,2,⋯,N-1}. The goal is to
analyze the variation in trailer decisions on sort pairs when the total
commodity volume is incrementally increased from ∑_k ∈ K
q^k_1 to ∑_k ∈ K q^k_N. Let {𝐲_i}^N_i=1
denote the set of trailer decisions of N instances. The total
variation is defined as:
TV({𝐲_i}^N_i=1) = ∑_i=1^N-1𝐲_i+1 - 𝐲_i_p,
where p=2.
This metric captures the sensitivity of the models, i.e., the impact of changes in total commodity volume on the trailer decisions of different sort pairs. Lower total variation implies that the trailer decisions are less sensitive to changes in total commodity volume. Planners are more amenable to such solutions because fewer (but effective) load plan modifications reduce the solution evaluation effort and are also easier to execute in practice.
The computational efficiency of different models is measured by the
training time of optimization proxies including the data-generation
time and the inference time. Unless specified otherwise, the average
metrics on the test dataset are reported in shifted geometric means:
μ_s(x_1, …, x_n) = exp(1/n∑_i log (x_i + s) ) - s,
where the shift is set as 0.01 for the optimality gap and normalized
distance, 1 second for the inference/solving time, and 1 cube for
the distance between the optimized load plan and the reference load plan.
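For reference, the metrics can be computed as sketched below; the handling of zero entries in the reference plan and the percentage conversion follow the formulas above, but the exact conventions are assumptions.

import numpy as np

def shifted_geom_mean(values, shift):
    values = np.asarray(values, dtype=float)
    return float(np.exp(np.mean(np.log(values + shift))) - shift)

def normalized_distance(y_hat, y_ref):
    # y_hat, y_ref: arrays of shape (|S|, |V|) with optimized and reference trailer counts
    diff = np.abs(y_hat - y_ref)
    delta = np.where(y_ref == 0, diff, diff / np.where(y_ref == 0, 1, y_ref))
    return shifted_geom_mean(delta.ravel(), 0.01) * 100  # reported as a percentage

def total_variation(y_list):
    # y_list: trailer decisions of N instances ordered by increasing total commodity volume
    return float(sum(np.linalg.norm((y_list[i + 1] - y_list[i]).ravel(), ord=2)
                     for i in range(len(y_list) - 1)))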
Implementation Details
All optimization problems are formulated using the Gurobi Python interface,
and solved with Gurobi 9.5 (<cit.>) with 8 CPU threads and
default parameter settings, except for MIPFocus which is set
to a value of 3. All deep learning models are implemented using
PyTorch (<cit.>) and trained using the Adam
optimizer (<cit.>). The ML models are multi-layer
perceptrons and are hyperparameter-tuned using a grid search with
learning rate in {10^-1, 10^-2}, number of layers in {3, 4,
5}, and hidden dimension in {128, 256}. For each system, the best
model is selected on the validation set and its performance on the
test set is reported. Experiments are conducted on dual Intel Xeon
[email protected] machines running Linux, on the PACE Phoenix cluster
(<cit.>). The training of ML models is performed on Tesla
V100-PCIE GPUs with 16GBs HBM2 RAM.
§.§ Computational Performance of the Optimization Proxies
This section presents numerical experiments used to assess the
performance of the proposed optimization proxies (Proxies) against the
optimization models (GDO) and the greedy heuristic (GH).
Optimality Gap
Table <ref> presents the optimality gaps of various
approaches, including the results of Model (<ref>)
under various time constraints. In the table, the columns under “Gap
of Model (<ref>)” denote the optimality gaps of the
model under various time limits. Similarly, columns Gap for GH
and Proxies denote optimality gaps for GH and the
optimization proxies. In addition, columns Time(s) denote the solving
times for GH and Proxies.
Recall that Model (<ref>) produces solutions that
exhibit considerable variability when the total commodity volume is
perturbed, as detailed in Tables <ref> and <ref>. As such, it is unlikely to be practical in
scenarios with planners in the loop. Hence, the table compares the
optimization proxies and the heuristics GH with an “idealized”
benchmark. With this caveat in place, first observe the performance of the optimization
proxies under tight time constraints: they generate solutions with
low optimality gaps and may be up to 10 to 50 times faster than GH,
and around 10 times faster than Model (<ref>) solved
with Gurobi. Second, although Model (<ref>)
efficiently produces solutions with low optimality gaps, closing the
optimality gap proves to be a significant challenge due to the poor
LP relaxation. The performance of GH is also impeded by the
inefficiencies of the LP relaxation, as it solves the LP relaxations
over many iterations; it takes GH around 30 iterations for
terminal M, 200 iterations for terminal L, and more than 1000
iterations for terminal XL to generate a feasible solution.
Consistency
Tables <ref> and <ref> report
the consistency of solutions obtained from different models in terms
of the normalized distance to the reference load plan and the total
variation of the generated solutions. As GDO requires running Model
(<ref>) and Model (<ref>) sequentially,
these experiments set the same time limits for the two stages. For
example, if a time limit of 30 seconds is set, GDO runs Model
(<ref>) for 30 seconds and subsequently runs Model
(<ref>) using the best upper bound obtained from Model
(<ref>) for another 30 seconds.
The high-level result is that proxies are ideally suited to
produce consistent plans. Table <ref> shows
that the proxies accurately predict, in a few seconds, the results
produced by GDO after an hour. Furthermore, Table <ref> shows that proxies produce solutions that have at
least an order of magnitude smaller total variations in trailer
decisions than both GDO and GH. Proxies produce load plans that
exhibit great stability with changing total commodity volume.
The fact that proxies improve the consistency of the GDO plans is
especially interesting: it means that the optimization proxies, by
virtue of the learning step, avoid oscillations present in the GDO
approach. Of course, they do so at a small loss in objective value
(if, for instance, the GDO model is allowed a minute to run instead of
the 2.5 seconds taken by the proxies). But the consistency benefits
are substantial as shown in Table <ref>. The
proxies also provide dramatic improvements over the GH heuristic. Note
also that GDO itself brings significant improvements over Model
(<ref>).
In Table <ref>, observe that the normalized
distance for the GDO solution for the large (L) instance first increases from
0.45% to 11.40%, and then follows the expected
decreasing trend as the computational time limit increases. Recall
that GDO first minimizes the total trailer capacity required in Model
(<ref>) and then solves Model (<ref>) to
minimize the Hamming distance of the solution (and the flow diversion
cost) from the reference load plan. As shown in Table <ref>,
the feasible solution obtained from Model <ref> is of
poor quality and is closer to the reference load plan in terms of the
number of trailers. Hence, the normalized distance value is small. As
the computational time limit increases, the feasible solution obtained
from Model (<ref>) exhibits a reduced total trailer
capacity compared to the reference load plan. Hence, the normalized
distance increases as the model tries to find more cost effective solutions. With further increases in computational time, the
normalized distances decrease as the solver finds a better solution
with a smaller Hamming distance using Model <ref>.
It is also interesting to observe that the total trailer capacity
predicted by the ML model, i.e., the capacity provided by all the
trailers predicted to be needed by the ML model, is very close to a
feasible solution. Only a few trailers must be added to recover a
feasible solution. Figure <ref> shows the
distribution of the predicted trailer capacity as a percentage of the
total trailer capacity in the feasible solution generated by the
proxies for each type of terminal. The results show that more than
98% of the trailer capacity is predicted correctly and less than
2% comes from the feasibility restoration step of Algorithm <ref>. More
accurate predictions might even result in a feasibility restoration
model that has fewer decision variables and hence requires less
computational time to produce a feasible solution. Appendix <ref> shows that one of the key benefits of the optimization proxies is that they replace a model with a large number of integer decision variables by a prediction model, and only require solving a relatively simple feasibility restoration model with a small number of binary variables.
§.§ Value of Symmetry-breaking Data Generation
As discussed in Section <ref>, the optimal (or near-optimal)
trailer decisions of Model (<ref>) are very sensitive
to changes in total commodity volume due to the presence of symmetries
in the model and the randomized nature of MIP solvers. The solutions
to Model (<ref>) are reported in red in the
plots of Figure <ref>, which illustrates this
behavior. This is not desirable in environments with
planners-in-the-loop, where similar solutions are expected for similar
instances. The GDO approach is much more consistent and its solutions
are shown in orange in the plots of Figure <ref>. The ML component of the optimization proxy uses GDO as
the expert to imitate and learn solution patterns from. As
shown in blue in the plots of Figure <ref>,
the ML model is effective in producing solutions that are close to the
solutions generated by GDO. It should be highlighted that the GDO
approach has two benefits. First, it generates consistent solutions that
are amenable to planner-in-the-loop environments. Second, it makes the
learning problem much more tractable. Designing an ML model for
(<ref>) is really challenging due to its high
sensitivity to small input changes: an ML model trained to learn
Model (<ref>) would typically return an average value.
§ BENEFITS OF DYNAMIC LOAD PLANNING, OPTIMIZATION, AND LEARNING
This section discusses the benefits of the dynamic load planning approach,
optimization, and learning. The load planning methodology studied in
this paper is based on the concepts of primary and alternate flow
paths. With the availability of optimization models, it is possible to
evaluate the benefits of this approach for load consolidation, at
least from a local perspective.
The results in the paper also make it possible to evaluate the
benefits of optimization compared to human planners. During
operations, planners typically assign commodities to the primary flow
paths. If there is no capacity available on the primary flow path,
then planners allocate the remaining volume on the first alternate
flow path and, if there is no capacity on the first alternate, they
turn to the second alternate flow path, and so on. Observe that this
is a greedy strategy of loading commodity volume on trailers, and
hence, it is myopic in nature. A comparison between such a greedy approach
and the optimization models helps assess the value of optimization. Of
course, the optimization models are too slow to be used with planners
in the loop. The optimization proxies proposed in the paper are the
technology enabler for translating the benefits of optimization into
practice.
The first results in this section aim at quantifying the value of a
network with alternate flow paths relative to a network with primary
flow options only. Figure <ref> presents some characteristics
of the networks studied in this paper: it shows the distribution of
the number of commodities with a specific number of alternate flow
paths for each instance. It highlights that the network has some
significant flexibility for many of its commodities.
Figure <ref> presents the benefits of the load
planning methodology. It compares the variation in trailer cubic
capacity required to containerize the total commodity volume
(blue curve) and the total volume allocated to alternate flow
paths (green curve) across four different load plans: Primary
Only, Reference Plan, 1-Alt and All-Alt for
the three instance categories. In the Primary Only plan, each
commodity can be assigned only to its primary flow path. The
Reference Plan, referred to as the P-Plan, is the reference load plan from our industry
partner. Note that in the reference plan each commodity can use any number
of compatible alternate flow paths. In the 1-Alt plan, each
commodity can be assigned to either its primary path or the
cheapest alternate path. In the All-Alt plan, each commodity
can be assigned to all the available paths, i.e. splitting is allowed. Observe that the curves are
on different scales: the left scale for the blue curve and the right
scale for the green curve. The P-Plan is produced by the
planners using the greedy strategy described earlier.
Figure <ref> demonstrates a consistent trend in
cubic capacity required in the four different load plans: the capacity
monotonically decreases and the decreases are significant. Allowing
spittability of commodity volume across primary and alternate flow
paths improves trailer consolidation. These benefits are already
apparent in the P-Plan of the planners, despite the fact that
this is a greedy approach. The optimization model with a single
alternate flow path, i.e., the 1-Alt plan, brings another
step change in consolidation, highlighting the benefits of
optimization. This benefit stems from the fact that a large number of
commodities have at least one alternate flow path in all instances
(see Figure <ref>). Note also that the 1-Alt load
plan requires significantly smaller total trailer capacity than the
P-Plan, although the P-Plan has the flexibility of
using any number of alternate flow paths. The All-Alt plan
brings further benefits but they are rather incremental. Part of the
reason comes from the fact that a relatively small fraction of the
commodities have more than one alternate flow path. It would be
interesting to study a network with more flexibility as this may bring
further load consolidation benefits.
An interesting phenomenon appears in the medium-sized instance M: the volume
assigned to the alternate flow paths decreases when moving from
the 1-Alt to the All-Alt plan. This comes from the fact
that this instance has many commodities with smaller volumes that
have new alternate flow path options available in the
All-Alt setting. As a result, commodities with larger volume
are allocated to their primary flow path (as the flow diversion cost
is proportional to the total commodity volume assigned to alternate
flow paths) and the commodities with smaller volume can be allocated
to the alternate flow path that is the primary for the commodities
with larger volume (and not the cheapest alternate path of the
1-Alt setting). Hence, the total volume assigned to the
alternate flow paths reduces, although the total number of commodities
that use alternate flow paths increases.
Figure <ref> compares the percentage of the total
commodity volume that is assigned to the alternate flow paths in the
P-Plan and the All-Alt plan. It is undesirable to
allocate a major proportion of the volume to the alternate flow paths
because the downstream buildings may not be better equipped to handle
or process the large inbound volume. Observe that, on average across
all the instances, the All-Alt plan (resp. P-Plan)
allocates around 17% (resp. 9%) commodity volume on the
alternate flow paths. The All-Alt plan reduces the total
trailer capacity by roughly 12%-15% relative to the
P-Plan. For the XL instance, there is a significant gap
between the P-Plan and All-Alt plan statistics
because most of the commodities in the P-Plan are allocated
to the primary flow paths. This is why the total commodity volume
allocated to the alternate flow paths in the P-Plan and
the Primary Only plan differs only slightly; see Figure
<ref> for the XL category.
These results show that optimization proxies can bring substantial
benefits in practice. They provide, in real time, significant
improvements over the existing planning process. Moreover, by virtue
of their training mimicking the GDO optimization, they ensure that
plans evolve smoothly during the planning process: small changes in
inputs will not result in large changes in the proposed solutions.
These results are eminently practical. One of the challenges
in the operational setting is the need for additional trailers when
the total commodity volume increases. Planners can acquire these
trailers either through empty trailer repositioning or by engaging in
short-term trailer leasing with local companies. Conversely, if the
commodity volume decreases, planners are left with a plan that has low
trailer utilization. The optimization proxies address this issue
directly. Planners can also use the proposed optimization proxies to obtain
recommendations for load plan adjustment in the event of a disruption
(due to uncertainty in commodity volume), even for the largest
terminal, within a matter of seconds. Furthermore, the recommendations
from the optimization proxies are consistent with existing load plans,
which makes it easy for the planners to evaluate and implement the
suggestions. Finally, new terminals in the service network often do
not have dedicated planners to develop load plans and extra capacity
is built in the system to handle the commodity volume in the
worst-case scenario. Optimization proxies can be used as a decision
support tool at such terminals.
§ CONCLUSIONS AND FUTURE WORK
This paper studies the Dynamic Load Planning Problem (DLPP) that
considers both load and flow planning challenges jointly in order to adjust loads and flows
as the demand forecast keeps changing over time before the day of
operations. The paper is motivated by the need for a
decision-support tool to advise planners making these decisions at
terminals across the network. The paper formulates the problem as a
MIP and shows that it admits many symmetries. As a result, the
optimization solver may return fundamentally different solutions to
closely related problems (i.e., DLPPs with slightly different inputs),
confusing planners and reducing trust in optimization. To remedy this
limitation, the paper proposes a Goal-Directed Optimization (GDO) that
eliminates those symmetries by generating optimal solutions staying
close to a reference plan. The paper also proposes an optimization
proxy, combining learning and optimization, to provide
high-quality and consistent load plans. An extensive computational
study on industrial instances shows that the optimization proxy is
around 10 times faster than the commercial solver in obtaining the
same quality solutions and orders of magnitude faster for generating
solutions that are consistent with each other. The proposed approach
also highlights the benefits of the DLPP for load consolidation, and
the significant savings from the combination of machine learning and
optimization.
This research is the first stage of a multi-stage project with our
industry partner (a large parcel carrier) for solving load planning
problems. Future research will extend the proposed approach to clusters of
terminals, taking into account their capacities for processing
commodities. The resulting problem thus requires determining both
inbound and outbound planning decisions at each terminal, which
significantly complicates the optimization and learning models.
§ ACKNOWLEDGEMENT
This research was partly supported by the NSF AI Institute for Advances in Optimization (Award 2112533).
§ APPENDIX
§.§ Complexity Results
Model <ref> is difficult to solve because, in addition to determining the right combination of trailer types to contain the volume on each arc, we need to determine the right splits of commodity volume over the given set of compatible arcs. We analyze the complexity of Model <ref> using the special cases described below.
Case 1: There is only one trailer type available at the terminal, i.e., | V_s | = 1 ∀ s ∈ S. Each commodity k ∈ K is compatible with exactly one sort pair s_k, i.e., S^k = {s_k} ∀ k ∈ K
Case 2: There is only one trailer type available at the terminal, i.e., | V_s | = 1 ∀ s ∈ S. Each commodity k ∈ K is compatible with all sort pairs, i.e., S^k = S ∀ k ∈ K
Cases 1 and 2 are polynomial time solvable
In Case 1, the volume of each commodity k is assigned to its only compatible sort pair, s_k, i.e. x^k_s_k = q^k. Then, the optimal solution has y_s = ⌈∑_k ∈ K: s ∈ S^k x^k_s/Q⌉ = ⌈∑_k ∈ K: s ∈ S^k q^k/Q⌉ ∀ s ∈ S.
In Case 2, the optimal solution is to assign the volume of all commodities on any sort pair s ∈ S and set x^k_s = q^k ∀ k ∈ K, y_s = ⌈∑_k ∈ K q^k/Q⌉, y_s' = 0 ∀ s' ∈ S, s' ≠ s.
Case 3: Same as Case 1, but with more than one trailer type available at the terminal
Case 4: Same as Case 2, but with more than one trailer type available at the terminal
Cases 3 and 4 are weakly NP-Hard
In the optimal solution of Case 3, the volume of each commodity k is assigned to its only compatible sort pair s_k. Thus, it remains to decide the optimal combination of trailer types required to containerize the volume on every sort pair. This is the minimum knapsack problem (see <cit.> for the problem definition) for each sort pair that has more than one trailer type, as shown in <ref>, which is known to be weakly NP-Hard.
For every s ∈ S: yMinimize ∑_v ∈ V_s c_v y_s,v
subject to ∑_k ∈ K: s ∈ S^k q^k≤∑_v ∈ V_sQ_v (y_s,v),
y_s,v∈ℤ_≥ 0 ∀ v ∈ V_s
Similarly, for Case 4 there exists an optimal solution in which the volume of all commodities is assigned to one sort pair s^* ∈ S, i.e., x^k_s^* = q^k ∀ k ∈ K, and it remains to solve a minimum knapsack problem for the sort pair s^*; hence, Case 4 is weakly NP-Hard.
Case 5: Each commodity k ∈ K is compatible with a subset of sort pairs, i.e., S^k ⊂ S, and has unit volume, q^k = 1. There is only one trailer type with per-unit cost c_s=1 ∀ s ∈ S and capacity Q=max_s ∈ S{∑_k ∈ K1_s ∈ S^k}; hence, y_s ∈{0,1}∀ s ∈ S, as installing one unit of trailer is enough to containerize the total volume that can be assigned to the sort pair. Note that we ignore the index v for trailer because each sort pair has exactly one and the same trailer type.
In the optimal solution of Case 5, each commodity is assigned to exactly one compatible sort pair (i.e. there is no splitting of volume)
We will present a proof by contradiction. WLOG, suppose there exists an optimal solution in which the volume of a commodity k̂ is split between two sort pairs and the volume of all other commodities k ∈ K \{k̂} is assigned to exactly one sort pair s_k. Thus, we have x^k_s_k = q^k ∀ k ∈ K \{k̂} and x^k̂_s_1 + x^k̂_s_2 = q^k. Consider a solution x^k_s =x^k_s ∀ k ∈ K \{k̂} and x^k̂_s_1 = x^k̂_s_1 + ϵ,x^k̂_s_2 = x^k̂_s_2 - ϵ, where ϵ > 0 is a small real number. Note that x^k̂_s_1 + x^k̂_s_2 = q. Consider another solution x̅^k_s = x^k_s ∀ k ∈ K \{k̂} and x̅^k̂_s_1 = x^k̂_s_1 -ϵ,x̅^k̂_s_2 = x^k̂_s_2 + ϵ. Note that both solutions x and x̅ satisfy constraints (<ref>) and are feasible to constraints (<ref>) because we choose Q = max_s ∈ S{∑_k ∈ K1_s ∈ S^k}. The solution x can be written as a convex combination of the solution x and x̅ (x^k_s = 1/2x̅^k_s + 1/2x^k_s ∀ k ∈ K, s ∈ S^k) which contradicts the optimality of the solution.
Case 5 is strongly NP-Hard
We will show that this special case can be solved as a set cover problem which is known to be strongly NP-Hard (<cit.>). An instance of a set cover is given by a ground set U = {x_1, x_2, ⋯, x_n} and a collection of m subsets E_i ⊆ U ∀ i ∈{1,2,⋯,m} of the ground set U. The optimization problem is to find the smallest number of subsets i ∈{1,2,⋯,m} such that ⋃_i ∈{1,2,⋯,m} E_i = U.
From claim <ref> we know that each commodity is assigned to exactly one compatible sort pair in the optimal solution. Let commodity k ∈ K denote element x_k ∈ U, | K | = n and set of sort pairs S = {1,2,⋯,m}. Define K_i = {k ∈ K : x_k ∈ E_i} as the set of commodities or elements that can be covered by selecting sort pairs i ∈{1,2,⋯,m}. Now note that finding the smallest number of sort pairs s ∈ S such that all commodities in K are covered is equivalent to finding the smallest number of subsets i ∈{1,2,⋯,m} to cover all elements in U.
§.§ Additional Experimental Results
Table <ref> compares the number of integer decision variables in Model <ref> and the average number of binary decision variables in Model <ref> across multiple test instances. The number of integer decision variables remains the same for each instance category because it depends on the number of arcs or sort pairs and trailer types; only the commodity volume changes across different test instances. However, the size of the feasibility restoration model <ref> depends on the predictions of the ML model. Recall that the ML model predicts the value of the integer decision variables of Model <ref>. Hence, if the predictions are accurate, then fewer sort pairs have capacity violations. Consequently, there are fewer binary decision variables in Model <ref>; the number of binary decision variables in Model <ref> is equal to the number of sort pairs with capacity violations. As the ML predictions can vary for different test instances with the same set of sort pairs due to different commodity volumes, the number of binary variables in Model <ref> can differ across test instances. This is why Table <ref> reports fractional values for the average number of binary variables. It is worth highlighting that one of the key benefits of the optimization proxies is that they replace a model with a large number of integer decision variables by a prediction model, and only require solving a relatively simple model with a small number of binary variables.
|
http://arxiv.org/abs/2307.05241v1 | 20230711131604 | Does pre-training on brain-related tasks results in better deep-learning-based brain age biomarkers? | [
"Bruno Machado Pacheco",
"Victor Hugo Rocha de Oliveira",
"Augusto Braga Fernandes Antunes",
"Saulo Domingos de Souza Pedro",
"Danilo Silva"
] | cs.CV | [
"cs.CV",
"eess.IV"
] |
Does brain-related pre-training results in better brain age biomarkers?
B. M. Pacheco et al.
Federal University of Santa Catarina (UFSC), Florianópolis, SC, Brazil
[email protected],[email protected],
[email protected] Alliar - NEPIA, Belo Horizonte, MG, Brazil
[email protected] 3778 Healthcare, Belo Horizonte, MG, Brazil
[email protected]
Does pre-training on brain-related tasks results in better deep-learning-based brain age biomarkers?
Bruno M. Pacheco1
Victor H. R. de Oliveira1 Augusto B. F. Antunes2 Saulo D. S. Pedro3 Danilo Silva1,
for the Alzheimer’s Disease Neuroimaging Initiative
Data used in preparation of this article were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu). As such, the investigators within the ADNI contributed to the design and implementation of ADNI and/or provided data but did not participate in analysis or writing of this report. A complete listing of ADNI investigators can be found at:
<http://adni.loni.usc.edu/wp-content/uploads/how_to_apply/ADNI_Acknowledgement_List.pdf>
August 12, 2023
========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Brain age prediction using neuroimaging data has shown great potential as an indicator of overall brain health and successful aging, as well as a disease biomarker.
Deep learning models have been established as reliable and efficient brain age estimators, being trained to predict the chronological age of healthy subjects.
In this paper, we investigate the impact of a pre-training step on deep learning models for brain age prediction.
More precisely, instead of the common approach of pre-training on natural imaging classification, we propose pre-training the models on brain-related tasks, which led to state-of-the-art results in our experiments on ADNI data.
Furthermore, we validate the resulting brain age biomarker on images of patients with mild cognitive impairment and Alzheimer's disease.
Interestingly, our results indicate that better-performing deep learning models in terms of brain age prediction on healthy patients do not result in more reliable biomarkers.
§ INTRODUCTION
As human lifespan increases, there is a growing need for reliable methods to assess brain health and age-related changes in the brain.
Brain age prediction is a promising technique that uses neuroimaging data to estimate the apparent age of an individual's brain, which can serve as an indicator of overall brain health and successful aging, as well as a disease biomarker <cit.>.
Deep learning models have shown great potential in accurately predicting brain age from magnetic resonance imaging (MRI) data <cit.>.
Training deep learning models for brain age prediction shares several challenges with other neuroimaging tasks, in comparison to traditional computer vision, such as the increased GPU memory required by 3D data and the extensive pre-processing needed to account for the variability in the acquisition process.
In particular, available neuroimaging datasets are much smaller than existing natural imaging datasets <cit.>, and deep learning models are known to be very dependent on sample sizes.
Therefore, data-efficient training strategies are crucial to achieve high performance in brain age prediction.
In this paper, we explore the impact of pre-training deep learning models for brain age prediction.
Inspired by the learning process of expert neuroradiologists, we apply transfer learning by pre-training our brain age models on a brain-related task.
For comparison, we also train models without pre-training and models pre-trained on natural image classification.
We investigate the performance gain from pre-training and evaluate the models' brain age prediction as a biomarker for cognitive impairment.
More specifically:
* We pre-train deep learning models on the brain tumor segmentation task and compare them to models without pre-training and with pre-training on the ImageNet natural image classification task;
* We test the brain age models using data from the ADNI studies, and show that the models pre-trained on the brain-related task outperform the other models, achieving the state-of-the-art in brain age prediction;
* We evaluate the brain age prediction of all models as a biomarker for different clinical groups (healthy, mild cognitive impairment, and Alzheimer's disease);
* Our experiments suggest that, despite the common practice, better models in terms of brain age prediction of healthy patients do not result in more reliable biomarkers;
* All of our results are reported on a standardized, publicly available dataset, providing an easy comparison with future research.
§ RELATED WORK
Detecting aging features on brain MRI has been an active area of research for many years <cit.>.
The use of deep learning for brain age prediction has gained considerable attention in recent years <cit.>.
In this section, we provide a brief overview of related works on brain age prediction from MRIs using deep learning models.
One of the earliest applications of deep learning to brain age prediction was presented by Cole et al. <cit.>.
The authors employed a neural network comprising a convolutional backbone and a fully connected regression head to analyze a dataset of T1-weighted MRI scans from 2001 healthy subjects aged 18 to 90.
The training is performed solely on images of healthy subjects, following the hypothesis that the brain age of healthy subjects is close to their actual age.
Their deep learning model outperformed the machine learning approach (Gaussian Process Regressor).
The authors also assessed the reliability of the predictions across individuals and acquisition methods.
Jonsson et al. <cit.> developed a deep learning model for brain age prediction using brain MRI scans from 1264 healthy subjects aged 18 to 75.
They explored the impact of training and testing on distinct datasets, finding that the performance of brain age prediction degraded when the target dataset differed from the training dataset.
Bashyam et al. <cit.> proposed to improve brain age prediction by utilizing a larger dataset of brain MRI scans from 14,468 subjects.
The dataset included data acquired from different sites following different protocols, with subjects aged 3 to 95.
The authors pre-trained their neural network on the ImageNet dataset, which is an even larger dataset of natural images.
They found that models performing well at chronological age prediction might not be the best at providing brain age estimates that correlate to the diagnosis of diseases such as schizophrenia and Alzheimer’s.
Peng et al. <cit.> proposed quality brain age prediction using a lightweight deep learning model.
They used a dataset containing 14,503 subjects from the UK Biobank, with ages ranging from 44 to 80.
The authors showed that even though larger models perform well on natural image tasks, smaller models can perform equally well and sometimes even better on medical imaging tasks.
Multiple authors have reported brain age performance on MRI data from ADNI <cit.>.
To the best of our knowledge, none of them has provided the means to reproduce the dataset used for testing the models.
The studies either used a random, non-disclosed split, or did not provide which images have been selected from the ADNI database.
Therefore, a direct comparison is not possible, as we cannot perfectly replicate the evaluation setting.
Nonetheless, we highlight that the best performance reported was a mean absolute error of 3.10 years <cit.>.
Further details on the performance of each approach can be seen in Table <ref>.
Overall, these studies demonstrate the potential of pre-training for brain age prediction and the need for more data-efficient training strategies, since acquiring medical imaging data is laborious: in contrast to natural imaging datasets, medical imaging requires an expensive acquisition procedure and legal authorization from each subject.
None of them, however, took advantage of models trained for other brain-related tasks, such as brain tumor segmentation.
Previous works also lack a standardized dataset on which we could perform a fair comparison, either because they use private datasets or because they do not share which samples were used for training or testing.
Therefore, our paper stands out by comparing brain age models pre-trained on brain tumor segmentation to models without pre-training or pre-trained on natural image classification.
Furthermore, we experimented on a standardized and publicly available dataset, providing reproducible results.
§ MATERIALS AND METHODS
§.§ Data
§.§.§ ADNI
The Alzheimer's Disease Neuroimaging Initiative (ADNI) was launched in 2003 with the primary goal of testing whether serial MRI, positron emission tomography (PET), other biological markers, and clinical and neuropsychological assessment can be combined to measure the progression of mild cognitive impairment (MCI) and early Alzheimer’s disease (AD) <cit.>.
The ADNI database contains longitudinal data from clinical evaluations, cognitive tests, biological samples, and various types of imaging data, including MRI, functional MRI and PET.
For up-to-date information, see <adni.loni.usc.edu>.
The study's data has been collected over several phases: ADNI-1, ADNI-GO, ADNI-2, and ADNI-3.
The dataset contains cohorts of individuals with AD, MCI, and healthy controls (Cognitively Normal, CN).
Furthermore, all exams underwent quality control for the image quality (e.g., subject motion, anatomic coverage).
In this paper, we employed available MPRAGE T1-weighted MRIs from all phases, filtering out images deemed “unusable” by the quality control assessment.
The images are available after gradient non-linearity and intensity inhomogeneity correction, when necessary.
An overview of the demographics from each dataset can be seen in Table <ref>.
More detailed information on the images from ADNI-1 used in this work can be found in Wyman et al. <cit.>.
§.§.§ BraTS
The brain tumor segmentation challenge (BraTS) <cit.> provides a dataset of structural brain MRIs along with expert annotations of tumorous regions <cit.>.
For each subject, four MRI scan modalities are available: T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and T2-Flair.
All images are available after preprocessing (registration to a common atlas, interpolation to 1 mm³, and skull-stripping) <cit.>.
In the 2020 edition, the dataset contained 369 images from subjects aged 18 to 86 (avg. of 61.2).
The data is available at <https://www.med.upenn.edu/cbica/brats2020/data.html>.
§.§ Preprocessing
We applied minimal preprocessing to the ADNI images.
Our major goal was to ensure that all images would have the same orientation and spatial resolution, and that only brain voxels would be present in the image: if skull or other non-brain-tissue information remained, the deep learning models could learn to predict the age based on those structures.
We register all ADNI images to the MNI152 template, interpolate to 2 mm³ resolution, and apply skull-stripping using HD-BET <cit.>.
To feed the preprocessed 3D MRI scans to the 2D brain age models, each volume was sliced through the axial plane.
We discard the slices from the top 40 mm and the bottom 35 mm of the scan to exclude slices with little to no brain information.
Therefore, we extract 40 images from each 3D MRI scan.
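For concreteness, the slicing step could look roughly as follows. This is only a sketch under our own assumptions: the preprocessed volume is loaded with nibabel, the last array axis is the axial direction after MNI registration, and the physical offsets are converted to slice indices using the 2 mm spacing mentioned above.

```python
import nibabel as nib
import numpy as np

def extract_axial_slices(nifti_path, voxel_mm=2.0, drop_top_mm=40.0, drop_bottom_mm=35.0):
    """Return the axial slices of a preprocessed scan kept for brain age prediction."""
    volume = nib.load(nifti_path).get_fdata()            # assumed shape (X, Y, Z), Z = axial axis
    n_axial = volume.shape[-1]
    drop_top = int(round(drop_top_mm / voxel_mm))         # slices removed from the top of the head
    drop_bottom = int(round(drop_bottom_mm / voxel_mm))   # slices removed from the bottom
    kept = volume[..., drop_bottom:n_axial - drop_top]    # central block with brain information
    return np.moveaxis(kept, -1, 0)                       # (n_slices, X, Y): one 2D image per slice
```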
With respect to the images from the BraTS dataset, no further preprocessing is performed (see Sec. <ref>).
We feed all slices of the 3D MRI scan to the deep learning models.
§.§ Deep Learning Models
We use 2D deep convolutional neural networks for brain age prediction.
Our proposed model can be divided into a backbone and a head.
Intuitively, the backbone is responsible for extracting relevant features from the input image, while the head combines these features into the final prediction.
The backbone consists of several convolution filters that reduce the image size.
The head is a single unit with linear activation that is fully connected to the backbone’s output.
Figure <ref> illustrates our proposed model, highlighting both the backbone and the head.
Even though the architecture of the head is task-specific, the backbone’s architecture depends only on a few characteristics of the input (e.g., number of channels, minimum size).
Furthermore, learned features from one task can be useful for another, at least as a starting point, which is known as transfer learning <cit.>.
This allows us to reutilize the backbone of models trained for different tasks, that is, we can extract the backbone from a model trained for some task and use it as the backbone for our model designed for brain age prediction.
We use the backbones from two different architectures: ResNet <cit.> and U-Net <cit.>.
More specifically, we use the ResNet-50 architecture, available in the torchvision package <cit.>, and the 2D U-Net proposed in <cit.>, which is designed for medical image segmentation.
The U-Net is composed of an encoder, a bottleneck, and a decoder.
For our backbone, we use the encoder with the bottleneck.
To obtain the brain age prediction of a 3D MRI, we apply the model to the 40 axial slices of the image that contain brain information and take the mean of the outputs <cit.>.
The pipeline of operations can be seen in Figure <ref>.
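As an illustration, a regression model of this kind could be assembled in PyTorch as sketched below. The class and function names are ours; we assume the single-channel slices are replicated to three channels so that the stock torchvision ResNet-50 can be used unchanged, and the scan-level prediction is the mean over the slice-level outputs, as described above.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class BrainAgeResNet(nn.Module):
    """ResNet-50 backbone followed by a single linear unit as the regression head."""
    def __init__(self):
        super().__init__()
        backbone = resnet50(weights=None)                               # no ImageNet weights in this variant
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification head
        self.head = nn.Linear(2048, 1)                                  # fully connected, linear activation

    def forward(self, x):                   # x: (batch, 3, H, W) axial slices
        feats = self.backbone(x).flatten(1)
        return self.head(feats).squeeze(-1)

@torch.no_grad()
def predict_scan_age(model, slices):
    """slices: (n_slices, 3, H, W) tensor from one 3D scan; returns the averaged brain age."""
    model.eval()
    return model(slices).mean().item()
```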
§.§ Pre-training on brain tumor segmentation
To leverage the knowledge from other brain-related tasks to brain age prediction, we pre-train our backbones in the brain tumor segmentation task.
We follow the BraTS challenge setup, with BraTS data, for both U-Net and ResNet backbones.
To be able to train a ResNet backbone in a segmentation task, we replace the original head of the ResNet with a U-Net decoder, in a ResUNet architecture <cit.>.
This means that the decoder matches the ResNet backbone with respect to the size of the intermediate feature maps so that the skip connections can be added in the same way as in the original U-Net implementation.
We follow Crimi et al. <cit.> for training all models on the BraTS data.
We first train the models on a random 80/20 split of the BraTS 2020 dataset.
The models are evaluated through the Dice score <cit.>, which measures the overlap between the predicted segmentation mask Ŷ and the ground truth Y as
Dice(Ŷ, Y) = 2|Ŷ ∩ Y| / (|Ŷ| + |Y|).
Based on the models' performance, we fixed the number of epochs to avoid overfitting.
Then, the entirety of the BraTS data is used to train the backbones for a fixed number of epochs.
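A possible implementation of the Dice score used for this evaluation is given below; the small smoothing constant is our own addition to avoid division by zero on empty masks.

```python
import torch

def dice_score(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-7) -> float:
    """Dice overlap between a predicted and a ground-truth binary segmentation mask."""
    pred, true = pred_mask.bool(), true_mask.bool()
    intersection = (pred & true).sum().item()
    return (2.0 * intersection + eps) / (pred.sum().item() + true.sum().item() + eps)
```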
§.§ Training on brain age prediction
To train the deep learning models on brain age prediction, we assume, following previous work, that the brain age of healthy subjects is close to their chronological age <cit.>.
Thus, we train and evaluate our models solely on the images of subjects belonging to the CN group.
To provide easy-to-compare results, we choose to evaluate our models on a standardized dataset.
Therefore, we follow the standardized split of the analysis set for ADNI-1 <cit.>, and use the standard test set as our test set, and their training set as our validation set.
In other words, we divide the preprocessed ADNI-1 T1-weighted scans from CN subjects following the standardized split to form our validation and test sets.
The remaining images (i.e., those from ADNI-GO, ADNI-2, and ADNI-3) compose our training set.
Detailed information on the images used in the training set can be found in our code repository[<https://github.com/gama-ufsc/brain-age>].
We train all models, regardless of backbone architecture or pre-training, using the Adam optimizer to minimize the mean squared error between the age predicted from each slice and the true age of the CN subjects.
The models are first trained on the training set.
The performance of these models on the validation set is used for hyperparameter tuning and early stopping.
Namely, batch size and learning rate were adjusted, and a moving average[At the end of each epoch, we compute the average over the 5 latest results, including the current one.] of the MAE on the validation set was used to determine the ideal epoch (i.e., the one with the smallest MAE) for stopping the training.
The models with the best performance on the validation set are then evaluated on the test set, which is unseen up to then.
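The early-stopping rule described above amounts to picking the epoch with the smallest smoothed validation MAE; one way to express it, with the 5-epoch window from the footnote, is sketched below (the function name and the surrounding training loop are ours).

```python
def best_epoch_by_smoothed_mae(val_maes, window=5):
    """val_maes: one validation MAE per epoch, in training order.
    Returns the epoch whose moving-average MAE (over the last `window` epochs,
    including the current one) is smallest."""
    best_epoch, best_smoothed = None, float("inf")
    for t in range(len(val_maes)):
        start = max(0, t - window + 1)
        smoothed = sum(val_maes[start:t + 1]) / (t + 1 - start)
        if smoothed < best_smoothed:
            best_epoch, best_smoothed = t, smoothed
    return best_epoch, best_smoothed
```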
§.§ Evaluation
The brain age models were evaluated on the error between the predicted age and the actual age of the CN subjects.
More specifically, we use the mean absolute error (MAE) as our standard evaluation metric.
We compute the error based on the predicted age of the whole 3D MRI, i.e., after averaging the predictions of all slices as described in Sec. <ref> and illustrated in Fig. <ref>.
Furthermore, we evaluate the capacity of the brain age estimate in differentiating between CN, MCI, and AD patients.
For this, we use the brain age delta Δ_BA = ŷ_BA - y_CA, which is the difference between the predicted brain age ŷ_BA and the chronological age y_CA of a subject.
As the progression toward Alzheimer’s diagnosis is associated with aging patterns, it is expected that the Δ_BA of a subject in the AD group is greater than that of a subject in the MCI group, and that the latter’s Δ_BA is still greater than the Δ_BA of a subject in the CN group.
Therefore, we compute the predicted Δ_BA for all images in the three groups (CN, MCI, AD) of the test set and apply a pairwise Mann-Whitney U (MWU) test <cit.>.
The MWU test is a nonparametric version of the t-test for independent samples.
In our case, the null hypothesis is that the Δ_BA from one group is not stochastically greater than the other.
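The pairwise tests could be run with SciPy as in the sketch below; the one-sided alternative corresponds to the null hypothesis stated above, i.e., we test whether the presumably more affected group has stochastically greater Δ_BA values. The function name and the dictionary layout are assumptions made for illustration.

```python
from itertools import combinations
from scipy.stats import mannwhitneyu

def pairwise_delta_tests(deltas_by_group):
    """deltas_by_group: e.g. {"CN": [...], "MCI": [...], "AD": [...]} of brain age deltas."""
    order = {"CN": 0, "MCI": 1, "AD": 2}
    results = {}
    for g1, g2 in combinations(sorted(deltas_by_group, key=order.get), 2):
        # one-sided test: deltas of the more affected group (g2) are stochastically greater
        _, p_value = mannwhitneyu(deltas_by_group[g2], deltas_by_group[g1], alternative="greater")
        results[f"{g2} > {g1}"] = p_value
    return results
```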
§ EXPERIMENTS AND RESULTS
In our experiments, we evaluate the impact of brain-related pre-training using 3 backbones: the U-Net with random initialization, the ResUNet with random initialization, and the ResUNet pre-trained on ImageNet.
To improve the reliability of our results, 5 models are trained for each experiment, e.g., 5 U-Net models with random weight initializations are evaluated without brain-related pre-training, against 5 (different) U-Net models with brain-related pre-training.
In other words, for each of the 3 backbones, we train 10 models: 5 with no brain-related pre-training, and 5 with brain-related pre-training.
All experiments reported below were performed on a Linux machine with 8 vCPUs, 30 GB of RAM and an Nvidia T4 GPU.
Further details regarding the implementation of the experiments and additional results can be seen in our code repository[<https://github.com/gama-ufsc/brain-age>].
§.§ Pre-training on BraTS
Following the procedure described in Sec. <ref>, we trained 5 U-Net models and 5 ResUNet models with random initialization on the brain tumor segmentation task.
We also used the backbone from ResNets pre-trained on the ImageNet’s natural image classification task, therefore, we trained 5 ResUNet models using the ImageNet pre-trained backbone.
ImageNet pre-trained models are readily available in the torchvision package.
Using an 80-20 random split, we observed that 30 epochs were enough to achieve peak performance and avoid overfitting for the U-Net models, while 50 epochs were enough for the ResUNet models.
The average performance of these models on the random split can be seen in Table <ref>.
We highlight that the models present performance on par with state-of-the-art brain tumor segmentation models <cit.>.
All models were then re-trained (with new initial random weights) on the entirety of the dataset for the same number of epochs.
§.§ Brain Age Prediction
We train 5 models of each combination of backbone and pre-training available.
Namely: U-Net backbone with random initialization or pre-trained on BraTS; ResNet backbone with random initialization or pre-trained on BraTS; and ResNet backbone pre-trained on ImageNet or pre-trained on ImageNet and then on BraTS.
As described in Sec. <ref>, we use the validation set to define the best set of hyperparameters for each backbone and pre-training combination.
More specifically, we used a batch size of 64 images for all models and a learning rate of 10^-3 for the models with U-Net backbone without pre-training, 10^-5 for the models with ResNet backbone with ImageNet and BraTS pre-training, and 10^-4 for all other models.
We trained all models for 50 epochs, early-stopping the training when a running average of the MAE on the validation set achieved the smallest value.
The average performance of the models on the test set can be seen in Table <ref>.
Furthermore, after hyperparameter tuning, we re-train the models on the union of the training and the validation sets.
This increases the amount of data used for training and increases the similarity between the distribution of the training data and the test data, as both validation and test sets are drawn from the same study (ADNI-1).
We defined the training budget for each backbone and pre-training combination as the average of the epochs in which the respective 5 models achieved the early-stopping criterion on the validation set.
Note that the test set is not considered in any step of this process, therefore, no data leakage occurs.
The average test set performance of the models trained on the training and validation sets can be seen in Table <ref>, under column “Test MAE (train+validation models)”.
In our experiments, pre-training on the brain tumor segmentation task was consistently advantageous.
Even though the performance difference was not highly significant in all experiments, the models with the proposed pre-training outperformed their counterparts in all configurations except for the ResNet backbone pre-trained on ImageNet when trained on the union of the train and validation sets.
It is also evident, when comparing the ResNet backbones, that pre-training solely on BraTS was consistently better than pre-training on the ImageNet.
Our experiments show that the best brain age predictions are achieved by using a U-Net backbone pre-trained on BraTS, even though the ResNet backbones achieved a better brain tumor segmentation performance (see Table <ref>).
Finally, the use of the validation set for training (after hyperparameter tuning) improved the performance of all models, as can be seen in the column “Test MAE (train+validation models)” of Table <ref>.
§.§ Statistical Analysis of Δ_BA
To assess the significance of the resulting brain age indicator, we apply the MWU test to the predicted Δ_BA of all models for subjects in the CN, MCI, and AD groups of the validation and the test sets (see Sec. <ref>).
The tests on samples between CN and MCI patients and CN and AD patients all pointed to a strong differentiation, with p-values smaller than 0.1% for all models[The results of the tests on CN-MCI and CN-AD are available in our code repository <https://github.com/gama-ufsc/brain-age>.], indicating that the Δ_BA biomarker is useful to distinguish healthy patients from those with Alzheimer's disease or mild cognitive impairment.
The distinction, however, between MCI and AD patients was not as significant, as can be seen in Table <ref>.
Note that the images from MCI and AD patients of the validation set are not used for training or hyperparameter tuning; thus, the validation set can be interpreted as an additional test set in the case of differentiating between MCI and AD.
Even though the models with the U-Net backbones achieved more significant results in the statistical analysis of the biomarker, these results do not allow us to conclude that pre-training (with any of the tasks) had a positive impact, as was observed for brain age prediction.
In fact, the exact opposite is observed.
Using pre-trained backbones (for both BraTS and ImageNet pre-training) resulted, most of the time, in models that achieved a worse separation between AD and MCI patients, in comparison to their counterparts without pre-training.
The same effect was also observed in the use of the validation set for training, upon which no model achieved a significant distinction between AD and MCI patients.
§ DISCUSSION AND CONCLUSIONS
In this study, we investigated the transfer learning capacity of a brain-related task to the task of brain age prediction.
More specifically, we pre-trained deep learning models for brain age prediction on the task of brain tumor segmentation.
In comparison to pre-training on a natural image classification task or performing no pre-training at all, our results
suggest that the proposed pre-training may be a better option.
The only inconclusive case is when performing both pre-trainings, that is, first pre-training on ImageNet and then on brain tumor segmentation, which yielded mixed results for brain age prediction.
Furthermore, we observed that using the validation set for training the models (after hyperparameter tuning) significantly reduced the brain age prediction error in all scenarios, even though there was only a small increment in the number of subjects in the training set (15% more subjects, as can be verified through Table <ref>).
Using the U-Net backbone pre-trained on BraTS and then trained on the union of train and validation sets showed state-of-the-art results, with MAE values previously unseen on ADNI data (see Table <ref>).
We recall that the validation and test sets are built with images from the ADNI-1 study, while the training set is built from ADNI-GO, ADNI-2, and ADNI-3.
As the image acquisition protocols change between the studies, we can assume that there are different characteristics between their images.
Therefore, we can expect a distribution shift between the training and the test sets that does not exist between the validation and the test sets.
This allows us to conclude that the use of the validation set for training resulted in better models because it decreased the difference between the training and test distributions.
At the same time, by evaluating our models as biomarkers for cognitive impairment levels, we observed results that challenge the standard approach of training deep learning models for brain age.
Most of the modifications that improved the brain age predictive performance on healthy subjects (i.e., the chronological age prediction), resulted in models that were less reliable for distinguishing between patients with MCI and AD.
A similar behavior was reported by Bashyam et al. <cit.>, in which the models that achieved the lowest MAE did not provide the strongest distinction between healthy subjects and subjects with AD, MCI, Schizophrenia or Depression.
Our results show an inverse relationship between the performance on chronological age prediction (see Table <ref>) and the reliability of the biomarker for cognitive impairment levels (see Table <ref>).
This is particularly true for the use of the validation set for training brain age models, which significantly reduced the reliability in all scenarios.
Given that reducing the distribution shift between training and testing data degraded the performance of the biomarker, we speculate that state-of-the-art models got to a point of overfitting the images of healthy subjects, thus, degrading their performance on images from AD and MCI patients.
Therefore, we suggest that the validity of chronological age predictions as a means to develop brain age models is an important investigation topic for the development of the brain age biomarker.
|
http://arxiv.org/abs/2307.08596v1 | 20230714070957 | Omnipotent Adversarial Training for Unknown Label-noisy and Imbalanced Datasets | [
"Guanlin Li",
"Kangjie Chen",
"Yuan Xu",
"Han Qiu",
"Tianwei Zhang"
] | cs.LG | [
"cs.LG",
"cs.CR",
"cs.CV"
] |
Omnipotent Adversarial Training
for Unknown Label-noisy and Imbalanced Datasets
Guanlin Li
Nanyang Technological University, S-Lab
[email protected]
Kangjie Chen
Nanyang Technological University
[email protected]
Yuan Xu
Nanyang Technological University
[email protected]
Han Qiu
Tsinghua University
[email protected]
Tianwei Zhang
Nanyang Technological University
[email protected]
August 12, 2023
=============================================================================================================================================================================================================================================================================================================================================================
Adversarial training is an important topic in robust deep learning, but the community pays little attention to its practical usage. In this paper, we aim to resolve a real-world application challenge, i.e., training a model on an imbalanced and noisy dataset to achieve high clean accuracy and robustness, with our proposed Omnipotent Adversarial Training (OAT). Our strategy consists of two innovative methodologies to address the label noise and data imbalance in the training set. We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution. This carefully-designed oracle can provide correct label annotations for adversarial training. We further propose logits adjustment adversarial training to overcome the data imbalance challenge, which can help the model learn a Bayes-optimal distribution. Our comprehensive evaluation results show that OAT outperforms other baselines by more than 20% clean accuracy improvement and 10% robust accuracy improvement under the complex combinations of data imbalance and label noise scenarios. The code can be found at <https://github.com/GuanlinLee/OAT>.
§ INTRODUCTION
Exploring how to enhance the adversarial robustness of deep learning models has constantly attracted attention from both industry and academia. Adversarial robustness refers to the ability of a deep learning model to resist against adversarial attacks. Madry et al. <cit.> proposed adversarial training (AT), a popular strategy to improve the model's robustness. Due to its high computational cost, numerous works further proposed computation-friendly AT methods <cit.> to be applicable to large-scale datasets.
Although significant efforts have been devoted to making AT more efficient and practical, there still exists a gap to address the real-world applications. The main obstacle is that these works idealize the dataset as being completely clean and uniformly distributed.
However, in real-world scenarios, annotations are often noisy <cit.> and datasets tend to be long-tailed <cit.>, making these methods less effective.
Label noise is a common occurrence in datasets due to variations in the experience and expertise of data annotators. As not all annotators are experts, error labels are present in many real-world datasets. For example, as reported in <cit.>, the Clothing1M dataset <cit.> contains about 38.5% noise, and the WebVision dataset <cit.> was found to have around 20.0% noise. Although some crowdsourcing platforms, like Amazon Mechanical Turk <cit.>, can provide some mechanisms like voting to reduce the ratio of noisy labels in the datasets, it remains challenging to guarantee completely clean label mapping. Consequently, label noise is still an open problem in deep learning model training processes.
On the other hand, data imbalance can occur when it is difficult to collect sufficient samples for several specific classes, resulting in an insufficient number of examples for these classes and causing data imbalance <cit.>. Typically, we call a dataset long-tailed if most of the data belong to several classes, called head classes, and fewer data belong to other classes, known as tail classes <cit.>. Given that this is the natural property of the data distribution, it is challenging to create a perfectly balanced dataset in practice.
Additionally, label noise can exacerbate data imbalance by introducing additional noise to the tail classes. Thus, it is important to consider both label noise and data imbalance together when developing a robust deep learning model.
Most existing solutions focus on robust training over clean and balanced datasets.
To the best of our knowledge, only two works have examined label noise in the context of adversarial training <cit.>. However, both of them aim at addressing overfitting issues rather than training models to achieve high robustness on datasets with label noise.
Meanwhile, only one published work studies AT on long-tailed datasets <cit.>. No attention has been given to the joint effects of label noise and data imbalance on model robustness. Actually, label noise and data imbalance influence the training process from two different aspects, i.e., incorrect label mapping and overfitting head classes, respectively. Existing approaches for either label noise or data imbalance are insufficient to address their joint effects. A combination of <cit.> and <cit.> cannot achieve promising results either. The reason comes from the poor label refurbishment effective in <cit.> under massive label noise, making the models fail to converge during AT (proved in our experiments in Section <ref>). In AT, it is more challenging to separate the data with correct and wrong labels and then correct wrong labels based on the model's predictions <cit.>, because the high value of the robust loss <cit.> and low confidence scores on the training data <cit.> are consistent on all data and are unrelated with the correctness of labels. On the contrary, in normal training, the model will give higher loss values and lower confidence scores on data with wrong labels. So, simply combining previous methods cannot essentially address the problems, and it is necessary to design a solution dedicated to AT on imbalanced and label noisy datasets.
Challenges arise when we train a robust model on a noisy and imbalanced dataset.
First, in AT, generating adversarial examples (AEs) relies on the gradients, which are calculated with the label and the model's prediction, to update the perturbation for the target model. With noisy labels, the generated AEs become less reliable, reducing the effectiveness of AT. Additionally, incorrect annotations prevent the model from learning the correct mapping between data and labels, which harms the clean accuracy of the robust model.
Second, an imbalanced dataset decreases the model's generalizability and makes the model lean to classify a sample into head classes <cit.>. This can result in poor performance on tail classes and lower overall robustness of the model.
Unfortunately, without correct labels, prior solutions for data imbalance cannot work properly, because the label distribution can be misleading.
Therefore, if we can extract data with wrong annotations in the training set and provide correct labels to them with high probability, we will have the opportunity to mitigate the adverse effects of training models under noisy labels. Furthermore, if we can correct the wrong labels, we will recover a correct label distribution, which is helpful to address the overfitting problem caused by data imbalance.
Based on the above insights, we propose a novel training strategy, named Omnipotent Adversarial Training (OAT), which aims to obtain a robust model trained on a noisy and imbalanced dataset.
The proposed OAT is a two-step training scheme, i.e., an oracle training process and a robust model training process. Specifically, in the first step, we introduce an oracle to provide correct annotations for a noisy dataset. Unlike existing label correction methods that rely solely on model predictions <cit.>, we adopt a novel method to predict labels using high-dimensional feature embeddings and a k-nearest neighbors algorithm. To overcome the data imbalance challenge in the oracle training process, we propose a dataset re-sampling method. Moreover, to further improve the label correction process, we adopt a self-supervised contrastive learning method to train the oracle.
In the second step, to address the data imbalance problem, we introduce the logits adjustment adversarial training, which can help the model learn a Bayes-optimal distribution. By obtaining correct labels from the oracle, we can approximate the true label distribution, which is adopted to adjust the model's predictions, allowing the model to achieve comparable robustness to previous AT methods <cit.>. Furthermore, we introduce interactions between the oracle and the model to make the model obtain high clean accuracy and robustness even on an imbalanced dataset with massive label noise.
Extensive experimental results show that OAT achieves higher clean accuracy and robustness on the noisy and imbalanced training dataset. Overall, our contributions can be summarized as follows.
* We propose the first AT strategy, OAT, aiming to solve a real-world problem, i.e., adversarial training on a noisy and imbalanced dataset.
* OAT outperforms previous works under various practical scenarios. Specifically, it achieves up to 80.72% clean accuracy and 42.84% robust accuracy on a heavily imbalanced dataset with massive label noise, which are about 50% and 20% higher than SOTA methods.
* Our comprehensive experiments can inspire researchers to propose more approaches to minimize the performance gap between ideal datasets and practical datasets.
§ RELATED WORKS
§.§ Noisy Label Recognition
Label noise is a common threat in practice because the data annotation process heavily depends on the knowledge of the workers.
Recently, numerous works aim to address the label noise in image recognition from different perspectives, including new model architectures <cit.>, robust loss functions <cit.>, label correction <cit.> and sample selection <cit.>. Specifically, Goldberger et al. <cit.> proposed a noise adaptation layer to model the label transition pattern with a noise transition matrix. However, the estimation error between the adaptation layer and real label noise distribution is large when the noise rate is high in the training set, causing worse results. For the robust loss functions, Ghosh et al. <cit.> proved that the Mean Absolute Error (MAE) loss is robust to the label noise, but it harms the model's generalizability. Label correction <cit.> is another way to address the label noise problem.
Existing methods aim to learn the correct label mapping and then correct the wrong labels. Li et al. <cit.> proposed a sample selection method, adopting two models to adaptively choose samples with smaller loss values as clean data and samples with larger loss values as noisy data. Then, each model predicts a label for the noisy data and provides them to its peer model to learn together with clean data.
§.§ Long-tailed Recognition
Data imbalance is common in collected large datasets, since data belonging to some categories are naturally rare, e.g., special diseases in medical datasets (Skin-7 <cit.>), endangered species in animal datasets (iNaturalist 2018 <cit.>). Such imbalanced data distribution will harm the model's generalizability <cit.>. Long-tailed recognition is proposed to solve this real-world problem and train models on imbalanced datasets. A straightforward approach is to re-sample the training distribution to make it more balance, such as random under-sampling head classes <cit.> and random over-sampling tail classes <cit.>. Recently, a logits adjustment method is proposed <cit.>, solving the dilemma that models lean to classify samples into head classes with high probability.
§.§ Adversarial Training
Adversarial training (AT) <cit.> is one of the most famous approaches to increase the robustness of models. It generates on-the-fly AEs to train the models. Recently, several works are proposed to promote AT in real-world applications. Zheng et al. <cit.> proposed an efficient AT method based on the transferability of AEs to reduce the AE generation cost, making it possible to adopt AT on large datasets, such as ImageNet <cit.>. Researchers also studied the behaviors of models trained on randomly labeled datasets with AT and found that models trained with AT can memorize those random labels <cit.>. Based on the observation, they proposed new training algorithms to address the overfitting problem, which can also be adopted to train models on noisy datasets. For another practical problem, RoBal <cit.> is proposed to meet the imbalanced dataset scenario.
To the best of our knowledge, there is no work focusing on training models on both imbalanced and noisy datasets with AT. We step forward to real-world applications and explore this threat model in this paper.
Our method combines label refurbishment and distribution re-balancing, achieving state-of-the-art results under different combinations of label noise and data imbalance settings.
§ PRELIMINARIES
In the following, we provide the necessary definitions of datasets, label noise, and label distribution before presenting the proposed methods.
For supervised learning algorithms, we consider a dataset with two basic components, i.e., the set of data and the label mapping. We give a formal definition of a dataset[We leave the open-set problem <cit.> as future work. In this paper, all data with incorrect labels have correct labels within the label set of the dataset <cit.>.] as follows:
Suppose a set 𝒮 and a mapping 𝒜 satisfy 𝒜(x) ∈ [C], where x∈𝒮. The tuple (𝒮, 𝒜) is called a dataset 𝒟(𝒮, 𝒜). C represents the number of classes. 𝒜(x) is the label of data x.
Clearly, given a set 𝒮 with cardinality |𝒮| and C classes, where |𝒮| > C, there are C + |𝒮|! ∑_{i=2}^{C} \binom{C}{i} \binom{|𝒮|-1}{i-1} (i)! different mappings, where |𝒮|! and (i)! are the factorials of |𝒮| and i.
Given a set 𝒮 and the number of classes C, 𝔄 contains all mappings 𝒜, satisfying 𝒜(x) ∈ [C] for x ∈𝒮.
With the set 𝔄, we can give a special label mapping 𝒜_gt under certain cultural knowledge 𝔎. Every person with knowledge 𝔎 will agree with the output of 𝒜_gt for every x ∈𝒮. Then, we call the dataset 𝒟(𝒮, 𝒜_gt) a clean dataset without label noise. Otherwise, any 𝒜∈𝔄 that is not 𝒜_gt constructs a noisy dataset 𝒟(𝒮, 𝒜). So, whether a dataset contains label noise depends on 𝒜 and is independent of 𝒮. Formally, we can define the noise ratio (NR) of a dataset 𝒟(𝒮, 𝒜) as NR = ∑_x∈𝒮1(𝒜(x) ≠𝒜_gt(x))/|𝒮|, where |𝒮| is the number of data in the set 𝒮.
Given a dataset 𝒟(𝒮, 𝒜), N_i = ∑_x∈𝒮1(𝒜(x)=i), representing the number of data in the set 𝒮 mapped into class i by 𝒜.
In Definition <ref>, we count the number of data for each class i based on the output of 𝒜. So, given a dataset 𝒟(𝒮, 𝒜), we can calculate its imbalanced ratio (IR) under 𝒜 and the true imbalanced ratio (IR_gt) under 𝒜_gt, with IR = min (N_i)/max (N_i). Usually, if 𝒜≠𝒜_gt, the label distributions will be different for the clean dataset and the noisy dataset. We use 𝒟 to represent a dataset in the following sections when there is no ambiguity.
In practice, obtaining the mapping 𝒜_gt requires lots of additional effort, so the dataset owner usually adopts a plausible mapping 𝒜 to approximate the correct mapping, which will introduce label noise into the dataset. Under this situation, both the mapping 𝒜_gt and the corresponding correct label distribution are unknown. So, reconstructing a more precise label mapping 𝒜' from the known one 𝒜 to decrease the label noise in the dataset and calculating the correct label distribution are both required to train a model with AT, for AE generation and loss backpropagation.
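Given integer label arrays, the two quantities defined above reduce to a few lines of NumPy, as sketched below; the clean labels are of course unavailable in practice and appear here only to make the definition of NR concrete.

```python
import numpy as np

def noise_ratio(noisy_labels, clean_labels):
    """NR: fraction of samples whose annotation disagrees with the ground-truth mapping."""
    noisy_labels, clean_labels = np.asarray(noisy_labels), np.asarray(clean_labels)
    return float((noisy_labels != clean_labels).mean())

def imbalance_ratio(labels, num_classes):
    """IR = min_i N_i / max_i N_i under the given label mapping."""
    counts = np.bincount(np.asarray(labels), minlength=num_classes)
    return counts.min() / counts.max()
```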
§ OMNIPOTENT ADVERSARIAL TRAINING
To address the label noise and imbalanced data distribution problems, we introduce an oracle 𝒪 into the training process to improve the robustness of the AT-model ℳ, and propose a new training framework, named Omnipotent Adversarial Training (OAT).
Figure <ref> illustrates the overall workflow of OAT, which consists of two key processes: the oracle training (OT) and the adversarial training (AT).
In OAT, the model owner aims to leverage the oracle 𝒪 to provide correct annotations to train an AT-model ℳ to obtain robustness on the dataset 𝒟. The oracle can be represented as 𝒪(·) = 𝒪_C (𝒪_F (·)), where 𝒪_F is the feature encoder, and 𝒪_C is the classification layer. The AT-model ℳ can be represented as ℳ(·) = ℳ_C (ℳ_F (·)), where ℳ_F is the feature encoder, and ℳ_C is the classification layer. We use the same architecture for 𝒪 and ℳ.
In every training epoch, we first train the oracle, then adopt it to predict the labels for the dataset 𝒟, and finally use the predictions as annotations to generate AEs and train the AT-model ℳ. Below, we present the details of the OT and AT processes.
§.§ Oracle Training
Unlike the traditional model training process that focuses on achieving strong generalizability on test data, oracle training aims to optimize the oracle's ability to predict training samples as accurately as the ground-truth set 𝒜_gt. This unique objective motivates us to develop an effective approach to training the oracle. If the oracle is trained under the annotations from the label mapping 𝒜, the training set 𝒟 can be both noisy and imbalanced, hindering the oracle's ability to approximate the target mapping 𝒜_gt. To overcome these issues, we introduce four main techniques, i.e., dataset re-sampling, label refurbishment, dataset split, and contrastive self-supervised learning.
Dataset Re-sampling (1 in Figure <ref>). Training a model to fit an imbalanced label distribution is more challenging than training a model on a balanced one <cit.>. Based on this prior, we over-sample the dataset 𝒟(𝒮, 𝒜) to make the number of data for every class equal. Specifically, we first find out the largest number of data N_max=max(N_i) among all classes. For each class i, we fix all data x, satisfying 𝒜(x)=i. So, there will be N_i data in class i. Then, we randomly and repeatedly select N_max - N_i data from the fixed data with replacement and add them into the set 𝒮 for class i.
This process yields N_max samples for every class, and we refer to the resulting balanced dataset as 𝒟'(𝒮', 𝒜).
The dataset re-sampling process is only launched the first time the OT process runs, and the set 𝒮' is generated once and for all.
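A minimal sketch of this re-sampling step is shown below, assuming the annotations of 𝒟 are given as an integer array aligned with the data; each class is assumed to contain at least one sample, and the returned indices define the balanced set 𝒮'.

```python
import numpy as np

def oversample_to_balance(labels, num_classes, seed=0):
    """Over-sample every class (with replacement) up to the size of the largest class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    per_class = [np.flatnonzero(labels == c) for c in range(num_classes)]
    n_max = max(len(idx) for idx in per_class)
    balanced = []
    for idx in per_class:
        extra = rng.choice(idx, size=n_max - len(idx), replace=True)  # repeats existing samples
        balanced.append(np.concatenate([idx, extra]))
    return np.concatenate(balanced)                                   # indices into the original set S
```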
Label Refurbishment and Dataset Split (2 in Figure <ref>). This technique is introduced to improve the prediction accuracy of the oracle 𝒪. It has been found that the model first learns samples with correct labels <cit.>. So, in the early training phase, the model gives higher confidence scores for correctly labeled data. Due to the model's generalizability, the samples with incorrect labels will be classified into correct classes with high confidence. Our idea is to use a threshold θ_r to refurbish labels as follows:
𝒜_r(x) =
 𝒜(x),              if max(σ(𝒪(x))) < θ_r
 argmax(σ(𝒪(x))),   if max(σ(𝒪(x))) ≥ θ_r
where 𝒪(x) is the logits output of data x and σ(·) is the softmax function. After label refurbishment, we will obtain a dataset 𝒟'(𝒮', 𝒜_r), which could contain less label noise.
To train our oracle as meticulously as possible, we split the dataset 𝒟'(𝒮', 𝒜_r) into a clean one and a noisy one. Previous works adopt the values of the loss function <cit.> or predicted confidence scores <cit.> to identify whether the data have correct annotations or not, which is not stable and can fail under massive label noise <cit.>. Different from them, we adopt a non-parametric k-nearest neighbors (k-NN) model 𝒦 to split the dataset. The insight behind our method is that models trained in a contrastive self-supervised manner will automatically map the data belonging to the same class into the neighbor feature embedding <cit.>, which indicates that data in the same class will have more similar features than data from different classes. Therefore, we first adopt 𝒦 to find the k-nearest neighbors for each data x in the feature space. Then, we calculate the predicted label L^𝒦_x from 𝒦 by finding the class which contains most of the neighbors for each data x. If the label L^𝒦_x is the same as 𝒜_r(x), we add x into the clean set 𝒮'_C. Otherwise, we add x into the noisy set 𝒮'_N. After the label refurbishment and dataset split, we will have two new datasets, 𝒟'(𝒮'_C, 𝒜_r) containing less label noise and 𝒟'(𝒮'_N, 𝒜_r) containing more label noise, which are named 𝒟'_C and 𝒟'_N, respectively.
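The refurbishment and split could be implemented as sketched below. The scikit-learn k-NN backend and the function signature are our own choices; the thresholds follow the hyperparameters reported later (θ_r = 0.8, k = 200), and the oracle's softmax outputs and feature embeddings are assumed to be precomputed for the whole set 𝒮'.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def refurbish_and_split(probs, labels, features, num_classes, theta_r=0.8, k=200):
    """probs: (N, C) oracle softmax outputs; labels: (N,) current annotations A;
    features: (N, D) oracle embeddings. Returns refurbished labels A_r and a clean mask."""
    probs, labels = np.asarray(probs), np.asarray(labels).copy()

    # label refurbishment: trust the oracle only when it is sufficiently confident
    confident = probs.max(axis=1) >= theta_r
    labels[confident] = probs[confident].argmax(axis=1)

    # k-NN prediction in feature space: majority label among the k nearest neighbours
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features)
    _, neigh = nn.kneighbors(features)                 # first neighbour is the sample itself
    neigh_labels = labels[neigh[:, 1:]]                # (N, k)
    knn_pred = np.array([np.bincount(row, minlength=num_classes).argmax()
                         for row in neigh_labels])

    clean_mask = knn_pred == labels                    # True -> D'_C, False -> D'_N
    return labels, clean_mask
```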
Contrastive Self-Supervised Learning (3 in Figure <ref>). In prior works, models trained in a self-supervised manner have been shown to be more robust against label noise <cit.> and label imbalance <cit.>. So, we borrow a contrastive learning approach, BYOL <cit.>, but remove the momentum encoder, for two reasons. The first one is that Chen et al. <cit.> proved that using a shared feature encoder to replace the momentum encoder can also achieve good results. The second reason is that using a shared encoder can improve the efficiency and reduce the training cost. We introduce two additional modules 𝒪_H and 𝒪_P to participate in the contrastive learning part. Because contrastive learning does not require labels, we directly adopt the full dataset 𝒟' to train the oracle, and the loss can be represented as:
ℒ_COS = -𝔼_x∽𝒟'[ 𝒪_H (𝒪_F (τ_1 (x))) ·𝒪_P (𝒪_H (𝒪_F (τ_2 (x)))) / ( ‖𝒪_H (𝒪_F (τ_1 (x)))‖_2 ·‖𝒪_P (𝒪_H (𝒪_F (τ_2 (x))))‖_2 ) ],
where τ_1 is a weak data augmentation strategy (only cropping and flipping) and τ_2 is a strong data augmentation strategy based on the AutoAugment <cit.>.
For the supervised learning part, we only adopt the sample in the previous separated clean dataset 𝒟'_C, and the loss is:
ℒ_CE = 𝔼_x,𝒜_r(x)∽𝒟'_C cross-entropy(𝒪 (x), 𝒜_r(x)).
Furthermore, to better leverage the knowledge from the oracle, we expect that the oracle can provide the AT-model ℳ more different prediction distributions from ℳ. So, we adopt a penalty term described as follows:
ℒ_MSE = -𝔼_x∽𝒟'_C MSE(σ(𝒪(x)), σ(ℳ(x)))
Overall, the loss function for the oracle training is
ℒ_𝒪 = ℒ_COS + ℒ_CE + ℒ_MSE.
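Putting the three terms together, one oracle update could look like the following sketch. The module attributes (O_F, O_C, O_H, O_P) mirror the names used in the text but are assumptions about how the oracle is organized in code; x_weak and x_strong are the two augmented views τ_1(x) and τ_2(x), and (x_clean, y_clean) comes from the separated clean subset 𝒟'_C.

```python
import torch
import torch.nn.functional as F

def oracle_loss(oracle, at_model, x_weak, x_strong, x_clean, y_clean):
    """One step of the oracle objective L_O = L_COS + L_CE + L_MSE (a sketch)."""
    # contrastive term: negative cosine similarity between the two augmented views
    z1 = oracle.O_H(oracle.O_F(x_weak))
    z2 = oracle.O_P(oracle.O_H(oracle.O_F(x_strong)))
    l_cos = -F.cosine_similarity(z1, z2, dim=-1).mean()

    # supervised term, computed on the clean subset D'_C only
    logits = oracle.O_C(oracle.O_F(x_clean))
    l_ce = F.cross_entropy(logits, y_clean)

    # penalty pushing the oracle's predictions away from the AT-model's predictions
    with torch.no_grad():                              # the AT-model is not updated here
        p_model = F.softmax(at_model(x_clean), dim=-1)
    l_mse = -F.mse_loss(F.softmax(logits, dim=-1), p_model)

    return l_cos + l_ce + l_mse
```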
§.§ Adversarial Training
Although we adopt an oracle to correct the wrong annotations, this is not enough to train a robust model on a dataset with an unknown label distribution. Based on a previous study <cit.>, it is important to design specific approaches to address the dataset imbalance, because training a model on a long-tailed dataset can cause it to badly overfit the head classes. In the AT stage of OAT, we combine two approaches, i.e., label distribution estimation and logits adjustment AT, to address the challenges together.
Label Distribution Estimation (4 in Figure <ref>). As the considered training set can be both noisy and imbalanced, it is important to infer the correct label annotations and label distribution. To obtain a relatively precise label distribution, we first adopt the oracle 𝒪 to predict the label for each sample in the dataset 𝒟. To make it clear, we define a new label mapping based on the oracle as follows:
𝒜^𝒪 (x) = argmax(σ(𝒪(x))), x ∈𝒮.
So, the label distribution predicted by the oracle is
N^𝒪_i = ∑_x∈𝒮1(𝒜^𝒪(x)=i), i∈ [C],
where C is the number of classes in the dataset 𝒟.
Logits Adjustment AT (5 in Figure <ref>). To overcome the over-confidence in long-tailed recognition, we study the previous logits adjustment approach <cit.> with the label distribution N^𝒪_i. Specifically, we adjust the model ℳ's output logits during the training process in the following way:
l = ℳ(x) + log ([N^𝒪_1, N^𝒪_2, …, N^𝒪_C]).
Whether the label distribution is a uniform one or a long-tailed one, the logits adjustment translates the model's confidence scores into Bayes-optimal predictions <cit.> under the current label distribution, making it a universal solution for all possible label distributions.
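The two steps above (estimating N_i^𝒪 and shifting the logits by the log prior) could be sketched as follows; the function names and the clamping of empty classes are our own additions.

```python
import torch

@torch.no_grad()
def estimate_class_counts(oracle, loader, num_classes, device):
    """N_i^O: how many training samples the oracle assigns to each class i."""
    counts = torch.zeros(num_classes, device=device)
    oracle.eval()
    for x, _ in loader:                                  # the stored annotations are ignored here
        pred = oracle(x.to(device)).argmax(dim=-1)
        counts += torch.bincount(pred, minlength=num_classes).float()
    return counts

def adjusted_logits(model, x, class_counts):
    """l = M(x) + log([N_1^O, ..., N_C^O]); used for both AE generation and the training loss."""
    log_prior = torch.log(class_counts.clamp(min=1.0))   # clamp guards against empty classes
    return model(x) + log_prior
```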
The logits adjustment AT can be divided into two steps, i.e., AE generation and model training. In the AE generation step, we simply follow PGD-AT <cit.> to generate AE. This step can be formulated as
x_adv = PGD(ℳ, x, 𝒜^𝒪(x)),
where the PGD attack accepts as input a classifier model ℳ, a clean sample x and its corresponding label 𝒜^𝒪(x), and returns an AE x_adv. We adjust the output logits for the model during the AE generation.
In the model training step, we consider the oracle as a soft label generator, and adopt its confidence scores as labels to train the AT-model ℳ. It can be seen as a strong and adaptive label smoothing method <cit.>, which further addresses robust overfitting <cit.>. The loss function is written as
ℒ_CE = -𝔼_x∽𝒟 ∑_i=1^C σ(𝒪(x))_i ·log( σ( ℳ(x_adv) + log([N^𝒪_1, N^𝒪_2, …, N^𝒪_C]) )_i ).
To further leverage the feature embedding generated by the oracle, we add a contrastive learning loss into the model training step. This loss has the same formula as the contrastive loss in the oracle training process:
ℒ_COS = -𝔼_x∽𝒟[ 𝒪_H (𝒪_F (x)) ·𝒪_P (𝒪_H (ℳ_F (x_adv))) / ( ‖𝒪_H (𝒪_F (x))‖_2 ·‖𝒪_P (𝒪_H (ℳ_F (x_adv)))‖_2 ) ],
where we consider the PGD attack as a very strong data augmentation strategy.
Overall, the loss function for the adversarial training is
ℒ_ℳ = ℒ_CE + ℒ_COS.
In our experiments, we consider ℒ_MSE in ℒ_𝒪 and ℒ_COS in ℒ_ℳ to be the two terms that realize the oracle-model interaction. We will explore the effectiveness of this interaction through ablation studies in Section <ref>.
§ EXPERIMENTS
§.§ Configurations
Datasets and models.
We adopt two datasets to evaluate our proposed OAT, i.e., CIFAR-10 and CIFAR-100 <cit.>. We generate imbalanced datasets based on the exponential method <cit.>, which is widely used in previous papers <cit.>. For the label noise generation, we consider two types of label noise, i.e., symmetric noise and asymmetric noise, which are common settings in previous works <cit.>. Specifically, symmetric noise means the noisy label is uniformly selected from all possible labels except the ground-truth one. Asymmetric noise simulates a more practical scenario, where the ground-truth label can only be changed into a new one with similar semantic information, e.g., truck → automobile, bird → airplane, deer → horse, and cat → dog. We only apply asymmetric noise to CIFAR-10, as we cannot find prior works studying asymmetric noise on CIFAR-100. When we generate a label-noisy and imbalanced dataset, we first generate a dataset under the given NR and then use the exponential method on the noisy labels to sample it into a long-tailed dataset under the given IR, which guarantees that all classes contain at least one correct sample. So in some cases, the ground-truth label distribution can be balanced while the noisy label distribution is badly imbalanced, which increases the difficulty of adversarial training. For the model structure, because the oracle and AT-model in OAT are based on ResNet-18 <cit.>, we implement all baseline methods on ResNet-18 to make a fair comparison.
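For reference, the noisy and imbalanced training sets could be generated along the lines of the sketch below. The class indices in the asymmetric map assume the standard CIFAR-10 ordering, and the exponential decay of per-class counts is one common way to realize the exponential imbalance method; the exact generation code used in the paper may differ.

```python
import numpy as np

ASYM_MAP_CIFAR10 = {9: 1, 2: 0, 4: 7, 3: 5}   # truck->automobile, bird->airplane, deer->horse, cat->dog

def add_symmetric_noise(labels, noise_ratio, num_classes, seed=0):
    """Flip a fraction of labels uniformly to any other class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    for i in np.flatnonzero(rng.random(len(labels)) < noise_ratio):
        labels[i] = rng.choice([c for c in range(num_classes) if c != labels[i]])
    return labels

def add_asymmetric_noise(labels, noise_ratio, seed=0):
    """Flip a fraction of labels of the mapped classes to a semantically similar class."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    for i in range(len(labels)):
        if labels[i] in ASYM_MAP_CIFAR10 and rng.random() < noise_ratio:
            labels[i] = ASYM_MAP_CIFAR10[labels[i]]
    return labels

def exponential_subsample(labels, imbalance_ratio, num_classes, seed=0):
    """Keep an exponentially decaying number of samples per class (IR = n_min / n_max)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    n_max = max(int(np.sum(labels == c)) for c in range(num_classes))
    keep = []
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        n_keep = max(1, int(n_max * imbalance_ratio ** (c / (num_classes - 1))))
        keep.append(rng.choice(idx, size=min(n_keep, len(idx)), replace=False))
    return np.concatenate(keep)                          # indices of the retained samples
```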
Baseline.
We consider five baseline methods, i.e., PGD-AT <cit.>, TRADES <cit.>, SAT <cit.>, TE <cit.> and RoBal <cit.>. Specifically, PGD-AT and TRADES are two representative AT strategies, which are proposed to improve the model's robustness on balanced and clean datasets. SAT and TE study the memorization of AT under random labels. Some of their experimental results are obtained on datasets with random noise and achieve good results, so we consider that they can be adopted to train models on noisy datasets. In order to make a fair comparison, we adopt the PGD version of SAT and TE, based on their official implementations. RoBal is proposed to solve the long-tailed AT challenge. We compare OAT with these baseline methods under various settings.
Implementation Details.
For OAT, we adopt the same k-NN structure as SSR+ <cit.> with k=200, and follow the hyperparameter setup in its implementation, i.e., θ_r=0.8. 𝒪_H and 𝒪_P are two MLPs with one hidden layer, whose hidden dimension is 256 and output dimension is 128. We discuss the training cost overhead in Appendix <ref>.
To evaluate the robustness and clean accuracy of the baselines and OAT, we follow the training strategy proposed in <cit.>, except for RoBal, which follows a different training setting for long-tailed datasets <cit.>. All other hyperparameters in the baseline methods follow their official implementations. Specifically, for all methods, we use SGD as the optimizer, with the initial learning rate 0.1, momentum 0.9, weight decay 0.0005, and batch size 128. For RoBal, the total number of training epochs is 80, and we decay the learning rate at the 60-th and 75-th epoch with a factor 0.1. For others, the total number of training epochs is 200, and the learning rate decays at the 100-th and 150-th epoch with a factor 0.1. Note that the learning rate decay is only for the AT-model in OAT, while the oracle does not need to adjust the learning rate, because we observe that a larger learning rate can slow down the convergence speed of the oracle and improve the AT-model's robustness by introducing uncertainty in the oracle's predictions. For adversarial training, except for TRADES, we adopt l_∞-norm PGD <cit.>, with a maximum perturbation size ϵ=8/255 for 10 iterations, and step length α=2/255 in each iteration. For TRADES, we follow its official implementation, with a maximum perturbation size ϵ=8/255 for 10 iterations, the step length α=2/255 in each iteration, and robust loss scale β=6.0.
Metrics.
In the main paper, we report the clean accuracy (CA) and robust accuracy (RA) under AutoAttack <cit.>. Other results under different attacks can be found in Appendix <ref>. We save the “Best” model with the highest robustness on the test set under PGD-20 and the “Last” model at the end of training. Due to page limit, some results of the “Last” models are in Appendix <ref>.
§.§ Ablation Study
We first explore the effectiveness of different components proposed in OAT, including the oracle-model interactions and logits adjustment. Table <ref> presents the results on a balanced and an imbalanced clean dataset, respectively. It is clear that with the oracle-model interaction, both clean accuracy and robust accuracy are improved. Furthermore, the results indicate that with the interaction, the robust overfitting is mitigated. On the other hand, the logits adjustment will harm the clean accuracy and robustness of models trained on the balanced dataset and cause some robust overfitting on the imbalanced dataset, because the estimated label distribution from the oracle is not as exact as the ground-truth distribution. However, when we train models on an imbalanced dataset, the clean accuracy and robustness of the best model indicate that the effectiveness of the logits adjustment is significant. Overall, both the oracle-model interaction and the logits adjustment are essential components in OAT.
§.§ Results under Label Noise
We study the models trained on balanced but noisy datasets. Tables <ref> and <ref> show the results of the balanced CIFAR-10 dataset containing symmetric and asymmetric noise, respectively. Table <ref> illustrates the results of models trained on the balanced CIFAR-100 dataset with symmetric noise. Symmetric noise harms the clean accuracy of baseline models to a greater degree than the robustness. Clearly, decreasing the clean accuracy will reduce the robust accuracy. So when the noise ratio reaches 0.8, we observe that models trained with baseline methods do not converge, and the robustness is close to zero. Based on the results, it is clear that OAT achieves consistently high clean accuracy and robust accuracy under different settings. Specifically, SAT adopts the model's confidence scores to refurbish the labels, and achieves lower clean accuracy, as a model trained with AEs will be less overconfident on the data <cit.> and have a slower convergence speed, making the label refurbishment fail. On the other hand, TE only works under less label noise and fails when there is massive noise in the dataset. For example, on CIFAR-10 and NR = 0.6, the clean accuracy of the model with the best robust accuracy of OAT is about 32% higher than that of SAT. The robustness of this model is about 6% higher than that of TE. Besides, with the increasing noise ratio, we find that both clean accuracy and robustness face the overfitting challenge. Among all methods, OAT achieves the best results in alleviating overfitting, because of the adaptive label smoothing from the oracle.
§.§ Results under Data Imbalance
We then study the models trained on imbalanced clean datasets. In long-tailed recognition, the main challenge is the overfitting problem, where the model will give high confidence scores to head classes. Table <ref> displays the results of models trained on long-tailed CIFAR-10 and CIFAR-100.
In this setting, the training algorithms only need to address the long-tailed challenges. So, RoBal, which is designed for long-tailed AT, achieves competitive results compared with OAT. On the other hand, OAT outperforms RoBal in two aspects: consistency and generalization. First, OAT achieves better clean accuracy and robust accuracy on different datasets and different IR values. For example, on CIFAR-10 and IR = 0.05, the clean accuracy and robustness of the “Best” model from OAT are about 4% and 1% higher than those of RoBal. On CIFAR-100 and IR = 0.02, our “Best” model achieves 41.82% clean accuracy and 14.18% robust accuracy, which are 7% and 2% higher than those of RoBal. Second, RoBal requires different hyperparameters for CIFAR-10 and CIFAR-100, while OAT does not require changing its hyperparameters. Overall, for the long-tailed AT task, OAT is more advanced than RoBal.
§.§ Results under Label Noise and Data Imbalance
Finally, we study models trained on imbalanced and noisy datasets. Tables <ref> and <ref> present the results on imbalanced datasets containing symmetric noise, and Table <ref> shows the results on imbalanced CIFAR-10 with asymmetric noise. We consider various IR and NR combinations, with IR selected from {0.1, 0.05, 0.02} and NR selected from {0.4, 0.6}. Results for the other setups are provided in Appendices <ref> and <ref>.
The results show that outperforms the other baselines in both clean accuracy and robustness across various setups and datasets. One important reason is that previous methods cannot correctly estimate the label distribution of an imbalanced and noisy dataset, which hinders the AE generation process; without valid AEs and corresponding labels to train on, either clean accuracy or robustness drops significantly. In contrast, the oracle in can reliably predict the label distribution thanks to the four techniques we propose for the oracle training process, so achieves both high clean accuracy and high robust accuracy. For example, on CIFAR-10 with IR = 0.05 and NR = 0.6 of symmetric noise, the clean accuracy and robust accuracy of the “Best” model from are about 27% and 7% higher than those of RoBal, respectively.
Asymmetric noise can turn a balanced dataset into an imbalanced one: under asymmetric noise, for instance, the number of samples in class “truck” becomes significantly smaller than that in class “automobile”. RoBal achieves better results than the other baselines in this setting. However, owing to the label distribution estimation and logits adjustment in , it outperforms RoBal in both clean accuracy and robustness, which shows that is the better choice across different types of label noise.
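To make this concrete, the sketch below injects class-dependent (asymmetric) noise with the mapping commonly used for CIFAR-10 (truck→automobile, bird→airplane, deer→horse, cat↔dog); the exact pairs used in these experiments are an assumption on our part.

```python
import numpy as np

# Commonly used CIFAR-10 asymmetric mapping (the exact pairs are an assumption):
# truck->automobile, bird->airplane, deer->horse, cat<->dog,
# using the standard CIFAR-10 class indices.
ASYM_MAP = {9: 1, 2: 0, 4: 7, 3: 5, 5: 3}

def inject_asymmetric_noise(labels, noise_ratio, seed=0):
    """Flip a fraction `noise_ratio` of each mapped class to its target class,
    which shrinks e.g. "truck" relative to "automobile" and turns a balanced
    label distribution into an imbalanced one."""
    rng = np.random.default_rng(seed)
    orig = np.asarray(labels)
    noisy = orig.copy()
    for src, dst in ASYM_MAP.items():
        idx = np.flatnonzero(orig == src)  # flip based on the original labels
        n_flip = int(noise_ratio * len(idx))
        noisy[rng.choice(idx, size=n_flip, replace=False)] = dst
    return noisy
```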
§.§ Label Distribution Correction
To evaluate the quality of the estimated label distribution, we illustrate the oracle's predicted label distribution in Figure <ref>; additional cases can be found in Appendix <ref>. We use “Prior” to denote the label distribution of the given dataset and “GT” to denote the ground-truth distribution of clean labels, which is unknown for a noisy dataset. We plot the estimated label distribution at the 10th, 50th, and 100th training epochs. We consider a complex case in which both the clean labels and the noisy labels are long-tailed. The results show that our oracle can correctly recover the label distribution in this scenario, which is why outperforms other baselines in various settings.
§ CONCLUSION AND FUTURE WORK
We propose a new training strategy, , to address real-world adversarial training challenges, including label noise and data imbalance. By introducing an oracle, our method achieves state-of-the-art results under different evaluation setups. We hope that the dataset re-sampling, the logits-adjusted AT, and the other proposed techniques can inspire researchers to explore more effective training strategies for practical use.
The main limitation of is the performance drop under massive asymmetric noise, although it still performs much better than prior works. From the results, we find that models trained on a dataset containing massive asymmetric label noise have lower clean accuracy and are more prone to overfitting the training set. Addressing this challenge is an important direction for future work.
§ ACKNOWLEDGEMENT
This work is supported under the RIE2020 Industry Alignment Fund–Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contributions from the industry partner(s). It is also supported in part by Singapore Ministry of Education (MOE) AcRF Tier 2 MOE-T2EP20121-0006 and AcRF Tier 1 RS02/19.
§ FULL TABLES OF MAIN PAPER
Due to the page limit, we cannot show the complete tables in the main paper, so we provide the full results in this supplementary material for further reference. These tables contain more results under different configurations, and they confirm the advantages of in both clean accuracy and robustness. Specifically, Tables <ref> and <ref> show the full results of models trained on balanced but noisy datasets, Tables <ref> and <ref> cover models trained on clean but imbalanced datasets, and Tables <ref>, <ref>, and <ref> report models trained on imbalanced and noisy datasets for further evaluation of the complex scenarios.
§ OTHER SETUPS FOR IMBALANCED AND NOISY DATASETS
Besides the settings discussed in the main paper, where IR is selected from {0.1, 0.05, 0.02} and NR is selected from {0.4, 0.6}, we report results for NR = 0.2 under different IRs. Tables <ref>, <ref>, and <ref> correspond to CIFAR-10 with symmetric noise, CIFAR-10 with asymmetric noise, and CIFAR-100 with symmetric noise, respectively. The results show that outperforms all baselines under these setups as well.
§ OTHER ATTACKS
Besides AutoAttack <cit.>, we consider other L_∞-norm and L_2-norm attacks to evaluate the robustness of the models trained with . Specifically, Tables <ref>, <ref>, and <ref> show the results under four L_∞-norm attacks, i.e., PGD-20, PGD-100 <cit.>, CW-100 <cit.>, and AutoAttack (AA) <cit.>. For the CW attacks, we replace the CE loss in PGD with the CW loss. The attack settings are ϵ=8/255 and step size η=2/255, with 20 attack steps for PGD-20 and 100 for PGD-100 and CW-100. Tables <ref>, <ref>, and <ref> show the results under three L_2-norm attacks: for the PGD attacks, the maximum perturbation size is ϵ=0.5 and the step size is α=0.1, and we consider the 20-step PGD-20 and the 100-step PGD-100; for the CW attack, we again replace the CE loss in PGD with the CW loss. Overall, under both L_∞-norm and L_2-norm attacks, models trained with achieve high clean accuracy and robust accuracy, which shows that is an effective strategy for addressing the data imbalance and label noise challenges in adversarial training.
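For completeness, the following is a minimal sketch of the L_∞-norm PGD attack with the settings listed above (ϵ = 8/255, step size 2/255, 20 or 100 iterations), assuming inputs normalized to [0, 1] and a standard PyTorch classifier; swapping the cross-entropy loss for a CW-style margin loss yields the CW-100 variant.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, step=2/255, n_steps=20):
    """L_inf PGD with a random start; inputs are assumed to lie in [0, 1]."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(n_steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            # Gradient-sign step, then project back into the eps-ball.
            delta = torch.clamp(delta + step * grad.sign(), -eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0).detach()
```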
§ UNDER EXTREME SETTINGS
Besides the experimental setups discussed in the main paper, we further consider more challenging and extreme label noise and data imbalance configurations. In Table <ref>, we consider the case where 80% of the labels in the dataset are incorrect. The results show that can still achieve high clean accuracy and robustness under various data imbalance ratios, while the other baseline methods fail to converge under such massive label noise.
§ LABEL DISTRIBUTION CORRECTION
To evaluate the quality of the estimated label distribution, we illustrate the oracle's predicted label distribution in Figure <ref>. We use “Prior” to denote the label distribution of the given dataset and “GT” to denote the ground-truth distribution of clean labels, which is unknown for a noisy dataset. We plot the estimated label distribution at the 10th, 50th, and 100th training epochs. Figures <ref> and <ref> show the estimated distributions for clean datasets, confirming that our oracle can correctly predict both balanced and imbalanced label distributions. Figures <ref> and <ref> plot the label distributions of noisy datasets: in Figure <ref>, the ground-truth labels are almost balanced while the noisy labels are long-tailed, and in Figure <ref>, both the clean and the noisy labels are long-tailed. The results show that our oracle can correctly recover the label distribution under these complex scenarios, which is why outperforms other baselines in various settings.
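One simple way to obtain such an estimated distribution, assumed here purely for illustration, is to aggregate the oracle's argmax predictions over the training set into a normalized class histogram, which can then be compared against the “Prior” and “GT” curves.

```python
import torch

@torch.no_grad()
def estimated_label_distribution(oracle, loader, num_classes, device="cuda"):
    """Normalized histogram of the oracle's argmax predictions over the
    training set; the aggregation scheme is an assumption for illustration."""
    counts = torch.zeros(num_classes, device=device)
    for x, _ in loader:
        preds = oracle(x.to(device)).argmax(dim=1)
        counts += torch.bincount(preds, minlength=num_classes).float()
    return counts / counts.sum()
```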
§ TRAINING COST OVERHEAD
We compare the training time of and PGD-AT on a single RTX 3090 GPU. Our code is implemented in PyTorch 1.12 with CUDA 11.6. When training a model on CIFAR-10, PGD-AT takes 110 seconds per epoch. For our , the oracle training takes 39 seconds per epoch and the adversarial training takes 116 seconds per epoch, so the total training time per epoch is 155 seconds, only 45 seconds longer than PGD-AT. Considering the clean accuracy and robustness obtained with , this overhead is acceptable.
|