Column       Dtype          Range / values
Unnamed: 0   int64          0 – 2.72k
title        stringlengths  14 – 153
Arxiv link   stringlengths  1 – 31
authors      stringlengths  5 – 1.5k
arxiv_id     float64        2k – 2.41k
abstract     stringlengths  435 – 2.86k
Model        stringclasses  1 value
GitHub       stringclasses  1 value
Space        stringclasses  1 value
Dataset      stringclasses  1 value
id           int64          0 – 2.72k
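The table above is the column summary reported by the dataset viewer; the records that follow are the underlying rows flattened to one field per line in the same column order, with empty Model/GitHub/Space/Dataset cells shown as []. As a minimal sketch of how such a table could be loaded and summarized with pandas (the file name cvpr2024_papers.csv and the existence of a CSV export are assumptions, not part of this dump):

```python
import pandas as pd

# Hypothetical local export of the table shown in this dump; the file name is an assumption.
df = pd.read_csv("cvpr2024_papers.csv")

# Reproduce the column summary above: dtypes, value ranges, and string-length ranges.
print(df.dtypes)
print(df["id"].min(), df["id"].max())                # int64 index range (0 ... ~2.72k)
print(df["abstract"].str.len().agg(["min", "max"]))  # abstract lengths (~435 ... ~2.86k)

# "Arxiv link" / "arxiv_id" are missing (null) for papers without an arXiv entry.
print(df["arxiv_id"].isna().sum())
```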
2,600
Transferable and Principled Efficiency for Open-Vocabulary Segmentation
http://arxiv.org/abs/2404.07448
Jingxuan Xu, Wuyang Chen, Yao Zhao, Yunchao Wei
2404.07448
Recent success of pre-trained foundation vision-language models makes Open-Vocabulary Segmentation (OVS) possible. Despite the promising performance this approach introduces heavy computational overheads for two challenges: 1) large model sizes of the backbone; 2) expensive costs during the fine-tuning. These challenges hinder this OVS strategy from being widely applicable and affordable in real-world scenarios. Although traditional methods such as model compression and efficient fine-tuning can address these challenges they often rely on heuristics. This means that their solutions cannot be easily transferred and necessitate re-training on different models which comes at a cost. In the context of efficient OVS we target achieving performance that is comparable to or even better than prior OVS works based on large vision-language foundation models by utilizing smaller models that incur lower training costs. The core strategy is to make our efficiency principled and thus seamlessly transferable from one OVS framework to others without further customization. Comprehensive experiments on diverse OVS benchmarks demonstrate our superior trade-off between segmentation accuracy and computation costs over previous works. Our code is available on https://github.com/Xujxyang/OpenTrans
[]
[]
[]
[]
2,600
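One caveat visible in the schema: arxiv_id is typed float64, which cannot faithfully hold arXiv identifiers ending in zero (the link http://arxiv.org/abs/2304.12160 further down, for instance, corresponds to the float value 2304.1216). A small sketch, continuing the hypothetical df from the loading example above, re-derives a string identifier from the Arxiv link column instead:

```python
# Re-derive a string arXiv identifier from the "Arxiv link" column,
# continuing the hypothetical DataFrame `df` from the previous sketch.
def arxiv_id_from_link(link):
    if not isinstance(link, str) or "/abs/" not in link:
        return None  # missing or malformed link (e.g. NaN for non-arXiv papers)
    return link.rsplit("/abs/", 1)[-1]

df["arxiv_id_str"] = df["Arxiv link"].map(arxiv_id_from_link)

# Example: "http://arxiv.org/abs/2304.12160" -> "2304.12160",
# whereas the float64 column would store 2304.1216 and lose the trailing zero.
```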
2,601
A Unified and Interpretable Emotion Representation and Expression Generation
http://arxiv.org/abs/2404.01243
Reni Paskaleva, Mykyta Holubakha, Andela Ilic, Saman Motamed, Luc Van Gool, Danda Paudel
2404.01243
Canonical emotions such as happy sad and fear are easy to understand and annotate. However emotions are often compound e.g. happily surprised and can be mapped to the action units (AUs) used for expressing emotions and trivially to the canonical ones. Intuitively emotions are continuous as represented by the arousal-valence (AV) model. An interpretable unification of these four modalities --namely Canonical Compound AUs and AV-- is highly desirable for a better representation and understanding of emotions. However such unification remains unknown in the current literature. In this work we propose an interpretable and unified emotion model referred to as C2A2. We also develop a method that leverages labels of the non-unified models to annotate the novel unified one. Finally we modify the text-conditional diffusion models to understand continuous numbers which are then used to generate continuous expressions using our unified emotion model. Through quantitative and qualitative experiments we show that our generated images are rich and capture subtle expressions. Our work allows a fine-grained generation of expressions in conjunction with other textual inputs and offers a new label space for emotions at the same time.
[]
[]
[]
[]
2,601
2,602
Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution
Shangchen Zhou, Peiqing Yang, Jianyi Wang, Yihang Luo, Chen Change Loy
null
Text-based diffusion models have exhibited remarkable success in generation and editing showing great promise for enhancing visual content with their generative prior. However applying these models to video super-resolution remains challenging due to the high demands for output fidelity and temporal consistency which is complicated by the inherent randomness in diffusion models. Our study introduces Upscale-A-Video a text-guided latent diffusion framework for video upscaling. This framework ensures temporal coherence through two key mechanisms: locally it integrates temporal layers into U-Net and VAE-Decoder maintaining consistency within short sequences; globally without training a flow-guided recurrent latent propagation module is introduced to enhance overall video stability by propagating and fusing latent across the entire sequences. Thanks to the diffusion paradigm our model also offers greater flexibility by allowing text prompts to guide texture creation and adjustable noise levels to balance restoration and generation enabling a trade-off between fidelity and quality. Extensive experiments show that Upscale-A-Video surpasses existing methods in both synthetic and real-world benchmarks as well as in AI-generated videos showcasing impressive visual realism and temporal consistency.
[]
[]
[]
[]
2,602
2,603
EvDiG: Event-guided Direct and Global Components Separation
Xinyu Zhou, Peiqi Duan, Boyu Li, Chu Zhou, Chao Xu, Boxin Shi
null
Separating the direct and global components of a scene aids in shape recovery and basic material understanding. Conventional methods capture multiple frames under high frequency illumination patterns or shadows requiring the scene to keep stationary during the image acquisition process. Single-frame methods simplify the capture procedure but yield lower-quality separation results. In this paper we leverage the event camera to facilitate the separation of direct and global components enabling video-rate separation of high quality. In detail we adopt an event camera to record rapid illumination changes caused by the shadow of a line occluder sweeping over the scene and reconstruct the coarse separation results through event accumulation. We then design a network to resolve the noise in the coarse separation results and restore color information. A real-world dataset is collected using a hybrid camera system for network training and evaluation. Experimental results show superior performance over state-of-the-art methods.
[]
[]
[]
[]
2,603
2,604
DeIL: Direct-and-Inverse CLIP for Open-World Few-Shot Learning
Shuai Shao, Yu Bai, Yan Wang, Baodi Liu, Yicong Zhou
null
Open-World Few-Shot Learning (OFSL) is a critical field of research concentrating on the precise identification of target samples in environments with scarce data and unreliable labels thus possessing substantial practical significance. Recently the evolution of foundation models like CLIP has revealed their strong capacity for representation even in settings with restricted resources and data. This development has led to a significant shift in focus transitioning from the traditional method of "building models from scratch" to a strategy centered on "efficiently utilizing the capabilities of foundation models to extract relevant prior knowledge tailored for OFSL and apply it judiciously". Amidst this backdrop we unveil the Direct-and-Inverse CLIP (DeIL) an innovative method leveraging our proposed "Direct-and-Inverse" concept to activate CLIP-based methods for addressing OFSL. This concept transforms conventional single-step classification into a nuanced two-stage process: initially filtering out less probable categories followed by accurately determining the specific category of samples. DeIL comprises two key components: a pre-trainer (frozen) for data denoising and an adapter (tunable) for achieving precise final classification. In experiments DeIL achieves SOTA performance on 11 datasets.
[]
[]
[]
[]
2,604
2,605
4D-DRESS: A 4D Dataset of Real-World Human Clothing With Semantic Annotations
Wenbo Wang, Hsuan-I Ho, Chen Guo, Boxiang Rong, Artur Grigorev, Jie Song, Juan Jose Zarate, Otmar Hilliges
null
The studies of human clothing for digital avatars have predominantly relied on synthetic datasets. While easy to collect synthetic data often fall short in realism and fail to capture authentic clothing dynamics. Addressing this gap we introduce 4D-DRESS the first real-world 4D dataset advancing human clothing research with its high-quality 4D textured scans and garment meshes. 4D-DRESS captures 64 outfits in 520 human motion sequences amounting to 78k textured scans. Creating a real-world clothing dataset is challenging particularly in annotating and segmenting the extensive and complex 4D human scans. To address this we develop a semi-automatic 4D human parsing pipeline. We efficiently combine a human-in-the-loop process with automation to accurately label 4D scans in diverse garments and body movements. Leveraging precise annotations and high-quality garment meshes we establish several benchmarks for clothing simulation and reconstruction. 4D-DRESS offers realistic and challenging data that complements synthetic sources paving the way for advancements in research of lifelike human clothing. Website: https://ait.ethz.ch/4d-dress
[]
[]
[]
[]
2,605
2,606
Feedback-Guided Autonomous Driving
Jimuyang Zhang, Zanming Huang, Arijit Ray, Eshed Ohn-Bar
null
While behavior cloning has recently emerged as a highly successful paradigm for autonomous driving humans rarely learn to perform complex tasks such as driving via imitation or behavior cloning alone. In contrast learning in humans often involves additional detailed guidance throughout the interactive learning process i.e. where feedback often via language provides detailed information as to which part of their trial was performed incorrectly or suboptimally and why. Motivated by this observation we introduce an efficient feedback-based framework for improving behavior-cloning-based training of sensorimotor driving agents. Our key insight is to leverage recent advances in Large Language Models (LLMs) to provide corrective fine-grained feedback regarding the underlying reason behind driving prediction failures. Moreover our introduced network architecture is efficient enabling the first sensorimotor end-to-end training and evaluation of LLM-based driving models. The resulting agent achieves state-of-the-art performance in open-loop evaluation on nuScenes outperforming prior state-of-the-art by over 8.1% and 57.1% in accuracy and collision rate respectively. In CARLA our camera-based agent improves by 16.6% in driving score over prior LIDAR-based approaches.
[]
[]
[]
[]
2,606
2,607
Large Language Models are Good Prompt Learners for Low-Shot Image Classification
http://arxiv.org/abs/2312.04076
Zhaoheng Zheng, Jingmin Wei, Xuefeng Hu, Haidong Zhu, Ram Nevatia
2312.04076
Low-shot image classification where training images are limited or inaccessible has benefited from recent progress on pre-trained vision-language (VL) models with strong generalizability e.g. CLIP. Prompt learning methods built with VL models generate text features from the class names that only have confined class-specific information. Large Language Models (LLMs) with their vast encyclopedic knowledge emerge as the complement. Thus in this paper we discuss the integration of LLMs to enhance pre-trained VL models specifically on low-shot classification. However the domain gap between language and vision blocks the direct application of LLMs. Thus we propose LLaMP Large Language Models as Prompt learners that produces adaptive prompts for the CLIP text encoder establishing it as the connecting bridge. Experiments show that compared with other state-of-the-art prompt learning methods LLaMP yields better performance on both zero-shot generalization and few-shot image classification over a spectrum of 11 datasets. Code will be made available at: https://github.com/zhaohengz/LLaMP.
[]
[]
[]
[]
2,607
2,608
Specularity Factorization for Low-Light Enhancement
http://arxiv.org/abs/2404.01998
Saurabh Saini, P J Narayanan
2404.01998
We present a new additive image factorization technique that treats images to be composed of multiple latent specular components which can be simply estimated recursively by modulating the sparsity during decomposition. Our model-driven RSFNet estimates these factors by unrolling the optimization into network layers requiring only a few scalars to be learned. The resultant factors are interpretable by design and can be fused for different image enhancement tasks via a network or combined directly by the user in a controllable fashion. Based on RSFNet we detail a zero-reference Low Light Enhancement (LLE) application trained without paired or unpaired supervision. Our system improves the state-of-the-art performance on standard benchmarks and achieves better generalization on multiple other datasets. We also integrate our factors with other task specific fusion networks for applications like deraining deblurring and dehazing with negligible overhead thereby highlighting the multi-domain and multi-task generalizability of our proposed RSFNet. The code and data is released for reproducibility on the project homepage.
[]
[]
[]
[]
2,608
2,609
Paint3D: Paint Anything 3D with Lighting-Less Texture Diffusion Models
http://arxiv.org/abs/2312.13913
Xianfang Zeng, Xin Chen, Zhongqi Qi, Wen Liu, Zibo Zhao, Zhibin Wang, Bin Fu, Yong Liu, Gang Yu
2312.13913
This paper presents Paint3D a novel coarse-to-fine generative framework that is capable of producing high-resolution lighting-less and diverse 2K UV texture maps for untextured 3D meshes conditioned on text or image inputs. The key challenge addressed is generating high-quality textures without embedded illumination information which allows the textures to be re-lighted or re-edited within modern graphics pipelines. To achieve this our method first leverages a pre-trained depth-aware 2D diffusion model to generate view-conditional images and perform multi-view texture fusion producing an initial coarse texture map. However as 2D models cannot fully represent 3D shapes and disable lighting effects the coarse texture map exhibits incomplete areas and illumination artifacts. To resolve this we train separate UV Inpainting and UVHD diffusion models specialized for the shape-aware refinement of incomplete areas and the removal of illumination artifacts. Through this coarse-to-fine process Paint3D can produce high-quality 2K UV textures that maintain semantic consistency while being lighting-less significantly advancing the state-of-the-art in texturing 3D objects.
[]
[]
[]
[]
2,609
2,610
VILA: On Pre-training for Visual Language Models
http://arxiv.org/abs/2312.07533
Ji Lin, Hongxu Yin, Wei Ping, Pavlo Molchanov, Mohammad Shoeybi, Song Han
2312.07533
Visual language models (VLMs) rapidly progressed with the recent success of large language models. There have been growing efforts on visual instruction tuning to extend the LLM with visual inputs but lacks an in-depth study of the visual language pre-training process where the model learns to perform joint modeling on both modalities. In this work we examine the design options for VLM pre-training by augmenting LLM towards VLM through step-by-step controllable comparisons. We introduce three main findings: (1) freezing LLMs during pre-training can achieve decent zero-shot performance but lack in-context learning capability which requires unfreezing the LLM; (2) interleaved pre-training data is beneficial whereas image-text pairs alone are not optimal; (3) re-blending text-only instruction data to image-text data during instruction fine-tuning not only remedies the degradation of text-only tasks but also boosts VLM task accuracy. With an enhanced pre-training recipe we build VILA a Visual Language model family that consistently outperforms the state-of-the-art models e.g. LLaVA-1.5 across main benchmarks without bells and whistles. Multi-modal pre-training also helps unveil appealing properties of VILA including multi-image reasoning enhanced in-context learning and better world knowledge. VILA is also deployable on Jetson Orin for on-device VLM.
[]
[]
[]
[]
2,610
2,611
DiLiGenRT: A Photometric Stereo Dataset with Quantified Roughness and Translucency
Heng Guo, Jieji Ren, Feishi Wang, Boxin Shi, Mingjun Ren, Yasuyuki Matsushita
null
Photometric stereo faces challenges from non-Lambertian reflectance in real-world scenarios. Systematically measuring the reliability of photometric stereo methods in handling such complex reflectance necessitates a real-world dataset with quantitatively controlled reflectances. This paper introduces DiLiGenRT the first real-world dataset for evaluating photometric stereo methods under quantified reflectances by manufacturing 54 hemispheres with varying degrees of two reflectance properties: Roughness and Translucency. Unlike qualitative and semantic labels such as diffuse and specular that have been used in previous datasets our quantified dataset allows comprehensive and systematic benchmark evaluations. In addition it facilitates selecting best-fit photometric stereo methods based on the quantitative reflectance properties. Our dataset and benchmark results are available at https://photometricstereo.github.io/diligentrt.html.
[]
[]
[]
[]
2,611
2,612
De-Diffusion Makes Text a Strong Cross-Modal Interface
http://arxiv.org/abs/2311.00618
Chen Wei, Chenxi Liu, Siyuan Qiao, Zhishuai Zhang, Alan Yuille, Jiahui Yu
2311.00618
We demonstrate text as a strong cross-modal interface. Rather than relying on deep embeddings to connect image and language as the interface representation our approach represents an image as text from which we enjoy the interpretability and flexibility inherent to natural language. We employ an autoencoder that uses a pre-trained text-to-image diffusion model for decoding. The encoder is trained to transform an input image into text which is then fed into the fixed text-to-image diffusion decoder to reconstruct the original input a process we term De-Diffusion. Experiments validate both the precision and comprehensiveness of De-Diffusion text representing images such that it can be readily ingested by off-the-shelf text-to-image tools and LLMs for diverse multi-modal tasks. For example a single De-Diffusion model can generalize to provide transferable prompts for different text-to-image tools and also achieves a new state of the art on open-ended vision-language tasks by simply prompting large language models with few-shot examples. Project page: https://dediffusion.github.io/
[]
[]
[]
[]
2,612
2,613
End-to-End Spatio-Temporal Action Localisation with Video Transformers
http://arxiv.org/abs/2304.12160
Alexey A. Gritsenko, Xuehan Xiong, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lucic, Cordelia Schmid, Anurag Arnab
2304.12160
The most performant spatio-temporal action localisation models use external person proposals and complex external memory banks. We propose a fully end-to-end transformer based model that directly ingests an input video and outputs tubelets -- a sequence of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations. And in both cases it predicts coherent tubelets as the output. Moreover our end-to-end model requires no additional pre-processing in the form of proposals or post-processing in terms of non-maximal suppression. We perform extensive ablation experiments and significantly advance the state-of-the-art on five different spatio-temporal action localisation benchmarks with both sparse keyframes and full tubelet annotations.
[]
[]
[]
[]
2,613
2,614
Text-Guided Variational Image Generation for Industrial Anomaly Detection and Segmentation
http://arxiv.org/abs/2403.06247
Mingyu Lee, Jongwon Choi
2403.06247
We propose a text-guided variational image generation method to address the challenge of getting clean data for anomaly detection in industrial manufacturing. Our method utilizes text information about the target object learned from extensive text library documents to generate non-defective data images resembling the input image. The proposed framework ensures that the generated non-defective images align with anticipated distributions derived from textual and image-based knowledge ensuring stability and generality. Experimental results demonstrate the effectiveness of our approach surpassing previous methods even with limited non-defective data. Our approach is validated through generalization tests across four baseline models and three distinct datasets. We present an additional analysis to enhance the effectiveness of anomaly detection models by utilizing the generated images.
[]
[]
[]
[]
2,614
2,615
Self-Adaptive Reality-Guided Diffusion for Artifact-Free Super-Resolution
http://arxiv.org/abs/2403.16643
Qingping Zheng, Ling Zheng, Yuanfan Guo, Ying Li, Songcen Xu, Jiankang Deng, Hang Xu
2403.16643
Artifact-free super-resolution (SR) aims to translate low-resolution images into their high-resolution counterparts with a strict integrity of the original content eliminating any distortions or synthetic details. While traditional diffusion-based SR techniques have demonstrated remarkable abilities to enhance image detail they are prone to artifact introduction during iterative procedures. Such artifacts ranging from trivial noise to unauthentic textures deviate from the true structure of the source image thus challenging the integrity of the super-resolution process. In this work we propose Self-Adaptive Reality-Guided Diffusion (SARGD) a training-free method that delves into the latent space to effectively identify and mitigate the propagation of artifacts. Our SARGD begins by using an artifact detector to identify implausible pixels creating a binary mask that highlights artifacts. Following this the Reality Guidance Refinement (RGR) process refines artifacts by integrating this mask with realistic latent representations improving alignment with the original image. Nonetheless initial realistic-latent representations from lower-quality images result in over-smoothing in the final output. To address this we introduce a Self-Adaptive Guidance (SAG) mechanism. It dynamically computes a reality score enhancing the sharpness of the realistic latent. These alternating mechanisms collectively achieve artifact-free super-resolution. Extensive experiments demonstrate the superiority of our method delivering detailed artifact-free high-resolution images while reducing sampling steps by 2X. We release our code at https://github.com/ProAirVerse/Self-Adaptive-Guidance-Diffusion.git.
[]
[]
[]
[]
2,615
2,616
End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames
http://arxiv.org/abs/2311.17241
Shuming Liu, Chen-Lin Zhang, Chen Zhao, Bernard Ghanem
2311.17241
Recently temporal action detection (TAD) has seen significant performance improvement with end-to-end training. However due to the memory bottleneck only models with limited scales and limited data volumes can afford end-to-end training which inevitably restricts TAD performance. In this paper we reduce the memory consumption for end-to-end training and manage to scale up the TAD backbone to 1 billion parameters and the input video to 1536 frames leading to significant detection performance. The key to our approach lies in our proposed temporal-informative adapter (TIA) which is a novel lightweight module that reduces training memory. Using TIA we free the humongous backbone from learning to adapt to the TAD task by only updating the parameters in TIA. TIA also leads to better TAD representation by temporally aggregating context from adjacent frames throughout the backbone. We evaluate our model across four representative datasets. Owing to our efficient design we are able to train end-to-end on VideoMAEv2-giant and achieve 75.4% mAP on THUMOS14 being the first end-to-end model to outperform the best feature-based methods.
[]
[]
[]
[]
2,616
2,617
Multimodal Representation Learning by Alternating Unimodal Adaptation
http://arxiv.org/abs/2311.10707
Xiaohui Zhang, Jaehong Yoon, Mohit Bansal, Huaxiu Yao
2311.10707
Multimodal learning which integrates data from diverse sensory modes plays a pivotal role in artificial intelligence. However existing multimodal learning methods often struggle with challenges where some modalities appear more dominant than others during multimodal learning resulting in suboptimal performance. To address this challenge we propose MLA (Multimodal Learning with Alternating Unimodal Adaptation). MLA reframes the conventional joint multimodal learning process by transforming it into an alternating unimodal learning process thereby minimizing interference between modalities. Simultaneously it captures cross-modal interactions through a shared head which undergoes continuous optimization across different modalities. This optimization process is controlled by a gradient modification mechanism to prevent the shared head from losing previously acquired information. During the inference phase MLA utilizes a test-time uncertainty-based model fusion mechanism to integrate multimodal information. Extensive experiments are conducted on five diverse datasets encompassing scenarios with complete modalities and scenarios with missing modalities. These experiments demonstrate the superiority of MLA over competing prior approaches. Our code is available at https://github.com/Cecile-hi/Multimodal-Learning-with-Alternating-Unimodal-Adaptation.
[]
[]
[]
[]
2,617
2,618
MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints
Pengfei Xie, Wenqiang Xu, Tutian Tang, Zhenjun Yu, Cewu Lu
null
This work proposes a novel learning framework for visual hand dynamics analysis that takes into account the physiological aspects of hand motion. The existing models which are simplified joint-actuated systems often produce unnatural motions. To address this we integrate a musculoskeletal system with a learnable parametric hand model MANO to create a new model MS-MANO. This model emulates the dynamics of muscles and tendons to drive the skeletal system imposing physiologically realistic constraints on the resulting torque trajectories. We further propose a simulation-in-the-loop pose refinement framework BioPR that refines the initial estimated pose through a multi-layer perceptron (MLP) network. Our evaluation of the accuracy of MS-MANO and the efficacy of the BioPR is conducted in two separate parts. The accuracy of MS-MANO is compared with MyoSuite while the efficacy of BioPR is benchmarked against two large-scale public datasets and two recent state-of-the-art methods. The results demonstrate that our approach consistently improves the baseline methods both quantitatively and qualitatively.
[]
[]
[]
[]
2,618
2,619
Generate Like Experts: Multi-Stage Font Generation by Incorporating Font Transfer Process into Diffusion Models
Bin Fu, Fanghua Yu, Anran Liu, Zixuan Wang, Jie Wen, Junjun He, Yu Qiao
null
Few-shot font generation (FFG) produces stylized font images with a limited number of reference samples which can significantly reduce labor costs in manual font designs. Most existing FFG methods follow the style-content disentanglement paradigm and employ the Generative Adversarial Network (GAN) to generate target fonts by combining the decoupled content and style representations. The complicated structure and detailed style are simultaneously generated in those methods which may be the sub-optimal solutions for FFG task. Inspired by most manual font design processes of expert designers in this paper we model font generation as a multi-stage generative process. Specifically as the injected noise and the data distribution in diffusion models can be well-separated into different sub-spaces we are able to incorporate the font transfer process into these models. Based on this observation we generalize diffusion methods to model font generative process by separating the reverse diffusion process into three stages with different functions: The structure construction stage first generates the structure information for the target character based on the source image and the font transfer stage subsequently transforms the source font to the target font. Finally the font refinement stage enhances the appearances and local details of the target font images. Based on the above multi-stage generative process we construct our font generation framework named MSD-Font with a dual-network approach to generate font images. The superior performance demonstrates the effectiveness of our model. The code is available at: https://github.com/fubinfb/MSD-Font .
[]
[]
[]
[]
2,619
2,620
Pre-training Vision Models with Mandelbulb Variations
Benjamin Naoto Chiche, Yuto Horikawa, Ryo Fujita
null
The use of models that have been pre-trained on natural image datasets like ImageNet may face some limitations. First this use may be restricted due to copyright and license on the training images and privacy laws. Second these datasets and models may incorporate societal and ethical biases. Formula-driven supervised learning (FDSL) enables model pre-training to circumvent these issues. This consists of generating a synthetic image dataset based on mathematical formulae and pre-training the model on it. In this work we propose novel FDSL datasets based on Mandelbulb Variations. These datasets contain RGB images that are projections of colored objects deriving from the 3D Mandelbulb fractal. Pre-training ResNet-50 on one of our proposed datasets MandelbulbVAR-1k enables an average top-1 accuracy over target classification datasets that is at least 1% higher than pre-training on existing FDSL datasets. With regard to anomaly detection on MVTec AD pre-training the WideResNet-50 backbone on MandelbulbVAR-1k enables PatchCore to achieve 97.2% average image-level AUROC. This is only 1.9% lower than pre-training on ImageNet-1k (99.1%) and 4.5% higher than pre-training on the second-best performing FDSL dataset i.e. VisualAtom-1k (92.7%). Regarding Vision Transformer (ViT) pre-training another dataset that we propose and coin MandelbulbVAR-Hybrid-21k enables ViT-Base to achieve 82.2% top-1 accuracy on ImageNet-1k which is 0.4% higher than pre-training on ImageNet-21k (81.8%) and only 0.1% lower than pre-training on VisualAtom-1k (82.3%).
[]
[]
[]
[]
2,620
2,621
Diffuse Attend and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion
http://arxiv.org/abs/2308.12469
Junjiao Tian, Lavisha Aggarwal, Andrea Colaco, Zsolt Kira, Mar Gonzalez-Franco
2308.12469
Producing quality segmentation masks for images is a fundamental problem in computer vision. Recent research has explored large-scale supervised training to enable zero-shot transfer segmentation on virtually any image style and unsupervised training to enable segmentation without dense annotations. However constructing a model capable of segmenting anything in a zero-shot manner without any annotations is still challenging. In this paper we propose to utilize the self-attention layers in stable diffusion models to achieve this goal because the pre-trained stable diffusion model has learned inherent concepts of objects within its attention layers. Specifically we introduce a simple yet effective iterative merging process based on measuring KL divergence among attention maps to merge them into valid segmentation masks. The proposed method does not require any training or language dependency to extract quality segmentation for any images. On COCO-Stuff-27 our method surpasses the prior unsupervised zero-shot transfer SOTA method by an absolute 26% in pixel accuracy and 17% in mean IoU.
[]
[]
[]
[]
2,621
2,622
TransNeXt: Robust Foveal Visual Perception for Vision Transformers
http://arxiv.org/abs/2311.17132
Dai Shi
2311.17132
Due to the depth degradation effect in residual connections many efficient Vision Transformers models that rely on stacking layers for information exchange often fail to form sufficient information mixing leading to unnatural visual perception. To address this issue in this paper we propose Aggregated Attention a biomimetic design-based token mixer that simulates biological foveal vision and continuous eye movement while enabling each token on the feature map to have a global perception. Furthermore we incorporate learnable tokens that interact with conventional queries and keys which further diversifies the generation of affinity matrices beyond merely relying on the similarity between queries and keys. Our approach does not rely on stacking for information exchange thus effectively avoiding depth degradation and achieving natural visual perception. Additionally we propose Convolutional GLU a channel mixer that bridges the gap between GLU and SE mechanism which empowers each token to have channel attention based on its nearest neighbor image features enhancing local modeling capability and model robustness. We combine aggregated attention and convolutional GLU to create a new visual backbone called TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves state-of-the-art performance across multiple model sizes. At a resolution of 224^2 TransNeXt-Tiny attains an ImageNet accuracy of 84.0% surpassing ConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet accuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of 384^2 a COCO object detection mAP of 57.1 and an ADE20K semantic segmentation mIoU of 54.7.
[]
[]
[]
[]
2,622
2,623
Implicit Discriminative Knowledge Learning for Visible-Infrared Person Re-Identification
http://arxiv.org/abs/2403.11708
Kaijie Ren, Lei Zhang
2403.11708
Visible-Infrared Person Re-identification (VI-ReID) is a challenging cross-modal pedestrian retrieval task due to significant intra-class variations and cross-modal discrepancies among different cameras. Existing works mainly focus on embedding images of different modalities into a unified space to mine modality-shared features. They only seek distinctive information within these shared features while ignoring the identity-aware useful information that is implicit in the modality-specific features. To address this issue we propose a novel Implicit Discriminative Knowledge Learning (IDKL) network to uncover and leverage the implicit discriminative information contained within the modality-specific features. First we extract modality-specific and modality-shared features using a novel dual-stream network. Then the modality-specific features undergo purification to reduce their modality style discrepancies while preserving identity-aware discriminative knowledge. Subsequently this kind of implicit knowledge is distilled into the modality-shared feature to enhance its distinctiveness. Finally an alignment loss is proposed to minimize modality discrepancy on enhanced modality-shared features. Extensive experiments on multiple public datasets demonstrate the superiority of IDKL network over the state-of-the-art methods.
[]
[]
[]
[]
2,623
2,624
Modeling Dense Multimodal Interactions Between Biological Pathways and Histology for Survival Prediction
http://arxiv.org/abs/2304.06819
Guillaume Jaume, Anurag Vaidya, Richard J. Chen, Drew F.K. Williamson, Paul Pu Liang, Faisal Mahmood
2304.06819
Integrating whole-slide images (WSIs) and bulk transcriptomics for predicting patient survival can improve our understanding of patient prognosis. However this multimodal task is particularly challenging due to the different nature of these data: WSIs represent a very high-dimensional spatial description of a tumor while bulk transcriptomics represent a global description of gene expression levels within that tumor. In this context our work aims to address two key challenges: (1) how can we tokenize transcriptomics in a semantically meaningful and interpretable way? and (2) how can we capture dense multimodal interactions between these two modalities? Here we propose to learn biological pathway tokens from transcriptomics that can encode specific cellular functions. Together with histology patch tokens that encode the slide morphology we argue that they form appropriate reasoning units for interpretability. We fuse both modalities using a memory-efficient multimodal Transformer that can model interactions between pathway and histology patch tokens. Our model SURVPATH achieves state-of-the-art performance when evaluated against unimodal and multimodal baselines on five datasets from The Cancer Genome Atlas. Our interpretability framework identifies key multimodal prognostic factors and as such can provide valuable insights into the interaction between genotype and phenotype. Code available at https://github.com/mahmoodlab/SurvPath.
[]
[]
[]
[]
2,624
2,625
Mining Supervision for Dynamic Regions in Self-Supervised Monocular Depth Estimation
http://arxiv.org/abs/2404.14908
Hoang Chuong Nguyen, Tianyu Wang, Jose M. Alvarez, Miaomiao Liu
2404.14908
This paper focuses on self-supervised monocular depth estimation in dynamic scenes trained on monocular videos. Existing methods jointly estimate pixel-wise depth and motion relying mainly on an image reconstruction loss. Dynamic regions remain a critical challenge for these methods due to the inherent ambiguity in depth and motion estimation resulting in inaccurate depth estimation. This paper proposes a self-supervised training framework exploiting pseudo depth labels for dynamic regions from training data. The key contribution of our framework is to decouple depth estimation for static and dynamic regions of images in the training data. We start with an unsupervised depth estimation approach which provides reliable depth estimates for static regions and motion cues for dynamic regions and allows us to extract moving object information at the instance level. In the next stage we use an object network to estimate the depth of those moving objects assuming rigid motions. Then we propose a new scale alignment module to address the scale ambiguity between estimated depths for static and dynamic regions. We can then use the depth labels generated to train an end-to-end depth estimation network and improve its performance. Extensive experiments on the Cityscapes and KITTI datasets show that our self-training strategy consistently outperforms existing self-/unsupervised depth estimation methods.
[]
[]
[]
[]
2,625
2,626
Gradient Alignment for Cross-Domain Face Anti-Spoofing
http://arxiv.org/abs/2402.18817
Binh M. Le, Simon S. Woo
2402.18817
Recent advancements in domain generalization (DG) for face anti-spoofing (FAS) have garnered considerable attention. Traditional methods have focused on designing learning objectives and additional modules to isolate domain-specific features while retaining domain-invariant characteristics in their representations. However such approaches often lack guarantees of consistent maintenance of domain-invariant features or the complete removal of domain-specific features. Furthermore most prior works of DG for FAS do not ensure convergence to a local flat minimum which has been shown to be advantageous for DG. In this paper we introduce GAC-FAS a novel learning objective that encourages the model to converge towards an optimal flat minimum without necessitating additional learning modules. Unlike conventional sharpness-aware minimizers GAC-FAS identifies ascending points for each domain and regulates the generalization gradient updates at these points to align coherently with empirical risk minimization (ERM) gradient updates. This unique approach specifically guides the model to be robust against domain shifts. We demonstrate the efficacy of GAC-FAS through rigorous testing on challenging cross-domain FAS datasets where it establishes state-of-the-art performance.
[]
[]
[]
[]
2,626
2,627
Physics-guided Shape-from-Template: Monocular Video Perception through Neural Surrogate Models
David Stotko, Nils Wandel, Reinhard Klein
null
3D reconstruction of dynamic scenes is a long-standing problem in computer graphics and increasingly difficult the less information is available. Shape-from-Template (SfT) methods aim to reconstruct a template-based geometry from RGB images or video sequences often leveraging just a single monocular camera without depth information such as regular smartphone recordings. Unfortunately existing reconstruction methods are either unphysical and noisy or slow in optimization. To solve this problem we propose a novel SfT reconstruction algorithm for cloth using a pre-trained neural surrogate model that is fast to evaluate stable and produces smooth reconstructions due to a regularizing physics simulation. Differentiable rendering of the simulated mesh enables pixel-wise comparisons between the reconstruction and a target video sequence that can be used for a gradient-based optimization procedure to extract not only shape information but also physical parameters such as stretching shearing or bending stiffness of the cloth. This allows to retain a precise stable and smooth reconstructed geometry while reducing the runtime by a factor of 400-500 compared to φ-SfT a state-of-the-art physics-based SfT approach.
[]
[]
[]
[]
2,627
2,628
S2MVTC: a Simple yet Efficient Scalable Multi-View Tensor Clustering
Zhen Long, Qiyuan Wang, Yazhou Ren, Yipeng Liu, Ce Zhu
null
Anchor-based large-scale multi-view clustering has attracted considerable attention for its effectiveness in handling massive datasets. However current methods mainly seek the consensus embedding feature for clustering by exploring global correlations between anchor graphs or projection matrices. In this paper we propose a simple yet efficient scalable multi-view tensor clustering (S2MVTC) approach where our focus is on learning correlations of embedding features within and across views. Specifically we first construct the embedding feature tensor by stacking the embedding features of different views into a tensor and rotating it. Additionally we build a novel tensor low-frequency approximation (TLFA) operator which incorporates graph similarity into embedding feature learning efficiently achieving smooth representation of embedding features within different views. Furthermore consensus constraints are applied to embedding features to ensure inter-view semantic consistency. Experimental results on six large-scale multi-view datasets demonstrate that S2MVTC significantly outperforms state-of-the-art algorithms in terms of clustering performance and CPU execution time especially when handling massive data. The code of S2MVTC is publicly available at https://github.com/longzhen520/S2MVTC.
[]
[]
[]
[]
2,628
2,629
OpticalDR: A Deep Optical Imaging Model for Privacy-Protective Depression Recognition
http://arxiv.org/abs/2402.18786
Yuchen Pan, Junjun Jiang, Kui Jiang, Zhihao Wu, Keyuan Yu, Xianming Liu
2402.18786
Depression Recognition (DR) poses a considerable challenge especially in the context of the growing concerns surrounding privacy. Traditional automatic DR diagnosis technology necessitates the use of facial images which undoubtedly exposes patient identity features and poses privacy risks. In order to mitigate the potential risks associated with the inappropriate disclosure of patient facial images we design a new imaging system to erase the identity information of captured facial images while retaining disease-relevant features. This erasure is irreversible with respect to identity recovery while preserving the essential disease-related characteristics necessary for accurate DR. More specifically we try to record a de-identified facial image (erasing the identifiable features as much as possible) by a learnable lens which is optimized in conjunction with the following DR task as well as a range of face analysis related auxiliary tasks in an end-to-end manner. These aforementioned strategies form our final Optical deep Depression Recognition network (OpticalDR). Experiments on CelebA AVEC 2013 and AVEC 2014 datasets demonstrate that our OpticalDR has achieved state-of-the-art privacy protection performance with an average AUC of 0.51 on popular facial recognition models and competitive results for DR with MAE/RMSE of 7.53/8.48 on AVEC 2013 and 7.89/8.82 on AVEC 2014 respectively. Code is available at https://github.com/divertingPan/OpticalDR.
[]
[]
[]
[]
2,629
2,630
Observation-Guided Diffusion Probabilistic Models
http://arxiv.org/abs/2310.04041
Junoh Kang, Jinyoung Choi, Sungik Choi, Bohyung Han
2310.04041
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM) which effectively addresses the tradeoff between quality control and fast sampling. Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain in a principled way. This is achieved by introducing an additional loss term derived from the observation based on a conditional discriminator on noise level which employs a Bernoulli distribution indicating whether its input lies on the (noisy) real manifold or not. This strategy allows us to optimize the more accurate negative log-likelihood induced in the inference stage especially when the number of function evaluations is limited. The proposed training scheme is also advantageous even when incorporated only into the fine-tuning process and it is compatible with various fast inference strategies since our method yields better denoising networks using exactly the same inference procedure without incurring extra computational cost. We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines. Our implementation is available at https://github.com/Junoh-Kang/OGDM_edm.
[]
[]
[]
[]
2,630
2,631
You'll Never Walk Alone: A Sketch and Text Duet for Fine-Grained Image Retrieval
Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
null
Two primary input modalities prevail in image retrieval: sketch and text. While text is widely used for inter-category retrieval tasks sketches have been established as the sole preferred modality for fine-grained image retrieval due to their ability to capture intricate visual details. In this paper we question the reliance on sketches alone for fine-grained image retrieval by simultaneously exploring the fine-grained representation capabilities of both sketch and text orchestrating a duet between the two. The end result enables precise retrievals previously unattainable allowing users to pose ever-finer queries and incorporate attributes like colour and contextual cues from text. For this purpose we introduce a novel compositionality framework effectively combining sketches and text using pre-trained CLIP models while eliminating the need for extensive fine-grained textual descriptions. Last but not least our system extends to novel applications in composed image retrieval domain attribute transfer and fine-grained generation providing solutions for various real-world scenarios.
[]
[]
[]
[]
2,631
2,632
Spatial-Aware Regression for Keypoint Localization
Dongkai Wang, Shiliang Zhang
null
Regression-based keypoint localization shows advantages of high efficiency and better robustness to quantization errors than heatmap-based methods. However existing regression-based methods discard the spatial location prior in input image with a global pooling leading to inferior accuracy and are limited to single instance localization tasks. We study the regression-based keypoint localization from a new perspective by leveraging the spatial location prior. Instead of regressing on the pooled feature the proposed Spatial-Aware Regression (SAR) maintains the spatial location map and outputs spatial coordinates and confidence score for each grid which are optimized with a unified objective. Benefited by the location prior these spatial-aware outputs can be efficiently optimized resulting in better localization performance. Moreover incorporating spatial prior makes SAR more general and can be applied into various keypoint localization tasks. We test the proposed method in 4 keypoint localization tasks including single/multi-person 2D/3D pose estimation and the whole-body pose estimation. Extensive experiments demonstrate its promising performance e.g. consistently outperforming recent regression-based methods.
[]
[]
[]
[]
2,632
2,633
S2MAE: A Spatial-Spectral Pretraining Foundation Model for Spectral Remote Sensing Data
Xuyang Li, Danfeng Hong, Jocelyn Chanussot
null
In the expansive domain of computer vision a myriad of pre-trained models are at our disposal. However most of these models are designed for natural RGB images and prove inadequate for spectral remote sensing (RS) images. Spectral RS images have two main traits: (1) multiple bands capturing diverse feature information (2) spatial alignment and consistent spectral sequencing within the spatial-spectral dimension. In this paper we introduce Spatial-SpectralMAE (S2MAE) a specialized pre-trained architecture for spectral RS imagery. S2MAE employs a 3D transformer for masked autoencoder modeling integrating learnable spectral-spatial embeddings with a 90% masking ratio. The model efficiently captures local spectral consistency and spatial invariance using compact cube tokens demonstrating versatility to diverse input characteristics. This adaptability facilitates progressive pretraining on extensive spectral datasets. The effectiveness of S2MAE is validated through continuous pretraining on two sizable datasets totaling over a million training images. The pre-trained model is subsequently applied to three distinct downstream tasks with in-depth ablation studies conducted to emphasize its efficacy.
[]
[]
[]
[]
2,633
2,634
EFormer: Enhanced Transformer towards Semantic-Contour Features of Foreground for Portraits Matting
http://arxiv.org/abs/2308.12831
Zitao Wang, Qiguang Miao, Yue Xi, Peipei Zhao
2308.12831
The portrait matting task aims to extract an alpha matte with complete semantics and finely detailed contours. In comparison to CNN-based approaches transformers with self-attention module have a better capacity to capture long-range dependencies and low-frequency semantic information of a portrait. However recent research shows that the self-attention mechanism struggles with modeling high-frequency contour information and capturing fine contour details which can lead to bias while predicting the portrait's contours. To deal with this issue we propose EFormer to enhance the model's attention towards both the low-frequency semantic and high-frequency contour features. For the high-frequency contours our research demonstrates that cross-attention module between different resolutions can guide our model to allocate attention appropriately to these contour regions. Supported by this we can successfully extract the high-frequency detail information around the portrait's contours which were previously ignored by self-attention. Based on the cross-attention module we further build a semantic and contour detector (SCD) to accurately capture both the low-frequency semantic and high-frequency contour features. And we design a contour-edge extraction branch and semantic extraction branch to extract refined high-frequency contour features and complete low-frequency semantic information respectively. Finally we fuse the two kinds of features and leverage the segmentation head to generate a predicted portrait matte. Experiments on VideoMatte240K (JPEG SD Format) and Adobe Image Matting (AIM) datasets demonstrate that EFormer outperforms previous portrait matte methods.
[]
[]
[]
[]
2,634
2,635
MultiPly: Reconstruction of Multiple People from Monocular Video in the Wild
Zeren Jiang, Chen Guo, Manuel Kaufmann, Tianjian Jiang, Julien Valentin, Otmar Hilliges, Jie Song
null
We present MultiPly a novel framework to reconstruct multiple people in 3D from monocular in-the-wild videos. Reconstructing multiple individuals moving and interacting naturally from monocular in-the-wild videos poses a challenging task. Addressing it necessitates precise pixel-level disentanglement of individuals without any prior knowledge about the subjects. Moreover it requires recovering intricate and complete 3D human shapes from short video sequences intensifying the level of difficulty. To tackle these challenges we first define a layered neural representation for the entire scene composited by individual human and background models. We learn the layered neural representation from videos via our layer-wise differentiable volume rendering. This learning process is further enhanced by our hybrid instance segmentation approach which combines the self-supervised 3D segmentation and the promptable 2D segmentation module yielding reliable instance segmentation supervision even under close human interaction. A confidence-guided optimization formulation is introduced to optimize the human poses and shape/appearance alternately. We incorporate effective objectives to refine human poses via photometric information and impose physically plausible constraints on human dynamics leading to temporally consistent 3D reconstructions with high fidelity. The evaluation of our method shows the superiority over prior art on publicly available datasets and in-the-wild videos.
[]
[]
[]
[]
2,635
2,636
Unsupervised 3D Structure Inference from Category-Specific Image Collections
Weikang Wang, Dongliang Cao, Florian Bernard
null
Understanding 3D object structure from image collections of general object categories remains a long-standing challenge in computer vision. Due to the high relevance of image keypoints (e.g. for graph matching controlling generative models scene understanding etc.) in this work we specifically focus on inferring 3D structure in terms of sparse keypoints. Existing 3D keypoint inference approaches rely on strong priors such as spatio-temporal consistency multi-view images of the same object 3D shape priors (e.g. templates skeleton) or supervisory signals e.g. in the form of 2D keypoint annotations. In contrast we propose the first unsupervised 3D keypoint inference approach that can be trained for general object categories solely from an inhomogeneous image collection (containing different instances of objects from the same category). Our experiments show that our method not only improves upon unsupervised 2D keypoint inference but more importantly it also produces reasonable 3D structure for various object categories both qualitatively and quantitatively.
[]
[]
[]
[]
2,636
2,637
DiG-IN: Diffusion Guidance for Investigating Networks - Uncovering Classifier Differences Neuron Visualisations and Visual Counterfactual Explanations
Maximilian Augustin, Yannic Neuhaus, Matthias Hein
null
While deep learning has led to huge progress in complex image classification tasks like ImageNet unexpected failure modes e.g. via spurious features call into question how reliably these classifiers work in the wild. Furthermore for safety-critical tasks the black-box nature of their decisions is problematic and explanations or at least methods which make decisions plausible are needed urgently. In this paper we address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation. We analyze the decisions of image classifiers by visual counterfactual explanations (VCEs) detection of systematic mistakes by analyzing images where classifiers maximally disagree and visualization of neurons and spurious features. In this way we validate existing observations e.g. the shape bias of adversarially robust models as well as novel failure modes e.g. systematic errors of zero-shot CLIP classifiers. Moreover our VCEs outperform previous work while being more versatile.
[]
[]
[]
[]
2,637
2,638
RepViT: Revisiting Mobile CNN From ViT Perspective
http://arxiv.org/abs/2307.09283
Ao Wang, Hui Chen, Zijia Lin, Jungong Han, Guiguang Ding
2307.09283
Recently lightweight Vision Transformers (ViTs) demonstrate superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs) on resource-constrained mobile devices. Researchers have discovered many structural connections between lightweight ViTs and lightweight CNNs. However the notable architectural disparities in the block structure macro and micro designs between them have not been adequately examined. In this study we revisit the efficient design of lightweight CNNs from ViT perspective and emphasize their promising prospect for mobile devices. Specifically we incrementally enhance the mobile-friendliness of a standard lightweight CNN i.e. MobileNetV3 by integrating the efficient architectural designs of lightweight ViTs. This ends up with a new family of pure lightweight CNNs namely RepViT. Extensive experiments show that RepViT outperforms existing state-of-the-art lightweight ViTs and exhibits favorable latency in various vision tasks. Notably on ImageNet RepViT achieves over 80% top-1 accuracy with 1.0 ms latency on an iPhone 12 which is the first time for a lightweight model to the best of our knowledge. Besides when RepViT meets SAM our RepViT-SAM can achieve nearly 10x faster inference than the advanced MobileSAM. Codes and models are available at https://github.com/THU-MIG/RepViT.
[]
[]
[]
[]
2,638
2,639
MonoNPHM: Dynamic Head Reconstruction from Monocular Videos
http://arxiv.org/abs/2312.06740
Simon Giebenhain, Tobias Kirschstein, Markos Georgopoulos, Martin Rünz, Lourdes Agapito, Matthias Nießner
2,312.0674
We present Monocular Neural Parametric Head Models (MonoNPHM) for dynamic 3D head reconstructions from monocular RGB videos. To this end we propose a latent appearance space that parameterizes a texture field on top of a neural parametric model. We constrain predicted color values to be correlated with the underlying geometry such that gradients from RGB effectively influence latent geometry codes during inverse rendering. To increase the representational capacity of our expression space we augment our backward deformation field with hyper-dimensions thus improving color and geometry representation in topologically challenging expressions. Using MonoNPHM as a learned prior we approach the task of 3D head reconstruction using signed distance field based volumetric rendering. By numerically inverting our backward deformation field we incorporate a facial landmark loss that ties anchor points closely related to our canonical geometry representation to observed 2D facial landmarks in posed space. To evaluate the task of dynamic face reconstruction from monocular RGB videos we record 20 challenging Kinect sequences under casual conditions. MonoNPHM outperforms all baselines with a significant margin and makes an important step towards easily accessible neural parametric face models through RGB tracking.
[]
[]
[]
[]
2,639
2,640
Realigning Confidence with Temporal Saliency Information for Point-Level Weakly-Supervised Temporal Action Localization
Ziying Xia, Jian Cheng, Siyu Liu, Yongxiang Hu, Shiguang Wang, Yijie Zhang, Liwan Dang
null
Point-level weakly-supervised temporal action localization (P-TAL) aims to localize action instances in untrimmed videos through the use of single-point annotations in each instance. Existing methods predict the class activation sequences without any boundary information and the unreliable sequences result in a significant misalignment between the quality of proposals and their corresponding confidence. In this paper we surprisingly observe that the most salient frame tends to appear in the central region of each instance and is easily annotated by humans. Guided by the temporal saliency information we present a novel proposal-level plug-in framework to relearn the aligned confidence of proposals generated by the base locators. The proposed approach consists of Center Score Learning (CSL) and Alignment-based Boundary Adaptation (ABA). In CSL we design a novel center label generated by the point annotations for predicting aligned center scores. During inference we first fuse the center scores with the predicted action probabilities to obtain the aligned confidence. ABA utilizes both the aligned confidence and IoU information to enhance localization completeness. Extensive experiments demonstrate the generalization and effectiveness of the proposed framework showcasing state-of-the-art or competitive performances across three benchmarks. Our code is available at https://github.com/zyxia1009/CVPR2024-TSPNet.
[]
[]
[]
[]
2,640
2,641
ConsistNet: Enforcing 3D Consistency for Multi-view Images Diffusion
http://arxiv.org/abs/2310.10343
Jiayu Yang, Ziang Cheng, Yunfei Duan, Pan Ji, Hongdong Li
2,310.10343
Given a single image of a 3D object this paper proposes a novel method (named ConsistNet) that can generate multiple images of the same object as if they are captured from different viewpoints while the 3D (multi-view) consistencies among those multiple generated images are effectively exploited. Central to our method is a lightweight multi-view consistency block that enables information exchange across multiple single-view diffusion processes based on the underlying multi-view geometry principles. ConsistNet is an extension to the standard latent diffusion model and it consists of two submodules: (a) a view aggregation module that unprojects multi-view features into global 3D volumes and infers consistency and (b) a ray aggregation module that samples and aggregates 3D consistent features back to each view to enforce consistency. Our approach departs from previous methods in multi-view image generation in that it can be easily dropped in pre-trained LDMs without requiring explicit pixel correspondences or depth prediction. Experiments show that our method effectively learns 3D consistency over a frozen Zero123-XL backbone and can generate 16 surrounding views of the object within 11 seconds on a single A100 GPU.
[]
[]
[]
[]
2,641
2,642
GenN2N: Generative NeRF2NeRF Translation
http://arxiv.org/abs/2404.02788
Xiangyue Liu, Han Xue, Kunming Luo, Ping Tan, Li Yi
2,404.02788
We present GenN2N a unified NeRF-to-NeRF translation framework for various NeRF translation tasks such as text-driven NeRF editing colorization super-resolution inpainting etc. Unlike previous methods designed for individual translation tasks with task-specific schemes GenN2N achieves all these NeRF editing tasks by employing a plug-and-play image-to-image translator to perform editing in the 2D domain and lifting 2D edits into the 3D NeRF space. Since the 3D consistency of 2D edits may not be assured we propose to model the distribution of the underlying 3D edits through a generative model that can cover all possible edited NeRFs. To model the distribution of 3D edited NeRFs from 2D edited images we carefully design a VAE-GAN that encodes images while decoding NeRFs. The latent space is trained to align with a Gaussian distribution and the NeRFs are supervised through an adversarial loss on its renderings. To ensure the latent code does not depend on 2D viewpoints but truly reflects the 3D edits we also regularize the latent code through a contrastive learning scheme. Extensive experiments on various editing tasks show GenN2N as a universal framework performs as well or better than task-specific specialists while possessing flexible generative power. More results on our project page: https://xiangyueliu.github.io/GenN2N/.
[]
[]
[]
[]
2,642
2,643
Theoretically Achieving Continuous Representation of Oriented Bounding Boxes
http://arxiv.org/abs/2402.18975
Zikai Xiao, Guoye Yang, Xue Yang, Taijiang Mu, Junchi Yan, Shimin Hu
2,402.18975
Considerable efforts have been devoted to Oriented Object Detection (OOD). However one lasting issue regarding the discontinuity in Oriented Bounding Box (OBB) representation remains unresolved which is an inherent bottleneck for extant OOD methods. This paper endeavors to completely solve this issue in a theoretically guaranteed manner and puts an end to the ad-hoc efforts in this direction. Prior studies typically can only address one of the two cases of discontinuity: rotation and aspect ratio and often inadvertently introduce decoding discontinuity e.g. Decoding Incompleteness (DI) and Decoding Ambiguity (DA) as discussed in literature. Specifically we propose a novel representation method called Continuous OBB (COBB) which can be readily integrated into existing detectors e.g. Faster-RCNN as a plugin. It can theoretically ensure continuity in bounding box regression which to our best knowledge has not been achieved in literature for rectangle-based object representation. For fairness and transparency of experiments we have developed a modularized benchmark based on the open-source deep learning framework Jittor's detection toolbox JDet for OOD evaluation. On the popular DOTA dataset by integrating Faster-RCNN as the same baseline model our new method outperforms the peer method Gliding Vertex by 1.13% mAP50 (relative improvement 1.54%) and 2.46% mAP75 (relative improvement 5.91%) without any tricks.
[]
[]
[]
[]
2,643
2,644
Universal Robustness via Median Randomized Smoothing for Real-World Super-Resolution
http://arxiv.org/abs/2405.14934
Zakariya Chaouai, Mohamed Tamaazousti
2,405.14934
Most of the recent literature on image Super-Resolution (SR) can be classified into two main approaches. The first one involves learning a corruption model tailored to a specific dataset aiming to mimic the noise and corruption in low-resolution images such as sensor noise. However this approach is data-specific tends to lack adaptability and its accuracy diminishes when faced with unseen types of image corruptions. A second and more recent approach referred to as Robust Super-Resolution (RSR) proposes to improve real-world SR by harnessing the generalization capabilities of a model by making it robust to adversarial attacks. To delve further into this second approach our paper explores the universality of various methods for enhancing the robustness of deep learning SR models. In other words we inquire: "Which robustness method exhibits the highest degree of adaptability when dealing with a wide range of adversarial attacks?". Our extensive experimentation on both synthetic and real-world images empirically demonstrates that median randomized smoothing (MRS) is more general in terms of robustness compared to adversarial learning techniques which tend to focus on specific types of attacks. Furthermore as expected we also illustrate that the proposed universal robust method enables the SR model to handle standard corruptions more effectively such as blur and Gaussian noise and notably corruptions naturally present in real-world images. These results support the significance of shifting the paradigm in the development of real-world SR methods towards RSR especially via MRS.
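To make the median randomized smoothing idea concrete, below is a minimal, hedged sketch of applying MRS at inference time: the low-resolution input is perturbed with Gaussian noise several times, the SR model is run on each copy, and the per-pixel median of the outputs is returned. The `sr_model` callable, the noise level `sigma`, and the number of samples are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of median randomized smoothing (MRS) for a super-resolution model.
import numpy as np

rng = np.random.default_rng(0)

def mrs_super_resolve(sr_model, lr_image, n_samples=16, sigma=0.05):
    """Run the SR model on several Gaussian-perturbed copies of the input and
    return the per-pixel median of the outputs."""
    outs = []
    for _ in range(n_samples):
        noisy = lr_image + sigma * rng.standard_normal(lr_image.shape)
        outs.append(sr_model(noisy))
    return np.median(np.stack(outs, axis=0), axis=0)

# toy usage with a stand-in "SR model" (nearest-neighbour 2x upsampling)
def toy_sr(x):
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

lr = rng.random((32, 32))
hr = mrs_super_resolve(toy_sr, lr)
print(hr.shape)  # (64, 64)
```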
[]
[]
[]
[]
2,644
2,645
One-dimensional Adapter to Rule Them All: Concepts Diffusion Models and Erasing Applications
http://arxiv.org/abs/2312.16145
Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, Guiguang Ding
2,312.16145
The prevalent use of commercial and open-source diffusion models (DMs) for text-to-image generation prompts risk mitigation to prevent undesired behaviors. Existing concept erasing methods in academia are all based on full parameter or specification-based fine-tuning from which we observe the following issues: 1) Generation alteration towards erosion: Parameter drift during target elimination causes alterations and potential deformations across all generations even eroding other concepts at varying degrees which is more evident with multi-concept erased; 2) Transfer inability & deployment inefficiency: Previous model-specific erasure impedes the flexible combination of concepts and the training-free transfer towards other models resulting in linear cost growth as the deployment scenarios increase. To achieve non-invasive precise customizable and transferable elimination we ground our erasing framework on one-dimensional adapters to erase multiple concepts from most DMs at once across versatile erasing applications. The concept-SemiPermeable structure is injected as a Membrane (SPM) into any DM to learn targeted erasing and meantime the alteration and erosion phenomenon is effectively mitigated via a novel Latent Anchoring fine-tuning strategy. Once obtained SPMs can be flexibly combined and plug-and-play for other DMs without specific re-tuning enabling timely and efficient adaptation to diverse scenarios. During generation our Facilitated Transport mechanism dynamically regulates the permeability of each SPM to respond to different input prompts further minimizing the impact on other concepts. Quantitative and qualitative results across 40 concepts 7 DMs and 4 erasing applications have demonstrated the superior erasing of SPM. Our code and pre-tuned SPMs are available on the project page https://lyumengyao.github.io/projects/spm.
[]
[]
[]
[]
2,645
2,646
Learning Large-Factor EM Image Super-Resolution with Generative Priors
Jiateng Shou, Zeyu Xiao, Shiyu Deng, Wei Huang, Peiyao Shi, Ruobing Zhang, Zhiwei Xiong, Feng Wu
null
As the mainstream technique for capturing images of biological specimens at nanometer resolution electron microscopy (EM) is extremely time-consuming for scanning wide field-of-view (FOV) specimens. In this paper we investigate a challenging task of large-factor EM image super-resolution (EMSR) which holds great promise for reducing scanning time relaxing acquisition conditions and expanding imaging FOV. By exploiting the repetitive structures and volumetric coherence of EM images we propose the first generative learning-based framework for large-factor EMSR. Specifically motivated by the predictability of repetitive structures and textures in EM images we first learn a discrete codebook in the latent space to represent high-resolution (HR) cell-specific priors and a latent vector indexer to map low-resolution (LR) EM images to their corresponding latent vectors in a generative manner. By incorporating the generative cell-specific priors from HR EM images through a multi-scale prior fusion module we then deploy multi-image feature alignment and fusion to further exploit the inter-section coherence in the volumetric EM data. Extensive experiments demonstrate that our proposed framework outperforms advanced single-image and video super-resolution methods for 8x and 16x EMSR (i.e. with 64 times and 256 times less data acquired respectively) achieving superior visual reconstruction quality and downstream segmentation accuracy on benchmark EM datasets. Code is available at https://github.com/jtshou/GPEMSR.
[]
[]
[]
[]
2,646
2,647
DIMAT: Decentralized Iterative Merging-And-Training for Deep Learning Models
http://arxiv.org/abs/2404.08079
Nastaran Saadati, Minh Pham, Nasla Saleem, Joshua R. Waite, Aditya Balu, Zhanong Jiang, Chinmay Hegde, Soumik Sarkar
2,404.08079
Recent advances in decentralized deep learning algorithms have demonstrated cutting-edge performance on various tasks with large pre-trained models. However a pivotal prerequisite for achieving this level of competitiveness is the significant communication and computation overheads when updating these models which prohibits their application to real-world scenarios. To address this issue drawing inspiration from advanced model merging techniques without requiring additional training we introduce the Decentralized Iterative Merging-And-Training (DIMAT) paradigm--a novel decentralized deep learning framework. Within DIMAT each agent is trained on its local data and periodically merged with its neighboring agents using advanced model merging techniques like activation matching until convergence is achieved. DIMAT provably converges with the best available rate for nonconvex functions with various first-order methods while yielding tighter error bounds compared to the popular existing approaches. We conduct a comprehensive empirical analysis to validate DIMAT's superiority over baselines across diverse computer vision tasks sourced from multiple datasets. Empirical results validate our theoretical claims by showing that DIMAT attains faster and higher initial gain in accuracy with independent and identically distributed (IID) and non-IID data incurring lower communication overhead. This DIMAT paradigm presents a new opportunity for future decentralized learning enhancing its adaptability to real-world scenarios with sparse and light-weight communication and computation.
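As a rough illustration of the merge step in such a decentralized merging-and-training loop, the sketch below averages each agent's parameters with those of its graph neighbours between local training rounds. Plain parameter averaging stands in for the advanced merging techniques (e.g. activation matching) mentioned above, and the ring topology and tensor shapes are illustrative assumptions.

```python
# Hedged sketch of a neighbour-merge step in a decentralized merge-and-train loop.
import torch

def merge_with_neighbors(params, neighbors):
    """params: list of per-agent state dicts; neighbors: adjacency lists.
    Each agent averages its parameters with its neighbours' (including itself)."""
    merged = []
    for i, p in enumerate(params):
        group = [i] + neighbors[i]
        merged.append({
            k: torch.stack([params[j][k] for j in group]).mean(dim=0)
            for k in p
        })
    return merged

# toy usage: 4 agents on a ring, one weight matrix each
params = [{"w": torch.randn(8, 8)} for _ in range(4)]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
params = merge_with_neighbors(params, ring)
print(params[0]["w"].shape)
```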
[]
[]
[]
[]
2,647
2,648
MMA: Multi-Modal Adapter for Vision-Language Models
Lingxiao Yang, Ru-Yuan Zhang, Yanchen Wang, Xiaohua Xie
null
Pre-trained Vision-Language Models (VLMs) have served as excellent foundation models for transfer learning in diverse downstream tasks. However tuning VLMs for few-shot generalization tasks faces a discrimination -- generalization dilemma i.e. general knowledge should be preserved and task-specific knowledge should be fine-tuned. How to precisely identify these two types of representations remains a challenge. In this paper we propose a Multi-Modal Adapter (MMA) for VLMs to improve the alignment between representations from text and vision branches. MMA aggregates features from different branches into a shared feature space so that gradients can be communicated across branches. To determine how to incorporate MMA we systematically analyze the discriminability and generalizability of features across diverse datasets in both the vision and language branches and find that (1) higher layers contain discriminable dataset-specific knowledge while lower layers contain more generalizable knowledge and (2) language features are more discriminable than visual features and there are large semantic gaps between the features of the two modalities especially in the lower layers. Therefore we only incorporate MMA to a few higher layers of transformers to achieve an optimal balance between discrimination and generalization. We evaluate the effectiveness of our approach on three tasks: generalization to novel classes novel target datasets and domain generalization. Compared to many state-of-the-art methods our MMA achieves leading performance in all evaluations. Code is at https://github.com/ZjjConan/Multi-Modal-Adapter
[]
[]
[]
[]
2,648
2,649
Kandinsky Conformal Prediction: Efficient Calibration of Image Segmentation Algorithms
http://arxiv.org/abs/2311.11837
Joren Brunekreef, Eric Marcus, Ray Sheombarsing, Jan-Jakob Sonke, Jonas Teuwen
2,311.11837
Image segmentation algorithms can be understood as a collection of pixel classifiers for which the outcomes of nearby pixels are correlated. Classifier models can be calibrated using Inductive Conformal Prediction but this requires holding back a sufficiently large calibration dataset for computing the distribution of non-conformity scores of the model's predictions. If one requires only marginal calibration on the image level this calibration set consists of all individual pixels in the images available for calibration. However if the goal is to attain proper calibration for each individual pixel classifier the calibration set consists of individual images. In a scenario where data are scarce (such as the medical domain) it may not always be possible to set aside sufficiently many images for this pixel-level calibration. The method we propose dubbed "Kandinsky calibration" makes use of the spatial structure present in the distribution of natural images to simultaneously calibrate the classifiers of "similar" pixels. This can be seen as an intermediate approach between marginal (imagewise) and conditional (pixelwise) calibration where non-conformity scores are aggregated over similar image regions thereby making more efficient use of the images available for calibration. We run experiments on segmentation algorithms trained and calibrated on subsets of the public MS-COCO and Medical Decathlon datasets demonstrating that the Kandinsky calibration method can significantly improve coverage. When compared to both pixelwise and imagewise calibration on little data the Kandinsky method achieves much lower coverage errors indicating the data efficiency of the Kandinsky calibration.
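For readers unfamiliar with the underlying machinery, the sketch below shows the basic inductive conformal prediction step that such calibration builds on: non-conformity scores are computed on a held-out calibration set and thresholded at a corrected quantile to form prediction sets. The score choice (one minus the true-class probability) and the miscoverage level `alpha` are illustrative assumptions; the Kandinsky aggregation over similar image regions itself is not shown.

```python
# Hedged sketch of plain inductive conformal prediction for (pixel) classifiers.
import numpy as np

rng = np.random.default_rng(0)

def calibrate_threshold(cal_probs, cal_labels, alpha=0.1):
    """Non-conformity score = 1 - probability assigned to the true class;
    return the finite-sample-corrected (1 - alpha) quantile of the scores."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(scores)[k - 1]

def prediction_sets(test_probs, q):
    """A class is included whenever its score 1 - p does not exceed the threshold."""
    return (1.0 - test_probs) <= q

# toy usage with 3-class softmax outputs for calibration and test pixels
cal_probs = rng.dirichlet(np.ones(3), size=500)
cal_labels = rng.integers(0, 3, size=500)
q = calibrate_threshold(cal_probs, cal_labels)
print(prediction_sets(rng.dirichlet(np.ones(3), size=5), q))
```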
[]
[]
[]
[]
2,649
2,650
Diversity-aware Channel Pruning for StyleGAN Compression
http://arxiv.org/abs/2403.13548
Jiwoo Chung, Sangeek Hyun, Sang-Heon Shim, Jae-Pil Heo
2,403.13548
StyleGAN has shown remarkable performance in unconditional image generation. However its high computational cost poses a significant challenge for practical applications. Although recent efforts have been made to compress StyleGAN while preserving its performance existing compressed models still lag behind the original model particularly in terms of sample diversity. To overcome this we propose a novel channel pruning method that leverages varying sensitivities of channels to latent vectors which is a key factor in sample diversity. Specifically by assessing channel importance based on their sensitivities to latent vector perturbations our method enhances the diversity of samples in the compressed model. Since our method solely focuses on the channel pruning stage it has complementary benefits with prior training schemes without additional training cost. Extensive experiments demonstrate that our method significantly enhances sample diversity across various datasets. Moreover in terms of FID scores our method not only surpasses state-of-the-art by a large margin but also achieves comparable scores with only half training iterations.
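A rough sketch of the core scoring idea above, assessing how strongly each channel reacts to latent-vector perturbations, is given below. A plain linear layer stands in for a StyleGAN synthesis layer, and the perturbation scale, number of latent pairs, and finite-difference scoring are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch of scoring channels by their sensitivity to latent perturbations.
import torch

def channel_sensitivity(layer, z, eps=0.05, n_pairs=8):
    """Score each output channel by how much its activation changes
    when the latent input is slightly perturbed."""
    scores = torch.zeros(layer.out_features)
    with torch.no_grad():
        for _ in range(n_pairs):
            dz = eps * torch.randn_like(z)
            a0, a1 = layer(z), layer(z + dz)
            scores += (a1 - a0).abs().mean(dim=0)   # average over the batch
    return scores / n_pairs

layer = torch.nn.Linear(512, 256)          # stand-in for a StyleGAN synthesis layer
z = torch.randn(16, 512)
scores = channel_sensitivity(layer, z)
keep = torch.topk(scores, k=128).indices   # e.g. keep the most sensitive half
print(keep.shape)
```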
[]
[]
[]
[]
2,650
2,651
BioCLIP: A Vision Foundation Model for the Tree of Life
http://arxiv.org/abs/2311.18803
Samuel Stevens, Jiaman Wu, Matthew J Thompson, Elizabeth G Campolongo, Chan Hee Song, David Edward Carlyn, Li Dong, Wasila M Dahdul, Charles Stewart, Tanya Berger-Wolf, Wei-Lun Chao, Yu Su
2,311.18803
Images of the natural world collected by a variety of cameras from drones to individual phones are increasingly abundant sources of biological information. There is an explosion of computational methods and tools particularly computer vision for extracting biologically relevant information from images for science and conservation. Yet most of these are bespoke approaches designed for a specific task and are not easily adaptable or extendable to new questions contexts and datasets. A vision model for general organismal biology questions on images is of timely need. To approach this we curate and release TreeOfLife-10M the largest and most diverse ML-ready dataset of biology images. We then develop BioCLIP a foundation model for the tree of life leveraging the unique properties of biology captured by TreeOfLife-10M namely the abundance and variety of images of plants animals and fungi together with the availability of rich structured biological knowledge. We rigorously benchmark our approach on diverse fine-grained biology classification tasks and find that BioCLIP consistently and substantially outperforms existing baselines (by 17% to 20% absolute). Intrinsic evaluation reveals that BioCLIP has learned a hierarchical representation conforming to the tree of life shedding light on its strong generalizability. All data code and models will be publicly released upon acceptance.
[]
[]
[]
[]
2,651
2,652
From Pixels to Graphs: Open-Vocabulary Scene Graph Generation with Vision-Language Models
http://arxiv.org/abs/2404.00906
Rongjie Li, Songyang Zhang, Dahua Lin, Kai Chen, Xuming He
2,404.00906
Scene graph generation (SGG) aims to parse a visual scene into an intermediate graph representation for downstream reasoning tasks. Despite recent advancements existing methods struggle to generate scene graphs with novel visual relation concepts. To address this challenge we introduce a new open-vocabulary SGG framework based on sequence generation. Our framework leverages vision-language pre-trained models (VLM) by incorporating an image-to-graph generation paradigm. Specifically we generate scene graph sequences via image-to-text generation with VLM and then construct scene graphs from these sequences. By doing so we harness the strong capabilities of VLM for open-vocabulary SGG and seamlessly integrate explicit relational modeling for enhancing the VL tasks. Experimental results demonstrate that our design not only achieves superior performance with an open vocabulary but also enhances downstream vision-language task performance through explicit relation modeling knowledge.
[]
[]
[]
[]
2,652
2,653
Deep Imbalanced Regression via Hierarchical Classification Adjustment
http://arxiv.org/abs/2310.17154
Haipeng Xiong, Angela Yao
2,310.17154
Regression tasks in computer vision such as age estimation or counting are often formulated into classification by quantizing the target space into classes. Yet real-world data is often imbalanced -- the majority of training samples lie in a head range of target values while a minority of samples span a usually larger tail range. By selecting the class quantization one can adjust imbalanced regression targets into balanced classification outputs though there are trade-offs in balancing classification accuracy and quantization error. To improve regression performance over the entire range of data we propose to construct hierarchical classifiers for solving imbalanced regression tasks. The fine-grained classifiers limit the quantization error while being modulated by the coarse predictions to ensure high accuracy. Standard hierarchical classification approaches when applied to the regression problem fail to ensure that predicted ranges remain consistent across the hierarchy. As such we propose a range-preserving distillation process that effectively learns a single classifier from the set of hierarchical classifiers. Our novel hierarchical classification adjustment (HCA) for imbalanced regression shows superior results on three diverse tasks: age estimation crowd counting and depth estimation. Code is available at https://github.com/xhp-hust-2018-2011/HCA.
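The sketch below illustrates only the basic quantization step underlying such regression-as-classification approaches: a continuous target is mapped to class labels at a coarse and a fine granularity, and a fine class is decoded back to a value via its bin centre. The value range, bin counts, and uniform binning are illustrative assumptions; the range-preserving distillation itself is not shown.

```python
# Hedged sketch of hierarchical quantization of regression targets.
import numpy as np

def to_hierarchical_labels(y, lo=0.0, hi=100.0, coarse_bins=5, fine_bins=50):
    """Quantize continuous targets at a coarse and a fine granularity."""
    coarse = np.clip(((y - lo) / (hi - lo) * coarse_bins).astype(int), 0, coarse_bins - 1)
    fine = np.clip(((y - lo) / (hi - lo) * fine_bins).astype(int), 0, fine_bins - 1)
    return coarse, fine

def decode_fine(fine_class, lo=0.0, hi=100.0, fine_bins=50):
    """Map a fine class index back to the centre of its bin."""
    width = (hi - lo) / fine_bins
    return lo + (fine_class + 0.5) * width

y = np.array([3.2, 47.9, 95.0])            # e.g. ages
coarse, fine = to_hierarchical_labels(y)
print(coarse, fine, decode_fine(fine))
```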
[]
[]
[]
[]
2,653
2,654
Adaptive Fusion of Single-View and Multi-View Depth for Autonomous Driving
http://arxiv.org/abs/2403.07535
Junda Cheng, Wei Yin, Kaixuan Wang, Xiaozhi Chen, Shijie Wang, Xin Yang
2,403.07535
Multi-view depth estimation has achieved impressive performance over various benchmarks. However almost all current multi-view systems rely on given ideal camera poses which are unavailable in many real-world scenarios such as autonomous driving. In this work we propose a new robustness benchmark to evaluate the depth estimation system under various noisy pose settings. Surprisingly we find current multi-view depth estimation methods or single-view and multi-view fusion methods will fail when given noisy pose settings. To address this challenge we propose a single-view and multi-view fused depth estimation system which adaptively integrates high-confident multi-view and single-view results for both robust and accurate depth estimations. The adaptive fusion module performs fusion by dynamically selecting high-confidence regions between two branches based on a wrapping confidence map. Thus the system tends to choose the more reliable branch when facing textureless scenes inaccurate calibration dynamic objects and other degradation or challenging conditions. Our method outperforms state-of-the-art multi-view and fusion methods under robustness testing. Furthermore we achieve state-of-the-art performance on challenging benchmarks (KITTI and DDAD) when given accurate pose estimations. Project website: https://github.com/Junda24/AFNet/
[]
[]
[]
[]
2,654
2,655
Neural Clustering based Visual Representation Learning
http://arxiv.org/abs/2403.17409
Guikun Chen, Xia Li, Yi Yang, Wenguan Wang
2,403.17409
We investigate a fundamental aspect of machine vision: the measurement of features by revisiting clustering one of the most classic approaches in machine learning and data analysis. Existing visual feature extractors including ConvNets ViTs and MLPs represent an image as rectangular regions. Though prevalent such a grid-style paradigm is built upon engineering practice and lacks explicit modeling of data distribution. In this work we propose feature extraction with clustering (FEC) a conceptually elegant yet surprisingly ad-hoc interpretable neural clustering framework which views feature extraction as a process of selecting representatives from data and thus automatically captures the underlying data distribution. Given an image FEC alternates between grouping pixels into individual clusters to abstract representatives and updating the deep features of pixels with current representatives. Such an iterative working mechanism is implemented in the form of several neural layers and the final representatives can be used for downstream tasks. The cluster assignments across layers which can be viewed and inspected by humans make the forward process of FEC fully transparent and empower it with promising ad-hoc interpretability. Extensive experiments on various visual recognition models and tasks verify the effectiveness generality and interpretability of FEC. We expect this work will provoke a rethink of the current de facto grid-style paradigm.
[]
[]
[]
[]
2,655
2,656
Continual Self-supervised Learning: Towards Universal Multi-modal Medical Data Representation Learning
http://arxiv.org/abs/2311.17597
Yiwen Ye, Yutong Xie, Jianpeng Zhang, Ziyang Chen, Qi Wu, Yong Xia
2,311.17597
Self-supervised learning (SSL) is an efficient pre-training method for medical image analysis. However current research is mostly confined to certain modalities consuming considerable time and resources without achieving universality across different modalities. A straightforward solution is combining all modality data for joint SSL which poses practical challenges. Firstly our experiments reveal conflicts in representation learning as the number of modalities increases. Secondly multi-modal data collected in advance cannot cover all real-world scenarios. In this paper we reconsider versatile SSL from the perspective of continual learning and propose MedCoSS a continuous SSL approach for multi-modal medical data. Different from joint representation learning MedCoSS assigns varying data modalities to separate training stages creating a multi-stage pre-training process. We propose a rehearsal-based continual learning approach to manage modal conflicts and prevent catastrophic forgetting. Specifically we use the k-means sampling to retain and rehearse previous modality data during new modality learning. Moreover we apply feature distillation and intra-modal mixup on buffer data for knowledge retention bypassing pretext tasks. We conduct experiments on a large-scale multi-modal unlabeled dataset including clinical reports X-rays CT MRI and pathological images. Experimental results demonstrate MedCoSS's exceptional generalization ability across 9 downstream datasets and its significant scalability in integrating new modality data. The code and pre-trained model are available at https://github.com/yeerwen/MedCoSS.
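To illustrate the rehearsal-buffer idea above, the sketch below clusters feature vectors of previously seen data with a tiny k-means and keeps the sample nearest to each centroid for replay. The feature dimensionality, buffer size, and this particular nearest-to-centroid selection are illustrative assumptions, not the paper's exact sampling rule.

```python
# Hedged sketch of k-means-based rehearsal sampling for a replay buffer.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)   # (N, k)
        assign = d.argmin(axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = X[assign == c].mean(axis=0)
    return centers

def select_buffer(features, k):
    """Return indices of the samples closest to each cluster centre."""
    centers = kmeans(features, k)
    d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
    return np.unique(d.argmin(axis=0))

feats = rng.standard_normal((1000, 64))   # features of previous-modality samples
buffer_idx = select_buffer(feats, k=32)
print(buffer_idx.shape)
```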
[]
[]
[]
[]
2,656
2,657
Sparse Semi-DETR: Sparse Learnable Queries for Semi-Supervised Object Detection
Tahira Shehzadi, Khurram Azeem Hashmi, Didier Stricker, Muhammad Zeshan Afzal
null
In this paper we address the limitations of the DETR-based semi-supervised object detection (SSOD) framework particularly focusing on the challenges posed by the quality of object queries. In DETR-based SSOD the one-to-one assignment strategy provides inaccurate pseudo-labels while the one-to-many assignment strategy leads to overlapping predictions. These issues compromise training efficiency and degrade model performance especially in detecting small or occluded objects. We introduce Sparse Semi-DETR a novel transformer-based end-to-end semi-supervised object detection solution to overcome these challenges. Sparse Semi-DETR incorporates a Query Refinement Module to enhance the quality of object queries significantly improving detection capabilities for small and partially obscured objects. Additionally we integrate a Reliable Pseudo-Label Filtering Module that selectively filters high-quality pseudo-labels thereby enhancing detection accuracy and consistency. On the MS-COCO and Pascal VOC object detection benchmarks Sparse Semi-DETR achieves a significant improvement over current state-of-the-art methods highlighting Sparse Semi-DETR's effectiveness in semi-supervised object detection particularly in challenging scenarios involving small or partially obscured objects.
[]
[]
[]
[]
2,657
2,658
Towards Efficient Replay in Federated Incremental Learning
http://arxiv.org/abs/2403.05890
Yichen Li, Qunwei Li, Haozhao Wang, Ruixuan Li, Wenliang Zhong, Guannan Zhang
2,403.0589
In Federated Learning (FL) the data in each client is typically assumed fixed or static. However data often comes in an incremental manner in real-world applications where the data domain may increase dynamically. In this work we study catastrophic forgetting with data heterogeneity in Federated Incremental Learning (FIL) scenarios where edge clients may lack enough storage space to retain full data. We propose to employ a simple generic framework for FIL named Re-Fed which can coordinate each client to cache important samples for replay. More specifically when a new task arrives each client first caches selected previous samples based on their global and local importance. Then the client trains the local model with both the cached samples and the samples from the new task. Theoretically we analyze the ability of Re-Fed to discover important samples for replay thus alleviating the catastrophic forgetting problem. Moreover we empirically show that Re-Fed achieves competitive performance compared to state-of-the-art methods.
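A minimal sketch of the client-side caching step is given below: previous-task samples are ranked by an importance score and the top ones are kept for replay alongside the new task's data. Using a single per-sample score array and a fixed budget is an illustrative simplification of the paper's combination of global and local importance.

```python
# Hedged sketch of a client-side replay cache in the spirit of importance-based rehearsal.
import numpy as np

rng = np.random.default_rng(0)

def select_cache(samples, importance, budget):
    """Keep the `budget` samples with the largest importance scores."""
    order = np.argsort(importance)[::-1]
    return [samples[i] for i in order[:budget]]

# toy usage: 100 old samples, cache the 10 most "important", mix with new task data
old_samples = [f"old_{i}" for i in range(100)]
importance = rng.random(100)                  # e.g. loss of the current local model
cache = select_cache(old_samples, importance, budget=10)
new_task = [f"new_{i}" for i in range(50)]
train_set = cache + new_task                  # local training set for the new task
print(len(train_set))
```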
[]
[]
[]
[]
2,658
2,659
SimAC: A Simple Anti-Customization Method for Protecting Face Privacy against Text-to-Image Synthesis of Diffusion Models
http://arxiv.org/abs/2312.07865
Feifei Wang, Zhentao Tan, Tianyi Wei, Yue Wu, Qidong Huang
2,312.07865
Despite the success of diffusion-based customization methods on visual content creation increasing concerns have been raised about such techniques from both privacy and political perspectives. To tackle this issue several anti-customization methods have been proposed in very recent months predominantly grounded in adversarial attacks. Unfortunately most of these methods adopt straightforward designs such as end-to-end optimization with a focus on adversarially maximizing the original training loss thereby neglecting nuanced internal properties intrinsic to the diffusion model and even leading to ineffective optimization in some diffusion time steps. In this paper we strive to bridge this gap by undertaking a comprehensive exploration of these inherent properties to boost the performance of current anti-customization approaches. Two aspects of properties are investigated: 1) We examine the relationship between time step selection and the model's perception in the frequency domain of images and find that lower time steps can give much more contributions to adversarial noises. This inspires us to propose an adaptive greedy search for optimal time steps that seamlessly integrates with existing anti-customization methods. 2) We scrutinize the roles of features at different layers during denoising and devise a sophisticated feature-based optimization framework for anti-customization. Experiments on facial benchmarks demonstrate that our approach significantly increases identity disruption thereby protecting user privacy and copyright. Our code is available at: https://github.com/somuchtome/SimAC.
[]
[]
[]
[]
2,659
2,660
Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction
Xiaoyang Lyu, Chirui Chang, Peng Dai, Yang-Tian Sun, Xiaojuan Qi
null
Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics. Recent neural implicit surface reconstruction methods have achieved high-quality results; however editing and manipulating the 3D geometry of reconstructed scenes remains challenging due to the absence of naturally decomposed object entities and complex object/background compositions. In this paper we present Total-Decom a novel method for decomposed 3D reconstruction with minimal human interaction. Our approach seamlessly integrates the Segment Anything Model (SAM) with hybrid implicit-explicit neural surface representations and a mesh-based region-growing technique for accurate 3D object decomposition. Total-Decom requires minimal human annotations while providing users with real-time control over the granularity and quality of decomposition. We extensively evaluate our method on benchmark datasets and demonstrate its potential for downstream applications such as animation and scene editing.
[]
[]
[]
[]
2,660
2,661
Accelerating Neural Field Training via Soft Mining
http://arxiv.org/abs/2312.00075
Shakiba Kheradmand, Daniel Rebain, Gopal Sharma, Hossam Isack, Abhishek Kar, Andrea Tagliasacchi, Kwang Moo Yi
2,312.00075
We present an approach to accelerate Neural Field training by efficiently selecting sampling locations. While Neural Fields have recently become popular it is often trained by uniformly sampling the training domain or through handcrafted heuristics. We show that improved convergence and final training quality can be achieved by a soft mining technique based on importance sampling: rather than either considering or ignoring a pixel completely we weigh the corresponding loss by a scalar. To implement our idea we use Langevin Monte-Carlo sampling. We show that by doing so regions with higher error are being selected more frequently leading to more than 2x improvement in convergence speed. The code and related resources for this study are publicly available at https://ubc-vision.github.io/nf-soft-mining/.
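Below is a minimal sketch of the soft-mining idea, assuming a running per-pixel error map is available: pixels are sampled from a mixture of an error-proportional distribution and a uniform one, and their losses are importance-weighted so the training objective is not biased toward the mined regions. The mixture coefficient and the use of a static error map (rather than Langevin Monte-Carlo updates) are simplifying assumptions.

```python
# Hedged sketch of soft mining via importance-weighted pixel sampling.
import numpy as np

rng = np.random.default_rng(0)

def soft_mine_batch(error_map, batch_size, alpha=0.5):
    """Sample pixel indices with probability proportional to their running error,
    mixed with a uniform distribution, and return importance weights that keep
    the loss estimate unbiased with respect to uniform sampling."""
    n = error_map.size
    p = alpha * error_map.ravel() / error_map.sum() + (1.0 - alpha) / n
    p = p / p.sum()                                # guard against rounding error
    idx = rng.choice(n, size=batch_size, p=p)
    weights = 1.0 / (n * p[idx])                   # importance weights vs. uniform
    return idx, weights

# toy usage: higher-error pixels are drawn more often but down-weighted in the loss
error_map = rng.random((64, 64)) ** 2
idx, w = soft_mine_batch(error_map, batch_size=1024)
per_pixel_loss = rng.random(idx.shape)             # stand-in for the rendering loss
print(np.mean(w * per_pixel_loss))
```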
[]
[]
[]
[]
2,661
2,662
Ensemble Diversity Facilitates Adversarial Transferability
Bowen Tang, Zheng Wang, Yi Bin, Qi Dou, Yang Yang, Heng Tao Shen
null
With the advent of ensemble-based attacks the transferability of generated adversarial examples is elevated by a noticeable margin even though many methods employ only superficial integration and ignore the diversity between ensemble models. However most of them compromise the latent value of the diversity between perturbations generated by distinct models which we argue is also able to increase adversarial transferability especially for heterogeneous attacks. To address the issues we propose a novel method of Stochastic Mini-batch black-box attack with Ensemble Reweighing using reinforcement learning (SMER) to produce highly transferable adversarial examples. We emphasize the diversity between surrogate models achieving individual perturbation iteratively. In order to customize the individual effect between surrogates ensemble reweighing is introduced to refine ensemble weights by maximizing attack loss based on reinforcement learning which ultimately elevates transferability. Extensive experiments demonstrate our superiority to recent ensemble attacks with a significant margin across different black-box attack scenarios especially on heterogeneous conditions.
[]
[]
[]
[]
2,662
2,663
Fair-VPT: Fair Visual Prompt Tuning for Image Classification
Sungho Park, Hyeran Byun
null
Despite the remarkable success of Vision Transformers (ViT) across diverse fields in computer vision they have a clear drawback of expensive adaption cost for downstream tasks due to the increased scale. To address this Visual Prompt Tuning (VPT) incorporates learnable parameters in the input space of ViT. While freezing the ViT backbone and tuning only the prompts it exhibits superior performances to full fine-tuning. However despite the outstanding advantage we point out that VPT may lead to serious unfairness in downstream classification. Initially we investigate the causes of unfairness in VPT identifying the biasedly pre-trained ViT as a principal factor. Motivated by this observation we propose a Fair Visual Prompt Tuning (Fair-VPT) which removes biased information in the pre-trained ViT while adapting it to downstream classification tasks. To this end we categorize prompts into "cleaner prompts" and "target prompts". Based on this we encode the class token in two different ways by either masking or not masking the target prompts in the self-attention process. These encoded tokens are trained with distinct objective functions resulting in the inclusion of different information in the target and cleaner prompts. Moreover we introduce a disentanglement loss based on contrastive learning to further decorrelate them. In experiments across diverse benchmarks the proposed method demonstrates the most superior performance in terms of balanced classification accuracy and fairness.
[]
[]
[]
[]
2,663
2,664
Uncertainty-Aware Source-Free Adaptive Image Super-Resolution with Wavelet Augmentation Transformer
http://arxiv.org/abs/2303.17783
Yuang Ai, Xiaoqiang Zhou, Huaibo Huang, Lei Zhang, Ran He
2,303.17783
Unsupervised Domain Adaptation (UDA) can effectively address domain gap issues in real-world image Super-Resolution (SR) by accessing both the source and target data. Considering privacy policies or transmission restrictions of source data in practical scenarios we propose a SOurce-free Domain Adaptation framework for image SR (SODA-SR) to address this issue i.e. adapt a source-trained model to a target domain with only unlabeled target data. SODA-SR leverages the source-trained model to generate refined pseudo-labels for teacher-student learning. To better utilize pseudo-labels we propose a novel wavelet-based augmentation method named Wavelet Augmentation Transformer (WAT) which can be flexibly incorporated with existing networks to implicitly produce useful augmented data. WAT learns low-frequency information of varying levels across diverse samples which is aggregated efficiently via deformable attention. Furthermore an uncertainty-aware self-training mechanism is proposed to improve the accuracy of pseudo-labels with inaccurate predictions being rectified by uncertainty estimation. To acquire better SR results and avoid overfitting pseudo-labels several regularization losses are proposed to constrain target LR and SR images in the frequency domain. Experiments show that without accessing source data SODA-SR outperforms state-of-the-art UDA methods in both synthetic->real and real->real adaptation settings and is not constrained by specific network architectures.
[]
[]
[]
[]
2,664
2,665
Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling
Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang, Pedro Miraldo, Suhas Lohit, Moitreya Chatterjee
null
Extensions of Neural Radiance Fields (NeRFs) to model dynamic scenes have enabled their near photo-realistic free-viewpoint rendering. Although these methods have shown some potential in creating immersive experiences two drawbacks limit their ubiquity: (i) a significant reduction in reconstruction quality when the computing budget is limited and (ii) a lack of semantic understanding of the underlying scenes. To address these issues we introduce Gear-NeRF which leverages semantic information from powerful image segmentation models. Our approach presents a principled way for learning a spatio-temporal (4D) semantic embedding based on which we introduce the concept of gears to allow for stratified modeling of dynamic regions of the scene based on the extent of their motion. Such differentiation allows us to adjust the spatio-temporal sampling resolution for each region in proportion to its motion scale achieving more photo-realistic dynamic novel view synthesis. At the same time almost for free our approach enables free-viewpoint tracking of objects of interest -- a functionality not yet achieved by existing NeRF-based methods. Empirical studies validate the effectiveness of our method where we achieve state-of-the-art rendering and tracking performance on multiple challenging datasets. The project page is available at: https://merl.com/research/highlights/gear-nerf.
[]
[]
[]
[]
2,665
2,666
CaDeT: a Causal Disentanglement Approach for Robust Trajectory Prediction in Autonomous Driving
Mozhgan Pourkeshavarz, Junrui Zhang, Amir Rasouli
null
For safe motion planning in the real world autonomous vehicles require behavior prediction models that are reliable and robust to distribution shifts. Recent studies suggest that existing learning-based trajectory prediction models do not possess such characteristics and are susceptible to small perturbations that are not present in the training data largely due to overfitting to spurious correlations while learning. In this paper we propose a causal disentanglement representation learning approach aiming to separate invariant (causal) and variant (spurious) features for more robust learning. Our method benefits from a novel intervention mechanism in the latent space that estimates potential distribution shifts resulting from spurious correlations using uncertain feature statistics hence maintaining the realism of interventions. To facilitate learning we propose a novel invariance objective based on the variances of the distributions over uncertain statistics to induce the model to focus on invariant representations during training. We conduct extensive experiments on two large-scale autonomous driving datasets and show that besides achieving state-of-the-art performance our method can significantly improve prediction robustness to various distribution shifts in driving scenes. We further conduct ablative studies to evaluate the design choices in our proposed framework.
[]
[]
[]
[]
2,666
2,667
Spacetime Gaussian Feature Splatting for Real-Time Dynamic View Synthesis
http://arxiv.org/abs/2312.16812
Zhan Li, Zhang Chen, Zhong Li, Yi Xu
2,312.16812
Novel view synthesis of dynamic scenes has been an intriguing yet challenging problem. Despite recent advancements simultaneously achieving high-resolution photorealistic results real-time rendering and compact storage remains a formidable task. To address these challenges we propose Spacetime Gaussian Feature Splatting as a novel dynamic scene representation composed of three pivotal components. First we formulate expressive Spacetime Gaussians by enhancing 3D Gaussians with temporal opacity and parametric motion/rotation. This enables Spacetime Gaussians to capture static dynamic as well as transient content within a scene. Second we introduce splatted feature rendering which replaces spherical harmonics with neural features. These features facilitate the modeling of view- and time-dependent appearance while maintaining small size. Third we leverage the guidance of training error and coarse depth to sample new Gaussians in areas that are challenging to converge with existing pipelines. Experiments on several established real-world datasets demonstrate that our method achieves state-of-the-art rendering quality and speed while retaining compact storage. At 8K resolution our lite-version model can render at 60 FPS on an Nvidia RTX 4090 GPU.
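As a loose illustration of the first component above, the sketch below evaluates a "spacetime" Gaussian at time t: a 3D Gaussian whose opacity is modulated by a temporal Gaussian window and whose centre follows a simple polynomial motion. The exact parameterization in the paper differs; the polynomial degree and the window form used here are illustrative assumptions.

```python
# Hedged sketch of evaluating one spacetime Gaussian's temporal opacity and motion.
import numpy as np

def spacetime_gaussian(t, base_opacity, t_center, t_sigma, motion_coeffs):
    """Return (opacity(t), center(t)) for one Gaussian.
    motion_coeffs: (K+1, 3) polynomial coefficients for the 3D centre."""
    temporal = np.exp(-0.5 * ((t - t_center) / t_sigma) ** 2)   # temporal opacity window
    opacity_t = base_opacity * temporal
    powers = np.array([t ** k for k in range(motion_coeffs.shape[0])])
    center_t = powers @ motion_coeffs                            # polynomial motion
    return opacity_t, center_t

coeffs = np.array([[0.0, 0.0, 0.0],   # constant term
                   [1.0, 0.0, 0.0],   # linear motion along x
                   [0.0, 0.5, 0.0]])  # quadratic motion along y
print(spacetime_gaussian(0.3, 0.9, t_center=0.5, t_sigma=0.2, motion_coeffs=coeffs))
```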
[]
[]
[]
[]
2,667
2,668
Instruct-Imagen: Image Generation with Multi-modal Instruction
Hexiang Hu, Kelvin C.K. Chan, Yu-Chuan Su, Wenhu Chen, Yandong Li, Kihyuk Sohn, Yang Zhao, Xue Ben, Boqing Gong, William Cohen, Ming-Wei Chang, Xuhui Jia
null
This paper presents Instruct-Imagen a model that tackles heterogeneous image generation tasks and generalizes across unseen tasks. We introduce multi-modal instruction for image generation a task representation articulating a range of generation intents with precision. It uses natural language to amalgamate disparate modalities (e.g. text edge style subject etc.) such that abundant generation intents can be standardized in a uniform format. We then build Instruct-Imagen by fine-tuning a pre-trained text-to-image diffusion model with two stages. First we adapt the model using the retrieval-augmented training to enhance the model's capabilities to ground its generation on external multi-modal context. Subsequently we fine-tune the adapted model on diverse image generation tasks that require vision-language understanding (e.g. subject-driven generation etc.) each paired with a multi-modal instruction encapsulating the task's essence. Human evaluation on various image generation datasets reveals that Instruct-Imagen matches or surpasses prior task-specific models in-domain and demonstrates promising generalization to unseen and more complex tasks. Our evaluation suite will be made publicly available.
[]
[]
[]
[]
2,668
2,669
Prompting Vision Foundation Models for Pathology Image Analysis
Chong Yin, Siqi Liu, Kaiyang Zhou, Vincent Wai-Sun Wong, Pong C. Yuen
null
The rapid increase in cases of non-alcoholic fatty liver disease (NAFLD) in recent years has raised significant public concern. Accurately identifying tissue alteration regions is crucial for the diagnosis of NAFLD but this task presents challenges in pathology image analysis particularly with small-scale datasets. Recently the paradigm shift from full fine-tuning to prompting in adapting vision foundation models has offered a new perspective for small-scale data analysis. However existing prompting methods based on task-agnostic prompts are mainly developed for generic image recognition which fall short in providing instructive cues for complex pathology images. In this paper we propose Quantitative Attribute-based Prompting (QAP) a novel prompting method specifically for liver pathology image analysis. QAP is based on two quantitative attributes namely K-function-based spatial attributes and histogram-based morphological attributes which are aimed for quantitative assessment of tissue states. Moreover a conditional prompt generator is designed to turn these instance-specific attributes into visual prompts. Extensive experiments on three diverse tasks demonstrate that our task-specific prompting method achieves better diagnostic performance as well as better interpretability. Code is available at https://github.com/7LFB/QAP.
[]
[]
[]
[]
2,669
2,670
Rethinking Few-shot 3D Point Cloud Semantic Segmentation
http://arxiv.org/abs/2403.00592
Zhaochong An, Guolei Sun, Yun Liu, Fayao Liu, Zongwei Wu, Dan Wang, Luc Van Gool, Serge Belongie
2,403.00592
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS) with a focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution. The former arises from non-uniform point sampling allowing models to distinguish the density disparities between foreground and background for easier segmentation. The latter results from sampling only 2048 points limiting semantic information and deviating from the real-world practice. To address these issues we introduce a standardized FS-PCS setting upon which a new benchmark is built. Moreover we propose a novel FS-PCS model. While previous methods are based on feature optimization by mainly refining support features to enhance prototypes our method is based on correlation optimization referred to as Correlation Optimization Segmentation (COSeg). Specifically we compute Class-specific Multi-prototypical Correlation (CMC) for each query point representing its correlations to category prototypes. Then we propose the Hyper Correlation Augmentation (HCA) module to enhance CMC. Furthermore tackling the inherent property of few-shot training to incur base susceptibility for models we propose to learn non-parametric prototypes for the base classes during training. The learned base prototypes are used to calibrate correlations for the background class through a Base Prototypes Calibration (BPC) module. Experiments on popular datasets demonstrate the superiority of COSeg over existing methods. The code is available at github.com/ZhaochongAn/COSeg.
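As a rough illustration of what a class-specific multi-prototypical correlation looks like computationally, the sketch below computes cosine similarities between every query-point feature and every prototype of every class. The shapes and the use of plain cosine similarity are illustrative assumptions; the HCA and BPC modules are not modelled.

```python
# Hedged sketch of a class-specific multi-prototypical correlation.
import numpy as np

def multi_proto_correlation(query_feats, prototypes):
    """query_feats: (N, D); prototypes: (C, K, D) -> correlations of shape (N, C, K)."""
    q = query_feats / np.linalg.norm(query_feats, axis=-1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    return np.einsum("nd,ckd->nck", q, p)   # cosine similarity per point, class, prototype

rng = np.random.default_rng(0)
corr = multi_proto_correlation(rng.standard_normal((2048, 32)),
                               rng.standard_normal((3, 5, 32)))
print(corr.shape)  # (2048, 3, 5)
```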
[]
[]
[]
[]
2,670
2,671
SEED-Bench: Benchmarking Multimodal Large Language Models
Bohao Li, Yuying Ge, Yixiao Ge, Guangzhi Wang, Rui Wang, Ruimao Zhang, Ying Shan
null
Multimodal large language models (MLLMs) building upon the foundation of powerful large language models (LLMs) have recently demonstrated exceptional capabilities in generating not only texts but also images given interleaved multimodal inputs (acting like a combination of GPT-4V and DALL-E 3). However existing MLLM benchmarks remain limited to assessing only models' comprehension ability of single image-text inputs failing to keep up with the strides made in MLLMs. A comprehensive benchmark is imperative for investigating the progress and uncovering the limitations of current MLLMs. In this work we categorize the capabilities of MLLMs into hierarchical levels from L_0 to L_4 based on the modalities they can accept and generate and propose SEED-Bench a comprehensive benchmark that evaluates the hierarchical capabilities of MLLMs. Specifically SEED-Bench comprises 24K multiple-choice questions with accurate human annotations which spans 27 dimensions including the evaluation of both text and image generation. Multiple-choice questions with groundtruth options derived from human annotation enables an objective and efficient assessment of model performance eliminating the need for human or GPT intervention during evaluation. We further evaluate the performance of 22 prominent open-source MLLMs and summarize valuable observations. By revealing the limitations of existing MLLMs through extensive evaluations we aim for SEED-Bench to provide insights that will motivate future research towards the goal of General Artificial Intelligence.
[]
[]
[]
[]
2,671
2,672
BrainWash: A Poisoning Attack to Forget in Continual Learning
http://arxiv.org/abs/2311.11995
Ali Abbasi, Parsa Nooralinejad, Hamed Pirsiavash, Soheil Kolouri
2,311.11995
Continual learning has gained substantial attention within the deep learning community offering promising solutions to the challenging problem of sequential learning. Yet a largely unexplored facet of this paradigm is its susceptibility to adversarial attacks especially with the aim of inducing forgetting. In this paper we introduce "BrainWash" a novel data poisoning method tailored to impose forgetting on a continual learner. By adding the BrainWash noise to a variety of baselines we demonstrate how a trained continual learner can be induced to forget its previously learned tasks catastrophically even when using these continual learning baselines. An important feature of our approach is that the attacker requires no access to previous tasks' data and is armed merely with the model's current parameters and the data belonging to the most recent task. Our extensive experiments highlight the efficacy of BrainWash showcasing degradation in performance across various regularization and memory replay-based continual learning methods. Our code is available here: https://github.com/mint-vu/Brainwash
[]
[]
[]
[]
2,672
2,673
GreedyViG: Dynamic Axial Graph Construction for Efficient Vision GNNs
http://arxiv.org/abs/2405.06849
Mustafa Munir, William Avery, Md Mostafijur Rahman, Radu Marculescu
2,405.06849
Vision graph neural networks (ViG) offer a new avenue for exploration in computer vision. A major bottleneck in ViGs is the inefficient k-nearest neighbor (KNN) operation used for graph construction. To solve this issue we propose a new method for designing ViGs Dynamic Axial Graph Construction (DAGC) which is more efficient than KNN as it limits the number of considered graph connections made within an image. Additionally we propose a novel CNN-GNN architecture GreedyViG which uses DAGC. Extensive experiments show that GreedyViG beats existing ViG CNN and ViT architectures in terms of accuracy GMACs and parameters on image classification object detection instance segmentation and semantic segmentation tasks. Our smallest model GreedyViG-S achieves 81.1% top-1 accuracy on ImageNet-1K 2.9% higher than Vision GNN and 2.2% higher than Vision HyperGraph Neural Network (ViHGNN) with less GMACs and a similar number of parameters. Our largest model GreedyViG-B obtains 83.9% top-1 accuracy 0.2% higher than Vision GNN with a 66.6% decrease in parameters and a 69% decrease in GMACs. GreedyViG-B also obtains the same accuracy as ViHGNN with a 67.3% decrease in parameters and a 71.3% decrease in GMACs. Our work shows that hybrid CNN-GNN architectures not only provide a new avenue for designing efficient models but that they can also exceed the performance of current state-of-the-art models.
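To give a feel for why axial graph construction is cheaper than a dense KNN search, the sketch below enumerates the candidate neighbours of one grid node when connections are restricted to its row and column at a fixed stride. This fixed axial pattern is only an illustration; the actual DAGC is dynamic and differs from it.

```python
# Hedged sketch contrasting dense KNN candidates with a restricted axial candidate set.
def axial_candidates(i, j, H, W, stride=2):
    """Return axial neighbour coordinates of patch (i, j) on an H x W grid:
    only nodes in the same row or column, sampled at the given stride."""
    cands = [(i, c) for c in range(0, W, stride) if c != j]
    cands += [(r, j) for r in range(0, H, stride) if r != i]
    return cands

H, W = 14, 14
print(len(axial_candidates(7, 7, H, W)))   # 14 axial candidates per node
print(H * W - 1)                           # 195 candidates for a dense KNN search
```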
[]
[]
[]
[]
2,673
2,674
Relightable and Animatable Neural Avatar from Sparse-View Video
http://arxiv.org/abs/2308.07903
Zhen Xu, Sida Peng, Chen Geng, Linzhan Mou, Zihan Yan, Jiaming Sun, Hujun Bao, Xiaowei Zhou
2,308.07903
This paper tackles the problem of creating relightable and animatable neural avatars from sparse-view (or monocular) videos of dynamic humans under unknown illumination. Previous neural human reconstruction methods produce animatable avatars from sparse views using deformed Signed Distance Fields (SDF) but are non-relightable. While differentiable inverse rendering methods have succeeded in the material recovery of static objects it is not straightforward to extend them to dynamic humans since it is computationally intensive to compute pixel-surface intersection and light visibility on deformed SDFs for relighting. To solve this challenge we propose a Hierarchical Distance Query (HDQ) algorithm to approximate the world space SDF under arbitrary human poses. Specifically we estimate coarse SDF based on a parametric human model and compute fine SDF by exploiting the invariance of SDF w.r.t. local deformation. Based on HDQ we leverage sphere tracing to efficiently estimate the surface intersection and light visibility. This allows us to develop the first system to recover relightable and animatable neural avatars from sparse or monocular inputs. Experiments show that our approach produces superior results compared to state-of-the-art methods. Our project page is available at https://zju3dv.github.io/relightable_avatar.
[]
[]
[]
[]
2,674
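As a concrete illustration of the sphere-tracing step mentioned in the abstract above, here is a minimal, generic sketch: rays are marched forward by whatever distance a signed-distance callable reports until the distance falls below a threshold. In the paper's setting that callable would be supplied by the Hierarchical Distance Query (a coarse SDF from a parametric body model refined locally); the function names, defaults, and termination criteria below are illustrative assumptions, not the authors' implementation.

```python
import torch

def sphere_trace(sdf, origins: torch.Tensor, dirs: torch.Tensor,
                 max_steps: int = 64, eps: float = 1e-3, far: float = 5.0):
    """origins, dirs: (N, 3) rays; sdf: callable (N, 3) -> (N,). Returns (t, hit)."""
    t = torch.zeros(origins.shape[0], device=origins.device)
    hit = torch.zeros_like(t, dtype=torch.bool)
    for _ in range(max_steps):
        pts = origins + t.unsqueeze(-1) * dirs            # current sample points
        d = sdf(pts)                                      # queried signed distance
        hit = hit | (d.abs() < eps)                       # converged rays
        active = (~hit) & (t < far)                       # still marching, inside volume
        t = torch.where(active, t + d, t)                 # step forward by the SDF value
    return t, hit
```

Any SDF works as the callable for testing, e.g. a unit sphere via `lambda p: p.norm(dim=-1) - 1.0`; the paper's HDQ would simply replace it.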
2,675
FreePoint: Unsupervised Point Cloud Instance Segmentation
http://arxiv.org/abs/2305.06973
Zhikai Zhang, Jian Ding, Li Jiang, Dengxin Dai, Guisong Xia
2,305.06973
Instance segmentation of point clouds is a crucial task in 3D field with numerous applications that involve localizing and segmenting objects in a scene. However achieving satisfactory results requires a large number of manual annotations which is time-consuming and expensive. To alleviate dependency on annotations we propose a novel framework FreePoint for underexplored unsupervised class-agnostic instance segmentation on point clouds. In detail we represent the point features by combining coordinates colors and self-supervised deep features. Based on the point features we perform a bottom-up multicut algorithm to segment point clouds into coarse instance masks as pseudo labels which are used to train a point cloud instance segmentation model. We propose an id-as-feature strategy at this stage to alleviate the randomness of the multicut algorithm and improve the pseudo labels' quality. During training we propose a weakly-supervised two-step training strategy and corresponding losses to overcome the inaccuracy of coarse masks. FreePoint has achieved breakthroughs in unsupervised class-agnostic instance segmentation on point clouds and outperformed previous traditional methods by over 18.2% and a competitive concurrent work UnScene3D by 5.5% in AP. Additionally when used as a pretext task and fine-tuned on S3DIS FreePoint performs significantly better than existing self-supervised pre-training methods with limited annotations and surpasses CSC by 6.0% in AP with 10% annotation masks. Code will be released at https://github.com/zzk273/FreePoint.
[]
[]
[]
[]
2,675
2,676
Pose Adapted Shape Learning for Large-Pose Face Reenactment
Gee-Sern Jison Hsu, Jie-Ying Zhang, Huang Yu Hsiang, Wei-Jie Hong
null
We propose the Pose Adapted Shape Learning (PASL) for large-pose face reenactment. The PASL framework consists of three modules namely the Pose-Adapted face Encoder (PAE) the Cycle-consistent Shape Generator (CSG) and the Attention-Embedded Generator (AEG). Different from previous approaches that use a single face encoder for identity preservation we propose multiple Pose-Adapted face Encoders (PAEs) to better preserve facial identity across large poses. Given a source face and a reference face the CSG generates a recomposed shape that fuses the source identity and reference action in the shape space and meets the cycle consistency requirement. Taking the shape code and the source as inputs the AEG learns the attention within the shape code and between the shape code and source style to enhance the generation of the desired target face. As existing benchmark datasets are inappropriate for evaluating large-pose face reenactment we propose a scheme to compose large-pose face pairs and introduce the MPIE-LP (Large Pose) and VoxCeleb2-LP datasets as the new large-pose benchmarks. We compared our approach with state-of-the-art methods on MPIE-LP and VoxCeleb2-LP for large-pose performance and on VoxCeleb1 for the common scope of pose variation.
[]
[]
[]
[]
2,676
2,677
Object Pose Estimation via the Aggregation of Diffusion Features
http://arxiv.org/abs/2403.18791
Tianfu Wang, Guosheng Hu, Hongguang Wang
2,403.18791
Estimating the pose of objects from images is a crucial task of 3D scene understanding and recent approaches have shown promising results on very large benchmarks. However these methods experience a significant performance drop when dealing with unseen objects. We believe this results from the limited generalizability of image features. To address this problem we conduct an in-depth analysis of the features of diffusion models e.g. Stable Diffusion which hold substantial potential for modeling unseen objects. Based on this analysis we then innovatively introduce these diffusion features for object pose estimation. To achieve this we propose three distinct architectures that can effectively capture and aggregate diffusion features of different granularity greatly improving the generalizability of object pose estimation. Our approach outperforms the state-of-the-art methods by a considerable margin on three popular benchmark datasets LM O-LM and T-LESS. In particular our method achieves higher accuracy than the previous best methods on unseen objects: 98.2% vs. 93.5% on Unseen LM 85.9% vs. 76.3% on Unseen O-LM showing the strong generalizability of our method. Our code is released at https://github.com/Tianfu18/diff-feats-pose.
[]
[]
[]
[]
2,677
2,678
Circuit Design and Efficient Simulation of Quantum Inner Product and Empirical Studies of Its Effect on Near-Term Hybrid Quantum-Classic Machine Learning
Hao Xiong, Yehui Tang, Xinyu Ye, Junchi Yan
null
For the essential operation namely inner product (IP) as widely adopted in classic computing e.g. matrix multiplication its quantum counterpart: quantum inner product (QIP) has also been recently theoretically explored with a verifiable lower complexity on quantum computers. However the embodiment of the quantum circuits (QC) for QIP remains unclear let alone a (thorough) evaluation of the QIP circuits especially in a practical context in the NISQ era by applying QIP to ML via hybrid quantum-classic pipelines. In this paper we carefully design the QIP circuits from scratch whose complexity is in accordance with the theoretical complexity. To make the simulation tractable on classic computers especially when it is integrated in the gradient-based hybrid ML pipelines we further devise a highly-efficient simulation scheme that directly simulates the output state. Experiments show that the scheme accelerates the simulation by more than 68k times compared with the previous circuit simulator. This allows our empirical evaluation on typical machine learning tasks ranging from supervised and self-supervised learning via neural nets to K-Means clustering. The results show that the calculation error brought by typical quantum mechanisms would in general have little influence on the final numerical results given sufficient qubits. However certain tasks e.g. ranking in K-Means could be more sensitive to quantum noise.
[]
[]
[]
[]
2,678
2,679
How to Make Cross Encoder a Good Teacher for Efficient Image-Text Retrieval?
Yuxin Chen, Zongyang Ma, Ziqi Zhang, Zhongang Qi, Chunfeng Yuan, Bing Li, Junfu Pu, Ying Shan, Xiaojuan Qi, Weiming Hu
null
Dominant dual-encoder models enable efficient image-text retrieval but suffer from limited accuracy while the cross-encoder models offer higher accuracy at the expense of efficiency. Distilling cross-modality matching knowledge from cross-encoder to dual-encoder provides a natural approach to harness their strengths. Thus we investigate the following valuable question: how to make cross-encoder a good teacher for dual-encoder? Our findings are threefold: (1) Cross-modal similarity score distribution of cross-encoder is more concentrated while the result of dual-encoder is nearly normal making vanilla logit distillation less effective. However ranking distillation remains practical as it is not affected by the score distribution. (2) Only the relative order between hard negatives conveys valid knowledge while the order information between easy negatives has little significance. (3) Maintaining the coordination between distillation loss and dual-encoder training loss is beneficial for knowledge transfer. Based on these findings we propose a novel Contrastive Partial Ranking Distillation (CPRD) method which implements the objective of mimicking relative order between hard negative samples with contrastive learning. This approach coordinates with the training of the dual-encoder effectively transferring valid knowledge from the cross-encoder to the dual-encoder. Extensive experiments on image-text retrieval and ranking tasks show that our method surpasses other distillation methods and significantly improves the accuracy of dual-encoder.
[]
[]
[]
[]
2,679
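The ranking-distillation idea described in the abstract above (transferring only the relative order of the teacher's hard negatives to the student) could be sketched roughly as follows. This is a hedged illustration rather than the authors' CPRD code: the hard-negative selection, adjacent-pair scheme, temperature, and function names are all assumptions.

```python
import torch
import torch.nn.functional as F

def partial_ranking_distill(student_scores: torch.Tensor,
                            teacher_scores: torch.Tensor,
                            pos_idx: torch.Tensor,
                            num_hard: int = 8,
                            tau: float = 0.05) -> torch.Tensor:
    """scores: (B, K) similarities of each query to K candidates; pos_idx: (B,) positives."""
    b, k = student_scores.shape
    loss = student_scores.new_zeros(())
    for i in range(b):
        t = teacher_scores[i].clone()
        s = student_scores[i]
        t[pos_idx[i]] = float("-inf")                 # drop the positive candidate
        hard = torch.topk(t, num_hard).indices        # teacher's hardest negatives, ordered
        # for each adjacent pair in the teacher ranking, the higher-ranked negative
        # should win a softmax "contest" against the lower-ranked one
        for a, b_neg in zip(hard[:-1], hard[1:]):
            pair = torch.stack([s[a], s[b_neg]]) / tau
            target = torch.zeros(1, dtype=torch.long, device=s.device)
            loss = loss + F.cross_entropy(pair.unsqueeze(0), target)
    return loss / (b * (num_hard - 1))
```

Easy negatives never enter the loss, matching finding (2) in the abstract; coordinating this term with the dual-encoder's own training loss is left to the overall objective.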
2,680
Diffeomorphic Template Registration for Atmospheric Turbulence Mitigation
http://arxiv.org/abs/2405.03662
Dong Lao, Congli Wang, Alex Wong, Stefano Soatto
2,405.03662
We describe a method for recovering the irradiance underlying a collection of images corrupted by atmospheric turbulence. Since supervised data is often technically impossible to obtain assumptions and biases have to be imposed to solve this inverse problem and we choose to model them explicitly. Rather than initializing a latent irradiance ("template") by heuristics to estimate deformation we select one of the images as a reference and model the deformation in this image by the aggregation of the optical flow from it to other images exploiting a prior imposed by Central Limit Theorem. Then with a novel flow inversion module the model registers each image TO the template but WITHOUT the template avoiding artifacts related to poor template initialization. To illustrate the robustness of the method we simply (i) select the first frame as the reference and (ii) use the simplest optical flow to estimate the warpings yet the improvement in registration is decisive in the final reconstruction as we achieve state-of-the-art performance despite its simplicity. The method establishes a strong baseline that can be further improved by integrating it seamlessly into more sophisticated pipelines or with domain-specific methods if so desired.
[]
[]
[]
[]
2,680
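A toy sketch of the two ingredients described in the abstract above: averaging the optical flows from the reference frame to the other frames (the zero-mean turbulence prior suggests the mean flow approximates the reference frame's own deformation), and inverting the resulting dense flow by fixed-point iteration before backward-warping. The flow convention, helper names, and the simple inversion scheme are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def invert_flow(flow: np.ndarray, n_iter: int = 20) -> np.ndarray:
    """Fixed-point inversion of a dense flow field: inv(x) = -flow(x + inv(x)).
    flow: (H, W, 2) with flow[..., 0] = dx (cols), flow[..., 1] = dy (rows)."""
    h, w, _ = flow.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(np.float64)
    inv = np.zeros_like(flow)
    for _ in range(n_iter):
        sy = gy + inv[..., 1]                          # sample forward flow at x + inv(x)
        sx = gx + inv[..., 0]
        fx = map_coordinates(flow[..., 0], [sy, sx], order=1, mode="nearest")
        fy = map_coordinates(flow[..., 1], [sy, sx], order=1, mode="nearest")
        inv = -np.stack([fx, fy], axis=-1)
    return inv

def warp(image: np.ndarray, flow: np.ndarray) -> np.ndarray:
    """Backward-warp a grayscale image by a dense flow field."""
    h, w = image.shape
    gy, gx = np.mgrid[0:h, 0:w].astype(np.float64)
    return map_coordinates(image, [gy + flow[..., 1], gx + flow[..., 0]],
                           order=1, mode="nearest")

def register_reference(ref: np.ndarray, flows_ref_to_others) -> np.ndarray:
    """Mean flow approximates the reference's own deformation; invert it and undo it."""
    mean_flow = np.mean(np.stack(flows_ref_to_others, axis=0), axis=0)
    return warp(ref, invert_flow(mean_flow))
```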
2,681
Selective Nonlinearities Removal from Digital Signals
Krzysztof A. Maliszewski, Magdalena A. Urbańska, Varvara Vetrova, Sylwia M. Kolenderska
null
Many instruments performing optical and non-optical imaging and sensing such as Optical Coherence Tomography (OCT) Magnetic Resonance Imaging or Fourier-transform spectrometry produce digital signals containing modulations (sine-like components) which only after Fourier transformation give information about the structure or characteristics of the investigated object. Due to the fundamental physics-related limitations of such methods the distribution of these signal components is often nonlinear and when not properly compensated leads to a drop in resolution precision or quality in the final image. Here we propose an innovative approach that has the potential to allow cleaning of the signal from the nonlinearities but most of all it allows switching a given order off while leaving all others intact. The latter provides a tool for more in-depth analysis of the nonlinearity-inducing properties of the investigated object which can lead to applications in early disease detection or more sensitive sensing of chemical compounds. We consider OCT signals and nonlinearities up to the third order. In our approach we propose two neural networks: one to remove solely the second-order nonlinearity and the other to remove solely the third-order nonlinearity. The input of the networks is a novel two-dimensional data structure with all the information needed for the network to infer a nonlinearity-free signal. We describe the developed networks and present the results for second-order and third-order nonlinearity removal in OCT data representing the images of various objects: a mirror glass and fruits.
[]
[]
[]
[]
2,681
2,682
NB-GTR: Narrow-Band Guided Turbulence Removal
Yifei Xia, Chu Zhou, Chengxuan Zhu, Minggui Teng, Chao Xu, Boxin Shi
null
The removal of atmospheric turbulence is crucial for long-distance imaging. Leveraging the stochastic nature of atmospheric turbulence numerous algorithms have been developed that employ multi-frame input to mitigate the turbulence. However when limited to a single frame existing algorithms face substantial performance drops particularly in diverse real-world scenes. In this paper we propose a robust solution to turbulence removal from an RGB image under the guidance of an additional narrow-band image broadening the applicability of turbulence mitigation techniques in real-world imaging scenarios. Our approach exhibits a substantial suppression in the magnitude of turbulence artifacts by using only a pair of images thereby enhancing the clarity and fidelity of the captured scene.
[]
[]
[]
[]
2,682
2,683
Can Biases in ImageNet Models Explain Generalization?
http://arxiv.org/abs/2404.01509
Paul Gavrikov, Janis Keuper
2,404.01509
The robust generalization of models to rare in-distribution (ID) samples drawn from the long tail of the training distribution and to out-of-training-distribution (OOD) samples is one of the major challenges of current deep learning methods. For image classification this manifests in the existence of adversarial attacks the performance drops on distorted images and a lack of generalization to concepts such as sketches. The current understanding of generalization in neural networks is very limited but some biases that differentiate models from human vision have been identified and might be causing these limitations. Consequently several attempts with varying success have been made to reduce these biases during training to improve generalization. We take a step back and sanity-check these attempts. Fixing the architecture to the well-established ResNet-50 we perform a large-scale study on 48 ImageNet models obtained via different training methods to understand how and if these biases - including shape bias spectral biases and critical bands - interact with generalization. Our extensive study results reveal that contrary to previous findings these biases are insufficient to accurately predict the generalization of a model holistically. We provide access to all checkpoints and evaluation code at https://github.com/paulgavrikov/biases_vs_generalization/
[]
[]
[]
[]
2,683
2,684
NRDF: Neural Riemannian Distance Fields for Learning Articulated Pose Priors
http://arxiv.org/abs/2403.03122
Yannan He, Garvita Tiwari, Tolga Birdal, Jan Eric Lenssen, Gerard Pons-Moll
2,403.03122
Faithfully modeling the space of articulations is a crucial task that allows recovery and generation of realistic poses and remains a notorious challenge. To this end we introduce Neural Riemannian Distance Fields (NRDFs) data-driven priors modeling the space of plausible articulations represented as the zero-level-set of a neural field in a high-dimensional product-quaternion space. To train NRDFs only on positive examples we introduce a new sampling algorithm ensuring that the geodesic distances follow a desired distribution yielding a principled distance field learning paradigm. We then devise a projection algorithm to map any random pose onto the level-set by an adaptive-step Riemannian optimizer adhering to the product manifold of joint rotations at all times. NRDFs can compute the Riemannian gradient via backpropagation and by mathematical analogy are related to Riemannian flow matching a recent generative model. We conduct a comprehensive evaluation of NRDF against other pose priors in various downstream tasks i.e. pose generation image-based pose estimation and solving inverse kinematics highlighting NRDF's superior performance. Besides humans NRDF's versatility extends to hand and animal poses as it can effectively represent any articulation.
[]
[]
[]
[]
2,684
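A small worked example related to the abstract above: on a product of unit quaternions (one per joint), a natural geodesic distance aggregates per-joint angular distances d(q1, q2) = 2 arccos(|<q1, q2>|), where the absolute value accounts for the q and -q double cover of rotations. This is generic Riemannian bookkeeping rather than NRDF's training code, and the choice of aggregating with a norm is an assumption.

```python
import torch

def product_quaternion_geodesic(q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
    """q1, q2: (..., J, 4) unit quaternions for J joints -> (...,) pose distance."""
    q1 = torch.nn.functional.normalize(q1, dim=-1)
    q2 = torch.nn.functional.normalize(q2, dim=-1)
    dot = (q1 * q2).sum(dim=-1).abs().clamp(max=1.0 - 1e-7)   # |<q1, q2>| handles q ~ -q
    per_joint = 2.0 * torch.arccos(dot)                        # angular distance per joint
    return per_joint.norm(dim=-1)                              # aggregate over the J joints
```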
2,685
RepAn: Enhanced Annealing through Re-parameterization
Xiang Fei, Xiawu Zheng, Yan Wang, Fei Chao, Chenglin Wu, Liujuan Cao
null
The simulated annealing algorithm aims to improve model convergence through multiple restarts of training. However existing annealing algorithms overlook the correlation between different cycles neglecting the potential for incremental learning. We contend that a fixed network structure prevents the model from recognizing distinct features at different training stages. To this end we propose RepAn redesigning the irreversible re-parameterization (Rep) method and integrating it with annealing to enhance training. Specifically the network goes through Rep expansion restoration and backpropagation operations during training and iterates through these processes in each annealing round. Such a method exhibits good generalization and is easy to apply and we provide theoretical explanations for its effectiveness. Experiments demonstrate that our method improves baseline performance by 6.38% on the CIFAR-100 dataset and 2.80% on ImageNet achieving state-of-the-art performance in the Rep field. The code is available at https://github.com/xfey/RepAn.
[]
[]
[]
[]
2,685
2,686
Generative Quanta Color Imaging
http://arxiv.org/abs/2403.19066
Vishal Purohit, Junjie Luo, Yiheng Chi, Qi Guo, Stanley H. Chan, Qiang Qiu
2,403.19066
The astonishing development of single-photon cameras has created an unprecedented opportunity for scientific and industrial imaging. However the high data throughput generated by these 1-bit sensors creates a significant bottleneck for low-power applications. In this paper we explore the possibility of generating a color image from a single binary frame of a single-photon camera. We find this problem to be particularly difficult for standard colorization approaches due to the substantial degree of exposure variation. The core innovation of our paper is an exposure synthesis model framed under a neural ordinary differential equation (Neural ODE) that allows us to generate a continuum of exposures from a single observation. This innovation ensures consistent exposure in the binary images that colorizers take as input resulting in notably enhanced colorization. We demonstrate applications of the method in single-image and burst colorization and show superior generative performance over baselines. The project website can be found at https://vishal-s-p.github.io/projects/2023/generative_quanta_color.html
[]
[]
[]
[]
2,686
2,687
Panda-70M: Captioning 70M Videos with Multiple Cross-Modality Teachers
Tsai-Shien Chen, Aliaksandr Siarohin, Willi Menapace, Ekaterina Deyneka, Hsiang-wei Chao, Byung Eun Jeon, Yuwei Fang, Hsin-Ying Lee, Jian Ren, Ming-Hsuan Yang, Sergey Tulyakov
null
The quality of the data and annotation upper-bounds the quality of a downstream model. While there exist large text corpora and image-text pairs high-quality video-text data is much harder to collect. First of all manual labeling is more time-consuming as it requires an annotator to watch an entire video. Second videos have a temporal dimension consist of a number of scenes stacked together and show multiple actions. Accordingly to establish a video dataset with high-quality captions we propose an automatic approach leveraging multimodal inputs such as textual video description subtitles and individual video frames. Specifically we curate 3.8M high-resolution videos from the publicly available HD-VILA-100M dataset. We then split them into semantically consistent video clips and apply multiple cross-modality teacher models to obtain captions for each video. Next we finetune a retrieval model on a small subset where the best caption of each video is manually selected and then employ the model in the whole dataset to select the best caption as the annotation. In this way we get 70M videos paired with high-quality text captions. We dub the dataset as Panda-70M. We show the value of the proposed dataset on three downstream tasks: video captioning video and text retrieval and text-driven video generation. The models trained on the proposed data score substantially better on the majority of metrics across all the tasks.
[]
[]
[]
[]
2,687
2,688
Overload: Latency Attacks on Object Detection for Edge Devices
http://arxiv.org/abs/2304.05370
Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee
2,304.0537
Nowadays the deployment of deep learning-based applications is an essential task owing to the increasing demands on intelligent services. In this paper we investigate latency attacks on deep learning applications. Unlike common adversarial attacks for misclassification the goal of latency attacks is to increase the inference time which may stop applications from responding to the requests within a reasonable time. This kind of attack is ubiquitous for various applications and we use object detection to demonstrate how such attacks work. We also design a framework named Overload to generate latency attacks at scale. Our method is based on a newly formulated optimization problem and a novel technique called spatial attention. This attack serves to escalate the required computing costs during the inference time consequently leading to an extended inference time for object detection. It presents a significant threat especially to systems with limited computing resources. We conducted experiments using YOLOv5 models on Nvidia NX. Compared to existing methods our method is simpler and more effective. The experimental results show that with latency attacks the inference time of a single image can be increased to ten times longer than in the normal setting. Moreover our findings pose a potential new threat to all object detection tasks requiring non-maximum suppression (NMS) as our attack is NMS-agnostic.
[]
[]
[]
[]
2,688
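A quick, hedged illustration of the mechanism the abstract above exploits: NMS cost grows with the number of candidate boxes, so an attack that makes a detector emit many overlapping candidates inflates inference latency. The snippet only times standard torchvision NMS on synthetic boxes; it does not implement the Overload attack itself, and the box sizes and counts are arbitrary.

```python
import time
import torch
from torchvision.ops import nms

def nms_time(num_boxes: int, iou_thr: float = 0.5) -> float:
    """Time a single NMS call over random, heavily overlapping boxes."""
    xy = torch.rand(num_boxes, 2) * 600          # top-left corners
    wh = torch.rand(num_boxes, 2) * 50 + 1       # widths / heights
    boxes = torch.cat([xy, xy + wh], dim=1)      # (x1, y1, x2, y2)
    scores = torch.rand(num_boxes)
    start = time.perf_counter()
    nms(boxes, scores, iou_thr)
    return time.perf_counter() - start

for n in (1_000, 10_000, 50_000):
    print(f"{n} candidate boxes -> NMS took {nms_time(n):.4f}s")
```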
2,689
DreamControl: Control-Based Text-to-3D Generation with 3D Self-Prior
http://arxiv.org/abs/2312.06439
Tianyu Huang, Yihan Zeng, Zhilu Zhang, Wan Xu, Hang Xu, Songcen Xu, Rynson W.H. Lau, Wangmeng Zuo
2,312.06439
3D generation has attracted great attention in recent years. With the success of text-to-image diffusion models the 2D-lifting technique becomes a promising route to controllable 3D generation. However these methods tend to present inconsistent geometry which is also known as the Janus problem. We observe that the problem is caused mainly by two aspects i.e. viewpoint bias in 2D diffusion models and overfitting of the optimization objective. To address it we propose a two-stage 2D-lifting framework namely DreamControl which optimizes coarse NeRF scenes as 3D self-prior and then generates fine-grained objects with control-based score distillation. Specifically adaptive viewpoint sampling and boundary integrity metric are proposed to ensure the consistency of generated priors. The priors are then regarded as input conditions to maintain reasonable geometries in which conditional LoRA and weighted score are further proposed to optimize detailed textures. DreamControl can generate high-quality 3D content in terms of both geometry consistency and texture fidelity. Moreover our control-based optimization guidance is applicable to more downstream tasks including user-guided generation and 3D animation.
[]
[]
[]
[]
2,689
2,690
Infrared Small Target Detection with Scale and Location Sensitivity
http://arxiv.org/abs/2403.19366
Qiankun Liu, Rui Liu, Bolun Zheng, Hongkui Wang, Ying Fu
2,403.19366
Recently infrared small target detection (IRSTD) has been dominated by deep-learning-based methods. However these methods mainly focus on the design of complex model structures to extract discriminative features leaving the loss functions for IRSTD under-explored. For example the widely used Intersection over Union (IoU) and Dice losses lack sensitivity to the scales and locations of targets limiting the detection performance of detectors. In this paper we focus on boosting detection performance with a more effective loss but a simpler model structure. Specifically we first propose a novel Scale and Location Sensitive (SLS) loss to handle the limitations of existing losses: 1) for scale sensitivity we compute a weight for the IoU loss based on target scales to help the detector distinguish targets with different scales; 2) for location sensitivity we introduce a penalty term based on the center points of targets to help the detector localize targets more precisely. Then we design a simple Multi-Scale Head for the plain U-Net (MSHNet). By applying SLS loss to each scale of the predictions our MSHNet outperforms existing state-of-the-art methods by a large margin. In addition the detection performance of existing detectors can be further improved when trained with our SLS loss demonstrating the effectiveness and generalization of our SLS loss. The code is available at https://github.com/ying-fu/MSHNet.
[]
[]
[]
[]
2,690
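A hypothetical sketch of a scale- and location-sensitive IoU-style loss in the spirit of the abstract above: the IoU term is weighted by target scale so small targets are not dominated in the batch, and a penalty on the distance between predicted and ground-truth mass centers is added. The exact weighting and penalty terms of the paper's SLS loss may differ; all names here are illustrative.

```python
import torch

def sls_like_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """pred, target: (B, 1, H, W) soft masks in [0, 1]."""
    b = pred.shape[0]
    p = pred.reshape(b, -1)
    t = target.reshape(b, -1)

    inter = (p * t).sum(dim=1)
    union = p.sum(dim=1) + t.sum(dim=1) - inter
    iou = (inter + eps) / (union + eps)

    # Scale sensitivity: smaller targets receive larger weights.
    area = t.sum(dim=1)
    scale_w = 1.0 / torch.sqrt(area + 1.0)
    scale_w = scale_w / (scale_w.mean() + eps)

    # Location sensitivity: normalized distance between mass centers.
    h, w = pred.shape[-2:]
    ys = torch.arange(h, device=pred.device, dtype=pred.dtype)
    xs = torch.arange(w, device=pred.device, dtype=pred.dtype)
    grid_y, grid_x = torch.meshgrid(ys, xs, indexing="ij")
    coords = torch.stack([grid_y, grid_x], dim=0).reshape(2, -1)   # (2, H*W)

    def center(m):  # (B, H*W) -> (B, 2) mass center in pixel coordinates
        return (m.unsqueeze(1) * coords.unsqueeze(0)).sum(-1) / (m.sum(-1, keepdim=True) + eps)

    diag = (h ** 2 + w ** 2) ** 0.5
    loc_pen = (center(p) - center(t)).norm(dim=1) / diag

    return (scale_w * (1.0 - iou) + loc_pen).mean()
```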
2,691
Self-supervised Debiasing Using Low Rank Regularization
http://arxiv.org/abs/2210.05248
Geon Yeong Park, Chanyong Jung, Sangmin Lee, Jong Chul Ye, Sang Wan Lee
2,210.05248
Spurious correlations can cause strong biases in deep neural networks impairing generalization ability. While most existing debiasing methods require full supervision on either spurious attributes or target labels training a debiased model from a limited amount of both annotations is still an open question. To address this issue we investigate an interesting phenomenon using the spectral analysis of latent representations: spuriously correlated attributes make neural networks inductively biased towards encoding lower effective rank representations. We also show that a rank regularization can amplify this bias in a way that encourages highly correlated features. Leveraging these findings we propose a self-supervised debiasing framework potentially compatible with unlabeled samples. Specifically we first pretrain a biased encoder in a self-supervised manner with the rank regularization serving as a semantic bottleneck to enforce the encoder to learn the spuriously correlated attributes. This biased encoder is then used to discover and upweight bias-conflicting samples in a downstream task serving as a boosting to effectively debias the main model. Remarkably the proposed debiasing framework significantly improves the generalization performance of self-supervised learning baselines and in some cases even outperforms state-of-the-art supervised debiasing approaches.
[]
[]
[]
[]
2,691
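For the rank regularization mentioned in the abstract above, one common concrete choice is to penalize the nuclear norm of the batch feature matrix, which pushes the encoder toward low effective rank and thus toward the spuriously correlated attributes. The sketch below assumes that choice and a generic self-supervised loss; the paper's exact regularizer and weighting may differ.

```python
import torch
import torch.nn.functional as F

def rank_regularized_loss(z: torch.Tensor, ssl_loss: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    """z: (N, D) batch of encoder representations; ssl_loss: any SSL objective value."""
    z = z - z.mean(dim=0, keepdim=True)          # center the batch
    z = F.normalize(z, dim=1)                    # unit-norm features
    nuclear = torch.linalg.svdvals(z).sum()      # nuclear norm of the batch matrix
    return ssl_loss + lam * nuclear              # penalizing it encourages low effective rank
```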
2,692
ODIN: A Single Model for 2D and 3D Segmentation
http://arxiv.org/abs/2401.02416
Ayush Jain, Pushkal Katara, Nikolaos Gkanatsios, Adam W. Harley, Gabriel Sarch, Kriti Aggarwal, Vishrav Chaudhary, Katerina Fragkiadaki
2,401.02416
State-of-the-art models on contemporary 3D segmentation benchmarks like ScanNet consume and label dataset-provided 3D point clouds obtained through post processing of sensed multiview RGB-D images. They are typically trained in-domain forego large-scale 2D pre-training and outperform alternatives that featurize the posed RGB-D multiview images instead. The gap in performance between methods that consume posed images versus post-processed 3D point clouds has fueled the belief that 2D and 3D perception require distinct model architectures. In this paper we challenge this view and propose ODIN (Omni-Dimensional INstance segmentation) a model that can segment and label both 2D RGB images and 3D point clouds using a transformer architecture that alternates between 2D within-view and 3D cross-view information fusion. Our model differentiates 2D and 3D feature operations through the positional encodings of the tokens involved which capture pixel coordinates for 2D patch tokens and 3D coordinates for 3D feature tokens. ODIN achieves state-of-the-art performance on ScanNet200 Matterport3D and AI2THOR 3D instance segmentation benchmarks and competitive performance on ScanNet S3DIS and COCO. It outperforms all previous works by a wide margin when the sensed 3D point cloud is used in place of the point cloud sampled from 3D mesh. When used as the 3D perception engine in an instructable embodied agent architecture it sets a new state-of-the-art on the TEACh action-from-dialogue benchmark. Our code and checkpoints can be found at the project website: https://odin-seg.github.io.
[]
[]
[]
[]
2,692
2,693
SD4Match: Learning to Prompt Stable Diffusion Model for Semantic Matching
http://arxiv.org/abs/2310.17569
Xinghui Li, Jingyi Lu, Kai Han, Victor Adrian Prisacariu
2,310.17569
In this paper we address the challenge of matching semantically similar keypoints across image pairs. Existing research indicates that the intermediate output of the UNet within the Stable Diffusion (SD) can serve as robust image feature maps for such a matching task. We demonstrate that by employing a basic prompt tuning technique the inherent potential of Stable Diffusion can be harnessed resulting in a significant enhancement in accuracy over previous approaches. We further introduce a novel conditional prompting module that conditions the prompt on the local details of the input image pairs leading to a further improvement in performance. We designate our approach as SD4Match short for Stable Diffusion for Semantic Matching. Comprehensive evaluations of SD4Match on the PF-Pascal PF-Willow and SPair-71k datasets show that it sets new benchmarks in accuracy across all these datasets. Particularly SD4Match outperforms the previous state-of-the-art by a margin of 12 percentage points on the challenging SPair-71k dataset. Code is available at the project website: https://sd4match.active.vision.
[]
[]
[]
[]
2,693
2,694
InitNO: Boosting Text-to-Image Diffusion Models via Initial Noise Optimization
http://arxiv.org/abs/2404.04650
Xiefan Guo, Jinlin Liu, Miaomiao Cui, Jiankai Li, Hongyu Yang, Di Huang
2,404.0465
Recent strides in the development of diffusion models exemplified by advancements such as Stable Diffusion have underscored their remarkable prowess in generating visually compelling images. However the imperative of achieving a seamless alignment between the generated image and the provided prompt persists as a formidable challenge. This paper traces the root of these difficulties to invalid initial noise and proposes a solution in the form of Initial Noise Optimization (InitNO) a paradigm that refines this noise. Considering text prompts not all random noises are effective in synthesizing semantically-faithful images. We design the cross-attention response score and the self-attention conflict score to evaluate the initial noise bifurcating the initial latent space into valid and invalid sectors. A strategically crafted noise optimization pipeline is developed to guide the initial noise towards valid regions. Our method validated through rigorous experimentation shows a commendable proficiency in generating images in strict accordance with text prompts. Our code is available at https://github.com/xiefan-guo/initno.
[]
[]
[]
[]
2,694
2,695
Neural Video Compression with Feature Modulation
http://arxiv.org/abs/2402.17414
Jiahao Li, Bin Li, Yan Lu
2,402.17414
The emerging conditional coding-based neural video codec (NVC) shows superiority over commonly-used residual coding-based codec and the latest NVC already claims to outperform the best traditional codec. However there still exist critical problems blocking the practicality of NVC. In this paper we propose a powerful conditional coding-based NVC that solves two critical problems via feature modulation. The first is how to support a wide quality range in a single model. Previous NVC with this capability only supports about 3.8 dB PSNR range on average. To tackle this limitation we modulate the latent feature of the current frame via the learnable quantization scaler. During the training we specially design the uniform quantization parameter sampling mechanism to improve the harmonization of encoding and quantization. This results in a better learning of the quantization scaler and helps our NVC support about 11.4 dB PSNR range. The second is how to make NVC still work under a long prediction chain. We expose that the previous SOTA NVC has an obvious quality degradation problem when using a large intra-period setting. To this end we propose modulating the temporal feature with a periodically refreshing mechanism to boost the quality. Notably under single intra-frame setting our codec can achieve 29.7% bitrate saving over previous SOTA NVC with 16% MACs reduction. Our codec serves as a notable landmark in the journey of NVC evolution. The codes are at https://github.com/microsoft/DCVC.
[]
[]
[]
[]
2,695
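A minimal, hypothetical sketch of a learnable quantization scaler in the spirit of the abstract above: a quality-dependent per-channel scale divides the latent before rounding and multiplies it back afterwards, with the quality level sampled uniformly during training so one model covers a range of rate points. Module and parameter names, the anchor interpolation, and the straight-through rounding are illustrative assumptions; the actual codec design may differ.

```python
import torch
import torch.nn as nn

class LearnableQuantScaler(nn.Module):
    def __init__(self, channels: int, num_anchors: int = 4):
        super().__init__()
        # one learnable per-channel log-scale per anchor quality level
        self.log_scales = nn.Parameter(torch.zeros(num_anchors, channels))
        self.num_anchors = num_anchors

    def scale_for(self, q: torch.Tensor) -> torch.Tensor:
        """q: (B,) quality in [0, 1] -> (B, C) interpolated scales."""
        pos = q * (self.num_anchors - 1)
        lo = pos.floor().long().clamp(max=self.num_anchors - 2)
        w = (pos - lo.float()).unsqueeze(-1)
        log_s = (1 - w) * self.log_scales[lo] + w * self.log_scales[lo + 1]
        return log_s.exp()

    def forward(self, y: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        """y: (B, C, H, W) latent; q: (B,) quality in [0, 1]."""
        s = self.scale_for(q)[:, :, None, None]
        y_scaled = y / s
        # straight-through rounding so gradients flow through quantization
        y_hat = y_scaled + (torch.round(y_scaled) - y_scaled).detach()
        return y_hat * s

# During training a quality level would be sampled uniformly per step, e.g.
# q = torch.rand(batch_size, device=y.device)
```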
2,696
Data Poisoning based Backdoor Attacks to Contrastive Learning
http://arxiv.org/abs/2211.08229
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
2,211.08229
Contrastive learning (CL) pre-trains general-purpose encoders using an unlabeled pre-training dataset which consists of images or image-text pairs. CL is vulnerable to data poisoning based backdoor attacks (DPBAs) in which an attacker injects poisoned inputs into the pre-training dataset so the encoder is backdoored. However existing DPBAs achieve limited effectiveness. In this work we take the first step to analyze the limitations of existing backdoor attacks and propose new DPBAs called CorruptEncoder to CL. CorruptEncoder introduces a new attack strategy to create poisoned inputs and uses a theory-guided method to maximize attack effectiveness. Our experiments show that CorruptEncoder substantially outperforms existing DPBAs. In particular CorruptEncoder is the first DPBA that achieves more than 90% attack success rates with only a few (3) reference images and a small poisoning ratio (0.5%). Moreover we also propose a defense called localized cropping to defend against DPBAs. Our results show that our defense can reduce the effectiveness of DPBAs but it sacrifices the utility of the encoder highlighting the need for new defenses.
[]
[]
[]
[]
2,696
2,697
Multimodal Sense-Informed Forecasting of 3D Human Motions
Zhenyu Lou, Qiongjie Cui, Haofan Wang, Xu Tang, Hong Zhou
null
Predicting future human pose is a fundamental application for machine intelligence which drives robots to plan their behavior and paths ahead of time to seamlessly accomplish human-robot collaboration in real-world 3D scenarios. Despite encouraging results existing approaches rarely consider the effects of the external scene on the motion sequence leading to pronounced artifacts and physical implausibilities in the predictions. To address this limitation this work introduces a novel multi-modal sense-informed motion prediction approach which conditions high-fidelity generation on two modal information: external 3D scene and internal human gaze and is able to recognize their salience for future human activity. Furthermore the gaze information is regarded as the human intention and combined with both motion and scene features we construct a ternary intention-aware attention to supervise the generation to match where the human wants to reach. Meanwhile we introduce semantic coherence-aware attention to explicitly distinguish the salient point clouds and the underlying ones to ensure a reasonable interaction of the generated sequence with the 3D scene. On two real-world benchmarks the proposed method achieves state-of-the-art performance both in 3D human pose and trajectory prediction. More detailed results are available on the page: https://sites.google.com/view/cvpr2024sif3d.
[]
[]
[]
[]
2,697
2,698
FlowerFormer: Empowering Neural Architecture Encoding using a Flow-aware Graph Transformer
http://arxiv.org/abs/2403.12821
Dongyeong Hwang, Hyunju Kim, Sunwoo Kim, Kijung Shin
2,403.12821
The success of a specific neural network architecture is closely tied to the dataset and task it tackles; there is no one-size-fits-all solution. Thus considerable efforts have been made to quickly and accurately estimate the performances of neural architectures without full training or evaluation for given tasks and datasets. Neural architecture encoding has played a crucial role in the estimation and graph-based methods which treat an architecture as a graph have shown prominent performance. For enhanced representation learning of neural architectures we introduce FlowerFormer a powerful graph transformer that incorporates the information flows within a neural architecture. FlowerFormer consists of two key components: (a) bidirectional asynchronous message passing inspired by the flows; (b) global attention built on flow-based masking. Our extensive experiments demonstrate the superiority of FlowerFormer over existing neural encoding methods and its effectiveness extends beyond computer vision models to include graph neural networks and automatic speech recognition models. Our code is available at http://github.com/y0ngjaenius/CVPR2024_FLOWERFormer.
[]
[]
[]
[]
2,698
2,699
EmoGen: Emotional Image Content Generation with Text-to-Image Diffusion Models
http://arxiv.org/abs/2401.04608
Jingyuan Yang, Jiawei Feng, Hui Huang
2,401.04608
Recent years have witnessed remarkable progress in the image generation task where users can create visually astonishing images with high quality. However existing text-to-image diffusion models are proficient in generating concrete concepts (dogs) but encounter challenges with more abstract ones (emotions). Several efforts have been made to modify image emotions with color and style adjustments facing limitations in effectively conveying emotions with fixed image contents. In this work we introduce Emotional Image Content Generation (EICG) a new task to generate semantic-clear and emotion-faithful images given emotion categories. Specifically we propose an emotion space and construct a mapping network to align it with the powerful Contrastive Language-Image Pre-training (CLIP) space providing a concrete interpretation of abstract emotions. Attribute loss and emotion confidence are further proposed to ensure the semantic diversity and emotion fidelity of the generated images. Our method outperforms the state-of-the-art text-to-image approaches both quantitatively and qualitatively where we derive three custom metrics i.e. emotion accuracy semantic clarity and semantic diversity. In addition to generation our method can help emotion understanding and inspire emotional art design. Project page: https://vcc.tech/research/2024/EmoGen.
[]
[]
[]
[]
2,699