librarian-bot committed
Commit 496a152 · verified · 1 Parent(s): bd4a425

Scheduled Commit

data/2412.13185.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.13185", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Motion-2-to-3: Leveraging 2D Motion Data to Boost 3D Motion Generation](https://huggingface.co/papers/2412.13111) (2024)\n* [Diffusion Implicit Policy for Unpaired Scene-aware Motion Synthesis](https://huggingface.co/papers/2412.02261) (2024)\n* [Motion Prompting: Controlling Video Generation with Motion Trajectories](https://huggingface.co/papers/2412.02700) (2024)\n* [Fleximo: Towards Flexible Text-to-Human Motion Video Generation](https://huggingface.co/papers/2411.19459) (2024)\n* [One-shot Human Motion Transfer via Occlusion-Robust Flow Prediction and Neural Texturing](https://huggingface.co/papers/2412.06174) (2024)\n* [Motion Control for Enhanced Complex Action Video Generation](https://huggingface.co/papers/2411.08328) (2024)\n* [MotionStone: Decoupled Motion Intensity Modulation with Diffusion Transformer for Image-to-Video Generation](https://huggingface.co/papers/2412.05848) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14233.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.14233", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Exploring Multi-Grained Concept Annotations for Multimodal Large Language Models](https://huggingface.co/papers/2412.05939) (2024)\n* [Grounding-IQA: Multimodal Language Grounding Model for Image Quality Assessment](https://huggingface.co/papers/2411.17237) (2024)\n* [CompCap: Improving Multimodal Large Language Models with Composite Captions](https://huggingface.co/papers/2412.05243) (2024)\n* [Benchmarking Large Vision-Language Models via Directed Scene Graph for Comprehensive Image Captioning](https://huggingface.co/papers/2412.08614) (2024)\n* [VisOnlyQA: Large Vision Language Models Still Struggle with Visual Perception of Geometric Information](https://huggingface.co/papers/2412.00947) (2024)\n* [FINECAPTION: Compositional Image Captioning Focusing on Wherever You Want at Any Granularity](https://huggingface.co/papers/2411.15411) (2024)\n* [TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning](https://huggingface.co/papers/2412.08176) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14462.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.14462", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Coherent 3D Scene Diffusion From a Single RGB Image](https://huggingface.co/papers/2412.10294) (2024)\n* [DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting](https://huggingface.co/papers/2411.17223) (2024)\n* [ObjectMate: A Recurrence Prior for Object Insertion and Subject-Driven Generation](https://huggingface.co/papers/2412.08645) (2024)\n* [MureObjectStitch: Multi-reference Image Composition](https://huggingface.co/papers/2411.07462) (2024)\n* [Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models](https://huggingface.co/papers/2411.07232) (2024)\n* [MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes](https://huggingface.co/papers/2412.11457) (2024)\n* [SSEditor: Controllable Mask-to-Scene Generation with Diffusion Model](https://huggingface.co/papers/2411.12290) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14642.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.14642", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Training and Evaluating Language Models with Template-based Data Generation](https://huggingface.co/papers/2411.18104) (2024)\n* [M$^{3}$-20M: A Large-Scale Multi-Modal Molecule Dataset for AI-driven Drug Design and Discovery](https://huggingface.co/papers/2412.06847) (2024)\n* [SDDBench: A Benchmark for Synthesizable Drug Design](https://huggingface.co/papers/2411.08306) (2024)\n* [MolCap-Arena: A Comprehensive Captioning Benchmark on Language-Enhanced Molecular Property Prediction](https://huggingface.co/papers/2411.00737) (2024)\n* [GT23D-Bench: A Comprehensive General Text-to-3D Generation Benchmark](https://huggingface.co/papers/2412.09997) (2024)\n* [CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding](https://huggingface.co/papers/2412.12075) (2024)\n* [Chemical Language Model Linker: blending text and molecules with modular adapters](https://huggingface.co/papers/2410.20182) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14835.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.14835", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Training-Free Mitigation of Language Reasoning Degradation After Multimodal Instruction Tuning](https://huggingface.co/papers/2412.03467) (2024)\n* [LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant](https://huggingface.co/papers/2412.01720) (2024)\n* [A Survey of Mathematical Reasoning in the Era of Multimodal Large Language Model: Benchmark, Method&Challenges](https://huggingface.co/papers/2412.11936) (2024)\n* [RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement](https://huggingface.co/papers/2412.12881) (2024)\n* [RARE: Retrieval-Augmented Reasoning Enhancement for Large Language Models](https://huggingface.co/papers/2412.02830) (2024)\n* [Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models](https://huggingface.co/papers/2411.14432) (2024)\n* [MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale](https://huggingface.co/papers/2412.05237) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15115.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.15115", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Tulu 3: Pushing Frontiers in Open Language Model Post-Training](https://huggingface.co/papers/2411.15124) (2024)\n* [The Zamba2 Suite: Technical Report](https://huggingface.co/papers/2411.15242) (2024)\n* [A Post-Training Enhanced Optimization Approach for Small Language Models](https://huggingface.co/papers/2411.02939) (2024)\n* [Phi-4 Technical Report](https://huggingface.co/papers/2412.08905) (2024)\n* [Pipeline Analysis for Developing Instruct LLMs in Low-Resource Languages: A Case Study on Basque](https://huggingface.co/papers/2412.13922) (2024)\n* [Training and Evaluating Language Models with Template-based Data Generation](https://huggingface.co/papers/2411.18104) (2024)\n* [Reinforcement Learning Enhanced LLMs: A Survey](https://huggingface.co/papers/2412.10400) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15204.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.15204", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [CG-Bench: Clue-grounded Question Answering Benchmark for Long Video Understanding](https://huggingface.co/papers/2412.12075) (2024)\n* [Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models](https://huggingface.co/papers/2411.14432) (2024)\n* [Large Language Models Can Self-Improve in Long-context Reasoning](https://huggingface.co/papers/2411.08147) (2024)\n* [BigDocs: An Open and Permissively-Licensed Dataset for Training Multimodal Models on Document and Code Tasks](https://huggingface.co/papers/2412.04626) (2024)\n* [Abstract2Appendix: Academic Reviews Enhance LLM Long-Context Capabilities](https://huggingface.co/papers/2411.05232) (2024)\n* [MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark](https://huggingface.co/papers/2410.19168) (2024)\n* [Enhancing LLM Reasoning via Critique Models with Test-Time and Training-Time Supervision](https://huggingface.co/papers/2411.16579) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15213.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.15213", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Zoomed In, Diffused Out: Towards Local Degradation-Aware Multi-Diffusion for Extreme Image Super-Resolution](https://huggingface.co/papers/2411.12072) (2024)\n* [Image Regeneration: Evaluating Text-to-Image Model via Generating Identical Image with Multimodal Large Language Models](https://huggingface.co/papers/2411.09449) (2024)\n* [Conditional Text-to-Image Generation with Reference Guidance](https://huggingface.co/papers/2411.16713) (2024)\n* [CoCoNO: Attention Contrast-and-Complete for Initial Noise Optimization in Text-to-Image Synthesis](https://huggingface.co/papers/2411.16783) (2024)\n* [ZoomLDM: Latent Diffusion Model for multi-scale image generation](https://huggingface.co/papers/2411.16969) (2024)\n* [The Silent Prompt: Initial Noise as Implicit Guidance for Goal-Driven Image Generation](https://huggingface.co/papers/2412.05101) (2024)\n* [Self-Guidance: Boosting Flow and Diffusion Generation on Their Own](https://huggingface.co/papers/2412.05827) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15216.json ADDED
@@ -0,0 +1 @@
+ {"paper_url": "https://huggingface.co/papers/2412.15216", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Multi-Reward as Condition for Instruction-based Image Editing](https://huggingface.co/papers/2411.04713) (2024)\n* [Pathways on the Image Manifold: Image Editing via Video Generation](https://huggingface.co/papers/2411.16819) (2024)\n* [AnyEdit: Mastering Unified High-Quality Image Editing for Any Idea](https://huggingface.co/papers/2411.15738) (2024)\n* [InsightEdit: Towards Better Instruction Following for Image Editing](https://huggingface.co/papers/2411.17323) (2024)\n* [ReEdit: Multimodal Exemplar-Based Image Editing with Diffusion Models](https://huggingface.co/papers/2411.03982) (2024)\n* [Towards a Training Free Approach for 3D Scene Editing](https://huggingface.co/papers/2412.12766) (2024)\n* [LoRA of Change: Learning to Generate LoRA for the Editing Instruction from A Single Before-After Image Pair](https://huggingface.co/papers/2411.19156) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}