librarian-bot
committed on
Scheduled Commit
- data/2412.11634.json +1 -0
- data/2412.13377.json +1 -0
- data/2412.14283.json +1 -0
- data/2412.14475.json +1 -0
- data/2412.14689.json +1 -0
- data/2412.15084.json +1 -0
- data/2412.15191.json +1 -0
- data/2412.15200.json +1 -0
- data/2412.15214.json +1 -0
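Each added file is a single-line JSON record with two fields, `paper_url` and `comment` (the Markdown text Librarian Bot posts on the paper page). As a rough illustration only, the sketch below assumes a local checkout of this dataset repository with the `data/` directory present, and simply reads the records and prints their paper URLs:

```python
import json
from pathlib import Path

# Minimal sketch (assumes a local checkout with the data/ directory present):
# each file holds one JSON object with "paper_url" and "comment" fields.
for path in sorted(Path("data").glob("*.json")):
    record = json.loads(path.read_text(encoding="utf-8"))
    print(path.name, "->", record["paper_url"])
```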
data/2412.11634.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.11634", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [High-Fidelity Document Stain Removal via A Large-Scale Real-World Dataset and A Memory-Augmented Transformer](https://huggingface.co/papers/2410.22922) (2024)\n* [CLIP-SR: Collaborative Linguistic and Image Processing for Super-Resolution](https://huggingface.co/papers/2412.11609) (2024)\n* [Generative Image Layer Decomposition with Visual Effects](https://huggingface.co/papers/2411.17864) (2024)\n* [HoliSDiP: Image Super-Resolution via Holistic Semantics and Diffusion Prior](https://huggingface.co/papers/2411.18662) (2024)\n* [Beyond Pixels: Text Enhances Generalization in Real-World Image Restoration](https://huggingface.co/papers/2412.00878) (2024)\n* [FaithDiff: Unleashing Diffusion Priors for Faithful Image Super-resolution](https://huggingface.co/papers/2411.18824) (2024)\n* [DreamClear: High-Capacity Real-World Image Restoration with Privacy-Safe Dataset Curation](https://huggingface.co/papers/2410.18666) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.13377.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.13377", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [QUENCH: Measuring the gap between Indic and Non-Indic Contextual General Reasoning in LLMs](https://huggingface.co/papers/2412.11763) (2024)\n* [LLMSteer: Improving Long-Context LLM Inference by Steering Attention on Reused Contexts](https://huggingface.co/papers/2411.13009) (2024)\n* [LLM-Ref: Enhancing Reference Handling in Technical Writing with Large Language Models](https://huggingface.co/papers/2411.00294) (2024)\n* [SandboxAQ's submission to MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval](https://huggingface.co/papers/2410.21501) (2024)\n* [Adapting Large Language Models to Log Analysis with Interpretable Domain Knowledge](https://huggingface.co/papers/2412.01377) (2024)\n* [Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models](https://huggingface.co/papers/2411.00686) (2024)\n* [TQA-Bench: Evaluating LLMs for Multi-Table Question Answering with Scalable Context and Symbolic Extension](https://huggingface.co/papers/2411.19504) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14283.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.14283", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Pathways on the Image Manifold: Image Editing via Video Generation](https://huggingface.co/papers/2411.16819) (2024)\n* [3D-Consistent Image Inpainting with Diffusion Models](https://huggingface.co/papers/2412.05881) (2024)\n* [Re-Attentional Controllable Video Diffusion Editing](https://huggingface.co/papers/2412.11710) (2024)\n* [Stable Flow: Vital Layers for Training-Free Image Editing](https://huggingface.co/papers/2411.14430) (2024)\n* [SeedEdit: Align Image Re-Generation to Image Editing](https://huggingface.co/papers/2411.06686) (2024)\n* [BrushEdit: All-In-One Image Inpainting and Editing](https://huggingface.co/papers/2412.10316) (2024)\n* [Uniform Attention Maps: Boosting Image Fidelity in Reconstruction and Editing](https://huggingface.co/papers/2411.19652) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14475.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.14475", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [LamRA: Large Multimodal Model as Your Advanced Retrieval Assistant](https://huggingface.co/papers/2412.01720) (2024)\n* [EFSA: Episodic Few-Shot Adaptation for Text-to-Image Retrieval](https://huggingface.co/papers/2412.00139) (2024)\n* [Compositional Image Retrieval via Instruction-Aware Contrastive Learning](https://huggingface.co/papers/2412.05756) (2024)\n* [CompCap: Improving Multimodal Large Language Models with Composite Captions](https://huggingface.co/papers/2412.05243) (2024)\n* [MAmmoTH-VL: Eliciting Multimodal Reasoning with Instruction Tuning at Scale](https://huggingface.co/papers/2412.05237) (2024)\n* [Personalizing Multimodal Large Language Models for Image Captioning: An Experimental Analysis](https://huggingface.co/papers/2412.03665) (2024)\n* [BLIP3-KALE: Knowledge Augmented Large-Scale Dense Captions](https://huggingface.co/papers/2411.07461) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.14689.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.14689", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Leveraging Programmatically Generated Synthetic Data for Differentially Private Diffusion Training](https://huggingface.co/papers/2412.09842) (2024)\n* [Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models](https://huggingface.co/papers/2412.02980) (2024)\n* [Analyzing and Improving Model Collapse in Rectified Flow Models](https://huggingface.co/papers/2412.08175) (2024)\n* [Collapse or Thrive? Perils and Promises of Synthetic Data in a Self-Generating World](https://huggingface.co/papers/2410.16713) (2024)\n* [Large Language Models Can Self-Improve in Long-context Reasoning](https://huggingface.co/papers/2411.08147) (2024)\n* [Evaluating Language Models as Synthetic Data Generators](https://huggingface.co/papers/2412.03679) (2024)\n* [A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions](https://huggingface.co/papers/2412.08864) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15084.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.15084", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [Entropy-Regularized Process Reward Model](https://huggingface.co/papers/2412.11006) (2024)\n* [Self-Generated Critiques Boost Reward Modeling for Language Models](https://huggingface.co/papers/2411.16646) (2024)\n* [GFlowNet Fine-tuning for Diverse Correct Solutions in Mathematical Reasoning Tasks](https://huggingface.co/papers/2410.20147) (2024)\n* [Mars-PO: Multi-Agent Reasoning System Preference Optimization](https://huggingface.co/papers/2411.19039) (2024)\n* [Weighted-Reward Preference Optimization for Implicit Model Fusion](https://huggingface.co/papers/2412.03187) (2024)\n* [Preference Optimization for Reasoning with Pseudo Feedback](https://huggingface.co/papers/2411.16345) (2024)\n* [Reinforcement Learning Enhanced LLMs: A Survey](https://huggingface.co/papers/2412.10400) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15191.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.15191", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [SAVGBench: Benchmarking Spatially Aligned Audio-Video Generation](https://huggingface.co/papers/2412.13462) (2024)\n* [Tell What You Hear From What You See -- Video to Audio Generation Through Text](https://huggingface.co/papers/2411.05679) (2024)\n* [Track4Gen: Teaching Video Diffusion Models to Track Points Improves Video Generation](https://huggingface.co/papers/2412.06016) (2024)\n* [MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation](https://huggingface.co/papers/2412.04448) (2024)\n* [YingSound: Video-Guided Sound Effects Generation with Multi-modal Chain-of-Thought Controls](https://huggingface.co/papers/2412.09168) (2024)\n* [Dense Audio-Visual Event Localization under Cross-Modal Consistency and Multi-Temporal Granularity Collaboration](https://huggingface.co/papers/2412.12628) (2024)\n* [LetsTalk: Latent Diffusion Transformer for Talking Video Synthesis](https://huggingface.co/papers/2411.16748) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15200.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.15200", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models](https://huggingface.co/papers/2412.09648) (2024)\n* [Controllable Shadow Generation with Single-Step Diffusion Models from Synthetic Data](https://huggingface.co/papers/2412.11972) (2024)\n* [Structured 3D Latents for Scalable and Versatile 3D Generation](https://huggingface.co/papers/2412.01506) (2024)\n* [Boosting 3D object generation through PBR materials](https://huggingface.co/papers/2411.16080) (2024)\n* [3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation](https://huggingface.co/papers/2412.13059) (2024)\n* [TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting](https://huggingface.co/papers/2411.19654) (2024)\n* [MaterialPicker: Multi-Modal Material Generation with Diffusion Transformers](https://huggingface.co/papers/2412.03225) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}
data/2412.15214.json
ADDED
@@ -0,0 +1 @@
{"paper_url": "https://huggingface.co/papers/2412.15214", "comment": "This is an automated message from the [Librarian Bot](https://huggingface.co/librarian-bots). I found the following papers similar to this paper. \n\nThe following papers were recommended by the Semantic Scholar API \n\n* [ObjCtrl-2.5D: Training-free Object Control with Camera Poses](https://huggingface.co/papers/2412.07721) (2024)\n* [OmniDrag: Enabling Motion Control for Omnidirectional Image-to-Video Generation](https://huggingface.co/papers/2412.09623) (2024)\n* [InTraGen: Trajectory-controlled Video Generation for Object Interactions](https://huggingface.co/papers/2411.16804) (2024)\n* [3DTrajMaster: Mastering 3D Trajectory for Multi-Entity Motion in Video Generation](https://huggingface.co/papers/2412.07759) (2024)\n* [AnchorCrafter: Animate CyberAnchors Saling Your Products via Human-Object Interacting Video Generation](https://huggingface.co/papers/2411.17383) (2024)\n* [TIV-Diffusion: Towards Object-Centric Movement for Text-driven Image to Video Generation](https://huggingface.co/papers/2412.10275) (2024)\n* [I2VControl: Disentangled and Unified Video Motion Synthesis Control](https://huggingface.co/papers/2411.17765) (2024)\n\n\n Please give a thumbs up to this comment if you found it helpful!\n\n If you want recommendations for any Paper on Hugging Face checkout [this](https://huggingface.co/spaces/librarian-bots/recommend_similar_papers) Space\n\n You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: `@librarian-bot recommend`"}