Andreas Geiger

Publications of Yong Cao

Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation
S. Eger, Y. Cao, J. D'Souza, A. Geiger, C. Greisinger, S. Gross, Y. Hou, B. Krenn, A. Lauscher, Y. Li, et al.
arXiv, 2025
Abstract: With the advent of large multimodal language models, science is now at a threshold of an AI-based technological transformation. Recently, a plethora of new AI models and tools has been proposed, promising to empower researchers and academics worldwide to conduct their research more effectively and efficiently. This includes all aspects of the research cycle, especially (1) searching for relevant literature; (2) generating research ideas and conducting experimentation; generating (3) text-based and (4) multimodal content (e.g., scientific figures and diagrams); and (5) AI-based automatic peer review. In this survey, we provide an in-depth overview of these exciting recent developments, which promise to fundamentally alter the scientific research process for good. Our survey covers the five aspects outlined above, indicating relevant datasets, methods, and results (including evaluation), as well as limitations and scope for future research. Ethical concerns regarding shortcomings of these tools and their potential for misuse (fake science, plagiarism, harms to research integrity) take a particularly prominent place in our discussion. We hope that our survey will not only become a reference guide for newcomers to the field but also a catalyst for new AI-based initiatives in the area of "AI4Science".
LaTeX BibTeX Citation:
@article{Eger2025ARXIV,
  author = {Steffen Eger and Yong Cao and Jennifer D'Souza and Andreas Geiger and Christian Greisinger and Stephanie Gross and Yufang Hou and Brigitte Krenn and Anne Lauscher and Yizhi Li and Chenghua Lin and Nafise Sadat Moosavi and Wei Zhao and Tristan Miller},
  title = {Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation},
  journal = {arXiv},
  year = {2025}
}
Scholar Inbox: Personalized Paper Recommendations for Scientists
M. Flicke, G. Angrabeit, M. Iyengar, V. Protsenko, I. Shakun, J. Cicvaric, B. Kargi, H. He, L. Schuler, L. Scholz, et al.
arXiv, 2025
Abstract: Scholar Inbox is a new open-access platform designed to address the challenges researchers face in staying current with the rapidly expanding volume of scientific literature. We provide personalized recommendations, continuous updates from open-access archives (arXiv, bioRxiv, etc.), visual paper summaries, semantic search, and a range of tools to streamline research workflows and promote open research access. The platform's personalized recommendation system is trained on user ratings, ensuring that recommendations are tailored to individual researchers' interests. To further enhance the user experience, Scholar Inbox also offers a map of science that provides an overview of research across domains, enabling users to easily explore specific topics. We use this map to address the cold-start problem common in recommender systems, together with an active learning strategy that iteratively prompts users to rate a selection of papers, allowing the system to learn user preferences quickly. We evaluate the quality of our recommendation system on a novel dataset of 800k user ratings, which we make publicly available, as well as via an extensive user study. https://www.scholar-inbox.com/
LaTeX BibTeX Citation:
@article{Flicke2025ARXIV,
  author = {Markus Flicke and Glenn Angrabeit and Madhav Iyengar and Vitalii Protsenko and Illia Shakun and Jovan Cicvaric and Bora Kargi and Haoyu He and Lukas Schuler and Lewin Scholz and Kavyanjali Agnihotri and Yong Cao and Andreas Geiger},
  title = {Scholar Inbox: Personalized Paper Recommendations for Scientists},
  journal = {arXiv},
  year = {2025}
}

