Address:
Chair of Data Science (Informatik X)
University of Würzburg
Emil-Fischer-Straße 50
97074 Würzburg, Germany

Email: andrzej.dulny[at]uni-wuerzburg.de

About me

I am a professor at the University of Würzburg, head of the Data Science Chair, and founding spokesman of the Center for Artificial Intelligence and Data Science. Before that, I was a senior researcher at the University of Kassel. I started my research at the AIFB Institute at the University of Karlsruhe, where I worked on text mining, ontology learning, and semantic web related topics. Previously, I also worked in the KDE group at the University of Kassel on topics such as data mining, semantic web mining, and social media analysis. For a couple of years, I have also been a member of the L3S Research Center in Hannover.

Research Interests

I am a data science expert focusing on the development of new algorithms and machine learning models for a diverse set of applications and in several interdisciplinary collaborations, which provide interesting challenges for my research. Understanding these models through explainable AI techniques enables my group to effectively build models tailored to the specific challenges of the various application areas.

In the past few years, applying data science and machine learning to ecosystem, environmental, and climate data has become one of my central research areas. We have successfully developed deep learning methods for improving climate models in the BigData@Geo project and its successor BigData@Geo 2.0 (jointly with Heiko Paeth), as well as machine learning-based air pollution models in the EveryAware and p2Map projects. We are also analyzing data from smart beehives to understand bee behavior and to detect anomalies such as swarming events in the we4Bee and BeeConnected projects (in collaboration with Ingolf Steffan-Dewenter).

Another of my major research areas is work on LLMs for text mining and NLP in combination with explicitly represented knowledge, i.e., knowledge graphs. Here my group focuses on adapting LLMs and on extracting knowledge from them or enriching them with knowledge for our applications, for example in LitBERT to learn more about characters and character networks in novels. We have already worked on methods for representation learning, information extraction, metric and ontology learning, and KG enrichment for the Semantic Web, as well as on combining semantic representations with language models. Specifically, we are developing models for sentiment analysis, scene segmentation, and relation detection. With these models, we can analyze the development of texts over longer periods: for example, we can follow the plot of fictional novels by tracking the detected relations between characters across scenes, or measure the development of engagement in streams on twitch.tv using sentiment analysis.

To achieve our research objectives, we utilize a rich set of methodological approaches such as knowledge-enriched ML, large language models, time series and sequence modeling, representation and metric learning, and deep learning for imbalanced data, which are described in detail on my group's research page. For many of our research results, we have developed and maintain tools and websites. The best-known tools are BibSonomy, a social bookmarking system for publications, and we4Bee, a smart beehive monitoring system.

In terms of scientific self-governance, I actively contribute as a PC member, reviewer, and editor for various journals, conferences, and workshops, most recently as editor-in-chief of the new diamond open access journal Transactions on Graph Data and Knowledge (TGDK).
