Textual data is the backbone of information exchange wherever people are involved in the process, be it reports, legal documents, news articles, scientific publications, or tweets. Its unstructured nature remains a challenge for automatic processing by computers, especially when a task requires a deeper understanding of the content; essentially, it comes down to the question: what does it mean?
Using the example of identifying reviewers for scientific publications, Eva Eggeling shows how AI, in this case Natural Language Processing (NLP), can be used to extract information beyond the original scope of the data.
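The talk itself is not described in technical detail here, but reviewer matching of this kind is commonly framed as a text-similarity problem. The following is a minimal sketch under that assumption, not the method presented in the talk: it ranks hypothetical reviewer profiles against a submission abstract using TF-IDF weighting and cosine similarity. All names and texts are invented for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letter characters.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def tfidf_vectors(docs):
    # Compute a smoothed TF-IDF vector (as a dict) for each document.
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({t: (c / len(tokens)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical reviewer profiles, e.g. concatenated abstracts of their papers.
reviewers = {
    "reviewer_a": "deep learning neural networks image classification",
    "reviewer_b": "natural language processing text mining information extraction",
    "reviewer_c": "finite element simulation forming processes materials",
}

submission = "information extraction from unstructured text with natural language processing"

docs = [submission] + list(reviewers.values())
vecs = tfidf_vectors(docs)
scores = {name: cosine(vecs[0], v) for name, v in zip(reviewers, vecs[1:])}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking[0])  # prints "reviewer_b", the profile closest to the submission
```

Real systems typically go further, using semantic embeddings rather than exact word overlap, so that, for example, "forming processes" and "process simulation" can still be matched.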
Eva Eggeling heads the Fraunhofer Center for Data Driven Design, with business units in Graz and Klagenfurt. She has experience in various interdisciplinary projects (data assimilation, forming processes, materials research, and biomedical applications) with a focus on simulation. She spent seven years as a researcher at the German Fraunhofer Institute for Algorithms and Scientific Computing SCAI, where she coordinated scientific projects and research assignments until 2005. In March 2008, she took over the management of Visual Computing at Fraunhofer Austria Research GmbH. Since October 2019, she has also headed the newly founded Innovation Center for Digitization and Artificial Intelligence KI4LIFE in Klagenfurt.
(Image / Video (c) Thomas van Emmerik)