Wydawnictwa Uczelniane / TUL Press
Permanent URI for this collection: http://hdl.handle.net/11652/17
Search results (2 items)
Item: Brief Overview of Selected Research Directions and Applications of Process Mining in KRaKEn Research Group (Wydawnictwo Politechniki Łódzkiej, 2023)
Authors: Kluza, Krzysztof; Zaremba, Mateusz; Sepioło, Dominik; Wiśniewski, Piotr; Adrian, Weronika T.; Gaudio, Maria Teresa; Jemioło, Paweł; Adrian, Marek; Jobczyk, Krystian; Ślażyński, Mateusz; Stachuta-Terlecka, Bernadetta; Ligęza, Antoni
Process mining allows for exploring processes using data from event logs. By providing insights into how processes are actually executed, rather than how they are supposed to be executed, process mining can be used for optimizing business processes and improving organizational efficiency. In this exploratory paper, we report on selected research threads related to process mining carried out within the KRaKEn Research Group at AGH University of Science and Technology. We introduce a collection of initial ideas that require further exploration. Our research threads concern the use of process mining techniques 1) for discovering processes from unstructured data, specifically text from e-mails, 2) for explaining black-box machine learning models, using process models as a global explanation, and 3) for analyzing data from different food industry systems to identify inefficiencies and provide recommendations for improvement.

Item: Improvement of Attention Mechanism Explainability in Prediction of Chemical Molecules' Properties (Wydawnictwo Politechniki Łódzkiej, 2023)
Authors: Durys, Bartosz; Tomczyk, Arkadiusz
In this paper, an analysis of selected graph neural network operators is presented. The classic Graph Convolutional Network (GCN) was compared with methods containing trainable attention coefficients: the Graph Attention Network (GAT) and the Graph Transformer (GT). Moreover, as an original contribution of this work, the training of GT was modified with an additional loss function component that enables easier explainability of the produced model. The experiments were conducted on datasets of chemical molecules, where both classification and regression tasks are considered. The results show that the additional constraint not only does not worsen the results but, in some cases, improves predictions.
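
As background for the first item, here is a minimal sketch of the core idea behind discovering a process from an event log: counting which activity directly follows which within each case, yielding a directly-follows graph. The toy log, case identifiers, and activity names below are made up for illustration only; the paper itself addresses more advanced threads such as mining processes from e-mail text.

# Illustrative sketch only: directly-follows graph (DFG) discovery from a toy event log.
from collections import Counter

# A toy event log: each event is (case_id, activity), already ordered by time within a case.
event_log = [
    ("case-1", "register request"), ("case-1", "check ticket"), ("case-1", "decide"),
    ("case-2", "register request"), ("case-2", "examine"), ("case-2", "decide"),
]

def discover_dfg(log):
    """Count how often one activity directly follows another within a case."""
    dfg = Counter()
    traces = {}
    for case_id, activity in log:           # group events into per-case traces
        traces.setdefault(case_id, []).append(activity)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):   # consecutive pairs = directly-follows relation
            dfg[(a, b)] += 1
    return dfg

for (a, b), count in discover_dfg(event_log).items():
    print(f"{a} -> {b}: {count}")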
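
For the second item, the sketch below shows one possible way, not necessarily the authors' method, to attach an extra loss component to attention coefficients so they become easier to interpret: an entropy penalty that rewards focused attention distributions. The tiny single-head attention layer, the penalty weight of 0.1, and the toy data are assumptions made purely for illustration.

# Illustrative sketch only: entropy regularization of graph attention coefficients.
import torch
import torch.nn.functional as F

class TinyGraphAttention(torch.nn.Module):
    """Single-head attention over a node's neighbours, using a dense adjacency mask."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = torch.nn.Linear(in_dim, out_dim)
        self.score = torch.nn.Linear(2 * out_dim, 1)

    def forward(self, x, adj):
        h = self.proj(x)                                    # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.score(pairs).squeeze(-1)              # (N, N) raw scores
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)               # attention coefficients
        return alpha @ h, alpha

def attention_entropy(alpha, adj):
    """Mean entropy of each node's attention distribution (lower = more focused)."""
    p = alpha.clamp_min(1e-12)
    ent = -(p * p.log()).masked_fill(adj == 0, 0.0).sum(dim=-1)
    return ent.mean()

# Toy data: 4 nodes, 3 features, binary labels, fully connected graph.
x = torch.randn(4, 3)
adj = torch.ones(4, 4)
y = torch.tensor([0, 1, 0, 1])

layer = TinyGraphAttention(3, 8)
head = torch.nn.Linear(8, 2)
out, alpha = layer(x, adj)
loss = F.cross_entropy(head(out), y) + 0.1 * attention_entropy(alpha, adj)
loss.backward()

Minimizing the entropy term pushes each node to concentrate its attention on a few neighbours, which makes the learned coefficients easier to inspect; whether this hurts or helps predictive quality is exactly the kind of question the paper investigates for GT on molecular data.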