Download the PDF version of the FSTA 2024 Book of Abstracts.

Download the PDF version of the FSTA 2024 Programme.

The potential of data collected from sensors is currently only partially explored in healthcare. One of the main challenges is providing adequate explanations concerning the underlying structure of data and models. At the same time, the need for explanations is of utmost importance, not only due to various regulations but also to increase trust among the systems’ users.

The proposed approach combines theoretical aspects of semi-supervised learning from partially labelled sensor data with fuzzy linguistic summarization. Linguistic summarization belongs to the class of data-to-text approaches. We construct linguistic summaries for partially labelled data streams, and drifts in the streams are reflected in the construction of the linguistic variables. The proposed approach makes it possible to summarize large data streams into meaningful and human-consistent information granules.
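To illustrate the data-to-text idea, the degree of truth of a simple protoform summary of the form "Q of the records are S" can be computed in Yager's classical style: average the summarizer memberships and evaluate the relative quantifier on the result. This is a minimal sketch; the membership functions, thresholds, and data below are illustrative assumptions, not taken from the abstract.

```python
def truth_of_summary(data, summarizer, quantifier):
    """Degree of truth of 'Q of the records are S' (Yager's approach):
    the quantifier evaluated at the mean summarizer membership."""
    r = sum(summarizer(x) for x in data) / len(data)
    return quantifier(r)

def high_temperature(x):
    """Summarizer S: 'temperature is high' (0 below 20, 1 above 30)."""
    return max(0.0, min(1.0, (x - 20.0) / 10.0))

def most(r):
    """Relative quantifier 'most' (0 below 0.3, 1 above 0.8)."""
    return max(0.0, min(1.0, (r - 0.3) / 0.5))

readings = [18.0, 24.0, 31.0, 27.0, 35.0, 22.0]
t = truth_of_summary(readings, high_temperature, most)
```

Here the summary "most temperatures are high" holds to degree 0.5 for the sample readings; in a streaming setting, the shapes of the linguistic variables would be adapted as the stream drifts.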

At the same time, semi-supervised fuzzy clustering is particularly promising for explaining sensor data: it captures information about the hidden structure of evolving data streams that are sparsely labelled and subject to multiple sources of uncertainty. This was the main motivation for exploring the approach.

Semi-supervised learning is often said to lie “halfway between supervised and unsupervised learning”. We will explain and verify how to properly assess the impact of partial supervision and what its consequences are. Furthermore, in some application contexts one can question whether all available labels are equally valid and should be extrapolated. To alleviate the problem of misguided supervision degrading the model’s performance, we discuss a regularization approach that incorporates uncertainty into semi-supervised fuzzy c-means learning.
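A minimal sketch of how partial supervision can enter fuzzy c-means: after every unsupervised membership update, the membership rows of labelled points are blended with their one-hot labels. This is a deliberate simplification for illustration, not the regularization scheme of the talk; all names, parameters, and data are assumptions.

```python
import numpy as np

def semi_supervised_fcm(X, c, labels=None, alpha=0.5, m=2.0, iters=100, seed=0):
    """Fuzzy c-means with a simple form of partial supervision.

    labels[i] is a class index in 0..c-1 for labelled points and -1 for
    unlabelled ones.  After each unsupervised membership update, labelled
    rows are blended with their one-hot labels (weight alpha) -- a
    simplification of the regularized semi-supervised schemes, chosen
    here only to show where supervision enters the iteration.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # random fuzzy partition
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # prototype update
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = (d ** -p) / (d ** -p).sum(axis=1, keepdims=True)  # FCM update
        if labels is not None:
            mask = labels >= 0
            U[mask] = (1 - alpha) * U[mask] + alpha * np.eye(c)[labels[mask]]
    return centers, U

# Two well-separated 1-D clusters, one labelled point in each
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
labels = np.array([0, -1, -1, 1, -1, -1])
centers, U = semi_supervised_fcm(X, c=2, labels=labels)
```

The blending weight `alpha` plays the role of the trust placed in the labels; an uncertainty-aware scheme would, roughly speaking, let this weight vary per label instead of being a global constant.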

Finally, we present a case study in smartphone-based mental health monitoring. Acoustic features of speech are promising objective markers of mental state. Specialized smartphone apps can gather such acoustic data without disrupting the patients’ daily activities. Nonetheless, psychiatric assessment of a patient’s mental state is typically sporadic, taking place every few months. Consequently, only a small fraction of the acoustic data is labelled and usable for supervised learning. Numerical experiments on real-life and simulated data illustrate the performance of the proposed uncertainty-aware semi-supervised models.

Formal Concept Analysis (FCA) is a mathematical theory for data analysis and classification that enjoys wide popularity in numerous application domains. FCA techniques extract special clusters, called formal concepts, from a given formal context. A formal context is a triple composed of a set of objects, a set of attributes, and a binary relation between objects and attributes. Several approaches extending FCA have been developed by considering fuzzy formal contexts and fuzzy formal concepts, where attributes are satisfied by objects to truth degrees belonging to a graded scale, usually the real interval [0, 1]. Fuzzy formal concepts are mathematically constructed using the fuzzy quantifiers “for all” and “there exists” (the universal and existential quantifiers).

We introduced a particular class of fuzzy quantifiers as new tools to capture more detailed information from datasets in FCA. These are interpretations, in a model, of special formulas called intermediate quantifiers in the formal theory of intermediate generalized quantifiers. Thus, we mainly achieved the following goals. Firstly, we proposed a novel notion of fuzzy formal concept based on the intermediate quantifiers “almost all”, “most”, “many”, “few”, and “some”; moreover, we provided concrete models of graded extensions of Aristotle’s square in terms of fuzzy formal concepts. Secondly, we employed a wider class of intermediate quantifiers to extract fuzzy formal concepts from more complex datasets, composed of a family of formal contexts (instead of a single one) and several fuzzy relations between objects of different types.
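For readers unfamiliar with the classical (crisp) machinery that the fuzzy extensions generalize, the two derivation operators and a brute-force enumeration of formal concepts fit in a few lines. The toy context below is invented for illustration.

```python
from itertools import combinations

def derive_objects(objs, incidence, attrs_all):
    """objs' : the attributes shared by every object in objs."""
    return frozenset(a for a in attrs_all if all((o, a) in incidence for o in objs))

def derive_attrs(attrs, incidence, objs_all):
    """attrs' : the objects possessing every attribute in attrs."""
    return frozenset(o for o in objs_all if all((o, a) in incidence for a in attrs))

def formal_concepts(objs_all, attrs_all, incidence):
    """All formal concepts (extent A, intent B) with A' = B and B' = A.
    Brute force over attribute subsets; suitable for toy contexts only."""
    concepts = set()
    attrs = sorted(attrs_all)
    for r in range(len(attrs) + 1):
        for B in combinations(attrs, r):
            A = derive_attrs(frozenset(B), incidence, objs_all)
            concepts.add((A, derive_objects(A, incidence, attrs_all)))
    return concepts

# Toy context: which object (animal) satisfies which attribute
objs = {"cat", "carp", "bee"}
atts = {"legs", "swims"}
I = {("cat", "legs"), ("bee", "legs"), ("carp", "swims")}
concepts = formal_concepts(objs, atts, I)
```

In the fuzzy setting described above, the crisp membership test `(o, a) in incidence` is replaced by a truth degree in [0, 1], and the universal quantifier hidden in `all(...)` is replaced by an intermediate quantifier such as “almost all” or “most”.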

Data privacy provides definitions of what privacy is, as well as methods to protect data and metrics to evaluate how much disclosure takes place for a given data release.

In this talk we will describe the usual workflow for building a data release. This consists of anonymizing or masking the data (i.e., applying a data protection mechanism), evaluating its utility, and analyzing its risk. A good masking method is one that achieves a good trade-off between risk and utility.

Then, we will show the use of fuzzy clustering in the process of data masking, and the use of fuzzy measures and metric learning to evaluate to what extent the masked data are safe and avoid identity disclosure.
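The workflow above can be made concrete with a crisp stand-in for the clustering-based masking step: univariate microaggregation (publish block means instead of raw values), together with simple utility-loss and linkage-risk proxies. The fuzzy variants of the talk are not reproduced here; all names and the data are illustrative.

```python
import numpy as np

def microaggregate(values, k=3):
    """Univariate microaggregation: sort, group into blocks of size k,
    and replace each value by its block mean (a simple masking
    mechanism; a proper method would also merge an undersized last block)."""
    order = np.argsort(values)
    masked = np.empty_like(values, dtype=float)
    for start in range(0, len(values), k):
        block = order[start:start + k]
        masked[block] = values[block].mean()
    return masked

def utility_loss(original, masked):
    """Information loss proxy: mean squared difference."""
    return float(np.mean((original - masked) ** 2))

def linkage_risk(original, masked):
    """Fraction of records whose nearest masked value is their own record
    (a crude identity-disclosure / record-linkage proxy)."""
    hits = 0
    for i, x in enumerate(original):
        hits += int(np.argmin(np.abs(masked - x))) == i
    return hits / len(original)

vals = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
masked = microaggregate(vals, k=3)
```

Increasing `k` lowers the linkage risk but raises the information loss, which is exactly the risk–utility trade-off a good masking method has to balance.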

In the context of integration procedures, the standard additivity of set functions was already found to be rather restrictive at the beginning of the 20th century. This limitation prompted researchers to explore set functions with more flexible forms of additivity, leading to the development of fuzzy measures: monotone set functions that need not satisfy any additivity property. The fuzzy integral, an integral built with respect to fuzzy measures, extends the classical integral, offering a more flexible representation of uncertainty. This extension provides a systematic and mathematically rigorous framework for modeling and managing uncertainty.
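As a concrete instance, the discrete Choquet integral with respect to a fuzzy measure can be computed in a few lines. The two-criteria measure below is a made-up example in which the weights of the singletons do not sum to the weight of the whole set, precisely the non-additive behaviour classical integrals cannot express.

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` (dict: criterion -> score)
    with respect to the fuzzy measure `mu` (dict: frozenset of criteria
    -> weight), where mu is monotone, mu(empty) = 0, mu(all) = 1."""
    total, prev = 0.0, 0.0
    remaining = set(values)
    for crit, v in sorted(values.items(), key=lambda kv: kv[1]):
        total += (v - prev) * mu[frozenset(remaining)]  # layer-cake sum
        prev = v
        remaining.discard(crit)
    return total

# Hypothetical non-additive measure: mu({a}) + mu({b}) != mu({a, b})
mu = {frozenset(): 0.0, frozenset({"a"}): 0.5,
      frozenset({"b"}): 0.6, frozenset({"a", "b"}): 1.0}
score = choquet_integral({"a": 0.4, "b": 0.8}, mu)
```

When `mu` happens to be additive, the same formula collapses to an ordinary weighted average, which is how the fuzzy integral extends the classical one.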

One effective strategy for addressing imprecision in data aggregation involves using intervals, where the width of the interval reflects the uncertainty associated with each object. The use of intervals has proven to be a suitable approach for handling imprecision, leading to significant efforts in developing mechanisms to fuse information in the interval-valued setting. However, a notable challenge in the aggregation of intervals lies in the absence of a natural, intuitive total order, particularly for functions in which a total order is an essential component, such as fuzzy integrals.
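One standard workaround is to fix an admissible total order, i.e., a total order refining the component-wise partial order on intervals. A sketch of one such order, in the style of Xu and Yager (compare by midpoint, break ties by width); the interval data are illustrative:

```python
def xu_yager_key(iv):
    """Sort key inducing an admissible total order in the style of
    Xu and Yager: compare intervals by the sum of their endpoints
    (i.e., by midpoint), breaking ties by width.  It refines the
    component-wise partial order [a, b] <= [c, d] iff a <= c and b <= d."""
    lo, hi = iv
    return (lo + hi, hi - lo)

intervals = [(0.2, 0.8), (0.4, 0.6), (0.1, 0.5)]
ranked = sorted(intervals, key=xu_yager_key)
```

Note that (0.2, 0.8) and (0.4, 0.6) are incomparable under the component-wise order, yet the admissible order still ranks them, which is exactly what an interval-valued fuzzy integral needs in its sorting step.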

This presentation explores various approaches to define and handle fuzzy integrals within the interval-valued setting, offering insights into how these methods address the challenges posed by imprecision and uncertainty in data aggregation.

Fusion, or equivalently aggregation, of information in its broad sense is one of the main steps in almost any data processing system. Its principal objective is to find a representative value that summarizes all the given data. Within the field of fuzzy logic, one of the key concepts for information fusion is the notion of aggregation function.

In recent years, numerous studies have shown that the defining properties of aggregation functions can be very restrictive. Indeed, there are various examples of functions that, even though they fail to be aggregation functions, lead to better results in various applications. In this talk, we will present some of the main generalizations of aggregation functions along with their applications.
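The two defining conditions in question (boundary conditions on [0, 1]^n and monotonicity in each argument) can be checked numerically on a sampled grid. This is a sketch of a necessary, not sufficient, test with made-up candidate functions; the generalizations discussed in the talk relax exactly the monotonicity part of such a check.

```python
import itertools

def is_aggregation_function(F, n=2, steps=5, tol=1e-9):
    """Numerically test whether F: [0,1]^n -> [0,1] behaves like an
    aggregation function on a grid: F(0,...,0) = 0, F(1,...,1) = 1,
    and F is increasing in each coordinate (a sampled, necessary test)."""
    grid = [i / (steps - 1) for i in range(steps)]
    if abs(F(*([0.0] * n))) > tol or abs(F(*([1.0] * n)) - 1.0) > tol:
        return False                      # boundary conditions fail
    for x in itertools.product(grid, repeat=n):
        fx = F(*x)
        for k in range(n):                # bump one coordinate upward
            for g in grid:
                if g > x[k]:
                    y = list(x)
                    y[k] = g
                    if F(*y) < fx - tol:  # monotonicity violated
                        return False
    return True

mean = lambda x, y: (x + y) / 2   # a genuine aggregation function
gap = lambda x, y: abs(x - y)     # fails: gap(1, 1) = 0, not 1
```

Functions like `gap` are the kind of example alluded to above: useless as aggregation functions in the strict sense, yet variants of them turn out to be valuable once the defining properties are relaxed.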

