Investigating the concept of ‘computing with words’, a branch of fuzzy logic for handling imprecision in human-machine interaction.

Fuzzy logic: Improving interfaces with computing with words

 

Technology has become increasingly responsive to human behaviour, with AI tools and machine learning in high demand in modern society. The unpredictability and vagueness of human perception, however, are incredibly hard to capture, and a persistent gap in communication remains between biological and technological systems.

 

Moreno Colombo and colleagues at the Human-IST Institute in Fribourg, Switzerland, in conjunction with the FMsquare Foundation, have investigated the concept of ‘computing with words’, a branch of ‘fuzzy logic’ with wide and varied applications, as a way of handling imprecision in the interaction between humans and machines.

Read the original research: www.springer.com/series/11223 

Image Source: Deemerwha Studio / Shutterstock

 

Transcript:

Hello and welcome to Research Pod. Thank you for listening and joining us today.

In this episode, we look at the work of Moreno Colombo, senior researcher at the Human-IST Institute of the University of Fribourg, Switzerland, on the application of ‘fuzzy logic’ in the development of natural human-computer interfaces that better adapt to the user’s needs. Fuzzy logic is an extension of classical logic that accounts for the vagueness of human reasoning by allowing propositions to be partially true or false, making it an excellent tool for handling imprecision in the interaction between humans and machines, with wide and varied applications.

 

Human beings are social animals, meaning they need to interact with others in their everyday lives. An interaction can be defined as an exchange of information between two or more actors, and different types of interaction exist depending on the nature of the actors involved.

 

The interaction between biological systems refers to how different natural actors pursue a common goal. For example, how people interact with the environment around them or with others in conversation, using both words and body language. This type of exchange is characterised by the flexibility and adaptivity of actors to one another.

 

In contrast, however, is the interaction between machines. This represents the exchange of information between computer hardware or pieces of software and is based on rigid protocols that give meaning to artificial representations of information. All the machine’s components must understand these protocols in advance for the correct exchange of information, leaving no room for flexibility.

 

Finally, mixed interaction refers to exchanges between biological and artificial agents. A common example of this is human-computer interaction. Creating mixed interfaces that are useful, safe, usable, ethical, and functional requires consideration of the user’s needs and preferences, by using methodologies and design principles derived from human-to-human interaction.

 

However, traditional mixed interfaces often lack the naturalness of the way people interact with each other. This is because most interfaces are designed for the average user and shaped by the computer’s capabilities, rather than by the natural way in which humans communicate and interact with one another.

 

For example, typing on a keyboard or clicking a mouse is not the most natural way for people to communicate, and can be difficult and frustrating for many users. In contrast, natural interfaces that use speech recognition or gesture-based controls are much more intuitive and user-friendly, as they are closer to the way humans naturally interact with one another.

 

Research has shown that natural interfaces can improve user experience and increase user satisfaction with technology. What’s more, natural interfaces have the potential to improve accessibility for users with disabilities or limited mobility. For example, speech recognition technology can allow users with motor impairments to interact with technology using their voice, while gesture-based controls can assist users with visual impairments.

 

In other words, in most traditional interfaces labelled as natural, the adaptivity still lies mostly on the side of the human user, who has to adapt to the protocols that the machine understands, rather than the reciprocal adaptation between parties that happens in nature.

 

However, using a modality that is more natural for people (voice, for example) is not by itself enough to obtain a natural interface. When conversing with a virtual assistant such as Apple’s Siri or Amazon’s Alexa, for example, the conversation is artificial and based on protocols, despite using a modality that is natural for most people. In fact, a virtual assistant’s ability to respond to conversational prompts or directions is limited to a specific set of terms, despite the many ways of making the same request that other humans would naturally understand. The difference between ‘Will it rain today?’ and ‘Do you think I need an umbrella?’ is only slight in terms of context, but the two are worlds apart to a computational process.
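
To make that gap concrete, here is a minimal, hypothetical sketch in Python of how a protocol-bound assistant might map utterances to intents: an exact phrase lookup recognises only the phrasings it was programmed with and fails on an equally clear paraphrase. The intent names and phrases are illustrative, not taken from any real assistant.

```python
# Minimal sketch of a protocol-bound assistant (hypothetical, not a real
# assistant's API): intents are recognised only via the exact phrases the
# designer anticipated.
INTENT_PHRASES = {
    "weather_forecast": {"will it rain today", "what is the weather today"},
}

def match_intent(utterance: str):
    """Return the intent whose phrase set contains the utterance verbatim, else None."""
    normalised = utterance.lower().strip("?!. ")
    for intent, phrases in INTENT_PHRASES.items():
        if normalised in phrases:
            return intent
    return None  # anything outside the protocol is simply not understood

print(match_intent("Will it rain today?"))               # -> weather_forecast
print(match_intent("Do you think I need an umbrella?"))  # -> None
```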

 

There are many potential benefits to changing this paradigm to meet the user’s needs and desires, in keeping with human unpredictability. Indeed, automatic adaptation to each user’s way of communicating leads not only to a more personalised user experience, but also to accessibility by design.

 

Accessibility features for users with specific requirements around written text, sounds, or physical interfaces could be handled automatically by the system, for instance by allowing a different modality from the one the interface was designed for. This would also improve the intrinsic robustness of the interaction, with small errors or imprecisions tolerated and understood by the artificial system. Examples of such accommodations range from text-to-speech and subtitling to instant translation between spoken and signed languages.

 

These emerging benefits would make interaction with machines more natural, better emulating human conversation, which is not necessarily based on words. When conversing with someone who doesn’t share their language, people adapt to understand the other person’s way of communicating by finding common ground, or parallelism, between the cadence or tone of what is said and the semantics of the gestures used by the other. Think of tourists asking for directions, or of interacting with babies: while some barriers in communication persist, humans are great at finding ways around them. Imagine if computers could do the same.

 

The definition and development of techniques allowing computers to break these barriers is the main goal of ‘phenotropic interaction’, literally ‘the interaction of surfaces’, where a ‘surface’ is any point of contact between two objects on which pattern recognition can be applied to extract the meaning of the shared data, instead of relying on hard protocols.
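
As a loose illustration (not Colombo’s actual method), the earlier protocol-bound sketch can be replaced by a matcher that applies a crude form of pattern recognition to the ‘surface’ of the utterance: it scores how closely the input resembles example phrases for each known intent and accepts the best match above a tolerance threshold. Word overlap stands in here for more capable pattern-recognition techniques, and all names and values are illustrative.

```python
# Sketch of pattern-based matching on the "surface" of an utterance:
# instead of demanding an exact phrase, score its resemblance to example
# phrases for each intent and accept the closest match above a threshold.
# Word overlap is a deliberately crude stand-in for real pattern recognition.
INTENT_EXAMPLES = {
    "weather_forecast": ["will it rain today", "do I need an umbrella",
                         "what is the weather like"],
    "set_timer": ["set a timer", "wake me up in ten minutes"],
}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity between the word sets of two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def match_intent(utterance: str, threshold: float = 0.3):
    """Return the best-matching intent, tolerating paraphrase and small errors."""
    cleaned = utterance.lower().strip("?!. ")
    best_intent, best_score = None, 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        score = max(similarity(cleaned, ex) for ex in examples)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(match_intent("Do you think I need an umbrella?"))  # -> weather_forecast
```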

 

The term ‘surface’ was borrowed from human vision: the retina is the surface that allows us to interact with the outside world by finding patterns in what we see, a technique that also works when data is missing, such as at the blind spot where the optic nerve meets the retina.

 

In other words, phenotropic interaction studies the main components that should be considered when designing an adaptive human-machine interface. The design principles for such an interactive system include the importance of not relying on strict protocols like the language limitations of virtual assistants described earlier.

 

There is also the need to work with approximate data, such as human perceptions (for example, ‘hot’ weather or a ‘nice’ person); to include techniques that make the interaction robust so that accessibility needs are met; to allow the interface to improve over time by ‘learning’ from users’ feedback; and to handle several modalities, such as sound, images, and text.

 

Including these design principles in the development of natural interfaces requires a fundamental theory to be considered: Computing with Words and Perceptions. This branch of ‘fuzzy logic’ aims to perform automated computation and reasoning on human concepts such as natural language and perceptions.

 

For computers, whose language for any type of operation is mathematics, it is challenging to handle perceptions and natural language correctly. The computing-with-words pipeline makes this possible by translating perceptions and natural language into mathematical objects, on which the computer can then perform computations to solve problems originally expressed in natural language.
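
As a miniature illustration of that pipeline (with made-up term definitions, not values from Colombo’s work), the sketch below translates a word such as ‘hot’ into a fuzzy set over degrees Celsius, performs an ordinary numeric computation, and then translates the result back into the best-fitting word.

```python
# Miniature computing-with-words pipeline (illustrative values only):
# 1) translate a word into a fuzzy set (a membership function over numbers),
# 2) compute on numbers, 3) translate the result back into the closest word.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Linguistic variable "outdoor temperature" with three fuzzy terms (assumed shapes).
TERMS = {
    "cold": lambda t: trapezoid(t, -41, -40, 5, 15),
    "mild": lambda t: trapezoid(t, 5, 15, 22, 28),
    "hot":  lambda t: trapezoid(t, 22, 28, 60, 61),
}

def best_word(temperature_c: float) -> str:
    """Translate a number back into the term with the highest membership."""
    return max(TERMS, key=lambda term: TERMS[term](temperature_c))

yesterday = 30.0                 # a reading we take to be fully 'hot'
print(TERMS["hot"](yesterday))   # degree to which 30 degrees is 'hot' -> 1.0
print(best_word(yesterday - 4))  # 26 degrees is still best described as 'hot'
print(best_word(18))             # 18 degrees comes out as 'mild'
```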

 

Expanding a computer’s ability to process the vagueness and subjectivity intrinsic to human perception and communication, a product of the distinctly human way in which we sense our environment, requires that the mathematical objects used to represent the semantics of words and perceptions draw on fuzzy logic.

 

Indeed, fuzzy logic is a mathematical theory in which imprecise information is tolerated and can be accurately described by allowing propositions to be partially true or false. For example, a newborn baby may legitimately be described as ‘young’, and much more so than their older parents; yet from the perspective of the grandparents, parents and baby alike may still be described as ‘young’.
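
A minimal sketch of the same idea, with purely illustrative ages and curve: ‘young’ becomes a degree of truth between 0 and 1 rather than a yes/no property, and the curve itself could be shifted depending on whose perspective is taken.

```python
# 'young' as a matter of degree (illustrative membership function):
# fully true up to age 25, fully false from age 70, linear in between.
def young(age: float) -> float:
    return max(0.0, min(1.0, (70 - age) / (70 - 25)))

for person, age in [("baby", 0), ("parent", 35), ("grandparent", 68)]:
    print(f"{person:11s} age {age:2d}: young to degree {young(age):.2f}")
```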

 

The technique of computing with words can give machines a better understanding of human desires, perceptions, and views of the world. Moreover, it allows a machine to understand and adapt to a person’s way of communicating, be it a spoken language, body language, or gestures.

 

The first user experiments with phenotropic interfaces built with computing with words were carried out in 2022 by Moreno Colombo and colleagues at the Human-IST Institute in Fribourg, Switzerland, in the context of virtual assistants. The team compared these interfaces with traditional implementations of virtual assistants and reported a significantly improved user experience. Interaction with the phenotropic interfaces was perceived as more natural and human-like; getting a useful interaction out of the platform took less work, caused less frustration, and offered greater flexibility than interaction with the classical interfaces.

 

These promising preliminary results show the potential of the design principles of phenotropic interfaces, and of the application of fuzzy logic, to improve the user experience of people interacting with technology. They give a first direction to follow for the future development of better interfaces, ones that can make technology more adaptive to the user’s needs and more accessible thanks to methods from the theory of computing with words and perceptions.

 

This improvement, providing a deeper understanding of the user and adaptation of the interaction to them, is fundamental for better integrating the human being into an invisible network of intelligence. Indeed, as opposed to the most recent trends in machine learning, phenotropic interfaces do not pit natural intelligence against artificial intelligence, but rather bring the two into a holistic symbiosis.

 

Colombo’s development of criteria for intelligent, adaptive interfaces resulted in a set of design principles to guide further development of such interfaces. Most of these principles can be implemented effectively using concepts from the theory of computing with words and perceptions, which handles natural language and human perceptions mathematically through fuzzy sets, as demonstrated in the user study on conversational interfaces mentioned above.

 

That’s all for this episode – thanks for listening. Be sure to check out the links in the show notes below to Colombo’s thesis, soon to be published in Springer’s Fuzzy Management Methods series, as well as to the Human-IST Institute and the FMsquare Foundation. And, as always, stay subscribed to Research Pod for all the latest science!

 

See you again soon.
