
Humans consider computers to be just that: machines, without any kind of social role. And yet, when a machine fails to respond according to our social principles, we get irritated.

This rationale, grounded in Clifford Nass’s research since the 1990s, cannot be overlooked. One of my main methods for practicing, teaching, and living as a professional in this field is to constantly imagine and metaphorize the real world in the digital one. The social relationship we establish with the human beings around us tends to be polite, kind, pleasant, and full of voluntary and involuntary feedback that helps decision-making and expression. A few days ago I heard a teammate say something excellent to another: “The system must perform better, because it’s horrible… have you ever imagined talking to me and getting no reaction on my part for 30 seconds?”. By the way, thanks for that, Miguel.
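That remark points to a simple design rule: a system should react right away, even when the real work takes time. Here is a minimal sketch of that idea in TypeScript. All names, messages, and timings (`runLongTask`, `notify`, the 5-second reassurance) are hypothetical illustrations, not taken from any specific product:

```typescript
// Minimal sketch (hypothetical names and timings): acknowledge the user
// immediately and keep them informed, instead of going silent for 30 seconds.

async function runLongTask(
  task: () => Promise<string>,
  notify: (message: string) => void
): Promise<string> {
  // Immediate social acknowledgement: the system "reacts" right away.
  notify("Got it, working on that…");

  // If the task runs long, reassure the user rather than staying mute.
  const reassurance = setTimeout(
    () => notify("Still working, thanks for your patience…"),
    5000
  );

  try {
    const result = await task();
    notify("Done.");
    return result;
  } finally {
    clearTimeout(reassurance);
  }
}

// Usage: the interface "speaks" even though the work takes 8 seconds.
runLongTask(
  () => new Promise<string>((resolve) => setTimeout(() => resolve("report.pdf"), 8000)),
  (message) => console.log(message) // in a real UI this would update a status element
);
```

The point is not the timings themselves but the conversational posture: every user action gets an immediate reaction, the same way a polite human would respond.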

When I said, at the beginning of the previous paragraph, that we cannot overlook this well-founded research, I meant that we have to take it into account not only in the performance and metaphors of our interaction patterns but also during crucial processes such as usability testing. The study also makes clear that, although the Human does not consider the machine a social being, they constantly want it to behave as one. In other words, Clifford Nass found that “they (the Users) acted more politely in front of the computer that had been theirs.” In Nass’s experiments, users were consistently more critical when they were not using their own personal or professional computers.

It is fascinating to consider that the machine/computer running our digital product is often the layer of intermediation between the user’s will and the execution of an action on the product. This layer is supported by artifacts like buttons, keyboards, and a panoply of hardware replicated in thousands of copies with the same characteristics, dispersed throughout the world. Thus, this conscientious and polite relationship of “User vs. my computer/machine” is deposited, in part, in the intermediary and not in the digital product/software itself. How do we deal with this? My sincere opinion… we don’t. We leave the intermediary responsible for transposing the user’s will to the digital product. We should embrace this intimacy between layers and benefit from it. How? For example, when we run usability tests, we can use this knowledge to hold sessions where the prototype runs, whenever possible, on the user’s own computer/machine. This way we absorb more transparent feedback, undistorted by the technological constraints and frictions of an intermediary that has no intimacy with the user. There is nothing better than running contextual usability tests on the spot, in the user’s usual place, always with his or her own work materials: computer, machine, and any other aids.

Machines do not speak for themselves, with free will, but they do build a relationship with us and sometimes make us feel good or bad. With all of this, we drag along a social principle essential to every human being: trust. As a UX designer, I focus daily on earning my users’ trust through good practices of personal empathy and, of course, by transposing their needs and solving their daily problems. Trust is first built between me and my users, then between my users and our software/digital product, benefiting, of course, from the trust they already place in their computers for executing their tasks. These are three levels of trust, all essential to creating and supporting a good digital product. If one of these levels slips, there may be a problem, even if it is the trust between the user and their computer/machine. How many of you have lost confidence in a machine? In a car, a mobile phone, or even a computer? When that happens, we focus our attention on that deficient level instead of on the software we are using.

After ensuring this link of trust between the user and “my computer/machine”, there is another mountain to climb: the trust relationship between the user and the software/digital product. We live in times where smart interfaces are quickly replacing closed-circuit software, that is, software that does not evolve on its own. This does not mean that every interface created today without artificial intelligence is deprecated or out of its time. But it is imperative that our users feel comfortable with, and place their trust in, interfaces even when they have no AI. This paradigm drags in concepts from intelligent interfaces that must be taken into account when we create a digital product.

Foremost, we must take into consideration that we are no longer in the age of making digital products “à la Greek Oracle.” The term, coined by Clancey and Shortliffe (1984) and later used by Miller (1994), describes a process in which the machine is the expert and guides the user to an action. The “Greek Oracle” approach has clear weaknesses. One of the main ones appears when the system faces unanticipated variability and, for the first time, tosses the hot potato into the user’s hands (who, as expected, blocks completely). Needless to say, creating interfaces with this approach nowadays is very risky. Beyond the obvious AI-equipped interfaces, we must create interfaces as elements of “Cooperative Systems” to increase trust with our decision-maker, our user. Once again we face an approach built on a metaphor, yes, one more. This metaphor treats the system as a team player, and that becomes essential to producing interfaces with an excellent level of confidence for both parties.
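To make the contrast concrete, here is a minimal sketch in TypeScript, with entirely hypothetical names, messages, and thresholds, of how an “oracle” interface and a “team player” interface might respond at the same decision point:

```typescript
// Minimal sketch; hypothetical names and thresholds throughout.

interface Suggestion {
  action: string;
  confidence: number; // the system's own estimate, 0..1
  rationale: string;  // why the system suggests this action
}

// "Greek Oracle" style: the system decides alone and, when it cannot,
// abruptly dumps the decision on the user with no context at all.
function oracleRespond(s: Suggestion): string {
  if (s.confidence > 0.9) {
    return `Executing: ${s.action}`; // the user never sees the reasoning
  }
  return "Unable to proceed. Please decide."; // the hot potato
}

// "Team player" style: the system always shares its suggestion, rationale,
// and confidence, keeping the user in the role of decision-maker.
function teamPlayerRespond(s: Suggestion, alternatives: string[]): string {
  const base =
    `I suggest "${s.action}" because ${s.rationale} ` +
    `(confidence: ${Math.round(s.confidence * 100)}%).`;
  if (s.confidence > 0.9) {
    return `${base} I will proceed unless you choose otherwise.`;
  }
  return `${base} I am less sure here. Other options: ${alternatives.join(", ")}.`;
}

// Usage: the same uncertain situation, two very different conversations.
const s: Suggestion = {
  action: "archive duplicate files",
  confidence: 0.6,
  rationale: "27 files have identical checksums",
};
console.log(oracleRespond(s));
console.log(teamPlayerRespond(s, ["review files first", "skip this folder"]));
```

The design difference is that the cooperative version always exposes its rationale and its own uncertainty, so the user stays in the loop well before the system runs out of answers.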

In the upcoming article, I will explore the types and approaches of “Systems as Team Players” that we should know about during the UX strategy phase.

And, of course, a GIF that captures the confidence we place in our dear personal computer 😀

JL

Bibliography

  • Kuang, Cliff. User Friendly: How the Hidden Rules of Design Are Changing the Way We Live, Work, and Play. 2020.
  • Reeves, Byron, and Clifford Nass. “The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.” Computers & Mathematics with Applications, vol. 33, no. 5, Mar. 1997, p. 128, doi:10.1016/s0898-1221(97)82929-x.
  • Nass, Clifford. “The Man Who Lied to His Laptop: What Machines Teach Us about Human Relationships.” Choice Reviews Online, vol. 48, no. 07, 1 Mar. 2011, pp. 48-3960, doi:10.5860/choice.48-3960. Accessed 18 July 2020.

João Lima

→ UX Design Guru at Critical TechWorks - BMW Group → uiux.pt Founder → UX Teacher
