Chat Feedback

While I was working for one of the Big 5 banks in South Africa, my design lead tasked me with creating a Net Promoter Score rating and feedback system for the chat feature on the bank's app and online banking portal. The chat feature had existed for a while, but there was no official way for clients to provide feedback on it. At the same time, the bank was not getting accurate feedback from clients in-branch.


Research

I started by gathering the existing working files for the chat designs. Creating a feedback system is simple enough, but I wanted to do it right. While discussing the rating system already used in the branches with a stakeholder, I learned that the feedback it produced was a little garbled: clients would select a bad score (1) and comment that the banker's assistance was outstanding, or they would choose a rating of 5 and complain about how horrible their in-branch experience was.

This meant only one thing: it was not clear whether a score of 1 was good (as in, this person is number one) or bad (as in 1 out of 5 stars). I searched the web for best-practice papers on the topic and found wonderful resources on tests that had been done previously. However, we needed to conduct our own usability test to validate or invalidate my desk research. The results were surprising: 100% of participants understood that 1 is a bad score and 5 is a good score. This baffled me, but unfortunately, I could not spend any more time on usability tests and had to come up with a design that would satisfy all user types.

While the results on the 1–5 score were a bust, the usability test did validate our assumption that the two questions on the form could be misunderstood. The testing also provided insights into when clients would take the time to write a review after selecting a score, and why.


Concepts

I created three concepts: emoticons, stars, and thumbs up/down.

While star ratings are relatively common, the emoticon design was the most beneficial for this use case and the least likely to be misread. And because we needed a score out of five, a binary thumbs up/down rating would not suffice.


Design

We decided to add colour to the emoticons on hover to further clarify their meaning. Since we were designing for a Western market, where red generally means bad and green means good, I created a scale from red through orange to green, using colours we already had in our design system.
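As a rough illustration, the hover behaviour can be modelled as a simple mapping from each score to a colour. This is a minimal sketch: the hex values and labels below are placeholders, not the bank's actual design-system tokens.

```typescript
// Hover colours for the five emoticon ratings.
// NOTE: the hex values are illustrative placeholders,
// not the bank's actual design-system colours.
type Rating = 1 | 2 | 3 | 4 | 5;

const hoverColours: Record<Rating, string> = {
  1: "#C0392B", // red – very dissatisfied
  2: "#E67E22", // orange – dissatisfied
  3: "#F1C40F", // amber – neutral
  4: "#7DCB4B", // light green – satisfied
  5: "#27AE60", // green – very satisfied
};

// Applied only on hover, so the neutral (uncoloured) faces stay visible by default.
function onEmoticonHover(icon: HTMLElement, rating: Rating): void {
  icon.style.backgroundColor = hoverColours[rating];
}
```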

Examples of the colour blindness test results

I checked the visual accessibility of these colours by putting them through various colour blindness simulations and used the results to reassure stakeholders that the colours are indeed accessible. However, I am ashamed to admit that, while I knew the contrast of the faces against these colours was not high enough, I stuck with my choices because the faces are visible before hovering over them. As for screen readers, the developers ensured that the tagging of the icons was clear.
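For the screen-reader tagging, a pattern along these lines would do the job; this is a hedged sketch using standard ARIA attributes, and the exact markup and label copy are assumptions on my part rather than the production code.

```typescript
// Sketch of screen-reader labelling for the rating icons.
// Markup and label copy are assumptions, not the bank's production code.
function buildRatingButton(rating: number, label: string): HTMLButtonElement {
  const button = document.createElement("button");
  button.setAttribute("role", "radio"); // part of a 1–5 radiogroup
  button.setAttribute("aria-label", `${label} (${rating} out of 5)`);
  button.dataset.rating = String(rating);
  return button;
}

// buildRatingButton(1, "Very dissatisfied") is announced as
// "Very dissatisfied (1 out of 5)" by a screen reader.
```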

I worked closely with copywriters to reword the questions that our usability-testing respondents had so famously misunderstood, and re-tested the new wording using guerrilla testing, as time constraints did not allow a second formal usability test.

For the comments section, I proposed a radio-selection design with the option for clients to add their own comments. While stakeholders received this well, the developers informed me that they unfortunately could not change the functionality of the feedback form; they could only change the wording of the questions. I therefore stuck with the original design of a free-text comment box. Even so, clients received it well, and the feedback the bank was getting started making sense again.
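To illustrate the proposed (but ultimately unbuilt) radio-selection concept, the submission could have looked something like this; the reason labels and field names are hypothetical.

```typescript
// Shape of the proposed comments step: one predefined reason plus an
// optional free-text comment. The reason labels below are hypothetical;
// the shipped version kept only the free-text comment box.
interface FeedbackSubmission {
  score: 1 | 2 | 3 | 4 | 5;
  reason?: string;  // one of the radio options below
  comment?: string; // optional free-text addition
}

const reasonOptions: string[] = [
  "Waiting time",
  "Quality of the answers",
  "Friendliness of the consultant",
  "Technical problems",
  "Other",
];
```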


Tools

Sketch, InVision, Coblis Color Blindness Simulator