Fresh out of college, I joined the creative intern summer class at one of the biggest ad agencies in the world. My novice self found everything impressive, from the war rooms littered with visual concepts to the cereal machine on the 7th floor. Every Friday, the designers on my team would print out a week’s worth of UI mocks, stick them on whiteboards, debate visual styles, and throw out any mocks deemed subpar. By the end, each board was left with only a few “best options”.
You’re probably thinking, “Sounds like a standard critique. What’s wrong with that?” Nothing – except the method itself.
Contrary to popular thinking, design is more science than art. While art is meant to be interpreted, design is meant to be understood. This fundamental difference means that while art can be critiqued, design should be researched, tested, and iterated on.
Now, don’t get me wrong – it is absolutely important for a design to go through design reviews. At Kickstarter, we walk a feature through three stages of design reviews:
- The Studio: exploration and discovery
- The Critique: narrow down on one path
- The QA: pre-implementation sanity check
Notice how the critique is only one part of the process? Even then, we spend more time dissecting test results than throwing mocks on the floor.
You are not your user
It is so easy to judge a design. The deliverables are visual, and people like to comment on how things look. It is just as easy to assume how things should behave: “The average user will notice this first” or “As a user, I’d rather X than Y.” But assumptions should not be taken as absolutes. You have to validate them, since they are just opinions after all – duh, right? Yet, as much as the UX world likes talking about validating assumptions, we don’t do it enough. And when we do, our methods skew once again towards the qualitative: the usual user interview or cognitive walkthrough. Arguably, it is a lot easier to start talking to people than, well, to query some data.
Quantitative data tells you the what; qualitative data tells you the why – that’s user research 101. Both are equally important in verifying hypotheses, yet I’d argue that one should happen before the other, for ease of process. When you start with quantitative data and do some preliminary slicing and dicing of your users first, you can identify usage disparity across user segments, analyse their characteristics, and evaluate a solution’s impact on each – objectively and with scalability in mind. You can then leverage these high-level findings for qualitative research; for example, by making sure that the interview pool is representative of a wide range of users, not just the ones who scream the loudest. Hence, I find that design reviews are a lot more useful when people suggest what data to collect to validate my decisions, or how to devise a test, rather than offering subjective criticism.
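To make the “slicing and dicing” concrete, here is a minimal sketch in Python. The segment names and numbers are entirely made up for illustration – this is not Kickstarter data – but the shape of the exercise is real: group usage by segment, compare, and let the disparity tell you who to interview.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical usage log: one row per user, with a segment label
# and a count of how often they touched the feature last month.
usage = [
    {"user": "a", "segment": "creator", "feature_uses": 42},
    {"user": "b", "segment": "creator", "feature_uses": 35},
    {"user": "c", "segment": "backer", "feature_uses": 3},
    {"user": "d", "segment": "backer", "feature_uses": 0},
    {"user": "e", "segment": "backer", "feature_uses": 1},
]

# Slice usage by segment to surface disparities worth researching.
by_segment = defaultdict(list)
for row in usage:
    by_segment[row["segment"]].append(row["feature_uses"])

for segment, counts in by_segment.items():
    print(f"{segment}: average uses = {mean(counts):.1f}")
```

A gap like this one (creators using the feature an order of magnitude more than backers) is exactly the kind of high-level finding that should shape who ends up in the interview pool.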
Stop designing for Dribbble
Quantitative data could even be used to inform the most skin-deep UI decisions. As product designers, how many times have you finished a gorgeous design on Figma just to realise that it failed to handle a large volume of user input during implementation? A quick scroll down Dribbble shows myriad eye-catching dashboard graphics. Yes, I call them graphics, not mockups, because they are not representative of real products.
To design real applications is to know their constraints and edge cases (problems that occur at the extreme ends of operating parameters). How do we make 80% of exploratory designs work well and the other 20% not work too badly?
The answer is we need to consider all possible UI scenarios by querying the usage of the product at hand, before even architecting the interface. I recently helped Kickstarter ship Add-ons, a feature that allows creators to offer optional “add-on” rewards to backers. Some of the very first questions I asked were: What is the maximum/average number of rewards a creator has, sliced by categories and funding tiers? How many items on average/at maximum does a reward have? What percentage of projects itemise? Then by doing some data modelling, I got a pretty good picture of the UI’s capacity and what edge cases it should cover.
When an engineer is working with you, there is nothing more annoying than their constantly having to ask for new mocks because your original design could not handle the 20% case. The only way to get better at this is to scope the UI early on with quantitative methods, so that it covers both the common cases and the outliers.
Observe, collect, draw!
I once attended a talk by famed Italian information designer Giorgia Lupi. To my surprise, she always starts visualizing data by hand regardless of the volume, be it 50 or 50,000 data points. Lupi co-authored the workbook Observe, Collect, Draw!, in which she designed various exercises to inquire, collect, and categorize raw data. If you are a designer looking to get into the world of data, this is how you should start: simply by asking as many “how many,” “how much,” and “what’s the percentage of” questions as possible. When you’re presented with a data set, scroll through and eyeball it. Get a feel for the raw data; your brain can derive a lot from it.
If your organization has not already, start using a data discovery tool like Looker or Metabase. Study your application’s data architecture and practice running queries using purely the graphical user interface (GUI). Tools like Looker vastly lower the barriers to entry for data querying and analysis with their GUIs, which means the harder task is navigating the unique data structure of your application. I suggest reading through the MDN Web Docs to gain a basic understanding of arrays, objects, and data types. These are the building blocks of an application – understand them and you will unlock a whole new level of product design.
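To see why those building blocks matter, consider what a query result actually looks like. The sketch below uses Python and a hypothetical two-project dataset, but the shapes are universal: an array (list) of objects (dicts), each field carrying a specific data type.

```python
# A query result typically comes back as an array of objects,
# each field with its own data type: string, integer, boolean.
projects = [
    {"name": "Card Game", "category": "games", "rewards": 8, "funded": True},
    {"name": "Art Book", "category": "art", "rewards": 3, "funded": False},
]

# Knowing the types tells you which operations make sense:
# filter on a boolean flag, group on a string, sum an integer.
funded_games = [p["name"] for p in projects
                if p["funded"] and p["category"] == "games"]
print(funded_games)  # → ['Card Game']
```

Once you can read a result set this way, the GUI of a tool like Looker stops feeling like magic: every filter, pivot, and aggregate is just an operation on these same structures.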