Enabling Early Identification of High ESG (Environmental, Social & Governance) Risk in Supply Chains
I led product design for the SupplyShift sustainability platform, transforming supply chain risk assessment with a streamlined workflow that identifies high-risk suppliers in minutes instead of weeks. This work let users act quickly on critical insights, reducing the initial time to receive risk information by 99%, flagging an average of 16% of suppliers as high risk, and increasing user engagement by 15%.
Company
SupplyShift & Sphera
Scope
New feature
My Role
Lead Product Designer
Team
1 PM, 4 engineers, and Implementation & CS
For this project, I made sure to take enough time to interview the stakeholders involved so that I could see the bigger picture from every vantage point. I also wanted to interview members of the implementation team, since they work closely with our customers every day; their perspective would help me uncover the problems we were hoping to solve with this project.
I conducted six one-on-one interviews with project stakeholders and implementation team members. These were primarily for research, but they also helped ensure that everyone was aligned on the project direction. Some questions I was hoping to answer were:
Once the interviews were complete, I went back through the recordings to capture notes, main points of concern, and any quotes that might be useful when communicating project direction. To surface common pain points and areas of concern, I created an affinity map of the collected notes.
Common problems that I found were:
After the initial research was completed, I paused to reconnect with my Product Manager so that we could discuss project goals for moving forward. We came up with the following goals to keep in mind:
With the common pain points identified and our goals in sight, I also wrote some How Might We statements to help my product manager and me focus on the most pressing areas.
After we settled on a rough roadmap, I created task and user flows for the experience. I checked in with my product manager and members of the engineering team to confirm that the flows made sense to them and that everything was feasible to build.
I then started working on low-fidelity wireframes in Figma. I had considered several ways of presenting the ratings, so I tried different layouts to see what made sense to my product manager, the engineers, and our stakeholders. We also wanted to plan for future iterations that would let users dig even deeper into the data than the initial ratings.
Ultimately, we decided to move forward with a layout that split the ESG ratings into categories, making it clearer to users where a supplier might struggle or excel. With that in mind, I moved into creating the high-fidelity wireframes so that I could start sharing the work for testing.
Because the engineers were unsure how long it would take a new score to load, I wanted to make sure to include a loading state for a "retrieving" score.
It was important that our users be able to use this data flexibly, so I made sure to include the ability to sort by score in each column.
Being able to filter the scores went without saying, so I made filtering flexible within each column category. To keep the scores understandable, I labeled the score buckets with plain-language labels such as "high risk" and "low risk."
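To ground these interaction details, here is a minimal TypeScript sketch of how a score cell could represent the "retrieving" state and how a numeric score could map into the labeled risk buckets used for filtering. The type names, thresholds, and labels here are hypothetical illustrations, not the actual SupplyShift implementation.

```typescript
// Hypothetical sketch of how a score cell might be modeled in the UI.
// Type names, thresholds, and labels are illustrative only.

type ScoreState =
  | { status: "retrieving" }            // score requested, still loading
  | { status: "ready"; value: number }; // proprietary risk score, e.g. 0-100

type RiskBucket = "Low risk" | "Medium risk" | "High risk";

// Map a numeric score into the labeled buckets used for filtering.
function toRiskBucket(score: number): RiskBucket {
  if (score >= 70) return "High risk";
  if (score >= 40) return "Medium risk";
  return "Low risk";
}

interface SupplierRow {
  name: string;
  esgScore: ScoreState;
}

// Filter a column of suppliers down to a selected bucket,
// skipping rows whose scores are still retrieving.
function filterByBucket(rows: SupplierRow[], bucket: RiskBucket): SupplierRow[] {
  return rows.filter(
    (row) =>
      row.esgScore.status === "ready" &&
      toRiskBucket(row.esgScore.value) === bucket
  );
}
```

In a design like this, a "High risk" filter selection in a column would simply call something like filterByBucket on the currently loaded rows, while cells still in the "retrieving" state keep their loading indicator.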
Because we wanted to move quickly, I conducted three usability tests of the high-fidelity prototype with members of our implementation team. My primary goal was to confirm that the layout made sense and was relatively easy to understand at first glance. The tasks for each test were:
All testers completed the tasks without issue. I did receive feedback that finding more information about the scores was a hassle while that link was hidden in the filter sidebar, so I moved on to some small revisions.
While working on the revision suggested by my testers, I also received a request from my Product Manager to break the design work into smaller pieces for shorter bursts of engineering effort. In the end, this helped me find a new home for the informational modal link: the first phase of SupplyScreen deferred the filtering system to later tickets, which opened up space on the main page for a link that later testers spotted easily.
I also added clarifying tooltips for users who hovered over the "retrieving" status of a score.
Analyzed the proportion of user time spent engaging with the feature relative to their overall activity across the platform.
Evaluated the classification of suppliers identified as high risk based on SupplyShift's proprietary scoring system.
Assessed the decrease in time required for users to obtain high-risk data from their suppliers, comparing the traditional assessment process to the streamlined approach enabled by the new SupplyScreen module.