FreshGrade Next
UX Designer
• 40+ critical research findings • ~2–4 months saved in product-planning time • +100–500% usability success rates for key features
Teachers found it hard to organize and view class content. They also needed Gradebook and lesson planning tools.
We invested in user research and team collaboration to make a scalable new FreshGrade product. Thanks to new processes and a growing team, we could align the product with the needs and expectations of teachers.
• Product Director • UX Design team (Summer 2019) • Sales, Customer Success, and Engineering
The FreshGrade promise(s)
With a five-year-old piece of software, changes in leadership, big clients, and a big sales team, promises get made at an EdTech startup like FreshGrade. These promises were for the coming school year. Thankfully, FreshGrade's market research guided our small product team to the right problems to solve for. We built a next-generation product for schools, reimagining the classic app's solutions with findings from user research conducted across North America.
FreshGrade's key feature was collaborative student portfolios. Students and teachers could share learning, and parents could engage too. With a window into the classroom, parents could actively participate in their child's learning. Teachers could assess student work at a bird's-eye view in their Gradebook.
We wanted to keep all these benefits, get rid of the slow FreshGrade Classic platform, and solve pre-existing problems in the user experience. With a six-month timeline, we needed to build fast.
- Teachers and students needed a way to better organize and view class content. Year over year, content piles up and becomes difficult to find
- Teachers didn't need a one-size-fits-all Gradebook, but instead one tailored to their individual assessment style
- Likewise, teachers needed lesson-planning tools tailored to that same individual assessment style
Here are some things we learned along the way:
We tested early, high-level solutions
At the time I was brought on as a UX designer, FreshGrade had a short-term UX designer concluding a contract and handing over their work. The documentation was high-level and conceptual. Alongside the then head of product, the designer had clearly invested in a great deal of complex modelling, creating app permissions that extended from School Administrator to Teacher to Student and Parent.
When developing a research plan, we decided the first step was to test the early wireframes with 5–7 users across Canada and the United States. We wanted to see how far we were from something clear and usable.
FreshGrade was new to testing, and I was stepping in as the sole UXer. Short on time and resources, we asked salespeople to help recruit and moderate off-site tests. The sales team was enthusiastic. I trained four on the basics of moderation, supplied them with paper prototypes, instructions, and a script. They hosted tests in Texas, Ontario, and BC, three key markets for our new product. I recorded and took notes remotely via video, and jumped in with questions or requests only when necessary.
It was unconventional to ask salespeople to be unbiased, fly-on-the-wall observers when their primary job is to sell! But they were respectful, attentive, and incredibly helpful moderators. Occasionally one posed a leading question, but that's to be expected from anyone new to user research.
From the tests, we found three major patterns.
- Teachers wanted to organize lessons and grades in a few ways: by subject, standard, or category
- Teachers didn't understand how to sort, so we would need to make design improvements
- Much of the language we used was too high-level and confusing to teachers
There were dozens of findings. But the biggest takeaways were the actionable design revisions. We needed to revisit gradebook- and activity-creation flows with our new understanding of how teachers planned to use them. Our biggest challenge would be to make the previously high-level designs clear and usable.
We followed the steps and trusted the process
As our new product manager ramped up, we went back to retrace steps that were skipped previously. With each feature we defined scope through user stories and acceptance criteria. And we documented.
We defined scope in short, collaborative meetings. A technical stakeholder clarified technical feasibility. A UX stakeholder and a customer care person advocated for users and their experience. And a business leader addressed business and market requirements.
Once we reached consensus on user stories, we moved each feature on to user flows. We used flows to describe the path(s) a user would take to complete a task or a portion of a task. We aimed to consult with developers on each flow. Though programmers aren't user experts, they provide incredible feedback in building a technically consistent product.
Clear documentation fed into a clear product
We had to iterate to get the documentation right. We wanted a way to share acceptance criteria (AC), stories, user flows, wireframes, high-fidelity mockups, and prototypes with developers. Meanwhile, we were using Confluence (wiki) for stories, whimsical.io for flows and wireframes, and Invision for mockups. Not to mention our developers had been using Google Docs for their technical documentation. It was a messy start.
Our UX team, now three people, iterated our way to a good system. We kept business requirements on feature-specific wiki pages. Each page linked to one or more user flows and/or wireframes. Under each wireframe were links to mockups, with different states depending on user type, inputs, assessment types, errors, etc.
We tried a new collaborative process
Our design team was new, and our development team was too. We didn't want to make the mistake of creating silos. So design handoffs became conversational meetings, far more effective and inclusive than throwing documentation over a cubicle wall.
My favourite part was working with developers when technical challenges came up. With a tight September deadline, shortcuts were welcome. And no design was set in stone if it meant we could get things out the door on time. We made critical revisions along the path to completion thanks to open and honest communication.
We tested the beta to make refinements before launch
We quietly launched the product's beta, and set out to test the riskiest parts of it. I assembled a plan to test the first-time user experience (FTUE). We wanted to know the obstacles in signing up, setting up, and attempting the first core tasks.
We hosted remote, moderated tests with 5 teachers across subjects and grades. FreshGrade's market had expanded since the first round of testing, so we conducted new tests with users from Alberta, Ontario, BC, New York, and Texas.
I collated and presented 40+ findings using a new tool called Dovetail. Our product team prioritized the findings by business and UX importance. I then came up with design suggestions for each finding, working with our UX Lead to refine them. Later we turned the suggestions into actions, prioritizing them as design and developer tasks in Jira, our company's project-management software.
Fortunately we had the chance to refine some features with test feedback before the launch. There were some big wins with very little work. It's what we had time for, and we had a healthy backlog for post-launch.
We made layout improvements that made it easier for teachers to create classrooms. We improved language across the product so users could better understand features. And we were able to validate many design improvements we made from earlier tests.
After the rush, there was time to thoroughly research what competitors were doing (better late than never)
Once the dust had settled and the product had launched, I had time to do a deep dive into how three leading competitors were trying to solve the same problems we were. If the timeline sounds backwards, it is. It's always best to research competitors at the beginning of a project. But because we had little to no time in the early stages, our glances at competitors had been cursory.
The competitor report was more of a retrospective, with 40+ noteworthy findings to evaluate our positioning. Documenting the process thoroughly with screenshots and content audits, I performed feature-usability tests with each competitor. For example, since we had added a submissions feature, I tried Google Classroom's submissions feature, observing layouts, patterns, and content along the way.
Just as with our own usability tests, I collated the findings, prioritized them, and developed suggestions.
I was happy with what I was able to contribute to FreshGrade's product pipeline: valuable insights, a prioritized backlog for development, and an exciting, usable new product.