UX RESEARCH + INFORMATION ARCHITECTURE + USER TESTING

Financial Services: New Information Architecture + Sitemap

Assignment + Approach

My client sought to rewrite its website content and overhaul its information architecture to improve the user experience and content targeting. Our team proposed and executed several methods to understand the existing structure, content, stakeholder insights and competitive landscape. I wrote the scripts for and conducted interviews with key senior-level stakeholders to gain a firmer understanding of departmental objectives, company culture and the bank’s clients. From this research I identified three key audiences and their content needs. Using these profiles and insights from other discovery activities, I created a sitemap for testing. Key user tasks were developed for each of the three segments, and three separate tree tests were conducted to validate the sitemap by assessing initial clicks, paths taken and task completion rates. The data was then compiled and evaluated, and additional recommendations were made for sitemap improvements. The leadership team was receptive to the recommendations and plans to implement them by the end of 2018, after which point we will continue to monitor site traffic and make adjustments accordingly.

Note: Due to NDA, all references to the client’s name have been redacted to preserve confidentiality.


Audit Activities: Existing Site Structure, Competitive Audit

The existing sitemap comprised audience-, task- and scenario-based entry points, with overlapping content links. This structure made it difficult for visitors to self-segment and to know where to start looking for content, particularly when trying to distinguish Personal vs. Private Banking clients and Commercial vs. Small Business clients. Much of the content, products and services overlapped, but the need states—and messaging—for each audience type were distinct. A competitive audit was conducted to identify trends for distinguishing these user groups through navigation, which factored into the initial sitemap recommendation.

Sample: Excerpt from high-level competitor audit findings that factored into the initial sitemap recommendation.

Sample: Excerpt from deeper-level competitor audit findings that factored into the initial sitemap recommendation.

Stakeholder Interviews

Understanding the personal relationship between the bankers and their clients was crucial to establishing the role of the site. I created scripts for and administered one-hour interviews with ten key senior-level stakeholders. These interviews revealed common client questions, concerns and objectives; departmental objectives and messaging; in-depth details about product and service offerings; and the image the bank currently portrays versus the one it wants to portray. These insights helped us incorporate better-informed content hierarchies into the test sitemap.

Sample: Excerpt from stakeholder interview script.

User Archetypes

The client came to us with outdated, demographic-centric personas. What was needed was a better understanding of existing clients and prospects, and of their basic needs. With no budget to create full-blown user profiles, I used the research gathered up to that point to create high-level archetypes, which we then used to recruit participants for the tree tests.

Sample: Excerpt from archetype document: archetype overview.

Sample: Excerpt from archetype document: archetype detail.

User Testing: Task Script

The test tasks aimed to validate key content groupings, paths and controversial sections for each participant group. Some tasks were co-authored with the client, and the research team ensured that all task questions were carefully worded. Because some of the page content was still unknown to us, we ended up with more potential paths than was optimal for some tasks. This benefited us in the long run, as the results let us determine the most popular and commonly used paths.

Sample: Excerpt from tree test task scripts.

User Testing: Tree Test Results

Overall, the task scores were lower than hoped. Some of the controversial decisions proved to be failures, while others worked well for most users. In the end, it was determined that a number of terms were too broad and some sections were not easily discernible from one another. After compiling the data and visualizing the findings, recommendations were categorized as quick fixes; near-term fixes requiring discussion and consensus; or long-term/future-state fixes to follow a content gap analysis and new content creation.
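To illustrate how tree-test tasks like these are typically scored, the sketch below computes two common metrics per task: success rate (share of participants ending on a correct node) and directness (share of successes with no backtracking). The data, labels and `Attempt` structure are hypothetical, not the client’s actual results.

```python
# Illustrative tree-test scoring. All data below is made up for the example.
from dataclasses import dataclass


@dataclass
class Attempt:
    first_click: str   # top-level label chosen first
    path: list         # full sequence of nodes visited
    correct: bool      # participant ended on a correct destination


def score_task(attempts):
    """Return (success rate, directness) for one task.

    Success rate: fraction of participants who ended at a correct node.
    Directness: fraction of successes that never revisited a node
    (a repeated node in the path indicates backtracking).
    """
    if not attempts:
        return 0.0, 0.0
    successes = [a for a in attempts if a.correct]
    direct = [a for a in successes if len(a.path) == len(set(a.path))]
    success_rate = len(successes) / len(attempts)
    directness = len(direct) / len(successes) if successes else 0.0
    return success_rate, directness


# Hypothetical attempts for a single task.
attempts = [
    Attempt("Personal", ["Personal", "Checking"], True),
    Attempt("Small Business",
            ["Small Business", "Lending", "Small Business", "Checking"], True),
    Attempt("Commercial", ["Commercial", "Treasury"], False),
    Attempt("Personal", ["Personal", "Checking"], True),
]

rate, direct = score_task(attempts)
print(f"success: {rate:.0%}, directness: {direct:.0%}")  # success: 75%, directness: 67%
```

Tasks with low success but high directness point at labeling problems (users confidently chose the wrong branch), while low directness points at structural ambiguity—the distinction that drove the quick-fix vs. long-term categorization above.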

Excerpt: Tree Test Findings document overall executive summary.

Excerpt: Tree Test Findings document section executive summary (what worked).

Excerpt: Tree Test Findings document section executive summary (areas of opportunity).

Excerpt: Task ranking.

Excerpt: Task result detail.

Excerpt: Recommendation related to a task finding.

Excerpt: Quick fixes.

Excerpt: Long-term fixes requiring more consideration and client input.

Conclusion

The test did exactly what it was supposed to do: validate assumptions about content groupings and labeling, and provide insight into how users think about and look for content. While the task scores were not great, the test exposed the shortcomings of using in-house terms—even industry-standard language—to describe products and services. It confirmed the need for an audience-based top-level navigation and showed that users liked both topic- and scenario-based entry points at the second and third levels. Tree test results should also be kept in perspective: in the real world, users have the benefit of UI design, copy and imagery to guide them through an experience. As long as a tree test answers the assumptions being tested—regardless of task scores—it is a worthy test. The client is currently implementing the recommendations and will monitor the effectiveness of the updates.
