My team devised and built Anomaly Detection, a greenfield feature launched as part of a strategic, multi-channel campaign. As one of three new products under the umbrella of Honeycomb Intelligence, an AI-native observability suite for software developers, its unveiling generated early access signups from enterprise teams including Slack, Intercom, HelloFresh, and Duolingo.
Honeycomb's software is property of Hound Technology, Inc. This case study is not sponsored by Honeycomb. The information presented in this case study is derived from social media, blog posts, documentation, press releases, and analyst reports.
The Signals Team shipped an MVP that learns what normal looks like for a service's data and automatically surfaces deviations.
While Honeycomb's value was well understood by the market, my team found that friction in alert creation inhibited the realization of that value. Teams whose primary use case was understanding service health and being alerted about issues were often discouraged or confused about how to get started. Configuring useful alerts was complicated and required a deep understanding of an application's underlying system, as well as its nuanced performance patterns. For this cohort, the learning curve was simply too steep to overcome.
My team recognized a pivotal opportunity to greatly reduce time to value (TTV) by helping teams get started with alerts faster, reduce alert fatigue by highlighting only meaningful deviations, and lower mean time to resolution (MTTR) by enabling investigations with our complementary AI feature, Canvas. Paired with Canvas, users could investigate their anomalies without needing to write the perfect query; they could simply ask a question like, “Why are response times slower for these users?” We believed these improvements would put us in a position to increase adoption for both new and existing teams and win more deals with enterprise organizations that demanded out-of-the-box tooling.
As evidenced by Ramp.com's vendor assessment, Honeycomb's setup contributed to its reputation as a tool best suited for advanced teams.
We envisioned a world where Anomaly Detection eliminated long-standing friction for alert creation. By learning normal service behavior and improving over time, the algorithm would automatically detect true irregularities in signals like error rate and latency, before they impact our customers' customers. The value proposition: reduce false positives and alert fatigue for software developers.
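This case study doesn't describe the detection algorithm itself, so purely as an illustration of the general idea of learning a normal range and flagging deviations, a rolling-baseline sketch might look like the following. This is hypothetical TypeScript, not Honeycomb's implementation; the window size and tolerance are made-up parameters.

```typescript
// Hypothetical illustration only — not Honeycomb's actual algorithm.
// Flags points that fall outside a "normal" band learned from a rolling window.
interface AnomalyPoint {
  index: number;
  value: number;
  expectedLow: number;
  expectedHigh: number;
}

function detectAnomalies(
  series: number[],  // e.g. per-minute error rate or latency for one service
  windowSize = 60,   // how much recent history defines "normal"
  tolerance = 3      // how many standard deviations count as a deviation
): AnomalyPoint[] {
  const anomalies: AnomalyPoint[] = [];

  for (let i = windowSize; i < series.length; i++) {
    const window = series.slice(i - windowSize, i);
    const mean = window.reduce((sum, v) => sum + v, 0) / window.length;
    const variance =
      window.reduce((sum, v) => sum + (v - mean) ** 2, 0) / window.length;
    const stdDev = Math.sqrt(variance);

    const expectedLow = mean - tolerance * stdDev;
    const expectedHigh = mean + tolerance * stdDev;
    const value = series[i];

    // Only sufficiently large deviations are surfaced, which is the lever
    // for reducing false positives and alert fatigue.
    if (value < expectedLow || value > expectedHigh) {
      anomalies.push({ index: i, value, expectedLow, expectedHigh });
    }
  }

  return anomalies;
}
```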
Informed by a proof of concept from our data/UX consultant and technical discovery from our engineering lead, the triad converged on acceptance criteria. We felt confident that this scope sufficiently balanced impact and effort: it would deliver just enough value, facilitate conversations with early access users to learn what else would be needed, and not be overly demanding on our backend.
To understand engagement and effectiveness, we planned to instrument our code to analyze metrics such as service list page loads, service name clicks, notification recipient adds, and 'Run Query' button clicks.
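The actual instrumentation isn't shown in this case study; a minimal sketch of the kind of event tracking listed above might look like the following, where `trackEvent` and the event names are hypothetical stand-ins for whatever analytics client and naming scheme the team actually uses.

```typescript
// Hypothetical sketch of the engagement instrumentation described above.
type AnomalyDetectionEvent =
  | 'service_list_page_loaded'
  | 'service_name_clicked'
  | 'notification_recipient_added'
  | 'run_query_button_clicked';

function trackEvent(
  name: AnomalyDetectionEvent,
  properties: Record<string, string> = {}
): void {
  // In a real app this would forward to the team's analytics pipeline.
  console.log('analytics event', name, properties);
}

// Example call sites:
trackEvent('service_list_page_loaded', { teamId: 'team-123' });
trackEvent('run_query_button_clicked', { serviceName: 'checkout-api', signal: 'error_rate' });
```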
The team discussed and captured technical and UX risks that would help inform future decisions. These risks would remind us which aspects of the experience to check in on throughout our early access phase.
To reduce engineering effort and still provide a delightful experience, I proposed making use of our existing components and patterns wherever possible. This meant upcycling our list view, which was already being used for Triggers and SLOs. The previous quarter, another Product Designer contributed a drawer component to our design library: a panel that slides over the top of a page from the right side of the browser. This panel was a perfect home for our anomaly detail content. During development, engineering ran into an issue where tooltips within the drawer didn't display. I worked with the App Enablement Team, who maintains our design library, to address this at the component level. This small investment allowed tooltips to be used in any future implementations of the drawer component, for all product teams.
In order to indicate on the graph where an anomaly begins, we repurposed functionality and styling from markers: the vertical bars on our Query page that denote deployments. We invested some effort to represent the "normal" range using light gray shading, and to highlight anomalous periods using semi-transparent red shading.
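The underlying chart configuration isn't public, but as a rough illustration, the annotations described above could be modeled as a marker plus two kinds of shaded regions. Everything below (names, fills, values) is a hypothetical sketch, not the production data model.

```typescript
// Hypothetical model for the graph annotations described above.
interface TimeRange {
  start: number; // unix ms
  end: number;   // unix ms
}

interface ValueBand {
  low: number;
  high: number;
  fill: string; // e.g. a light gray for the learned "normal" range
}

interface GraphAnnotations {
  anomalyStartMarkers: number[]; // vertical bars, reusing the deployment-marker styling
  normalRange: ValueBand;        // light gray shading
  anomalousPeriods: TimeRange[]; // semi-transparent red shading
  anomalousFill: string;
}

const exampleAnnotations: GraphAnnotations = {
  anomalyStartMarkers: [1700000400000],
  normalRange: { low: 120, high: 340, fill: 'rgba(0, 0, 0, 0.08)' },
  anomalousPeriods: [{ start: 1700000400000, end: 1700002200000 }],
  anomalousFill: 'rgba(220, 38, 38, 0.15)',
};
```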
For the recipients table, we decided to invest in a new interaction pattern to quickly add, edit, remove, and test a recipient in-line, without leaving the table. Existing instances of recipients in the product required users to configure them in a modal, which we felt would detract from the user's flow. I paired with engineering to define the interaction details, and worked with the App Enablement Team to contribute the new pattern to our design library. This was an investment that allowed other product teams to use the pattern in their own list views moving forward.
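As a sketch only, the in-line row interaction could be modeled as a small state machine: each row is either being viewed, edited, added, or tested, without ever leaving the table for a modal. The types and recipient kinds below are illustrative assumptions, not the design library's actual API.

```typescript
// Hypothetical sketch of the in-line recipient row states described above.
type RecipientType = 'email' | 'slack' | 'webhook' | 'pagerduty';

interface Recipient {
  id: string;
  type: RecipientType;
  target: string; // address, channel, or URL
}

type RowState =
  | { mode: 'viewing'; recipient: Recipient }
  | { mode: 'editing'; draft: Recipient }        // in-line form replaces the row
  | { mode: 'adding'; draft: Omit<Recipient, 'id'> }
  | { mode: 'testing'; recipient: Recipient };   // fires a test notification, shows result in-line

// Entering edit mode copies the recipient into a draft so changes can be
// committed or discarded without touching the saved record.
function startEditing(recipient: Recipient): RowState {
  return { mode: 'editing', draft: { ...recipient } };
}
```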
Sampling of early sketches to explore information hierarchy and graphing for data presence.
The MVP we set out to implement and test with Customer Development Partners.
While our mighty engineering team charged ahead with technical discovery and backend development to get our infrastructure in place, I ran on a parallel path to test the usability and comprehensibility of my designs using a clickable Figma prototype. I partnered with the Director of Product, our Product Manager (PM), and the Customer Success Team to recruit Customer Development Partners (CDPs) for user interviews. My PM and I met with a handful of these external users, as well as some internal users, to assess the usability of my design: Was the information architecture clear? Was it obvious how to navigate through services? Was the information presented in the graph easily understood? What would they want to do next? Was it clear how to take that action? How would this experience stack up against others the participants had used? What was missing?
Alerting users about a drop in data or an absence of data was something we were particularly excited about because of its impact/effort ratio. Knowing whether events are flowing from a service is a valuable signal that can launch an investigation, and engineering estimated a relatively low level of effort to enable this.
An important nuance of data presence is that you can have a lack of data that isn't sustained long enough to be an anomaly. The UX challenge: How do you visually distinguish between an interruption in data that's anomalous and one that isn't? Despite having many data visualization types across the platform, Honeycomb had never needed to graph binary data. As such, it was crucial to represent data presence in an intuitive way, since our explorations would likely become the standard for graphing binary data elsewhere in Honeycomb. To derisk, I worked with our tech lead to understand which data visualizations were supported by our charting library. With this constraint in mind, I mocked a fictitious scenario using three graph styles and posted them in a survey to our customer Slack space. We acknowledged a certain amount of bias from this channel, given many of the responses come from a group of Honeycomb champions, but we felt some signal would increase our confidence. The feedback indicated that a categorical bar chart, where a lack of data was represented by a gap, was most intuitive (Opt A).
I tested chart options for the data presence signal type to derisk our data visualization.
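To make the anomalous-versus-benign-gap distinction concrete, here's a hypothetical sketch of how presence data could be bucketed for that kind of categorical bar chart: each bucket either renders a bar or a gap, and only gaps sustained past a threshold are marked anomalous. The function and parameter names are assumptions, not Honeycomb's code.

```typescript
// Hypothetical sketch of bucketing data presence for a categorical bar chart.
interface PresenceBucket {
  start: number;      // unix ms
  hasData: boolean;   // false renders as a gap in the chart
  anomalous: boolean; // true only when the gap is sustained long enough
}

function toPresenceBuckets(
  eventCounts: number[],          // events received per bucket for one service
  bucketMs: number,
  firstBucketStart: number,
  minAnomalousGapBuckets: number  // how long a gap must last to count as an anomaly
): PresenceBucket[] {
  const buckets = eventCounts.map((count, i) => ({
    start: firstBucketStart + i * bucketMs,
    hasData: count > 0,
    anomalous: false,
  }));

  // Walk the buckets and mark only gaps that are sustained long enough.
  let gapStart = -1;
  for (let i = 0; i <= buckets.length; i++) {
    const inGap = i < buckets.length && !buckets[i].hasData;
    if (inGap && gapStart === -1) gapStart = i;
    if (!inGap && gapStart !== -1) {
      if (i - gapStart >= minAnomalousGapBuckets) {
        for (let j = gapStart; j < i; j++) buckets[j].anomalous = true;
      }
      gapStart = -1;
    }
  }

  return buckets;
}
```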
I brought my research to the Design Team's weekly critique to get internal feedback about the styling and comprehensibility. This ensured my designs were aligned with our design library and consistent with other graphs in the product.
Anomaly Detection was unveiled during a coordinated, multi-channel launch in collaboration with the Product Marketing Team. In person, it was demo'ed at Observability Day San Francisco | What's Next: The Future of Observability in the Age of AI, a live event for software developers sponsored by Amazon AWS. Digitally, it was launched via AI Week, a LinkedIn campaign that raised awareness about each component of our new AI suite, Honeycomb Intelligence. Teams signed up for the early access program in person and by contacting their Customer Success representative.
Product and marketing leaders demo'ed our Anomaly Detection MVP during AI Week, an online campaign that coincided with Observability Day.
Our co-founders and Developer Relations Team demo'ed Anomaly Detection at Observability Day, an AWS-sponsored event in San Francisco that coincided with AI Week.
Initial signals from the early access announcement were positive. Honeycomb users and other DevOps professionals shared encouraging feedback on LinkedIn and in various articles about Honeycomb Intelligence:
The one-two punch of Observability Day and AI Week yielded more sign-ups than we were able to accommodate (a good problem to have!), given the overhead of manually onboarding teams' services during the early access phase. This was certainly not detrimental to the project, but additional usage would have provided more helpful user data about our MVP's capabilities and usability.
Despite the enthusiasm expressed by so many teams during the early access phase, we were surprised to see low and slow adoption over the remainder of the quarter. We believed the delays were a symptom of poor timing: the latter half of Q4 is a challenging time given the holidays and competing end-of-year priorities like OKRs, reporting, and planning.
On the UX side, we realized the drawer component wouldn't scale well if settings and additional signal types were added. Anomaly details would likely need to be migrated to their own dedicated page.