What I’ve Learned as a Product Management Intern at LogDNA (Round 2!)

Madison Gong
Sep 1, 2020 · 10 min read
LogDNA is a log management startup in Silicon Valley.

I’m back for Round 2! This summer, I had the opportunity to return to LogDNA and intern again on their product management team. Although my internship was remote due to COVID-19, the knowledge and personal growth I gained were just as valuable as last year’s, if not more so. In fact, it was a super exciting time to be back at LogDNA. In July, LogDNA announced its Series C investment and new CEO, Tucker Callaway. It’s hard to believe that in just one year, LogDNA has grown from a team of about 60 people to over 100. The product team is no longer a trio of musketeers, but 10 people strong, consisting of product managers, designers, and an analyst!

With the company growing and looking to tackle new goals, my projects this summer involved both “traditional” and “non-traditional” product management work. This combination turned out to be the key to my growth as a product manager because it gave me a much deeper understanding of a product manager’s responsibilities. What exactly did I learn? Let me start by explaining my “traditional” product management work.

Enter LogDNA’s usage dashboard! Otherwise known as “some good ole’ fashioned product work.” Helping to implement improvements to the usage dashboard was one of my main projects this summer, and it allowed me to leverage all the product skills I’ve learned so far in my career. From initially scoping out the project to working with coworkers across teams and collaborating with engineering, this was definitely “traditional” product work through and through.

I first learned about some areas of improvement for the usage dashboard by observing my coworkers and our users. There seemed to be a general lack of understanding of how to interpret and use the dashboard. Specifically, users were confused by the different units of measurement LogDNA was using for the usage graph and the percentage breakdowns. Before I show you around the usage dashboard, let me explain why this particular feature is so valuable to LogDNA customers. The usage dashboard’s main role is to help users better understand their usage data. By tracking data anomalies and different logging sources, users can make more informed decisions about their usage and their overall use of LogDNA.

Screenshot of the usage dashboard with the areas causing confusion circled in pink.

Given the value of the usage dashboard to our customers, it was important to locate the exact areas that were causing confusion. As I mentioned above, the first major cause of confusion was the units of measurement used in the usage graph and percentage breakdowns. The usage graph was measured in volume (B, MB, GB, etc.), while the percentage breakdowns of our top logging apps and sources were based on the number of log lines each app/source contained. Users assumed that the percentage breakdowns were also measured in volume, which led them to track and predict their data incorrectly. The second source of confusion was the “squiggly lines” (as my manager and I affectionately called them) next to each individual app/source percentage breakdown. Users had no idea what these lines represented, which added another layer of confusion to the dashboard.

At first, I thought all these areas of confusion would be a quick and easy fix. All we had to do was use a common unit of measurement, add descriptions of the units the graph and percentage breakdowns were using, and get rid of the squiggly lines. Boom! Problem solved and all confusion gone, right? As compelling as it was to go with these quick solutions, I knew the dashboard had been built this way for a reason. It was my job as the product manager to make sure I understood the dashboard’s full story, which is why I ended up asking around LogDNA to find the dashboard’s original creator. When I finally found them, I left our conversation with a brand new perspective on how to improve the usage dashboard.

The reason the usage graph and percentage breakdowns were measured in different units was that LogDNA’s architecture didn’t support measuring the sources/apps in volume. Giving users the percentage breakdown in the number of log lines was the best we could do, even though volume is technically a more accurate representation of users’ usage data. Additionally, the “squiggly lines” were actually sparkline graphs, which had the valuable ability to show users the trend of their usage data over the past 30 days. For example, a sparkline graph could show the user if a source suddenly spiked or dipped in its number of logs. After learning these two valuable pieces of background information, I knew I needed to approach improving the usage dashboard in a whole new way while composing my PRD.

First mockup of usage dashboard improvements

Since the percentage breakdowns had to be measured in the number of log lines, one of the first improvements we made was to add a toggle to the usage graph. The usage graph could be measured in both volume and number of log lines, so the toggle would let users switch between these units and at least view all of their data in log lines, the same unit as the breakdowns. The second improvement was adding a description of the units to both the graph and the percentage breakdowns to prevent any confusion about what was being measured. Finally, the third improvement was to enlarge the sparkline graphs and add a key to them. Since the graphs provided crucial trend information, it was important to make sure users could easily view and gain information from them.

Second mockup of usage dashboard improvements

However, as all product managers know, you’re never done after just one mockup! After going through design feedback with our product designers and my manager, we made several refinements. First, we changed the display of the toggle from icons to “GB/Lines” to represent the two options of volume or log lines. Second, we changed the display of the sparkline graphs from a dropdown format to a much larger graph above all the percentage breakdowns. This new format was not only easier for users to see, but it also allowed a higher level of interaction by letting users graph multiple lines at once. Additionally, we added a “threshold” line to the sparkline graphs so that users could see how much of their monthly usage they had consumed.

Third mockup of usage dashboard improvements

To bring the project into its final phase, I walked our web engineering team through the mockups and, you guessed it, we did another round of revisions! These improvements focused on making every part of the usage dashboard as clear as possible for our users. First, we moved the threshold line to the bar graph so that users could see it against the volume measurement and track their usage through the coloring of the bars. Second, we gave users the option to switch between a monthly and a daily view in the sparkline graphs. Finally, we revised our existing descriptions and added more descriptions to the sources/apps so that users would be able to understand every new part of the usage dashboard.

Whew! As you can see from my “short” overview of the usage dashboard work, it consisted of a lot of product analysis, design feedback iterations, PRD writing, and collaboration with other teams. Based on my past product experiences, this is all very “traditional” product management work. Last summer at LogDNA, I spent most of my time on this kind of work, and it was through it that I came up with my own answer to the question, “What is Product Management?” What I found is that there is no one definition of product management; rather, there are common qualities that make up a good product manager. You can read more about how I arrived at these qualities in my LogDNA reflection from last year, but to quickly recap…

  1. Good product managers master more than just direct communication.
  2. Good product managers seek the truth, not validation.
  3. Good product managers understand they are not necessarily the leader.

As I reflected on my time at LogDNA this summer, I realized I discovered another quality of a good product manager.

4. Good product managers know “non” product work is product work.

Wait, hold on! What does that mean? I just spent half this post talking about my experience with traditional product work. Why would I think it’s important for a good product manager to do “non” product work?

Enter growth, aka the final chapter of this summer’s saga! For the other half of my projects, I worked with LogDNA’s newly formed growth team to help grow our self-service user base. The goal of the growth team was to test a series of short-term growth experiments. These experiments could focus on anything at LogDNA as long as they helped increase our self-service users in some way. Being part of the growth team was an entirely new experience for me as a product manager because even though the work we were doing was for LogDNA’s product, it was quite different from the traditional product management work I had become used to. It was from these growth experiments, two in particular, that I realized the importance of “non” product work for a product manager.

Example of an email flow we created for self-service users in our 14-day trial

The first growth experiment was creating a series of email flows for our self-service users in order to help increase user engagement. The main goal was to create a customized email flow for each self-service user stage — before, during, and after LogDNA’s free trial. I started off by helping to construct the emails that users would receive during their 14-day trial. The first email drafts focused on some of LogDNA’s most useful features and were sent out to all trial users. However, just like design mockups, the growth team and I began to refine the trial emails as we received feedback from users. What started out as five emails soon turned into a complex system of over 15 custom-designed emails for each different step of the user journey. We not only had to create a custom email for each decision a user might make during the free trial, but also plan the timing of every email and specify the exact information we wanted to track.

At first glance, I didn’t think much of creating and writing a bunch of emails. However, I quickly realized there is a whole world of nuance and complex organization behind a successful email campaign. I learned that it wasn’t enough to just send out emails talking about our product; rather, the emails needed to focus on the user and the information they could benefit from most, depending on their individual use of LogDNA.

One of the blog posts I wrote for LogDNA about our SEO improvements

The second experiment was an extensive search engine optimization (SEO) overhaul of LogDNA’s website, blog, and general online presence. The main goal of this experiment was to increase LogDNA’s online visibility and make it easier for self-service users to find LogDNA content. I started by analyzing the past year of LogDNA’s SEO performance in Google Analytics, including how often our content was being clicked on, which searches we appeared in most, and what content our users were most interested in. Based on that initial analysis, we realized our SEO needed to be updated across all of our content, and we began the process of optimizing every webpage LogDNA had ever published. Using SEO tools, I edited each webpage’s essential SEO information, such as its target keyword and meta description, to increase the relevance of each page.
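
For a concrete (and purely illustrative) sense of what that on-page editing involves, here is a simplified sketch of the kind of head tags an SEO pass typically touches: the page title and the meta description, with the target keyword worked into both. This is example markup, not LogDNA’s actual pages.

  <head>
    <!-- Example only: title tag containing the hypothetical target keyword -->
    <title>Log Management Best Practices | Example Company</title>
    <!-- Example only: meta description, often shown as the snippet in search results -->
    <meta name="description" content="Learn log management best practices, from centralizing your logs to setting up alerts, so your team can troubleshoot faster.">
  </head>

Small edits like these don’t change the content itself; they just make each page easier for search engines to understand and surface.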

After each webpage’s SEO was optimized, I helped set up new tracking metrics for this work so that, in a few months, we could see whether the improvements had any effect. Similar to the email flows, my work on LogDNA’s SEO completely changed my perspective on what it takes to establish a strong online presence through content. It’s quite easy to generate massive amounts of content for a company, but without carefully optimizing each piece, your online presence will easily be overlooked by potential users.

Let’s get back to the big question at hand: what do I mean when I say “non” product work is product work? My time with LogDNA’s growth team gave me the opportunity to do work that is not typically associated with a product manager’s role, such as composing email campaigns and improving SEO. However, even though these projects did not follow the traditional idea of what a product manager is supposed to do, they were still done for the good of the product. Product managers don’t create PRDs for fun or run countless user interviews just for the heck of it; we do these tasks for the good of our product and users.

The field of product management is ever-changing, and it’s often challenging to pinpoint the exact tasks you are supposed to do. There is no book, essay, or written set of standards that spells out what all product managers should be doing. However, we do know that our job is to help our product in any way possible. If that means writing a PRD, writing email flows, or even standing on the sidewalk with a sign that says “USE OUR PRODUCT”, then we’ll do it. That is why a good product manager knows “non” product work is product work: no matter what work you are doing, it’s always for the product.

I hope reading about both my “traditional” and “non-traditional” product work at LogDNA helped you gain a new perspective on the role of a product manager. Be sure to check out LogDNA and all the amazing work they’re doing!
