
What Does a Head of Product Actually Do? A Week in the Life of Head of Product


Heads of Product. They are at the epicenter of digital product leadership, but they are also the most mysterious members of product teams. They are omnipresent. They are omniscient. They are omnipotent.

*David Attenborough Voice*

Heads of Product are difficult to spot in their natural habitat because they are constantly busy and rarely public, but today you have a rare chance to observe one of these professionals at work.

We at Eleken UI/UX agency were lucky enough to talk with Tanya, a Head of Product from an AI-powered video meetings app called Whoosh. The company has recently become Eleken’s client, so we took the opportunity to walk our readers through Tanya’s weekly routine.

Meet Tanya, Head of Product at Whoosh

During our conversation with Tanya, we identified four main categories of tasks a Head of Product is responsible for:

  • Project management tasks
  • Strategic tasks
  • Product tasks
  • Operational tasks

Below, we will take a closer look at each category and block the time that Tanya spends on her tasks in Google Calendar. By the end of the article, you’ll have a visual understanding of what the Head of Product’s responsibilities are and how much effort they take.   

Project management tasks

Since Head of Product is a leadership role, Tanya is often busy setting tasks for her colleagues and tracking their results. In turn, she reports to C-level executives — the CEO and Chief Product Officer (CPO). Tanya keeps project management activities on track with regular daily and weekly meetings.

Daily meetings

Our Head of Product has three short daily meetings that go one after another in the morning.

  • 9:30 — a daily call with a Business Analyst (BA) and Chief Product Officer (CPO).
  • 10:00 — a daily stand-up with two development streams.
  • 10:15  — a daily stand-up with two other development streams.
Daily meetings in Head of Product’s calendar

Weekly meetings with product team managers

Two Product Managers, each responsible for two development streams, work under Tanya’s direct supervision. Twice a week, Tanya has a half-hour meeting with each of them.

  • 11:00, 11:30, Monday — setting weekly tasks for Product Managers.
  • 11:00, 11:30, Thursday — PMs’ weekly progress reports.

The Head of Product also has weekly meetings with product department leads:

  • 14:00, Monday — weekly meeting with the Art Lead to discuss art content tasks.
  • 15:00, Tuesday — weekly meeting with the marketing & sales departments.
  • 17:30, Friday — delivery meeting with the Delivery Manager and Tech Lead to discuss the team’s productivity and velocity.
Weekly meetings with product team managers in Head of Product’s calendar

C-Suite weekly meetings

Twice a week, our Head of Product has meetings with her manager, the Chief Product Officer.

  • 13:00, Tuesday — the Head of Product discusses her own tasks for the week with the CPO. For instance, last week Tanya had to research SaaS benchmarks on unit economics. Before that, she examined the psychological problem of Zoom fatigue.
  • 13:00, Friday — based on the information the Head of Product gathered during the meetings with the PMs, BA, Art and Design departments, she prepares a weekly report on the product team’s progress and the issues to be solved.

On Fridays, there is a short meeting where PMs and Head of Product present their results, successes and failures:

  • 15:00, Friday — Growth meeting with CPO.
C-Suite weekly meetings in Head of Product’s calendar

Monthly report

Once a month, the Head of Product writes a Management Discussion and Analysis (MD&A) report. It includes:

  • A summary of product development: what has been done during the month from the technical side, in terms of CustDev, growth hacking, A/B testing, and so on.
  • Product metrics summary: how the KPIs changed compared to the previous month.
  • Competitor analysis on web and mobile. Using tools like Sensor Tower and Semrush, Tanya checks whether the number of users in competitors’ apps is up or down.
  • Video conferencing news.

A monthly report is a lengthy document that takes around three days to write if you assign two hours per day to this task. We won’t block this time in the calendar because the week we are modeling opens a new month (and because we need some space for the following tasks).

Strategic tasks

When regular meetings are over, the Head of Product gets a chance to tackle her own cognitively demanding tasks. To win some time for concentrated deep work, Tanya has declared Wednesday a no-meetings day.

She disconnects from operational tasks, turns off notifications and avoids communication with colleagues until a report or research is done. When Tanya is back after an hour or two, she receives a million messages, but such is the price of concentration.

Feature analysis 

As we mentioned before, Tanya has recently researched the problem of Zoom fatigue. Let’s take this case as an example.

First, the Head of Product explored the issue in terms of psychology: why virtual platforms lead to tiredness and burnout, and how to avoid this. Tanya bundled all the information in one document and turned her insights into actionable recommendations for the product team on how to solve the Zoom fatigue problem in the Whoosh app.

All the work took Tanya about two weeks.

Strategic tasks in Head of Product’s calendar

Competitor analysis & testing

One part of feature planning is competitor analysis, which takes several hours to complete.

For instance, the Whoosh product team has recently worked on integrating private & group chats into video conferencing. Before they started, Tanya analyzed how around 15 competitors implement this feature, took screenshots, and put them in Figma, pointing out good and poor choices. Then she concluded how the feature should be implemented in Whoosh.

Several times a month, Tanya has a call dedicated to competitor testing. She meets with a QA team to see some competitor features in action. They record the testing so that it can be analyzed further.

Competitor analysis & testing in Head of Product’s calendar

Product tasks

Obviously, the bulk of the Head of Product’s tasks falls into the category of product design and development. Let’s start with the roadmap.

Roadmap update

At the end of each month, it takes Tanya a couple of days to check what has been done, then schedule and prioritize features. If anything on the plan is still pending, it moves to the next month. Tanya presents the updated roadmap during the C-level meeting with the Tech Lead, Delivery Manager, CTO, CPO, and CEO.

Roadmap update in Head of Product’s calendar

Sprint planning & grooming

The Whoosh tech team works in two-week sprints that need to be planned and tracked. That’s why the team meets biweekly to discuss what they are going to do during the upcoming sprint.

  • 16:00, Wednesday — sprint planning call for the tech team.

A sprint planning session demands significant preparation from the Head of Product. So, earlier that day, Tanya sets aside time to draft the list of tasks to be done during the sprint, determine the release version and assign tasks to team members. To decide on all this, you need to check everyone’s workload…

You got it, planning before sprint planning requires a few hours of concentration during a no-meeting Wednesday.

  • 12:30, Wednesday — Head of Product’s planning before the sprint planning call.

Another biweekly sprint activity is the retrospective (“retro” for short). Before this meeting, all team members fill in an anonymous table where they note the pros and cons of the previous sprint. On the call, the Head of Product and the tech team resolve the problems raised and discuss the positive moments.

  • 15:00, Wednesday — sprint retro call.

In the middle of a sprint, the Whoosh tech team has meetings meant to keep tasks clear, organized and ready to be worked on. That’s called feature grooming.

A grooming meeting brings together only the narrow circle of people who work on a particular feature. For instance, a recent call dedicated to a Google Calendar integration was attended by a backend developer, a Flutter developer, the Tech Lead and the Head of Product. They looked at the task description, added necessary subtasks and requested UI/UX design refinements where something was missing.

  • 14:30, Thursday — feature grooming call.

A logical continuation of feature grooming is a feature demo. It’s a meeting where a team that works on a big feature shows the rest of the team intermediate results. 

  • 17:00, Wednesday — feature demo call.
 Sprint planning & grooming in Head of Product’s calendar

Design planning & grooming

Just like a tech team, product designers also require some task planning and prioritization that happens during a dedicated one-hour call. Before the call, Tanya takes some time to think of the tasks she needs to assign to each design team member for the next week.

  • 14:00, Thursday — planning before design planning
  • 15:30, Thursday — design planning call itself

On Tuesdays, the Head of Product has design grooming sessions. It’s a meeting with the design team where they can show how their work is going, ask questions and share the results. Afterwards, the results get verified in weekly usability testing sessions.

  • 15:00, Tuesday — design grooming.
  • 17:00, Tuesday — UX user testing + CustDev research.
Design planning & grooming in Head of Product’s calendar

Operational tasks

Besides product tasks that move the company forward and happen according to a roadmap, there are also operational tasks. 

Operational tasks planning

Operational tasks are the issues that occur without warning and drag the product back until they are fixed. Such issues appear all the time, and the Head of Product needs to devote about an hour per day to plan specific steps toward a solution, create tasks and assign people.

For instance, operational planning sometimes happens along with daily calls. This way, Tanya starts solving problems as soon as they are discovered.

Operational tasks planning in Head of Product’s calendar

Random calls

Random calls are unscheduled short huddles in Slack, needed to clarify a confusing point. Tanya has a huge team of around 30 people, so she gets three to six clarifying calls per day. Sometimes there are even more requests than she can handle.

For instance, on the day when we talked to Tanya, she already had three calls:

  • A clarifying call with a designer;
  • A quick call with a developer to discuss his A/B testing question;
  • A quick call with a QA who also had a question on testing.

Random calls in Head of Product’s calendar

Let's call it a week

Now that you are overwhelmed just from reading the list of tasks that Tanya performs during the week, she wishes everyone a great weekend and closes her laptop. She needs a good rest, because in two days she’ll be back to solving customers’ problems, building a great product to ease their lives and dealing with operational chaos.

It took Tanya four years to go from a non-technical education and no product experience to Head of Product, leading a team of 30 people. In our upcoming interview, you will be able to read more about Tanya’s career path and her Head of Product tips & tricks. So stay tuned!

Dana Yatsenko

Author


How to Use ChatGPT as a UX Researcher: Benefits, Limitations, Examples, and Best Practices

All eyes are now on ChatGPT, the AI tool that managed to hit the one million users mark just five days after its launch. Being totally free, but very effective and able to provide tailored answers to users’ questions, ChatGPT is becoming an integral part of our daily lives, both personal and professional. 

Here at Eleken, we’ve already tested ChatGPT and agreed that it is quite useful for doing UX research. Based on our experience, the tool offers rather accurate and realistic responses to a wide range of inputs. But we also have to say that the tool comes with certain flaws, which we’re going to discuss later on.

In this article, we’ll take a look at the benefits and limitations of using ChatGPT. We will also provide some examples and tips for using it in your project, so keep reading.

Benefits of using ChatGPT for UX research

Conducting UX research not only helps designers develop products that appeal to future users but also ensures the designed solutions will be competitive. But the truth is, UX research is a time-consuming and complex process.

According to the State of User Research survey, the typical study takes 1 to 4 weeks to plan, while 61% of respondents admit that they conduct half of their research sessions with participants from their own audience. Such research practice may lead to inaccurate results and introduce bias into the assessment of market needs.

In this respect, ChatGPT can be very helpful for speeding up the process and providing an unbiased opinion. As a learning-based natural language processing tool, it is trained on a diverse range of text data sourced from the Internet in a variety of ways.

Source: Deloitte.com

The use cases for ChatGPT in UX research are endless, ranging from gathering inspiration and ideas to suggesting a color palette for an app or a website. The tool can help develop a concept and recommend fonts or an icon for a non-standard project.

With that said, the benefits of using ChatGPT for UX research include:

  • Natural, human-like performance. Based on the recent GPT-4 Technical Report, ChatGPT can now understand both text and image inputs, and generate a wide range of responses in a close-to-human manner.  The research showed promising results, demonstrating that ChatGPT can interact with human language and follow instructions creatively, like humans do, which is beneficial for UX researchers looking to collect user feedback.
Source: openai.com
  • Great level of accuracy. The accuracy of GPT-4 has improved compared to previous models, achieving scores of over 80%. This means it performs better in predicting and generating more contextually appropriate responses. Simply put, the tool can now give better and more sensible responses based on the question asked.
Source: openai.com
  • Multi-purpose. From text generation to question answering and building user personas, there are many ways UX researchers can utilize ChatGPT.
  • Cost-effectiveness. Lastly, ChatGPT is not only fast and accurate, but comes at no cost, letting you optimize the budget for UX research.

Okay, now it seems like we’re complete ChatGPT advocates and are about to encourage you to use the tool ASAP. But as we mentioned above, ChatGPT is not almighty, and we’re about to explain why.

The limitations of ChatGPT 

Despite all the benefits, caution is also needed. ChatGPT sometimes generates gibberish responses. The tool still lacks knowledge of current events and can miss subtle details. Plus, the source of the information is unknown, which is why you can't trust it fully. 

Apart from that, Stanford University names cybersecurity and trustworthiness among the areas of risk: it is still possible to elicit bad behavior from the model or receive harmful advice.

The OpenAI team admits that ChatGPT can’t fully replace humans, but rather serves as a helpful assistant. So, when you want to use ChatGPT for UX research, you will still have to validate the information generated by the tool and engage a UX professional who knows all the nuances of UX research.

Some examples of using ChatGPT

Here at Eleken, we find ChatGPT quite useful for conducting UX research. Here are some examples of when the tool can help, in particular:

Gathering market insights

The UX research process involves identifying competitors and understanding the market, gaining insights into user preferences and behavioral patterns. Gathered data helps understand how users may interact with the product we design. 

Let’s say you need to collect information about the target market and potential users to verify the success of a new student engagement app. Here’s what ChatGPT recommends.

Prompt: I want to gain insights into user preferences and behavioral patterns to understand how they will interact with the student engagement app.

Additional prompt examples:

  • Create a list of top competitors in the education market. Write their strengths and weaknesses.
  • I want to know how to conduct primary market research to gain insights into customer needs and preferences for the education sector. How can I do it?
  • How can I use secondary market research like the education sector competitor analysis to gain a better understanding of the market?
  • What are effective ways to segment and target the education sector audience for market research?
  • How can I effectively collect survey responses and conduct questionnaires to gather market research data?
  • What are some best practices for conducting focus groups and user interviews to gain insights into customer behavior and preferences?
  • Imagine you are a UX researcher. Write me a detailed UX research plan. Include JTBD statements, personas, and desired outcomes.
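Prompts like these can also be sent programmatically when you want to run many research questions in a batch. Below is a minimal sketch of assembling a chat-completion payload; `build_research_request`, the system message, and the model name are illustrative assumptions, not part of any official workflow.

```python
# A hedged sketch: wrap a UX research prompt in the messages format
# used by chat-based LLM APIs, with optional project context.

def build_research_request(prompt, project_context=None, model="gpt-3.5-turbo"):
    """Assemble a chat-completion request for a UX research question."""
    messages = [{"role": "system",
                 "content": "You are an experienced UX researcher."}]
    if project_context:
        # Extra context (product, audience) helps the model tailor answers.
        messages.append({"role": "system", "content": project_context})
    messages.append({"role": "user", "content": prompt})
    return {"model": model, "messages": messages}

request = build_research_request(
    "Create a list of top competitors in the education market. "
    "Write their strengths and weaknesses.",
    project_context="The product is a student engagement app.")
# The resulting dict can then be passed to the API client of your choice,
# e.g. client.chat.completions.create(**request).
```

Keeping the project context in a separate system message makes it easy to reuse the same prompt list across different products.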

Helping create interview questions for qualitative user research

Doing both quantitative (quant) and qualitative (qual) UX research is essential for all kinds of projects. While qual research provides observational findings and allows you to understand users’ emotions and behaviors, quant research offers metrics and actual data. Sadly, ChatGPT can’t help much with the latter, but can it help us with qualitative research? Let’s find out.

Prompt: I want to improve the design of the student engagement app. How can I craft effective interview questions for qualitative research?

Additional prompt examples:

  • What are the best ways to analyze and interpret qualitative research data?
  • How can I use data visualization to present user research findings?
  • How can I do a survey for quantitative research?
  • What are some best practices for conducting surveys for student engagement apps?

Building buyer personas

UX researchers create personas to identify ideal users and assess design decisions. Let’s ask ChatGPT to build a user persona for the student engagement app.

Prompt: Build a user persona for a new student engagement app that automates the onboarding and arrival process for university students and agents.

ChatGPT created a user persona by the name of Sarah. For this persona, it specified age,  background, goals, challenges, and needs. Additionally, the tool offered a list of key features for the app. 

Additional prompt examples:

  • What are the best practices for building user personas based on real user research and information?
  • Create a persona for a user of a time-tracking web application who is interested in managing their working time and getting insightful reports.
  • How can I use user personas to improve the usability and user-centeredness of web applications?

Preparing interview questions 

Once we’ve created user personas, the next step is to find users who match those criteria and interview them. Let’s ask ChatGPT to generate the right questions.

Prompt: I want to interview time-tracking web app users. I want to know what functionality they need more. Write 10 interview questions I can ask them.

Additional prompt examples:

  • What are the best practices I can use when conducting user interviews?
  • Generate questions for a research interview to gather feedback on a new student engagement app.

Selecting color palettes and features

ChatGPT can also be used to prioritize the features users expect to see in a product. And you can easily use it to pick color palettes that appeal to the target audience or to select complementary colors.

Let’s ask ChatGPT to suggest primary and secondary colors for a time-tracking web app and a list of features users want in the app.

Prompt: Can you suggest primary and secondary colors for a time-tracking web app? List features users expect to see in this app. Represent in a chart.

Additional prompt examples:

  • Suggest 2 color palettes for the student engagement app.
  • Can you create a list of features users want from a student engagement app? 
  • Can you prioritize key features and requirements for a time-tracking web app?

Tips and best practices for incorporating ChatGPT into UX research

Working with ChatGPT reduces the time spent on UX research, making the process of gathering information easier. Researchers can leverage the capabilities of the tool with minimal effort. Here are some tips that can help:

  • Ask simple, open-ended questions. To get more accurate results, use open-ended questions starting with "What," "How can I," "Why," or "Can you explain," instead of yes/no questions or ones that invite a single-word or single-phrase response. It is best to avoid complex sentences, unusual words and structures, and technical jargon, since they can confuse the chatbot.
  • Mind the length. According to Arxiv, ChatGPT has a context length of about 4,096 tokens (roughly 3,000 English words), so keep that in mind when you’re about to have open debates with the chatbot.
  • Test it on your own. The best way to understand if ChatGPT fits your needs or not is to give it a try. Ask for custom responses, add context, and specify information about what you need – this way the tool can better adapt to your needs.
  • Teach the bot. ChatGPT is an AI learning model, which means you can teach it and improve it. So when, for example, you don’t like the answer provided by the chatbot, you can regenerate the response and rate whether the result was better or worse.
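To act on the length tip, you can sanity-check a prompt before pasting it. The sketch below uses the common rule of thumb that one token is about 0.75 English words; it is a rough heuristic, not a real tokenizer, and the reserve value is an arbitrary assumption.

```python
# Rough, library-free prompt-length check against a token budget.
# For precise counts, use an actual tokenizer instead of this heuristic.

CONTEXT_LIMIT = 4096  # tokens, per the GPT-3.5 figure mentioned above

def estimate_tokens(text, words_per_token=0.75):
    """Estimate token count from the word count of an English prompt."""
    return int(len(text.split()) / words_per_token)

def fits_in_context(text, reserve_for_reply=1024):
    """Check that the prompt leaves room for the model's answer."""
    return estimate_tokens(text) <= CONTEXT_LIMIT - reserve_for_reply

prompt = "Write 10 interview questions for time-tracking web app users."
print(estimate_tokens(prompt), fits_in_context(prompt))  # 12 True
```

Reserving part of the budget for the reply matters: a prompt that technically fits can still leave the model too few tokens to answer in full.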

Conclusion

ChatGPT offers great opportunities for UX researchers. But if you want to unlock the full potential of the technology, you have to spend a day or two learning how to use it. 

Remember, no matter how good ChatGPT is, it’s still a machine and you can’t fully replace humans with it. So if you want to use ChatGPT for things like research, it would be wise to have a professional you can reach out to. And you can find them at Eleken, by the way. 

If you want to learn more about UX research methods or need a consultation on what tools to use for UX research, the Eleken team is here to help you. Drop us a line here!


How Not to Design One More Product That Would Look the Same as the Competitors?

Well, just make something different, hello? Why do we even write an article to answer this simple question? Because there’s more to it. If making something novel was that simple, there wouldn’t be so many similar projects around.

Over our years of experience in product design, the question of how to make a product stand out has been one of the most interesting challenges to date (and we have lots of interesting challenges). This article sums up our experience and shows some ways to make an original product.

Why do products look the same?

Let’s look at the very beginning of work on the product. As UI/UX designers, we know well that competitive research is a very important stage in the design process. Knowing the market is essential to build a successful product. The downside that this knowledge brings is the rush to match competitors.

When a founder sees that other products have something that their product doesn’t have, they hurry to get the same feature. So instead of focusing on their strengths, they try to race on many tracks at the same time. Where does it lead them?

Creeping featurism

The term “featurism” appeared in the 1960s to describe weird architecture, later made its way into racism studies and, finally, into product design, where it is known as featuritis or creeping featurism.

It describes a situation where a product grows by constantly adding new features to keep up with the competitors. On one hand, this often comes from actively listening to user requests (which is highly praised in the field of user experience). On the other hand, it may end up in bloated software where numerous features overwhelm users and negatively affect the overall product design. As a result, a feature creep is born.

If you haven’t had enough weird terms in this paragraph, here is one more: feeping creaturism, which describes the feelings developers have when they are tasked with creeping another feature into an already bloated product.

And now, the last weird term: feature karma. It is a simple rule: when you add a new feature that makes the product more complex, you have to take away one of the existing ones. Easier said than done, though. Later we will give you some tips on how to avoid creeping featurism.

Here is an example from Victor Papanek’s book, “Design for the Real World”. Back in the 60s and 70s, when people used slide projectors, Kodak led the market with a new invention: slide projectors with gravity feed systems.

The projector was produced by both Kodak USA and Kodak Germany, but with some differences. The US line had several models ranging from $60 to $1,500, each with various features added, like a remote control or extra lenses. Each also had an advanced version for professional use, called Ektagraphic (+$10 to $20), which was more resistant to short circuits thanks to special insulated wiring (why wasn’t that a default feature for all the models?).

At the same time, the projectors produced in Germany had just one simple model, the Kodak Carousel ‘S’, which included the safety feature and sold for $75. All the additional features could be bought separately and attached to the basic model. The difference in approach was so big that some people in the US ended up buying their projectors from German Kodak (although back then shipping wasn’t as easy as it is now). Long story short, the US projectors were feature creeps.

Modularity is one way of addressing the issue. For software products, it means shipping a basic product with the essential set of features, while new ones can be downloaded and used by those who need them.
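In code, the modular approach can be sketched as a lean core product plus optional modules that users enable on demand. The class and method names below are illustrative assumptions, not any real product’s API:

```python
# A hedged sketch of modularity as a remedy for creeping featurism:
# everyone gets the essential feature set; extras are opt-in.

class Product:
    def __init__(self, core_features):
        self.features = list(core_features)  # the essential set everyone gets
        self._optional = {}                  # add-ons, available but not bundled

    def register_module(self, name, feature):
        """Make an optional feature available without shipping it to all users."""
        self._optional[name] = feature

    def enable(self, name):
        """A user opts in to exactly the extras they need."""
        self.features.append(self._optional[name])

# In the spirit of the Kodak Carousel 'S': one safe basic model,
# with extras sold separately.
projector = Product(["slide feed", "safety wiring"])
projector.register_module("remote", "remote control")
projector.register_module("lens", "extra lenses")
projector.enable("remote")
print(projector.features)  # core plus only the chosen add-on
```

The point is that complexity appears only for users who ask for it, while the default experience stays lean.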

Another familiar case of feature creep is Facebook. They have been trying to redesign the main page to make it less cluttered, but there is still much work to do. To learn more about designing for simplicity, see the tips from our UI/UX experts in the related article.

It scrolls a few pages down…

When less is more

Think of two products working in the same niche, where one has far more features than the other: which one has the bigger competitive advantage? The former. But what if the second one is much easier to use and cheaper? Now it’s more complicated.

An example is Canva. For graphic editors, it’s hard to compete with Adobe in terms of features, so Canva did the opposite. They provide very basic features that allow for simpler, lower-quality graphic editing. Yet they have a clear competitive advantage: price and ease of use.

To use Canva, there is no need to take a learning course, as with some Adobe products. Plus, it saves time with its numerous royalty-free images and templates, and the quality is good enough for social media materials. By the way, many images on this blog are created with Canva, and it works great for this purpose.

Users who need fewer features than a product offers are called “overserved”, and they might as well switch to a more basic alternative.

How to deal with creeping featurism?

The first two methods are modularity and targeting overserved users, as described above. Another one is making the simple features visible and understandable for users who use the product at a basic level, while “hiding” more complex features deeper in the navigation. Advanced users will find them in one click, while newbies won’t be scared off by a complicated look.

Here’s an example. One of our clients, AdvanResearch, had a product for foot traffic analytics. It was used mostly by professional analysts who usually dealt with a complex feature set. At some point, the company decided to widen their pool of users to include business owners who were new to this kind of tool.

The objective of the redesign was to make the product more accessible to the new audience. Here is what we did to address this:

  • added tips behind question marks
  • simplified navigation with fewer sections
  • split the reports into “standard” (presets for new users) and “custom” (for those who know well what pieces of data they need and can assemble a report to fit their needs).

When more is necessary

As designers, we often prioritize simplicity, clean visuals, and minimalism. But we also understand product managers who have to satisfy users and can’t prioritize a “clean design look”.

“Some things just have to be complex,” says our lead designer Maksym. If you scroll through the list of our case studies, you’ll see that we often work on sophisticated tools for professionals in fields such as code security, geo-mapping data analysis, and others. It’s not the same as creating niche lifestyle apps. There have to be many things on the screen, and it’s not always possible to just put some of them away.

And whenever you see a product that looks and feels simple, remember that you don’t know the amount of work behind this simplicity. For a glimpse of the secrets behind minimal-looking products, read our article “Complexity of simplicity”.

References trap

When clients come to us and explain what they want to get, they often give references. Often it sounds like “we want something like Stripe”. When designers finish the initial research and start working on the visual style, they also show some references to the client. This allows the team to align on the visual direction and make sure the final result will match expectations.

We like Stripe’s design too, and we understand why product owners want their products to look like Stripe. Yet our designers don’t just copy-paste its style. There are ways to respond to clients’ wishes without producing a copycat.

For example, when our designer Alexandra was working on Spoonfed, a food logistics product, she created icons inspired by the ones Stripe uses. This subtle detail added a little spark to an otherwise minimal and classic design.

It’s hard to draw the line between copying references and syncing the team’s visual mood. Experienced designers have a sense of it, so the basic advice we can give here is “pick a good designer and trust them”.

How to experiment with design

Product owners often prefer to play safe and go for neutral design. They say, “it is a B2B product, our clients are serious analysts, we don’t want it to be acid purple”. That’s the position that we always respect.

Behind the fear of experiments is often a fear of lost time and money in case the users don’t appreciate an “out-of-the-crowd” design. We argue, however, that the risk can be minimized.

What is the smart way of experimenting with design? Ask a designer to make a few versions of the visual style, then ask users what they think of them. We often do this during our trial (which lasts three days), and then it’s up to the client whether to dare or play safe.

Once they do dare, their product won’t look like “everyone else’s”. Is that your objective? Then let’s schedule a call and start a trial one week from today.
