Saturday 30 December 2023

Azurite - docker


In our previous post we configured a local repository which can read flat files from our hard drive. Next we're going to set up Azurite in a Docker container.

The Azurite open-source emulator provides a free local environment for testing your Azure Blob, Queue Storage, and Table Storage applications. When you're satisfied with how your application is working locally, switch to using an Azure Storage account in the cloud. The emulator provides cross-platform support on Windows, Linux, and macOS.


Though one can connect to Azurite using standard HTTP, we're going to opt for HTTPS. It adds a bit of complexity, but it's worth the investment, since later on it will make creating a production version of our AzureRepo simpler.

We're going to have to install mkcert.
mkcert is a simple tool for making locally-trusted development certificates. It requires no configuration.
GitHub - FiloSottile/mkcert: A simple zero-config tool to make locally trusted development certificates with any names you'd like.

I always suggest you go to the official documentation, because it may very well change; to be honest, whenever I read my own blog posts I usually go back to the source material. That said, at the time of this article you can install mkcert on a Mac using Homebrew:

brew install mkcert
brew install nss # if you use Firefox

Next, with mkcert installed, you'll have to install a local CA:

mkcert -install


You'll have to provide your password to install the local CA, but once it's set up you'll be able to easily make as many locally trusted development certificates as you please.

Now that you have the mkcert utility installed, create a folder in your project called _azuriteData. Realistically this folder can have any name you like, however it's nice when folders and variables describe what they are used for; it makes future you not want to kill present you. Once you have your folder created, navigate into it and enter the following command:
 
mkcert 127.0.0.1


This will create two files inside your _azuriteData folder (or whatever directory you're currently in): a 127.0.0.1-key.pem file and a 127.0.0.1.pem file. I'm not going to get into the deep details of what these two files are, partially because that's not the aim of this post and partially because I only have a passing, high-level awareness when it comes to security, with little practical experience; you know, like most consultants.

However, I can tell you that PEM stands for Privacy Enhanced Mail, though the format is widely used for various cryptographic purposes beyond email security. You can think of the two files as the certificate containing the public key (127.0.0.1.pem) and the corresponding private key (127.0.0.1-key.pem) used for asymmetric encryption.

In essence what happens is that the message is encrypted with the public key (127.0.0.1.pem), and then decrypted with the private one (127.0.0.1-key.pem); however you really don't need to worry too much about the details.

Your folder structure should look something like this:


Next we are going to create a Docker Compose file. For this to work you need to have Docker installed on your system; you can download the installer from https://www.docker.com/.

At the root of your project create a docker-compose.yaml file; this file will contain all the instructions you need to install and run an Azurite storage emulator locally for testing purposes.


version: '3.9'
services:
  azurite:
    image: mcr.microsoft.com/azure-storage/azurite
    container_name: "azurite-data"
    hostname: azurite
    restart: always
    command: "azurite \
      --oauth basic \
      --cert /workspace/127.0.0.1.pem \
      --key /workspace/127.0.0.1-key.pem \
      --location /workspace \
      --debug /workspace/debug.log \
      --blobHost 0.0.0.0 \
      --queueHost 0.0.0.0 \
      --tableHost 0.0.0.0 \
      --loose"
    ports:
      - "10000:10000"
      - "10001:10001"
      - "10002:10002"
    volumes:
      - ./_azuriteData:/workspace


One of the trickiest things about this compose file is that whatever local folder contains our public/private .pem files must be mapped to the /workspace directory of our Docker container. The volumes entry tells Docker to mount our _azuriteData folder into the container at /workspace.

Now if we go to our terminal and enter the command

docker-compose up

The first thing Docker will do is download the Azurite image; once the download is complete, the container should be up and running (that is, of course, if you have Docker installed and running on your local machine).


You should see something like the above. Now if you navigate to your Docker dashboard you should see your environment up and running.

A caveat: you may get an error along the lines of

open /Users/[user]/.docker/buildx/current: permission denied

In that case run the following command (or commands):

sudo chown -R $(whoami) ~/.docker
sudo chown -R $(whoami) ./_azuriteData

The above sets the current user as the owner of the Docker application folder as well as the folder which you map into your Azurite container; this should fix your Docker permission issues.
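To sanity-check that the emulator is reachable over HTTPS, you can point a quick console app at it. The sketch below is just that, a sketch: it assumes you've added the Azure.Storage.Blobs NuGet package and it uses Azurite's documented development account (devstoreaccount1 and its well-known key). Since mkcert installed its root CA into the system trust store, the HTTPS endpoint should be trusted without any extra configuration.

using Azure.Storage.Blobs;

// Azurite's well-known development account and key, pointed at the HTTPS blob endpoint.
var connectionString =
    "DefaultEndpointsProtocol=https;" +
    "AccountName=devstoreaccount1;" +
    "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;" +
    "BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;";

var blobService = new BlobServiceClient(connectionString);

// Create a throwaway container and list what's there; if this succeeds,
// the certificate and the port mappings are wired up correctly.
var container = blobService.GetBlobContainerClient("smoke-test");
await container.CreateIfNotExistsAsync();

await foreach (var item in blobService.GetBlobContainersAsync())
    Console.WriteLine(item.Name);

If this prints "smoke-test" without certificate errors, the emulator, the mkcert certificate, and the compose file are all doing their jobs.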

Thursday 28 December 2023

Dependency injection

I've spoken about the inversion of control pattern before; in essence it is a fundamental design pattern in object-oriented languages. It separates the definition of a capability from the logic that executes that capability. That may sound a bit strange, but what it means is that the class you interact with defines functions and methods, while the logic those functions and methods use is defined in a separate class. At run time the class you interact with either receives or instantiates a class which implements the methods and functions needed.

The idea is that you can have multiple implementations of the same methods and then swap out those implementations without modifying your main codebase. There are a number of advantages to writing code this way; one of the primary ones is that this pattern facilitates unit testing. However, in this post we'll focus on the benefits of having multiple implementations of the same interface.
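Before we get into the full example, here is the core idea in its smallest form: a consumer depends only on an abstraction, and the concrete implementation is injected at run time. (ReportService is a hypothetical class, just for illustration; it uses the IRepo interface we'll define below.)

// A consumer that only knows about the IRepo abstraction.
public class ReportService
{
    private readonly IRepo _repo;

    // Whichever concrete implementation is registered (LocalRepo, DockerRepo, ...) gets injected here.
    public ReportService(IRepo repo) => _repo = repo;

    public async Task<int> CountPeopleAsync()
        => (await _repo.GetPeopleAsync()).Length;
}

ReportService never needs to change when we switch from reading flat files to reading from Azurite; only the registration does.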


In the above UML diagram, we define an interface for a repository as well as a Person class. We then define two repo classes which both implement the IRepo interface. Finally, the Program class either receives a concrete implementation of the IRepo interface or receives instructions on which concrete implementation to use. (I've minimised the number of arrows to try and keep the diagram somewhat readable.)

Let's try to implement the above in our Minimal API. Start by creating the following folder structure with classes and interfaces:


It's common to have a separate folder for all interfaces; personally I prefer to have my interfaces closer to their concrete classes. To be honest, it really doesn't matter how you do it, what matters is that you are consistent. This is the way I like to set up my projects, but there's nothing wrong with doing it differently.

So let's start with the IPerson interface


namespace pav.mapi.example.models
{
    public interface IPerson
    {
        string Id { get; }
        string FirstName { get; set; }
        string LastName { get; set; }
        DateOnly? BirthDate { get; set; }
    }
}

As you can see from the namespace, I don't actually separate it from the Person class; the folder is just there as an easy way to group our model interfaces. Next let's implement our Person class.


using pav.mapi.example.models;

namespace pav.mapi.example.models
{
    public class Person : IPerson
    {
        public string Id { get; set; } = "";
        public string FirstName { get; set; } = "";
        public string LastName { get; set; } = "";
        public DateOnly? BirthDate { get; set; } = null;

        public Person() { }

        public Person(string FirstName, string LastName, DateOnly BirthDate) : this()
        {
            this.Id = Guid.NewGuid().ToString();
            this.FirstName = FirstName;
            this.LastName = LastName;
            this.BirthDate = BirthDate;
        }

        public int getAge()
        {
            var today = DateOnly.FromDateTime(DateTime.Now);
            var age = today.Year - this.BirthDate.Value.Year;
            return this.BirthDate > today.AddYears(-age) ? --age : age;
        }

        public string getFullName()
        {
            return $"{this.FirstName} {this.LastName}";
        }
    }
}

An important thing to call out is that this is a contrived example; giving the Id property both a public setter and a public getter is not ideal. However, I want to focus on the inversion of control pattern and not get bogged down in the ideal way of defining properties; I may tackle this in a future post.
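For what it's worth, one common alternative (just a sketch, not what the rest of this post uses) is an init-only setter, which lets object initializers and the System.Text.Json deserializer set the Id while preventing it from being reassigned afterwards:

public class PersonSketch
{
    // Settable only during object initialization; read-only afterwards.
    public string Id { get; init; } = Guid.NewGuid().ToString();
    public string FirstName { get; set; } = "";
    public string LastName { get; set; } = "";
}

// Usage: the Id can be supplied up front, but not changed later.
// var p = new PersonSketch { Id = "1", FirstName = "Robert" };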

Now on to defining our repo interface. It simply defines two methods: one to get all people and one to get a specific person.


using pav.mapi.example.models;

namespace pav.mapi.example.repos
{
    public interface IRepo
    {
        public Task<IPerson> GetPersonAsync(string id);
        public Task<IPerson[]> GetPeopleAsync();
    }
}

With that done, let's implement our local repo. It's going to be very simple: we are going to use the path from our local appsettings file to load a JSON file full of people.


using System.Text.Json;
using pav.mapi.example.models;

namespace pav.mapi.example.repos
{
    public class LocalRepo : IRepo
    {
        private string _filePath;

        public LocalRepo(string FilePath)
        {
            this._filePath = FilePath;
        }

        public async Task<IPerson[]> GetPeopleAsync()
        {
            var opt = new JsonSerializerOptions
            {
                PropertyNameCaseInsensitive = true
            };

            try
            {
                using (FileStream stream = File.OpenRead(this._filePath))
                {
                    var ppl = await JsonSerializer.DeserializeAsync<Person[]>(stream, opt);
                    if (ppl != null)
                        return ppl;

                    throw new Exception($"Could not deserialize {this._filePath}");
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

        public async Task<IPerson> GetPersonAsync(string id)
        {
            var people = await this.GetPeopleAsync();
            var person = people?.FirstOrDefault(p => p.Id == id);
            if (person != null)
                return person;

            throw new KeyNotFoundException($"no person with id {id} exists");
        }
    }
}

Next let's implement our DockerRepo


using pav.mapi.example.models;

namespace pav.mapi.example.repos
{
    public class DockerRepo : IRepo
    {
        public Task<IPerson[]> GetPeopleAsync()
        {
            throw new NotImplementedException();
        }

        public Task<IPerson> GetPersonAsync(string id)
        {
            throw new NotImplementedException();
        }
    }
}

For now we'll just leave it unimplemented; later on, when we set up our Docker container, we'll implement these methods.

Finally let's specify which implementation to use based on our loaded profile in our Main


using pav.mapi.example.models;
using pav.mapi.example.repos;

namespace pav.mapi.example
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);
            var flatFileLocation = builder.Configuration.GetValue<string>("flatFileLocation");

            if (String.IsNullOrEmpty(flatFileLocation))
                throw new Exception("Flat file location not specified");

            if (builder.Environment.EnvironmentName != "Production")
                switch (builder.Environment.EnvironmentName)
                {
                    case "Local":
                        builder.Services.AddScoped<IRepo, LocalRepo>(x => new LocalRepo(flatFileLocation));
                        goto default;
                    case "Development":
                        builder.Services.AddScoped<IRepo, DockerRepo>(x => new DockerRepo());
                        goto default;
                    default:
                        builder.Services.AddEndpointsApiExplorer();
                        builder.Services.AddSwaggerGen();
                        break;
                }

            var app = builder.Build();

            switch (app.Environment.EnvironmentName)
            {
                case "Local":
                case "Development":
                    app.UseSwagger();
                    app.UseSwaggerUI();
                    break;
            }

            app.MapGet("/v1/dataSource", () => flatFileLocation);
            app.MapGet("/v1/people", async (IRepo repo) => await GetPeopleAsync(repo));
            app.MapGet("/v1/person/{personId}", async (IRepo repo, string personId) => await GetPersonAsync(repo, personId));

            app.Run();
        }

        public static async Task<IPerson[]> GetPeopleAsync(IRepo repo)
        {
            return await repo.GetPeopleAsync();
        }

        public static async Task<IPerson> GetPersonAsync(IRepo repo, string personId)
        {
            var people = await GetPeopleAsync(repo);
            return people.First(p => p.Id == personId);
        }
    }
}

Before we test our code, we first need to set up a flat file somewhere on our hard drive:


[
  {
    "id": "1",
    "firstName": "Robert",
    "lastName": "Smith",
    "birthDate": "1984-01-26"
  },
  {
    "id": "2",
    "firstName": "Eric",
    "lastName": "Johnson",
    "birthDate": "1988-08-28"
  }
]

I created the above file at "/Volumes/dev/people.json"; it's important to know the path, because in the next step we are going to add it to our appsettings.Local.json file:

{
  "flatFileLocation": "/Volumes/dev/people.json"
}

With all that complete, let's go into our terminal and run our API with our Local profile:

dotnet watch run -lp local --trust

In our Swagger UI we should see three GET endpoints we can call.


Each one of these should work

/v1/dataSource should return where our flatfile is located on our local computer

/Volumes/dev/people.json

/v1/people should return the contents of our people.json file

[
  {
    "id": "1",
    "firstName": "Robert",
    "lastName": "Smith",
    "birthDate": "1984-01-26"
  },
  {
    "id": "2",
    "firstName": "Eric",
    "lastName": "Johnson",
    "birthDate": "1988-08-28"
  }
]

/v1/person/{personId} should return the specified person.

 {
    "id": "2",
    "firstName": "Eric",
    "lastName": "Johnson",
    "birthDate": "1988-08-28"
  }

Next, if we stop our API (ctrl+c) and restart it with the following command

dotnet watch run -lp http --trust

We again run our application, however this time with the Development profile. If we navigate to our Swagger UI and call our endpoints, the only one which will work is

/v1/dataSource, which should return the placeholder we specified in our appsettings.Development.json file:

Dev

The other two endpoints should both fail, since we never implemented them.

Tuesday 26 December 2023

MAPI Swagger

In my previous post we created a very simple, empty minimal API project; in this post we are going to add Swagger to it. You may be asking: wtf is that? Swagger lets you test your endpoints without the need for Postman or Insomnia. Neither of those tools is particularly better or worse than the other, and they're very handy for testing production code, but for dev work I find Swagger is the best option, because it's the easiest to work with.

To get started, open your 'csproj' file (it's the XML one); it should look like the following:


<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

</Project>

In your command prompt enter:

dotnet add package Swashbuckle.AspNetCore

You should see the following:

Microsoft projects generally use NuGet packages for feature installs; you can check out the Swashbuckle package at https://www.nuget.org/packages/swashbuckle.aspnetcore.swagger/. Your csproj file should now have a reference to the Swashbuckle.AspNetCore NuGet package and look something like the following:


<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net7.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Swashbuckle.AspNetCore" Version="6.5.0" />
  </ItemGroup>

</Project>


With that done, we now have to configure Swagger in our Program.cs Main method; from our previous post we should have something like the following:


namespace pav.mapi.example;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var app = builder.Build();

        var flatFileLocation = builder.Configuration.GetValue<string>("flatFileLocation");

        app.MapGet("/", () => flatFileLocation);

        app.Run();
    }
}

If you run your application you'll see either "Dev" or "Local" printed out on your screen, depending on which profile you load. The thing is, that only happens because we mapped the root URL to the value from our appsettings file; if we change that to, say, "v1/dataSource", then we'll just get a blank page.


public static void Main(string[] args)
{
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    var flatFileLocation = builder.Configuration.GetValue<string>("flatFileLocation");

    app.MapGet("/v1/dataSource", () => flatFileLocation);

    app.Run();
}

 
If we navigate to "http://localhost:5050/v1/dataSource" we'll now see our "Dev" or "Local" response, based on which profile we have loaded. Generally APIs have multiple endpoints, and it would be nice to be able to not only see them all but also test them without having to navigate to each URL. This is exactly what Swagger provides, so let's configure it in our Main method.

Before our builder.Build() command, add the following two lines of code:

builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

These two lines more or less configure Swagger to map all of our endpoints and generate a Swagger UI. However, that's not enough; we still need to enable it in our project. Most examples show you how to do this for the Development profile, but we also have our Local profile; after our builder.Build() command add the following.


if (app.Environment.EnvironmentName == "Local" ||
    app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}


With that done, when we start our application in either the "Development" or our custom "Local" profile, Swagger will be added to our project. To navigate to your Swagger page, simply append "swagger" to the root URL of your API, "http://localhost:5050/swagger/index.html", and you should see something like the following:


And that's it: in your "Local" as well as your "Development" profile you can now test your API.

Saturday 23 December 2023

Minimal API

When it comes to data access, my preferred approach is to use a dotnet core minimal API; for me this is the ideal middle man between your data and your user interface. I like it because it's simple to set up and simple to understand. Though it doesn't come with all of the complexity of a traditional MVC API, it also doesn't come with all of the extra features of its more 'complex' or 'advanced' (depending on how you look at it) counterpart.

That said let's get started, open your terminal and navigate to your desired parent directory.
Execute the following commands:

dotnet new web -o pav.mapi.example --use-program-main

followed by

code pav.mapi.example


The first command says: create a web project with the name 'pav.mapi.example' and use the traditional public static Main template for the Program.cs file. I do this because I'm old, and it's what I am used to.

The second command just tells VS Code to open the pav.mapi.example folder in a new instance. If you wanted to open it in the same instance, you could throw on the -r option, so your command would look like 'code pav.mapi.example -r', and it would open in the current instance of VS Code you have running.


Your app should look something like the above. Let's start with the explorer: notice that you have two appsettings files. The way it works is that dotnet always loads the 'appsettings.json' file and then, when appropriate, loads the corresponding 'appsettings.<environment>.json' file on top of it.

The $10 question is: how do I specify which extra appsettings file to use? Well, before we answer that, let's add an 'appsettings.Local.json' file. Generally I like to have a version of everything that I can run in a standalone fashion; this way I can develop and test things in isolation without the need for an SQL database, a cloud storage container, or Docker.

In your 'appsettings.Local.json' file add a single entry "flatFileLocation": "Local", and (you guessed it) in your 'appsettings.Development.json' file add a single entry "flatFileLocation": "Dev". Later on we may expand on this, but for now, if we can start our application in one of these two modes, that will be a win.
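For reference, the two files end up looking like this:

appsettings.Local.json

{
  "flatFileLocation": "Local"
}

appsettings.Development.json

{
  "flatFileLocation": "Dev"
}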


Now open up your Program.cs file


namespace pav.mapi.example;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var app = builder.Build();

        app.MapGet("/", () => "Hello World!");

        app.Run();
    }
}

This is why I prefer the Minimal API project: that really is all there is to it. If we run our application now with "dotnet run", we'll see the following in our browser.


Let's try to swap that out for our appsettings flatFileLocation value; modify your Program.cs code to the following:

namespace pav.mapi.example;

public class Program
{
    public static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        var app = builder.Build();
        var flatFileLocation = builder.Configuration.GetValue<string>("flatFileLocation");
        app.MapGet("/", () => flatFileLocation);

        app.Run();
    }
}

Notice that we added a 'flatFileLocation' variable and assigned it our value from our appsettings file. With this change, if we stop our project with 'ctrl+c', run it again with 'dotnet run', then reload our browser, we'll get the following.


Which is pretty good; we are reading from our 'appsettings' file. That's fine and dandy, but how do we specify which file we want to load? Well, for that we need to open our Properties folder and take a look at our 'launchSettings.json' file:

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:1286",
      "sslPort": 44327
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5019",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7177;http://localhost:5019",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}

Take a closer look at the profiles section; notice that under the environment variables we have an entry "ASPNETCORE_ENVIRONMENT": "Development". Now, we could very well switch "Development" to "Local" and run our application that way.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:1286",
      "sslPort": 44327
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5019",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Local" // Take a look here
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7177;http://localhost:5019",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}


Though this is better than having two versions of our application, it's not exactly the ideal solution, so let's add a profile specifically for our "Local" environment.

{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:1286",
      "sslPort": 44327
    }
  },
  "profiles": {
    "http": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5020",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "batman": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "http://localhost:5050",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Local"
      }
    },
    "https": {
      "commandName": "Project",
      "dotnetRunMessages": true,
      "launchBrowser": true,
      "applicationUrl": "https://localhost:7177;http://localhost:5019",
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    },
    "IIS Express": {
      "commandName": "IISExpress",
      "launchBrowser": true,
      "environmentVariables": {
        "ASPNETCORE_ENVIRONMENT": "Development"
      }
    }
  }
}


I named the local launch settings profile batman on purpose; this is because I want to drive home that you are specifying the profile via the launch settings, which in turn chooses the appsettings, not the other way around. I placed my custom profile second to drive home the fact that, by default, when you 'dotnet run' it's the first profile that is loaded if none is specified. If you were to execute 'dotnet run', the browser would return "Dev".

Now, finally, for the big reveal: if you want to specify the profile, you simply type in

dotnet run -lp batman

where -lp stands for launch profile.

And that's it: you now have an app that can be customised based on your appsettings JSON files.

One final note: go ahead and replace batman with local, just to remove the silliness from our example.

Tuesday 24 October 2023

Product analytics

Product analytics is the act of capturing and analysing how users interact with a digital product or service. A product analytics tool is a type of software that captures and exposes usage patterns and insights for digital products such as web or mobile applications. Beyond simply tracking and collecting data about product usage, the real value that an analytics tool provides is the ability to mine that data. At a high level there are three ways to comprehend product usage through data:

Trends
Graph engagement with certain features or pages and compare it against other parts of the product over time.

Funnels
Track the levels of drop-off at each step across a specific subset of features and pages.


Paths
View all journeys that users take, either leading up to or following a specific interaction, as well as a measure of how common or uncommon the next step is.

What product analytics is not:

  • Web analytics: the collection, analysis, and reporting of website data in order to improve the web user experience.
  • Marketing analytics: the process of leveraging data to evaluate the effectiveness of marketing campaigns and strategies.
  • Business intelligence: the act of bringing together all of an organisation's data in order to generate business insights.

Before collecting analytics, the product manager must identify which analytics to capture. To get started, the product manager can ask themselves the following questions:
  • How can we ensure that our users are successful with our products?
  • What's blocking users from getting value form our products?
  • How can we add value to the product that users are willing to pay for?
  • Are we building products and features that users want and need?

In the past, product managers answered the above questions through a combination of asking questions and intuition; today, with product analytics, these answers can be found in data. The question is what data, how to collect it, and where to analyse it. Product analytics aims to support intuition with evidence-based decision making. It doesn't replace qualitative data; it augments it with quantitative facts, helping maximise the value of product decisions.

Quantitative data can tell you what a user did; qualitative data tells you why they did it. The combination of the two is what a product manager needs to make informed decisions.

Product analytics tools generally capture two high level types of data:
  • Events: any user action in software applications
    • Clicks
    • Slides
    • Gestures
    • Play commands
    • Downloads
    • Text field input
  • Event properties: the specific attributes of the tracked interactions.
    • Device
    • Software version
    • Custom attributes

Tracking and reporting

At its heart, product analytics boils down to tracking metrics and providing visibility of those metrics. These metrics need to be tracked at the individual user level as well as the segment level; that is to say, for B2B businesses the individual interactions are important, however it's essential to ensure that the organisation you are serving is finding the value they are paying for.

Key product analytics definitions:
  • Acquisition: refers to the process of gaining new users of your product. Teams can use product analytics to track acquisition via metrics like new user signups and logins.
  • Cohort: a subset of your user base. Cohorts typically have a time component to them, for example your August Cohort of new users. There are also behavioural cohorts, which are essentially the same thing as segments.
  • Engagement: this tracks how users interact with an application at the most granular level. You can measure Product engagement with a variety of metrics, for example using Adoption, Stickiness, and time spent in-app.
  • Event: An action a user takes within a software product. Some generic examples of events are: Share Dashboard, Select Option, Change View, and Enter New User.
  • Funnel analysis: a measurement of how customers move through a defined series of steps in your application. This helps provide clarity as to where users drop off when following these steps, and where they go from that drop-off point.
  • Growth: a measure of the net effect of your user acquisition and retention efforts. A product and company achieves growth by adding new customer accounts or by increasing usage within existing customer accounts (or ideally, both).
  • Lagging indicators: metrics that you want to move for the health of the business, but for which it may be harder to see results in a short timeframe.
  • Leading indicators: metrics that are measurable in a shorter timeframe, and have a high probability of affecting your lagging indicators.
  • Metrics: are a standard of measurement by which efficiency, performance, progress, or quality of a plan, process or product can be assessed.
  • Path analysis: a visualisation of what users are doing before or after using a specific page or feature in your application, shown as the sequence of actions that users took before or after the target event.
  • Product adoption: also referred to as activation, this measures when users understand your product’s value and perform certain actions, for example engaging with key features and moving through account setup workflows.
  • Retention: the percent of users or customer accounts still using your product after they initially install or start using it. Another way to understand if users are continuously engaging with your product is with Stickiness, which measures how many users return to the product on a regular basis.
  • Segment: A subset of users that share a common characteristic, or multiple common characteristics. 

Product analytics strategy

"Track everything" is not a valid strategy; before you decide what to track you must decide what you want to know. Start with an objective, then work your way back to the granular data that you need to collect. Once you understand what you want to know, identify what information you need to gain insights into your objective, then identify the metrics you need to gain those insights. Some common strategic goals could be:
  • Understand user: in order to build the right products and features
  • Reduce friction: simplify the use of a product or feature by optimising it to usage patterns
  • Increase revenue:  leverage analytics to identify opportunities for customer expansion or comprehend granular ROI per feature.
  • Drive innovation: Gain insights in the direction you should take your product based on usage.

The takeaway is to identify specific objectives, and then work your way backwards to the data you need to support those objectives.

Metrics

Metrics are the evidence we use to gauge the quality of our product from various dimensions:

Baseline usage
  • How many active users do I have today, and in the last week and month?
  • How do users get around my product? (what's the user journey map)
  • Which features do users engage with the most?
  • Are users finding important features of the product quickly and easily?
Adoption
  • Is usage of key features increasing or decreasing?
  • Which features and pages are users having challenges with?
  • Which features do users ignore?
Retention
  • How frequently are active users coming back? (understand usage cadence by segment)
  • How many users continue using my product within the first few months?
  • How many users who interact with a key feature come back?
Performance
  • How quickly is data served to the user?
  • How many bugs does our product have?
Value
  • How many paying customers do we retain per billing cycle?
  • At what rate are we losing paying customers?
  • At what rate are we gaining new customers?

Though there can be dozens of metrics and supporting dimensions which we could capture, there are three primary metrics that all organisations are going to want to understand.

Business outcomes
Represents how your product impacts key business and financial outcomes in both the short and long term.

Product usage
Reflects how users behave inside your product: which features are most used or suffer from friction.

Product quality
Measures how well your product technically works: response time, bugs per feature, and so on.

There are lagging and leading indicators. Lagging indicators measure past performance; they inform you how your product was doing in the past. They are the metrics you want to move for the health of the organisation, but they are difficult to read in the short term. Leading indicators, on the other hand, can be measured in a shorter time frame and have a high probability of influencing your lagging indicators; they inform you of future performance.


Not only is it not valuable to measure everything, it's also not feasible; this is why, as a product manager, you must strategically choose the metrics that will provide you with valuable insights. The SMART framework is an excellent rubric to help focus on the metrics that matter most to the organisation:
  • Specific: Choose an objective with a numeric goal in mind
  • Measurable: Ensure that your metric is quantitative in nature
  • Actionable: Ensure that each metric can lead to an action
  • Relevant: Your metrics should tie back into your objectives
  • Timely: A balance of leading and lagging indicators, to measure progress over time and understand performance

For example:
  • Goal: Upsell freemium users to premium accounts
  • Strategy: Target heavy users with in-app discounts to incentivise an upgrade
  • Product outcome: Users receive in-app notifications of limited-time preferred pricing
  • Metrics: Compare regular upgrade patterns against incentivised upgrade patterns

Identifying metrics frameworks

OKRs (Objectives and Key Results)
A goal-setting framework utilised by organisations to align teams and individuals with overarching objectives and measure progress effectively. Objectives are high-level, inspirational goals, while Key Results are specific, measurable metrics that define success.

KPIs (Key Performance Indicators)
Key Performance Indicators (KPIs) are quantifiable metrics used by digital organisations to evaluate and measure various aspects of their performance and success over a period of time. These metrics provide valuable insights into how well the organisation is meeting its goals and can help inform decision-making and strategy.

One metric that matters (OMTM)
A strategy in which an organisation or team identifies a single key performance indicator (KPI) that is most closely aligned with its primary objective or core goal, and focuses intensely on tracking and optimising that particular metric.

North star metric
The "North Star Metric That Matters" concept is a strategic approach in which an organisation or team identifies a single, central metric that represents the core value it delivers to its users and aligns all efforts and objectives around optimising that metric over a period of time. Unlike (OMTM) the North star metric is composed of numerous sub-metrics, much like the customer health value.

Check metrics (balance metrics)
Ensure that you are not focusing too much on a particular metric. Check metrics can be thought of as a sanity check to ensure that your primary metrics are not obfuscating an underlying issue your product may be suffering from.

Mapping your North star metric
  1. Start with mapping your product's vision: what is its future direction?
  2. Define what your product's immediate goals & priorities are.
  3. Identify a North Star metric which you can use to measure the core value of your product.
  4. Identify how the North Star metric is calculated: what underlying data feeds it, and what proportion each input represents.
  5. Determine how your North Star aligns with your immediate goals & priorities.
  6. List two to five leading indicators that you can track that are likely to cause movement in your North Star.
  7. Identify any potential consequences of strengthening your North Star metric which may be in conflict with your long-term vision.
  8. Leverage check metrics to observe potential consequences of increasing your North Star metric.

Product analytics hierarchy of needs

This is a simple step-by-step hierarchy: get the right data, properly analyse it, then leverage it to bring real change to your organisation's decision making and product development processes.


Collect data

This is the last thing you define when establishing a data strategy, but the first thing you need in your hierarchy. Raw data is the foundation of product analytics. In this context, data comes in two related forms:
  • Data about the people who use the product
      • Visitors
      • Accounts
      • Segments
      • User metadata
  • Data about how they use the product
      • Events
      • Page loads
      • Clicks 
      • Feature usage

Refine data into metrics

The second step is to transform your raw data into more meaningful metrics:
  • Breadth: A measure of how many total users you have, as well as how many users you have per account.
  • Depth: How much of the product your customers are using, specifically how many key features of the product they're utilising.
  • Frequency: How often users log into your product in a given time frame.

What counts as a meaningful metric is context-specific to the organisation as well as the product; generally it is recommended to leverage a smaller number of clear metrics that are easily measured and that align the teams you work with around a central, impactful objective.

Build metrics into reports

Build reports from your metrics, establish tables and graphs that describe or compare user behaviour over time:
  • Track how metrics fluctuate over time
  • Identify areas of improvement
Paths: visualisations of what users are doing before or after using a specific page or feature in your application, shown as the sequence of actions that users took before or after the target event.
Funnels: a measurement of how customers move through a defined series of steps in your application. This helps provide clarity as to where users drop off when following these steps, and where they go from that drop-off point.
Retention: are people using the product over time; are they coming back to the product?

Take action

Now that your metrics are visible through reports, you can turn information into action based on tangible data; you are now able to make evidence-based decisions.


In order to champion action throughout the organisation, the product manager must arm themselves with various tools to drive change through action.

Tool #1 Analysis
  • Review the data you have
  • Question the results
  • Formulate hypotheses
  • Validate your hypotheses
Tool #2 Share
  • Open up your data
  • Share your insights
  • Leverage an organisation-facing dashboard
Tool #3 Storytelling
  • Data & insights benefit from a story, what are they saying?
  • Share insights in a slide deck
    • Contextual information
    • Logic
    • User quotes 
    • Supplemental data

At the end of the day, reports are good, but if you do not take action on the insights gained from them, they are of limited value.

Data actualisation

Data actualisation is less of a step and more of a state of being; it is the result of hard work and advocacy. Once your organisation has reached this level, it is truly a data-informed organisation which leverages:
  • Accurate, clean, and actionable data
  • The ability to ask and answer progressively more difficult questions
  • Intuition only as a starting point, validated with metrics