Monday 22 July 2024

Composite adapter pattern

The (composite) adapter pattern is exactly what it sounds like. Think of one of those universal travel adapters, you know the ones: the kind you use abroad to plug a North American plug into a European outlet, or vice versa.



In development we have the same idea. Often we have data in the form of an object; this could be a class represented by an interface, or maybe just a class. Whatever the case, we need to consume that data in a function, and though our object has everything we need, it does not implement the interface or type the function expects.

For example, let's say that we have the following TypeScript interface and class for a person.


export interface IPerson {
  birthDay: number;
  birthMonth: number;
  birthYear: number;

  firstName: string;
  lastName: string;

  getAge(): number;
  getFullName(): string;
}

export default class Person implements IPerson {
  birthDay: number;
  birthMonth: number;
  birthYear: number;
  firstName: string;
  lastName: string;

  constructor(birthDay: number, birthMonth: number, birthYear: number, firstName: string, lastName: string) {
    this.birthDay = birthDay;
    this.birthMonth = birthMonth;
    this.birthYear = birthYear;

    this.firstName = firstName;
    this.lastName = lastName;
  }

  getAge(): number {
    const today = new Date();
    let age = today.getFullYear() - this.birthYear;

    // decrement if this year's birthday hasn't happened yet
    // (months are stored 1-based; Date.getMonth() is 0-based)
    const hadBirthday =
      today.getMonth() + 1 > this.birthMonth ||
      (today.getMonth() + 1 === this.birthMonth && today.getDate() >= this.birthDay);

    if (!hadBirthday)
      age--;

    return age;
  }

  getFullName(): string {
    return `${this.firstName} ${this.lastName}`;
  }
}

Keep in mind this is a contrived example. Let's say our solution has a function that requires an IHuman interface, which looks like the following.


export interface IHuman {
  birthDate: Date;
  fullName: string;

  getAge(): number;
}

What we can do is create a PersonToHumanAdapter. You can think of this as a wrapper that takes an IPerson as a constructor argument and wraps it in a class that implements the IHuman interface.

As UML this can look like the following 

It's rather trivial: we create a PersonToHumanAdapter class, which implements the IHuman interface and takes in the IPerson implementation; it then adapts the IPerson implementation to the IHuman interface. Sometimes this is also referred to as a simple wrapper; however, I find the word Adapter more descriptive.


export class PersonToHumanAdapter implements IHuman {
  birthDate: Date;
  fullName: string;
  getAge: () => number;

  constructor(p: IPerson) {
    // birthMonth is 1-based; the Date constructor expects a 0-based month
    this.birthDate = new Date(p.birthYear, p.birthMonth - 1, p.birthDay);
    this.fullName = p.getFullName();
    // bind so `this` inside getAge still refers to the wrapped person
    this.getAge = p.getAge.bind(p);
  }
}
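
To see the adapter in action, here's a minimal usage sketch; greet is a hypothetical consumer that only knows about the IHuman interface:

// a hypothetical consumer that only accepts IHuman
function greet(h: IHuman): string {
  return `${h.fullName} is ${h.getAge()} years old`;
}

const person = new Person(14, 3, 1990, "Jane", "Doe");

// Person does not implement IHuman, but the adapter does
console.log(greet(new PersonToHumanAdapter(person)));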

An important distinction to make is that adapters are dumb: they do not modify or add functionality, they just make an implementation of a class usable by a client. In the above example we simply assigned all of the members of the person object to our adapter; however, we could have created a private person variable and leveraged it instead.


export class PersonToHumanAdapter implements IHuman {
  private _p: IPerson;

  public get birthDate() {
    const p = this._p;
    // birthMonth is 1-based; the Date constructor expects a 0-based month
    return new Date(p.birthYear, p.birthMonth - 1, p.birthDay);
  }
  public set birthDate(birthDate: Date) {
    const p = this._p;
    p.birthDay = birthDate.getDate(); // getDate() is the day of the month; getDay() is the day of the week
    p.birthMonth = birthDate.getMonth() + 1;
    p.birthYear = birthDate.getFullYear();
  }
  public get fullName() {
    return this._p.getFullName();
  }
  public set fullName(fullName: string) {
    const p = this._p;
    const parts = fullName.split(" ");
    p.firstName = parts[0];
    p.lastName = parts[1];
  }

  constructor(p: IPerson) {
    this._p = p;
  }

  getAge(): number {
    return this._p.getAge();
  }
}

At the end of the day it really depends on the nuances of your particular requirements.

A more generic visualisation would be the following

Think of the client as "Our" program, the thing that is trying to use a function called request, which is defined in the Target interface. Our adapter implements the Target interface and tells our client that yes I have an implementation of the Request function, the adapter wraps the adaptee and lets our "Program" use the "Adaptee" even though the Adaptee does not implement the correct interface. The Adapter wraps the SpecificRequest function which is defined in the Adaptee and returns it's result to the client when called via the adapter. 
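
Translated into code, the generic shape looks something like this; a minimal sketch where Target, Adaptee, Adapter and client are illustrative names taken from the UML, not types from our earlier example:

// Target: the interface our client expects
interface Target {
  request(): string;
}

// Adaptee: has the behaviour we want, but not the right interface
class Adaptee {
  specificRequest(): string {
    return "specific result";
  }
}

// Adapter: implements Target by delegating to the wrapped Adaptee
class Adapter implements Target {
  constructor(private adaptee: Adaptee) { }

  request(): string {
    return this.adaptee.specificRequest();
  }
}

// the client only ever sees Target
function client(t: Target): string {
  return t.request();
}

client(new Adapter(new Adaptee()));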

Friday 23 February 2024

UML Activity Diagrams

UML (Unified Modeling Language) activity diagrams are graphical representations used to model workflows, processes, and behaviours within a system. They provide a visual overview of the steps involved in accomplishing a task or achieving a specific outcome. Generally most UML diagrams are appropriate for software design; however, a few can be leveraged to understand systems from both a human as well as a technology perspective. In the previous example on simple flow charts we created a high level model of a checkout experience; for this example we'll focus on the cash register process.

Symbols

Activity diagrams, unlike simple flow charts, have a more stringent set of symbols; this is because UML is much more standardised, whereas flow charts are much more pragmatic. Generally activity diagrams are not appropriate for all audiences; they are a great way for business analysts and software engineers to collaborate and ensure that they have a shared understanding.

Symbol Description
A solid dot represents the start of the process
A solid dot with a circle around it represents the end of the process
A circle with an X in the middle represents the end of a branch or activity
A diamond indicates that a branch is created, or that two branches merge into one
A rectangle with rounded corners represents an activity, something that is done
A bar with a flow going into it and a number of flows exiting depicts parallel processes
A bar with multiple flows entering and only one exiting depicts a pause point, where all parallel processes must converge
The note symbol allows the diagram creator the opportunity to annotate their diagram with some contextual information for the reader
The send signal notation, indicates that the process is outputting a message to another process
The receive signal notation, indicates that the process is waiting for input from another process
Timer notation, indicates how long a process is supposed to wait before continuing
Frequency notation, indicates a timer job, how long before triggering an action
Interrupt notation, indicates that something has occurred which needs to branch to a different flow
Call other process, indicates that the activity is actually a sub process

Example

Previously we created a simple flowchart depicting a checkout process, this time let's create an activity diagram laying out the cash register process.


Again, this is a quick, contrived example that only touches upon this process. There is no doubt in my mind that if you sat down with a cashier you would find many more nuances and improvements to this process. Notice, however, that if you showed this diagram to a cashier, they would most likely be confused, whereas a simple flow chart would most likely facilitate a discovery workshop.

Monday 19 February 2024

Swim lanes

Swim lanes are not exclusive to any particular business processing technique, and can in fact be used with simple flowcharts, activity diagrams, BPMN 2.0 or almost any other process modelling technique. Swim lanes are visual containers that partition a business process diagram horizontally or vertically. They represent different participants, roles, departments, systems, or organisational units involved in the execution of a process. Swim lanes provide clarity and structure to process diagrams by categorising activities based on their performers or responsibilities.

The above is called a 'Pool', and this pool has swim lanes. Here I've depicted vertical swim lanes, but there is nothing wrong with using horizontal lanes. I generally choose vertical ones because most mediums have more vertical space than horizontal, not to mention that no one likes reading sideways text; however, the following is perfectly legitimate.

Whether the lanes are horizontal or vertical is completely inconsequential; the value of the swim lane is twofold: it easily identifies the actor, and it segments their contribution to the process.

One thing to keep in mind about the above is that it is very much a contrived example; one could argue that this is in fact multiple business processes, that the stock clerk is a variation, etc.

The takeaway here is simple: swim lanes segment a process by actor. They represent different participants, roles, departments, systems, or organisational units involved in the execution of a process, and they provide clarity on who does what, and at which point, during a process.

Friday 16 February 2024

Simple flowcharts

Flowcharts are visual representations of processes or workflows, often used in various fields such as software development, engineering, business management, and more. They provide a clear and structured way to illustrate the sequence of steps within a system or a procedure. 

Symbols

Simple flowcharts have a number of standard symbols and notations; however, they are a pragmatic approach to process modelling, and they often incorporate symbols outside their loosely defined standard set. This pragmatism is as much a benefit as it is a drawback: it is very easy to overuse flowcharts, because anyone can understand them, and often initial drafts become finalised documents even though other notations may be more appropriate for the particular thing being modelled.

I generally look at simple flowcharts as rough sketches, an easy and quick way to discuss a process, but by no means the finished deliverable.

Symbol Description
The start stop symbol is exactly what it sounds like, it indicates that the process is started or terminating
An action or step is something that happens during the process, this is generally where the work is done
The input or output, represents a need by the process from a participant to continue, or an output which a participant needs to proceed
Decision, a decision symbol is a split in logic, it just means that based on some condition, the process enters an alternative flow
Merge, often times during a process there is a split, the process can have multiple flows which run in parallel, the merge represents the coming together of these flows, think of it as parallel actions that eventually must wait for each other to complete
An artifact is similar to an output; however, the difference is that the artifact will exist after the process is complete
The artifacts symbol simply represents multiple artifacts
The annotation is a simple decoration; it provides the viewer of the process some contextual information that may not be obviously communicated via the diagram itself
The process link out symbol indicates that this flow continues somewhere on the page; the content of the symbol is the key to look for in the corresponding link-in symbol
The process link in symbol indicates that this flow is the continuation from somewhere on the page; the content of the symbol is the key to look for in the corresponding link-out symbol
The off page link simply indicates that this process continues on a different page; generally the content of this symbol indicates the continuation page
The subprocess symbol indicates that this step is actually its own sub process, and generally indicates where that sub process is defined
The above are the basic symbols one could use to model just about any process. These symbols are simple to understand, and though they may not get into the granular details of technical implementations, they are an excellent way to simply depict a process for the purpose of a general discussion.

Grocery store checkout example

Keep in mind that the following is a simplification; there are a large number of sub processes that are not depicted. Things like: what if the client does not have the means to pay, what if they will soon return to pay, what if the item doesn't have a price. Many more variations could exist; however, the following does serve the purpose of a simple depiction.



Friday 9 February 2024

Business process scoping

The core idea of Business Process Modelling (BPM) is to understand the outcome, sequence and activities needed to achieve a specific result, and to define the rules of interaction between all participants.

Any business process is an end-to-end set of activities which collectively respond to an event and transform information, materials, and other resources into outputs that deliver value directly to the customer(s) of the process. This value may be realised internally within an organisation, or it may span several organisations.

When looking to automate a process, at a high level there are only four aspects to any process:
  1. Trigger: What starts the process
  2. Result: what does the process accomplish
  3. Steps: What are the steps in the process
  4. Needs: What are the specific needs for each step
If you can identify all of the above, you can successfully map any business process. This may seem simple enough on paper; however, in real life this is generally where the murky waters start. Organisations often do not have clear-cut business processes; most processes are born out of necessity and start relatively simple, but over time they grow and mutate, and what began pragmatically transforms over the years into a behemoth. Many times these 'organic' processes are never documented, and more often than not they reside in someone's head, or in a group of people's heads. In the latter case, various stakeholders often understand only part of the process, or worse yet, have varying opinions on what the actual process is. For this reason, before modelling anything, it is important to lay out the boundaries of the process.

There are a number of business process scoping methods; they help you understand the environment around the process, the value of the process, and a high level overview of the process. By understanding these, you can establish the boundaries of the process before modelling it.

SIPOC

SIPOC is an acronym that stands for Suppliers, Inputs, Process, Outputs, and Customers. It is a high-level process mapping tool used in business analysis and process improvement.
  • Suppliers: These are the entities that provide the inputs needed for the process to function.
  • Inputs: These are the resources, materials, or information required to initiate the process.
  • Process: This refers to the series of steps or activities that transform inputs into outputs.
  • Outputs: These are the results or products generated by the process.
  • Customers: These are the individuals or entities who receive the outputs of the process.
SIPOC diagrams help to identify and understand the relationships between these key elements of a process, providing a clear overview that aids in analysing and improving processes within an organisation.
  • Suppliers: from where all of your inputs come; there could be one supplier or dozens.
  • Inputs: all of the tangible or intangible things you get from your suppliers, and which you need for your process to produce outputs.
  • Process: a list of all of the steps you take to transform your inputs into your output(s).
  • Outputs: the results of your process; ideally each process should be mapped to one output, however every rule has its exceptions.
  • Customers: the people/organisations/departments, etc. who will benefit from the outputs.

Keep in mind that the SIPOC is not a tool meant to model your process; its purpose is to have a broader conversation around the upstream and downstream aspects of your process, to understand the suppliers and their inputs, as well as the outputs and their customers.

IGOE

The IGOE (Inputs, Governance, Outputs, and Enablers) framework is a method used in systems thinking and analysis. It breaks down a system into four main components:
  • Inputs: These are the resources, materials, or information that are utilized by the system.
  • Governance: The rules, regulations, and decision-making structures that guide the system.
  • Outputs: These are the results or products generated by the system in pursuit of its goals.
  • Enablers: Factors that facilitate or support the achievement of the system's goals
The IGOE framework places the 'Process' at the centre and then looks to the left and right of it, as well as other factors which impact the process. 


This approach again has its shortcomings; however, the two analysis approaches together provide a strong understanding of the process.

Process scoping

Process scoping is a hybrid of the previous two approaches; it creates a holistic view of the particular process. The advantage of merging the two is a holistic overview in one model; the downside is that this model is complex to create as well as to understand. For this reason it may make more sense to use the previous two models to gain the necessary understanding, and then combine them into one model for a cohesive representation. It is made up of seven parts.
  1. Outcomes: the result of the process
  2. Process steps: the chain of granular steps which result in the outcome
  3. Triggers: anything which starts the process
  4. Participants: every individual or group whose input is required for the process
  5. Variants: Any edge or alternative flows to the main process. 
  6. Governance: The rules, regulations, and decision-making structures that guide the system.
  7. Enablers: Factors that facilitate or support the achievement of the system's goals


Generally, when creating a model such as this, one would work backwards from the outcomes through to the triggers, then work outwards through the participants, variants, governance and enablers; this provides a high level overview of the process and all of the influences surrounding it.

Establishing granularity

Regardless of scoping technique, you always end up asking yourself: is this a process of processes? That is, are any steps within my process processes themselves? Let's take a look at the following process steps:

  • Register a lead
  • Score a lead
  • Update status of a lead
  • Sign a contract with a lead
  • Register service request
  • Dispatch a field worker
  • Process payment
  • Assess Service quality
  • Apply correction
  • Produce report

We can ask ourselves: is this one process, or are there multiple processes here? Though all of the above are granular steps in an overall flow, they could be segmented into four different processes.

Token analysis

In token analysis, a business process should only handle one thing at a time: the token. Each step should impact the token in some way or form: transform it, capture some information about it, or route it. If there is a change in token between steps, then you are most likely dealing with a separate business process.

Process a lead
  • Register a lead
  • Score a lead
  • Update status of a lead
  • Sign a contract with a lead

Process a request
  • Register a service request
  • Dispatch a field worker
  • Process payment

Assessment of services
  • Assess Service quality
  • Apply correction

Generation of report
  • Produce report

As you can see, the 'macro' flow deals with four different tokens; hence it can be segmented into multiple processes, one for each change in token.

Takeaway

As you may recall we have four parts to a business process:
  1. Events: Things that happen outside of the process, but have an effect on the process
    • Triggers: an action that starts the process
    • Timers: an interval which starts the process
    • Messages: information which the process receives
    • Error: a fault that impacts the process flow
  2. Outcomes: what does the process accomplish
    • Every process exists to deliver a specific repeatable outcome
    • A process must have at least one outcome if not several
  3. Actions: What are the steps in the process
    • Each action is an activity carried out by an agent; it may be a person, organisation or an automated solution.
    • Actions represent an activity used to produce a final or intermediate result
    • Actions may be standalone steps, or they may represent a sub-process
  4. Participants: Who or what performs the actions within the process
    • The executors as well as any relevant parties of the actions or process:
      • Supervisors
      • Informed party
      • Decision maker
      • Operator
At a macro level, every process should consist of the four key elements above.

Monday 8 January 2024

Docker repo

With our Azurite docker image running, it's time to configure our docker repo. Let's start by adding a Docker profile to our 'launchSettings.json' file.


"Docker": {
"commandName": "Project",
"dotnetRunMessages": true,
"launchBrowser": true,
"launchUrl": "swagger",
"applicationUrl": "http://localhost:5050",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development"
}
},
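
If you want to launch with this profile from the terminal rather than your IDE, the dotnet CLI can select it by name:

dotnet run --launch-profile Docker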

As I've mentioned in the past, your naming conventions are very important; they are what let future you figure out what the hell you did. In this case, all you're going to have to remember is to check your 'launchSettings.json' file, and you'll be able to follow the breadcrumb trail of what this profile is meant for.

Before we continue, we're going to have to add a NuGet package to our project so we can work with Azure Blob Storage; from your project's directory (where your .csproj file lives), run the following command:

dotnet add package Azure.Storage.Blobs

You should see the following in your terminal.

We need to add one more NuGet package, and that is the 'Azure.Identity' package:
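
dotnet add package Azure.Identity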

Your .csproj should now have the 'Azure.Storage.Blobs' package reference added:

<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>net7.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>

<ItemGroup>
<PackageReference Include="azure.storage.blobs" Version="12.19.1" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="6.5.0" />
</ItemGroup>

</Project>

These will let us leverage Azure's prebuilt classes for interacting with our blob storage.

Before we dive into our DockerRepo class, let's open our app settings file and add our Azurite development connection string:

{
  "flatFileLocation": "DefaultEndpointsProtocol=https;
    AccountName=devstoreaccount1;
    AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;
    BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;"
}

I broke the string up onto multiple lines for ease of reading; however, you'll have to concatenate it back onto one line.

Now we can finally start coding. Open up your 'DockerRepo.cs' file; it should look like the following:


using pav.mapi.example.models;

namespace pav.mapi.example.repos
{
    public class DockerRepo : IRepo
    {
        public Task<IPerson[]> GetPeopleAsync()
        {
            throw new NotImplementedException();
        }

        public Task<IPerson> GetPersonAsync(string id)
        {
            throw new NotImplementedException();
        }
    }
}

We configured the class but never implemented the methods; let's take the opportunity to do so now. We're going to create a constructor that takes in a connection string, and the logic for our two get functions.

using System.Text.Json;
using Azure.Storage.Blobs;
using pav.mapi.example.models;

namespace pav.mapi.example.repos
{
    public class DockerRepo : IRepo
    {
        BlobContainerClient _client;

        public DockerRepo(string storageUrl)
        {
            this._client = new BlobContainerClient(storageUrl, "flatfiles");
        }

        public async Task<IPerson[]> GetPeopleAsync()
        {
            // download people.json from the 'flatfiles' container and deserialize it
            var peopleBlob = _client.GetBlobClient("people.json");
            var peopleJson = (await peopleBlob.DownloadContentAsync()).Value.Content.ToString();

            if (peopleJson != null)
            {
                var opt = new JsonSerializerOptions
                {
                    PropertyNameCaseInsensitive = true
                };

                var people = JsonSerializer.Deserialize<Person[]>(peopleJson, opt);

                if (people != null)
                    return people;
            }

            throw new Exception("Unable to load people.json from blob storage");
        }

        public async Task<IPerson> GetPersonAsync(string id)
        {
            var ppl = await this.GetPeopleAsync();

            if (ppl != null)
                return ppl.First(p => p.Id == id);

            throw new KeyNotFoundException();
        }
    }
}

Now if we take a look at our Main:

using pav.mapi.example.models;
using pav.mapi.example.repos;

namespace pav.mapi.example
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var builder = WebApplication.CreateBuilder(args);
            var flatFileLocation = builder.Configuration.GetValue<string>("flatFileLocation");

            if (String.IsNullOrEmpty(flatFileLocation))
                throw new Exception("Flat file location not specified");

            if (builder.Environment.EnvironmentName != "Production")
                switch (builder.Environment.EnvironmentName)
                {
                    case "Local":
                        builder.Services.AddScoped<IRepo, LocalRepo>(x => new LocalRepo(flatFileLocation));
                        goto default;
                    case "Development":
                        builder.Services.AddScoped<IRepo, DockerRepo>(x => new DockerRepo(flatFileLocation));
                        goto default;
                    default:
                        builder.Services.AddEndpointsApiExplorer();
                        builder.Services.AddSwaggerGen();
                        break;
                }

            var app = builder.Build();

            switch (app.Environment.EnvironmentName)
            {
                case "Local":
                case "Development":
                    app.UseSwagger();
                    app.UseSwaggerUI();
                    break;
            }

            app.MapGet("/v1/dataSource", () => flatFileLocation);
            app.MapGet("/v1/people", async (IRepo repo) => await GetPeopleAsync(repo));
            app.MapGet("/v1/person/{personId}", async (IRepo repo, string personId) => await GetPersonAsync(repo, personId));

            app.Run();
        }

        public static async Task<IPerson[]> GetPeopleAsync(IRepo repo)
        {
            return await repo.GetPeopleAsync();
        }

        public static async Task<IPerson> GetPersonAsync(IRepo repo, string personId)
        {
            var people = await GetPeopleAsync(repo);
            return people.First(p => p.Id == personId);
        }
    }
}

We pass our connection string to our DockerRepo service in the form of our flatFileLocation variable, which is populated based on the profile loaded.

Now if we run our application with the 'Local' profile, our static GetPersonAsync and GetPeopleAsync functions will receive the LocalRepo implementation of our IRepo interface, and if we run our application with the 'Docker' profile, it will use the DockerRepo implementation of our IRepo interface.
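
As a quick sanity check, assuming the Docker profile's applicationUrl from our 'launchSettings.json' above, you can hit one of the endpoints from the terminal:

curl http://localhost:5050/v1/people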

Tuesday 2 January 2024

Azurite Powershell

In order to upload files to our Azurite storage, we're going to leverage PowerShell... on a Mac; please read the MSDN documentation on that, since it may change.


Though it may seem as if we are adding unnecessary complexity to our project, with PowerShell we can automate the process of uploading data to our blob storage. This may not seem very important now; however, in the future when we're deploying to the cloud, it will make life much simpler.

In your application create a 'powershell' folder, if you haven't already. Let's start with a simple PowerShell script to copy our people.json file from our local hard drive to our docker container.



if (0 -eq ((Get-Module -ListAvailable -Name az).count)) {
    Write-Host "installing AZ module." -ForegroundColor Yellow
    Install-Module -Name Az -Repository PSGallery -Force
    Write-Host "AZ module installed." -ForegroundColor Yellow
}
else {
    Write-Host "AZ module already installed." -ForegroundColor Green
}

$cs = "DefaultEndpointsProtocol=https;"
$cs += "AccountName=devstoreaccount1;"
$cs += "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
$cs += "BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;"
$cs += "QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;"
$cs += "TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;"

$context = New-AzStorageContext -ConnectionString $cs

function UploadFiles {
    Param(
        [Parameter(Mandatory=$true)][String]$containerName,
        [Parameter(Mandatory=$true)][String]$localPath)

    # upload all flat files to local azurite
    Get-ChildItem -Path $localPath -Recurse |
        Set-AzStorageBlobContent `
            -Container $containerName `
            -Context $context
}

function createContainer {
    Param(
        [Parameter(Mandatory=$true)][String]$containerName,
        [Parameter(Mandatory=$true)][Int32]$permission)

    # remove the container if it already exists
    $container = $context | Get-AzStorageContainer -Name $containerName -ErrorAction SilentlyContinue
    if ($null -ne $container) {
        Remove-AzStorageContainer -Name $containerName -Context $context
    }

    $container = $context | Get-AzStorageContainer -Name $containerName -ErrorAction SilentlyContinue
    if ($null -eq $container) {
        Write-Host "Creating local $containerName container" -ForegroundColor Magenta
        $context | New-AzStorageContainer -Name $containerName -Permission $permission
    }
    else {
        Write-Host "local $containerName container already exists" -ForegroundColor Yellow
    }
}


createContainer -containerName "flatfiles" -permission 0;

UploadFiles -containerName "flatfiles" -localPath "/Volumes/dev/data/"

$sasToken = New-AzStorageContainerSASToken -Name "flatfiles" -Permission "rwdalucp" -Context $context

Write-Host "Your SAS token is: $sasToken" -ForegroundColor Green
Write-Host "The full URL, including the SAS token, has been copied to your clipboard" -ForegroundColor Green
Write-Output "https://127.0.0.1:10000/devstoreaccount1/flatfiles/people.json?$sasToken" | Set-Clipboard

You should now have the URL to the people.json file copied to your clipboard, with a SAS token to let you access it; simply paste the URL into your browser, Postman, or Insomnia and you should be able to get your JSON file.
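
If you'd rather stay in the terminal, something along these lines should work; note the -k flag, since Azurite's self-signed certificate won't be trusted, and replace the placeholder with the token the script copied for you:

curl -k "https://127.0.0.1:10000/devstoreaccount1/flatfiles/people.json?<your-sas-token>"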

Next, let's create a script that will list all of the blobs in our Azurite blob storage.


if (0 -eq ((Get-Module -ListAvailable -Name az).count)) {
    Write-Host "installing AZ module." -ForegroundColor Yellow
    Install-Module -Name Az -Repository PSGallery -Force
    Write-Host "AZ module installed." -ForegroundColor Yellow
}
else {
    Write-Host "AZ module already installed." -ForegroundColor Green
}

$cs = "DefaultEndpointsProtocol=https;"
$cs += "AccountName=devstoreaccount1;"
$cs += "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
$cs += "BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;"
$cs += "QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;"
$cs += "TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;"

$context = New-AzStorageContext -ConnectionString $cs
$containerName = "flatfiles"

Get-AzStorageBlob -Container $containerName -Context $context | Select-Object -Property Name

And finally, a PowerShell script to delete files.


Param (
    [Parameter()][String]$blobToDelete,
    [Parameter()][Boolean]$deleteAllFiles)

if (0 -eq ((Get-Module -ListAvailable -Name az).count)) {
    Write-Host "installing AZ module." -ForegroundColor Yellow
    Install-Module -Name Az -Repository PSGallery -Force
    Write-Host "AZ module installed." -ForegroundColor Yellow
}
else {
    Write-Host "AZ module already installed." -ForegroundColor Green
}

$cs = "DefaultEndpointsProtocol=https;"
$cs += "AccountName=devstoreaccount1;"
$cs += "AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;"
$cs += "BlobEndpoint=https://127.0.0.1:10000/devstoreaccount1;"
$cs += "QueueEndpoint=https://127.0.0.1:10001/devstoreaccount1;"
$cs += "TableEndpoint=https://127.0.0.1:10002/devstoreaccount1;"

$context = New-AzStorageContext -ConnectionString $cs
$containerName = "flatfiles"

if ($true -ne ([string]::IsNullOrWhiteSpace($blobToDelete))) {
    Remove-AzStorageBlob -Blob $blobToDelete -Container $containerName -Context $context
}

if ($true -eq $deleteAllFiles) {
    Get-AzStorageBlob -Container $containerName -Context $context | Remove-AzStorageBlob
}


The above, unlike the other two scripts, takes in parameters letting us specify whether we want to delete a specific file or all files. It can be called several ways, but the following two are probably the easiest:

./deleteBlobs.ps1 people.json         
./deleteBlobs.ps1 -deleteAllFiles $true     

And that's it for now. Keep in mind that these scripts are specific to our Azurite environment; when it comes time to deploy to production, we'll cross that bridge when we get to it.