A Category of Systems

Exploring the consequences of Systems & Cybernetics in Engineering
Author Tom Westbury License Creative Commons Licence

A Machine Named Desire

Date 2022-06-30

In this post, I will describe a pattern that I have identified in numerous places during my time as a systems engineer. This pattern sits somewhere between the traditional double-loop learning system and a viable system. I call it the Desiring Machine as a nod to Deleuze, since it resembles a Deleuzian desiring-machine. This post focusses mainly on the derivation of the pattern, but I hope to expand on its uses as a framework for understanding and enquiry in future posts. I will add that I do not believe this pattern is anything new; I have seen aspects of it expressed in various systems thinking texts. However, I don't think it has been addressed by name or given the recognition it deserves as a useful pattern.

A Brief Aside Into Notation and Method

To describe the patterns in this post, I will use a minimalist notation that I've settled upon for describing dynamic systems. It is a simplified version of the stock and flow diagrams used in system dynamics. Rounded rectangles represent variables and diamonds represent the processes that evolve them.

For example, here's a simple system:

Notational example

For the methodology, I'll be showing my working as subsequent applications of DSRP (Distinctions, Systems, Relationships, Perspectives). Although it is disputed whether DSRP is complete, it is definitely a simple framework and good enough for the purposes here. I have also used a few laws from various systems thinkers; a great summary of these laws can be found in the book The Grammar of Systems: From Order to Chaos & Back.

The Loop

Initially, we'll start from the simplest possible loop: the Real and the process by which it changes, which we'll call Evolution. This leads us to a trivial system looking like this:

The simplest system

I hope that you'll agree that this is an incredibly low-resolution model of the universe. The reason I've started here is that all systems are inscribed within the Real. It's good to keep in mind that applications of DSRP are all about imposing structure on a Volatile, Unknowable, Complex and Ambiguous reality. Everything that exists is already within this model; we just need recursive subsequent applications of DSRP to carve out more subtle patterns from it.

Ashby's law of requisite variety states that a good regulator must have at least as much variety as the environment it controls. Therefore, alongside the environment, there must be a model of that environment that we can use to control it. This model is part of the Real, so we'll carve it out of the Real with a distinction. For reasons explained later, we'll call it Understanding:

Adding understanding
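Ashby's law can be made concrete with a toy example: think of a regulator as a mapping from disturbances to compensating responses. If the environment can produce more distinct disturbances than the regulator has distinct responses, some disturbances necessarily go uncorrected. The sketch below is a minimal illustration under that framing; all names in it are mine, not standard terminology.

```python
# Toy illustration of Ashby's law of requisite variety: a regulator
# can only absorb the variety it has a matching response for.
def uncontrolled_disturbances(disturbances, responses, counters):
    """Return the disturbances the regulator cannot counteract.

    `counters` maps each disturbance to the response that would cancel it;
    a disturbance is uncontrolled if that response isn't available.
    """
    return [d for d in disturbances if counters.get(d) not in responses]

# Three kinds of disturbance, but only two responses available:
disturbances = ["gust", "drift", "shock"]
responses = {"trim", "brake"}
counters = {"gust": "trim", "drift": "trim", "shock": "damp"}

# "shock" needs a response ("damp") the regulator doesn't have,
# so its variety exceeds the regulator's variety.
```

The model of the environment enters precisely through `counters`: without knowing which response cancels which disturbance, the regulator cannot deploy its variety usefully.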

If we were following DSRP to the letter, the creation of a distinction across an element would create two new elements within the system of the original. For the simplicity of the diagrams, we're ignoring the systems part for now, as we're operating from the perspective of the desiring machine that we're deriving. I hope that this omission will not cause too much confusion.

ERRATUM: I have come to the realisation that the Real should be replaced by the Environment in the following diagrams, the Environment being the small part of the Real that we care about. The Darkness principle (a result of Ashby's law) prevents us from making a Desiring Machine that can cope with all of the Real, despite what the MIT school might attempt.

The third part of making a distinction is the creation of the relationships between the two new elements it creates. These relationships are what we use to define the purpose of our distinction, so we must interrogate our motives. The aim of this enquiry is to understand a general pattern for systems that seek to modify their environment as they see fit, so the first relationship should be Action. Action is a process that takes in the Real and Understanding and modifies the Real towards some goal based on Understanding.

Doubling the Loop

This Understanding model represents our understanding (funny, that) of the laws of physics and/or the environment which the system controls. It allows our feedback loop to become a feedforward loop. This means that instead of course-correcting our system's interventions against the current state of the environment, we can predict where the environment will be and make our interventions based on that.
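The difference can be sketched in a few lines: a pure feedback controller corrects against the state it just observed, while a feedforward controller runs its Understanding model forward and corrects against the predicted state. Everything here is illustrative, including the linear drift model and the gain value.

```python
def feedback_action(observed, goal, gain=0.5):
    """React to where the environment *was* at the last observation."""
    return gain * (goal - observed)

def feedforward_action(observed, goal, model, gain=0.5):
    """React to where Understanding predicts the environment *will be*."""
    predicted = model(observed)
    return gain * (goal - predicted)

# A trivial Understanding: the environment drifts upward by 1 per step.
drift_model = lambda state: state + 1.0

# With goal 10 and observation 8, feedback acts on an error of 2, while
# feedforward acts on the smaller predicted error of 1, because the
# model already accounts for the drift that will happen anyway.
```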

We must ask ourselves, though: where does our Understanding come from? There's only one place it can come from: the Real. However, our knowledge of the Real is incomplete; there will always be edge behaviours that surprise us. The fact that we can never completely model a system is known as the Darkness principle.

Because of this, we cannot take our Understanding model for granted; we must add another loop that modifies our Understanding based on surprises (unencountered phenomena) from the Real. We can call this new process Learning.

Adding learning

The pattern that we now have is sometimes called the double-loop learning model. It corresponds to places where we are actively updating our mental model, where we are improving our feedforward with feedback. This is a well-understood pattern within Cybernetics and has permeated a lot of modern organisational thinking; it could be said to be the quintessential Cybernetic pattern. For example, it generalises the ubiquitous OODA loop model.
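A self-contained sketch of the two loops, under illustrative assumptions of my own (a linear drift and a simple learning rate): the inner loop predicts using the current Understanding, while the outer loop nudges Understanding toward whatever surprise the Real delivers.

```python
def learn(drift_estimate, observed, predicted, rate=0.5):
    """Outer loop: update Understanding in proportion to the surprise."""
    surprise = observed - predicted
    return drift_estimate + rate * surprise

# The Real actually drifts by 2 per step; Understanding starts at 0.
true_drift, drift_estimate, state = 2.0, 0.0, 0.0
for _ in range(20):
    predicted = state + drift_estimate   # inner loop: feedforward prediction
    state += true_drift                  # the Real evolves regardless
    drift_estimate = learn(drift_estimate, state, predicted)

# With each surprise, the estimate closes half the remaining gap,
# so drift_estimate converges toward the true drift of 2.0.
```

The point of the sketch is the division of labour: prediction alone never improves, but the outer loop turns every surprise into a better model, which is exactly what the double loop adds over a single feedforward loop.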

The Arrow of Desire

So far, we've developed a standard pattern for an adaptive system. There is a concept, though, that we've been implicitly talking about but haven't accounted for in our pattern: the purpose or goal of the system. From this point forward, we will stipulate that only Action and Learning can interact with the Real. This constraint is justified by the assumption that the internal processes of a system can only operate upon the system's Understanding.

To begin talking about purpose and goals, we should first make a distinction in the Understanding model. If our system has a goal, it must be encoded in the Understanding; therefore we can cut part of our Understanding with a distinction and call it the Ideal. However, it is equally valid to make this distinction out of the Real too: the Ideal could just as easily be outside the scope of our system's perspective. This is why I've neglected the systems part of DSRP: depending on the situation, the desiring machine pattern can be cut up by system boundaries in many ways. This new distinction brings with it a new relationship that we'll call Desire, which is an input to the Action process.

Adding desire

Desire is the difference between Understanding and the Ideal. It therefore acts as an error signal that drives Action. That is to say, if the Ideal and Understanding do not differ, there is no Desire and therefore Action will not occur. A larger difference between Understanding and the Ideal creates a larger Desire signal and therefore drives greater Action. Depending on the desiring machine, Desire can be a multi-dimensional vector whose basis may not be orthogonal; that is to say, more complex desiring machines can produce conflicting Desire, caused by inconsistencies in the Ideal and/or Understanding.
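In control terms this is just an error vector, which is easy to show directly. The sketch below is a bare illustration: the components and the magnitude function are my own choices, not part of the pattern itself.

```python
def desire(ideal, understanding):
    """Desire as a component-wise error signal between Ideal and Understanding."""
    return [i - u for i, u in zip(ideal, understanding)]

def action_drive(d):
    """Overall drive to act: the magnitude of the Desire vector."""
    return sum(x * x for x in d) ** 0.5

# No difference between Ideal and Understanding: zero Desire, no Action.
no_desire = desire([3.0, 3.0], [3.0, 3.0])

# Conflicting Desire: one component pulls up while another pulls down,
# the kind of tension produced by an inconsistent Ideal or Understanding.
conflict = desire([5.0, 1.0], [1.0, 5.0])
```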

Desire is not the only relationship between the Real, Understanding and the Ideal. The Learning process is also guided by the difference between Understanding and the Ideal. This signal is analogous to the idea of attention: it acts as a filter on what information from the Real is worth incorporating into Understanding. For now we will call it Meaning:

Adding meaning

Meaning is an important signal within the desiring machine as it ensures that Understanding does not get overwhelmed with information that is not relevant to the goals of the system. Along with Desire, Meaning can be used as a lens to understand many pathologies of systems.
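One way to sketch Meaning is as a relevance gate in front of Learning: only observations that bear on the gap between the Ideal and Understanding get through. The relevance score and threshold below are assumptions of mine, purely for illustration.

```python
def meaning_filter(observations, ideal, understanding, threshold=0.5):
    """Keep only observations relevant to the Ideal/Understanding gap.

    Each observation is a (topic, value) pair; relevance on a topic is
    the size of the gap there. Irrelevant information never reaches
    Learning, so Understanding is not overwhelmed.
    """
    gaps = {k: abs(ideal[k] - understanding[k]) for k in ideal}
    return [(k, v) for k, v in observations if gaps.get(k, 0.0) > threshold]

ideal = {"quality": 9.0, "cost": 3.0, "colour": 5.0}
understanding = {"quality": 4.0, "cost": 3.1, "colour": 5.0}
obs = [("quality", 6.2), ("cost", 2.9), ("colour", 5.5)]

# Only the quality observation passes: that's where Desire lives,
# so that's what is Meaningful to learn about.
```

Note the pathology this makes visible: anything outside the current gap is invisible to Learning, which is exactly how a desiring machine can fail to notice changes it does not yet care about.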

Critique

The final relationship to create on the diagram is the one that changes the Ideal. Since a desiring machine only has access to the Real through Learning and Action, this process must operate internally; we'll call it Critique. Critique creates and modifies the Ideal and therefore sets the goals of the desiring machine. Without it, there could be no Ideal, as Critique is what creates the distinction, internal to Understanding, between what is and what is Ideal. On the diagram, it looks like this:

The whole desiring machine

This diagram demonstrates the full pattern of the desiring machine. In future posts I will explore how this pattern can be used as a lens to understand some pathologies within systems, but first we will explore how different desiring machines can interact at different levels of an abstraction hierarchy.

Self-Similarity

Like many systems, desiring machines are often self-similar: the processes within desiring machines are often themselves desiring machines. When we're working as an engineering firm, we embody a desiring machine. Our customers, be they individuals or organisations, also embody desiring machines. From the customer's perspective, our engineering firm looks like one of their Action processes (remember that each of the processes may be a desiring machine itself). Note that it's not 'Desiring Machines all the way down'; the end of the chain of Desire is often just a single or double cybernetic loop acting upon the Real.
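The nesting can be sketched with a minimal class whose Action slot accepts either a plain loop or another machine. The class, the `commission` helper and all the numbers are hypothetical scaffolding of my own, just to show the shape of the chain.

```python
class DesiringMachine:
    """Minimal nesting sketch: an Action process may itself be a machine."""
    def __init__(self, ideal, action):
        self.ideal = ideal
        self.action = action                 # callable: desire -> intervention

    def act(self, understanding):
        desire = self.ideal - understanding  # Desire as the error signal
        return self.action(desire)

# Bottom of the chain: a plain cybernetic loop, not another machine.
firm = DesiringMachine(ideal=0.0, action=lambda d: 0.5 * d)

# The customer's Action process is the firm itself: the customer's Desire
# becomes the requirement (Ideal) handed to the firm's own loop.
def commission(customer_desire):
    firm.ideal = customer_desire
    return firm.act(understanding=0.0)

customer = DesiringMachine(ideal=10.0, action=commission)

# customer.act(4.0): the customer's Desire of 6.0 becomes the firm's
# Ideal, and the firm's own loop produces the actual intervention.
```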

The viewpoint of an Action process as a desiring machine is an interesting one to take. Consider an engineering firm that's developing a system for a customer organisation: we can either view the customer's requirements as part of our environment, or we can distinguish the customer's desiring machine out of the environment using a new distinction. The following diagram shows what that looks like:

Hierarchy of desiring machines

ERRATUM: The Customer Desiring Machine should be connected to the Learning process of the lower-level Desiring Machine.

The primes show the processes and parts of the customer's desiring machine. We can use this diagram as a lens to understand some of the pathologies caused by a hierarchy of two desiring machines. For example, the Desire of the customer is filtered through the customer's Understanding, the engineering firm's Meaning and the engineering firm's Understanding. This means that the less the two Understandings match (and the less meaningful engineers find the requirements), the less the engineered solution will meet the customer's Desire. This is an effect that I'd like to explore further in a future post.

Conclusion

I have found the Desiring Machine pattern useful in my own practice for understanding and diagnosing problems of systems engineering. I hope that I have explained it well enough here for others to understand it and to apply it in their own systems practice. I do not believe that I have added anything new during this inquiry, but I think the pattern is useful enough to warrant being named.

I’d be es­pe­cially in­ter­ested to know where it does­n’t work well and ad­di­tion­s/­sub­trac­tions are greatly ap­pre­ci­at­ed. Please put any thoughts that you might have in the com­ments be­low.

In future posts, I hope to expand on my ideas around the desiring machine, showing how it can be used to understand the viable system model as well as other concepts in systems engineering. I'd also like to augment my model of the desiring machine with the idea of system cadences or rhythms, to gain an understanding of how the pattern changes when information flows at different rates around the system. There are also some interesting insights to be had when the S part of DSRP is put back in, so that we have different ways of cutting up the desiring machine with system boundaries.