What an interesting and strange article. The author barely offers a definition of "systems thinking", only names one person to represent it, and then claims to refute the whole discipline based on a single incorrect prediction and the fact that government is bad at software projects. It's not clear what positive suggestions this article offers except to always disregard regulation and build your own thing from scratch, which is ... certainly consistent with the Works In Progress imprint.
The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it - a system model is useful but only a model, it is better used to understand an existing system than to design a new one, assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but have more outages anyway, so maybe something is optimizing for the wrong goal - and intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it - it's more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
There is something about the Club of Rome's relationship to systems thinking that is similar to Dijkstra's observation about BASIC and programming.
Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.
I didn't feel like he was refuting the whole discipline. Rather, he seems to admire Forrester and the whole discipline. The argument just seems to be, even with great systems thinking, you can't build a complex system from scratch and that existing complex systems are often hard to fix.
The title of the article is an intentional conflation of "systems thinking" with "magical thinking", which is not a compliment.
Yeah, what they are attempting to do in the span of one short essay is equivalent to trying to discredit an entire field of inquiry. Even if you don't think the field is worth anything, it should be obvious that it would take a lot of research and significant argumentation to accomplish that goal; this essay is lacking in both departments.
This insight - that modeling human systems is hard because humans also respond to models of their world and then change it - is not all that new, it's called reflexivity [1] and has been around for about the same time as systems thinking.
[1] https://en.wikipedia.org/wiki/Reflexivity_(social_theory)
This article does not begin to cover systems thinking. Cybernetics and metacybernetics are noticeably missing. Paul Cilliers' theory of complexity - unmentioned. Nothing about Stafford Beer and the viable system model. So on and so forth.
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
"Metacybernetics" is a concept with a small handful of Google hits, some of which appear to be obscure research papers and some appear to be metaphysical crackpottery on blogs.
I think it's worth considering that the theories you're familiar with are incredibly niche, have never gained any foothold in mainstream discussions of system dynamics, and it's not wrong for people not to be aware of them (or to choose not to mention them) in a post addressed at general audiences.
Further, you just missed the opportunity to explain these concepts to a broader HN audience and maybe make sure that the next time someone writes about it, they are aware of this work.
Only metacybernetics is particularly obscure, because I haven't finished writing the paper which brings together the disparate theories describing the same phenomena and gives metacybernetics a proper definition. I mentioned it to spark interest and hopefully a conversation, which was successful.
Cybernetics was the birthing place of neural networks. Hardly niche.
I don't think commenters should be expected to provide full overviews of topics just to inform others. Parent gave plenty of pointers beyond metacybernetics, all of which are certainly discoverable. If you are curious, read about it. It's not the responsibility of random strangers to educate you.
It seems odd to me that someone would write such a polished and comprehensive article and yet completely misunderstand the definition of the central topic.
That happens in system dynamics a lot, actually - there are many independently developed theories in many different disciplines that do not intertwine historically at all. I have met multiple people who work with systems mathematically on a professional level who had no idea about these other things.
I've seen this too. In particular there seems to be a huge dividing line between systems research stemming from the physical-mathematical heritage of formal dynamical systems, and the other line mostly stemming from everything Wiener did with cybernetics (and some others who were contemporaneous with Wiener). Both sides can be profitably informed by the other in various ways.
Because people wandering in are going to wonder about the term cybernetics:
https://en.wikipedia.org/wiki/American_Society_for_Cyberneti...
https://en.wikipedia.org/wiki/Complexity_theory
This is actually a critique of massive bureaucratic systems, not systems thinking as a practice. Gall's work is presented as an argument against systems thinking, when it's a contribution to the field. Popular books on systems thinking all acknowledge the limitations, pitfalls, and strategies for putting theory into practice. That large bureaucracies often fail to do so is, in my view, an unrelated subject.
I just want to know if there exists a Factorio mod that changes the graphics to the cutesy, minimalist assets shown in the topmost image.
If you want to experiment with a version of the world model the article references, you can play with an implementation I put together here:
https://insightmaker.com/insight/2pCL5ePy8wWgr4SN8BQ4DD/The-...
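And if you'd rather read the mechanics than click through, here is a toy stock-and-flow loop in Python in the spirit of World3; every parameter and both feedback terms are invented for illustration, not taken from the actual model:

    # Toy stock-and-flow sketch: population grows with resources, scarcity
    # raises mortality, and resource depletion scales with population.
    # All numbers are made up -- the point is the overshoot-and-decline shape.
    population, resources = 1.0, 10.0
    dt, birth_rate, death_rate, depletion = 0.1, 0.08, 0.05, 0.02

    for step in range(3001):
        growth = birth_rate * population * (resources / 10.0)        # resources feed growth
        deaths = death_rate * population * (1.0 - resources / 10.0)  # scarcity raises deaths
        population += (growth - deaths) * dt
        resources = max(resources - depletion * population * dt, 0.0)
        if step % 500 == 0:
            print(f"t={step * dt:6.1f}  pop={population:6.2f}  res={resources:5.2f}")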
Modernizing software systems takes time because of inherent corruption in the procurement process or the workings of the consulting companies involved. Those problems could be solved much faster and more cheaply if a knowledgeable tech person were involved.
Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.
Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.
> largely outside the typical congressional appropriation oversight channels
I've seen it happen more than a few times that when software needs to get made quickly, a crack team is assembled and Agile ceremonies and other bureaucratic decision processes are bypassed.
Are there general principles for when process is helpful and when it's not?
Process is useful for raising the quality floor of deliveries, for turning former unknowns into knowns, and for preventing misaligned behavior when culture alone becomes insufficient.
If you have a need for speed, a team that knows the space, and crucially a leader who can be trusted to depart from the usual process when that tradeoff better meets business needs, it can work really well. But it also comes with increased risk.
General principle 1: to make a meeting matter, make a decision. (A meeting at its most basic is kinda like a locking primitive: it gets independent threads to synchronize for a short time. Think through why you need that synchrony.)
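To push the analogy (a toy sketch using only the stdlib, not a claim about how meetings literally work): a meeting behaves like threading.Barrier, where everyone blocks until all parties arrive, and only then does independent work resume:

    import threading

    # "Meeting as synchronization primitive": each worker does independent
    # work, blocks at the barrier (the meeting), then resumes.
    meeting = threading.Barrier(3)

    def worker(name):
        print(f"{name}: working independently")
        meeting.wait()   # everyone blocks here until all 3 arrive
        print(f"{name}: decision made, back to work")

    threads = [threading.Thread(target=worker, args=(f"dev{i}",)) for i in range(3)]
    for t in threads: t.start()
    for t in threads: t.join()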
General principle 2: create focus on the critical path. (If each ticket you work on is slightly different from other tickets and no cookie-cutter solutions exist, then there is some chain of slow, annoying, winding steps, and the rest of the dependency graph doesn't really matter, just these big pains in the butt that are often linked in the dependency graph only by the fact that it's going to be one developer working on all of them and they can't work on them all simultaneously. It follows that you can only get interesting speed improvements if multiple developers are working on the same change. Note that daily standup is an example of a meeting which does not make a decision—it could, but in practice nobody uses it that way—instead, its function is to create pressure on the critical path. Often unhealthy pressure: someone was sprinting at 100% and is now getting a little burned out, and daily standup forces them to find something they can report lest they be honest and say that they're suffering.)
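To make principle 2 concrete, here's a toy critical-path finder; the task names and durations are invented, but it shows why the longest dependency chain, not the size of the rest of the graph, sets the project length:

    # Toy critical path: the project takes as long as its longest dependency
    # chain, so that chain is where the attention belongs.
    tasks = {              # task: (duration_days, dependencies)
        "schema": (2, []),
        "api":    (5, ["schema"]),
        "ui":     (3, ["api"]),
        "infra":  (1, []),
        "deploy": (1, ["ui", "infra"]),
    }

    def finish(task, memo={}):
        # earliest finish time = own duration + latest finish among dependencies
        if task not in memo:
            dur, deps = tasks[task]
            memo[task] = dur + max((finish(d) for d in deps), default=0)
        return memo[task]

    end = max(tasks, key=finish)
    print("project length:", finish(end), "days, gated by the chain ending at", end)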
General principle 3: process helps create shared reality. (Consider two different ways to protect prod from the developers. In one, everyone commits to main, some file or database table or configmap contains a list of features, and those features can be toggled in dev, UAT, or prod. The process here is: whenever you change anything, you wrap it in a feature toggle, so that your change does not impact prod if that toggle is off. Versus, consider having three different branches: you commit new features to dev, eventually we cut a release from dev and push it to the UAT branch, then cut a release from UAT to push to the prod branch. These are separate branches because we might need to hotfix UAT or prod. The process here can go in these two different directions, see, but one of them leads to a shared reality: this is the entirety of the code and all of the different ways that it can be configured in our production environment, and we need to carefully consider how we remove those feature toggles—versus the other one has three independent realities, and nobody is considering all of the different ways that it can be configured or what is being removed, and periodically you get bugs because people didn't revert the revert—what, you didn't know that you needed to revert the revert? You always need to revert the revert. So process tends to be more lightweight if it generates one shared reality.)
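A minimal sketch of the first approach, with the toggle-file name and flag names invented purely for illustration:

    import json, os

    # One codebase on main; a per-environment toggle file (hypothetical name
    # and flags) decides which code paths actually run in dev/UAT/prod.
    try:
        with open(os.environ.get("FEATURE_FILE", "features.json")) as f:
            FEATURES = json.load(f)
    except FileNotFoundError:
        FEATURES = {"new_checkout": False}   # safe default: behave like prod

    def legacy_checkout_flow(cart):
        return f"legacy checkout, {len(cart)} items"

    def new_checkout_flow(cart):
        return f"new checkout, {len(cart)} items"

    def checkout(cart):
        # the change ships to every environment, but only runs where toggled on
        if FEATURES.get("new_checkout", False):
            return new_checkout_flow(cart)
        return legacy_checkout_flow(cart)

    print(checkout(["book", "pen"]))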
General principle 4: process needs to help you figure out, and keep highlighted, the “Useless Without.” (There are many nice-to-haves in a given project. There are a lot of them that people will say are must-haves. The product must be secure, the product must be available at this website address, okay fine. But there is one business goal that the project serves, and if that business goal is not accomplished, the whole project is useless. That is the Useless Without feature. So I worked on a shop floor system of kiosks for like 6 months once before I determined from talking to the stakeholders that the thing was actually Useless Without time tracking, and this is a sensitive issue because unionized pipefitters are understandably skittish around surveillance technology that could be used in dystopian ways. So we address their needs by scoping the tracking to the project only, figuring out how long each of the steps in building the project takes, while never touching how efficiently the shop floor itself runs. But you understand, every meeting I had before we clarified this was actively detrimental to my productivity on this task.)
I like this saying better: every system is perfect until people get involved. People act irrationally because they are reacting to the nonsense that pervades their reality.
To me it feels like "Systems thinking" is a subject produced by a bunch of liberal "philosophers" playing to be mathematicians.
You clearly haven't read much in the field of systems thinking, then. Many of the practitioners and most of its pioneers are in fact actual mathematicians, biologists, or computer scientists (Wiener, von Foerster, Banathy, etc.).
Could you quote a non-trivial "systems thinking" theorem or tool such that, by knowing it, I will be able to solve a problem I couldn't solve before?
This is totally orthogonal to your original claim that systems thinkers are "liberal" philosophers but OK.
McCulloch and Pitts, early cyberneticians, literally invented neural networks. See the Wikipedia page on neural nets.
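For anyone curious how small the original 1943 unit is, here's a sketch of a McCulloch-Pitts neuron: a bare threshold over weighted binary inputs, with hand-picked weights computing Boolean logic:

    # A McCulloch-Pitts neuron fires (1) iff the weighted sum of its binary
    # inputs meets the threshold. Hand-picked weights give Boolean gates.
    def mp_neuron(inputs, weights, threshold):
        return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

    AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
    OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
    NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))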
Another really simple one: Law of Requisite Variety. If that's too simple, I'd encourage you to bear in mind that Norbert Wiener, beyond his direct contributions to mathematics in the form of signal processing filters, is also responsible for the view of control as communication, which motivates much of the approach to control and stability in digital systems.
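And here's a toy illustration of the Law of Requisite Variety (my own made-up example, not from any textbook): a regulator can hold an outcome steady only if it has at least as many distinct responses as there are distinct disturbances — "only variety can absorb variety":

    import random

    # Outcome = disturbance + response. A regulator with fewer responses than
    # there are disturbances cannot cancel them all, so outcome variety survives.
    disturbances = [0, 1, 2, 3]

    def regulate(n_responses):
        outcomes = set()
        for d in random.choices(disturbances, k=1000):
            r = -d if d < n_responses else 0   # can only cancel what it can match
            outcomes.add(d + r)
        return len(outcomes)

    print("responses=2 -> outcome variety:", regulate(2))  # > 1: not regulated
    print("responses=4 -> outcome variety:", regulate(4))  # == 1: fully regulated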
This essay focuses on a very narrow section of systems thinking and systems theory. There's an entire field, with many different subdisciplines beyond just the Club of Rome stuff (and which influenced them directly), that quite explicitly deals with systems that "fight back". In fact, any serious definition of systems thinking usually has said dynamics baked into it—systems are assumed to evolve from the start.
I'd encourage people to look into soft systems methodology, critical systems theory, and second-order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as Works in Progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here go.
Any particular resource to recommend?
Both of the books "Systems Thinkers" and "The Emerging Consensus in Social Systems Theory" are nice broad introductions into the historical developments, various lines of thought, and the massive space that is systems thinking. They should both give you a good initial starting point for further research.
start small or fail big
I studied biology in college and this has always been obvious to me, and it shocks me that people with backgrounds in e.g. ecology don't understand that living systems are unpredictable auto-adaptive machines full of feedback loops. How a bunch of ecologists could take doomerism based on "world models" seriously enough to cause a public panic about it (e.g. Paul Ehrlich) baffles me.
Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.
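You can watch that adversarial loop in a toy simulation (entirely made-up numbers, just to show the shape of the feedback): the naive forecast is wrong precisely because it was believed:

    # Toy reflexivity: base recession odds are 20%, but each credible public
    # forecast of recession tightens behavior and raises the realized odds.
    def realized_odds(base, forecasts_published, effect=0.15):
        return min(1.0, base + effect * forecasts_published)

    base = 0.20
    for n in range(4):
        print(f"{n} public forecasts -> realized odds {realized_odds(base, n):.0%}")
    # The forecaster who said "20%" was right only in the world where
    # nobody listened to them.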
So... predicting doom and gloom from overpopulation would obviously reduce the future population if people take it seriously.
Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving, nobody can save, because for one to save another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, job security, etc., but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price. If you are both a buyer and a seller of labor you are your own adversary in a two-party pricing game.
I personally hold the view that the arrow of time goes in one direction and the future of non-linear computationally irreducible systems cannot be predicted from their current state (unless you are literally God and have access to the full quantum-level state of the whole system and infinite computational power). I don't mean predicting them is hard, but that it's "impossible like perpetual motion" impossible.
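Wolfram's Rule 30 is the stock illustration of that claim: as far as anyone knows, the only way to learn what row n looks like is to compute all n rows. A sketch:

    # Rule 30, the textbook example of computational irreducibility. No known
    # shortcut predicts the pattern at step n -- you must run every step.
    def rule30_step(cells):
        n = len(cells)
        # new state = left XOR (center OR right), the Boolean form of rule 30
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    cells = [0] * 31
    cells[15] = 1
    for step in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)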
I also wonder if we are being fooled by randomness when we think we see a person or a technique that yields good predictions. Are good prophets just luck plus survivorship bias? Obviously we forget all the bad prophets. All lottery winners are lucky, therefore lucky people should play the lottery. But who is lucky? The only way to find out is to play the lottery. Anyone who wins should have played, and anyone who loses should not have played.
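The survivorship-bias point is easy to check with a quick simulation: coin-flipping "prophets" with no skill at all, a few of whom will nevertheless compile perfect records:

    import random

    # 1000 "prophets" each call 10 binary events by coin flip. Nobody has any
    # skill, yet someone usually ends up with a perfect record -- and those
    # are the ones we remember.
    prophets = [sum(random.random() < 0.5 for _ in range(10)) for _ in range(1000)]
    perfect = sum(score == 10 for score in prophets)
    print("prophets with a perfect 10/10 record:", perfect)
    print("expected by pure chance:", 1000 / 2**10)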
I like this. The author is somewhat needlessly hopeless about the prospects of changing a complex system.
Basic summary: through many examples, the author cautions that once you start getting more than a handful of feedback loops, maps of the system become more like physical maps—necessarily oversimplified. When you have four feedback loops under the right control of management, it's still a diagnostic aid, but add everything in the US healthcare system, say—fuggetaboudit! And because differences at the small scale add up to long-term outcomes, the map doesn't let you forecast the long term and it doesn't let you predict what to optimize; in fact, the only value that the author finds in a systems map for a sufficiently complex system is as a rhetorical prop to show people why we need to reinvent the whole system. The author thinks this works very well, but only if the new system is grown organically, as it were, rather than imposed structurally.
The first criticism is, this complaint about being unable to change a system, is actually too amorphous and wibbly wobbly to stand. Here's what I mean: the author gives the example of the ICBM project in US military contracting as a success of the "reinvent method", but if you try to poke at that belief, it doesn't "push back" at you. Did we invent a whole new government to solve the ICBM project? I mean we invented other layers of bureaucracy—but they were embedded in the existing government and its bureaucracy. What actually happened was, a complex system existed that contained two subsystems that were, while not entirely decoupled, still operating with substantial independence. Somewhere up the chain, they both folded into the same bureaucracy with the same president, but that bureaucracy minimized a lot of its usual red tape.
This is actually the conceit of Theory of Constraints folks, although I don't usually see them being bold about it. The claim is that all of those hacks that you do in order to ship something? “Colleague gave me a 400 line diff, eh fuckitapprove, we'll do it live” ... that sort of thing? Actually, say ToC folks, that is your system running well, not poorly. The complex system is being pinned to an achievable output goal and it is being allowed to reorganize itself to achieve that goal. This is ultimately the point of the whole ToC ‘finding the bottlenecks’ jargon. “But the safeties are off and someone will get hurt,” you say. And they say somewhat unhelpfully, “That’s for the system to deal with.” Yes, the old configuration had these mechanisms to keep things safe, but you need a new system with new mechanisms. And that's precisely what you see in these new examples, there actually is top-down systems engineering, but around how do we maintain our quality standards, how do we keep the system accountable.
If the first criticism is that “organically grow a new system to take its place” is airy-fairy, the second is just that the hopelessness is unnecessarily pessimistic. Yes, complex systems with lots of feedback loops do maintain a homeostasis and revert back to it as you poke and prod them. Yes, it is really frustrating that to change one thing, you must change everything. Yes, it is doubly frustrating that systems that are nominally about providing and promoting X turn out to provide and promote Y while actually being X-neutral (think for instance about anything you do which ultimately just allows your manager to cover their ass—it is never described as a CYA, just acknowledged silently that way in hallway conversation).
But, we know complex systems that find new homeostatic equilibriums. You, reading this, probably know someone (maybe a friend, maybe a friend of a friend) who kicked drugs. You also know somebody who managed to “lose the weight and keep it off.” You know a player who became a family man, and you yourself remember instances where you were a dumb kid reliving the same shitty day over and over when you could have just done this one damn thing differently—you know it now!—and your days would have gotten steadily better and better rather than the same old rut. So you know that these inscrutably complex things do change. Sometimes it's pinning the result, like someone who drops the pounds because “I just resolved to live like my friend Derek, he agreed to take me a week through everything in his life, I wrote down what he eats for breakfast, when he hits the gym, how much does he talk with friends and family, then I forced myself to live on this schedule for a month and finally I got the hang of it.” Sometimes it's literally changing everything, “Yeah I lost the pounds because I went to live in the Netherlands and school was a 50 minute bike ride from my apartment either way and then I didn't have any friends so I joined the university's competitive ultimate frisbee team, so like my dinner most days was bought that day after practice in a 5 minute trip through the grocery—a raw bell pepper, a ball of mozzarella, maybe some bread in olive oil—I didn't have time to cook anything big.” Or sometimes it was imposed top-down but with good motivation, “yeah, I really wanted to get a role as an orphan in this musical, so I dieted and dieted with the idea of ‘I can binge once I get the part, but I have to sell scrawny orphan when auditions come round soon’ and like it sucked for two weeks but then I got used to the lifestyle and I no longer wanted to binge, funny how that worked out.”
There are so many different stories, and yes, they never look like what we would imagine success to look like, but being pessimistic about the existence of solutions in general because there's nothing in common among the success stories, I don't know, seems to throw the baby out with the bathwater. There is hope; it's just that when you are looking at the systems map, people get in this rut where they're looking for one thing to change, when really everything needs to change on that map: you've created a big networked dependency graph of the spaces you need to interrogate to figure out whether they are able to cope with the new way of doing things and, if not, whether they are going to dig their heels in and try to block the change. There's still use in it, you just need to view the whole graph holistically.