Reversible and irreversible decisions: depending on where you work, one or the other doesn't exist. Reversible decisions are forbidden because reversing one would admit the possibility of having been wrong, or people assume that nothing is irreversible.
Matt Mullenweg talked about this in a recent podcast episode of The Knowledge Project (Matt Mullenweg: Collaboration Is Key; notes):
[73:38, Shane Parrish] What are patterns of people that make really good decisions? What do you see in these people and how do they think about things in a way that is transferable to other people?
[73:48, Matt Mullenweg] You know, one of the best pieces of advice I got, which was from—early at Automattic I actually hired a CEO, I consider him like a co-founder, Toni Schneider, he's like my business soulmate—and one of the things he taught me early on was: make reversible decisions quickly and irreversible ones deliberately. And I still return to that on a weekly basis. If it's a reversible decision, we'll probably learn a lot more by doing it. I find it so funny, in software especially, let's just build the first version, and build it to throw away maybe. But let's get that prototype out there. We could debate it for weeks or months, or do a million mockups. I have this old essay, "1.0 is the Loneliest Number". The oxygen of usage is required for any idea to survive, and so you want to get to that first version as fast as possible, and that learning is really, really valuable to the speed of iteration. So I like smaller reversible decisions that happen frequently and quickly, without being too attached to them.
(And later he references Farnam Street's post about decision journals, which is out of scope here, but worth a read.)
I've found that smart people who are willing to be publicly wrong with decisions are few and far between. It's uncomfortable to be wrong, and more uncomfortable to be wrong in public. And it's uncomfortable to violate consistency, even in the service of making better decisions. But it's the right way to go. Sufficiently complex decisions are often intractable—you're lucky if you can figure out what all of the variables are, let alone their values, let alone how they interact with each other. So you've got to do something to get from zero to a solution.
Experiment... why do people often think that word means "let's just try something and see what happens", or "hold my beer, I'm going to try and jump over it"? Thought and preparation go into experimenting, or it's useless. If you work in physical space, experimenting is often expensive in time and money and the opportunity cost of having your staff and facilities do something else. (Another post for another day.)
In my experience, at work, most information systems and team processes aren't set up well for reversing decisions. Sure, we have The Process that (supposedly) defines how we are allowed to design things, and we have Configuration Management, which defines how we can change things. ("Things" in these cases are often requirement specifications, test procedures, software, hardware drawings, etc.) But The Process is often odious, and Configuration Management is the nun at the front of the class in a Catholic school. They're not built to try things quickly, explore the frontiers, and then back up and go in a different direction if needed.
That's only partially true. We could move faster and try things if we wanted to. What limits us is the way that we manage our information. When you really, really get down to it, what is a decision? It's a definition made by people at a certain time. A decision controls something. A decision can be encoded in nouns and verbs, i.e., variables, or classes and methods. If we thought of all the decisions that we captured in prose as being algorithms, or nodes in a larger graph, we could know what affected what, and try something, check the result, then decide to keep it or dump it. It's probably not easy to set up, but the problem seems easy enough to understand.
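To make the graph idea concrete, here's a toy sketch. Everything in it is hypothetical (the `Decision` class, `try_change`, the wall-thickness example); it just illustrates decisions as nodes whose downstream effects are traceable, with a change that's checked and either kept or reversed:

```python
# Toy model: decisions as nodes in a dependency graph.
# All names here (Decision, try_change, the example values) are made up.

class Decision:
    def __init__(self, name, value):
        self.name = name
        self.value = value
        self.affects = []   # downstream decisions this one influences

    def depends_on(self, other):
        other.affects.append(self)

    def downstream(self):
        """Names of everything that may need re-checking if this changes."""
        seen, stack = set(), [self]
        while stack:
            node = stack.pop()
            for child in node.affects:
                if child.name not in seen:
                    seen.add(child.name)
                    stack.append(child)
        return seen

def try_change(decision, new_value, check):
    """Apply a change, check the result, keep it or roll it back."""
    old = decision.value
    decision.value = new_value
    if check(decision):
        return True          # keep the change
    decision.value = old     # reverse the decision
    return False

# Example: changing a wall thickness ripples to assembly weight.
thickness = Decision("wall_thickness_mm", 3.0)
weight = Decision("assembly_weight_kg", 12.0)
weight.depends_on(thickness)

kept = try_change(thickness, 2.5, check=lambda d: d.value >= 2.0)
```

The point isn't the code; it's that once a decision is a node rather than a paragraph of prose, "what does this affect?" and "undo it" become queries instead of archaeology.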
Even if we didn't make any changes to the way we operate, it seems like we should be able to submit experiment proposals to change control boards the same way we submit problem reports and change proposals—ask for explicit permission to try x on subset y for t time, report back with results, then commit to continuing the experiment or reverting it or accepting it as a change on the whole set of things.
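An experiment proposal like that could be a record as structured as a change proposal. A minimal sketch, with all names hypothetical (`ExperimentProposal`, `Outcome`, the checklist example):

```python
# Hypothetical record for a time-boxed experiment submitted to a change board.
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Outcome(Enum):
    CONTINUE = "continue the experiment"
    REVERT = "revert to the baseline"
    ACCEPT = "accept as a change on the whole set"

@dataclass
class ExperimentProposal:
    change: str                         # the "x" we want to try
    subset: list                        # the "y" it applies to
    duration_days: int                  # the "t" time box
    results: list = field(default_factory=list)
    outcome: Optional[Outcome] = None   # the board's eventual decision

    def report(self, result):
        self.results.append(result)

    def decide(self, outcome):
        self.outcome = outcome

# Example: try a new checklist on two units for 30 days, then decide.
prop = ExperimentProposal(
    change="new inspection checklist",
    subset=["unit A", "unit B"],
    duration_days=30,
)
prop.report("defect escapes down on trial units")
prop.decide(Outcome.ACCEPT)
```

The three outcomes are the whole trick: the board explicitly grants permission to try, and the revert path is part of the proposal from day one rather than an embarrassment negotiated later.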
(Apologies for the abstractions. That's what it's like to talk about work outside of work sometimes.)
Instead, we plow ahead with extensive planning and review, then execute the plan systematically. That's good and often goes well enough—better than just winging it—but you're limited to what you know ahead of time, and many of the lessons you learn through applying the plan are chalked up to "lessons learned" for a future program, which may or may not ever be used. That paradigm is best for irreversible decisions.