
Theory-based learning and experimentation

for their goal to materialize. Our framework enables strategists to scrutinize and improve their assumptions by raising objections against their theory and by pointing them to critical experiments to learn whether their assumptions hold. Using our results, strategists can in particular identify overlooked critical contingencies. Overall, we suggest how strategists should revise their beliefs about what it takes to be successful in the light of evidence and arguments for and against their strategy.

| INTRODUCTION
A good scientist pushes to the edge of knowledge and then reaches beyond, forming a conjecture-a hypothesis-about how things work in that unknown territory.[…] In the same way, a good business strategy deals with the edge between the known and the unknown (Rumelt, 2011, p. 242f).
Strategists and entrepreneurs must go beyond current knowledge to pursue profitable opportunities (Knight, 1921). Knowledge about the future is incomplete and many contingencies are unforeseen (Simon, 1962). Consequently, scholars have explored how strategists can generate knowledge on the edge between the known and the unknown. One stream of the literature views knowledge generation under uncertainty as a process of experimentation. Implementing a new business strategy can be seen as an experiment (Pillai, Goldfarb, & Kirsch, 2020), and strategists who follow scientific principles and systematically develop hypotheses and run experiments to test their assumptions make better decisions than strategists who do not (Camuffo, Cordova, Gambardella, & Spina, 2020; Ries, 2011). A second, related stream of the literature proposes that strategists can formulate and use theories to envision a possible future and make decisions on their basis (Felin & Zenger, 2009, 2017; Gavetti & Menon, 2016). Here it has been suggested that theories are particularly valuable when they are falsifiable, and that theories provide focus for strategy implementation.
In the context of the former stream of literature, scholars have started to address the question of how strategists should experiment (Agrawal, Gans, & Stern, 2021), while at the same time emphasizing that experimentation can never completely eliminate uncertainty about a strategy before irreversible investments are made (Gans, Stern, & Wu, 2019). Here it is often assumed that strategists already know the set of experiments they could possibly run and that they have already formed beliefs, albeit probabilistic or ambiguous ones, about the mapping of the results of the experiments to the value of an idea and appropriate ways to implement it.1 The latter stream of literature, however, argues that strategists' initial challenge is a different one: they need first of all to mentally construct a representation or map of what they see as an opportunity. In particular, theories encapsulate a point of view about an uncertain future and what is needed to be successful (Felin & Zenger, 2017; Schmidt, 2015). They guide experimentation and thus guide learning about assumptions (Zellweger & Zenger, 2021). But, as we argue in this article, theories also relate what is observable today to consequences in the future and thus allow for making inferences from early-stage experiments about assumptions that cannot be tested without major investments. Yet if theories are mentally constructed by strategists, it follows that they are likely incomplete and possibly wrong (Knight, 1921; Simon, 1962). Thus, theories can also misguide strategists by pointing them to experiments that are not informative about the success of their idea, or by wrongly suggesting that strategists should go forward with major investments. When strategists construct theories, their challenge is to identify which contingencies even matter. There might also be arguments why one or more of their assumptions are wrong, which could force them to revise their theory (Gardenfors, 1988; Spohn, 2012). This raises the question: How can theories be useful for guiding experimentation and decision-making toward an envisioned future? In particular, how can theories be improved before large or even irreversible investments are made on their basis?
To answer these questions, we propose that theories can be systematically evaluated and improved by learning about and testing the assumptions that they encapsulate. Because theories may be incomplete or wrong, and some assumptions cannot be tested without making major investments, we propose that strategists should exploit an asymmetry in learning from theories, which has also been pointed out by Popper (1959) in the philosophy of science: strategists can learn that they are wrong (i.e., that their point of view and the underlying set of assumptions cannot be a basis for the creation of an envisioned future), whereas they cannot learn that they are ultimately right. Consequently, when contingencies are not fully knowable, theories should be considered tentative and subject to further revision.
Concretely, we develop a normative framework based on modeling a strategist's theory as a set of premises that imply a "conjecture," which is a belief that formulates a future possible state of the world that is associated with success (Prahalad & Hamel, 1994; Rindova & Martins, 2021). For example, the conjecture that it is possible for Tesla Motors as a manufacturer of electric vehicles to become one of the major players in the high-volume segment of the car market has been central to the company's strategy at least since 2006. A theory is then a strategist's subjective formulation of the necessary and jointly sufficient conditions under which the conjecture will be true. We call such conditions "premises"; they encapsulate what strategists believe to be the major contingencies and how these map onto the conjecture. For example, Tesla's strategists hypothesized that the successful development of a battery that provides enough power for long-range rides is a necessary condition for the conjecture about success in the market for electric vehicles. Other contingencies (e.g., a change in the oil price) may be considered less relevant for the conjecture and thus not be included in the theory.1 Using our framework, strategists can then identify testable premises, where the results of such tests allow them to make inferences about assumptions that are not testable without making major investments, thus mitigating the "paradox of entrepreneurship" noted by Gans et al. (2019). For example, from experiments with batteries in prototypes Tesla may infer whether, given its other assumptions, the plan to become a major player in the high-volume segment is feasible.

1 In the formal literature on the topic, experimentation is conceptualized as updating prior beliefs using Bayes' rule after receiving a (potentially noisy) signal from a fixed space of possible signals (e.g., Agrawal et al., 2021). However, the space of possible signals is assumed to be known to the strategist and, importantly, unchanging; that is, strategists face a signal extraction problem rather than the problem of evaluating and possibly changing the mapping from possible signals to outcomes, which is the challenge highlighted in our article.
The challenge is, of course, that strategists' assumptions and how they relate to each other (and thus what would be valid inferences) may be wrong. For instance, Tesla's strategists may have been correct that solving the battery problem is necessary and the oil price is irrelevant for the success of Tesla's overall strategy, but they could have overlooked other necessary conditions for success, for instance related to charging infrastructure. To account for this possibility, in our framework strategists may learn from an argument formulated as an "objection," which is the statement that a belief in their theory is wrong and which is backed by a "counter-theory": a theory that explains why that belief is wrong. For example, Tesla's strategists realized that a key premise for their conjecture about success was that the battery of electric vehicles must be large enough to supply sufficient driving range. An argument against the conjecture would be that a large battery will also make electric vehicles so costly to make that they will be more expensive than comparable gasoline-powered cars, and that customers are not willing to pay higher prices for electric vehicles. Our framework helps strategists learn from such arguments against the conjecture or one of the premises even if the arguments are rejected, because rejected arguments can expose hidden premises: to reject the argument and continue believing in demand for electric vehicles in the volume segments, Tesla's strategists must now also believe that the company will be successful despite having higher prices due to large batteries. Specifically, the belief that customers are willing to pay higher prices for electric vehicles now becomes a premise: a critical assumption on which the strategist should consequently focus attention (Ocasio, 1997) and possibly learn about through experimentation (Camuffo et al., 2020).
As the basis for our framework, we formulate two axioms. First, strategists should be open to revising their assumptions when they learn that they are wrong. This means that strategists learn from counter-theories and integrate new assumptions into their theories, but also give up beliefs if they turn out to be wrong. It also means that while strategists' beliefs are subjective, these beliefs are constrained by the requirement that strategists logically deduce what follows from what they already believe, and that they resolve logical contradictions if they appear. Second, strategists should order their beliefs in terms of how willing they are to question them. In particular, strategists should be as willing to question a belief (call it C) as they are willing to question its weakest premise, that is, the strategist's belief about the weakest necessary condition that, together with their other beliefs, forms a sufficient condition for C. Using our framework, we then derive three formal results about learning from counter-theories. First, we show under which conditions a strategist who is exposed to one or more counter-theories should continue to believe that the conjecture is true. Second, we show what, and in particular which hidden premises, the strategist learns from counter-theories. And third, we show when the strategist should focus experiments on such newly learned beliefs.
In the following, we develop our framework. While our core results about experimentation and learning are mathematically deduced, our exposition is verbal, using a stylized version of the case of Tesla's strategy from the perspective of 2008 as an illustrative example. We then discuss the implications of our framework and how it relates to and complements other approaches for dealing with uncertainty, including learning by applying Bayes' rule (e.g., Camuffo et al., 2020; Gans et al., 2019), recent work on strategy process and the problem-based view (Nickerson, Yen, & Mahoney, 2012; Nickerson & Zenger, 2004), as well as established practitioner approaches and frameworks (Ries, 2011; Schoemaker, 1993). We close by discussing limitations and providing guidance for future research.

| Conjectures, theories, and premises
Our framework is based on the idea that managers can develop knowledge about an unknown future by formulating theories that encapsulate exclusive, firm-specific points of view and then testing and refining the underlying assumptions (Felin & Zenger, 2009, 2017; Gavetti & Menon, 2016; Schmidt, 2015). Nickerson and Argyres (2018) argue that the knowledge embodied in theories is an essential input into strategic problem formulation and thereby guides strategic decision-making and strategy implementation. While the process of formulating and testing theories will typically be a group-level process (Baer, Dirks, & Nickerson, 2013; Nickerson & Zenger, 2004), in order to focus on the underlying core mechanisms, in the following we make the simplifying assumption of an individual-level process.
To illustrate our framework, we use a simplified and stylized version of a strategist's thinking about Tesla from the perspective of 2008, when Tesla had just introduced its first model, the Tesla Roadster. We use the example for the purpose of illustrating our framework and therefore claim neither descriptive accuracy nor completeness. However, as noted in the introduction, theories are likely incomplete and assumptions wrong. The value of our framework is precisely that it can help overcome inaccuracy and incompleteness in theories.
After Elon Musk had become a major shareholder in Tesla, in 2006 he revealed his "secret masterplan" in a blog post that described his point of view about how it would be possible for Tesla to create sales for electric vehicles first in the high-end and later in the high-volume segments of the market. Concretely, his reasoning was: The overarching purpose of Tesla Motors (and the reason I am funding the company) is to help expedite the move from a mine-and-burn hydrocarbon economy toward a solar electric economy, which I believe to be the primary, but not exclusive, sustainable solution. Critical to making that happen is an electric car without compromises, which is why the Tesla Roadster is designed to beat a gasoline sports car like a Porsche or Ferrari in a head to head showdown. [...] The strategy of Tesla is to enter at the high end of the market, where customers are prepared to pay a premium, and then drive down market as fast as possible to higher unit volume and lower prices with each successive model (Musk, 2006).
Central to this reasoning was the idea that Tesla would be successful in the high-volume segments of the market if certain conditions were fulfilled. We call such a belief a conjecture about what will be true in the future (cf. Nickerson & Argyres, 2018): a future possible state of the world that is associated with success (what Prahalad & Hamel, 1994, have termed a "strategic intent"). The Merriam-Webster (2021) dictionary defines a conjecture as an "inference formed without proof or sufficient evidence". A core part of this definition is that conjectures are formed on the basis of incomplete knowledge and that they are based on an inference. Furthermore, Musk's statement above identifies the underlying assumptions from which an inference toward the conjecture is made. Concretely, these assumptions include the belief that Tesla's high-end car is perceived to be superior to traditional cars with internal combustion engines ("ICE-powered cars") and that achieving success in the high-end segment leads to success for electric vehicles in the volume segments. Missing from this theory are issues such as charging infrastructure, costs, or competition. As we will show further below, such overlooked issues can be included when strategists learn hidden premises from arguments about why their theories are incomplete or wrong.
In our framework, theories contain the assumptions behind a conjecture in the form of connected beliefs. A belief can be any elementary statement (e.g., Customers perceive Tesla's high-end electric car to be equal or superior to ICE-powered high-end cars by established brands like Porsche, Ferrari, Mercedes, and Audi) or a sentence that relates statements to each other using standard logic (e.g., If customers perceive Tesla's high-end electric car to be equal or superior to ICE-powered high-end cars by established brands like Porsche, Ferrari, Mercedes, and Audi, Tesla will sell as many or more high-end electric cars as the number of ICE-powered high-end cars sold by these established brands). Combining these allows for drawing logical conclusions (e.g., to continue the example, a strategist who holds the two aforementioned beliefs will conclude that Tesla will sell as many or more high-end electric cars as the number of ICE-powered high-end cars sold by these established brands). For depicting beliefs and connections between beliefs we use standard logic notation.2 In our notation, capital letters denote elementary statements. Both elementary statements and sentences that relate statements to each other can be either true or false.
Theories are based on drawing logical conclusions from premises, which are the strategist's assumptions behind a conjecture: the conditions that are, from the perspective of a strategist, necessary and jointly sufficient for a particular future to materialize. We call a belief A of the strategist a premise for another belief Z of the strategist if the strategist thinks that A is a necessary condition for Z and that A, together with his other beliefs B, C, and so on, forms a sufficient condition for Z. More generally, each belief Z that logically follows from other beliefs is associated with a set of premises (all the necessary conditions that together form a sufficient condition for the belief Z). We call this set of premises a theory for the belief Z (see Table 1 for a definition of the key terms). Importantly, if a strategist believes that all premises are true, then he must also believe that the conjecture is true, as it logically follows from the premises.
Learning on the basis of theories requires strategists to reason using deductive logic. The use of logic allows them to formulate their assumptions and enables them to think with precision about what needs to be true so that, according to their thinking, an envisioned future will materialize. It also allows them to identify testable assumptions and then draw inferences from these to beliefs that are not testable ex ante or that are only testable at prohibitively high costs.
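The deductive structure described here can be sketched in code. The following is a hypothetical formalization of our own (the function name and the encoding of implications are not part of the paper's formal apparatus): elementary beliefs are atoms, implicational beliefs map sets of antecedents to a consequent, and a conjecture is believed only when it follows from the premises by repeated modus ponens.

```python
# Illustrative sketch: a theory as a set of premises from which a
# conjecture is deduced by forward chaining (repeated modus ponens).
# The names T_A ... T_Z echo the paper's Tesla example; the code itself
# is an assumption-laden sketch, not the authors' own formalism.

def deductive_closure(atoms, implications):
    """Return all atomic beliefs derivable from `atoms` using
    implications of the form (frozenset_of_antecedents, consequent)."""
    known = set(atoms)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in implications:
            if antecedents <= known and consequent not in known:
                known.add(consequent)
                changed = True
    return known

# Tesla-style example: T_A and T_B are elementary premises; the
# implicational premises play the roles of T_C (T_A & T_B => T_Y)
# and T_D (T_Y => T_Z).
atoms = {"T_A", "T_B"}
implications = [
    (frozenset({"T_A", "T_B"}), "T_Y"),  # belief in the role of T_C
    (frozenset({"T_Y"}), "T_Z"),         # belief in the role of T_D
]

beliefs = deductive_closure(atoms, implications)
print("T_Z" in beliefs)  # True: the conjecture follows from the premises
```

Removing any one premise breaks the chain: with T_A absent, T_Y and hence T_Z are no longer derived, mirroring the claim that each premise is individually necessary.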
To be useful for theory-based learning and experimentation, beliefs have to be formulated in a way that makes them at least in principle verifiable. In the Tesla case, we therefore, for example, refer to a specific segment and competing car manufacturers in formulating the focal conjecture: Tesla will sell enough electric cars in the midsize segment to be among the five largest players in that segment.3

2 In standard logic, logical operators are used to connect statements. In this article we use the following ones: ¬ denotes negation (¬A means that A is false); & denotes logical conjunction (A & B is true only if both A and B are true); and ⇒ denotes logical implication (A ⇒ B means that if A is true then B is true). In our figures, the beliefs depicted in the boxes are either atomic statements or beliefs that relate statements. We use a verbal exposition for ease of presentation, but explicitly mention the expressed logical connections between statements to add precision to the example.

Objection: A reason or argument presented in opposition. An objection is a belief that the conjecture or a premise of the theory for the conjecture is false.

Counter-theory: See the above definition of theory. A counter-theory is a theory for an objection: the set of premises that imply the objection.

FIGURE 1 Key concepts (illustrated with the Tesla example)

Figure 1 depicts Tesla's theory for T_Z, the focal conjecture in the example (beliefs of the form X refer to beliefs in general, whereas beliefs of the form T_X refer to our Tesla example): T_A to T_D are necessary and jointly sufficient conditions for T_Z, as T_Z would no longer follow if any of these beliefs were removed (Figure 1 also clarifies how the key concepts of belief, conjecture, theory, and premise relate to each other). If the strategist believes T_A, T_B, T_C, and T_D, he also believes T_Z because it follows logically from the premises, which thus together form a sufficient condition for T_Z. As the strategist has no other reasons to believe T_Z, the beliefs T_A, T_B, T_C, and T_D are the respective necessary conditions for T_Z and, therefore, by our definition, are the premises of T_Z. Throughout this article, and in particular in the examples depicted in the figures, we assume that the strategist's theory provides an exclusive, firm-specific point of view on the situation (Felin & Zenger, 2017).4

Under uncertainty, the strategist cannot be sure that reaching the conjecture is possible at all, and thus it makes sense to first of all ponder the question of what would be necessary to reach it. As is evident from the Tesla example in Figure 1, the theory formulated there is specific to Tesla. Exclusivity means that the strategist has no reasons to believe the conjecture other than the ones that are explicitly stated, and that he thus believes the conjecture to be true if and only if all premises are believed to be true (this is illustrated by the curly brackets in the figures).5 More importantly, exclusivity also means that all beliefs that are premises for Z are strategic beliefs, because without any of them the strategist would not believe that Z is possible. Strategic beliefs are those beliefs the strategist considers critical for reaching the conjecture, and therefore the strategist should pay close attention to them and, if feasible, test them.
Note that the strategist might also have other beliefs that are relevant for thinking about the conjecture but which are not premises, as they are not connected to the focal conjecture (e.g., the belief that past attempts to commercialize electric vehicles in the volume segment of the market have not been successful). Furthermore, note that implications are also beliefs in our framework. For example, in Figure 1, T_D (which is the belief that T_Y ⇒ T_Z) is not mathematically deduced from the other beliefs. Instead, it is an expression of the strategist's belief relating customers' perception of Tesla's midsize cars relative to ICE-powered cars to sales for them relative to competitors. But in addition to the expression of such beliefs, we also assume that the strategist forms the logical consequences of his beliefs. That is, if the strategist believes that T_A, T_B, T_C, and T_D are true, the strategist will deduce that he also believes T_Z. Thus, even though the beliefs of the strategist are subjective, they are constrained by the requirements that logical consequences among the beliefs are formed and believed and that logical consistency among beliefs is maintained.
Taken together, the starting point for our analysis is a strategist who has an exclusive theory for a focal conjecture Z, which is a logically consistent set of beliefs. In addition, the strategist can also actively consider other beliefs that are relevant for the focal conjecture but which are not premises for Z (see Figure 1). In the following, we are interested in how theory-based learning and experimentation affect whether or not the strategist continues to believe that Z is true after scrutinizing his theory through experiments and objections backed by counter-theories. We do not, however, address the question of how the strategist acts upon his beliefs or how he continues learning (even if his theory has been refuted), but we will return to these issues in the discussion.

| Learning through contradiction and maintaining consistency
At the time of writing, it seems that the conjecture about sales of electric vehicles in the midsize segment may actually materialize, though in 2008 this was still very unclear and a purely hypothetical scenario. In fact, it would have been entirely possible that Tesla's strategy would fail. The ultimate reasons for success or failure could not be known in 2008. Therefore, any theory about success is likely incomplete and at least partly wrong. More generally, due to the nature of uncertainty, the strategist cannot know if his beliefs about necessary and sufficient conditions and relations among beliefs are correct, and they are therefore subject to revision. By making his assumptions explicit, however, the strategist can learn that he is wrong.
Like scientists, strategists can systematically scrutinize their assumptions by formulating them as testable hypotheses and then engaging in experimentation (Zellweger & Zenger, 2021). Yet not all assumptions are testable without major investments (Gans et al., 2019). Theories help to overcome this problem, as they provide a subjective mapping from assumptions that are testable today to consequences that are only knowable in the future. To learn on the basis of theories thus clearly requires more than formulating and testing assumptions. It also requires linking beliefs that can only be tested by making major investments to premises that are testable at relatively low cost before the major investments are made. More generally, it requires exposing oneself to the possibility that one's theories are wrong or incomplete and thus in need of revision. While the outcome of an experiment can show that an assumption has been wrong, another possibility is that there is an argument that provides a reason why a premise is wrong. We explain this in more detail below, but the general point is that both evidence obtained through an experiment and arguments against the conjecture or one of the premises, in the form of objections backed by counter-theories, can create contradictions with the theory that must be resolved by revising one's theory (Gardenfors, 1988). As in science, strategists can not only learn from experiments but also from juxtaposing different theories that contradict each other.
To do so, the strategist must detect any inconsistencies among his beliefs, such as when a new belief he forms contradicts one or more of his existing beliefs related to the conjecture. We thus assume in our framework that the strategist engages in a mental simulation and thinks through the logical consequences of any new beliefs (e.g., from counter-theories) combined with all existing beliefs. On the other hand, we allow strategists to hold incorrect beliefs, as long as they are consistent with other currently held beliefs related to the conjecture. In fact, the key point of our framework is that the strategist learns that he has been wrong by detecting and resolving inconsistencies among different theories. This is of particular relevance in the process of pondering arguments against one's theory that are given in the form of objections and counter-theories. This also means that the set of premises (what the strategist believes to be necessary and/or sufficient conditions for the conjecture) is not fixed but may change and, therefore, the strategist's theories may change. Below we derive formal results about this theory change process.
In our framework, the basic unit of analysis is therefore the set of beliefs of a strategist related to the conjecture, and theories (as defined above) are subsets of the beliefs of a strategist related to a conjecture. Moreover, in order to learn on the basis of theories, the strategist also needs to apply logic (e.g., he needs to infer the consequences of a refuted assumption or a counter-argument). Of course, it would pose unrealistic demands on a strategist to ask for consistency among all of his beliefs. We, however, ask from a strategist what we typically also ask from a scientist: his beliefs related to a focal conjecture should be consistent.6 This leads to the first axiom of our framework.7

Axiom 1 (consistency and consequences). A strategist's beliefs related to a conjecture are logically consistent. Beliefs that are logical consequences of existing beliefs related to a conjecture are included in the strategist's beliefs related to the conjecture. If contradictions arise, they are resolved by giving up beliefs.
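The consistency requirement of Axiom 1 can be illustrated with a small sketch (a hypothetical encoding of our own: the negation of an atom "X" is written "¬X" as a string prefix, which is not part of the paper's notation): after new beliefs are added, a contradiction is present exactly when some statement and its negation are both believed.

```python
# Illustrative sketch of Axiom 1: detect contradictions (a statement
# and its negation both believed) in a belief set. Encoding negation
# as a "¬" prefix on the atom name is our illustrative convention.

def negate(atom):
    """Return the negation of an atom: "X" <-> "¬X"."""
    return atom[1:] if atom.startswith("¬") else "¬" + atom

def contradictions(beliefs):
    """Return the atoms that are believed both true and false."""
    return {b for b in beliefs if negate(b) in beliefs
            and not b.startswith("¬")}

beliefs = {"T_A", "T_B", "T_C"}
print(contradictions(beliefs))   # empty set: the beliefs are consistent

# A counter-theory delivers the new belief ¬T_A ...
beliefs.add("¬T_A")
print(contradictions(beliefs))   # {'T_A'}: must be resolved by giving
                                 # up either T_A or ¬T_A (Axiom 1)
```

Which side of the contradiction is given up is governed by belief strength, introduced in the section on the weakest premise below.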
Of course, it is always an option to formulate theories in a way that they cannot easily be contradicted, or to refrain from experiments that could disconfirm them. For instance, strategists often use blurry concepts or measurements in formulating beliefs, or avoid thinking about logical relations (necessary and sufficient conditions) among their beliefs. But such tendencies in no way invalidate our argument, quite the contrary. Only when strategists impose discipline on their theory formulation and revision process will they be able to detect contradictions inside their theories, to identify experiments, and to integrate counter-theories into their theories, which arguably provides them an advantage over strategists who do not use such principles (cf. Camuffo et al., 2020). In addition, they should also be less likely to persist in pursuing strategies based on wrong assumptions.

| The weakest premise
When the strategist has formulated a theory for a conjecture, he can put it to a test by an experiment. The logic of such experimentation is to try to refute one of the premises. Here our definition of a theory implies an asymmetry when it comes to evaluating it: because premises are, by definition, individually necessary and jointly sufficient for a conjecture, a conjecture is considered true only if all premises are considered true, but it is no longer considered true if just one of its premises is shown to be wrong. Therefore, a theory can be efficiently scrutinized by singling out one of its premises for scrutiny through experimentation. If this premise is supported by the outcome of the experiment, the conjecture continues to be accepted (and another premise may be scrutinized). If it turns out to be false, the strategist learns that an assumption he believed to be necessary for the conjecture is in fact wrong, and he thus no longer has sufficient reason to believe that the conjecture can be reached.
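This asymmetry can be made concrete in a short sketch (again a hypothetical formalization: the premise names and the boolean flags are illustrative, not from the paper): the conjecture survives a test only while every premise survives, while a single refuted premise removes the reason to believe it.

```python
# Illustrative sketch of the evaluation asymmetry: the conjecture is
# believed only while all premises are believed; refuting any single
# premise withdraws belief in the conjecture.

def conjecture_believed(premises):
    """premises: dict mapping premise name -> currently believed (bool)."""
    return all(premises.values())

premises = {"T_A": True, "T_B": True, "T_C": True, "T_D": True}
print(conjecture_believed(premises))   # True: all premises stand

# An experiment refutes the single premise T_A ...
premises["T_A"] = False
print(conjecture_believed(premises))   # False: sufficient reason is gone
```

Note that no number of supportive experiments can flip the asymmetry the other way: confirming one premise merely leaves the conjecture tentatively accepted while the remaining premises await scrutiny.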
This leaves us with the question of which premise should be questioned first or, in other words, how the strategist should focus his experimentation. To answer this question, we introduce the notion of belief strength, which we define as follows:

Definition (belief strength). The strength of a belief is given by the strategist's unwillingness to question it.8

We find it useful to assume that the strategist orders all beliefs related to the conjecture with respect to his willingness to question them. Essentially, belief strength is a measure that helps the strategist to organize and focus his experimentation when his beliefs are tentative and at most only partially grounded in factual knowledge.9 The notion of belief strength is particularly useful because it allows solving a dilemma of the strategist: as the conjecture expresses the strategist's strategic intent (Hamel & Prahalad, 1989), he is in principle unwilling to question it. On the other hand, running experiments that may falsify premises means putting the conjecture to a test, and thus being willing to question it! This paradox is resolved in our framework by requiring that the willingness to question the conjecture be the same as the willingness to question the weakest premise in the theory for the conjecture. Thinking about necessary and sufficient conditions for the conjecture thus forces the strategist to assign a strength to the conjecture that is equal to that of its weakest premise. The goal is, however, to find a theory for the conjecture in which every premise is stronger than any plausible objection, so that the strategist can have a strong belief in the conjecture even though it is only as strong as its weakest premise.

8 Thus, a belief that is easily questioned is weak. To avoid confusion, note that mathematicians sometimes speak using a different convention: they speak of strong assumptions when they refer to assumptions that should be questioned. We use weak and strong in the everyday sense. For example, the statement "I strongly believe that Tesla will become the manufacturer that sells the most cars worldwide" would express that one is less willing to question Tesla's leadership in sales than if one made the statement "I only weakly believe that Tesla will become the manufacturer that sells the most cars worldwide."

9 Belief strengths and probabilities are important complements. Probabilities are very useful when they can be estimated from data. Strategists' theories, however, contain beliefs (such as T_C or T_D in the example) for which the probability cannot be estimated from data ex ante, as they express relations between events that will take place in the future. Moreover, sometimes such beliefs express relations between singular events. When probabilities cannot be estimated from data, strategists must thus formulate subjective conditional beliefs (how what is testable today relates to what will happen in the future). To formulate such subjective conditionals in a Bayesian fashion is problematic. First, a technical problem appears: strategists would have to formulate probabilities for a number of conjunctions that grows exponentially in the number of elementary statements, as Bayesian updating over possible logical relations among elementary statements would not be defined if there were not ex ante probabilities for all possible conjunctions, given a system of statements (see Harman, 1986, p. 25ff, for a conceptual, and Griffiths & Tenenbaum, 2009, for a more technical discussion of this problem). Second, we point out that untestable subjective conditional beliefs can be improved by listening to and integrating counter-theories. However, here a second technical problem appears: it would not be defined how systems of probabilistic conditional beliefs should be updated given a counter-theory, as Bayesian updates are conceptually meaningful when priors are conditioned on facts, not on arguments about relations of statements raised in a debate. For these reasons, belief revision and belief strengths are the more appropriate technical means to cope with revising ex ante beliefs about relations of beliefs (in particular, conditionals: beliefs that express logical relations). Conceptually, belief revision and belief strength are more appropriate, as here it is defined how strategists should cope with contradictions (and contradictions will appear when relations of statements are debated ex ante, before they can be tested), while Bayesian updates cannot cope with such contradictions (Bryan, Ryall, & Schipper, 2021).
To enable a systematic comparison and revision of beliefs, it is useful to apply the "weakest premise principle" to any belief: the strength of a belief that logically follows from a set of premises is equal to the strength of its weakest premise. In other words, the strategist should question any belief as much as he is willing to question the weakest premise of this belief. This principle is well established in cognitive science (e.g., Gardenfors, 1988) and philosophy (Spohn, 2012).
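For readers who find a computational analogy helpful, the weakest premise principle can be sketched as a recursion over a premise tree. The belief names and numeric strength values below are purely illustrative; the framework itself requires only an ordering of beliefs, not numbers.

```python
# Sketch: belief strength under the weakest-premise principle.
# A belief is either elementary (with a subjectively assigned strength)
# or derived from a set of premises; its strength is then the minimum
# (weakest) strength among its premises. All names and values are
# hypothetical illustrations, not part of the formal framework.

def strength(belief, elementary_strength, premises_of):
    """Return the strength of `belief`.

    elementary_strength: dict mapping elementary beliefs to a number
                         (higher = less willing to question).
    premises_of:         dict mapping derived beliefs to their premises.
    """
    if belief in elementary_strength:
        return elementary_strength[belief]
    return min(strength(p, elementary_strength, premises_of)
               for p in premises_of[belief])

# Tesla-style toy example: T_Z follows from T_A..T_D, and T_A in turn
# from sub-premises T_AA..T_AD.
elem = {"T_B": 5, "T_C": 4, "T_D": 6,
        "T_AA": 7, "T_AB": 5, "T_AC": 2, "T_AD": 6}
prem = {"T_Z": ["T_A", "T_B", "T_C", "T_D"],
        "T_A": ["T_AA", "T_AB", "T_AC", "T_AD"]}

assert strength("T_Z", elem, prem) == 2  # T_Z inherits the strength of T_AC
```

The conjecture T_Z is exactly as strong as the weakest belief anywhere in its premise tree, which is the formal content of the principle.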
In the Tesla example, one can argue that among the premises for T_Z, the strategist is most willing to question T_A (which is thus the weakest premise) and, as a consequence, should focus experimentation on scrutinizing T_A while tentatively considering T_B, T_C, and T_D as true. However, if such an experiment showed that T_A is wrong, a contradiction would arise and T_A would be given up (in line with Axiom 1). More generally, if contradictions arise among a strategist's beliefs, belief strength also implies willingness to give up: to resolve contradictions, the strategist gives up weaker rather than stronger beliefs.
Taken together, we assume that a strategist has a subjective ordering over his beliefs related to the conjecture in terms of how willing he is to question them. It describes the order in which the strategist would put his beliefs to a test through experiments or be willing to give up beliefs as a result of performing a mental simulation. However, any ordering needs to fulfill the principle that a belief is as strong as its weakest premise. 10 This is stated in our second axiom as follows:

Axiom 2. (belief strength and the weakest premise): The strategist orders the beliefs considered relevant for thinking about a focal conjecture with respect to their strength. The strength of a belief (and in particular, a conjecture) is equal to the strength of its weakest premise (the weakest necessary condition for this belief that, together with other beliefs, forms a sufficient condition for this belief). If contradictions arise, only as few beliefs as necessary are given up, and weaker beliefs are given up first. 11

As Axiom 2 requires the strategist to give up only as few beliefs as necessary, by definition only the weakest premise and its logical repercussions are given up when the weakest premise turns out to be false. This has important implications for theory formulation and, in particular, for testing through experimentation, as we show next.
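Axiom 2's rule for resolving contradictions (give up as few beliefs as necessary, weakest first, and all tied weakest beliefs, per footnote 11) can likewise be sketched. The encoding of a contradiction as a minimal inconsistent set of belief names, and the strength values, are hypothetical simplifications.

```python
# Sketch of Axiom 2's contradiction-resolution rule: when a minimal set of
# beliefs is jointly inconsistent, give up the weakest member(s). If several
# members tie for weakest and no weaker belief can resolve the contradiction,
# all tied beliefs are given up (footnote 11). Names/values are hypothetical.

def resolve(contradiction, strengths):
    """Return the set of beliefs to give up from an inconsistent set."""
    weakest = min(strengths[b] for b in contradiction)
    return {b for b in contradiction if strengths[b] == weakest}

# Tesla illustration: an experiment contradicts the premises that implied
# T_A; T_A is the weakest belief involved, so it alone is dropped.
strengths = {"T_A": 2, "T_B": 5, "T_C": 4, "T_D": 6}
assert resolve({"T_A", "T_B", "T_C", "T_D"}, strengths) == {"T_A"}
```

Note that the axiom itself is about the strategist's mental process; the function only mirrors its selection rule.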

| Sub-premises and focused experimentation
In the formulation of a theory, each of the premises might itself be the consequence of "sub-premises," which are the necessary and jointly sufficient conditions for the premise or, in other words, a theory for the respective premise. More generally, theories can have a recursive structure, and we can therefore apply the definition of a premise recursively. According to our definition of a theory as an exclusive explanation, all such sub-premises are also premises of the conjecture.

10 In the extreme, some beliefs are never questioned as they are always true (like (A&B) ⇒ A), and logically false beliefs (like A ⇒ ¬A) are always questioned.

11 If a contradiction could be resolved by giving up either of two beliefs that have equal strength and there are no weaker beliefs that could be given up to resolve the contradiction, both beliefs will be given up.
To illustrate the idea of sub-premises, see Figure 2. This figure includes not only the premises for the conjecture T_Z but also the sub-premises for the premise T_A: T_AA, T_AB, T_AC, and T_AD. These sub-premises constitute a theory for T_A: they are considered necessary and jointly sufficient for T_A. Specifically, they identify the three criteria that must be fulfilled for customers to perceive Tesla's electric vehicle to be superior to competitors in the high-end segment: emissions (T_AA), acceleration (T_AB), and driving range (T_AC), and T_AD states that these are indeed jointly sufficient. Taking recursion seriously, we can, for example, zoom further into the sub-premise concerning battery charge (T_AC) to identify the conditions under which a battery with the required characteristics can be developed (the two beliefs shown on the lower right of Figure 2 are premises for T_AC).
The notion of sub-premises is important when it comes to identifying the beliefs at which the strategist should target his experimentation efforts. According to Axiom 2, the premise the strategist is most willing to question, and thus give up, is the weakest premise. We thus define a focused experiment as follows:

Definition (focused experimentation). An experiment is called focused when the strategist scrutinizes the weakest premise of the focal conjecture.
The definition of a focused experiment in combination with the notion of sub-premises is useful because many premises cannot be tested ex ante, or doing so would be prohibitively costly. Therefore, if a premise is currently not testable, strategists can identify testable sub-premises by considering what they believe are the necessary conditions for that premise. Ultimately, this process of identifying sub-premises should yield a weakest premise for the conjecture that is testable at low cost.
In the Tesla example, say the strategist considers the assumption about the ability to develop batteries that allow long-distance rides but are not too heavy (T_ACA) to be the weakest premise of the conjecture T_Z, as it was questionable whether that would be attainable given the state of battery technology as of 2008 (throughout the article, ex ante weakest premises of the conjecture and objections are indicated in gray in the figures). A focused experiment would therefore be to test T_ACA, for example, by building a prototype, which means that T_ACA is testable at a cost that is a fraction of the cost of implementing the entire strategy (e.g., investing in a factory and distribution). If it turns out to be false, the strategist would (by Axiom 1) also give up beliefs that are consequences of this assumption, namely that Tesla's high-end electric vehicles will have a battery with 450 km driving range (T_AC), that customers perceive Tesla's cars to be equal or superior to ICE-powered cars by established manufacturers (T_A), and that Tesla would be successful in the midsize segment (T_Z), but he would keep all other beliefs.
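The recursive identification of a focused experiment's target (descending through sub-premises until an elementary weakest premise such as T_ACA is reached) can be sketched as follows. Belief names and strength values are again hypothetical illustrations.

```python
# Sketch: locating the target of a focused experiment by recursive descent.
# Starting from the conjecture, walk the premise tree and return the weakest
# elementary premise; an elementary premise is one the strategist can test
# directly (at low cost). All names and strengths are hypothetical.

def weakest_elementary(belief, elementary_strength, premises_of):
    if belief in elementary_strength:
        return belief
    # Recurse into each premise and keep the weakest elementary one found.
    candidates = [weakest_elementary(p, elementary_strength, premises_of)
                  for p in premises_of[belief]]
    return min(candidates, key=lambda b: elementary_strength[b])

elem = {"T_B": 5, "T_C": 4, "T_D": 6,
        "T_AA": 7, "T_AB": 5, "T_AD": 6,
        "T_ACA": 2, "T_ACB": 6}            # sub-premises of T_AC
prem = {"T_Z": ["T_A", "T_B", "T_C", "T_D"],
        "T_A": ["T_AA", "T_AB", "T_AC", "T_AD"],
        "T_AC": ["T_ACA", "T_ACB"]}

assert weakest_elementary("T_Z", elem, prem) == "T_ACA"
```

The search surfaces T_ACA as the focused-experiment target, matching the prototype test discussed above.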
The Tesla example also shows how belief strength depends on both logical relations among beliefs and subjective attributions of plausibility. According to Axiom 2, beliefs "inherit" their strength from sub-premises, as each belief is as strong as its weakest premise. Thus, in our example the strength of T_Z, T_A, and T_AC is given by the strength of T_ACA. Moreover, note that by definition, if premises are grouped together, the group of premises is again a premise and its strength is equal to that of the weakest sub-premise. In our example, the belief "both T_AB and T_AC are true" (formally written as T_AB & T_AC) is also a premise of the conjecture and its strength is equal to the strength of T_ACA. 12 More generally, in our framework strategists can draw inferences about whether to believe in a conjecture from evidence about its weakest premise. The fact that a premise is as strong as its weakest sub-premise therefore allows for an "indirect" test, via their weakest premise, of beliefs that are currently untestable without major investments. The notion of belief strength thus also helps to overcome a key problem of learning under uncertainty, namely that some relevant beliefs cannot be evaluated based on observations or evidence from an experiment (Gans et al., 2019). In our framework, such beliefs are as strong as their weakest premise, that is, as strong as the underlying necessary assumption the strategist is most willing to question. However, while testing weakest premises allows learning about the assumptions behind a conjecture through experimentation, as we stated earlier, it is of course also possible that the theory itself is wrong or incomplete, and thus strategists may draw wrong conclusions from testing premises. To overcome this problem, our framework also accounts for the possibility of questioning theories by raising objections against a conjecture or its premises, which we address next.

| Objections, counter-theories, and contradictions
So far, we have argued that to learn whether the theory for the conjecture is wrong, a strategist should first become clear about the underlying premises and then test the weakest premise by putting it to an experiment. However, this is only one step. The strategist can also ask himself why his conjecture or any of the premises could be wrong. For example, in the Tesla case one could argue that the conjecture is wrong because Tesla has a competitive disadvantage compared with traditional car manufacturers, as it cannot produce cars at the scale required to be one of the largest car manufacturers, or that the absence of an infrastructure for out-of-home charging could limit the attractiveness of electric vehicles. The strategist can take such arguments he hears from others seriously, or he can actively work to find reasons why he is wrong.
When such an argument is made, an objection is raised. An objection denotes the belief that a premise of the theory for the conjecture, or the conjecture itself, is wrong. Objections provide a basis for learning if they are backed by a counter-theory: a theory that explains why the conjecture or one of its premises is wrong. The idea is analogous to the creation of counter-factuals, which are alternative ways in which the past may have unfolded. Counter-factuals are based on examining the possible consequences of assuming that certain past events (which are known) had been different and comparing these alternative consequences to the outcome that actually occurred (e.g., Durand & Vaara, 2009; Lewis, 1973; Wason, 1960). A counter-theory, on the other hand, takes as its starting point a possible future event about which a theory exists that explains under which conditions it will occur. Thus, while a counter-factual can help pinpoint the past conditions that made a critical difference to a known event, a counter-theory can help identify the present conditions that make a critical difference to a possible future event. Importantly, to improve theories by learning from counter-theories, the respective premises of the theory and the counter-theory must be comparable. In particular, statements about, for example, customers, markets, and technologies must agree in units of measurement, so that contradictions between theory and counter-theory can be identified. We define objection and counter-theory as follows (see also Table 1):

Definition (objection and counter-theory). An objection is a belief that Z or a premise of Z is false. A counter-theory is a theory (as defined above) that implies an objection as its conjecture.
From this definition it follows that a counter-theory will also have a weakest premise and that the objection is as strong as the weakest premise of the underlying counter-theory. When asking why the conjecture could be wrong, the strategist of course considers beliefs that are logically inconsistent with beliefs he considers relevant for the focal conjecture. A counter-theory thus leads to a contradiction, and therefore a strategist who is exposed to a counter-theory will need to revise his beliefs (in line with Axioms 1 and 2). For any theory, there may be multiple counter-theories, as an objection may be raised against each of multiple specific premises of the theory. We use the Tesla example and formulate two counter-theories to illustrate our results. We focus on issues that are likely to have been raised in 2008, such as those related to charging infrastructure and batteries (the mechanisms we describe apply equally well to objections concerning other issues, such as competition or costs). For both, we explain the underlying logic and identify the respective weakest premise.
The first of these is the objection that T_A is false (see Figure 3). The underlying counter-theory raises the point that a dense out-of-home charging infrastructure is necessary for customers to perceive Tesla's electric vehicles to be superior to ICE-powered cars (an issue that is absent in Tesla's theory). It is formulated as a set of premises that jointly imply T_L, where T_L is equivalent to the belief that "T_A is false." The counter-theory argues that unless charging infrastructure is available within 5 miles from all homes in major cities, customers will find charging to be inconvenient (T_LA), that such charging infrastructure will not be built (T_LB), and that the inconvenience caused by its absence means that customers will not perceive Tesla's high-end electric car to be equal or superior to ICE-powered high-end cars by the established brands (T_LC). Because a counter-theory is also based on premises that are necessary and jointly sufficient conditions for the contradictory belief (here T_L), it also has a weakest premise (which, ideally, should be testable). In addition, the strength of the objection is equal to the strength of this weakest premise (according to Axiom 2). In our example, the weakest premise for the objection about the need for charging infrastructure is T_LAB: the belief that customers buying the Tesla Roadster consider charging to be more inconvenient than filling up gas unless there is an out-of-home charging infrastructure available within 5 miles. This premise is testable, for example, by observing the behavior of Roadster customers.
The second objection concerns the fact that battery weight (and thus the range of electric cars) interacts negatively with acceleration (see Figure 4). The counter-theory expresses that it cannot be both true that Tesla's high-end model will have acceleration faster than high-end combustion engine models (T_AB) and a battery with 450 km range (T_AC). The theory behind this objection expresses the argument that a car with a battery sufficiently large for a 450 km range will be heavier than comparable ICE-powered cars (T_MB) and thus so heavy that the car cannot accelerate faster than these cars (T_MA). As such a large battery would indeed make Tesla's cars heavier than comparable ICE-powered cars, T_MA is the weakest premise of this counter-theory. This premise is testable, for instance, by computing the weight that the high-end model cannot exceed to sustain acceleration higher than comparable ICE-powered cars and then checking whether a battery with a 450 km range can be built below the critical weight.

| Learning as integration of new beliefs
The question is now how counter-theories affect the strategist's beliefs regarding the conjecture. We can clarify this by conceptualizing a theory-based learning process by which counter-theories are integrated with the existing beliefs related to the conjecture. In our framework, learning then refers to a mental process by which new beliefs are added to the existing set of beliefs related to the conjecture, while at the same time contradictions are resolved as required by Axiom 1. This process involves considering both existing and new beliefs (here the counter-theories) while simultaneously thinking through implications given the logical relations between existing and new beliefs. Because logical relations also affect the relative strength of beliefs (according to Axiom 2), the relative strength of all beliefs may change as a result of this integration. Contradictions are resolved by removing beliefs (applying Axiom 1) based on the updated strength ordering (applying Axiom 2).
In the rest of this section, we derive formal results based on the following assumptions:

• The strategist follows Axioms 1 and 2.
• One well-defined theory for Z with weakest premise W is given.
• A set of well-defined counter-theories i ∈ {1, …} with weakest premises E_i is given, where each counter-theory i implies that a premise of the theory for the conjecture, denoted S_i, is false, and counter-theory i does not create further contradictions with the theory.
• Counter-theories do not provide alternative explanations for why Z is true, or why a premise of the strategist's initial theory for Z is true, and they do not contradict each other.
• Theory and counter-theories have been formulated in a way that the relevant premises and sub-premises have been identified (in particular, an ex ante testable weakest premise has been identified for the theory and for each of the counter-theories).
• The beliefs relevant for the conjecture are the result of integrating the theory and the counter-theories (no other beliefs are given).
Making these assumptions, we will derive two formal results about the mental process of learning by integrating counter-theories and resolving contradictions (Propositions 1 and 2) and one formal result about focused experimentation (Proposition 3).

| Maintaining the belief in the conjecture
The strategist wants to decide whether or not to continue investing with the aim of creating a world in which the conjecture is true. Our first result specifies the conditions under which a strategist who is exposed to one or more counter-theories will continue to believe that the conjecture is true. In brief, he should continue to believe in the conjecture if he considers the weakest premise of his theory to be stronger than the weakest premise of the strongest objection. More precisely, assume the strategist's beliefs contain a theory for the focal conjecture Z, which he thus believes to be true. The theory's weakest premise is denoted by W. The strategist now integrates into his beliefs one or more counter-theories that each contradict a premise of the theory. Specifically, each counter-theory i implies that a premise S_i of the theory is false, with i ∈ {1, …}. Each counter-theory thus contains as its conjecture the belief that S_i is false (formally ¬S_i, where ¬ denotes negation) and a set of premises for ¬S_i. In the example, the counter-theories contradict T_A and T_AB & T_AC, which thus correspond to the respective S_i's in our notation.
The beliefs ¬S_i are ordered by relative strength so that ¬S_1 is the strongest objection, ¬S_2 the second strongest, and so on. Each counter-theory may contain multiple premises, so denote the weakest premise of ¬S_i by E_i (recall that by Axiom 2 the strength of ¬S_i is equal to the strength of E_i). As ¬S_1 is the strongest objection, E_1 is stronger than any E_i with i > 1. The following proposition then follows from Axioms 1 and 2, stating the conditions under which the strategist, after hearing one or more counter-theories, continues to believe that the conjecture Z is true.
Proposition 1. (maintaining belief in the conjecture): A strategist follows Axioms 1 and 2, and his beliefs related to the conjecture Z imply that it is true. The strategist now learns one or more counter-theories related to thinking about the conjecture Z. Then, if E_1 is weaker than W, the strategist will continue to believe that Z is true.
Proof: See Appendix A.
To get an intuition for Proposition 1, see the left side of Figure 5. Per Axiom 2, it suffices to remove the weakest premise of each of the counter-theories to resolve all contradictions between theory and counter-theories. The strength of the strongest objection is given by its weakest premise E_1. If E_1 is weaker than the weakest premise of the theory, W, then E_1 and not W is given up, as weaker beliefs are given up first to resolve contradictions (Axiom 2). As E_1 is stronger than E_2 (the weakest premise of the second strongest objection), W is also stronger than E_2, and thus the contradiction caused by the second strongest objection is also resolved in favor of the theory: W is maintained and E_2 is given up. As W is stronger than the strongest objection, the weakest premises of all objections are discarded and W is maintained.
To illustrate Proposition 1 with our Tesla example, recall that the weakest premise of the theory was T_ACA: the assumption that it is feasible to develop a battery that provides sufficient charge for a 450 km ride. In the notation of Proposition 1, T_ACA thus corresponds to W. According to Proposition 1, W has to be compared with E_1, the weakest premise of the strongest objection.
To determine what E_1 is, we have to compare the weakest premises of the respective objections with each other, as the strength of objections is given by the strength of their weakest premises.
Say the strategist considers it more plausible that Tesla Roadster customers consider charging to be more inconvenient than filling up gas unless there is an out-of-home charging infrastructure available within 5 miles (T_LAB) than that the weight of batteries will prevent superior acceleration (T_MA), and he is thus most concerned that T_LAB may actually be true. Thus, T_LAB corresponds to E_1 in the notation of the proposition. After hearing the counter-theories behind the two objections, in order to decide whether to (at least tentatively) continue to believe that Tesla will sell enough electric cars in the midsize segment to be among the five largest players in that segment (T_Z), the strategist need only consider whether or not he believes that T_ACA is stronger than T_LAB: the strength of W needs to be compared with the strength of E_1. If W is considered to be stronger than E_1, according to Proposition 1 the strategist will tentatively reject both objections. That is, he believes that (given all his other assumptions) the absence of a dense charging infrastructure and the large weight of batteries will not prevent Tesla from becoming one of the large players.
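The decision rule of Proposition 1 reduces to a single comparison, which the following sketch makes explicit. The numeric strengths are hypothetical stand-ins for the strategist's subjective ordering.

```python
# Sketch of Proposition 1: the strategist maintains belief in the conjecture
# Z if and only if the theory's weakest premise W is stronger than E_1, the
# weakest premise of the strongest objection. Strength values hypothetical.

def maintain_conjecture(w_strength, objection_weakest_strengths):
    """Return True if belief in Z is kept after integrating counter-theories.

    objection_weakest_strengths: strengths of each E_i; the strongest
    objection's strength E_1 is their maximum (Axiom 2).
    """
    if not objection_weakest_strengths:
        return True            # no objections, nothing to resolve
    e1 = max(objection_weakest_strengths)
    return w_strength > e1

# Tesla illustration: W = T_ACA; the objections' weakest premises are
# T_LAB and T_MA. If W outranks the stronger of the two, both objections
# are (tentatively) rejected.
assert maintain_conjecture(w_strength=4, objection_weakest_strengths=[3, 2])
assert not maintain_conjecture(w_strength=2, objection_weakest_strengths=[3, 2])
```

Note that a tie is not enough: per footnote 11, equal strengths would force both beliefs to be given up, so W must be strictly stronger than E_1.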

| Hidden premises and learning from counter-theories
In this and the following subsection, we assume that the strategist's mental comparison resulted in the conclusion that W is stronger than E_1. Thus, we analyze the case in which all objections were rejected and the strategist continues to believe that the conjecture is true. Recall that by giving up the weakest premises of objections, the strategist also rejects the objections. But even if all objections are rejected, the strategist may learn from counter-theories, and thus his beliefs can change in the process of considering objections. This is because Axiom 2 requires that strategists resolve contradictions in a way that keeps as many beliefs as possible. In particular, the strategist also keeps all beliefs of a counter-theory except its weakest premise, as giving up the weakest premises suffices to resolve the contradictions with the theory. Because the strategist draws all logical consequences from his beliefs, this implies that he will also believe that the weakest premises of the counter-theories must be false. Importantly, the strategist's strategic beliefs can also change. That is, a newly learned belief can become a hidden premise: giving up this belief would force the strategist to also give up his belief in Z. We explain this in more detail as follows.
As the other beliefs from the counter-theories do not lead to any contradictions after the counter-theories' weakest premises have been removed, the beliefs related to the conjecture will include all these other beliefs. In addition, the strategist also learns that the weakest premise of each counter-theory must be false. This is because, according to Axiom 1, the strategist will draw all logical conclusions from his beliefs, which now also include the other beliefs from the counter-theories except the respective weakest premises. Now recall that a counter-theory implies that a premise S_i of the theory is false. For each objection, the fact that it was rejected thus means that the strategist still believes that S_i is true. Therefore, the belief that S_i is true together with the other, newly accepted beliefs from counter-theory i implies that E_i is false (if E_i were hypothetically added back, the strategist would again conclude that S_i is false and the contradiction would reappear). 13

Learning that the weakest premise of a counter-theory is false can provide the strategist with important insights about contingencies that would otherwise have been overlooked. To illustrate, in the Tesla example the strongest objection (see Figure 3) is that T_A is false (T_A corresponds to S_1): customers will not perceive Tesla's high-end electric cars as superior. The weakest premise of the counter-theory is T_LAB: that Tesla Roadster customers find charging inconvenient if there is no dense out-of-home charging infrastructure (which thus corresponds to E_1). If the objection is rejected, T_LAB is given up, whereas the strategist will believe that all other beliefs from the counter-theory are true. For example, the strategist now explicitly believes that convenient charging is necessary for high-end customers to perceive Tesla's cars as equal or superior (which is expressed in T_LC). Because he also continues to believe that T_A is true (as the objection was rejected), he now must also believe that T_LAB is false. The strategist thus concludes that while convenient charging is necessary (T_LC is true), the absence of a dense charging infrastructure in itself does not make charging inconvenient (T_LAB is false). Of course, the strategist does not believe that a charging infrastructure is unnecessary. But he now believes that charging can be made convenient even if there is no dense charging infrastructure (perhaps through at-home charging and a smaller number of public charging stations in selected places).
But there is an additional, subtle consequence of considering but rejecting objections: the belief that E_i is false can (but does not necessarily) become a hidden premise for the conjecture Z. That is, it could become a necessary condition for the conjecture to be true: if it were not true, the conjecture could not be true given the strategist's other beliefs. Formally, the belief that E_i is false will become a premise for Z if the strategist, given his current belief strength ordering, cannot start believing E_i without also giving up the belief in the conjecture Z. The intuition behind this result is based on the following thought experiment: consider that, after having concluded that E_i is false, the strategist obtains evidence that E_i is true after all, which also means that he has to consider it as very strong. This would lead to a contradiction, as the strategist would conclude that S_i is false. So the strategist would need to resolve a contradiction again. Whether in that case the strategist would give up his belief in Z depends on which belief would be given up: according to Axiom 2, weaker beliefs are given up first and only as few beliefs as necessary are given up to restore consistency. If none of the other beliefs from counter-theory i are weaker than S_i, then S_i and, by implication, Z would be given up. If, on the other hand, at least one of the other beliefs from counter-theory i is weaker than S_i, then the weakest of these would be given up and the strategist would continue to believe that S_i is true (and that Z is true) even if he believed that E_i is true. Therefore, whether or not the belief that E_i is false becomes a premise for Z depends on the strength of the other beliefs from the counter-theory relative to S_i. 14

13 In logical terms, this result is a consequence of modus tollens. As E_1 is a premise of the objection, removing this premise but keeping all other beliefs of the counter-theory implies that the strategist believes that E_1 ⇒ ¬S_1. ¬S_1 is, by definition, a belief that contradicts the strategist's theory for the conjecture. But the strategist's theory is fully maintained! Therefore the strategist believes that ¬S_1 is false. From E_1 ⇒ ¬S_1 it then follows that E_1 must be false.

14 More formally, if the strategist considers the belief that E_i is false to be weaker than S_i, he currently believes that E_i is false, but he is more willing to question this belief than the belief in S_i. Technically, this means that E_i ⇒ ¬S_i is weaker than S_i. In such cases, the belief that E_i is false does not become a premise of Z. See the proof of Proposition 2 in the Appendix.
In the Tesla example, this means that the belief that T_LAB is false becomes a premise if all other beliefs from the first counter-theory are stronger than T_A. For example, the belief that T_LAB is false will not become a premise if the strategist considers the belief that customers for Tesla's high-end electric cars have the same criteria for convenience as Tesla Roadster customers to be weaker than T_A. Before explaining in more detail what the implications of learning hidden premises are, we summarize our formal results as follows:

Proposition 2. (hidden premises): Consider a strategist who learned one or more counter-theories and considers W to be stronger than E_1. Then the strategist will believe that E_i is false for all i. If the strategist considers all of the remaining premises of counter-theory i (i.e., all premises except E_i) to be at least as strong as S_i, then the belief "E_i is false" will become a premise for Z.
Proof: See Appendix A.

Learning by mentally integrating counter-theories helps the strategist become clearer about his assumptions even if the objections are rejected, and it can help him decide which experiments to run. If the belief that E_i is false becomes a premise, then it will be a strategic belief. That is, it becomes part of the strategist's exclusive explanation for the conjecture and not simply another belief that is relevant for thinking about the conjecture. As a consequence, given his other beliefs, the strategist will believe that Z is true only if E_i is false.
We illustrate this with the second objection, which states that T_AB & T_AC is false: it cannot be both true that Tesla's high-end model will have sufficient charge for a 450 km ride and acceleration faster than high-end combustion engine models. When this objection is rejected, the strategist believes that T_MA is false: a battery with enough charge for a 450 km ride will not make the vehicle too heavy to accelerate faster than competing ICE-powered cars. The belief that T_MA is false will become a strategic belief if T_MB (the belief that batteries will make Tesla's high-end electric cars heavier than competing ICE-powered cars) is at least as strong as the belief that T_AB & T_AC is true. Because T_MB is strong (large batteries do add a lot of weight to cars), the belief that T_MA is false becomes a strategic belief (a hidden premise): after rejecting the counter-theory, the strategist believes that, given his assumptions, only if T_MA is indeed false is it possible that T_Z will be true. In other words, the strategist now believes that the success of Tesla's strategy critically depends on the ability to make Tesla's high-end vehicles accelerate faster than competing ICE-powered cars despite being heavier due to the weight of batteries. This conclusion is an outcome of the strategist's mental process of thinking about objections to his theory, and as we show next, it may have consequences for the focused experiment the strategist should run.
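The condition of Proposition 2 under which "E_i is false" becomes a hidden premise can be sketched as a simple check. Strength values are hypothetical.

```python
# Sketch of the hidden-premise condition from Proposition 2: after a
# counter-theory i is rejected, "E_i is false" becomes a premise for Z iff
# every remaining premise of the counter-theory is at least as strong as
# S_i, the premise of the theory it contradicted. Strengths hypothetical.

def is_hidden_premise(s_i_strength, other_counter_premise_strengths):
    return all(s >= s_i_strength for s in other_counter_premise_strengths)

# Second Tesla objection: S_i = (T_AB & T_AC); the remaining premise T_MB
# is strong (large batteries clearly add weight), so "T_MA is false"
# becomes a strategic belief.
assert is_hidden_premise(s_i_strength=4, other_counter_premise_strengths=[6])

# By contrast, if some remaining premise is weaker than S_i, that premise
# would be given up instead, and no hidden premise is created.
assert not is_hidden_premise(s_i_strength=4,
                             other_counter_premise_strengths=[3, 6])
```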
To summarize, even if objections are rejected, the strategist can learn from counter-theories in two ways. He can learn that, given his assumptions, the conjecture will become true even if a particular contingency is not met (e.g., charging can be made convenient even if there is no dense charging infrastructure). And he can learn that the conjecture will become true only if a particular contingency is met (e.g., as batteries make electric vehicles heavier, customers will prefer the high-end model only if it can still accelerate faster than ICE-powered high-end models despite the added weight). Thus, learning from counter-theories enables the strategist to change his beliefs about important and even critical contingencies.
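The learning rule behind the second part of Proposition 2 can be sketched in a few lines of code. This is our own simplified, illustrative encoding and not the paper's formal model (the actual definitions are in Appendix A); in particular, representing belief strength as a number is an assumption made only for illustration.

```python
# Simplified, illustrative encoding of the second part of Proposition 2.
# Belief strengths are numbers; a higher number means the strategist is
# more reluctant to give the belief up. (The numeric representation is
# an assumption for illustration; the paper orders beliefs ordinally.)

def neg_E_becomes_premise(counter_theory_strengths, strength_S_i):
    """After a counter-theory is rejected, the belief that its weakest
    premise E_i is false becomes a premise (a strategic belief) for Z
    if every remaining belief of the counter-theory is at least as
    strong as S_i, the contradicted premise of the theory."""
    remaining = sorted(counter_theory_strengths)[1:]  # drop E_i, the weakest
    return all(s >= strength_S_i for s in remaining)

# Tesla illustration: the second objection rests on T_MA (weakest) and
# T_MB. Because T_MB is strong, ¬T_MA becomes a hidden premise.
print(neg_E_becomes_premise([2, 9], strength_S_i=5))  # True
print(neg_E_becomes_premise([2, 4], strength_S_i=5))  # weak T_MB -> False
```

The example strengths (2, 4, 5, 9) are arbitrary values chosen only to show both outcomes of the rule.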

| Counter-theories and focused experimentation
Proposition 1 clarified the conditions under which the strategist has reasons to continue believing in the conjecture when exposed to one or more counter-theories, and Proposition 2 stated what is learned from counter-theories if they are rejected. What is missing, however, are the implications for the initially mentioned attempt to refute one's belief in the conjecture by experiment. Above we defined focused experimentation as the process of testing weakest premises with the purpose of potentially refuting them. As we show next, when a counter-theory has been rejected (according to Proposition 1) and new beliefs have been learned (according to Proposition 2), the premises that should be targeted for focused experimentation can also change.
Recall that when an objection is rejected, the strategist will believe that E_i is false and, according to Proposition 2, this belief will become a premise for Z if the remaining beliefs of counter-theory i (each of the beliefs that composed the counter-theory except for E_i) are at least as strong as S_i (the premise in the theory that was contradicted by the counter-theory). Now there is one case in which the belief that E_i is false will become a target for a focused experiment. This happens when the counter-theory of which E_i was the weakest premise was targeted at a premise S_i of the theory that had W as its weakest premise, and the belief that E_i is false becomes a premise for Z after rejecting the counter-theory (according to the second part of Proposition 2). In that case, the strength of the belief that E_i is false will be the same as the strength of W (if E_i were added again, the strategist would have to give up W, as all other beliefs in counter-theory i are at least as strong as W). That implies that the strategist is as willing to question whether E_i is false as he is willing to question W. Thus, it is a focused experiment to test both E_i and W. This leads to our third formal result:

Proposition 3 (focused experimentation): Consider a strategist with a theory who learned one or more counter-theories and considers W to be stronger than E_1. If a counter-theory i contradicts a premise S_i in the theory that has W as its weakest premise and the belief that E_i is false is a premise of Z, then it is a focused experiment to scrutinize both the belief that E_i is false and W.^15 If no counter-theory contradicts a premise S_i in the theory that has W as its weakest premise, it is a focused experiment to scrutinize W.

Proof: See Appendix A.
Proposition 3 states the condition under which a hidden premise becomes a target for focused experimentation. To illustrate Proposition 3, note that the second objection was targeted at T_A, and T_A had T_ACA as its weakest premise, which is also the weakest premise for T_Z (formally, T_ACA corresponds to W in the Tesla example). As we showed above, after rejecting the second objection the belief that T_MA is false becomes a hidden premise. Therefore, according to Proposition 3, in addition to T_ACA, it will be a focused experiment to test the belief that T_MA is false. In other words, the strategist believes that for Tesla to be successful (for T_Z to become true) it is not enough to develop a battery that allows for a 450 km ride; this battery also needs to be below a critical weight so that an acceleration faster than established ICE-powered high-end cars can be reached. A focused experiment should thus not only focus on battery development but also on studying whether the development of an electric vehicle with such a battery is feasible.^16 The outcome of such an experiment would inform the strategist not only about the feasibility of developing such a car but, because beliefs are logically linked, the strategist would also infer whether customers will perceive Tesla's high-end electric vehicle as equal or superior to competing ICE-powered high-end cars and, ultimately, whether he can maintain his belief in Z.

^15 If there are multiple counter-theories that each contradict a premise T_k in the theory that is premised on W, and all the ¬E_k are premises for Z, then it is a focused experiment to scrutinize ¬E_k1 ∧ ¬E_k2 ∧ … ∧ W.
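The selection rule in Proposition 3 can also be sketched in code. Again, this is our own simplified encoding, not the paper's formal model: each rejected counter-theory records which theory premise S_i it contradicted and whether ¬E_i became a premise for Z (per Proposition 2), and the function returns the beliefs a focused experiment should scrutinize.

```python
# Illustrative sketch of Proposition 3's selection rule (a simplified
# encoding for illustration; see Appendix A for the formal statement).

def focused_experiment_targets(W, premises_resting_on_W, rejected_counter_theories):
    """Return the beliefs a focused experiment should scrutinize:
    always the theory's weakest premise W, plus ¬E_i for every rejected
    counter-theory that contradicted a premise having W as its weakest
    premise and whose ¬E_i became a premise for Z."""
    targets = [W]
    for ct in rejected_counter_theories:
        if ct["contradicts"] in premises_resting_on_W and ct["neg_E_is_premise_for_Z"]:
            targets.append("not " + ct["E"])
    return targets

# Tesla illustration: the second objection contradicted T_A, which has
# T_ACA (= W) as its weakest premise, and ¬T_MA became a hidden premise.
objection_2 = {"contradicts": "T_A", "E": "T_MA", "neg_E_is_premise_for_Z": True}
print(focused_experiment_targets("T_ACA", {"T_A"}, [objection_2]))
# -> ['T_ACA', 'not T_MA']
```

With no rejected counter-theory contradicting a premise that rests on W, the function returns only W, matching the second part of the proposition.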

| ALTERNATIVE THEORIES
Counter-theories can help to test and improve the strategist's theory by raising objections. However, given the nature of uncertainty, there may also realistically be alternative, plausible views about how the conjecture may be reached (by assuming exclusivity we have so far ruled out that the strategist considers such alternatives). To illustrate, our Tesla example was based on the logic that success in the high-end segment would serve as a stepping stone for success in the midsize segment (encapsulated in T_C: customers in the midsize segment will have a preference for Tesla's electric vehicles if demand for Tesla's high-end cars is high, and T_D: that this is sufficient for Tesla to become one of the five largest players in the midsize segment). An alternative point of view for the focal conjecture concerning success in the midsize segment could be based on a logic of disruptive innovation (Christensen, 1997), where electric vehicles would initially be introduced in the low-end segment and then success in the low-end segment would spur sales among customers in the midsize segment.
Recall that a theory is an exclusive explanation that formulates a firm-specific point of view about the conditions under which a focal conjecture will be reached (Felin & Zenger, 2017). Therefore, alternative theories by definition contain elements that contradict the prior theory. In our framework, these viewpoints can be formulated as alternative theories that include a counter-theory and that contain elements of an alternative explanation for why the conjecture may be true. Any alternative theory at least questions the necessity of some assumptions in the original theory. Practically, however, the alternative theory can often be traced to questioning the core mechanism proposed in the original theory. In our example, the original theory is based on the idea of Tesla's initial success in the high-end segment, while the alternative theory sketched above is based on the idea of Tesla's initial success in the low-end segment. Thus, the alternative theory contradicts either T_C or T_D. An argument against T_C is the following (see the top part of Figure 6): The primary buying criterion for customers in the midsize segment is low prices (T_CX), which implies that high demand for Tesla's high-end cars does not necessarily lead to customers in the midsize segment preferring Tesla's electric vehicles (T_CY) and thus implies that T_C is false.^17

On the other hand, the belief that the primary buying criterion for customers in the midsize segment is low prices (T_CX) is also a premise for an alternative explanation based on initial success in the low-end segment (that is, the counter-theory that leads to the contradiction and the alternative explanation share the same premise; see the bottom of Figure 6 for this alternative theory). Specifically, the beliefs T_N and T_O formulate a logic by which an electric vehicle that is cheaper than comparable ICE-powered cars in the low-end segment but has acceptable driving range leads to Tesla becoming one of the leading car manufacturers in the low-end segment. If this is achieved, according to the alternative theory Tesla will become one of the top five players in the midsize segment (T_Z) because, in addition to customers in the midsize segment preferring vehicles with lower prices (T_PA = T_CX), the low-end strategy also enables scale to be built (T_PB), thereby achieving cost leadership in the midsize segment (T_PC).

^16 Note that the first objection also contradicts a premise of the theory that has W as its weakest premise, as W is the weakest premise for T_A. However, we assume that the strategist considers the second strongest premise (T_LAA) in the first objection to be weaker than W. Thus, the first objection does not lead to a change of the focused experiment.

^17 Customers' decision to switch to electric cars could also be determined by a function of price and prestige, which could also be formulated as a premise. For simplicity, we focus on the either/or case in the example.
In our example, there is one key contradiction between the prior theory and the alternative theory, since the objection raised concerning the primary buying criterion for customers in the midsize segment being low prices (T_CX) leads to the belief that T_C is false, whereas T_C is believed to be true in the prior theory. Note that this difference between the two theories has repercussions for other beliefs, for example, what kind of capabilities Tesla needs to generate enough demand in the volume segment to be a top five player. The weakest premise of the objection against T_C is T_CX, as the strategist is sure that T_CY is true (namely, that if customers in the midsize segment primarily care about price, then the prior theory would not work), but is less sure whether customers in the midsize segment really do primarily care about price. Therefore, the alternative theory would be rejected if T_CX were weaker than T_C. We now examine what the strategist would then learn from hearing about the alternative theory.
Because the alternative theory includes an objection in the form of a counter-theory of the kind we analyzed above, as a result of rejecting the alternative theory the strategist will believe that the weakest premise of the objection posed by the alternative theory to the original theory is false. In our example, the strategist now believes that customers in the midsize segment do not primarily switch to electric vehicles out of price considerations. In fact, in the example the belief that T_CX is false would even become a premise (and thus a strategic belief), because the belief that the top-down logic does not work if midsize customers primarily switch to electric vehicles out of price considerations is strong. Thus, the strategist will also need to pay special attention to the purchase criteria of customers in the midsize segment. If, on the other hand, the strategist were to consider T_CX stronger than T_C, he might accept the alternative theory for T_Z. However, he would then need to expose the alternative theory to the same scrutiny and discipline as the ex ante theory. He would need to order the beliefs of the alternative explanation with respect to their strength and consider objections against the alternative theory. For example, the alternative theory ignores competition and assumes that large-scale production of electric vehicles alone will make Tesla the cost leader (T_PC). Once objections and underlying counter-theories have been identified, the strategist could apply our three propositions again, to check whether the alternative theory withstands objections, to learn from them (perhaps identifying hidden premises as newly identified necessary assumptions for becoming the industry cost leader), and to run focused experiments to test weakest premises.
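The accept-or-reject decision just described reduces to a comparison of belief strengths, which can be made explicit in a short sketch. As before, this is our own simplified reading with illustrative numeric strengths, not the paper's formal model.

```python
# Sketch of the decision rule described above (a simplified reading):
# the alternative theory's embedded objection is rejected if its weakest
# premise (here T_CX) is weaker than the contradicted belief (here T_C).
# On rejection the strategist learns ¬T_CX as a strategic belief;
# otherwise the alternative theory must itself be scrutinized in the
# same way as the original theory.

def evaluate_alternative_theory(strength_T_CX, strength_T_C):
    """Return (decision, learned_belief): either keep the prior theory
    and learn that T_CX is false, or move on to scrutinizing the
    alternative theory (ordering its beliefs, raising objections)."""
    if strength_T_CX < strength_T_C:
        return ("keep prior theory", "not T_CX")
    return ("scrutinize alternative theory", None)

print(evaluate_alternative_theory(3, 7))  # ('keep prior theory', 'not T_CX')
print(evaluate_alternative_theory(8, 7))  # ('scrutinize alternative theory', None)
```

The strengths 3, 7, and 8 are arbitrary illustrative values; only their ordering matters for the decision.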

| DISCUSSION AND CONCLUSIONS
In this article, we introduced a framework with which strategists can systematically evaluate and improve the theories behind the envisioned future that is core to their strategy. We derived three propositions that help strategists to systematically learn from objections, to identify hidden premises, and to test the weakest assumptions behind their envisioned future. In the following, we discuss the implications of our framework and results.

| Theory-based learning under uncertainty
A key difference between our framework and other formal approaches to learning under uncertainty (e.g., learning by updating prior beliefs using Bayes' rule) is that in our framework strategists not only learn from observations whether or not they should continue working toward their envisioned future, but they also learn from arguments formulated as counter-theories even if objections are rejected (e.g., by identifying hidden premises). This mental learning process has particular value under uncertainty, when the critical contingencies that will affect outcomes are not yet fully known. As a normative framework, the Bayesian learning approach instructs strategists to use evidence to update the probability of success of an idea or a way to implement it, and to abandon the idea or the implementation path if the posterior probability is below a threshold value (e.g., Camuffo et al., 2020; Kerr, Nanda, & Rhodes-Kropf, 2014). However, from an ex ante perspective, whether or not an implementation path will eventually work and enable a strategist to reach their strategic intent (such as being a top five player in the US car industry using electric instead of ICE-powered engines) depends on many contingencies, and the "true" mapping of currently available evidence to eventual outcomes is unknown. Theories suggest plausible mappings from evidence today to outcomes tomorrow. Using theories thus enables learning, as theories define possibly relevant contingencies (the space of variables) and suggest plausible relations between them (Griffiths & Tenenbaum, 2009). Theories as spaces of variables and plausible relations constitute priors, which are assumed to be given in prior formal work on entrepreneurial learning (Agrawal et al., 2021; Gans et al., 2019). In other words, while these studies assume existing priors, our results identify possibilities for how priors can be improved from an ex ante perspective, namely by attempts to falsify them on a logical basis and by learning from counter-theories and experiments that are focused on weakest premises. Thus, our results show how firms, when faced with uncertainty, can meaningfully learn about the best way to reach their strategic intent and increase the chances of success by forming and revising theories.
Our framework also helps to overcome some conceptual confusion in the prior literature. There, a failure to abandon an idea in the face of contradictory evidence is often taken as an indication of a cognitive bias, for example, due to overoptimism (e.g., Camerer & Lovallo, 1999; Kahneman & Lovallo, 1993). However, if the mapping from evidence today to outcomes tomorrow is not yet known, it is not even clear what contradictory evidence actually is. For instance, from the perspective of 2008 it is impossible to know in a statistical sense whether or not sales of electric cars in the high-end segment are conducive to sales in the midsize segment. So it is, from an ex ante perspective, not clear whether a lack of demand for Tesla's high-end model would indicate that there is no demand for Tesla's electric cars in the volume segment. In our approach, the idea is to try to gather contradictory evidence about weakest premises, because that would be enough to falsify theories. This does not necessarily mean that the conjecture is wrong but is first of all an invitation to continue thinking about the conditions under which the conjecture might nonetheless materialize. Using our framework, a strategist would only conclude that an idea might not be worth pursuing if he cannot find another strong theory for the conjecture (that is, a theory that has a strong weakest premise) that would be stronger than the objections raised against it. Thus, our approach suggests that persistence should not be conflated with overoptimism but should instead be seen as a continued search for a strong theory while remaining open to the possibility that no plausible path toward making the conjecture true can be found.
Finally, our approach closes an important gap in normative theories of entrepreneurial learning. Some assumptions (such as T_C or T_D in our Tesla example) cannot be tested without substantive investments. While prior contributions have suggested that this creates irreducible uncertainty (Gans et al., 2019), our approach suggests an indirect way of testing such assumptions, as premises can be explained by sub-theories with a weakest premise that is testable at low cost. While these sub-theories can, of course, be wrong, they can be improved by raising objections and learning from the counter-theories behind them. Thus, we propose a learning mechanism that, while not eliminating the uncertainty around major investments, at least provides strategists with a refined understanding of why they pursue major investments and the conditions under which they will be successful.

| Linking theory-based learning and experimentation to the strategy process
Our work also contributes to a small but growing literature that argues for a normative approach to strategy formulation, based on identifying processes and frameworks that are prescriptively designed, applied, and taught in order to lead to better strategies and better decisions (e.g., Grandori, 1984; Nickerson & Argyres, 2018; Rindova & Martins, 2021). In this literature, identifying problems and designing and implementing solutions to solve these problems are the most central activities in the strategy process (Nickerson et al., 2012; Nickerson & Zenger, 2004). Our work complements and extends this literature by offering a normative framework for the systematic evaluation of theories (Popper, 1959) by raising objections and testing assumptions through experiments.
Formulating assumptions and testing them through experiments has been advocated by practitioner approaches like the lean startup (Ries, 2011). However, these approaches have been criticized for being myopic and focusing on incremental advances due to a lack of attention to the underlying theories or vision entrepreneurs may have (Felin, Gambardella, Stern, & Zenger, 2020; Thiel, 2014). In our framework, on the other hand, theories and weakest premises provide clear direction for which experiments strategists should perform. More fundamentally, the logical linkages among beliefs allow strategists to make inferences about what the outcome of an experiment means for their other beliefs. Thus, theories relate what is observable today to consequences in the future, which allows for making inferences from early-stage experiments about assumptions that cannot be tested without major investments. This means that our framework mitigates the paradox of entrepreneurship noted by Gans et al. (2019) by enabling strategists to indirectly test assumptions that are central to their strategy but that are at present untestable (such as Tesla's assumption that initial success in the high-end segment opens up a path for electric vehicles in the volume segment).
As a framework for the systematic evaluation of theories, our approach is embedded in a wider strategy process that also includes other activities for finding, formulating, and solving problems that go beyond our framework. Specifically, our mathematical results take conjectures, theories, and counter-theories as inputs. We have suggested that theories can be created by starting from a conjecture that embodies a strategic intent (Prahalad & Hamel, 1994) or a shaping intention (Rindova & Martins, 2021) and identifying the premises as the necessary and jointly sufficient conditions under which the conjecture will materialize. The formulation of a conjecture and an underlying theory, however, is an at least partly creative process that requires imagination (e.g., Wiltbank, Dew, Read, & Sarasvathy, 2006). In this sense, our framework can be seen as a tool for making sure strategists couple imagination with discipline in their reasoning about an imagined future (cf. Weick, 1989). The theory creation process can also be supported by established frameworks like scenario planning (Schoemaker, 1993), Five Forces (Porter, 1980), or hypothesis trees (Davis, Keeling, Schreier, & Williams, 2007), which allow for both identifying potentially relevant issues and going deeper into particular topics and are thus complementary to our framework.
As a tool for disciplined reasoning about a focal conjecture, our framework can be applied whenever a strategist wants to evaluate a given theory. This also means that it can be applied again whenever new objections must be considered, new alternative theories are to be evaluated, or the outcome of an experiment creates new facts to be incorporated into the theory, as long as the strategist starts with a complete theory that is formulated in line with our axioms. When a theory has been falsified (through either an objection that is accepted or an experiment that falsifies a premise), the strategist can use the remaining beliefs as building blocks for formulating an alternative theory for reaching their strategic intent. Importantly, any new or changed theory should also be evaluated by checking whether the theory withstands objections, by performing focused experiments, and by learning additional constraints from counter-theories.

| Further research, limitations and extensions, and conclusions
One avenue for further work is to link our model with game-theoretic models. For example, our model could serve as a microfoundational mechanism for strategic cognition in value capture theory, in particular when strategists differ in their views of a situation and deal with unknown unknowns (Bryan et al., 2021; Cappelli & Chatain, 2021). Here, the different theories of different agents could provide a rationale for why agents envision different possibilities to create value together, or why they envision different sets of competitors. In addition, it appears worthwhile to study learning and revising theories not just from the perspective of an individual strategist but also in the context of a group of strategists who have different and potentially overlapping or conflicting theories. This naturally leads to questions such as the conditions under which a group of strategists will agree on a conjecture, or under which conditions it is possible to persuade others of one's own theory (cf. Kaplan, 2008). In particular, our results can form the basis for studies that examine how to strategically persuade others through arguments about the relations among statements rather than through providing signals or evidence, as in the literature on Bayesian persuasion (e.g., Kamenica & Gentzkow, 2011). Our results should also be useful for understanding how arguments made by influential CEOs in speeches can alter the collective perception of critical contingencies that influence, for example, firm valuation.
Another avenue for further work is to link theory-based learning with statistical learning. Validating or falsifying theories using statistics is of course impossible from an ex ante perspective, as for some relations in a theory data will only be available in the future; but theories can provide useful priors for statistical learning at a later stage (Ehrig & Foss, 2022; Ehrig & Schmidt, 2021; Griffiths & Tenenbaum, 2009). Moreover, strategists can employ a staged approach and use Bayesian learning, for example, to improve products in a test market (Zellweger & Zenger, 2021) or prototypes (Ehrig, Knudsen, & Rauh, 2022). Theories then inform strategists how to make inferences from the learning outcomes to the next stages, such as scaling to global markets or developing a market-ready product. In addition, our framework can be linked to the process of learning about the value of real options (Trigeorgis & Reuer, 2017).
Finally, our work offers opportunities to investigate strategic reasoning skills empirically. We argued that our framework is normative and thus helps strategists who use it to come up with better theories and make better decisions. The aforementioned findings by Camuffo et al. (2020) lend credence to the view that strategists and entrepreneurs fare better when they use a scientific approach to evaluate novel opportunities. However, it is an open question to what extent strategists in fact use such principles. There is anecdotal evidence that some do: Elon Musk famously uses what he calls "thinking from first principles" (Meija, 2018), which is essentially the core idea behind our framework: identifying the premises for one's conjecture and then testing them. Future work could study more systematically how theories are formulated and revised by strategists and entrepreneurs and how doing so is linked with success.

Proof of Proposition 3
If a counter-theory implies that S_i is false while W is a premise for S_i, and the maintained premises of this counter-theory are at least as strong as W, then ¬E_i is a premise for Z (as stated in Proposition 2). Moreover, this premise has equal strength to W (as also stated in Proposition 2). If there is one such counter-theory i, then by the definition of a focused experiment, a focused experiment has to show that both ¬E_i and W are true to confirm the strategist's belief in the conjecture. If there is more than one such counter-theory i, j, a focused experiment has to show that all of ¬E_i, ¬E_j, and W are true. If no counter-theory contradicts a premise S_i in the theory that has W as its weakest premise, then either no counter-theories exist or all counter-theories contradict premises S_j in the theory that are stronger than W. If the corresponding ¬E_j are at least as strong as S_j, they will thus not be weakest premises, as S_j is stronger than W. If the corresponding ¬E_j are weaker than S_j, they will not become premises for Z. Thus, W remains the weakest premise of Z and it is a focused experiment to scrutinize W.

FIGURE 2 Tesla's theory (with examples of sub- and sub-sub-premises)

FIGURE 3 Counter-theory 1: Out-of-home infrastructure

FIGURE 4 Counter-theory 2: Battery weight and acceleration