
From Team Reasoning to Shared Intention

[diagram: problem of action / problem of joint action; 'we don't need shared intention' (decision theory, game theory; limits: hi-lo, prisoner's dilemma; team reasoning) vs 'we do need shared intention' (Bratman's planning theory, Pacherie's team reasoning theory, ???)]
[diagram: individual reasoning → decision → intention; team reasoning → decision → shared intention]

‘somebody team reasons if she works out the best possible feasible combination of actions for all the members of her team, then does her part in it.’

(Bacharach, 2006, p. 121)
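To make the contrast with individual reasoning concrete, here is a minimal sketch (my illustration, not part of Bacharach's text) of best-response reasoning versus team reasoning in a Hi-Lo game; the payoff numbers and function names are assumptions chosen for illustration only.

```python
# Illustrative sketch only: Bacharach-style team reasoning contrasted with
# individual best-response reasoning in a Hi-Lo game. The payoff numbers
# are assumptions for illustration, not from Bacharach (2006).

ACTIONS = ['hi', 'lo']

def payoff(a1, a2):
    """Common payoff each player receives: 2 if both play 'hi',
    1 if both play 'lo', 0 if they mismatch."""
    if a1 == a2:
        return 2 if a1 == 'hi' else 1
    return 0

def individual_best_responses(other_action):
    """Best-response reasoning: given a belief about what the other
    player will do, which of my actions maximise my own payoff?"""
    best = max(payoff(a, other_action) for a in ACTIONS)
    return [a for a in ACTIONS if payoff(a, other_action) == best]

def team_reason(player_index):
    """Team reasoning: work out the best feasible combination of actions
    for the team, then return this player's part in it."""
    profiles = [(a1, a2) for a1 in ACTIONS for a2 in ACTIONS]
    best_profile = max(profiles, key=lambda p: payoff(*p))
    return best_profile[player_index]

# Best-response reasoning leaves my choice hanging on what I believe
# the other will do (both (hi, hi) and (lo, lo) are equilibria):
print(individual_best_responses('hi'))  # ['hi']
print(individual_best_responses('lo'))  # ['lo']

# Team reasoning selects (hi, hi) outright, and each does their part:
print(team_reason(0), team_reason(1))   # hi hi
```

The point of the sketch is only that team reasoning evaluates combinations of actions rather than individual actions; it is not a model of how real agents come to identify with a team.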

‘The key difference between [individual and shared intentions]
is not a property of the intentions themselves,
but of the modes of reasoning by which they are formed.

Thus, an analysis which starts with the intention
has already missed
what is distinctively collective about it.’

(Gold & Sugden, 2007)

Team reasoning is not supposed to require shared agency; we can therefore attempt to use it in giving an account of shared agency. This has been attempted several times. Gold and Sugden offer a view (which I find difficult to interpret); but perhaps the best developed view, and certainly the most recent, is due to Elisabeth Pacherie. I will focus on her view.

two accounts involving team reasoning

Gold & Sugden (2007) --- not covered here

Pacherie (2013)
Objectives first.
This is very important: Pacherie’s objectives are not the same as Bratman’s (nor Gilbert’s), and Pacherie is very clear about them.
Useful illustration of the method: state your objectives so that **you** set the criteria for success (rather than leaving this to your reader).

Pacherie’s ‘shared intention lite’

‘I am skeptical that all intentional joint actions require the sophistication in ascribing propositional attitudes that Bratman’s account appears to demand.’

‘a modest or ‘lite’ notion of shared intention, less cognitively demanding than what the analyses proposed by leading philosophical accounts suggest and constituting a plausible basis from which more sophisticated forms of shared intentions can [...] emerge’

(Pacherie, 2013, p. 2)

importance: not aiming to tell the whole story

‘philosophers who appeal to shared intentions are trying to capture a stronger notion of intentional joint action.

For a joint activity to be a joint intentional action in this strong sense, the individuals who engage in this activity must think of its goal

not just as bringing about outcome O, but

as bringing about outcome O together.’

(Pacherie, 2013, p. 5)

‘Two persons P1 and P2 share an intention to A, if:

(i) each has a self-conception as a member of the team T, consisting of P1 and P2 (collective self-framing);

(i’) each believes (i) (group identification expectation);

What is a team?
Pacherie follows Bacharach here.
Psychological group identification. Not a matter of rational control, nor even of choice. It is involuntary.
‘In identifying as a member of a group, an agent conceives this group as a unit of agency acting in pursuit of some group-goal.’ (Pacherie, 2013, p. 16)

(ii) each reasons that A is the best choice of action for the team (team reasoning from a group viewpoint); and

(iii) each therefore intends to do his part of A (team reasoning from an individual viewpoint).’

(Pacherie, 2013)

see also Gold & Sugden (2007); Pacherie (2011)

Step 2b: Pacherie on ‘Shared intention lite’ (best account linking shared intention to team reasoning)
Objections: (a) very limited model (decision theory and team reasoning); (b) requires frames; (c) counterexamples to Pacherie (too ‘lite’)? (d) Not ‘lite’ enough (depending on what intentions are), (e) Still fails the requirements on inferential and normative integration of shared intentions with intentions.
*TODO: Does team reasoning or Pacherie’s account meet Searle’s constraint: ‘The notion of a [shared intention] ... implies the notion of cooperation’ Searle (1990, p. 95)? Yes, beautifully
Does team reasoning meet Bratman’s constraints? 1. Shared intentions are inferentially integrated with ordinary, individual intentions. 2. Shared intentions are normatively integrated with ordinary, individual intentions (e.g. aggregation).
Consider this case. You and I each meet conditions (i)-(iii). However, we are each doubtful that the other meets condition (i); this appears to be possible because there is no explicit knowledge requirement. Despite these doubts, we also meet (iii), for we each intend to do our parts in A, in part because we recognise that there is still a small chance that the other meets (i) and in part because we each think doing our parts in A will have positive side effects (perhaps we each think this will result in our being viewed as good team players by some observers). So here it seems that we meet conditions (i)-(iii) and have a shared intention, although if asked whether we have a shared intention we could each reasonably deny that we do (because we each doubt that the other meets condition (i)). I suspect that this case is either not possible or not a problem; but it may illustrate why further elaboration of the account could be helpful at this point.
This deals with the potential objection but now the same objection applies with respect to the other conditions.
But check what Pacherie herself says:
‘(i’) [...] isn’t independent from condition (i), but rather constitutes a default assumption given the nature of the psychological processes that are supposed to automatically induce group-identification. In other words, had ‘non-group’ cues be present and salient enough, the [...]’ (Pacherie, 2013, p. 19)
Following the same strategy will result in a not very lite view.
Invoking team reasoning here also seems unnecessarily restrictive. Like game theory, team reasoning is quite limited in its explanatory ambitions; it can make no sense of agents with long-term plans, values and emotions. So although it might be interesting as a formal model with limited scope, we can hardly invoke it in a theory of shared intention if our aim is to understand how to distinguish genuine joint actions from parallel but merely individual actions. Why? Simply because too many of the genuine joint actions will not involve team reasoning.
[diagram: individual reasoning → decision → intention; team reasoning → decision → shared intention]

[diagram: which types of subject can have intentions? individual only / also plural; how is there shared intention? reductive / aggregate]
We were asking: we have seen aggregate actions, and even aggregate preferences, but how do we get from here to aggregate *intentions*?
Now we have solved that problem.
[diagram: problem of action / problem of joint action; 'we don't need shared intention' (decision theory, game theory; limits: hi-lo, prisoner's dilemma; team reasoning) vs 'we do need shared intention' (Bratman's planning theory, Pacherie's team reasoning theory, ???)]
Repeats the diagram above in words

What distinguishes joint action from parallel but merely individual action?

Joint action involves shared intention.

What is shared intention?

Bratman’s planning theory

vs

Pacherie’s team reasoning theory

Which, if either, is correct?

Evaluation comes later. First, the descriptive task: check our understanding.

first step: descriptive compare and contrast

Pacherie

shared intentions are consequences of team reasoning

Bratman

shared intentions are for coordinating planning (&c)

Are these claims compatible? E.g. could we somehow unite the two theories? I think this would be hard, since things that meet Pacherie’s conditions for shared intention would not enable coordination of planning (neither team reasoning nor decision theory more generally is concerned with planning).

Pacherie vs Bratman:

two modes of reasoning vs inferential integration

requires team preferences vs requires only individual preferences

requires common knowledge of rationality (or ?) vs requires common knowledge of intentions (or ?)

Pacherie

shared intentions are consequences of team reasoning

Questions

Is team reasoning a feature of every joint action?

And if not, what about joint actions that do not involve team reasoning?

two modes of reasoning

Are intentions inferentially integrated with shared intentions?

requires team preferences

When, if ever, do teams have preferences?

requires common knowledge of rationality
(or ?)

Who has common knowledge of rationality?

‘Nothing in this account of team agency purports to tell people when they ought---whether morally or rationally---to act as members of teams.’

(Sugden, 2000, p. 195)


What are team preferences?
problem 1: teams have to satisfy the axioms
problem 2: individuals have to have the same team preferences!
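Problem 1 is terse, so to spell it out: presumably ‘the axioms’ are the ordering axioms that decision theory imposes on any preference relation. Stated as a gloss of my own, with $\succsim_T$ standing for a putative team preference over outcomes $O$, the minimal requirements would be something like:

```latex
% Minimal ordering axioms for a putative team preference relation \succsim_T
% over outcomes O (my gloss of 'the axioms'; expected utility theory adds
% further axioms such as continuity and independence).
\[
\text{Completeness:}\quad \forall o_1, o_2 \in O:\ o_1 \succsim_T o_2 \ \text{or}\ o_2 \succsim_T o_1
\]
\[
\text{Transitivity:}\quad \forall o_1, o_2, o_3 \in O:\ \big(o_1 \succsim_T o_2 \ \text{and}\ o_2 \succsim_T o_3\big) \Rightarrow o_1 \succsim_T o_3
\]
```

Presumably the worry behind problem 1 is whether a team, as opposed to an individual, is the kind of thing whose choices can be expected to satisfy these.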

An individual who engages in team-directed reasoning appraises alternative arrays of actions by members of the team in relation to team-directed preferences

(Sugden, 2000)

‘At the level of the team members, a team preference is a team- directed preference which is common to all those members, and which governs the team-directed reasoning of each of them.’

(Sugden, 2000)

‘At the level of the team, team preference is a ranking of outcomes which is revealed in the team's decisions.’

(Sugden, 2000)

For the team to have team-directed preferences, we require that the individual members’ team-directed preferences match. How could this come about?
I think there are broadly two possibilities. First, we might explicitly decide on team preferences. Second, the situation we’re in might render discussion unnecessary (as with the Hi-Lo game we encountered earlier).
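To illustrate the second possibility, here is a minimal sketch (my illustration, reusing the Hi-Lo payoffs assumed earlier) of how two members’ team-directed preferences can match without any discussion: each ranks action profiles by the common team payoff, so they inevitably arrive at the same ranking.

```python
# Illustrative sketch: in Hi-Lo, each member's team-directed ranking of
# action profiles is fixed by the common team payoff, so the rankings
# match without explicit agreement. Payoffs assumed as before.

ACTIONS = ['hi', 'lo']

def team_payoff(a1, a2):
    # Hi-Lo: 2 for (hi, hi), 1 for (lo, lo), 0 for a mismatch.
    if a1 == a2:
        return 2 if a1 == 'hi' else 1
    return 0

def team_directed_ranking(member):
    """The ranking of action profiles a member arrives at when appraising
    them from the team's point of view. In Hi-Lo it does not depend on
    which member does the appraising."""
    profiles = [(a1, a2) for a1 in ACTIONS for a2 in ACTIONS]
    return sorted(profiles, key=lambda p: team_payoff(*p), reverse=True)

# The members' team-directed preferences coincide, so (in Sugden's terms)
# there is a team preference here although nothing was ever discussed:
assert team_directed_ranking('P1') == team_directed_ranking('P2')
print(team_directed_ranking('P1')[0])  # ('hi', 'hi') is top-ranked
```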


These are the questions you would want to answer if you were going to pursue team reasoning.

1. What is team reasoning?

2. How might team reasoning be used in constructing a theory of shared agency?

background: aggregate agents

So answering the questions might enable us to decide between the accounts.

What distinguishes joint action from parallel but merely individual action?

Joint action involves shared intention.

What is shared intention?

Bratman’s planning theory

vs

Pacherie’s team reasoning theory

Which, if either, is correct?

[diagram: problem of action / problem of joint action; 'we don't need shared intention' (decision theory, game theory; limits: hi-lo, prisoner's dilemma; team reasoning) vs 'we do need shared intention' (Bratman's planning theory, Pacherie's team reasoning theory, ???)]