
The Autonomy Dilemma

[Overview diagram: the problem of action and the problem of joint action. One branch: we don't need shared intention (decision theory, game theory; limits: hi-lo, prisoner's dilemma; team reasoning). Other branch: we do need shared intention (Bratman's planning theory, Pacherie's team reasoning theory, ???).]
Completely coherent for Puppe to have preferences as long as we are consistent enough in living out the fiction.
The team-reasoning aggregate subject is an imaginary agent. It requires the work of the imagination to sustain it.
This is why the autonomy dilemma has such force: who is going to sustain it if its preferences are not mine or yours?
Why should we care about the merely imaginary agent at all?
Of course there are cases where we do (e.g. the Squadron), but these typically involve institutions etc., so they are rare and costly.

Why suppose that team reasoning explains how there could be aggregate subjects?

  • we take* ourselves to be components of an aggregate agent
  • through team reasoning, we ensure that the aggregate agent’s choices maximise the aggregate agent’s expected utility
  • the aggregate agent has preferences (literally)
Team reasoning gets us aggregate subjects, I think. After all, we can explicitly identify as members of a team, explicitly agree team preferences, and explicitly reason about how to maximise expected utility for the team.
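To make the middle step concrete, here is a minimal sketch (my illustration, not part of the talk; the game, payoffs and function names are invented) of team reasoning in a Hi-Lo game: each player identifies the profile of actions that maximises the team's utility, then plays their own part of that profile.

```python
# Minimal sketch of team reasoning in a Hi-Lo game (illustrative payoffs).
from itertools import product

ACTIONS = ["hi", "lo"]

# Team utility: coordinating on "hi" is best, coordinating on "lo" is
# second best, miscoordination is worst (the classic Hi-Lo structure).
TEAM_UTILITY = {
    ("hi", "hi"): 2,
    ("lo", "lo"): 1,
    ("hi", "lo"): 0,
    ("lo", "hi"): 0,
}

def team_reason(my_role: int) -> str:
    """Choose the profile that maximises team utility, then play my part.

    The team-reasoning step: ask "which profile of actions is best for us?",
    not "which action is best for me?", and do my component of the answer.
    """
    best_profile = max(product(ACTIONS, repeat=2), key=TEAM_UTILITY.get)
    return best_profile[my_role]

# Both players, each reasoning as part of the team, converge on "hi"
# without any communication.
print(team_reason(0), team_reason(1))  # -> hi hi
```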
If you have preferences, you satisfy the axioms.
Remember the Ellsberg Paradox: failing to satisfy the axioms does not imply that your preferences are irrational; it implies that you do not have preferences at all.
Using Steele & Stefánsson (2020, §2.3) here.

transitivity

For any A, B, C ∈ S: if A⪯B and B⪯C then A⪯C.

(Steele & Stefánsson, 2020)

completeness

For any A, B ∈ S: either A⪯B or B⪯A.
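As a toy illustration (mine, not from Steele & Stefánsson; the options and encoding are invented), these two axioms can be checked mechanically for a weak-preference relation over a finite set:

```python
# Illustrative check of transitivity and completeness for a weak-preference
# relation ⪯ over a finite set of options, encoded as (A, B) -> "A ⪯ B".
from itertools import product

OPTIONS = ["pub", "wine_bar", "cafe"]

# This example relation comes from a utility ranking, so it passes both checks.
RANK = {"pub": 0, "wine_bar": 1, "cafe": 2}
weakly_preferred = {
    (a, b): RANK[a] <= RANK[b] for a, b in product(OPTIONS, repeat=2)
}

def is_transitive(rel, options):
    # For any A, B, C: if A ⪯ B and B ⪯ C then A ⪯ C.
    return all(
        rel[(a, c)]
        for a, b, c in product(options, repeat=3)
        if rel[(a, b)] and rel[(b, c)]
    )

def is_complete(rel, options):
    # For any A, B: either A ⪯ B or B ⪯ A.
    return all(rel[(a, b)] or rel[(b, a)] for a, b in product(options, repeat=2))

print(is_transitive(weakly_preferred, OPTIONS))  # True
print(is_complete(weakly_preferred, OPTIONS))    # True
```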

continuity

‘Continuity implies that no outcome is so bad that you would not be willing to take some gamble that might result in you ending up with that outcome [...] provided that the chance of the bad outcome is small enough.’

Suppose A⪯B⪯C. Then there is a p∈[0,1] such that: {pA, (1−p)C} ∼ B

(Steele & Stefánsson, 2020)
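Under an expected-utility representation, the p that continuity promises can be computed directly: it solves p·u(A) + (1−p)·u(C) = u(B). A small sketch of mine (the utility numbers are made up):

```python
def continuity_p(u_a: float, u_b: float, u_c: float) -> float:
    """Solve p*u(A) + (1 - p)*u(C) = u(B) for p, given u(A) <= u(B) <= u(C)."""
    assert u_a <= u_b <= u_c and u_a < u_c
    return (u_c - u_b) / (u_c - u_a)

# Made-up utilities: A is bad (0), B middling (7), C best (10).
p = continuity_p(0.0, 7.0, 10.0)
print(p)                         # 0.3
print(p * 0.0 + (1 - p) * 10.0)  # 7.0, i.e. exactly u(B): {pA, (1-p)C} ~ B
```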

independence

Roughly: if you prefer A to B, then you should also prefer a gamble that might yield A to an otherwise identical gamble that might yield B in its place.

Suppose A⪯B. Then for any C, and any p∈[0,1]: {pA, (1−p)C} ⪯ {pB, (1−p)C}

(Steele & Stefánsson, 2020, §2.3)
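Again assuming an expected-utility representation (a sketch of mine, with sampled utilities), independence falls out automatically: mixing both options with the same C at the same probability never reverses the ranking.

```python
# Illustrative check that expected-utility preferences satisfy independence:
# if EU(A) <= EU(B), mixing each with the same C preserves the ranking.
import random

random.seed(0)

def mixed(p: float, u_x: float, u_c: float) -> float:
    # Expected utility of the gamble {p X, (1 - p) C}.
    return p * u_x + (1 - p) * u_c

for _ in range(1000):
    u_a, u_b, u_c = (random.uniform(0, 10) for _ in range(3))
    p = random.random()
    if u_a <= u_b:  # A ⪯ B ...
        # ... implies {pA, (1-p)C} ⪯ {pB, (1-p)C}
        assert mixed(p, u_a, u_c) <= mixed(p, u_b, u_c)

print("independence held in every sampled case")
```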

For an aggregate agent comprising me and you to nonaccidentally satisfy these axioms

- we must coincide on what its preferences are

- we must coincide on when each of us is acting as part of the aggregate (rather than individually)
... and on which part of the aggregate we each are

- whenever one of us unilaterally attempts, by acting, to influence what its preferences are, that attempt must succeed

- ...

This seems to require something like an institutional structure: the head of department just says how things are.
But you will often get people going a bit off-script.

autonomy

‘There is ... nothing inherently inconsistent in the possibility that every member of the group has an individual preference for y over x (say, each prefers wine bars to pubs) while the group acts on an objective that ranks x above y.’

(Sugden, 2000)
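Sugden's possibility can be given a toy numerical shape with a prisoner's-dilemma-style game (payoffs invented for illustration): whatever the other does, each individual prefers to defect, yet an objective the group might act on, total payoff here, ranks mutual cooperation above everything else.

```python
# (my_action, your_action) -> (my_payoff, your_payoff); payoffs invented.
PAYOFFS = {
    ("cooperate", "cooperate"): (2, 2),
    ("cooperate", "defect"):    (0, 3),
    ("defect",    "cooperate"): (3, 0),
    ("defect",    "defect"):    (1, 1),
}

# Individual reasoning: whatever you do, I get more by defecting.
for yours in ("cooperate", "defect"):
    assert PAYOFFS[("defect", yours)][0] > PAYOFFS[("cooperate", yours)][0]

# Team reasoning: rank profiles by the group's objective (total payoff);
# mutual cooperation comes out on top despite each individual's ranking.
team_ranking = sorted(PAYOFFS, key=lambda prof: sum(PAYOFFS[prof]), reverse=True)
print(team_ranking[0])  # ('cooperate', 'cooperate')
```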

dilemma

autonomy -> team reasoning will rarely occur, because nonaccidentally satisfying the axioms requires the kind of coordination just described

no autonomy -> no aggregate subject after all (just self-interested optimisation)

We specified at the start that our theory concerned only games in which it was not possible to make an enforceable agreement in advance of playing.