  Sikstrom notes on Carey's Ethnographic Theories of Mistrust

    Trustworthy AI: There is a lot of public debate about ‘trustworthy’ AI. But what exactly does this mean? Dan and I once debated this. He argued (I’m paraphrasing): “‘Trust’ is a misnomer, because trust is an interpersonal human dynamic, so you can’t ‘trust’ tools. Tools are either reliable or not reliable. You don’t trust your alarm clock; you rely on it.” I think there’s room to ask two questions here: Is ‘trust’ in AI a useful framework? (The philosopher and pragmatist Rorty would argue that we should roll with the language being used.) And what is it about AI that prompts us to think about and ask questions about ‘trust’, in ways we might not for other tools like laptops or stethoscopes?

    Theories of Trust/Mistrust

    To answer some of these questions I turned to Carey’s “Mistrust: An Ethnographic Theory.” In it he argues the following:

    • Trust is everywhere and lauded as good; trust must be maximized; trust is a socio-technical intervention.

    • Mistrust/distrust undoes the good work done by trust. It is “social acid” (p. 2).
      • The result of a lack of trust = lack of social cohesion, violence, etc.
      • It locks people/societies into a vicious cycle of backwardness and underdevelopment (e.g., antivaxxers framed as irrational and backward).
    • Everyday practices of mistrust include:
      • lying and dissembling;
      • obfuscation;
      • curvy speech;
      • relentless gossiping;
      • frequent accusations of betrayal;
      • workarounds.
    • Trust is framed as a moral virtue (the ability to trust others is a sign of a virtuous person) [Bok].
    • Trust is “a hypothesis regarding future behaviour, a hypothesis certain enough to serve as a basis for practical conduct” [Simmel].
    • People/eras/societies variously decide what admixture of knowledge/ignorance suffices to generate trust [Simmel]. [We can think of this as epistemological practices and virtues - or how we come to value specific kinds of knowledge about people and things, and how that might inform our feelings about AI tools.]
    • “In other words the morphology of the trust hypothesis shapes and produces particular social forms” (Carey, p. 3).
    • MISTRUST is an alternative hypothesis - not the photographic negative of trust or the mere absence of trust, but a state of being that generates interesting constructs in their own right.

    POINT ONE: Two approaches

    • Trust as psychological state: dispositional approaches (general assumptions about the trustworthiness of others) vs. interpersonal approaches (trust is a function of a particular relationship).
    • Trust as strategy (game theory, cooperation, the prisoner’s dilemma): the decision to place trust in someone is a deliberate/conscious strategy that can be used to maximize success, however defined. [Think about the decision to lend money in Malawi as a strategy to protect oneself in times of trouble.]
    • Trust in people, social trust in unknown others, and trust in the system (e.g., the legal system).

    POINT TWO: Trust is not just about choice, but a way of seeing the world:

    • Familiarity (the basis for simplification/shortcut decision-making - e.g., you trust your doctor, so you follow their advice rather than researching options yourself).
    • Temporal collapse (social actors are confronted with infinitely ramifying futures). Trust SIMPLIFIES decision-making [I think of this as the leap of faith].
    • “To show trust is to anticipate the future” (p. 6).
    • Trust depends on confidence in one’s expectations, and such expectations cannot emerge ex nihilo; they depend on a degree of familiarity with people, the world, or systemic representations of the real [dashboards with big-data analytics can serve as such representations].
    • At the interpersonal level: attributions of personalities to people.
    • In other words, we must believe people or entities have durable personalities or characters, and that we can understand them. [Where I agree with Carey: when you grow up and live in a more spontaneous/unpredictable world, it is much harder to feel confident in predicting the future. “Chiuta pera” (only God knows) is a common expression in Malawi.]

    POINT THREE: TRUST is used to control others

    • Trust is “a policy for handling the freedom of other human agents or agencies” (Dunn in Carey, p. 6). [A sub-question here for the grant: to what extent are clinicians expected to follow AI-enabled recommendations? Is this a way of standardizing idiosyncratic practices, or more a case of ‘use this info if it’s helpful’?]
    • Trust ALWAYS involves an element of risk, a redistribution of control, and dependency.
    • “In an increasingly disembodied and dislocated world, in which traditional forms of social control no longer apply and risk is the order of the day, trust becomes the central social technology.” It is an absolute and unforgiving social technology: it requires compliance from those we trust, lest it be lost forever. [Laura is thinking about trust as a gift - and gifts are dangerous because you are required to reciprocate - Mauss.]

    The Mistrust Hypothesis:

    • Mistrust as a strategy [to protect oneself and one’s interests] and as a disposition.
    • The grounds for mistrust have been explored; the disposition of mistrust less so.
    • Distrust = specific past experience.
    • Mistrust = a general sense of unreliability of a person or thing.
    • Attitude of mistrust:
      • Proximity or familiarity does not = knowability or certainty. This is KEY: increased knowledge or exposure does not result in trust [they are not a holy trinity - I like this point because it questions the assumption that increased statistical literacy will equal trust, and it’s also testable to some extent].

    Ethnographic Evidence (mostly from the Atlas Mountains):

    • The stranger as king [cf. the Malawian proverb “the visitor kills the snake”] - there are other dimensions that generate trust that are not related to familiarity at all. One example Carey gives is that close friends are just as likely to be mistrusted as strangers [in fact, in Malawi your closest family and friends are the ones most likely to betray you].
    • People are unknowable: their inner worlds - intentions, motivations, character - are inscrutable [this resonates with me so fully in relation to how Malawians understand people]. This view “refuses psychological reductionism and embraces social and interactional complexity, rather than simplifying them for functional reasons” (p. 10). [SO ARE ML MODELS - ultimately they are black boxes.]
    • People are free/uncontrollable. [This is harder for me - but there is a kind of feeling that you can’t assert/expect things of others. This might tie into the fact that ML models are unlikely to be things clinicians will be required to follow - human judgement will always supersede machine decision making. This is less likely to hold the further you are from the centre of power, I think.]
    • The social construction of friendship: friendships are “not predicated on a progressive unveiling of one’s self to the other, nor are they built on foundations of trust.”
    • Trust and mistrust are not mutually exclusive approaches; they are constitutive of one another, and each implies its shadow. They are intertwined/mutually reinforcing. Trust is not binary but a continuum.
      • A more tolerant and flexible form of friendship
    • Mistrust can’t be upheld as subjectively pleasant, but the point is that it is not necessarily corrosive; it may at times be enviable and worth sustained interest [which leads us to the question: can we design AI-enabled tools that provoke productive forms of mistrust?].