Seth Lazar

@sethlazar@sigmoid.social

New instance, new introduction. I'm a professor at the Australian National University, and I write about the moral and political philosophy of AI and related digital technologies. On the political philosophy side, I'm especially interested in algorithmic governance (i.e. governance by algorithms) and the distribution of attention. On the moral philosophy side, I'm interested in differences between moral reasons as applied to people and to machines, as well as the ethics of attention.

January 5, 2023 at 3:02:58 PM

Philosophy moves pretty slowly, so most of my published work is on older research programs on war and the ethics of risk. My AI work is mostly in draft or under review. But I keep up to date with interdisciplinary discussions through my involvement in AI ethics conferences (I have chaired AIES and FAccT). And I'm giving the 2023 Tanner Lecture on AI and Human Values at Stanford this month (details here: hai.stanford.edu/events/tanner).

Here's a rough guide to my work on AI and related tech. Start with the two pieces that are already out. First, a paper with Claire Benn on What's Wrong with Automated Influence, which began with my vigorously agreeing with critiques of surveillance capitalism without being quite sure why. We argue that the ready reach for individualist normative frames ('it's your data', etc.) is inadequate, and that a more structural, political-philosophy approach is needed: mintresearch.org/autinf

Then a paper on power in AI: many talk about it, few define it or explain why it matters normatively. This handbook chapter aims to map the terrain and offer a robust theory of power that can help illuminate the moral and political challenges that AI raises: mintresearch.org/power

[more to follow later]

Here's a draft with my PhD student, Jake Stone, on predictive justice: the thesis that the differential epistemic performance of predictive models is itself a wrong, independently of its downstream causal effects (even if the latter ultimately matter more). The paper doesn't just posit the possibility of a criterion of predictive justice; it introduces, motivates, and defends a theory of it, showing how it relates to other normative concepts like doxastic wronging: mintresearch.org/pj

A few more papers that I'm not distributing widely, but am happy to share by email: one with my postdoc Nick Schuster (led by Nick) on the ethics of attention, exploring whether reliance on recommender systems can undermine our development of a morally relevant skill: the judicious allocation of attention.

And there are my behemoth Tanner Lecture essays, which will form (along with the other papers, I expect) the core of the book I'm writing on this subject. The first, 'Governing the Algorithmic City', introduces algorithmic intermediaries as a target of inquiry for political philosophy, then illustrates why justifying algorithmic governance will require a rethink of our standard approaches.
