Hypothesis: LLMs are going to make the distinction between "distant" and "close" reading pretty much moot in 5 yrs, by a) reducing barriers to entry for large-scale analysis and b) making it possible to do fun and revealing computational things at a much smaller scale, even with slippery questions about character and plot. @dh
@TedUnderwood @dh This is not a testable hypothesis; it's a prediction about future events that are only going to happen once.
Still, they just aren't. Models are tools for communicating. LLMs are horrible at that, because they tell you basically nothing useful about the language sample you've applied them to.
I've been disagreeing with your takes on these things for years now and haven't changed your mind, Ted, so I don't expect to now, but I want to be on record as disagreeing.
@TedUnderwood @andrewpiper @dh
LLMs are notoriously unreliable on a factual level, so I doubt you'll get consistent results. "Interesting" is notoriously subjective, so I'm not going to pull a Chomsky on you!