Don’t write for LLMs, just record everything

Some people have argued the advent of LLMs has dramatically increased the value of having a public writing footprint. The first reason given is that this might help secure a meaningful form of immortality. The second reason given is that this might make future LLMs trained on public writing corpora more useful to you, personally, in mundane ways[1].

I think the first one doesn't check out, and the second is possible but a long shot - but you can get a lot of the anticipated benefits of the second by dropping the "public" bit and doing something a little unorthodox.

Contra Immortality

I don't know if gwern believes in this specific story: two years ago, he wrote a comment which contained the sentence:

This seems like a bad move to me on net: you are erasing yourself (facts, values, preferences, goals, identity) from the future, by which I mean, LLMs.

That sounds a bit like the immortality/value-propagation argument! But that paragraph ends with:

For the trifling cost of some writing, all the world's LLM providers are competing to make their LLMs ever more like, and useful to, me.

That seems like a "mundane utility" consideration, so who can say, really. Vishal's blog post, however, was quite explicit:

I came to Inkhaven with the intention of working on a sequence which would argue that people should write more on the Internet in order to increase their chance of living forever.

Condition on taking ASI "seriously" - i.e. that it seems pretty likely to be built in the next few decades (absent intervention). If you believe that, there are very few coherent worlds where the words you wrote on the internet matter via the causal pathway described above, for either you or your values being preserved into the far future.

The only situation that I can think of where this matters at all is the one where we do manage to build an aligned ASI (and manage to avoid building an unaligned ASI in the meantime), but this happens after your death, and you haven't chosen to take advantage of one of the available methods of brain-state preservation[2].

Otherwise, if we build an aligned ASI, we will shortly thereafter live in a world where people stop involuntarily dying, and so the entire scheme is pointless.

If we build an unaligned ASI, the best you can hope for is being sold to aliens[3].

I won't try to write a detailed rebuttal of stories about how a mostly-unaligned ASI might still create more Value_RobertM in the future than it otherwise would have, if more of my writing were in the training corpora of whatever system executes a takeover - I don't remember anybody making that exact argument. Instead I'll gesture at a couple of reasons for pessimism:

  • The system that ends up executing a takeover might not end up trained on a very large fraction of the available human writing.
    • Especially if we have a period where AIs do most of the AI R&D before that takeover.
  • Value is complex. Even if you manage to get[4] 1 bit of your preferences into the 10,000 bits that make up the preferences of the system executing the takeover, that does not thereby cause anything near 0.01% of a maximally Valuable_you use of the lightcone.

I think most stories that include this as a meaningful component of things going well, either for you personally or for humans in general, are smuggling in assumptions like "ASI is fake", or are otherwise failing to grapple with what seem like extremely tough philosophical challenges.

Contra Mundane Utility from Pretraining

Immortality seems unlikely, but what about making future LLMs more useful to you in boring ways? There are real upsides here, but I'm skeptical of how practical it is to realize, how much of the value you can capture, and how necessary some of the details are.

patio11, describing one relevant dimension:

The models are getting really good at Patio11-As-A-Service.

(I get emails fairly frequently which either request ombudsman-like intercession or advice regarding interacting with the financial industry as a consumer. Sometimes to check SOTA I feed facts in them to models.)

[many tweets]

What a wonderful time to be alive, where the reward for a hard day’s work is speaking a new spell into the universe. Increasingly not in the evocatively illustrative sense either but “No, humanity at large and potentially for all time gets to just clone stamp that spell out.”

The upside of your writing being used in pretraining specifically is mostly not in embedding your worldview, heuristics, best practices, etc., into the AI's world model. It is in upweighting its tendency to leverage those utility-laden aspects of your writing by default. Take Joe Random, who needs help writing a letter to his bank, but has never heard of patio11. What good does it do him for ChatGPT to have been trained on patio11's writings, if this does not cause ChatGPT to use the skills it learned thereby[5]?

Correspondingly, the best time to start writing on the internet was thirty years ago, not yesterday. I have little concrete evidence, just intuition from personal use, experimentation, and trying to reason about the inductive biases of modern systems, but I think you are in a pretty rough spot if you are trying to build up a portfolio of such public writing starting from scratch for this purpose. You're behind in word count, in developing a unique voice, in figuring out what useful, true[6] things should be "your beat". You might also be fighting against stricter scrutiny[7] applied to writing published on the internet post-ChatGPT. (Also, maybe timelines are short; let's set that aside for now.)

But there is another way to "[speak] a new spell into the universe"! You just need to write the artifact that contains the information necessary to turn it into a repeatable process, and then give it to an already-trained LLM. All of the above are reasons to be skeptical of the value of getting your writing included in the training data, not reasons to be skeptical of the value of writing in general. The spell might be a little bit rougher and less polished than it would have been if the LLM casting it had been trained on millions of words of your writing, but for many purposes it will be good enough. If your writing is good and you publish it, it is even reasonably likely that it will come up in searches conducted by the LLM when gathering information on the subject.

And yet I think that limiting yourself to "writing" is thinking too small.

Pro Panopticon

Have you ever finished a 30-minute conversation with a coworker about some kind of gnarly problem, gotten back to your desk, and felt the details of that conversation escape you in real-time?

What about the experience of trying to remember who you'd told about [subject] a few days ago? You have so many different messaging apps: Slack, Discord, FB Messenger, Signal, WhatsApp, Telegram, Twitter... trawling through all of them would be a headache. Some of them don't even have a search feature functional for this purpose.

How about the one where you've just finished [repetitive process] for the fourth time this week, and you know you could probably automate most of it if you sat down with Claude Code for an hour and explained the entire process, but you have so many other things to do?

Did you send any messages shaped like "Yep, I'm on it" or "I'll have that done by [date]" today?

I have good news for you: all of those are situations where current LLMs are already capable of delivering significant value, given the necessary data. All you have to do is record everything you do on or around your computer[8]. The best time to start was thirty years ago, the second-best time is today.
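To make the shape of this concrete, here is a toy sketch of the append-only log such a tool might keep. The actual capture hooks (screenshot grabbers, keystroke listeners, transcript pipelines) are elided; the filename, event schema, and `grep` helper are all made up for illustration, not a description of any existing product.

```python
import json
import time
from pathlib import Path

LOG = Path("activity_log.jsonl")  # hypothetical location for the log
LOG.unlink(missing_ok=True)       # start fresh for this demo

def record(kind: str, payload: dict) -> None:
    """Append one timestamped event as a single JSON line."""
    event = {"ts": time.time(), "kind": kind, **payload}
    with LOG.open("a") as f:
        f.write(json.dumps(event) + "\n")

def grep(term: str) -> list[dict]:
    """Naive retrieval: return every event whose JSON contains the term."""
    return [
        json.loads(line)
        for line in LOG.read_text().splitlines()
        if term.lower() in line.lower()
    ]

# In a real system these events would come from capture hooks, not by hand.
record("message", {"app": "Slack", "to": "alice",
                   "text": "I'll have that done by Friday"})
record("message", {"app": "Signal", "to": "bob", "text": "Yep, I'm on it"})

hits = grep("friday")  # "who did I promise what, and when?"
```

The append-only JSON Lines shape is the load-bearing choice here: it's trivial to write from many capture sources at once, and trivial for an LLM (or a dumb keyword search, as above) to slurp back up later.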

As someone who grew up reading Reason magazine[9], I find it somewhat disconcerting to suggest that people might want to install surveillance software on their own computers. There are some obvious risks[10], though I think that most of them are either manageable or overstated.

Some software for doing this kind of thing already exists, though I don't have any experience with anything publicly available. Lightcone is building an internal prototype which already seems pretty useful. Sadly, the audio recording, transcription, and speaker diarization and assignment pipeline is the part that seems most useful, and also the part that's hardest to take home with you. Still, I'd be surprised if I didn't end up wringing some real value out of logging all of my keystrokes and having frequent screenshots of my screen taken in the next few months. This is hardly a full description of the feature-set; there are a bunch of fiddly details[11]. The point is: anytime you think, "Oh, that could be useful, but only if it also captures [x]", software is now cheap(er) to build and extend, and storage is relatively abundant. I think literal 100%-on video capture of your device screens might be non-trivially expensive to store at scale if you want it to remain accessible for querying in real-time, but ~everything else should be within reach of a typical consumer software subscription.
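As a rough sanity check on that storage claim, here's some back-of-envelope arithmetic. Every input (screen hours per day, screenshot size, video bitrate, text volume) is an assumption I made up for illustration, not a measurement:

```python
# Back-of-envelope storage for "record everything"; all inputs are guesses.
SECONDS_PER_DAY = 8 * 3600  # assume an 8-hour screen day

# Option 1: a compressed screenshot (~200 KB) every 5 seconds.
screenshot_gb_per_year = (SECONDS_PER_DAY / 5) * 200e3 * 365 / 1e9  # ~420 GB

# Option 2: continuous screen video at a modest ~1 Mbit/s.
video_gb_per_year = SECONDS_PER_DAY * (1e6 / 8) * 365 / 1e9  # ~1.3 TB

# Option 3: keystrokes plus audio transcripts, generously ~5 MB/day of text.
text_gb_per_year = 5e6 * 365 / 1e9  # under 2 GB
```

Under these assumptions, a year of always-on video is terabyte-scale (and querying it in real time is its own problem), while text-shaped capture is a rounding error - which is the sense in which "~everything else should be within reach".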

There are many reasons why this might be impractical to implement. Maybe you work at an organization that has opinions about what software you install on your computer. Maybe you're in the business of securities fraud and want to avoid ending up in Matt Levine's newsletter. Maybe you live in a repressive authoritarian regime.

But a couple of weeks ago, I spent an hour discussing some technical considerations about this piece of software with Habryka. We then tossed the annotated transcript into whatever coding harness he was using and asked it to derive the plan from that, which was vastly faster and more accurate than typing it all up by hand would have been.

The future is here! Move fast! Don't break too many things!

  1. ^

    And also more useful to others. Examples from patio11: one, two.

  2. ^

    Or you have, but those turn out to suffer from enough information loss to be insufficient for uniquely identifying "you" in ways that are compensated for by your public writing, which is more likely to be preserved into the future than your non-public writing. This particular combination strikes me as implausible but I haven't thought about it very hard.

  3. ^

    And maybe this is bad, actually.

  4. ^

    Numbers made up for the sake of argument.

  5. ^

    In preference to worse versions of those "skills" it might have learned from lower-quality sources of training data.

  6. ^

    gwern:

    Nonfiction: I strongly suspect that LLMs are swayed much less, per token, by fiction than they are nonfiction.

  7. ^

    Either by the labs filtering the training data, or by the models themselves, if the data they're trained on is annotated with a publication date.

  8. ^

    I think the argument does extend to the simpler "record everything", but the social and legal affordances aren't there yet.

  9. ^

    There's no contradiction in principle, but many libertarians are quite privacy-conscious and wouldn't want to voluntarily install an additional attack surface that could be exploited by the government, either overtly or covertly.

  10. ^

    One risk is that LLMs may very soon enable scaled cyber-offense campaigns against entire populations of individual targets - in this world, you're a slightly juicier target than everyone else. (In the current world, the risk might actually be worse: the relative reward of compromising a centralized "data-capture" service is much higher, since it's impractical to just smash-and-grab every single individual's personal data independently.) There are some more boring risks, like "oops, I accidentally recorded evidence of myself committing a crime which the government now has access to", which is mostly a problem if 1) you're a criminal, or 2) things get much worse in terms of government persecution of individuals. That doesn't seem likely enough to me to be a huge downside, but ymmv.

  11. ^

    What to exclude, for example.


