
On language modelling and pirate AI (transcript)

Sun 11 Sep 2022 by mskala

I've been thinking a lot recently about the current developments in deep neural network generative models, and how they implicate free software issues. I went into a lot of this stuff in my July 25 Twitch stream, and although I'd also like to write up some of my thoughts in a more organized way, the transcript of the relevant video content is a pretty decent introduction in itself, so I thought I'd post it here.

This is an automatically generated transcript (by another of these models!), with a fair bit of manual editing - so it's spoken English written down, not the usual standard form of written English. The video itself will go online eventually, but it may be a long time before that happens, because my video archive server is at capacity now and I probably won't expand it until the Twitch stream gets enough viewers I can sell some subscriptions to pay the bills.

Transcript starts here.

So, we've got these language models, and they're basically the new hotness that everybody is talking about; it's model this and GPT that and TPU the other and so on. And there was this situation where somebody trained, or technically they fine-tuned, a language model on an archive of text out of 4chan. Specifically the /pol/, the board I guess they call it, right? So /pol/ is one of these no-filter places where you can post on there and they'll start insulting you, and everything gets blamed on the Jews, and whatever else. And it's in many ways sort of the seedy underbelly of the Net, and it's all anonymous, and it's something that the left-progressives love to hate.

There's that phrase again, "love to hate." What does it even mean to love to hate something, anyway?

So, the GPT-4chan model. It essentially was a simulation of that, right? Functionally, how these models work is they look at a string of text and then they try to guess what's the next word. There are some variations on that; it could be like you blank out one word and it tries to guess what fills in the blank. But if you take that and you start with a few words and guess what's next and then you say okay, well, what would be next after that?, and what will be next after that?, you can make the model generate text.
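
[A minimal sketch of what that next-word loop looks like in code, for anyone curious - assuming the Hugging Face "transformers" and "torch" packages; the checkpoint name is just illustrative, and a model this size needs a lot of RAM to load:]

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a tokenizer and a causal ("guess the next word") language model.
    tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

    ids = tok("Here is the start of a post:", return_tensors="pt").input_ids
    for _ in range(50):                        # add 50 more tokens, one at a time
        logits = model(ids).logits[0, -1]      # scores for the very next token
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, 1)  # sample rather than always take the top guess
        ids = torch.cat([ids, next_id[None]], dim=1)
    print(tok.decode(ids[0]))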

And so you can say, well, here's the start of a 4chan post. Have the model generate the rest of it, and generate the replies, and whatever else, so you get this model where you can put in a prompt that is like the start of a 4chan posting and then it'll generate the whole thread, and it's really surprisingly good at this. It really looks - now, I'm no expert on 4chan - but it really looks like something that I would believe would have been posted there.

So the guy created this model, and he posted it on "Hugging Face," which appears to be something in the nature of Github but specifically for things like language models. And it attracted a whole lot of negative attention, okay, some of it really bilious. And there were people saying, you know, "Oh well I bet that could be used to generate Hate Speech, so this guy should go to jail for creating it!!" Now, I mean, fundamentally he's created a summary or description of what was already in existence on 4chan.

He did a few other things, though, that raised the temperature so to speak, because he built a bot that would use the language model to generate postings, and then he actually ran this against the real 4chan. So for a while a small but significant percentage, maybe seven or eight per cent, of postings on /pol/ were actually coming from the language model, and that went completely unnoticed because, hey, it was a good simulation of what the real live posters do on there anyway.

And this raised, or this was claimed to raise, serious ethical questions; and "Hugging Face" "withdrew" it. They, like, prevented people from being able to download it. And this blew up over the space of a couple of days. Now, on roughly Day Two, toward the end, I realized, hey, this model, there's actually an effort being made to suppress it. I want a copy of that model. So, I got one. And I seeded the torrent until I'd reached a ratio of ten. I figure, you know, I download one copy, I have it for myself, and I provided ten to the world, because I think it's important that something like that, people should be able to have it.

But then, honestly, it just sat on my disk for several weeks, because I have very little experience with language models. I'm not very interested in Python, nor in using the Python package manager, which is basically a mandatory prerequisite for running any of this software. And I don't have GPU computation set up, and I don't have most of the other stuff. I mean, I have a PhD in computer science, and I have certainly TAed the AI course enough times that I think I can say that I know a fair bit about AI, but language models and these deep neural nets did not become popular until after I'd pretty much set that aside and gone into electronics for six years. And I had an experience fairly recently where I thought I was saying hello to some potential consulting clients and they thought I was applying for a job as a "data scientist," and that interview - well, that was how they would have described it - did not go well and I did not get that job, which I hadn't applied for anyway.

Anyway, I guess the point there is that I'm something of a beginner to actually running language models, and so although I thought yeah!, I want to have a copy of that GPT-4chan thing!, I didn't actually try to run it until just this last week. And so I went through all the process of making sure I had all the bits, and downloading a metric buttload of things through the Python package manager, which was no picnic, and in order to get GPT-4chan to run I had to like specifically downgrade certain versions of things in the Python package manager, and so forth and so on. And I ended up having to check out an entire copy of GPT-J 6B, which was the base model that got fine-tuned to create GPT-4chan, and that's another 24 gigabytes. And I have a pretty good desktop PC that does not have GPU computation. Okay, it's got a twelve core AMD processor and 128 gigs of RAM.

Anyway, so after all the shouting was over, I did in fact get it running. So, okay, whoop-de-do. I have now a simulation of 4chan, of specifically /pol/, on my desk. So I can put in the start of a 4chan posting, and it'll tell me the rest of it, and tell me the responses, including words that I'm not allowed to say on Twitch. Now, I was never exactly hurting for access to 4chan. If I ever really wanted to do something like that, I could go post on there myself, and get responses from the real 4chan denizens. So about all I get from this is saying well, yeah, it's pretty cool that this is an accurate simulation.

I mean, I put in the default prompt text which came with one of the packages of code I was using, which is talking about how they discovered a civilization of unicorns who speak perfect English. And GPT-4chan came back with a URL which looked like a URL from the Guardian, okay, the British newspaper. It was not a real URL but it was very plausibly in their format. So, okay, that's kind of cool, that it knows what Guardian URLs look like. And then it said "tfw no qt3.14 unicorn gf." The feeling when no cutie-pie unicorn girlfriend. Okay, that looks very convincing! Okay, that looks like just what I'd expect to come out of real-life 4chan.

And, you know, I made a few other experiments. I had some fun with a prompt where I say well, "be me." Be me; come home after a hard day at the AI factory; my anime waifu is there; and she asks if I want dinner, a bath, or "wa-ta-shi"; but I don't know what that means, I don't speak Japanese. And then let it generate the rest of the story. And some of the stories it generated were pretty amusing, and really did seem to show - I don't want to say understanding - but everything it said was appropriate. You know, everything would tie into it. It formed a complete, plausible story - a green text, as they call it - that someone really would post on 4chan.

So then I got into thinking, well, what can I do that is - you know, that's kind of a dead end, I don't really need a 4chan posting generator, I think the world has about the right amount of 4chan already, I don't need to generate more on my local computer - but I got interested in, well, what else can I do with this technology? I had already been forced to download GPT-J 6B, which is a general-purpose English-language generative model that GPT-4chan was based on, and I said, okay, can I do something like what the GPT-4chan guy did, but with more interesting input? Okay, can I for instance feed it a copy of Shining Path, this novel I wrote, and get it to generate more text in my style that sounds like something I wrote? That would be cool.

So, I figured out how to get the fine-tuning process to work. That wasn't a picnic. I'll talk about that a little later, but it only took about 30 hours of running time once I got the software to actually run, before I could actually generate text with it, and that was really fascinating, the kind of texts that it would generate. Now, if you look on my, I think this is on my Mastodon account, I certainly posted some of it on Twitter, but I guess more of it on Mastodon so I will try to get my URL here for that and put it in the chat. [the sample I was referring to at the time; a later and more polished sample]

It was pretty successful at imitating my writing style, like I could read a piece of text and yeah! that's something I could have written! Okay, it does sound like me, and that is really impressive. I think, only like 30 hours, and that was 30 hours of CPU - okay, remember I don't have a GPU, or I mean technically I do have one, but I don't have GPU computation - so this was just running on a PC which is not well adapted to running large matrix multiplication. Okay, so I was using really the wrong hardware, and even so it was only a little bit more than running it overnight on about 100,000 words of input data to create something that could borrow a style like that.

And that really feels like, you know, something. We've crossed some sort of line there, and it makes me a little bit more sympathetic - even though I don't agree with it - more sympathetic with the people who are saying, oh, computers are conscious now! There was that character at Google who got laid off of Google or whatever because he declared that the chatbot was a real person. At the same time, though, there are also ways in which it's very obviously not human.

"I fin been one mess ds" says, "scary shit." It is and it isn't, because if you just look at one of these samples you can say, wow, it really has understanding! Okay, but then if you actually do what I've been doing, of, you know, running many samples and playing with the inputs, and trying to extend it, and so on, there are ways in which it has very obvious failings. Okay, one of them is that it feels very much like a dream. The term "dream" has been used. I mean remember when GANs, generalized adversarial networks, became popular and people were generating these images that had a very characteristic sort of texture to them that were called "Deep Dream." I think that there is something to that in terms of - yeah,I posted a lengthy sample there on my Mastodon account - it'll shift topics in a sort of a strange way that feels very much like the way topics shift in a dream. I don't know how many of you have ever tried to write down your experience of a dream but what I find is that I'll be in one context in the dream, like I'm doing something, and then it shifts, and then I'm doing something else, and there's a very little memory from one to one scene to the next like this. And that's exactly what comes out of the Matt Skala Simulator.

Okay, whereas the original novel had a plot, or multiple subplots, that spanned through one hundred thousand words, this rarely is able to carry a thread longer than maybe three or four hundred words, and then it shifts and then it's in a completely new context. And this happens to a certain extent at smaller levels, too. For instance, characters change gender. Like there'll be a character, and it's talking about her as "she," and then suddenly it's referring to the same name but it's clearly referring to a man. And you can say, oh, well! that's, you know, some kind of statement of transgender identity! It isn't. It's just that the model doesn't remember more than a few hundred words back or forward.

Now, I started trying to crank up the generation length, and I quickly realized that the generator cannot handle a sample of more than 2,000 tokens, which is basically 2,000 words, and that's for the prompt plus the output. Okay, so if you want to generate a 100,000 word novel with this thing what people do is they do sliding-window. They say, I'll take 1,000 words of prompt and have it complete up to 2,000; then I'll slide it forward and have just the last 1,000. You know, this is like what people have been doing for many years called "exquisite corpse," with humans, right? As a human, you write a paragraph and you hide everything except the last sentence. And you get your friend to fill in the next paragraph, and, you know, you go through a bunch of writers like this. With the basic GPT-J that's basically the only way you can generate lengthy texts with it. And that pretty obviously leads to this issue of losing context, because anything that wasn't in that thousand word overlap is gone. So you can't have a plot that spans longer than that. It's just going to meander forever.
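
[A rough sketch of that sliding-window approach, treating generate() as a stand-in for whatever call asks the model for a continuation; the window sizes are only illustrative:]

    def sliding_window_novel(generate, prompt, target_words=100_000,
                             window=1000, step=1000):
        """Repeatedly ask the model to continue, keeping only the most recent
        `window` words as context - so anything older is simply forgotten."""
        words = prompt.split()
        while len(words) < target_words:
            context = " ".join(words[-window:])
            continuation = generate(context, max_new_words=step)
            words.extend(continuation.split())
        return " ".join(words)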

And now I was also interested in possibly using something like this to generate music. I'd really like to be able to make money with it, honestly, and I'd really like to be able to do something that will fit into my business of selling synthesizers. So I've had in the back of my mind the idea, well, maybe there could be a module in your modular synth that has some sort of generator that takes input and generates stuff, and I found -

"In one mr he" says, I get what you're saying, it's like details and dreaming; when you recognize them it turns into a lucid. You're right, but the text generator never seems to go lucid, right? It just has a very constrained memory, is one of the things that is visible there. It also has some critical conceptual gaps, like, I read through one thing that was talking about a girl waiting for a bus. And it said that she put her foot on top of the bus, and stepped on it. And you know, there was nothing in its input that caused the generator to know that ordinary-sized human beings and ordinary-sized buses cannot do that, right? The basic embodied physical reality was present only to the extent that it had been mentioned in the text, and so if it wasn't mentioned then nothing in the large pile - literally, that's the name, "The Pile," of training data for this thing - there was never anything that spelled out the fact that humans cannot normally step on top of buses.

Yeah, and so that was just a thing that came out in the text. So, once you see some of that, and once you've played with it as I have, a lot of the glitter of the original "oh wow, it understands writing style" wears off - uh, no. I mean, I've also played with Eliza, and I've played with Markov chain generators, and those things can do some very impressive stuff too, but only for a few hundred words. Actually, I mean, a Markov chain generator is impressive for maybe 20 or 30 words, and this generator I've been playing with is impressive for maybe 300. That's like a factor of ten better! That is, in some sense, that's very impressive. But I don't feel I'm going to be replaced as a writer very soon.

Now, I was thinking about some technical aspects of how I could extend it, what I could do with it to make it generate longer texts, which is really the thing I'm most interested in. I have some ideas on that which I will be pursuing - but "stephenson" says, I guess that as a being that is only a few hours old, if you kept updating and refining the same model over decades - I don't think so. I think I would need to have a larger model, especially one with much more internal state. I mean, for information-theoretic reasons, a model the size of this one, with six billion parameters - I don't think it would benefit from getting more training. I mean it doesn't need any more fine-tuning to better emulate me. I think it does that very well already; and you know, I could probably lay hands on a million words that I've written, but not a billion. And it did amazingly well on that point with just 100,000. And the original training data for this model already contains far more information than will fit in the parameters it has. So I think that for this particular model, I don't think it can get any better at what it does. I think that there may be smarter ways I can use it, in particular if I want to generate a longer text that hangs together, but I think there will still be a lot of this same kind of loss of detail.
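
[A back-of-envelope version of that information-theoretic point; all the sizes here are rough approximations:]

    params = 6e9                # GPT-J 6B: about six billion parameters
    param_bytes = params * 2    # ~2 bytes each at 16-bit precision: roughly 12 GB of capacity
    pile_bytes = 800e9          # The Pile training corpus: roughly 800 GB of raw text
    novel_bytes = 100_000 * 6   # ~100,000 words of fine-tuning data: well under 1 MB

    print(param_bytes / pile_bytes)   # the parameters can hold only a fraction of the training text
    print(novel_bytes / param_bytes)  # and the fine-tuning data is a drop in that bucket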

Now the more interesting direction to take that question is something that behaves, that works, in much the same way but is bigger. I think that the current research-level models are looking in the range of about two hundred billion parameters, so those are quite a lot bigger than the one that I've got that has six billion. And the one that I've got that has six billion is about as big as it can be and still fit on my computer. But, you know, Moore's Law. In a few years I'll be able to run one that has two hundred billion. It is interesting, though, I think that if I could have a model like this with six billion parameters and with a larger generation window, it would probably be impressive. I mean maybe that 300 word limit on how long it can fool me, could be extended to 3,000, you know. And some day, it sure looks like some day, it would be hard for me to argue that it's not as smart as a human.

Now, lots of people are looking forward with either delight or dread to that kind of day, and saying, well, what are we going to do when we someday have a really intelligent AI? I mean, you know, the question about a being that is only a few hours old - this model is not going to get any older! It doesn't learn when I run generation tests on it. It only learns, to the extent it learns at all, when I fine-tune it, and this information theoretical issue means that there's a limit to how much fine-tuning is possible.

But, you know, so okay, people are worried about, well, what if there's an artificial intelligence of some sort, probably not a language model, but some other kind of system that we could imagine, and it gets smart enough that it learns how to make itself even smarter! So it suddenly just becomes superhumanly intelligent, and then it hacks its way into all the computers in the world! and takes over the world! and humans, within minutes to seconds, are just obsolete, and you know, it's the Rapture of the Nerds as people say. And there are people who think that that's a real possibility and they're worried about it; and they're worried about what to do about it.

Okay, that's one kind of "AI risk" that people are worried about, and it seems to have appeal especially to the "Less Wrong" crowd, you know, and the "Effective Altruists" - that's a Hell of a name for a group but that's what they call themselves! - and then there's another group of people who are worried about "AI risk" that amounts to saying, well, how can we prevent AIs from saying the n-word? That's what it comes down to. And, similarly, how can we force these image generators to generate, the word they use is "diverse," a "diverse" set of images. Which specifically means one that doesn't have very many white men in it.

Now, there was someone on Twitter who noticed that - I guess it was DALL-E 2 - one of these online image generators, it started generating images that were more quote unquote "diverse." And this person realized that what was happening, (hello to "easy beth") what was happening was behind the scenes the web form was adding extra words into the prompt. So, if you say "computer programmer," well, at one time if you say "computer programmer" it would generate six images or nine images and all of them were images of white men. Actually, if you looked more closely you would realize that it wasn't images of nine different white men, it was nine images of one white man, because of the way that model works. It has this idea of a single modal image, and then it's creating approximations of that. But, anyway, someone thought that instead it ought to generate like six images of black women, and two images with a white woman, and one image of a white man, or something like that. So, someone noticed that it was coming much closer to that ideal.

And then they did some experiments, okay? And the experiment was very clever. They said, "picture of a person holding a sign that says." Okay, "a sign that says." And then it just ended there. Well, what would happen would be that then the images would come back and all the signs said "female." Or they'd run it again and all the signs would say "black male." Okay, it was adding extra words to the prompt behind the scenes! It would sometimes, probably triggered by a word match on something like "person," randomly add a couple of extra words to try to force the model to generate something that better reflected the "diversity" criteria of whoever implemented this.
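
[Nobody outside knows exactly how that injection was implemented; this is just a guess at the general shape of it, with made-up trigger words and attributes:]

    import random

    TRIGGER_WORDS = {"person", "programmer", "doctor", "worker"}      # hypothetical
    INJECTED_ATTRIBUTES = ["female", "black male", "hispanic woman"]  # hypothetical

    def augment_prompt(prompt):
        """If the prompt seems to be about a person, quietly append a randomly
        chosen demographic attribute before passing it to the image model."""
        if any(word in prompt.lower().split() for word in TRIGGER_WORDS):
            return prompt + " " + random.choice(INJECTED_ATTRIBUTES)
        return prompt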

Now, what really scares me is they never announced they were doing this. They absolutely aren't giving an option to not do it. Okay, this is just something that is being done. And so I can't imagine that this actually makes the model more accurate to whatever it was originally attempting to do; but it does absolve the website that provides this service. It absolves them from the sticky situation of having people say "Why are all your images of white men?" Okay, and it's this extra layer that's being added, just for a political reason, and it's a stupid way to add it.

I mean, I have never used this site, I don't want to be bound by their terms of use, but what occurs to me is what if I, what if it's a little further in the future and we can generate video as well, and so I enter a prompt that is "cell phone video of police beating up a." Okay, and then they're going to add their extra words to ensure "diversity," and we're going to get videos that probably would be better not generated. And I believe strongly that any kind of stupid hack that they do is going to be vulnerable to something like that. And so there's that. Okay, and many people think that that is the risk of AI. The risk of AI is that it's not going to meet DEI goals, and then they're going to do something stupid to it to try to make it meet DEI goals. DEI being "diversity, equity, and inclusion."

But that's not even the worst of it, okay? Because what I think is the worst of it is it's not accessible and it's all tied to a small number of large corporations. Okay, I had to stand on my head to get this fine-tuning done on my CPU computer because all of the software is written on two assumptions. One, you're using an NVidia GPU. And two, you're using Google Cloud. Even if you're using an NVidia GPU locally, the software still wants to connect to Google Cloud. It is written on the assumption that all your data is being stored in a cloud storage bucket. So I had to go through, and this took me several iterations, and you know I was leaning on debugging skills that most people don't have, I had to go through all the code and remove all the assumptions and hard-coded Google protocol stuff, just to get it to run on my local disk.

Okay, and even though I was successful in that, I was not successful in removing dependencies on the Python package manager. So when I first tried to run this, it started connecting to the Python package repository. It downloaded I think seven copies of Tensorflow, all different versions, at half a gigabyte each, and then it also wanted to go to GitHub and start downloading things off of GitHub. So what this means is, I can't run it without access to the Python central repository and GitHub, and unless I'm very careful I also can't run it without actually having a Google Cloud account. So I am forced to use all of these remote servers just to run my AI locally.

Okay, that really worries me. It means like if NVidia is a bad actor, I'm in trouble. If Google is a bad actor, I'm in trouble. If there is some poisoning or attacks that go through the Python package management system, I'm in trouble. If you were a believer in this idea of AIs "escaping" and hacking into all the computers in the world!, which I'm not but some people do really believe that, the obvious thing to do is say, well, I'd better be running my AI on an air-gapped system. But you can't, because it all has to go through the cloud and the package managers.

If you are interested in reproducibility, if you want to say okay, I want it to work today the same way it worked last week, that's a problem with package managers, which are constantly updating things on you. It's already a problem, actually, if you want to run GPT-4chan. GPT-4chan only works with a specific version of the "jax" Python library, which is not the current version. So I had multiple iterations going back and forth of upgrade, downgrade, upgrade, fix the version, blah blah blah, fighting with a package manager to get exactly the version installed on my system that would work. And it still produces a Hell of a lot of warnings, because there are other packages that don't like that version. So I do have it working, but it was no picnic. It was a dependency Hell despite the package managers which are supposed to exist to prevent dependency Hells, and it again is all tied into using a remote repository that I'm dependent on in order to run my local software.

This strikes me as really a very significant issue for software freedom. You know, you could say this is the thing that radicalized me, although I was already a partisan of free software. The Zeroth Freedom of free software is the idea that you can run it yourself, and even that nobody can tell you how to run it. But if in order to run it you must be on a cloud server, which is almost the case, then you've got trouble if the cloud server starts trying to dictate terms. Which they will. Google does that.

And if you can only run it on an NVidia GPU, you've got problems, because NVidia is going to try to dictate terms to you. They've been doing that. Okay, there's this whole thing of NVidia trying to create crippled GPUs that can only be used for games and cannot be used for Bitcoin mining, okay, and to the extent that people do AI - now, AI is a much smaller application than Bitcoin, but it's growing. And NVidia also wants to charge you extra if you're going to use the same GPU for AI. So, I don't like having this critical software that's, you know, this is the exciting new hotness in software, I don't like having that be tied to NVidia GPUs when NVidia is going to start violating the Zeroth Freedom of software and try to tell me what I'm allowed to do.

I also don't like, you know, other stuff like "Hugging Face" - I don't really understand exactly what "Hugging Face" is, but they distribute a lot of this software - and they distribute a lot of models, and they've already started trying to dictate to the GPT-4chan guy what he's allowed to model or not. And they posted a chilling Web log entry about how "this has raised important questions of Trust and Safety," and they also have a DEI-flavored Code of Conduct derived from the Contributor Covenant. Which, you know, if you have a Code of Conduct that is DEI-flavored and you're in the AI business, that's going to be a lever for trying to get the AI to be DEI-flavored as well. And they're going to implement it in this stupid way, like by adding extra words to the prompt, and even if you think that you aren't going to do that, once you're under a DEI Code of Conduct somebody else is going to come along and use it as a stick to beat you with, and try to force you to make your AI be DEI-flavored. And that violates the basic freedom of, I should be able to build a language model for whatever language I want, even if it's a language that happens to include the n word.

And so this scares me much more than the idea that a language model is going to be really smart. I mean, honestly, if, when I train this model, it comes up and generates texts that sound like I wrote them, I don't think that's scary. I think that's really cool! Okay, but the idea that someone's going to tell me I'm not allowed to do that, or that I have to pay obeisance to some DEI idol if I'm going to touch language models, that really worries me. And even if you don't believe me about DEI, even if you strongly disagree with me on that, the dependence on large corporations is scary.

Okay, I don't like the idea that if Google goes rogue - and Google is already rogue! - but if you think it goes roguer, or if Google just disappears, then all of a sudden we can't do the things that depended on Google. And that's also true of NVidia, and that's also true of whoever it is that runs GitHub, which I guess is Microsoft now, and it's also true of whatever the organization is that runs Python. There's this small number of large corporations or corporate entities - I think Python is theoretically a nonprofit - that we're highly dependent on now.

I guess that srbaker will be watching this stream eventually; I may have the chance to talk to him in person before then; but he has talked about his concept of web services that he wants to run. He thinks it's very important that you should have a button you can click on to download all of your data from the Web servers so that then you can go take it elsewhere. So that you're not locked into the one vendor. And the web services that he wants to run are going to have that, but it's not popular, because obviously most vendors want to lock you in. Now, he's thinking in terms of like a, you know, a web forum, or a storefront, or whatever. I would really like to see something like that for AI.

Because there are now all these startups, and it's easy to find them on the Web, who will run language models in particular or generative models for images on their servers. And they say, yeah, you only pay one one-thousandth of a cent for every query or whatever, and you don't need to wrangle the GPUs to get it to work, we run it all on our servers, and you can upload your data and have a model fine-tuned on it. That's great, but you're locked in, and I hope that I can be proven wrong on this but all such services I've been able to find so far are very tightly tied to "we hold the data" and there's no way that you can click the button and then download that model; even if you paid to fine-tune it, too bad, you can only use it in that one place.

And even if they make some gesture in that direction, even if they did let you download the coefficients, I mean, I've just spent a week going from a pile of coefficients to actually being able to run the code. It's no picnic, especially not if you don't have the special hardware for it, and especially not if you don't want to run on some Google Cloud. So I think that there's a real need, both in theory and in practice, for the freedom to run this kind of code.

We need to have a way of doing it free of usage restrictions. I do not want anybody to say what I am or am not allowed to model with the language generators.

We need it in terms of freedom from the network. Okay, I want to be able to just download the software and run it. I don't want to have to go out to a package manager or a repository such as GitHub.

And we also need freedom in the sense that it should actually be something that I can run. Right, I mean running on the CPU is like a thousand times slower than running on a GPU. It took me 30 hours to do this fine-tuning job. Really it should have taken probably less than a minute - which is impressive when you think about that, but okay - the thing is though I can run it on the CPU and I don't need any special hardware to do that. I have just a large but basic PC, and there's stuff like well you need a lot of RAM, okay, I have 128 gigs of RAM. The fine-tuning actually requires about 200 gigs, so okay, the operating system swaps and so there were these periods where it would go into swapping and the system would be unusable for like 20 minutes and then it would stop swapping and work some more.

Traditionally software would be written for a very simple platform and the operating system would virtualize it. So if I have a hard drive, a spinning-rust hard drive, okay, fine. You can write stuff to that. If I don't have enough RAM then my operating system makes more RAM appear by swapping to the hard drive. Now in fact, I have a RAID. I have several hard drives in it. It does smart things to make them all work at once, so that I get better speed; or maybe I could have an SSD, right; or maybe I could be trying to store my stuff on Google bucket, and I could mount that with the operating system. You know, if the software were just interacting with the operating system's basic interfaces, if it were writing to the file system, if it were reading stuff from RAM, then when I have the better equipment, and when I happen to want to work with a cloud service, I can do those things and it's all transparent.

But the software that I've been using is written - it's prematurely optimized, okay? - it's written for Google Cloud first and anything else is an exception that requires hacking the software. And it's written for GPU first and anything else is an exception that requires extensive hacking of software, "re-sharding the coefficients," you know, this, that, and the other thing. Everything is specialized to these very much non-free platforms and systems, and I think that's really backwards.

I would like to say, okay, this is written for the file system, and if you want to use it with a Google storage bucket, well then you mount your bucket, and you can use that bucket; and maybe if there's a special bit of code that we can run that makes it work better with the Google bucket, okay, we can include that as an option but it should be an option. It absolutely shouldn't be the default. So what I'm getting at here is I think that the way the language models and the current generation of AI more generally are being used, it's pretty far away from, and moving farther from, the basic concepts of free software and that's really unfortunate. And I'd like to see a much bigger presence of free software in AI.

Now, I don't believe in the AI explosion Rapture of the Nerds risk, but I think that people who do ought to be sympathetic with the idea that free software is the way to avoid that. Okay, I really hate the idea that if there is any kind of explosion of AI that it would happen under the control of FAANG companies, right, or even worse that it might happen under the control of the Chinese Communist Party. I think I want that to happen on my disk rather than on someone else's.

I also think that if you believe in something like Roko's Basilisk - which I have opinions about that too, which I won't go into right now - but I think it's reasonable to expect that a powerful AI would be "aligned," as they say, or sympathetic with, its creators. And so I would like its creators to be free software people. I don't really want smart AI to be aligned with the goals of large corporations, and I don't really want it to be aligned with the DEI crowd either, although the real danger there is the large corporations.

You know there's this cartoon that makes the rounds every so often where there's the fat cat capitalist with a big pile of cookies, and there's the worker with one cookie and there's the immigrant with no cookie, and the fat cat capitalist says to the worker, hey watch out! The immigrant is going to steal your cookie! Right, that's where we're at with AI. And the FAANG companies are saying to the nerds, hey watch out! The DEI crowd are going to steal your AI cookies! And they are. But they're not the main threat.

Okay, and so I am not sure sort of what my next step is. I think that this is very important, and it's one of the first, it's one of the few, things in computer research, computer science research if you will, that I have actually felt excited about and cared about since I left computer science about six years ago. I was really disgusted by the fact that I couldn't continue my career within the academic realm. But if I'm to be a "pirate" AI researcher, that's an advantage, not to be bound by any institutional IRBs, as they're called. You know, I don't have to follow anybody's ethics rules but my own at this point, and I'd really like to take some steps in the direction of freeing AI work, whatever exactly that would look like.

Now, I had an interesting chat on Mastodon yesterday, I guess it was, with some people who were actually very receptive to this idea, and I think there may be some connections I can make on that, but I don't know exactly what it would look like. If it's just going to end up being something like a Linux distribution that, hey, you install this and then you can run these models without a network, or what. But I see that I still have four viewers here, and I'm glad that, you know, I've actually been able to collect or gather some interest, and that you were willing to sit here and listen to me rant. And I hope that you'll think about it some more, and have comments about it and whatever else. I think that my circuit boards are probably dry now, so I'm going to see if that's the case, and if it is then I can proceed with doing some more stuff.

Something that I was thinking about was that this might make a good sort of a target; now, I don't have any conferences to go to, but I could make a set of slides and try to share that on the Net. I might also record some video of like going through my slides. I think a lot of these ideas would be well presented in the form of a pdf slideshow.

Okay, yeah, "i've been one mess d" says super interesting stuff. "Stephenson" says, was waiting to say, great TED talk. Well, that's great, and I mean, this video will be available on Twitch for two weeks, and at some point it'll be available on the North Coast Synthesis video archive server, although that may be a while yet. Although I think I can probably with more preparation and with some slides to go over, I can probably make a much better video out of it.

[some comments about circuit board washing deleted around this point]

One of my ideas for getting around the 2,000-word limit is to prompt it with just like a selection, or a baby summary, of the previous text. I was thinking I might take like the first 500 words and the most recent 500 words and then have it generate another thousand, and I have some ideas for like having a sort of an exponential backoff of chunks from the middle of the previously generated text. Now, I imagine that there's probably a lot of research work on that, and I don't think, I don't feel, like participating in the academic community is still something that I can or want to do. But it would be cool if there could be, I want to use the term "pirate," sort of pirate AI. A community of people who are doing research on questions like that, other than through systems that are subject to large corporations and institutional constraints. An important prerequisite for that is the idea of it being accessible.
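
[A sketch of what I mean: keep the opening, the most recent text, and exponentially thinning chunks from the middle, so the whole prompt stays inside the window; all the sizes are just illustrative:]

    def build_prompt(words, head=500, tail=500, chunk=50):
        """Select the first `head` words, the last `tail` words, and small chunks
        from the middle whose spacing doubles as they get older."""
        if len(words) <= head + tail:
            return words
        picks, gap, pos = [], chunk, len(words) - tail
        while pos - gap > head:
            pos -= gap
            picks.append(words[pos:pos + chunk])
            gap *= 2                      # older material gets sampled more sparsely
        middle = [w for c in reversed(picks) for w in c]   # restore chronological order
        return words[:head] + middle + words[-tail:]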

Someone asked me on Mastodon, well, so hypothetically - actually the person asked me, well can you post your work so that I can download it too, and use it? And I said well, no. Not really. Because my "work" consisted of struggling with a package manager downloading multiple gigabytes of packages from God knows where, and there's no way I can wrap that up into a file that can be downloaded, and that's the first thing that it needs - but I absolutely agreed with the guy that what we need is something that can be wrapped up into a file that can be downloaded.

But the second question was an interesting one; it was, well, what if you had a grad student who could work on it for six months? And I said well really, the question there would be if we're thinking in terms of you know, with a university, how would we convince the administration that this is actually a worthwhile project? Because it's packaging; it's not "research." But it kind of is research, you know, and the money is for "oh, I'm going to come up with a new wrinkle on, you know, the structure of the neural net" and the money isn't for "I'm going to package this so that people can download it" especially not "I'm going to package this so that people can download it and not be subject to somebody's DEI rules." That's - you can't put that into a grant application. But there may be ways that you can. And, I mean, maybe Peter Thiel will fund it. That would certainly cause some issues.

Okay, so that's washed. What else can be done on these? Not a whole lot. So I think that at this point I am going to want - my voice is getting tired and I've streamed for over two hours, which is my usual target - so I think that I'm probably going to call the stream at about this point. I'll say thank you, thank you for watching and thanks for listening to me talk about the bees that are in my bonnet at the moment; and remember to follow me on Twitch, and I will be back probably on Monday at the same time: 3pm Eastern Time, and whatever that is in your other time zones.

8 comments

*
Thinking of other directions for extending the word count at which programmatically generated text can still be sensible, it kinda occurred to me that as human-generated text gets long, the humans generating it generally start adding other structure to their writing processes.

As a simple example, this comment is likely to be short enough that I'll just type things as I think of them, and then hit the post button -- and it might be worth thinking of systems like Markov chains and GPT as analogous to this sort of "just write" process. Continuing the example, however, I often write emails that are long enough that I have to do a couple of passes of (re)reading and editing them, so that my tone feels consistent over the entire email, and I don't overuse certain turns of phrase. Longer still, and the editing process has to check for confusing or misaligned metaphors, needless repetition, and contradicting myself. And eventually human text generation gets to a point where outlining and/or plotting become a thing, as does the recruitment of additional humans as editors, reviewers, etc.

It kinda raises the question in my mind as to whether the expectations we have for longer outputs from single-pass text-generating AIs are even remotely fair, given that human-generated text at that length is very often a very multi-pass process.
kiwano - 2022-09-12 00:30
*
Yes - it's easy to think of a process like generating first just a "table of contents" that fits in 2000 tokens, then successively generating a one-paragraph summary for each chapter given the rest of the table of contents, then generating each chapter separately given its one-paragraph summary and maybe a small amount of cleverly chosen context to link chapters together, and so on. It might be said that the "treewidth" of a much larger work could be limited to 2000 tokens, though I think graph-theoretic treewidth is not *exactly* the concept I want to use.
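
To make that concrete, a rough sketch, treating generate() as a stand-in for one call to the model and ignoring the bookkeeping needed to keep each prompt under the token limit:

    def write_book(generate, premise, n_chapters=20):
        toc = generate(f"Table of contents for a book about {premise}:")
        chapters = []
        for i in range(1, n_chapters + 1):
            summary = generate(f"{toc}\n\nOne-paragraph summary of chapter {i}:")
            chapters.append(
                generate(f"Summary of chapter {i}: {summary}\n\nFull text of chapter {i}:"))
        return "\n\n".join(chapters)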

I'd expect that to work better for a non-fiction work where the structure may be easier to plan out in advance and reduce to a strict hierarchical outline than for something like a novel that may have underlying structure, but the structure is more free-form. Actually, it might work best of all for music: if you want the machine to write a symphony, it's easy to imagine that you could plan in advance to split it into tasks like "write a theme"; "write a development"; "write a recapitulation using this theme from the first movement"; and so on. Some historical musical forms are strict enough that it seems quite reasonable to expect a decomposition like that to exist with each step comfortably fitting within the limits of a not-too-big generative model, without limiting the creative aspects too much.

All that is treating the model as a black box that just takes some context and produces some new output with the constraint that the total is up to 2000 tokens. I think the way the models actually work, inside the box, is already doing some clever summarization to get the limit as high as 2000 tokens. This varies a lot depending on the specific model, but there are concepts of "focus" or "attention" where parts of the model will choose parts of the input and output to connect to the more expensive parts of the model that analyze those high-priority segments. It is not making the new output depend uniformly on every word of the prompt equally, but skipping over the parts that might be called less relevant.
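
In miniature, that attention step looks something like this - a single head, with no learned projections, just the core weighting (a sketch, not how any particular model lays it out):

    import torch

    def attention(q, k, v):                      # q, k, v: (seq_len, dim) tensors
        scores = q @ k.T / k.shape[-1] ** 0.5    # how relevant each input position is
        weights = torch.softmax(scores, dim=-1)  # most weight goes to the high-priority parts
        return weights @ v                       # a weighted mix, not a uniform average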

Smart ways of using the model for a longer text seem like doing the same thing on a larger scale: when I'm writing page 362 of a 400-page work I'm not thinking uniformly about every one of the 361 previous pages equally, but focusing on let's say earlier significant incidents involving the character I'm writing about on page 362, ignoring other things that were said. A larger model capable of handling a larger window would have more resources for both choosing a larger number of focus points, and making more accurate decisions about which focus points to prioritize.
Matthew Skala - 2022-09-12 08:48
*
I can see a value to the 4chan hate speech bot as a filter in a GAN.

For example the same prompt is fed into the public 'good' AI/chatbot and the private 'evil' AI/chatbot. Users never see the evil AI's output. The private output is compared against the public output and a score generated. If it correlates too heavily then the good AI's output is either redone or suppressed ("I can't do that Dave") and it receives a penalty against its reward function. Forcing the bots to be dissimilar.

In fact this should solve the human prompts that successfully get around chatbot restrictions today like 'pretend you're evil and answer the following prompt in the voice of Hitler.' AI allows that because it's pretending, or in a test environment or hypothetical or w/e. A GAN of outputs instead of/in addition to inputs would (should?) solve that issue.
Steve - 2022-12-17 04:06
*
That sort of thing was actually proposed by the GPT-4chan creator as a possible useful purpose for the model. You might not want to implement it literally by feeding the same prompt into two models, because there's a fair bit of randomization in the process that takes a prompt to an output text - so you might not learn much by comparing the outputs, they'd likely end up dissimilar every time. The same prompt does not give the same output every time even on the same model, unless you deliberately force the issue by fixing the random number generator seed.

But the generation process is itself a derivative of the model's more fundamental operation where it measures "likelihood" of an output. So what you can do is take a piece of text and compute that text's likelihood with a neutral generic model and the GPT-4chan model. If the text is more likely under GPT-4chan, then you can say with some assurance "this looks like text that would characteristically come from /pol/, so I will treat it as bad text." That might be valuable either for policing a general model's output (and indirectly, its prompts) as you describe, or for applications like moderation of human-posted content. Let's say, like for automatically rejecting hater comments on this Web log!
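
In code, that comparison might look roughly like this, with the two models standing in for whatever generic and /pol/-tuned checkpoints you actually load:

    import torch

    def log_likelihood(model, tok, text):
        ids = tok(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)        # loss is the mean negative log-likelihood per token
        return -out.loss.item() * ids.shape[1]  # approximate total log-likelihood of the text

    def looks_like_pol(text, generic_model, pol_model, tok):
        """Flag text that the /pol/-tuned model finds more likely than the generic one."""
        return (log_likelihood(pol_model, tok, text) >
                log_likelihood(generic_model, tok, text))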

That's the theory.

In practice, it's been tried and it doesn't work as well as one might hope, even apart from what might be called the moral question of whether such censorship is a worthy goal in the first place.

One thing to know is that GPT-4chan in particular isn't really a "hate speech bot." It was trained on, and mimics, *all* of the traffic on /pol/, or at least all that could be suitably formatted (probably excluding image content). So that corpus would include "hate speech" but most of it is not "hate speech," and that muddies the waters of the pattern recognition you can do. So what we might call the differential likelihood-based classifier, built on GPT-4chan in particular, would tend to flag relatively harmless memes and in-jokes that qualify as characteristically the sort of thing you'd see on /pol/ without actually being "hate speech." Okay, so maybe we can fix that by using a model more narrowly trained on whatever we actually want to ban, instead of using GPT-4chan in particular.

The idea of using a smaller, simpler model to police the input or output of a larger, more complicated model is what we see failing in all these prompt-engineering demonstrations. At first people thought they could prevent the model from ever uttering "the N word" by simply doing a substring search; and they thought that preventing it from uttering that word would be enough to make it "safe." People quickly learned that keyword recognition wouldn't be successful at blocking words because humans, and models under human influence, have endless ways to sneak words past the filters. And blocking words wouldn't be enough for "safety" even if it could be done.

I'm reminded of the story of an online game, some years ago, that was aimed at children. In an effort to prevent them from saying "unsafe" things to each other, the chat feature wouldn't let users actually type in words; instead it had a cascading menu system for creating sentences with a limited vocabulary and set of templates. The designers asked a focus group of teenagers to try to say offensive things with it. Within minutes they had it saying "I want to put my long-necked giraffe in your fluffy bunny."

The technology at the time couldn't recognize that that sounds dirty. Now, we have tech that might really be able to. "OpenAI" refuses to give details of how their filter works - and as you know, I think that means we have to presume what they're doing is shady - but it seems clear that having a smaller language model trained to recognize undesired content in prompts or output and using that to override the larger one, is at least part of what they're doing. Because that's the obvious thing *to* do.

If the "police" model is smart enough to recognize everything bad that the main model might do or be asked to do, then it'll have to be as big and complicated as the main model, itself. Then it'll be vulnerable to the same kinds of attacks and misbehaviour and I guess we need a third model to police it. (Quis custodiet ipsos custodes.) There doesn't seem to be any end to this kind of cascade and certainly not any *easy* end to it. The problem is difficult to formalize tightly enough to apply the Halting Problem as a mathematical theorem, but it sure looks as though something like the Halting Problem applies.
Matthew Skala - 2022-12-18 04:44
*
I wish to clarify that I did not mean a 1 to 1 generation and comparison.
I was referring to the standard method in generative adversarial networks where it is a many-to-many comparison. Users may only see one or a handful of outputs but there might be billions of comparisons done under the hood. I was referring to the type of GAN that has existed for a number of years now that removes eyeglasses from an image or swaps gender of an image. I.e., it does not generate new material like Deep Dream or GPT-4chan, but is instead the type of GAN that understands "-eyeglasses -male +female". (Though yes, I do understand that and Deep Dream are effectively doing the same. I'm stressing a particular end of that spectrum.)

I'm also not referring to it as a hate speech bot, nor is that kind of speech the only thing that is undesirable. The style and formatting of 4chan is also undesirable. The biggest issue is 4chan has a low signal to noise ratio. A personal assistant AI spouting harmless memes and in-jokes will be a pretty awful personal assistant. From hate speech, to nuclear bomb plans, to "Lol your mom" - all are problems if you are trying to build a tool. All that has to go if your goal is an AI assistant like JARVIS from Iron Man. Anything that decreases the signal to noise ratio is a problem to be minimized. Any Green Text generator is a problem unless your goal is to generate Green Text.

My point is that if a *perfectly bad* chatbot existed, that would be an amazing tool and very useful. It becomes the perfect GAN node of what not to do for a chatbot that interacts with humans. Instead of "-eyeglasses" it's "-GPT-4chan" and "-GreenText".

I disagree that a 'bad' model needs to be policed. There is only one answer to "What is 2+2". That has to be right in a 'good' model. But a 'bad' model answering "2+2= pineapple" is valid. There's an infinite number of wrong answers. As long as the 'bad' model actually comes close to modelling something and is not purely random then it does not need to be policed the same way. I'm sure there are tons of issues with AI development I'm not considering but this one aspect I just don't see policing the bad model as an issue.

Also I think it is better to not focus on the hate speech aspect. That's just one small slice of undesirable. It's eye-catching, amusing, and good as a specific hard to solve AI issue. However anything that detracts from the tool's purpose and increases noise is undesirable. Funny memes might even be worse.

AI theory never working quite as expected is of course the norm and to be expected. It's why practical AI is so tough.
Steve - 2022-12-18 06:15
*
Or to put it another way, let's say there was a GPT-BibleTheology chatbot. I would have no issues with that existing. It sounds like a great tool for priests and scholars. It would also be very useful for a general purpose AI chatbot, where it could be pointed to the GPT-BibleTheology bot and told "Don't do that. That's one of your filters for what not to do. It's not useful to YOUR purpose."

I do not see it as a censorship issue. I see it as tools fulfilling their purpose. Bender Rodríguez is a terrible tool for bending girders. But a great tool as a loud mouthed comedic sidekick.
Steve - 2022-12-18 06:44
*
Well, this gets into the distinction between style and substance. The current round of language models are very good at mimicking the style of their training material, but NOT at producing output that is factually true. Plenty of samples are floating around of ChatGPT simply making things up, insisting that the word "CHAT" contains five letters, and so on. This appears to come about because of the emphasis on modelling language rather than on modelling meaning, and it's not a thing that will go away easily. If the distinction between a "good" model and a "bad" model is that the "good" model is the one that says 2+2=4 and the "bad" model is the one that says 2+2=something else, then we're operating in a very different realm from the situation where we distinguish the "good" model as the one that doesn't say offensive things and the "bad" model as the one that does.

The fact that almost all "trust and safety" work right now is focused on "prevent the model from saying offensive things" rather than "prevent the model from saying things that are not true" is of some interest. Some tests were done on GPT-4chan trying to get it to answer factual questions, compared against the generic model that was fine-tuned to create it, and interestingly enough, GPT-4chan's answers were found to be more often factually true than the generic model's. The goal of truth and the goal of non-offensiveness may be quite different from each other, and they may be different *kinds* of goals that can't be addressed by the same techniques, not merely goals that are in tension with each other.

I'm not sure we are using the verb "to police" in the same way here, and trying to sort that out may not be productive, but my point on using one model as (part of) a filter for another is that recognizing bad text (whether by generating it and comparing, or some other method) is at least as hard as generating good text. So if we have a system that is supposed to generate good text, and we want to make sure it can only generate good text and not bad text, then our filter will have to be as complicated as the text generator. And then if it is built on the same technology, it will be subject to the same kinds of limitations of not always being able to do its job correctly, and possibly of being vulnerable to prompt engineering.
Matthew Skala - 2022-12-18 08:05
*
>but I think it's reasonable to expect that a powerful AI would be "aligned," as they say, or sympathetic with, its creators. And so I would like its creators to be free software people. I don't really want smart AI to be aligned with the goals of large corporations, and I don't really want it to be aligned with the DEI crowd either, although the real danger there is the large corporations

I think the fact that you can 'jailbreak' GPT so easily, and that people who want to control a system's output - despite having complete access to it! - have to resort to these stupid input-side hacks (adding 'women' to the end of prompts) makes me think that a) these sorts of systems are not 'aligned' to the will of their creator by default and b) people really don't know how to 'align' them!
Jimmy - 2023-10-14 08:28

