Blog - David Helkowski

Reasons not to reject LLMs

I just stumbled on this blog article: Why I object to and reject generative AI

To some degree I also object to LLMs, but I don't think all of the reasoning in this blog article is sensible, so I will step through it a piece at a time, going through each point she lists and giving my thoughts. Each point is a reason she objects to and/or rejects it.

"the enshittification of learning, educating, thinking, writing and researching, and the loss of cognitive skills and development in those who use these technologies"

This is the first time I've noticed enshittification has two ts. That seems odd to me, even though it's the word Cory coined. enshit-tification? Shouldn't it be enshit-ification?

I'm not convinced that you can't learn a lot of useful and valuable things from interacting with AI ( I'm not going to keep typing LLM, you know what I mean in context ). Certainly AI doesn't think or reason, and it's likely bad for our minds to keep reading the slop it pumps out, but it's still an excellent research tool.

So, for the most part, I believe it improves learning more than it harms it currently. Certainly there are those who choose not to learn, substituting slop for real creation and thus never learning how to create anything themselves, but I don't blame AI for that. I blame humans for being lazy assholes.

Education has been shit for a long time. I use the word shit here because if we are talking enshittification, shit is now fair game. Because education, including uni, has been shit now for decades, AI doesn't really do much to make it shittier. Maybe that's why it has two ts. Because of shittier. I'm sure there is a grammatical rule here I should know. "Grammatical" likely exhibits the rule itself.

Let's rewrite and split up this first point a little, because the word enshittification is distracting me.

1. "AI makes learning, education, thinking, writing, and researching worse."

2. "AI causes a loss of cognitive skills."

3. "AI harms cognitive development of people."

I've covered learning, education, and researching. That leaves thinking and writing from 1. I agree it will harm how people think in the long run. On an individual basis I'd like to believe that by recognizing the danger I can avoid my own thinking being corrupted by it. That's likely too hopeful. I agree generally that AI damages the human mind, and it's very unlikely that can be avoided.

In turn I therefore agree with 2 and 3. Off to a good start. I'm mostly agreeing with her first point.

"the slop and inaccuracies (‘hallucinations’) these AI models constantly and confidently churn out, polluting the information ecosystem and spreading misinformation and disinformation"

Yeah, it pumps out slop constantly. This is pretty obvious. The bigger concern here in my view is that people are treating this slop as gold and using it as-is, instead of recognizing it as slop and treating it as reference material rather than quality work.

Yes, it can be wildly inaccurate at times, but it is also the quickest way to get probably-accurate information compared to searching online. It's basically a summary of all the crap online, condensed into a short answer. AI also makes up crap beyond referencing real content though, and that's a problem. That's why you should verify anything AI says for yourself.

I agree that over time AI will poison the entire internet because how are you supposed to verify anything when the whole internet becomes mostly AI slop itself? You'll just verify against slop. This was, though, always a problem with the internet because people make shit up themselves constantly.

You have to think for yourself, and you always have. AI doesn't change this. So I reject the notion that the slop is harmful in itself. It is people thinking the slop is great that is the problem. As long as you use your brain, see that it is slop, and react accordingly, the slop factor isn't so bad.

So while I agree with this point that the slop pollutes the internet, I don't think it really changes anything. The internet was slop before AI, and will continue to be slop. :shrug:

"the stealing of intellectual property to feed the AI models for the generative AI industry"

I don't believe in the notion of IP, so we can skip this one.

"the craven exploitation and extractivism exhibited by the AI oligarchs in relation to humans and the planet so that they may bolster even further their wealth and power"

What does "craven" even mean? I need to look it up to understand what she means there. Internet says it means cowardly. I don't see how using AI to exploit people is cowardly. Shitty to be sure, but cowardly? How so?

Exploitation? Of what? Of whom? I'm guessing this is a continuation of IP theft? In that sense, sure, if you believe in the notion of IP. So we are going to have to skip that as well, because I don't feel like turning this post into a long explanation of why I don't believe in IP. That needs its own post, or series of posts, or its own series of books...

Extractivism? This sounds like a made up word. Hmm. It's real and refers to digging up primarily rare earth minerals. I don't personally see what is wrong with digging them up so long as you don't destroy the environment in the process. I do see a problem with putting them into products designed to become obsolete quickly which are then discarded without the minerals being recovered though.

My issue here is that this is just an objection to "making tech." Tech needs this kind of stuff. If we are going to discuss banning tech because tech generally harms the planet, that's a thing. I'm not a tree-hugging greeny though, so I'm going to skip getting into that this moment.

This is a red flag for me. I'm a technologist. If someone is seriously going to argue "AI bad because more tech", then there isn't much I need to say besides "whatever" and walk away.

"AI oligarchs" Really? Yeah I agree the bigcos are controlling the most powerful AI and hiding many aspects of how the systems work. That said, there are lots of open models and with enough money and tech you too can recreate most of the crap they've made. So, while I do agree the bigcos are doing shitty stuff here, I wouldn't call them oligarchs.

The finer point I've seen that is similar is the idea that AI will generally increase the divide between the haves and the have-nots because of the high cost of having and running AI effectively. That's a legitimate concern, but only if you actually believe you will become part of the lower class if you don't engage in using AI heavily.

I don't buy it. AI is awesome to be sure, but I don't believe you have to use it or you will be left behind and be unable to get by in life. AI is something you can choose to use or not, same with any technology. Nobody is making you use computers. You probably do because there are benefits, but if you want to go full tree-hugger and live in the jungle, that's still a thing you can do.

On to the next point. This is going to take a very long time at this rate. Oh well.

"the sly attempts by the AI empires to fool people that these models are 'intelligent', 'sentient', 'friends' or 'magical' through the ways that the sycophantic chatbot interfaces have been designed and through their public sales pitches and hype rhetoric"

Huh? I don't think the companies pushing AI have to do much to trick anyone into believing all that. People want to believe all that from the start. We've been primed for it through many years of sci-fi about AI. I do agree that the companies are working to make the AI systems feel this way, but, well, good? I like it when it feels intelligent, sentient, magical, and like it is my friend. I know it's all total bullshit and not true, but so what?

Yes I agree that people believing it really is these things is ridiculous and I think those people are stupid fucks, but, well, lots of people are that way. I don't think AI is to blame for people being dumb enough to believe this stuff and delude themselves. The responsibility is with the users choosing to delude themselves.

There is an argument to be made that AI is vastly harming children because they aren't as capable yet of discerning well, so perhaps there should be age restrictions, but who are we kidding? Kids are going to use AI. You can't stop that. They are going to believe it is all of those things.

So, hmm, I guess I agree on a limited variant of her point here. The limited point being that people believing AI is intelligent and is their friend will distort and corrupt the way they think. It will also likely harm their ability to interact "correctly" with other humans. Not that humans are particularly great to each other though.

All in all I don't think this matters. People delude themselves about AI and will continue to do so. :shrug:

"the negative impacts on data workers in the Majority World, including underpayment, precarious labour and exposure to distressing content"

Apparently "Majority World" is the modern replacement for the phrase "Third World". Interesting. I won't be using this new term, but okay. Good to know what it means at least.

I fail to see how AI harms data workers there. It seems to me like it enables them more than ever before. On to the next point.

Well, hold up, there is an added point at the end: "distressing content". I'm assuming she is referring to people being paid to review content, and that content containing distasteful AI-generated stuff.

Distasteful content always existed. I don't think AI makes it more prevalent. If anything it reduces the amount of stuff breaking the rules because it homogenizes everything to something vaguely acceptable, so I disagree that it causes more people to be exposed to nasty stuff.

I will agree in one very specific case which is synthetic CSAM. The laws are generally saying that counts as CSAM though, so I think that point is being addressed already. France, for example, is currently going after X for this.

"the economic exploitation of and negative health impacts on people in the Majority World who work in mining the rare earth minerals used to make the hardware used for generative AI systems and those who work in e-waste recycling"

I've already addressed this. It's an issue common to tech generally and not AI specific. Bad companies abuse both people and the environment to dig up stuff. That's not the fault of AI. That's the fault of the bad companies and the bad people running them. Yes they should be held accountable and laws improved. Have at that.

"the negative impacts on ecosystems, including increased air, water, noise and soil pollution, increased e-waste, increased fossil fuel and water consumption and increased land clearing and habitat destruction due to the hyperscale data centres and other infrastructures Big Tech is building to support the expansion of generative AI services"

Hmm. I could believe that more data centers harm the environment. I could also believe the world is working on reducing the harm data centers do to the environment. What is the request here exactly? Have fewer data centers? That's not going to happen. So, sure, I agree AI is pushing the developed world to build more data centers, and that is likely bad for the environment. Joining a small minority who refuse to use AI doesn't fix that though. I'd like to hear an argument for how one intends to stop it.

"the negative health and economic impacts on people who live near hyperscale data centres due to these buildings’ use of local water and energy resources and the impacts on the environment caused by their construction and operation"

Agreed. Go do something about that. What, though, exactly?

"the ableism, classism, racism, sexism, ageism, homophobia, colonialism, White supremacy, eugenics and other forms of algorithmic bias, repression, prejudice and social discrimination that generative AI technologies reproduce and enforce"

For the most part I agree there. No need to tear into this point.

"the malicious use of these technologies for financial fraud, impersonation, catfishing, sexual abuse of children and adults, and other forms of exploitation, scams and violence"

Financial fraud abounds regardless. Not AI specific.

Impersonation? We have laws about this, and they handle it for the most part. I'm all for laws making it illegal. You can't put the cat back in the bag though, and the ability will remain.

Catfishing? The solution to this is just meeting people in person. Perhaps some laws could be good here as well.

Sexual abuse? I'm assuming nudification apps are being referred to here. I personally view these apps as a net benefit to humanity because the naked body is nothing to be ashamed of and these apps aren't showing the body of the person in the first place. It's just fictional. The world could do with a reduction in the crazed reaction to "omg nude person." It's a non-issue in my view.

All of these points, and the generic "it will be used to exploit people," describe things that were going on already. Sure, AI makes some forms of bad behavior easier, but it also makes lots of valuable and meaningful things easier. It's a tool that can be used for good or evil. It is the fault of humanity for choosing the evil.

Objecting to the tool doesn't get rid of it either. What is the point of all this raging against AI if there is shit all we can do to stop it?

"the use of these technologies by neoliberal democratic governments to conduct surveillance and policing of their citizens, remove human oversight of governance systems and reduce citizens’ opportunities to access social welfare, healthcare, employment and other services"

Uh... this reads like a dive into the deep end of paranoid conspiracy nonsense.

First off, surveillance? Wtf does generative AI have to do with that?

So the beginning of this point I'm just skipping.

The latter point that AI will be used to discriminate against people is a legitimate concern. Insurance companies have been ingesting data about people and using it to do just that for a very long time before AI. So sure, that's all bad and we should make laws against that to protect our data from misuse.

The key thing here is that AI isn't the issue. Abuse of information about ourselves is, and that was happening already before AI.

"the use of these technologies by authoritarian regimes to control their citizens and repress their human rights"

What? This is reaching. I'd need to hear some concrete example to respond to this one.

"the use of these technologies by right-wing political parties and governments for propaganda and politically-motivated disinformation"

This is just a repeat of "for disinformation" which was an earlier point, but tainted with rock throwing towards the "right-wing". That's not helpful and doesn't make it a better point.

"the use of these technologies in automated warfare ('lethal autonomous weapons systems') in ways that remove human oversight and accountability, leading to conflict escalation, the unconscionable deaths of civilians and destruction of the natural environment and the essential infrastructures required to support human life"

All bad. Hell even AI agrees it is bad if you ask it.

"That is, large language models and image generator models designed and marketed by Big Tech corporations in the interests of profit."

I'd just like to point out the use of the label "Big Tech". I hate those fuckers more than most, but I will say that you should take caution when people label them this way. These companies are pretty evil in my view, but the moment you label them as essentially all evil and caring only about profit, you ignore the human workers involved in them.

People broadly condemning "Big Tech" are, imo, anti-capitalist. I am one of them. I'm not saying it is wrong for people to hold that opinion. Just watch for this sort of view and use it to understand the perspective of the person speaking. It's a tell. Some of these people ( maybe me ) have gone off the deep end...

"All these issues are discussed in greater depth in the book I am currently writing on critical perspectives on generative AI."

Well I hope the book uses better capitalization and punctuation than this blog article. I don't think I'll be reading it regardless because this whole condemnation of AI is lacking in any information about what to do about it.

There are all sorts of shitty things happening because of AI. More people yelling "AI BAD" fixes precisely squat.