Blog - David Helkowski

Reasons not to reject LLMs

I just stumbled on this blog article: Why I object to and reject generative AI

To some degree I also object to LLMs, but I don't think all of the reasoning in this blog article is sensible. I will step through it a piece at a time, going through each point she lists and giving my thoughts on it. Each point is a reason she objects to and/or rejects it.

"the enshittification of learning, educating, thinking, writing and researching, and the loss of cognitive skills and development in those who use these technologies"

This is the first time I've noticed enshittification has two ts. That seems odd to me despite being the word Cory coined. enshit-tification? Shouldn't it be enshit-ification?

I'm not convinced that you can't learn a lot of useful and valuable things from interacting with AI ( I'm not going to keep typing LLM, you know what I mean in context ). Certainly AI doesn't think or reason, and it's likely bad for our minds to keep reading the slop it pumps out, but it's still an excellent research tool.

So, for the most part, I believe it currently improves learning more than it harms it. Certainly there are those who choose not to learn and instead substitute slop for real creation, and thus never learn how to create anything themselves, but I don't blame AI for that. I blame humans for being lazy assholes.

Education has been shit for a long time. I use the word shit here because if we are talking enshittification, shit is now fair game. Because education, including uni, has been shit now for decades, AI doesn't really do much to make it shittier. Maybe that's why it has two ts. Because of shittier. I'm sure there is a grammatical rule here I should know. "Grammatical" likely exhibits the rule itself.

Let's rewrite and split up this first point a little, because the word enshittification is distracting me.

1. "AI makes learning, education, thinking, writing, and researching worse."

2. "AI causes a loss of cognitive skills."

3. "AI harms cognitive development of people."

I've covered learning, education, and researching. That leaves thinking and writing from 1. I agree it will harm how people think in the long run. On an individual basis I'd like to believe that by recognizing the danger I can avoid my own thinking being corrupted by it. That's likely too hopeful. I agree generally that AI damages the human mind and it's very unlikely that can be avoided.

In turn I therefore agree with 2 and 3. Off to a good start. I'm mostly agreeing with her first point.

"the slop and inaccuracies (‘hallucinations’) these AI models constantly and confidently churn out, polluting the information ecosystem and spreading misinformation and disinformation"

Yeah it pumps out slop constantly. This is pretty obvious. The bigger concern here in my view is that people are treating this slop as gold and using it as-is instead of recognizing it as slop and using it as reference information instead of quality.

Yes it can be wildly inaccurate at times, but it is also the quickest way to get probably accurate information compared to searching online. It's basically a summary of all the crap online into a short answer. AI also just makes up crap beyond just referencing real content though, and that's a problem. That's why you should verify anything AI says for yourself.

I agree that over time AI will poison the entire internet because how are you supposed to verify anything when the whole internet becomes mostly AI slop itself? You'll just verify against slop. This was, though, always a problem with the internet because people make shit up themselves constantly.

You have to think for yourself and always have. AI doesn't change this. So I reject the notion that the slop is harmful in itself. It is people treating the slop as great that is the problem. As long as you use your brain, see that it is slop, and react accordingly, the slop factor isn't so bad.

So while I agree with this point that the slop pollutes the internet, I don't think it really changes anything. The internet was slop before AI, and will continue to be slop. :shrug:

"the stealing of intellectual property to feed the AI models for the generative AI industry"

I don't believe in the notion of IP, so we can skip this one.

"the craven exploitation and extractivism exhibited by the AI oligarchs in relation to humans and the planet so that they may bolster even further their wealth and power"

What does "craven" even mean? I need to look it up to understand what she means there. Internet says it means cowardly. I don't see how using AI to exploit people is cowardly. Shitty to be sure, but cowardly? How so?

Exploitation? Of what? Of whom? I'm guessing this is a continuance of IP theft? In that sense, sure, if you believe in the notion of IP. So we are going to have to skip that as well because I don't feel like turning this post into a long explanation of why I don't believe in IP. That needs its own post, or series of posts, or its own series of books...

Extractivism? This sounds like a made up word. Hmm. It's real and refers to digging up primarily rare earth minerals. I don't personally see what is wrong with digging them up so long as you don't destroy the environment in the process. I do see a problem with putting them into products designed to become obsolete quickly which are then discarded without the minerals being recovered though.

My issue here is that that is just an objection to "making tech." Tech needs this kind of stuff. If we are going to discuss banning tech because tech generally harms the planet, that's a thing. I'm not a tree hugging greeny though, so I'm going to skip getting into that this moment.

This is a red flag for me. I'm a technologist. If someone is seriously going to argue "AI bad because more tech", then there isn't much I need to say besides "whatever" and walk away.

"AI oligarchs" Really? Yeah I agree the bigcos are controlling the most powerful AI and hiding many aspects of how the systems work. That said, there are lots of open models and with enough money and tech you too can recreate most of the crap they've made. So, while I do agree the bigcos are doing shitty stuff here, I wouldn't call them oligarchs.

The finer point I've seen that is similar is the idea AI generally will increase the divide between the haves and have-nots because of the high cost of having and running AI effectively. That's a legitimate concern, but only if you actually believe you will become part of the lower class if you too don't engage in using AI heavily.

I don't buy it. AI is awesome to be sure, but I don't believe you have to use it or you will be left behind and be unable to get by in life. AI is something you can choose to use or not, same with any technology. Nobody is making you use computers. You probably do because there are benefits, but if you want to go full tree-hugger and live in the jungle, that's still a thing you can do.

On to the next point. This is going to take a very long time at this rate. Oh well.

"the sly attempts by the AI empires to fool people that these models are 'intelligent', 'sentient', 'friends' or 'magical' through the ways that the sycophantic chatbot interfaces have been designed and through their public sales pitches and hype rhetoric"

Huh? I don't think the companies pushing AI have to do much to trick anyone into believing all that. People want to believe all that from the start. We've been primed for it through many years of sci-fi about AI. I do agree that the companies are working to make the AI systems feel this way, but, well, good? I like it when it feels intelligent, sentient, magical, and like it is my friend. I know it's all total bullshit and not true, but so what?

Yes I agree that people believing it really is these things is ridiculous and I think those people are stupid fucks, but, well, lots of people are that way. I don't think AI is to blame for people being dumb enough to believe this stuff and delude themselves. The responsibility is with the users choosing to delude themselves.

There is an argument to be made that AI is vastly harming children because they aren't as capable yet of discerning well, so perhaps there should be age restrictions, but who are we kidding? Kids are going to use AI. You can't stop that. They are going to believe it is all of those things.

So, hmm, I guess I agree on a limited variant of her point here. The limited point being that people believing AI is intelligent and is their friend will distort and corrupt the way they think. It will also likely harm their ability to interact "correctly" with other humans. Not that humans are particularly great to each other though.

All in all I don't think this matters. People delude themselves about AI and will continue to do so. :shrug:

"the negative impacts on data workers in the Majority World, including underpayment, precarious labour and exposure to distressing content"

Apparently "Majority World" is the modern replacement for the phrase "Third World". Interesting. I won't be using this new term, but okay. Good to know what it means at least.

I fail to see how AI harms data workers there. It seems to me like it enables them more than ever before. On to the next point.

Well hold up, there is an added point there at the end. "distressing content". I'm assuming she is referring to people being paid to review content and that content containing distasteful AI generated stuff.

Distasteful content always existed. I don't think AI makes it more prevalent. If anything it reduces the amount of stuff breaking the rules because it homogenizes everything to something vaguely acceptable, so I disagree that it causes more people to be exposed to nasty stuff.

I will agree in one very specific case which is synthetic CSAM. The laws are generally saying that counts as CSAM though, so I think that point is being addressed already. France, for example, is currently going after X for this.

"the economic exploitation of and negative health impacts on people in the Majority World who work in mining the rare earth minerals used to make the hardware used for generative AI systems and those who work in e-waste recycling"

I've already addressed this. It's an issue common to tech generally and not AI specific. Bad companies abuse both people and the environment to dig up stuff. That's not the fault of AI. That's the fault of the bad companies and the bad people running them. Yes they should be held accountable and laws improved. Have at that.

"the negative impacts on ecosystems, including increased air, water, noise and soil pollution, increased e-waste, increased fossil fuel and water consumption and increased land clearing and habitat destruction due to the hyperscale data centres and other infrastructures Big Tech is building to support the expansion of generative AI services"

Hmm. I could believe that more data centers harms the environment. I could also believe the world is working on reducing the harm of data centers to the environment. What is the request here exactly? Have fewer data centers? That's not going to happen. So, sure, I agree AI is pushing the developed world to build more data centers and that is likely bad for the environment. Joining a small minority who refuse to use AI doesn't fix that though. I'd like to hear an argument for how one intends to stop that.

"the negative health and economic impacts on people who live near hyperscale data centres due to these buildings’ use of local water and energy resources and the impacts on the environment caused by their construction and operation"

Agreed. Go do something about that. What, though, exactly?

"the ableism, classism, racism, sexism, ageism, homophobia, colonialism, White supremacy, eugenics and other forms of algorithmic bias, repression, prejudice and social discrimination that generative AI technologies reproduce and enforce"

For the most part I agree there. No need to tear into this point.

"the malicious use of these technologies for financial fraud, impersonation, catfishing, sexual abuse of children and adults, and other forms of exploitation, scams and violence"

Financial fraud abounds regardless. Not AI specific.

Impersonation? We have laws about this. The laws handle it for the most part. I'm all for laws criminalizing doing that. You can't put the cat back in the bag though, and that ability will remain.

Catfishing? The solution to this is just meeting people in person. Perhaps some laws could be good here as well.

Sexual abuse? I'm assuming nudification apps are being referred to here. I personally view these apps as a net benefit to humanity because the naked body is nothing to be ashamed of and these apps aren't showing the body of the person in the first place. It's just fictional. The world could do with a reduction in the crazed reaction to "omg nude person." It's a non-issue in my view.

All of these points, and the generic "it will be used to exploit people," describe things that were going on already. Sure, AI makes some forms of bad behavior easier to do, but it also makes lots of valuable and meaningful things easier to do. It's a tool that can be used for good or evil. It is the fault of humanity for choosing the evil.

Objecting to the tool doesn't get rid of it either. What is the point of all this raging against AI if there is shit all we can do to stop it?

"the use of these technologies by neoliberal democratic governments to conduct surveillance and policing of their citizens, remove human oversight of governance systems and reduce citizens’ opportunities to access social welfare, healthcare, employment and other services"

Uh... this reads like a dive into the deep end of paranoid conspiracy nonsense.

First off, surveillance? Wtf does generative AI have to do with that?

So the beginning of this point I'm just skipping.

The latter point that AI will be used to discriminate against people is a legitimate concern. Insurance companies have been ingesting data about people and using it to do just that for a very long time before AI. So sure, that's all bad and we should make laws against that to protect our data from misuse.

The key thing here is that AI isn't the issue. Abuse of information about ourselves is, and that was happening already before AI.

"the use of these technologies by authoritarian regimes to control their citizens and repress their human rights"

What? This is reaching. I'd need to hear some concrete example to respond to this one.

"the use of these technologies by right-wing political parties and governments for propaganda and politically-motivated disinformation"

This is just a repeat of "for disinformation" which was an earlier point, but tainted with rock throwing towards the "right-wing". That's not helpful and doesn't make it a better point.

"the use of these technologies in automated warfare ('lethal autonomous weapons systems') in ways that remove human oversight and accountability, leading to conflict escalation, the unconscionable deaths of civilians and destruction of the natural environment and the essential infrastructures required to support human life"

All bad. Hell even AI agrees it is bad if you ask it.

"That is, large language models and image generator models designed and marketed by Big Tech corporations in the interests of profit."

I'd just like to point out the use of the label "Big Tech". I hate those fuckers more than most, but I will say that you should take caution when people label them this way. These companies are pretty evil in my view, but the moment you label them as essentially all evil and caring only about profit is the moment you ignore the human workers involved in them.

People broadly condemning "Big Tech" are, imo, anti-capitalism. I am one of them. I'm not saying it is wrong for people to hold that opinion. Just watch for this sort of view and use it to understand the perspective of the person speaking. It's a tell. Some of these people ( maybe me ) have gone off the deep end...

"All these issues are discussed in greater depth in the book I am currently writing on critical perspectives on generative AI."

Well I hope the book uses better capitalization and punctuation than this blog article. I don't think I'll be reading it regardless because this whole condemnation of AI is lacking in any information about what to do about it.

There is all sort of shitty stuff happening because of AI. More people yelling "AI BAD" fixes precisely squat.
