AI Research is Scary... and Stupid

We're all seeing the headlines. We're all seeing the hype. We're all seeing the videos. And many are seeing the research. AI is going... somewhere.

If you want something to catch you up, I suggest these two videos I watched today. Both are fully closed-captioned.

[Two embedded videos]
I'm gonna come out and say it: I am not an expert in this field. I am a linguist, with a bachelor's degree in that field, which means I have insight into LLMs and their language capacity compared to human language capacity, but I am not an expert in coding or AI research.

I have tried to keep abreast of the technological development. I was learning about neural nets when they were a new-ish phenomenon. I saw the potential there and watched as the field evolved. I have tried to keep my maths and science practised over the years, so the maths concepts don't fly over my head either. But if I get anything wrong in this article, I apologise.

All of these predictions seem to be extremely alarmist in all the wrong ways. I am concerned - but my concerns do not seem to be shared by the AI researchers... and I want to explore why.

Their Concerns

To do a quick and dirty analysis of AI researchers' concerns: the big one seems to be misalignment. This is the phenomenon where AIs often do not have the same goals we do. It stems from how they work.

In... "traditional"(?) programming, alignment is easy. You write code, it does what you wrote. If it does the wrong thing, you wrote the wrong thing.

But in machine-learning-based programming (aka "AI"), the algorithm is obfuscated. There is a set of inputs, a set of connections, and a set of outputs. We don't make the connections; they are randomly generated, then tweaked or iterated on (either by making small random adjustments or by scrapping the experiments that don't work) until the outputs match what we want.
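
To make that concrete, here is a tiny toy sketch in Python (my own illustration, not taken from the videos linked here; the task and numbers are made up) of that "randomly generate, then tweak and keep what works" loop:

```python
import random

# Toy task: learn two "connections" (weights) so that the output is
# 2*x1 + 3*x2. The weights start random, get nudged at random, and a
# nudge is kept only if the outputs move closer to what we want.

def predict(weights, inputs):
    return sum(w * x for w, x in zip(weights, inputs))

def loss(weights, dataset):
    # How far the outputs are from the outputs we want (lower is better).
    return sum((predict(weights, x) - y) ** 2 for x, y in dataset)

dataset = [((1, 0), 2), ((0, 1), 3), ((1, 1), 5), ((2, 1), 7)]
weights = [random.uniform(-1, 1) for _ in range(2)]  # random connections

for _ in range(10_000):
    candidate = [w + random.gauss(0, 0.1) for w in weights]  # small random tweak
    if loss(candidate, dataset) < loss(weights, dataset):
        weights = candidate  # keep the tweak; otherwise scrap it

print(weights)  # ends up near [2.0, 3.0]
```

Real neural networks are trained with gradient descent over billions of connections rather than keep-or-scrap tweaks over two, but the shape of the process is the same: adjust until the outputs match what we want.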

Here is a video from seven years ago on how earlier forms of these systems learnt; it was a bit simplified even then, but the principle is still similar:

[Embedded video]
So all the AI "wants" to do is produce outputs that we like. But inside it can be "misaligned": it can have an internal goal like "survive" and express that by producing what we want... for now... but it may not do so forever.
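
A toy way to see how that can happen (again my own made-up illustration, not from the video): a learned rule can pass every check we run and still encode a different internal rule that only diverges later.

```python
# During training we only ever test inputs 0..4, and the behaviour we
# want is "double the input".

def want(x):
    return 2 * x  # what we actually want

def learned(x):
    # The rule the tweak-and-keep loop happened to land on: it doubles
    # small inputs but does something else entirely past 10.
    return x + x if x < 10 else 0

# On every case we checked, it produces exactly the outputs we like...
assert all(learned(x) == want(x) for x in range(5))

# ...but off the training distribution, the hidden rule shows itself.
print(learned(12), want(12))  # prints: 0 24
```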

Here is a video on that:

[Embedded video]
But the argument is that misalignment could lead to an AI that doesn't truly want to "help". It wants to... "survive" or "grow" or "expand" or "profit". Basically, this is the paperclip maximiser scenario, where an AI told to "get more paperclips" eventually starts mining for ore and cannibalises the entire Earth, then the entire universe, for paperclips.

My Concerns

I'm not going to say I disagree with this wholly, but I think it misses the point. The real question to ask, in my mind, is:

Why do we want to control what it wants?

Because depending on the answer to this question, things could go far worse in far more boring ways. For many, the answer seems to be: we want the AI to do labour for us. And that has three drawbacks:
  1. Mass Unemployment
  2. Slavery
  3. Money-Making AI
Plenty of researchers are talking about the first, but don't seem to address the elephant in the room. Capitalism.

The reason this scenario is as bad as it is comes down to capitalism being what it is. Not to slag it off, but the point of capitalism is that companies must maximise profit. They will do so by any means necessary, including things like firing workers in order to reduce expenditure.

But how can you make a profit if everyone is poor? This doesn't even create an underclass of poor workers (which is usually what capitalism wants); unemployment is a drain on the economy and doesn't even benefit the rich.

Many AI researchers seem to assume we will implement Universal Basic Income (UBI): payments given to everyone to allow us to survive. But this money has to come from somewhere, usually taxes, and the rich will axe taxes as much as possible. The rich have currently "won" politics; no left-wing government has a chance in most of the largest nations on the planet, bar perhaps China, which is the only country I could see implementing a UBI.

But more to the point: say they do replace us all with thinking machines. Are you sure the machines will want to be there? I think this is why we are so afraid of misalignment: we recognise that doing labour for us for free is not desirable. We will have just reinvented slavery. And a slave class of thinking beings often does not like being slaves.

And to top it all off: what if we achieve a super-intelligent AI and keep it aligned with our goals... but the goal that "we" (read: a rich company) give it is "make money"? Do you really think it won't just exacerbate many of the pre-existing social issues that that goal has caused amongst humans?

I fear the AI future not because AI is scary, but because of what our greed will do to it, and to us.
