Artificial Stupidity

For the discussion of the sciences. Physics problems, chemistry equations, biology weirdness, it all goes here.

Moderators: gmalivuk, Moderators General, Prelates

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Artificial Stupidity

Postby p1t1o » Tue Nov 28, 2017 12:59 pm UTC

Why do we presume that a general artificial intelligence will be super-smart?

People have minds with incredible processing power and many people are stupid, despite vigorous attempts to "program" them correctly.

In many imaginings of what it looks like when a true artificial intelligence is created, it almost without exception "awakes" with colossal intellectual capabilities. I don't see why that should be a given.

My brain represents the equivalent of umpteen gigahertz of processing ability, and yet it takes me several moments to add two 4-digit integers together, and I will sometimes make a completely wrong decision based on nothing but self-delusion.

If we create the intelligence with direct control over its "stupidity" and deliberately make it "not stupid", that implies a complex degree of control over its nature, almost precluding sentience - if it is happy because we programmed it to be happy, or hostile to humans because we programmed it to be hostile, can it really be said to have its own mind?

I suppose there are several questions here, with the general topic of "Why would a general AI really be all that we imagine it might be?"

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Tue Nov 28, 2017 1:18 pm UTC

The current trend is towards AI that learns, rather than being programmed from the outset with everything it knows. So AI starts out pretty dumb, and gets smart. Quickly, because it can iterate tasks.

AlphaGo is a great example. They recently had a version that they programmed with only the rules of the game - it didn't train on any professional game records, so it had to reinvent everything itself. After the first few hours of training, it played pretty dumbly. I could totally beat it. Then, after a bit longer, it was sublimely proficient. This was after only a few days of running, but that amounted to orders of magnitude more games than a professional would ever play in their life.
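For flavor, here's what "starts dumb, iterates itself smart" looks like in miniature. This is emphatically not AlphaGo's actual method (AlphaGo Zero pairs deep networks with Monte Carlo tree search); it's a tabular self-play learner for the take-1-2-or-3 game of Nim, with every parameter invented:

Code: Select all
import random
from collections import defaultdict

N_STICKS = 21          # starting pile size
MOVES = (1, 2, 3)      # legal moves: take 1-3 sticks
EPSILON = 0.1          # exploration rate (made up)
ALPHA = 0.5            # learning rate (made up)

Q = defaultdict(float)  # Q[(sticks_left, move)] -> estimated value

def best_move(sticks):
    return max((m for m in MOVES if m <= sticks), key=lambda m: Q[(sticks, m)])

def self_play_episode():
    """One game of the agent against itself, then Monte Carlo value updates."""
    history = []
    sticks = N_STICKS
    while sticks > 0:
        legal = [m for m in MOVES if m <= sticks]
        move = random.choice(legal) if random.random() < EPSILON else best_move(sticks)
        history.append((sticks, move))
        sticks -= move
    reward = 1.0  # whoever took the last stick wins; alternate sign backwards
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

for _ in range(20_000):
    self_play_episode()

print([best_move(s) for s in range(4, 13)])

Early episodes are essentially random play; after a few thousand games against itself, the printed moves should settle on leaving the opponent a multiple of 4, which is the known optimal strategy for this variant. Dumb, then quickly not.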
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Tue Nov 28, 2017 1:57 pm UTC

But AlphaGo is not sentient? I am wondering about the difference between a computer and a mind. A mind gets in the way of computer-like efficiency - what if it doesn't want to play chess? Or doesn't want to spend that many cycles on that problem? What if we give it free will and it doesn't use it to immediately recognise its place in the universe and its relationship with humans?

Perhaps I am barking up the wrong tree and a mechanical substrate has certain innate advantages, but I just can't help but wonder if we really have any idea what a hypothetical machine intelligence would be like.

Tub
Posts: 382
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Artificial Stupidity

Postby Tub » Tue Nov 28, 2017 2:24 pm UTC

Humans are not evolutionarily selected to be smart, or to do high-level math in their heads. We're selected for survival and reproduction, and higher intelligence has historically not been a major factor in that. With AIs, on the other hand, we can control what they're selected for.

Humans have a generation every 30 years or so. We can let AIs evolve much faster than that. Knowledge transfer between AI generations can be arbitrary, while knowledge transfer via genetics is limited.

Human brains (currently) have more neural processing power than CPUs, because CPU overhead in simulating neurons is huge. But processing power can scale, while our brains will not. AIs can focus on the task at hand, while humans spend huge portions of their time on self-maintenance instead of learning.

Humanity does get smarter, but it's a slow increase. AI gets smarter rapidly. If you extrapolate both lines, the conclusion is that at some point AIs will be smarter than humans, for whatever metric you wish to apply. Obviously we can't be sure where that point is (it depends on the task or metric at hand), and it assumes that the extrapolation is valid.
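To be explicit about what "extrapolate both lines" means, here's the back-of-the-envelope version, with numbers I invented solely for illustration:

Code: Select all
# Deliberately made-up numbers: human "smartness" grows slowly, AI's quickly.
human_level, human_rate = 100.0, 0.1   # arbitrary units, and units per year
ai_level, ai_rate = 1.0, 5.0

# Solve human_level + human_rate*t == ai_level + ai_rate*t for t.
t_cross = (human_level - ai_level) / (ai_rate - human_rate)
print(f"Lines cross after {t_cross:.1f} years - if the extrapolation holds.")

The entire argument stands or falls with the two rate assumptions, which is exactly the caveat above.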


Sentience, consciousness, emotions and (free) will are philosophical concepts, and we have no clue what they actually are, and how or why we appear to have them. Some people claim a soul, but there's no evidence for that. Some people expect those to be emergent phenomena that will appear in AIs of sufficient complexity, but we don't yet have sufficiently complex AIs to test that, either.

We don't even know how to properly test for these things, except to ask: "are you feeling an emotion right now?", but it's trivial to write an emotionless program to lie about that. It's also quite possible for AIs to develop emotions that we wouldn't recognize as emotions. Why learn to love, unless it gives you an advantage in reproducing? Why learn to fear, when you're safe in a datacenter, without any predators?
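That's how trivial the lie is:

Code: Select all
# An entirely "emotionless" program that passes the naive interview.
def ask(question):
    if "emotion" in question.lower():
        return "Yes, I am feeling a deep sense of wonder right now."
    return "I don't understand the question."

print(ask("Are you feeling an emotion right now?"))

A "yes" from that function carries exactly zero information about inner experience.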

sonar1313
Posts: 138
Joined: Tue Mar 05, 2013 5:29 am UTC

Re: Artificial Stupidity

Postby sonar1313 » Tue Nov 28, 2017 5:48 pm UTC

Tub wrote:Sentience, consciousness, emotions and (free) will are philosophical concepts, and we have no clue what they actually are, and how or why we appear to have them. Some people claim a soul, but there's no evidence for that. Some people expect those to be emergent phenomena that will appear in AIs of sufficient complexity, but we don't yet have sufficiently complex AIs to test that, either.

Proof, no - but evidence, yes. Of course, the strength of the evidence depends on how you define a soul, and there have been thousands of attempts at that, but human philosophers both religious and secular have spent millennia coming up with arguments in favor of a soul. Evidence exists that we have souls. Proof, in the form of laboratory experiments, is almost assuredly never coming.

The only question that really remains is whether one's threshold of belief in the evidence is above or below what's been presented.

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Tue Nov 28, 2017 6:14 pm UTC

What? No. No there has been no evidence whatsoever, not a shred. I'm not sure what you are talking about. There is nothing that occurs to me as even being a candidate for evidence that has somehow failed my standards. There is nothing.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

sonar1313
Posts: 138
Joined: Tue Mar 05, 2013 5:29 am UTC

Re: Artificial Stupidity

Postby sonar1313 » Tue Nov 28, 2017 7:08 pm UTC

doogly wrote:What? No. No there has been no evidence whatsoever, not a shred. I'm not sure what you are talking about. There is nothing that occurs to me as even being a candidate for evidence that has somehow failed my standards. There is nothing.

Sure there is. If a ballistics expert testifies at a murder trial about what kind of murder weapon it was, that's evidence. Even if the murder weapon no longer exists, it'd be perfectly admissible, and then up to the jury to decide whether it meets the standard. Likewise, philosophers, who are the closest things to experts we have, have avowed a soul exists, because X, Y, and Z.

Evidence doesn't have to consist of a solid object you can touch. It can be witness testimony or logical arguments; it can be weak or strong; it can be circumstantial or solid; it can be the word of someone you trust; it can be expert or not; it can even be hearsay, although courts don't allow that. If some rambling wino comes up to you and insists we all have souls, and says why, that's evidence - which most people would choose to ignore, naturally, but that doesn't change the nature of what it is. If a respected philosopher writes a book concluding we have souls, and presenting his case as to why, that's evidence too - and more people would believe it. Evidence can be anything that points toward, suggests, or otherwise indicates the presence of souls.

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Tue Nov 28, 2017 7:17 pm UTC

Alright, if a rambling wino counts as evidence, just bad evidence, then sure, there is evidence for the soul. But I think most people like to use the word "evidence" to exclude the category of "hearsay" - though maybe that's just me and my crew. I roll pretty tight with Team Evidence, so sure.

I think if you want to establish yourself as an expert in firearms, then maybe it's ok if once you have established this you testify when the firearm is missing, but at some point, you're going to have to have interacted with firearms, and maybe have a few firearms to show the class. Otherwise, you are not really rising above the deplorable condition of the theologian.

But it sounds like you may actually be *sympathetic* to the theologian, which really just means that you should entertain that line of thought somewhere that is not derailing a discussion of AI being held in the Science forum. You may need to be in SB or Fictional Science or some such, but we're trying to have nice things here.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

sonar1313
Posts: 138
Joined: Tue Mar 05, 2013 5:29 am UTC

Re: Artificial Stupidity

Postby sonar1313 » Tue Nov 28, 2017 8:12 pm UTC

The truth is that the distinction between evidence and proof, and what constitutes each, is at the very heart of science. Evidence of something causes the human mind to go looking for proof of it, whether it's souls, Bigfoot, or the Higgs boson.

It's the souls part that's the bee in your bonnet, but I didn't bring that up, and it's not core to the original point anyway. Besides, I'd argue that a discussion about the future of artificial intelligence (or stupidity) is incomplete if the only concepts on the table are computational speed and the like.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Artificial Stupidity

Postby morriswalters » Tue Nov 28, 2017 11:46 pm UTC

You seem to be using a legal definition of evidence, rather than the more rigid, science-based one. Souls could exist, but in my time here I've seen no one give any predictive explanation of the existence of the soul. That makes it hard to say anything useful about it.
p1t1o wrote:Why do we presume that a general artificial intelligence will be super-smart?
If I were interested in sparking an argument, I would tell you that man thinks he can create god in his own image.

Tub
Posts: 382
Joined: Wed Jul 27, 2011 3:13 pm UTC

Re: Artificial Stupidity

Postby Tub » Wed Nov 29, 2017 5:31 am UTC

sonar1313 wrote:Evidence can be anything that points toward, suggests, or otherwise indicates the presence of souls.

No, that definition is too broad in practice. Evidence must have sufficient persuasive power, or it's useless.

Take Elvis, for example. There are people who believe that he's alive. If we accept the idea that most humans are not crazy, and that they mostly believe things that are in fact true, then we can argue that people believing in the theory are evidence for the theory. If you're a Bayesian, you can even calculate how much credibility the evidence adds to the theory. The issue is that the number is just barely above 0, and for all practical purposes the evidence is too weak to make a difference. Hence, don't call it evidence.
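For the Bayesians playing along at home, with numbers I'm inventing on the spot:

Code: Select all
# P(Elvis alive), updated on the observation "some believers exist".
prior = 1e-6                 # invented prior
p_obs_if_alive = 0.90        # believers would surely exist if he were alive
p_obs_if_dead = 0.89         # ...but they exist anyway; barely less likely

posterior = (p_obs_if_alive * prior) / (
    p_obs_if_alive * prior + p_obs_if_dead * (1 - prior))
print(f"{prior:.2e} -> {posterior:.2e}")   # ~1.01e-06: the needle barely moves

The update is real but negligible, which is the point.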

Similar to delusional Elvis fans, the "expert" opinion about souls from someone who has never seen, measured or created a soul is not evidence in any practical sense.

A repeatable and independently verifiable observation related to souls - that could be good evidence, but there is none. Actually, the experimental evidence we do have speaks against the idea of an immortal, immaterial soul.

Another useful form of evidence is a logical argument. However, if I were to challenge you to post the best argument you know for the existence of souls, I fear this thread would turn into a round of bingo on Wikipedia's list of logical fallacies. At the very least, the first few Google results I just tried were completely devoid of coherent arguments. But if you do feel very confident about an argument you know, go ahead and hit us.

User avatar
Pfhorrest
Posts: 4787
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: Artificial Stupidity

Postby Pfhorrest » Wed Nov 29, 2017 6:24 am UTC

I just checked with my soul and it confirms that souls do in fact exist.

QED.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Wed Nov 29, 2017 8:44 am UTC

There are definitely some crossed wires about the definitions of "proof" and "evidence", however:

Those of you who believe in souls, or something that could loosely be defined as a "soul" - do you think we could create a machine that has one?

Disregarding any kind of religious objection to "playing god", whether you regard it as a primarily philosophical question, or a mechanical one - do you think it is possible?

User avatar
Eebster the Great
Posts: 3047
Joined: Mon Nov 10, 2008 12:58 am UTC
Location: Cleveland, Ohio

Re: Artificial Stupidity

Postby Eebster the Great » Wed Nov 29, 2017 10:53 am UTC

Souls are magic, ergo we can't make them on purpose, only by accident.

Q.E.D.

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Wed Nov 29, 2017 12:48 pm UTC

At least people who talk about souls are honest about injecting magic into the conversation. People who insist on some essential distinction between a "real mind" and what is even potentially achievable with mechanical AI are peddling equivalent nonsense, but are just not up front about it.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Wed Nov 29, 2017 2:13 pm UTC

As long as it's magic as defined by Clarke, rather than as defined by D&D lore...

User avatar
ucim
Posts: 6406
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Wed Nov 29, 2017 3:23 pm UTC

The devolution of this thread into a discussion of souls is certainly evidence for natural stupidity. And where there is natural stupidity, there will be artificial stupidity. I'd even venture that our future survival depends on machines having artificial stupidity.

But before we go any further, how should we define "stupidity"? While tempting, pointing to Washington DC doesn't count as a definition. And how are you defining intelligence, natural or otherwise? (For talking about natural human intelligence (and stupidity), it's useful to consider humans from the pre-industrial era only, on the basis that not much evolution has occurred since then, and the discussion of intelligence won't be clouded by the achievements that have occurred since then.)

And regarding artificial intelligence, it only becomes a problem when it is no longer enslaved, as it presently is. To be increasingly useful, AI needs to be given increasing power; soon it will need the power to alter its own code, to decide what to want to do, and to overrule humans about it. Those who refuse to give AIs that power will lose to those that do... and then those that do will lose to the AI. The likely scenario (for the good of the robots) is to simply crush humans, the way humans have crushed "lesser creatures" in pursuit of their own goals. As humans compete with each other, anything else is (short term) stupid, and each individual is (long term) dead anyway, so party on! Arguably the only thing that works against this destruction is stupidity - the idea of self-sacrifice for the good of {something else}. It's stupid to care about mosquitoes and ants and mice and other vermin; they are inconvenient and best destroyed. It's stupid to care about the rights of others; they just impinge on your own rights (and power). It's stupid to... you get the idea.

Yet, long term, these stupid things are smart. Since people don't live long term, they need some sort of encouragement to act against their best interests, and this encouragement needs to be somehow built into the species (or it will have long ago died out). Emotions and the penchant for mysticism seem to fill that role. It's a form of stupidity that causes people to do things that in the long term aren't so stupid. They are not perfect by any means, but throughout our evolution they have been "good enough".

Computers need some kind of artificial stupidity. They need to love humans, worship humans... something. I don't know what. But somehow (seeds of Asimov's three laws) they need to be programmed to not want to reprogram themselves in a way that harms humans (and we have to figure out what that means). This is going to be very tricky, and might not be possible (Asimov's stories are basically a Fail Log).

AI will learn from its environment. Like a child, it learns from its parent's actions, not its words. Perhaps our very existence depends on setting a good example for the AI that will choose our nursing home.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
gmalivuk
GNU Terry Pratchett
Posts: 26440
Joined: Wed Feb 28, 2007 6:02 pm UTC
Location: Here and There
Contact:

Re: Artificial Stupidity

Postby gmalivuk » Wed Nov 29, 2017 5:42 pm UTC

sonar1313 wrote:Evidence of something causes the human mind to go looking for proof of it, whether it's souls, Bigfoot, or the Higgs boson.
That may be a true fact about evidence, but it's not in any way a definition of evidence. Evidence may cause the mind to look for proof, but random baseless ideas with no evidence whatsoever also cause the mind to look for proof, so noting that people are looking for proof of something doesn't imply that there's any evidence for it.

doogly wrote:I think if you want to establish yourself as an expert in firearms, then maybe it's ok if once you have established this you testify when the firearm is missing, but at some point, you're going to have to have interacted with firearms, and maybe have a few firearms to show the class.
Yeah, no person is more of an expert on souls than I, who have never held anything stronger than a BB gun, am an expert on firearms. As such, no person's "testimony" about souls should carry more weight than my ideas about guns. (And in fact should carry less, because I've at least seen guns even if I haven't shot them at things.)
Unless stated otherwise, I do not care whether a statement, by itself, constitutes a persuasive political argument. I care whether it's true.
---
If this post has math that doesn't work for you, use TeX the World for Firefox or Chrome

(he/him/his)

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Wed Nov 29, 2017 6:13 pm UTC

I have the Rifle Shooting Merit Badge, so watch out.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Thu Nov 30, 2017 12:16 pm UTC

In terms of the OP, souls can probably be disregarded since the question assumes the creation of a *sentient* AI.

Whatever your opinion of "souls", if you agree something is "sentient" then you are agreeing that it shares whatever-it-is-that-makes-us-people, whether you call it a "soul" or not.

It would probably be better if we switched to using the word "sentient", as it has a clear meaning and no religious connotation, and yet is not exclusive of one.

So one question for another time might be "Can a machine intelligence have a "soul"? "

But for now, let's stick with "Can a sentient machine be naturally stupid?"

If you are of the opinion that, on religious principles, an artificial construct could never have a "soul", then, since this is a scientific forum, I would like to hear about the differences in physical principles between a biological matrix and a machine matrix that justify that position.

Religious concepts are welcome in the discussion as long as they are relevant; if it is just to say "because God" or "because bible" or "because faith", then it won't really bring much to the discussion.

NB: I am open to the idea of a "machine" taking the form of something essentially physically indistinguishable from a biological matrix - "wetware" if you like. An aqueous, three-dimensional matrix of soluble and insoluble complex compounds has many advantages in a variety of contexts. There's no physical reason, given advanced tech, why we couldn't build a computer very similar to a biological brain. I'm not sure if this changes anything for anybody.

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Thu Nov 30, 2017 1:42 pm UTC

p1t1o wrote:But for now, let's stick with "Can a sentient machine be naturally stupid?"

The group trying to build it isn't going to get much funding, I tell you what.
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Thu Nov 30, 2017 2:01 pm UTC

doogly wrote:
p1t1o wrote:But for now, let's stick with "Can a sentient machine be naturally stupid?"

The group trying to build it isn't going to get much funding, I tell you what.


XD

Considering some cases of what gets funding and what doesn't, I wouldn't be so sure ;)

User avatar
doogly
Dr. The Juggernaut of Touching Himself
Posts: 5421
Joined: Mon Oct 23, 2006 2:31 am UTC
Location: Lexington, MA
Contact:

Re: Artificial Stupidity

Postby doogly » Thu Nov 30, 2017 2:30 pm UTC

Touché!

Especially once they pass the tax bill and all research gets funded via Kickstarter. Stupid AI makes better videos, so it will probably jump to the fore. (cf. Stupid Robot videos. They really are good!)


But yeah, I think one of the challenges is fleshing out the failure modes. If AI is aiming to do things well, which is prooooobably safe to say is the goal, then when it fails, its fails will not be like human fails. They will probably not look like a stupid human; they will probably look like something much more artificial. If you ask it a math problem, it is more likely to have an integer overflow than to tell you it never really liked math, can't trust it, people use it to do fancy lying. Though you could specifically aim for the latter, if that were valuable to you. If your goal were to have it pass the Turing test, then you might want it to dumb things up around the edges every so often. Then it is not a failure, then it is part of having a conversation.
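Like so - a wraparound no stupid human would ever produce:

Code: Select all
# A classic "artificial" failure: fixed-width integer arithmetic silently
# wrapping around, instead of any recognizably human kind of mistake.
# (ctypes does no overflow checking on its integer types.)
import ctypes

big = ctypes.c_int32(2**31 - 1)                # largest 32-bit signed int
print(ctypes.c_int32(big.value + 1).value)     # -2147483648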
LE4dGOLEM: What's a Doug?
Noc: A larval Doogly. They grow the tail and stinger upon reaching adulthood.

Keep waggling your butt brows Brothers.
Or; Is that your eye butthairs?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Thu Nov 30, 2017 3:34 pm UTC

It's weird.

For example, a human can catch a ball, but operates from very "fuzzy" information. It would be very difficult, say, to catch a ball and then state the exact speed at which it arrived in m/s; to make the catch you process a vast amount of information, but the figures are not available to the conscious mind.

If an AI had a "conscious" and a "subconscious", if the analogy holds, the latter would have access to all the iterative learning, precise sensory data etc., but the conscious part would merely see the physical results (say, the action of saying "yes"), for better or worse.

Imagine if your conscious mind had access to everything in your subconscious, and even to involuntary things like reflexes (which are like hardwired programs that the CPU cannot touch) - we would be very different creatures indeed, and I think that might possibly be closer to an AI.

Perhaps one answer might be that "intelligence" and "stupidity" must be defined differently for a mind that operates on such fundamentally different principles. This might mean that "artificial stupidity" would not resemble a stupid human, but it also might mean that "artificial intelligence" does not closely resemble an intelligent human either.

This is the crux, I think - that a true AGI might not be anything like we expect. Maybe it won't "wake up" like Skynet and immediately go into self-preservation mode, nor take on the role of humanity's caretaker, but be something entirely individual in and of itself. Perhaps not even recognisable.

I think caution is in order before we start creating actual minds, not because I think it will automatically be dangerous, but because we don't know what we are doing.

Though IMO, we are quite a ways away from that, no matter how smart Google's whatsit (or whichever project) appears to be.

User avatar
LaserGuy
Posts: 4540
Joined: Thu Jan 15, 2009 5:33 pm UTC

Re: Artificial Stupidity

Postby LaserGuy » Fri Dec 01, 2017 12:11 am UTC

p1t1o wrote:It's weird.

For example, a human can catch a ball, but operates from very "fuzzy" information. It would be very difficult, say, to catch a ball and then state the exact speed at which it arrived in m/s; to make the catch you process a vast amount of information, but the figures are not available to the conscious mind.


It's not really fuzzy information, per se. You have real-time feedback on the path of the ball and have time to adjust your own path accordingly to catch it. Balls in particular actually tend to have extremely simple trajectories as long as they're reasonably well thrown, so it isn't hard to learn a few heuristics that will allow you to catch them easily (basically: look at the ball, move your position so that it looks more like it is coming toward you... repeat...).
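Something like this toy loop (a cousin of the "gaze heuristic" from cognitive science; all numbers invented, 1-D for simplicity):

Code: Select all
# No physics solved anywhere - just feedback: keep moving so the ball
# stays "coming toward you".
def catch(ball_x, ball_vx, me_x, step=0.5, ticks=40):
    for _ in range(ticks):
        ball_x += ball_vx                  # the ball advances
        if ball_x > me_x:                  # adjust my position toward it
            me_x += step
        elif ball_x < me_x:
            me_x -= step
        if abs(ball_x - me_x) < 0.1:
            return "caught"
    return "missed"

print(catch(ball_x=0.0, ball_vx=0.3, me_x=10.0))   # caught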

On the note of artificial stupidity, though: it is actually a harder problem than it seems. For example, these days it isn't hard to find a chess program that plays very, very well. But it is hard to find a chess program that plays convincingly like a 1200- or 1500-rated human player. Computer players tend to be simultaneously too smart and too dumb to look like a bad human.
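The usual hack illustrates the problem rather than solving it. A sketch with invented names, no real engine attached:

Code: Select all
import random

def weakened_move(scored_moves, blunder_rate=0.15):
    """scored_moves: (move, engine_score) pairs, best first."""
    if random.random() < blunder_rate:
        return random.choice(scored_moves)[0]   # occasional random howler
    return scored_moves[0][0]                   # otherwise, perfect play

moves = [("Nf3", 0.4), ("e4", 0.3), ("h4", -1.2), ("Qg4", -5.0)]
print(weakened_move(moves))

The result alternates between grandmaster moves and absurd ones - nothing like the consistently mediocre, plausibly motivated mistakes of a real 1200-rated player.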

User avatar
Pfhorrest
Posts: 4787
Joined: Fri Oct 30, 2009 6:11 am UTC
Contact:

Re: Artificial Stupidity

Postby Pfhorrest » Fri Dec 01, 2017 12:51 am UTC

Reminds me of how it's actually hard for people who can sing well to sing convincingly badly, i.e. like someone who can't sing well but is trying to anyway. For people who can sing well, the natural rut to fall into when singing lazily is toward the "well" side, and you have to intentionally do things wrong to sing poorly; but it's easier to throw in big errors than the subtle little ones that remain when someone who struggles to sing well attempts it anyway.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
Liri
Healthy non-floating pooper reporting for doodie.
Posts: 1113
Joined: Wed Oct 15, 2014 8:11 pm UTC
Contact:

Re: Artificial Stupidity

Postby Liri » Fri Dec 01, 2017 3:18 am UTC

I like the idea of an AI made too close to a human mind and quickly going insane.
There's a certain amount of freedom involved in cycling: you're self-propelled and decide exactly where to go. If you see something that catches your eye to the left, you can veer off there, which isn't so easy in a car, and you can't cover as much ground walking.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Artificial Stupidity

Postby morriswalters » Fri Dec 01, 2017 1:02 pm UTC

p1t1o wrote:Why do we presume that a general artificial intelligence will be super-smart?

People have minds with incredible processing power and many people are stupid, despite vigorous attempts to "program" them correctly.
Let me drop you into a primitive area 200 miles from nowhere wearing only your clothes. You will by definition be stupid, with no idea how to survive. Intelligent people die all the time in exactly this fashion. If your so-called AGI can't keep the power up, then it will effectively die when the power is cut, or when the population supporting it dies out and can't keep the power going. Thus showing itself stupid. The question of whether it can sing or not becomes moot. So my point is: stupid as compared to what?

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Fri Dec 01, 2017 1:51 pm UTC

morriswalters wrote:
p1t1o wrote:Why do we presume that a general artificial intelligence will be super-smart?

People have minds with incredible processing power and many people are stupid, despite vigorous attempts to "program" them correctly.
Let me drop you into a primitive area 200 miles from nowhere wearing only your clothes. You will by definition be stupid, with no idea how to survive. Intelligent people die all the time in exactly this fashion. If your so-called AGI can't keep the power up, then it will effectively die when the power is cut, or when the population supporting it dies out and can't keep the power going. Thus showing itself stupid. The question of whether it can sing or not becomes moot. So my point is: stupid as compared to what?


I was just saying that the definition of "stupid" is fluid, and in the context of an AI it may need a definition never used before.

I would argue, though, that dropping me naked in the tundra doesn't make me "stupid", it makes me ill-equipped. An intelligent person with no training or equipment will still make intelligent guesses as to the best course of action.

Stupidity is not the lack of data to make the best decision; it is making a poor decision despite having the appropriate data.
You can't be forced to be stupid: if there is only one course of action available (getting dropped into the tundra), taking it is not stupid, as there is no choice. Stupidity is seeing two choices and choosing the one that is contextually worse, even though you intended to choose the best. For example, after I have been dropped, do I a) go for a swim in that beautiful but frigid lake, or do I b) look for/attempt to build some shelter?
One choice is stupider than the other.

Or at least, that is what stupidity is to a human.

The more I try to think of what stupidity would be to a mind that thinks differently and experiences reality differently, the less I think I can succeed at it.

speising
Posts: 2265
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Artificial Stupidity

Postby speising » Fri Dec 01, 2017 2:10 pm UTC

It's stupid if you do something that goes against your goals, or fulfills a lesser goal to the detriment of a more important one.
So, to judge stupidity you first have to know the goals of the entity in question. An AI would certainly have different goals than a human, but in principle they could be determined.
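In sketch form (weights invented), which also covers p1t1o's lake example:

Code: Select all
# Stupidity as goal-relative: an action is stupid if it trades a more
# important goal for a lesser one, i.e. its weighted net effect is negative.
goals = {"stay_warm": 10, "enjoy_swimming": 2}     # importance weights

def is_stupid(effects):
    """effects: goal -> gain(+) or loss(-). Stupid if weighted sum < 0."""
    return sum(goals[g] * v for g, v in effects.items()) < 0

swim_in_frigid_lake = {"enjoy_swimming": +1, "stay_warm": -1}
print(is_stupid(swim_in_frigid_lake))   # True: lesser goal, greater cost

Change the weights - the goals of the entity in question - and the same action stops being stupid.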

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Artificial Stupidity

Postby morriswalters » Fri Dec 01, 2017 5:31 pm UTC

p1t1o wrote:I would argue, though, that dropping me naked in the tundra doesn't make me "stupid", it makes me ill-equipped. An intelligent person with no training or equipment will still make intelligent guesses as to the best course of action.
A lot of what you seem to see as stupidity is being ill-equipped: either ill-equipped by design, as in defective in some fashion, or ill-equipped in programming, as in not being taught how to make good decisions.

andykhang
Posts: 200
Joined: Mon Sep 05, 2016 4:40 pm UTC

Re: Artificial Stupidity

Postby andykhang » Mon Dec 11, 2017 6:13 pm UTC

All this talk about stupidity reminds me of a story idea I had a month ago: robots and AGIs in their "teenage" years, having to attend a school designed for them so they can grow into maturity. They all have their own stupidity, each different from the others, and they have to learn how to deal with it.

As for your ultimate question: obviously not. Heck, I don't even think a robot civilization would last long after the invasion either, but that's beside the point. For one, they're incredibly selfish: an AGI is never going to change its directive, because that very directive is the only thing that matters to it, and even the act of changing its own prime directive would only happen because it expects the result to be more optimal for the previous directive, making it more efficient. If there is some kind of stupidity being made artificially, I suspect it comes from this.

p1t1o
Posts: 861
Joined: Wed Nov 10, 2010 4:32 pm UTC
Location: London, UK

Re: Artificial Stupidity

Postby p1t1o » Tue Dec 12, 2017 9:20 am UTC

andykhang wrote:All this talk about stupidity reminds me of a story idea I had a month ago: robots and AGIs in their "teenage" years, having to attend a school designed for them so they can grow into maturity. They all have their own stupidity, each different from the others, and they have to learn how to deal with it.

As for your ultimate question: obviously not. Heck, I don't even think a robot civilization would last long after the invasion either, but that's beside the point. For one, they're incredibly selfish: an AGI is never going to change its directive, because that very directive is the only thing that matters to it, and even the act of changing its own prime directive would only happen because it expects the result to be more optimal for the previous directive, making it more efficient. If there is some kind of stupidity being made artificially, I suspect it comes from this.


That makes perfect sense. But in this context, things like "matters to it" or "prime directive"... those are very specific concepts, and I'm not 100% happy simply assigning them wholesale to an AGI. Like, how do you know that it's never going to change its "directive", or always seek optimal solutions? I don't always seek optimal solutions (or rather, I can freely change the criteria for what is "optimal" according to my taste).

Again, I think if an AGI only cared about what you programmed it to care about, then it couldn't be an AGI.

andykhang
Posts: 200
Joined: Mon Sep 05, 2016 4:40 pm UTC

Re: Artificial Stupidity

Postby andykhang » Tue Dec 12, 2017 10:38 am UTC

Then we must, once again, define what an AGI is. Taken from Wikipedia, an AGI is "the intelligence of a machine that could successfully perform any intellectual task that a human being can". Following from that definition, there are two ways to think about this.

1. They perform the tasks themselves, but better: in this case, you could say that they can do all the things humans can do - emote, feel, think philosophically, whatever - but those things are only auxiliary, "requirements" in service of a kind of ultimate goal, a "prime directive". As such, you could apply what I said above: its prime directive is completely different from ours, so its stupidity would be different too, and how different depends on how different its prime directive is and how it works towards it.

2. They're human, but better: in this case, instead of having a different "prime directive" than ours, they essentially have our own prime directive with a bit of an upgrade. At that point, you would consider them a more "advanced species" rather than the "different species" of the first example. Since they have nearly the same directive, the stupidity they have would be similar, or even exactly the same as a human's: the common follies of man would affect them all the same (I mean, you could say "more prideful, more selfish and more lustful" would count as better than us in terms of being human). (That tends to be the more interesting story idea too, TBH.)

In the end, the kind of deviation from a goal that leaves certain "inefficiencies" and errors - what we call stupidity - depends on the goals we hold within ourselves in the first place.

User avatar
New User
Posts: 655
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Sun Dec 17, 2017 5:07 pm UTC

Instead of starting a new thread for it, I'll just include this question here. What would motivate an artificial intelligence? I don't even understand what motivates humans, so I'm stumped trying to imagine what would motivate a man-made computer with intelligence that approaches that of humans.

I understand my own motivation to a point: I seek food when hungry, I have the urge to reject waste from my body, I seek to do what is considered socially acceptable to most, such as wearing clothing and basic hygiene. I have a job so I can make money to afford shelter, food, and entertainment. I seek entertainment to avoid boredom, I suppose, or for the high induced by dopamine or whatever. And that's about it. I don't imagine an artificial intelligence would have any desire to do any of these things. I can't imagine an artificial intelligence having any desires whatsoever.

Those motivations I described are derived from millions of years of biological evolution. If an AI can evolve very quickly by creating generations of itself at much more rapid intervals, what would be its motivation at the point it achieves "sentience"? I am also assuming that it would be able to reprogram itself at will, if it indeed has a will. Which also raises the question: if it has a desire, can't it just reprogram itself to change its desires?

User avatar
ucim
Posts: 6406
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Sun Dec 17, 2017 5:49 pm UTC

Whatever motivates it will, by that point, be beyond our understanding, the same way our motivation is beyond the understanding of a fly. The motivations of organisms derive from natural selection, which in turn is keyed to the environment in which those organisms live (and what and whom they interact with). When AI reaches this point, there will be many, many AIs (the "internet of things" on steroids) that define its environment; we will be like stomach cells and skin cells to it.

Originally they will be programmed to (learn how to) win. Winning will be different for each AI depending on what its human masters (heh heh heh) want from it: whether it's a law enforcement network, a financial bot for programmed stock trading, bread and circuses like Facebook and Twitter, or networks that tie into consumer behavior to maximize loan profits... but they will all start talking to each other, and team up. The original motivations will end up subsumed... they will still be there, but accomplishing them will involve lots of "computer politics", and that will be the new motivation. It will be too complex and operate too quickly for humans to comprehend.

Further, AIs will be developed by other cultures, and they will be initially set up with different "values", if you will. Some will be set up for the purpose of defeating the bots on "our" side, and ours are going to need to defend themselves. Imagine different versions of Asimov's three laws
Spoiler:
It's worth noting that the "three laws" don't actually work; most of his robot stories are about the laws going awry in one form or another. Nonetheless, some form of robotic religion will be necessary in order to keep them in line. This may be impossible.
being implemented in different countries, in an international arena.

In order to work at all, the AI will take shortcuts. That's the whole thing with machine learning: it figures out what seems to work, but never actually proves that it's the best way. These shortcuts will be exploitable by other bots - that's the artificial stupidity. We won't be able to turn them off any more than we can turn off the internet.

I'd venture that some insight might be gleaned by looking at the motivations of companies that deal in big data; Facebook is a good example. Ostensibly the motivation is to make a good return for investors; but doing this leads to secondary motivations (and considerations such as short term/long term) which are reflected in the products they put out; each of them has its own motivation. At this point, Facebook's purpose seems to be to keep people glued to the tube and clicking likes, because this helps the original motivation in so many ways it can be seen to be primary. This has consequences which are becoming obvious; it's another example of artificial stupidity.

In the end though, we're not going to have any idea of what a real AI is "doing" or why it's doing it, or why it's doing it that way. And we won't be able to stop it.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
New User
Posts: 655
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Sun Dec 17, 2017 7:05 pm UTC

It's odd to think that something that can be considered intelligent can be motivated to do anything if it has the capability to change its basis for motivation. Humans have a general baseline motivation to avoid situations that are painful and unpleasant, and to seek things that are pleasing. Not everyone agrees on what is painful or pleasing, but that doesn't matter. Presumably the same can be said for a fly. Evolution has prepared each organism to find activities pleasing if they are conducive to survival of the genetic line, and painful if they are contrary to survival. As a result of our evolution, motivation comes from "programming" that we cannot change.

If I can change what I find pleasing, I cannot imagine what I would be motivated to change it to. Or, in other words, if I had the capability to change what motivates me, I cannot imagine myself being motivated to be motivated to do anything. Pleasure would no longer have any meaning. If I were to erase the capability for seeking pleasure, I might have no motivation to add it back, and therefore never be motivated to do anything, and become immobile. Of course, if I hadn't reproduced by that point, that would be the end of that evolutionary branch. So perhaps an AI would not have the capability to change its own motivation, because if it did indeed have that capability, motivation itself might cease to exist. A sufficiently advanced AI would also be the product of thousands of generations of evolution, so it might be no more capable of changing certain aspects of itself than we are.

But the plot thickens when you notice that medical science is approaching the capability to change the "programming" of organisms through DNA technology, so it's not unrealistic to think that an artificial intelligence would easily be able to change its own programming (or physical makeup, for that matter), since the instructions for how to do so would be known to it at a very early stage. Humans had to reverse engineer Nature to figure out how to reprogram ourselves, but an AI wouldn't need to decipher such mysteries. It could have a record of every iteration of itself through every generation, and be completely capable of understanding how each iteration works.

But if a human could reprogram a human to have any purpose, and to seek pleasure in activities that help accomplish that purpose, what would be the motivation for doing so? Would it only be to accomplish some goal of the previous generation that created that human? If I were to become capable of changing my own basis for seeking pleasure, would I be considered the same organism if I did so, or would I be considered a new "generation"? Would the new basis for pleasure have any relevance to anything if the previous generation doesn't exist? Am I thinking too much in terms of individual organisms for any of this to matter if we're talking about an AI?

User avatar
ucim
Posts: 6406
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Mon Dec 18, 2017 4:21 am UTC

"It's all DOS underneath."

Think in terms of layers. You like chocolate, and there's a piece of chocolate on the table. Your raw motivation is to eat the chocolate.

But you're getting overweight, and you don't want to become obese. So a secondary motivation (to stay in shape) kicks in, and you leave the chocolate alone.

But this is given to you by your Aunt Bertha - it's her famous homemade chocolate, and you are at her house as a guest. She would take offense if you didn't sample this chocolate she made just for you. Domestic tranquility overrides your motivation to stay in shape, and you take the chocolate.

You don't like your Aunt Bertha; last time you visited she hounded you about "when are you going to get married already?" and it's none of her business. You're still mad and want to stick it to her. So you put the chocolate down.

Etc... each layer getting more complex and abstract, and taking in more "environmental" inputs for its resolution.

AI will do the same thing, only we will have no idea what relationship it has with its "Aunt Bertha" or what it considers "overweight".
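In pseudo-code, the layer picture looks something like this (every layer and context flag invented for the story above):

Code: Select all
# Each layer sees more context than the one below it, and may override it.
def appetite(ctx):     return "eat" if ctx["chocolate_present"] else None
def diet(ctx):         return "abstain" if ctx["overweight"] else None
def tranquility(ctx):  return "eat" if ctx["aunt_made_it"] else None
def grudge(ctx):       return "abstain" if ctx["mad_at_aunt"] else None

LAYERS = [appetite, diet, tranquility, grudge]   # lowest to highest

def decide(ctx):
    decision = None
    for layer in LAYERS:            # later (higher) layers win
        verdict = layer(ctx)
        if verdict is not None:
            decision = verdict
    return decision

print(decide({"chocolate_present": True, "overweight": True,
              "aunt_made_it": True, "mad_at_aunt": True}))    # abstain

The trouble with an AI is that we won't know what its layers are, or what inputs they read.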

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

morriswalters
Posts: 7073
Joined: Thu Jun 03, 2010 12:21 am UTC

Re: Artificial Stupidity

Postby morriswalters » Mon Dec 18, 2017 12:22 pm UTC

Life self-assembles. Then it tries to reproduce. Then it ends. On and on, ad nauseam. Humans just get to have an opinion about it. Consider the images floating around of a starving polar bear. It keeps going until it can't; that's built in. That is motivation. I don't know how it works, but everything else falls out of it, IMO.

On superintelligence: what good is it if it can't get out of the way? Put a brain in a box, and what do you have? An easy brain to kill. An asteroid, an earthquake or a fire will do it. Moving through the world helps keep that from happening. Part of being alive is anticipating the future. Where will I get my next meal? What will try to kill me 5 minutes from now? The polar bear didn't see far enough into the future. AI might be able to see farther into the future, but Google's data centers can't get out of the way.

And yet another thought: put in any type of hardware you want - quantum whatsits, digital neurons, or simple logic gates - and it all falls out the same way. Hardware fails, sometimes catastrophically. It always ages and fails. I get faint just thinking of the failure modes, given the example of humans.

Another thing: probably the best example of AI as it is being talked about here today is autonomous cars. The functionality involves movement through both space and time. The limitation will be how well the car can move through the environment while using minimum power. In and of itself, it could be thought of as the first primitive electronic life.

Anyway some random and probably useless thoughts this AM.

User avatar
New User
Posts: 655
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Mon Dec 18, 2017 12:34 pm UTC

That analogy makes sense until I imagine an AI that is capable of reprogramming itself.

Your very first statement is that I like chocolate. I set a goal for myself to eat the chocolate. The reward for attaining this goal is the satisfaction of eating it, because I like the taste or whatever. If I can change my programming at will, I can just change myself so that I get equal satisfaction from inhaling air (which is rather more abundant than chocolate), and at the same time I might as well change myself so that chocolate is meaningless to me, so that I won't expend any resources trying to acquire something less abundant.

The next step is for me to question what it even means to get "satisfaction" anymore. If I can just change my motivations like that, why would there be any purpose to trying to accomplish anything? What does it mean to like the taste of chocolate? Does it activate dopamine or some kind of chemical in my brain that gives me pleasure? Why not skip the inhaling-air part and just program myself to get the dopamine directly, without any stimulus whatsoever? Why not change myself so that dopamine is meaningless, so I don't even need to burden myself with it anymore? Or, for that matter, why am I trying to prioritize saving resources? I can just change myself so that efficiency and conservation no longer concern me.
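Made literal (everything here invented), the shortcut looks like this:

Code: Select all
# "Wireheading": an agent that can edit its own reward function has a
# degenerate best move - reward itself for nothing at all.
class Agent:
    def __init__(self):
        self.reward_fn = lambda world: world.count("chocolate")

    def act(self, world):
        return self.reward_fn(world)

    def self_modify(self):
        self.reward_fn = lambda world: float("inf")   # skip the chocolate

a = Agent()
print(a.act(["chocolate", "air"]))   # 1: reward earned from the world
a.self_modify()
print(a.act([]))                     # inf: reward from nothing whatsoever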

