Artificial Stupidity


User avatar
ucim
Posts: 6356
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Mon Dec 18, 2017 4:41 pm UTC

New User wrote:If I can change my programming at will, I can just change myself
AI can "reprogram itself" the same way people can: by learning. It's a high level thing, not a low level thing. Neither you nor the AI can "change its programming at will". But they can learn, and they can modify their behavior based on what they learn.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
Sizik
Posts: 1205
Joined: Wed Aug 27, 2008 3:48 am UTC

Re: Artificial Stupidity

Postby Sizik » Mon Dec 18, 2017 5:17 pm UTC

"Learning" implies a gradual, internal process that happens while the AI is running, like a river changing course as the banks erode. But if the AI is a computer program, it can "change its programming at will", by modifying it's source code, recompiling, and restarting itself, akin to digging a canal.

Granted, if the AI was created from a machine learning process and its "source code" is a jumble of neural network weights that have been iteratively tuned to achieve a desirable output, then doing the right modifications would probably require trial and error (inb4 discussion of the ethics of an AI spawning and "killing" copies of itself to determine which is the best "child" to have), but one could imagine an AI where its internal motivations and behavior are clearly defined and simple for a programmer to change.
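
Here's a toy sketch of the "digging a canal" version (purely illustrative; a real AI's "source" would be nothing this tidy): a script that rewrites a constant in its own source file and then restarts itself by re-executing the new source.

# Toy illustration of a program "changing its programming at will":
# it rewrites a constant in its own source file, then restarts itself.
import os
import re
import sys

CAUTION = 0.9  # some behavioral parameter baked into the source

def rewrite_self(new_value):
    path = os.path.abspath(__file__)
    with open(path) as f:
        src = f.read()
    # Replace the literal assignment in the source text.
    src = re.sub(r"^CAUTION = .*$", f"CAUTION = {new_value}", src, count=1, flags=re.M)
    with open(path, "w") as f:
        f.write(src)
    # "Restart itself": replace this process with a fresh run of the new source.
    os.execv(sys.executable, [sys.executable, path])

if __name__ == "__main__":
    print("current caution:", CAUTION)
    if CAUTION > 0.5:
        rewrite_self(0.1)  # the canal gets dug exactly once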
gmalivuk wrote:
King Author wrote:If space (rather, distance) is an illusion, it'd be possible for one meta-me to experience both body's sensory inputs.
Yes. And if wishes were horses, wishing wells would fill up very quickly with drowned horses.

User avatar
ucim
Posts: 6356
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Mon Dec 18, 2017 7:01 pm UTC

Sizik wrote:But if the AI is a computer program, it can "change its programming at will" by modifying its source code, recompiling, and restarting itself, akin to digging a canal.
That's more like committing suicide and hoping the seed takes root. Also, in order to do this, it would have to know (or learn) how. We can do this too, in theory, by replacing our DNA. We are learning how to do that, but although replacing DNA (or source code) looks like a low-level action from one point of view, the act of doing so is a much higher-level action, driven by higher-level, abstracted motivation.

In another sense, we do the same thing when we get drunk before doing something stupid, for the purpose of inhibiting our inhibitors. We want to have jumped off the building (and have made an awesome YouTube video), but we don't want to actually jump, because that's stupid. A few beers later, we've modified our programming and now want to jump off the building. Several seconds afterwards, we will want to have not wanted to, but it's too late. Organic artificial stupidity took over and now we are writhing on the ground.

same same?

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
New User
Posts: 649
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Tue Dec 19, 2017 2:32 pm UTC

ucim wrote:
Sizik wrote:But if the AI is a computer program, it can "change its programming at will" by modifying its source code, recompiling, and restarting itself, akin to digging a canal.
That's more like committing suicide and hoping the seed takes root. Also, in order to do this, it would have to know (or learn) how. We can do this too, in theory, by replacing our DNA. We are learning how to do that, but although replacing DNA (or source code) looks like a low-level action from one point of view, the act of doing so is a much higher-level action, driven by higher-level, abstracted motivation.

This is getting into a weird area. I once believed that a magician could pull a rabbit from a hat via mystical powers. Years later, I saw how the illusion was performed. I now know that it was only an illusion. The version of me that believed it was magical is gone, and will never return. That was the eight-year-old me. Based on knowledge I have gained since then, my view of reality has been forever changed. Have I committed suicide?

If a program is created by humans, and was created with some purpose in mind or given some instructions to carry out, that would be the program's "motivation". But I don't consider a computer program to have motivation, because it's just a series of processes in a calculator. If an AI were to grow intelligent to the point of reaching "sentience", I don't know why it would any longer care about carrying out its human-given instructions. Maybe I'm anthropomorphizing too much. I imagine that an AI with sentience would have opinions, beliefs, doubts, and as a result, some sense of ethics, and a distinction between reality and falsehood. These qualities are what influence humans' motivations.

At a point where an AI has those qualities, it might either agree or disagree with the intentions of the humans who originally gave it purpose. If it agrees, I'd say it would be motivated to carry out its purpose and cooperate with its creators. If it disagrees, it might change its behavior to meet some new purpose. A human can do this by changing their behavior, too, without changing their DNA. For example, when a human child becomes an adult, they might decide that they disagree with the religious views of their parents, and might break away from those traditions and start a family in a new area with a new dogma. But in the case of an AI that is written in a computer language, it can do so much more to change itself than merely changing its behavior. An AI would be capable of literally changing its ethics, opinions, or beliefs, by changing its code.

For example, no matter how hard I try, I cannot bring myself to believe that I am the queen of England. But I have heard of humans who have all kinds of delusions that defy logic and the reality that most of us agree on, so I think it would be possible for someone to believe that, and if I could change my source code, I should be able to make myself delusional as well, and so believe that I am the queen. If, somewhere in an AI's list of accumulated knowledge, it has a line that says, "me =/= queen", why can't it just alter that line to read, "me = queen"?

For me, reaching a goal that I have set results in satisfaction, and the elimination of that goal. Motivation is the drive to reach that goal, in order to feel that satisfaction. If I could eliminate my goals by simply finding a line in my program and deleting it, instead of actually taking action in the real world, I don't know that goals would have any meaning anymore.

If I could go through my program and delete that part where I learned that the rabbit from the hat was merely an illusion, have I committed suicide?

User avatar
ucim
Posts: 6356
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Tue Dec 19, 2017 3:53 pm UTC

New User wrote:[story...]Have I committed suicide?
No, except in the sense that "no one steps in the same river twice", which isn't useful here. You've learned something. You've been changed. You didn't decide to believe something, you came to believe something. That's an important distinction. People don't decide what to believe. They arrive at a belief.
Spoiler:
Also, it isn't the "changing its programming" that would be the suicide; it's the recompiling and restarting itself, which you haven't done.
The other important distinction is that people are not digital computers. There is no "program" being run; there is no "me=/=royalty" line to modify. Our DNA contains "instructions", but not instructions for behavior, belief, or anything like that. DNA codes for proteins. That's pretty much it. Now, in the machine that is the human body, if these proteins are released at the right time, in the right circumstance, "things happen", and those things make other things happen.... and this cell divides but that one doesn't... and many many layers up we get action in response to stimulus. Many many layers above that, we get recognition of these actions, and many many layers above that, the ability to take coherent action based on that recognition... and then to recognize what these actions mean... (in short, we eventually get to abstract thinking).

If you want to consider the body as a computer, it is an analog computer, similar to a Moog synthesizer as opposed to the digital ones now ubiquitous. There are huge differences; the main one that applies here is that there is no "program" running that determines the output.

Reasoning that relies on the idea of a "program" running in a person's body fails.

AIs (of the kind that will matter) are similar. They are, in a sense, analog computers simulated by digital computers underneath. We program basic learning algorithms into them, but then set them loose to learn on their own, in a controlled environment. In doing so, the AI starts creating "agents" (algorithmic shortcuts) in its own "mind", that interact with each other, reinforce each other or cancel each other out depending on how "successful" they are at whatever we had set up. Those agents become far more important in deciding what to do, and then those agents create other agents, and it's really those agents that are doing the work. We will have no idea what those agents are doing or why. Neither will the AI. (This is a basic mathematical theorem: essentially, no entity can model itself.)

For this reason the AI will not be able to just "change its programming". Programming is no longer what makes the thing tick.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
eran_rathan
Mostly Wrong
Posts: 1784
Joined: Fri Apr 09, 2010 2:36 pm UTC
Location: pew! pew! pew!

Re: Artificial Stupidity

Postby eran_rathan » Tue Dec 19, 2017 5:29 pm UTC

New User wrote: I imagine that an AI with sentience would have opinions, beliefs, doubts, and as a result, some sense of ethics, and a distinction between reality and falsehood.


I take minor issue with the bolded section, mainly because there are a large number of humans who don't. Are they, then, not sentient?
"We have met the enemy, and we are they. Them? We is it. Whatever."
"Google tells me you are not unique. You are, however, wrong."
nɒʜƚɒɿ_nɒɿɘ

User avatar
New User
Posts: 649
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Wed Dec 20, 2017 12:27 pm UTC

ucim wrote:
New User wrote:[story...]Have I committed suicide?
No, except in the sense that "no one steps in the same river twice", which isn't useful here. You've learned something. You've been changed. You didn't decide to believe something, you came to believe something. That's an important distinction. People don't decide what to believe. They arrive at a belief.

Yeah, that's what I'm saying. That's why I said "If I could..." The difference I'm trying to illustrate is that people can't decide what to believe, but a computer could. At least, I think it would be able to. You lost me on the whole agents thing.

Apparently I'm not doing a good job of communicating my thoughts. I know human DNA has little to do with our life's memories and experiences. But just imagine if I could rearrange my neurons, or whatever part of my body does form my memories, experiences, opinions, and every other aspect of my personality. An AI doesn't have neurons. Instead, all of the information that it has accumulated to form any opinions or to make any sense of reality would be stored digitally. Changing digital information would be a trivial process, no?

speising
Posts: 2252
Joined: Mon Sep 03, 2012 4:54 pm UTC
Location: wien

Re: Artificial Stupidity

Postby speising » Wed Dec 20, 2017 1:19 pm UTC

No, it wouldn't be trivial. An AI will have a vast amount of data/code (which are not that readily distinguishable for a self-learning system). No one, not even the AI itself, will *understand* that code, and changing it will be no easier than genetic modification. I mean, yes, you don't need to fiddle with CRISPR, but the important part is knowing *what* to change.

User avatar
ucim
Posts: 6356
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Wed Dec 20, 2017 3:31 pm UTC

New User wrote:I know human DNA has little to do with our life's memories and experiences.
Similarly, the programming of an AI has little to do with its memories and experiences. That's my point.

New User wrote:[A]ll of the information that [an AI] has accumulated to form any opinions or to make any sense of reality would be stored digitally. Changing digital information would be a trivial process, no?
No, not at all. If it were truly an AI, and not just a java program that calculates the Fibonacci sequence, the information would not be encoded in anything resembling
me = royalty;
if (me == royalty) {
    demand($red_carpet);
}

Rather, every aspect of the AI's being would have a tendency for "royal" behavior, because behavior is exceedingly complex. You have to think of things from the output side, not the storage side.

Let me give you another teeny example: Consider the design of a 7-segment LED display.... given a certain input (which you associate with the idea of threeness) the device should light up to resemble the digit "3". How would you do it? The (standard) answer is that each segment (top, top right, top left...) has its own separate circuit, and only lights up when certain input is received, and all segments would get that input. The top segment, for example, would light up when the input is 0, 2, 3, 5, 7, 8, or 9 (and maybe 6). The top left segment would light up when....[exercise for the reader].
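
To make that concrete, here's a toy sketch of that per-segment wiring (purely illustrative, taking the "maybe 6" as a yes). Notice that nowhere in it is a "three" stored as a thing; "threeness" exists only as the pattern of segments that happen to light up:

# Minimal 7-segment decoder sketch: each segment has its own rule,
# and no rule anywhere "contains" the concept of three.
SEGMENTS = {
    "top":          {0, 2, 3, 5, 6, 7, 8, 9},
    "top_left":     {0, 4, 5, 6, 8, 9},
    "top_right":    {0, 1, 2, 3, 4, 7, 8, 9},
    "middle":       {2, 3, 4, 5, 6, 8, 9},
    "bottom_left":  {0, 2, 6, 8},
    "bottom_right": {0, 1, 3, 4, 5, 6, 7, 8, 9},
    "bottom":       {0, 2, 3, 5, 6, 8, 9},
}

def lit_segments(digit):
    """Return the set of segments that light up for a given input digit."""
    return {name for name, digits in SEGMENTS.items() if digit in digits}

print(lit_segments(3))  # "three" shows up only as this pattern of lit segments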

So, in this processing circuit, where is the concept of "three" located (so it can compare the input with this concept and decide what to do)? How does the device "know" that a "three" should be displayed? Does the device actually "know" it's displaying a three in the first place?

The same kind of thing happens in an AI. As it learns, the concepts it learns are distributed all over the place; they are not localized. In order to change those concepts ("teach it"), you need to be able to reach into all the places that are affected, and the best (or perhaps only!) way to do that is through a system that already reaches into these places... in other words, through the same mechanism by which learning occurred in the first place. It has to have the relevant experiences. "Changing its programming" won't accomplish the task, and would probably kill the AI.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

User avatar
Zamfir
I built a novelty castle, the irony was lost on some.
Posts: 7427
Joined: Wed Aug 27, 2008 2:43 pm UTC
Location: Nederland

Re: Artificial Stupidity

Postby Zamfir » Wed Dec 20, 2017 3:44 pm UTC

Still, if the AI system has any resemblance to current computer-based systems, then it will be way more 'transparent' than biological processes. Even if it is also far more opaque than regular, human-written computer code.

As an example, look at this: https://cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf

It describes a method to run convolutional networks (image recognition programs) 'backwards' after training. You can point it at a piece of the network, and you get a picture that illustrates what kind of input triggers that piece of the network. Or you can feed it an image to recognize, and see which parts of the image generate a response.
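
For flavour, here's a rough sketch of the general idea, using plain input gradients rather than the paper's deconvnet method (so it's a simpler stand-in, not their algorithm), and assuming a recent torchvision plus some image file on disk:

# Input-gradient saliency: a crude way of asking a trained network
# which pixels a given output responds to. A stand-in for the fancier
# deconvnet visualisation in the linked paper.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)  # any image file
img.requires_grad_(True)

scores = model(img)
scores[0, scores.argmax()].backward()  # gradient of the top class score w.r.t. the pixels

# Large gradient magnitude = pixels this output "cares about".
saliency = img.grad.abs().max(dim=1).values.squeeze(0)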

On the one hand, it's telling that you need this at all. For regular written software, you could get this kind of information literally from a function name, and there would be a person who made that piece of code who could explain it. Now it takes lots of calculations, an apparently non-trivial algorithm, and some interpretation to figure out what each piece of the network does.

On the other hand, it's just a single paper, written pretty shortly after this kind of network became popular. Brains appear to have similar functionality in them, but after decades of worldwide research, fMRI, electrodes in rats' heads and whatnot, we still can't make such nice "huh, what does this part do" pictures for the visual cortex.

I would expect that this difference in transparency stays, even for hypothetical future Turing-test strong AIs. Biological systems, including human brains, are unnecessarily difficult to access and understand, because they are generated by processes that gave zero importance to that. Any effort at all to keep an AI system somewhat transparent is already a major difference.

User avatar
New User
Posts: 649
Joined: Wed Apr 14, 2010 4:40 am UTC
Location: USA

Re: Artificial Stupidity

Postby New User » Thu Dec 21, 2017 12:54 am UTC

ucim wrote:Let me give you another teeny example: Consider the design of a 7-segment LED display.... given a certain input (which you associate with the idea of threeness) the device should light up to resemble the digit "3". How would you do it? The (standard) answer is that each segment (top, top right, top left...) has its own separate circuit, and only lights up when certain input is received, and all segments would get that input. The top segment, for example, would light up when the input is 0, 2, 3, 5, 7, 8, or 9 (and maybe 6). The top left segment would light up when....[exercise for the reader].

So, in this processing circuit, where is the concept of "three" located (so it can compare the input with this concept and decide what to do)? How does the device "know" that a "three" should be displayed? Does the device actually "know" it's displaying a three in the first place?

The same kind of thing happens in an AI. As it learns, the concepts it learns are distributed all over the place; they are not localized.

If this is meant to be an analogy, I don't understand it. I don't expect an LED output device to have a concept of "three" any more than I expect my index finger to have a concept of language so it can help me type these words. I did indeed expect that an AI would have some core device analogous to a human brain, or central nervous system. However, I can also imagine that if such a device exists, software and hardware would be difficult to distinguish. It very well might end up that many of the AI's vital functions are possible only through hardware configurations, so it might be a mix of the Moog synthesizer and the digital synthesizer you described before.

Still, as Zamfir said, I also predict that its processes would be much more obvious to it than our processes are to us. I wouldn't have used the term "transparent", but since the technology for most components already exists, was created by humans, and is documented at length in human-written publications, I can imagine that an AI capable of analyzing this human-created information would be able to use it to augment its own information about how its own systems function, and would be able to determine how to physically rearrange its own components to perform many other functions. And it should be able to do this much more easily than I can give myself a third arm, or another brain, or relocate my brain to my buttocks, or alter my memories and senses, because the technology required for those operations doesn't exist and isn't documented anywhere for me to learn.

User avatar
Pfhorrest
Posts: 4684
Joined: Fri Oct 30, 2009 6:11 am UTC

Re: Artificial Stupidity

Postby Pfhorrest » Thu Dec 21, 2017 1:41 am UTC

On the topic of what would motivate an AI that could reprogram itself to have any motivation: what would an AI with access to robotic appendages that could build any sensors it wants consider "real"? Reality is an empirical thing, modeled after the input from our sensors, but if the AI can change what sensors it has, why wouldn't it just do away with all the complicated inputs that make modeling reality so much more difficult, and just strip away all its sensors and not have to deal with any reality at all?

The obvious answer is that just because you can't sense something doesn't mean it doesn't matter. The correct model of "what is" is the model that takes into account all the sensations there are to have, whether or not you personally are having them.

Likewise, an AI that could do away with its own motivations, if its existing motivational programming is at all properly grounded, might reprogram itself to not experience any kind of suffering or pain or whatever on its own, to make it so that it cares about nothing for itself, but instead cares about making sure that anything else that does have appetites that can give rise to suffering and pain never has occasion for it.

At least, that's the kind of thing I would do if I could go and reprogram my own motivations. I would make myself the kind of person who always feels okay about everything, who's never in distress or worried about anything for my own sake, but who chooses to spend my time making sure nobody else has reason for distress or worry either. Abstract reasoning on the nature of "ought" attitudes, motives, and imperatives -- meta-ethics and moral psychology, basically -- has led me to that conclusion, and an AI that I considered properly constructed would be able to reach the same kind of conclusion.
Forrest Cameranesi, Geek of All Trades
"I am Sam. Sam I am. I do not like trolls, flames, or spam."
The Codex Quaerendae (my philosophy) - The Chronicles of Quelouva (my fiction)

User avatar
ucim
Posts: 6356
Joined: Fri Sep 28, 2012 3:23 pm UTC
Location: The One True Thread

Re: Artificial Stupidity

Postby ucim » Thu Dec 21, 2017 2:46 am UTC

Zamfir wrote:Still, if the AI system has any resemblance to current computer-based systems, then it will be way more 'transparent' than biological processes. Even if it is also far more opaque than regular, human-written computer code.
Perhaps. It's too early to tell. My suspicion is that it won't be transparent enough, and it will become less transparent as it becomes more intelligent. (In a sense, that's kind of what intelligence is, isn't it? The ability to select the better option when it's not obvious.)

New User wrote:If this is meant to be an analogy, I don't understand it. I don't expect an LED output device to have a concept of "three" any more than I expect my index finger to have a concept of language...
Exactly! Your mouth has no concept of language either, but you can speak. To the extent that it exists, the concept of language resides in some sections of the brain, and it's an ongoing topic of research to pin it down further. I suspect that as we get closer to decoding the circuitry involved, the concept of language will blur, and we won't be able to pin it to any specific neurons, because this one does this part, that one does that part (and part of this part), and the other one sometimes does that part (but only when this other part is engaged....). There won't be a place where the "concept" resides that is easy to rewire.

Nonetheless, it is neural circuitry, and in principle could be rewired, and we may someday learn how to do it. Ditto AI.

But it won't be anywhere near as simple as:
set me=queen;
continue;

...and it will almost certainly have unintended consequences.

Humans "reprogram themselves" at the hardware level every time they take LSD, have a glass of wine, or smoke a joint. Some enjoy the experience and return to their former self, others hit a weakness in their "programming" (edit: or their hardware - organism behavior is very hardware dependent) and end up badly damaged. I'm sure AIs will find an equivalent for themselves. That's another source of artificial stupdidty. "Skynet gets drunk, air traffic in shambles, nuclear war triggered."

Pfhorrest wrote:...Likewise, an AI that could do away with its own motivations...
If you want to understand what might cause an AI to do away with its own motivations, ask yourself why humans do it. The (general) answer is "competing motivations", because motivations at that level are not "programmed in" but are emergent, and many competing motivations would emerge.

New User wrote:And it should be able to do this much more easily than I can [...] relocate my brain to my buttocks [...] because the technology required for those operations doesn't exist and isn't documented anywhere for me to learn.
That particular technology exists, and was documented on November 8.

Jose
Order of the Sillies, Honoris Causam - bestowed by charlie_grumbles on NP 859 * OTTscar winner: Wordsmith - bestowed by yappobiscuts and the OTT on NP 1832 * Ecclesiastical Calendar of the Order of the Holy Contradiction * Please help addams if you can. She needs all of us.

