Single instruction, Turing complete architecture


dshigure
Posts: 8
Joined: Thu Oct 04, 2007 1:08 am UTC

Single instruction, Turing complete architecture

Postby dshigure » Thu Oct 18, 2007 2:03 am UTC

Today in my Computer Architecture class, I learned something interesting. Similar to how NAND gates can be used to simulate any other gate, Turing completeness in a processor can apparently be achieved with the following single three-operand instruction:

urisc op1, op2, label

Subtract op1 from op2, store the result in op2, and if the result is negative, jump to label.

The teacher didn't go into the proof that it is Turing complete, so I will take a stab at it myself. I will start with the (hopefully not too unlikely) supposition that op1 and op2 may name memory locations or registers, and that op1 may also be the literal 1. (Note: I know I could omit the word "urisc", but I left it in, in the hope that it makes the code a tad easier to read.)
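
As a sanity check for what follows, here is a minimal Python sketch of that semantics as I read it. Everything in it (the name run_urisc, the dict-of-cells memory, branch targets written as instruction indices instead of labels) is my own encoding, not anything established:

def run_urisc(program, mem, max_steps=10000):
    # program: a list of (op1, op2, target) triples; target is the index of
    # the instruction to jump to when the result is negative.
    # mem: a dict mapping cell names to integer values.
    # op1 may be an int literal, per the supposition above.
    pc = 0
    steps = 0
    while pc < len(program) and steps < max_steps:
        op1, op2, target = program[pc]
        a = op1 if isinstance(op1, int) else mem[op1]  # literal or cell name
        mem[op2] = mem[op2] - a                        # subtract op1 from op2, store in op2
        pc = target if mem[op2] < 0 else pc + 1        # jump only if the result was negative
        steps += 1
    return mem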

To store 0 in loc:

urisc loc, loc, label ; the result is 0, never negative, so the branch is never taken

To store 1 in loc:

; Store 0 in temp and loc first, as mentioned above
urisc 1, temp, next ; Stores -1 in temp
next: urisc temp, loc, label ; Subtracts -1 from a zero value, making it +1
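
Running the store-0 and store-1 sequences through the run_urisc sketch above (branch targets are instruction indices; a branch to the next index is the same as falling through):

mem = {"loc": 42, "temp": 7}
program = [
    ("loc", "loc", 1),    # loc = 42 - 42 = 0
    ("temp", "temp", 2),  # temp = 0
    (1, "temp", 3),       # temp = -1; negative, but the target is the next instruction anyway
    ("temp", "loc", 4),   # loc = 0 - (-1) = 1
]
run_urisc(program, mem)
print(mem["loc"])   # 1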

To decrement a value:

; Store 1 in temp first, as mentioned above
urisc temp, loc, next
next:

To increment a value:

; Store 0 in temp first, and then decrement it, as mentioned above
urisc temp, loc, next
next:
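
Both of these check out under run_urisc as well. To keep the trace short, this sketch sets up the +1 and -1 constants directly rather than via the store-0/store-1 sequences:

mem = {"loc": 10, "plus_one": 0, "minus_one": 0}
program = [
    (1, "minus_one", 1),           # minus_one = -1
    ("minus_one", "plus_one", 2),  # plus_one = 0 - (-1) = 1
    ("plus_one", "loc", 3),        # decrement: loc = 10 - 1 = 9
    ("minus_one", "loc", 4),       # increment: loc = 9 - (-1) = 10
]
run_urisc(program, mem)
print(mem["loc"])   # back to 10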

Note that these capabilities allow us to store any literal value.

Control structures, even loops:

; Store 0 in temp0, 1 in temp1, 0 in i, and number of iterations in loc
loop: urisc i, loc, endloop ; loc -= i; exit once i exceeds the iteration count
; Restore the iteration count in loc and reset temp0 to 0
; Do some computation
; Increment i (as above)
urisc temp1, temp0, loop ; temp0 becomes -1, so this always jumps back to loop
endloop:
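
Here is a worked loop under run_urisc. It counts loc down to keep the program short, rather than counting i up as above, and the body just bumps a count cell so we can see it ran the right number of times (all names are mine):

mem = {"loc": 5, "one": 0, "minus_one": 0, "count": 0, "t": 0}
program = [
    (1, "minus_one", 1),        # 0: minus_one = -1
    ("minus_one", "one", 2),    # 1: one = 1
    ("one", "loc", 6),          # 2: loop test: loc -= 1; exit past the end once loc goes negative
    ("minus_one", "count", 4),  # 3: loop body: count += 1
    ("t", "t", 5),              # 4: t = 0
    ("one", "t", 2),            # 5: t = -1, always negative, so always jump back to the test
]
run_urisc(program, mem)
print(mem["count"])   # 5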

I am just shy of the Turing completeness section of my Theory of Computation class, but as far as this proof goes, am I missing anything?

quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: Single instruction, Turing complete architecture

Postby quintopia » Thu Oct 18, 2007 4:58 pm UTC

Well, you can obviously use any symbol, you can move the tape left and right by incrementing and decrementing memory locations, and you can do branches from state to state using the control structure you defined. You convinced me. Now write a quine in OISC.

dshigure
Posts: 8
Joined: Thu Oct 04, 2007 1:08 am UTC

Re: Single instruction, Turing complete architecture

Postby dshigure » Thu Oct 18, 2007 9:33 pm UTC

quintopia wrote:
Well, you can obviously use any symbol, you can move the tape left and right by incrementing and decrementing memory locations, and you can do branches from state to state using the control structure you defined. You convinced me. Now write a quine in OISC.


Damn... apparently someone actually did write a quine in OISC.

http://members.tripod.com/rkusnery/quines.html

davean
Site Ninja
Posts: 2495
Joined: Sat Apr 08, 2006 7:50 am UTC

Re: Single instruction, Turing complete architecture

Postby davean » Fri Oct 19, 2007 7:44 pm UTC

If this interests you, you may also be interested in the Iota combinator.

Have you covered combinator calculus?

As for your proof, it is a decent start, but you are going about it a rather long way. Turing machines are quite a bit simpler than the model you are trying to prove equivalent, and proving it in the form you have selected leaves a lot of cases to check. Your proof there definitely isn't complete.

dshigure
Posts: 8
Joined: Thu Oct 04, 2007 1:08 am UTC

Re: Single instruction, Turing complete architecture

Postby dshigure » Fri Oct 19, 2007 8:43 pm UTC

davean wrote:
As for your proof, it is a decent start, but you are going about it a rather long way. Turing machines are quite a bit simpler than the model you are trying to prove equivalent, and proving it in the form you have selected leaves a lot of cases to check. Your proof there definitely isn't complete.


I wasn't sure how to go about mapping this to moving a tape back and forth, so I cheated a bit and tried to map this to another minimal language that has already been established to be Turing complete. I tried mapping it to brainf*ck, but wasn't sure how to go about dereferencing a pointer. I seem to remember a language called "Bare Bones" that achieves Turing completeness using not a whole heck of a lot more than this.

Now I need to go dig something up...

davean
Site Ninja
Posts: 2495
Joined: Sat Apr 08, 2006 7:50 am UTC

Re: Single instruction, Turing complete architecture

Postby davean » Fri Oct 19, 2007 8:46 pm UTC

Easy: use a positional counter that you increment and decrement?

quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: Single instruction, Turing complete architecture

Postby quintopia » Sat Oct 20, 2007 10:32 pm UTC

Um, that still requires memory indirection, I think. A nice thing to be able to do, that.

dshigure
Posts: 8
Joined: Thu Oct 04, 2007 1:08 am UTC

Re: Single instruction, Turing complete architecture

Postby dshigure » Tue Oct 23, 2007 5:01 pm UTC

Hmm, another week of Theory of Computation has made me pretty sure that an architecture is Turing complete if it supports these operations: reading, writing, and branching between states depending on what was read. With this architecture, I was able to show how to store arbitrary values and create control structures based on values that can potentially be fetched from arbitrary memory locations. Maybe my proof holds after all? Or is there some way that a system supporting arbitrary reading, writing, and conditional branching could fail to be Turing complete?

quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: Single instruction, Turing complete architecture

Postby quintopia » Tue Oct 23, 2007 7:43 pm UTC

Modify a DFA so that it outputs a symbol (of your choosing) on every state transition. Doesn't this do everything that you said?
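
(Sketching that in Python to make the point concrete: the machine below reads, writes a symbol on every transition, and branches on what it read, yet it is only finite-state. The particular machine is made up; it just echoes its input with 0s and 1s flipped.)

def mealy_run(transitions, start, inp):
    state, out = start, []
    for sym in inp:
        state, emitted = transitions[(state, sym)]  # branch on what was read, emit a symbol
        out.append(emitted)
    return "".join(out)

flip = {("q", "0"): ("q", "1"), ("q", "1"): ("q", "0")}
print(mealy_run(flip, "q", "0110"))   # 1001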

dshigure
Posts: 8
Joined: Thu Oct 04, 2007 1:08 am UTC

Re: Single instruction, Turing complete architecture

Postby dshigure » Wed Oct 24, 2007 12:47 am UTC

quintopia wrote:
Modify a DFA so that it outputs a symbol (of your choosing) on every state transition. Doesn't this do everything that you said?


Not quite. The DFA might be able to output something to a terminal, but the output is not available to be read as input later.

quintopia
Posts: 2906
Joined: Fri Nov 17, 2006 2:53 am UTC
Location: atlanta, ga

Re: Single instruction, Turing complete architecture

Postby quintopia » Wed Oct 24, 2007 1:13 am UTC

So when you say "arbitrary reading," what you actually mean is that the machine can read from both its input and its output? Perhaps you also mean the ability to read any symbol from anywhere in the input and output at any time?

With these caveats, I would hesitantly agree that any such machine is Turing complete.

However, I don't believe the above proof explicitly shows that OISC can read arbitrary locations in memory.

A TM can read "the last input" by moving the tape right until it hits an end-of-input flag (zero), then moving left once and reading.

How does OISC, given an input of length x, store every symbol in a different location, and read back the particular symbols it wants?
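
(Nobody spells it out in this thread, but the usual answer for subleq-style machines is that code and data share one memory, so a program can rewrite the operand field of one of its own instructions and thereby read or write a computed address. A sketch with a flat memory and a toy layout of my own choosing, where a negative branch target means halt:)

def run_flat(mem, max_steps=1000):
    pc = 0
    while pc >= 0 and max_steps > 0:
        a, b, target = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                       # subtract cell a from cell b
        pc = target if mem[b] < 0 else pc + 3  # branch if negative; negative target = halt
        max_steps -= 1
    return mem

# Six instructions in cells 0..17, then data: cell 18 = scratch,
# 19 = pointer (holds 22), 20 = constant -1, 21 = constant 0,
# 22 = the value we want to increment through the pointer.
mem = [
    18, 18, 3,    # scratch = 0
    19, 18, 6,    # scratch = -pointer, i.e. -22
    13, 13, 9,    # zero cell 13, the operand field of the instruction at 12
    18, 13, 12,   # cell 13 = 0 - (-22) = 22: the next instruction now targets cell 22
    20, 0, 15,    # patched at run time: cell 22 -= (-1), i.e. the pointed-to cell += 1
    21, 18, -1,   # scratch -= 0, still negative, so halt
    0, 22, -1, 0, 40,
]
run_flat(mem)
print(mem[22])   # 41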

Yakk
Poster with most posts but no title.
Posts: 11045
Joined: Sat Jan 27, 2007 7:27 pm UTC
Location: E pur si muove

Re: Single instruction, Turing complete architecture

Postby Yakk » Mon Oct 29, 2007 4:19 pm UTC

To show it Turing complete, I'd start by building an easier set of operations.

Each operation would take a fixed number of urisc instructions, but once this layer exists I would never write raw urisc instructions directly.

I'd reserve a bunch of memory for use by this emulation layer.

Say: three work registers, A, B, and OUT; one STACK register; one LOAD register.

Operations:
LOAD_A
LOAD_B
UNLOAD_OUT
POP[] (offset into STACK, into LOAD)
SAVE[] (offset into STACK, from LOAD)
SET[] (sets LOAD to an integer value)
PUSH (onto STACK)

All of these look easy to run.
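
For a sense of scale, here is one of them, LOAD_A (copy LOAD into A), spelled out as urisc triples and checked with the run_urisc sketch from earlier in the thread. The expansion is my own guess at the sort of thing Yakk means, with registers as named cells:

mem = {"LOAD": 9, "A": 3, "t": 0}
program = [
    ("A", "A", 1),     # A = 0
    ("t", "t", 2),     # t = 0
    ("LOAD", "t", 3),  # t = -LOAD
    ("t", "A", 4),     # A = 0 - (-LOAD) = LOAD
]
run_urisc(program, mem)
print(mem["A"])   # 9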

Next, some simple arithmetic.
OUT=A+B
OUT=A-B
I'll leave this nebulous -- basically, we implement only what we need.
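
For instance, OUT = A + B falls out of subtraction alone: negate A and B into a scratch cell, then subtract the scratch from a zeroed OUT. A sketch under run_urisc (names are mine; OUT = A - B works the same way):

mem = {"A": 5, "B": 7, "OUT": 0, "t": 0}
program = [
    ("t", "t", 1),      # t = 0
    ("A", "t", 2),      # t = -A
    ("B", "t", 3),      # t = -A - B
    ("OUT", "OUT", 4),  # OUT = 0
    ("t", "OUT", 5),    # OUT = 0 - (-(A + B)) = A + B
]
run_urisc(program, mem)
print(mem["OUT"])   # 12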

Next, control flow.
JUMP register stores an address.
LOAD_JUMP copies LOAD to JUMP.

IF goes to JUMP if OUT is non-zero.
CALL pushes the address of the next simulated instruction onto the RETURN stack, then goes to JUMP.
RETURN pops the address off of the stack into JUMP, then goes to JUMP.
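
One wrinkle worth showing: with only branch-if-negative, "IF OUT is non-zero" takes two tests, one on -OUT (which catches OUT > 0) and one on OUT itself (which catches OUT < 0). A sketch under run_urisc, where reaching the "then" instruction makes flag nonzero (the layout and names are mine):

mem = {"OUT": -3, "t": 0, "u": 0, "flag": 0}
program = [
    ("t", "t", 1),    # 0: t = 0
    ("OUT", "t", 5),  # 1: t = -OUT; branches to "then" when OUT > 0
    ("u", "u", 3),    # 2: u = 0
    ("t", "u", 5),    # 3: u = OUT; branches to "then" when OUT < 0
    (1, "u", 6),      # 4: reached only when OUT == 0; always branches past the "then" part
    (1, "flag", 6),   # 5: "then": flag becomes -1, i.e. nonzero
]
run_urisc(program, mem)
print(mem["flag"])   # -1 here; it would stay 0 if OUT were 0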

Now, while the program won't know where the CALL execution pointer is, the writer of the program will. :)

Then, if the above isn't sufficiently convincing, we write a simple Turing machine interpreter. But you basically have an assembly language.

The trick is you avoid working with your actual urisc instruction set -- instead, you figure out how to write the above ASM-like instructions, and "cross compile" them to urisc. Your actual urisc output will use a bunch of scratch registers (that I didn't mention!) and be quite inefficient: but you'll get a usable language quicker than trying to poke at raw urisc instructions.
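
A toy flavour of that cross-compile idea, and entirely my own sketch rather than Yakk's scheme: each pseudo-op is a small Python function that emits urisc triples, with branch-to-next targets filled in from the current program length. Here SET[] from the list above expands to a zeroing step plus n increments (it assumes a minus_one cell was set up first, as in the opening post), and the result can be run with the run_urisc sketch from earlier:

def emit_zero(prog, cell):
    prog.append((cell, cell, len(prog) + 1))  # cell = 0; never negative, just falls through

def emit_set(prog, cell, n, minus_one="minus_one"):
    emit_zero(prog, cell)
    for _ in range(n):
        prog.append((minus_one, cell, len(prog) + 1))  # cell += 1

prog = [(1, "minus_one", 1)]   # minus_one = -1
emit_set(prog, "LOAD", 3)      # SET LOAD to 3: expands into four more urisc instructions
mem = {"LOAD": 99, "minus_one": 0}
run_urisc(prog, mem)
print(mem["LOAD"])   # 3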

To convince yourself, just see how easy it is to implement each of the above. :0

...

As a bit of fun, in the hardware course I took, we were required to implement that urisc chip in an HDL, and then run it through (virtual) tests. My proof of the maximum clock speed was to set up a bunch of special latches that signaled when the chip was steady, run the chip at a very slow clock speed, and then use Perl to find the slowest latch to settle. Then I ran it at the predicted max speed, and it worked. Then I ran it at faster than the predicted max speed, and it failed.

Proof by Perl program. ;)
One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision - BR

Last edited by JHVH on Fri Oct 23, 4004 BCE 6:17 pm, edited 6 times in total.

