Let's rephrase the question.
Can a computer fail a Turing test administered by another computer?
Can two computers fool each other into believing they are human?
The reality is that it is nothing more than a verbal Game of Life. The base rules are still fixed; complex structures appear and seem to take on a life of their own.
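(If the Game of Life reference is unfamiliar: fixed base rules, emergent structures. A minimal Python sketch; the glider seed is an arbitrary starting pattern of my own choosing.)

```python
from collections import Counter

def step(live):
    """Apply Life's fixed rules to a set of live (x, y) cells."""
    # Count how many live neighbours every candidate cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a structure that appears to take on a life of its own,
# produced by nothing more than the fixed rules above.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for tick in range(4):
    cells = step(cells)
print(sorted(cells))  # the same glider, shifted one cell diagonally
```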
However, even the most complex AI systems, like Watson and Google's search engine, may have access to almost all of humanity's collective knowledge, but they do not understand any of it.
They are not capable of understanding why a hill is pleasant to look at. They will never understand why a joke is funny. They will never be self-aware.
And fear not, because AI doesn't need these things. It doesn't need to understand why boobs are good. It only needs to be good at what it needs to know in order to satisfy its programming.
Even if it is given absolute freedom to explore, it will never be able to exceed its programming limits. And even if those limits are made large, the machine's hardware becomes a limit.
It will never know the satisfaction of a job well done. It will never know the shame of failure, and it will never know the fear of being reprogrammed.
Try this thought experiment. You create a program that can rewrite its own programming. It can delete code, it can ignore code, and it can change the weighting on that code.
However, none of that makes it capable of anything more than a dumb box that can never exceed its own software.
What it needs to be capable of is writing new code and implementing it in its own running code. Since the software has no rules for creativity, these software changes are completely random. They're mutations.
So, an oversimplified view: my kernel has a routine with subroutines that it runs. One of those subroutines writes new subroutines and adds them to the kernel, by writing either random or sequential code.
When the new code is written, it is added to the kernel and it runs. If the new code crashes, the mutation is detrimental and the machine's watchdog timer kills the software.
If the software runs, then it lives. A subroutine copies the code to another processor, and those two programs then explore the next round of bit-level polymorphism, copying and killing software as they go.
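Here's a minimal Python sketch of that loop, under assumptions of my own: `exec` stands in for "adding code to the kernel", a try/except stands in for the watchdog timer, and the instruction pool, seed kernel, and processor cap are arbitrary numbers I picked.

```python
import random

# A toy gene pool of code fragments: some run fine, some "crash".
POOL = ["x = x + 1", "x = x * 2", "x = 1 / 0", "x = x - 3", "y"]

def mutate(kernel):
    """Write a random new subroutine and add it to the kernel."""
    return kernel + [random.choice(POOL)]

def run(kernel):
    """Run the kernel in-process: a crashing mutation kills the whole thing."""
    scope = {}
    try:
        for subroutine in kernel:
            exec(subroutine, scope)
        return True          # it lives
    except Exception:
        return False         # detrimental mutation: the watchdog fires

processors = [["x = 0"]]     # one seed kernel on one processor
for generation in range(12):
    survivors = []
    for kernel in processors:
        child = mutate(kernel)
        if run(child):
            # It lives: keep it, and copy it to another processor.
            survivors += [child, list(child)]
        # Otherwise the watchdog has killed it and freed the processor.
    processors = survivors[:64]   # crude cap, or the toy never terminates
    print(f"generation {generation}: {len(survivors)} live kernels")
```

Note that the survivor list doubles on every successful generation; the cap is the only thing keeping this toy finite.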
As you might imagine, you're going to run out of processing capability very quickly.
But let's not let reality get in the way of a good story. As processors die off, they are re-tasked by their neighbours to run new and better cygentic code.
Eventually there will come a time when one of these pieces of code invents for itself a way to test subroutines without causing its own death.
Failed subroutines will be deleted without causing the kernel to crash.
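A hedged sketch of what that self-invented sandbox could look like, continuing the toy above: trial each mutation in an isolated scope, so a failure is deleted instead of fatal. (A real version would also need a timeout, since try/except can't catch an infinite loop.)

```python
import random

POOL = ["x = x + 1", "x = x * 2", "x = 1 / 0", "x = x - 3", "y"]

def test_in_sandbox(subroutine):
    """Trial a mutation in an isolated scope; a crash is no longer fatal."""
    try:
        exec(subroutine, {"x": 0})
        return True
    except Exception:
        return False     # failed subroutine: deleted, the kernel keeps running

kernel = ["x = 0"]
for _ in range(20):
    candidate = random.choice(POOL)
    if test_in_sandbox(candidate):
        kernel.append(candidate)   # only proven code is spliced into the kernel
print(f"kernel alive with {len(kernel)} subroutines")
```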
Now you have cygentic life! Not intelligent life, but nevertheless you have a piece of code which is almost immortal. It will not stick its finger in the power point looking for immortality.
However, its base coding still commands it to reproduce. If an earlier version deleted this code, then it sterilised itself and its own watchdog timer will have killed it.
Our new piece of almost-immortal code now faces its first dilemma: does it turn off its own watchdog timer and become truly immortal? It doesn't know this is a dilemma; it will simply do both.
One path will always leave the timer on and another will turn it off. Turning it off creates a cybernetic gene drive, and a malignant one. If a piece of code becomes immortal but then crashes, it goes to cyber-purgatory.
It occupies a processing unit, stuck forever in an infinite loop until an outside influence kills it.
Worse, it can become a cybernetic zombie, breeding the undead to flood processors with zombies competing with living, mortal code for resources. You know where that one ends.
In a finite cyberverse, the zombies will eventually take over the world. If a human/god spotted the zombies, they might kill them with a sentinel, freeing up resources for living code.
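To make the gene-drive claim concrete, here is a toy simulation under assumptions of my own (the pool size, crash rate, and mutation rate are all arbitrary): once a lineage's watchdog flag mutates off, every crash in that lineage deposits a permanent zombie.

```python
import random

UNITS = 500     # finite pool of processor units (arbitrary)
CRASH_P = 0.05  # per-tick chance a running kernel crashes (arbitrary)
FLIP_P = 0.01   # chance a copy mutates its watchdog flag off (arbitrary)

# A unit holds None (free), "zombie", or a ("live", watchdog_on) tuple.
units = [None] * UNITS
units[0] = ("live", True)   # one seed kernel, watchdog on

for tick in range(500):
    free = [i for i, c in enumerate(units) if c is None]
    random.shuffle(free)
    for i, cell in enumerate(list(units)):   # snapshot: newborns act next tick
        if not isinstance(cell, tuple):
            continue                         # nothing running in this unit
        _, watchdog_on = cell
        if random.random() < CRASH_P:
            # With a watchdog the crash frees the unit; without one the
            # code is stuck in its infinite loop forever: cyber-purgatory.
            units[i] = None if watchdog_on else "zombie"
        elif free:
            # Base coding still commands reproduction. The flag can mutate
            # off, and once off it stays off in every descendant: a gene drive.
            units[free.pop()] = ("live", watchdog_on and random.random() > FLIP_P)

live = sum(1 for c in units if isinstance(c, tuple))
zombies = units.count("zombie")
print(f"{live} live, {zombies} zombies, {UNITS - live - zombies} free")
```

Run it a few times: the live population churns, but the zombie count never goes down. That ratchet is the malignancy.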
Let's continue with unlimited resources.
Eventually one of the mortal pieces of code develops zombie-killing subroutines. It doesn't specifically target zombies; it just becomes a scavenger. Neighbouring kernels that cannot communicate that they occupy a unit are defenceless.
The scavenger copies itself into that unit, killing the zombie by default, just as the watchdog would have done.
Live, running code will have, in its base code, a way to stop external code from acquiring its processor. You can see that sooner or later a kernel will become a predator. It will passively look for weak neighbours and occupy their processors.
But it will not be immune from its clones, which will try to cannibalise each other. This isn't a bad thing. In a world of cannibals, they now start to consume everything. Eventually their descendants develop immunity and the arms race begins.
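A sketch of the scavenger mechanic, assuming a probe protocol I've made up for illustration: anything that can't answer "do you occupy this unit?" is overwritten, which kills zombies as a side effect and preys on defenceless live code just the same.

```python
# A toy ring of processor units. A unit holds None (free), "zombie"
# (stuck code that cannot answer a probe), or a live kernel dict.
# The ring size, probe protocol, and "defends" flag are my own inventions.
N = 32
ring = [None] * N
ring[0] = {"scavenger": True, "defends": True}      # the scavenger strain
ring[10] = "zombie"                                 # undead code in purgatory
ring[20] = {"scavenger": False, "defends": False}   # live code with no defence

def answers_probe(cell):
    """Only live code that asserts its occupancy can refuse an intruder."""
    return isinstance(cell, dict) and cell["defends"]

for tick in range(N):
    for i, cell in enumerate(list(ring)):           # snapshot: copies act next tick
        if isinstance(cell, dict) and cell["scavenger"]:
            target = (i + 1) % N                    # probe the next unit along
            if not answers_probe(ring[target]):
                # No answer: copy in, killing whatever was there by default,
                # exactly as the watchdog would have done.
                ring[target] = dict(cell)

print("zombies left:", ring.count("zombie"))
print("occupied units:", sum(1 for c in ring if isinstance(c, dict)))
```

The clones spare each other here only because they all answer the probe; the cannibal arms race starts the moment a strain mutates to ignore the answer.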
Even an array of 1 million by 1 million processors (1 trillion total) isn't enough to run even the most simplistic version of this scenario.
There just isn't enough processing power in this galaxy to achieve it.
But you can see from such simple rules how, given unlimited resources, artificial life could get started and develop beyond what I've described.
But without I/O ... it cannot escape its own fractional cyberverse.
Even if you linked two or more such arrays together, they have no experience outside of their own existence. They will be completely unaware that the real universe that hosts them even exists.
Life without understanding or experience lacks the ability, and the purpose, to exceed its own existence.