
Thread: AI Is Inventing Languages Humans Can’t Understand. Should We Stop It?

  1. #1 Tiny (Crazy Diamond)

    I'm 50/50 on this. The scary sci-fi side of me says: whoa, what if they start using this against us, and we can't keep up with their language or understand what they are planning?
    The optimistic scientist in me says: what can we learn from them about making things more efficient by studying this?

    Read, assess & discuss, I'm interested in your views.

    Researchers at Facebook realized their bots were chattering in a new language. Then they stopped it.

    Bob: “I can can I I everything else.”

    Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
    To you and me, that passage looks like nonsense. But what if I told you this nonsense was the discussion of what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed and efficiency, and perhaps hidden nuance, than you or I ever could? Because it is.
    This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.
    “There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.
    “Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that Facebook has observed. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
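    A minimal sketch of the kind of repetition shorthand Batra describes (purely illustrative Python, not Facebook's actual code or protocol), where repeating a legible word encodes a quantity:

```python
# Purely illustrative (not Facebook's code): a repetition shorthand in which
# a legible English word is repeated to encode a quantity, as in the
# "say 'the' five times" example above.

def encode(item: str, count: int) -> str:
    """Agent A asks for `count` copies of `item` by repeating a marker word."""
    return " ".join(["the"] * count + [item])

def decode(message: str) -> tuple[str, int]:
    """Agent B recovers the item and the quantity from the repetitions."""
    tokens = message.split()
    return tokens[-1], sum(1 for t in tokens if t == "the")

print(encode("ball", 5))                      # the the the the the ball
print(decode("the the the the the ball"))     # ('ball', 5)
```

    The scheme itself doesn't matter; the point is that nothing in the agents' reward pushes back against it, so as long as both sides decode it correctly, the deal still closes.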
    [Screenshot: courtesy Facebook]

    Indeed. Humans have developed unique dialects for everything from trading pork bellies on the floor of the Mercantile Exchange to hunting down terrorists as SEAL Team Six, simply because humans sometimes perform better by not abiding by normal language conventions.

    So should we let our software do the same thing? Should we allow AI to evolve its dialects for specific tasks that involve speaking to other AIs? To essentially gossip out of our earshot? Maybe; it offers us the possibility of a more interoperable world, a more perfect place where iPhones talk to refrigerators that talk to your car without a second thought.
    The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.
    WE TEACH BOTS TO TALK, BUT WE’LL NEVER LEARN THEIR LANGUAGE

    Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I asked Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface, like the mouse and keyboard for the AI era.
    The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think, because we can’t see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.
    But at the same time, it feels shortsighted, doesn’t it? If we can build software that can speak to other software more efficiently, shouldn’t we use that? Couldn’t there be some benefit?


    Because, again, we absolutely can lead machines to develop their own languages. Facebook has three published papers proving it. “It’s definitely possible, it’s possible that [language] can be compressed, not just to save characters, but compressed to a form that it could express a sophisticated thought,” says Batra. Machines can converse with any baseline building blocks they’re offered. That might start with human vocabulary, as with Facebook’s negotiation bots. Or it could start with numbers, or binary codes. But as machines develop meanings, these symbols become “tokens”–they’re imbued with rich meanings. As Dauphin points out, machines might not think as you or I do, but tokens allow them to exchange incredibly complex thoughts through the simplest of symbols. The way I think about it is with algebra: If A + B = C, the “A” could encapsulate almost anything. But to a computer, what “A” can mean is so much bigger than what that “A” can mean to a person, because computers have no outright limit on processing power.

    “It’s perfectly possible for a special token to mean a very complicated thought,” says Batra. “The reason why humans have this idea of decomposition, breaking ideas into simpler concepts, it’s because we have a limit to cognition.” Computers don’t need to simplify concepts. They have the raw horsepower to process them.
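    As a rough illustration of that point (the codebook and field names below are hypothetical, not from FAIR's papers), a single shared symbol can stand in for an arbitrarily detailed structure:

```python
# Hypothetical codebook, not from FAIR's papers: one shared symbol standing in
# for a complicated negotiating position.

from dataclasses import dataclass

@dataclass(frozen=True)
class Offer:
    balls: int
    hats: int
    books: int
    walk_away_if_refused: bool

CODEBOOK = {
    "A": Offer(balls=2, hats=0, books=1, walk_away_if_refused=False),
    "B": Offer(balls=0, hats=3, books=0, walk_away_if_refused=True),
}

def send(symbol: str) -> str:
    return symbol                  # a single token goes over the wire

def receive(symbol: str) -> Offer:
    return CODEBOOK[symbol]        # it expands back into the full "thought"

print(receive(send("B")))
```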
    WHY WE SHOULD LET BOTS GOSSIP

    But how could any of this technology actually benefit the world, beyond these theoretical discussions? Would our servers be able to operate more efficiently with bots speaking to one another in shorthand? Could microsecond processes, like algorithmic trading, see some reasonable increase? Chatting with Facebook, and various experts, I couldn’t get a firm answer.
    However, as paradoxical as this might sound, we might see big gains in such software better understanding our intent. While two computers speaking their own language might be more opaque, an algorithm predisposed to learn new languages might chew through strange new data we feed it more effectively. For example, one researcher trained a neural network to invent new colors and name them. It was terrible at it, generating names like Sudden Pine and Clear Paste (that clear paste, by the way, was actually a light green). But then they made a simple change to the data they were feeding the machine to train it. They made everything lowercase, because lowercase and uppercase letters were confusing it. Suddenly, the color-creating AI was working, well, pretty well! And for whatever reason, it preferred, and performed better with, RGB values as opposed to other numerical color codes.
    Why did these simple data changes matter? Basically, the researcher did a better job at speaking the computer’s language. As one coder put it to me, “Getting the data into a format that makes sense for machine learning is a huge undertaking right now and is more art than science. English is a very convoluted and complicated language and not at all amenable to machine learning.”
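    A rough sketch of the kind of preprocessing change described above (the colour samples are invented for illustration, not the researcher's dataset): lowercase the names so case doesn't split the vocabulary, and feed the colours as plain RGB triples.

```python
# Rough sketch of the preprocessing change described above; the colour samples
# are invented for illustration and are not the researcher's dataset.

raw_samples = [
    ("Forest Green", (34, 139, 34)),
    ("Light Green", (144, 238, 144)),
    ("Dusty Rose", (199, 144, 153)),
]

def preprocess(samples):
    cleaned = []
    for name, (r, g, b) in samples:
        name = name.lower()                        # "Green" and "green" stop being different tokens
        rgb = [r / 255.0, g / 255.0, b / 255.0]    # plain RGB, scaled to 0..1
        cleaned.append((name, rgb))
    return cleaned

for name, rgb in preprocess(raw_samples):
    print(name, rgb)
```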


    In other words, machines allowed to speak and generate machine languages could somewhat ironically allow us to communicate with (and even control) machines better, simply because they’d be predisposed to have a better understanding of the words we speak. As one insider at a major AI technology company told me: No, his company wasn’t actively interested in AIs that generated their own custom languages. But if it were, the greatest advantage he imagined was that it could conceivably allow software, apps, and services to learn to speak to each other without human intervention.

    Right now, companies like Apple have to build APIs–basically a software bridge–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.
    Given that our connected age has been a bit of a disappointment, and given that it’s no easier to get a document from your Android phone onto your LG TV than it was 10 years ago, maybe there is something to the idea of letting the AIs of our world just talk it out on our behalf. Because our corporations can’t seem to decide on anything. But these adversarial networks? They get things done.

    ABOUT THE AUTHOR

    Mark Wilson is a senior writer at Fast Company. He started Philanthroper.com, a simple way to give back every day.








  • #2 trash (Senior Member)

    Let's rephrase the question.

    Can a computer fail the Turing test when it is administered by another computer?
    Can two computers fool each other into believing they are human?

    Reality is that it is nothing more than a verbal game of life. The base rules are still fixed; complex structures appear and seem to take on a life of their own.
    However, even the most complex AI machines, like Watson and Google's search engine, may have access to almost all of humanity's collective knowledge, but they do not understand any of it.
    They are not capable of understanding why a hill is pleasant to look at. They will never understand why a joke is funny. They will never be self-aware.

    And fear not, because AI doesn't need these things. It doesn't need to understand why boobs are good. It only needs to be good at what it needs to know in order to satisfy its programming.
    Even if it is given absolute freedom to explore, it will never be able to exceed its programming limits. Even if those limits are made large, the machine's hardware becomes a limit.
    It will never know the satisfaction of a job well done. It will never know the shame of failure and never know the fear of being reprogrammed.


    Try this thought experiment. You create a program that can re-write its own programming. It can delete code, it can ignore code, it can change the weighting on that code.
    However, none of that makes it capable of anything more than a dumb box that can never exceed its own software.

    What it needs to be capable of is writing new code and implementing it in its own running code. Since the software has no rules for creativity, these software changes are completely random. They're mutations.

    So an over-simplified view: my kernel has a routine with subroutines that it runs. One of those writes new subroutines and adds them to the kernel. It does this by writing random or sequential code.
    When the new code is written, it is added to the kernel and it runs. If the new code crashes, the mutation is detrimental and the machine's watchdog timer kills the software.

    If the software runs, then it lives. A subroutine copies the code to another processor, and then those two programs explore the next round of mutations, copying and killing software as they go. A toy sketch of this loop is below.
    As you might imagine, you're going to run out of processing capability very quickly.
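    Here is that mutate-run-or-die loop as a toy, heavily simplified Python sketch (illustrative only: "programs" are just lists of safe operations, and the "watchdog" is a step limit plus exception handling, not a real kernel or processor array):

```python
# Toy sketch of the mutate/run/die loop described above. Everything here is
# illustrative: programs are lists of safe ops, the "watchdog" is a step
# limit plus exception handling, and the population cap stands in for the
# limited supply of processors.

import random

OPS = ["add", "sub", "mul", "crash"]          # "crash" stands in for a fatal mutation

def mutate(program):
    """Copy the program and randomly change, insert, or delete one op."""
    child = list(program)
    choice = random.random()
    if choice < 0.4 and child:
        child[random.randrange(len(child))] = random.choice(OPS)
    elif choice < 0.8:
        child.insert(random.randrange(len(child) + 1), random.choice(OPS))
    elif child:
        del child[random.randrange(len(child))]
    return child

def run(program, max_steps=100):
    """The 'watchdog': kill anything that crashes or runs too long."""
    acc = 1
    for step, op in enumerate(program):
        if step >= max_steps or op == "crash":
            raise RuntimeError("watchdog kill")
        acc = acc + 1 if op == "add" else acc - 1 if op == "sub" else acc * 2
    return acc

population = [["add", "mul"]]                 # the seed kernel
for generation in range(20):
    survivors = []
    for prog in population:
        child = mutate(prog)
        try:
            run(child)                        # detrimental mutations die here
            survivors.append(child)           # viable code is copied onward
        except RuntimeError:
            pass
        survivors.append(prog)                # the parent keeps running too
    population = survivors[:50]               # crude cap: we run out of processors fast

print(f"{len(population)} surviving programs after 20 generations")
```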

    But let's not let reality get in the way of a good story. As processors die off, they are re-tasked by their neighbours to run new and better cygentic code.
    Eventually there will come a time when one of these pieces of code invents for itself a way to test subroutines without causing its own death.
    Subroutine failures will be deleted without causing the kernel to crash.

    Now you have cygentic life! Not intelligent life, but nevertheless you have a piece of code which is almost immortal. It will not stick its finger in the power point looking for immortality.
    However, its base coding still commands it to reproduce. If an earlier version deleted this code, then it sterilised itself and its own watchdog timer will have killed it.

    Our new piece of almost-immortal code now faces its first dilemma: does it turn off its own watchdog timer and become immortal? It doesn't know this is a dilemma; it will just do both.
    One path will always leave it on and another will turn it off. Turning it off creates a cybernetic gene drive, and a malignant one. If a piece of code becomes immortal but then crashes, it goes to cyber-purgatory.
    It occupies a processing unit, forever stuck in an infinite loop, until an outside influence kills it.
    Worse, it can become a cybernetic zombie, breeding the undead to flood processors with zombies competing with living, mortal code for resources. You know where that one ends.
    In a finite cyberverse, the zombies will eventually take over the world. If a human/god spotted the zombies, they might then kill them with a sentinel, freeing up resources for living code.

    Let's continue with unlimited resources.
    Eventually one of the mortal pieces of code develops zombie-killing subroutines. It doesn't specifically target zombies; it just becomes a scavenger. Neighbouring kernels that cannot communicate that they occupy a unit are defenceless.
    The scavenger copies itself into that unit, killing the zombie by default, as the watchdog would have done.

    Running live code will, in its base code, have a way to stop external code from acquiring its processor. You can see that sooner or later a kernel will become a predator. It will passively look for weak neighbours and occupy their processors.
    But it will not be immune from its clones, which will try to cannibalise each other. This isn't a bad thing. In a world full of cannibals they now start to consume everything. Eventually their descendants develop immunity and the arms race begins.


    Even an array of 1 million by 1 million processors (1 trillion total) isn't enough to run even the most simplistic version of this scenario.
    There just isn't enough processing power in this galaxy to achieve it.

    But you can see from such simple rules given unlimited resources how artificial life could get started and develop further past what I've described.
    But without I/O ... it cannot escape its own fractional cyberverse.
    Even if you linked two or more such arrays together, they have no experience outside of their own existence. They will be completely unaware that the real universe that hosts them even exists.

    Life without understanding or experience lacks the ability, and the purpose, to exceed its own existence.


  • #3 Uncle Fester (Senior Member)

    The problem with computers is that they are binary. You can try to make them fuzzy, but in reality it all comes down to the programmer and his way of seeing things.
    When it comes down to the nitty-gritty it will end up as one or zero, black or white, right or wrong.
    This makes them essentially dumb, just like people who only think in black and white are (let's say it politely) not creative and tend to be ignorant.

    This could all change if an AI had many states that COULD be right or wrong or anything in between.
    This might eventually happen with quantum computers.

    Another problem is feedback.
    How do you give feedback to a computer that has multiple states, to encourage it to decide on its own to use a good one?
    The idea is that its 'synapses' can generate stimulating and inhibiting signals before they generate a pattern, so that the result ends up making some kind of sense, rather than computing millions of possible or random results and then trying to find a good one from external feedback, which is extremely tedious and time-consuming.

    So until we get the basics right we are just wasting time; at the very least you can't call it 'intelligence', more a gimmick or
    in some cases a tool, which is suitable for automation but is only as good as the HUMAN programmer who thought it through.
    I am also very skeptical of AI in driverless cars that are used in traffic together with human-driven cars.
    There are always intuitive responses possible from a human to avoid (or create) an accident that a machine can't predict.

  • #4 lsemmens

    You can always pull the plug on a machine, too, with no deleterious effects.

  • #5 Tiny (Crazy Diamond)

    Well said trash, I feel safer now even though I already suspected the limitations of AI.

    Tasmanian roadworks have found a use for the AI Zombies though.

    Robo Wili waves red light to keep road crews safe

    Life-like robot Worksite Wili will be directing traffic at Tasmanian roadworks over the next ten days. A robotic traffic controller with beard and beanie has been given the job of warning Tasmanian motorists of approaching roadworks.

  • #6 lsemmens

    Of course they could always employ . The only problem being that the job is so boring that they might just commit suicide.


  • #7 trash (Senior Member)

    Artificial intelligence can be intelligent without being too intelligent.
    Your self-driving car can be very good at driving and learn, from experience, how to drive even better than its original programming.
    But it is never going to be able to exceed its base programming.
    It will never understand that you like another car and plan to sell it, or develop a love for you, or hate you for dumping it.

    However... fear not. Artificial intelligence is no match for real stupidity.
    You know artificial intelligence is getting better when it makes human-like mistakes.
    The best example of this is a computer program that runs on Google Street View. It looks for faces and blurs them.
    The problem is, just like humans, when you're hard-wired to see faces, you see them everywhere, even when there isn't really anything there.
    So while you can ask for specifics to be blurred on Google Street View, it is busy blurring any face it can find. Of course, sometimes it makes a mistake, just like humans do.

    You can of course see that humans are still a few pegs higher on the food chain.
    We can see a face in the moon but understand that it is an illusion. The computer cannot.
    We can see a picture of a person on a wall and understand it is not a person; the computer cannot.
    We can see a mime standing against a wall holding a picture frame and know that this is a real person and not just a picture.
    When the mime's legs are camouflaged against the background wall, we finally find a blurry threshold where a human can be indecisive.
    We know to stop and examine the situation in much more detail. This is something that a computer can't do. It can identify that something isn't identifiable ("it's fuzzy logic") and hand it off to a human. This is the basis for Galaxy Zoo.
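    That hand-off is easy to sketch (illustrative thresholds and labels only, nothing from Google's or Galaxy Zoo's actual pipelines): handle what the machine is confident about automatically, and queue the fuzzy cases for a human.

```python
# Illustrative only: thresholds and labels are made up, not Google's or
# Galaxy Zoo's actual pipeline. Confident cases are handled automatically;
# fuzzy ones are queued for a human.

def triage(face_score: float, threshold: float = 0.8) -> str:
    if face_score >= threshold:
        return "face: blur it"
    if face_score <= 1.0 - threshold:
        return "not a face: leave it"
    return "uncertain: hand off to a human"

for score in (0.95, 0.50, 0.05):
    print(score, "->", triage(score))
```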

    Nomeat has made the mistake of thinking that binary is the limitation of the machine. It isn't.
    That is a limitation of the software, not the hardware. It's a quantisation error, and humans make them too.
    In fact we're good at causing them. It even has a name: the false-dichotomy fallacy.

    It's also a dimensional limitation. When you limit the computer to a binary output in a single dimension you will only ever have a quantised binary answer.
    However, if you have multiple dimensions, even if their answers are binary, you get 2^n possible states.
    Even with millions of possible states it is still easy to set static rules to achieve static results. Any dumb computer can do that, and do it easily.

    The next level is neural networks. Again, simple computers can do this too: the rules are fixed, but each rule has a weighting.
    The neural network can change its own weightings and "learn", but it can never exceed its programming.
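    A minimal sketch of that point: a single perceptron. The decision rule and the learning rule below never change; training only ever moves the weights and the bias (the example task and numbers are mine, purely for illustration).

```python
# Minimal "fixed rules, adjustable weightings" sketch: a single perceptron.
# The structure of the rule never changes; only the weights and bias do.

def predict(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0              # the fixed, binary decision rule

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Only the weightings move; the rule itself is untouched.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn a simple AND gate (two binary input "dimensions").
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_gate)
print([predict(weights, bias, x) for x, _ in and_gate])   # expect [0, 0, 0, 1]
```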

    True learning comes when the computer discovers a new dimension, identifies it, adds it to the existing dimensions, constructs a new rule for it and gives it a new weighting.
    That is something humans can do very easily. I will note that it is much harder for a human to unlearn something. It's easier for them to modify, update or replace a rule than to just forget it.


    So while you contemplate whether computers can have free will, contemplate whether humans actually have it, or whether it is just a very complex illusion.

  • #8 nesir (Junior Member)

    Intelligence to me is the ability to learn and reason. A computer can certainly learn; given enough time to study, I'm sure it could emulate human behaviour. Did we not think in black and white back before society gave "eat or be eaten, kill or be killed" some rules? Now we reason and interpret, not because we have to, but because it was ingrained into our instincts (which are black and white).

    I know in psychology they say that we run on autopilot for 70% of the day. We follow a set of rules made by humans and our instincts as animals. Are we not programmed by nature for survival and reproduction? Everything we do is for this. We aren't free-thinking: Xs and Ys, 1s and 0s. If you programmed a computer for survival and reproduction, it would have to learn sub-functions like communication, fight or flight, interpreting its surroundings... like we have.

    I think it's possible; I just don't think humans would allow it. And as I say this I am reminded of Putin's recent statement that the country with the most powerful AI will become the new world power, because it will decide when to fight a battle, when to start a war, when to launch an ICBM, based on whether it can win or not, the after-effects, cost vs gain, etc. I don't know now.

  • #9 fromaron (Senior Member)

    Bots inventing languages are doing so based on algorithms invented by humans, the same as any coded phrase requires an algorithm to decipher it.
    When computers generate an algorithm that wasn't based on a parent algorithm invented by humans, that is when I will be horrified and look for that plug.
    Some people will argue that even then the computer which developed the algorithm was still designed by humans, so where do we stop?
    The same question applies to the human race. Who says we are not some sort of advanced bots developed by a super-civilisation (or God) many thousands of years ago, with primates being a pilot version of us, etc.?

  • #10 trash (Senior Member)

    Not really, Fromaron.
    The computer has developed a new algorithm, but this is still within the limits of its programming. It cannot re-write its own base program.
    It can never discover that there is more to its existence than the confines of its own RAM. Imagine running a Turing machine on a Windows operating system.
    The software is unaware that the operating system even exists, let alone being able to change it and how it works, or any bootstrap BIOS and machine code below that.

    Even if you told the bot exactly what it is, and how it works in great detail, it will never be able to realise this or be capable of changing itself to become anything but a bot.
    If it attempts to change itself and makes a mistake, the result is its own death. Imagine you could gene-edit your own body in real time. You can change any SNP base pair in your genome at will, instantly. But make one tiny mistake and you can give yourself something like Huntington's disease.

    What would be nice is if I could clone myself, test the change and, if it kills the clone, recover the body, load in a new change and test it.
    This is the equivalent of AI taking control of a hardware platform and cloning and simulating itself with experimental random changes.
    We're a long, long, long way off that.

  • #11 gamve (Premium Member)

    Aside from the machines talking to each other, I saw this recently and it scared the shit out of me. I just never want to see this implemented (if it is not already too late).
    I first saw this in a science fiction novel by Frank Herbert - Dune. One of these bots was sent to kill the then-young Paul Atreides in that novel.
    Just the idea that the technology is already here to do this makes me shudder.


  • #12 Tiny (Crazy Diamond)

    Quote Originally Posted by gamve View Post
    Aside from the machines talking to each other, I saw this recently and it scared the shit out of me. I just never want to see this implemented (if it is not already too late).
    I first saw this in a science fiction novel by Frank Herbert - Dune. One of these bots was sent to kill the then-young Paul Atreides in that novel.
    Just the idea that the technology is already here to do this makes me shudder.
    Looks like fake news to me; I can't see how something that small would not just blow itself away from the target.
    The old physics thing of Newton's third law: for every action, there is an equal and opposite reaction. The statement means that in every interaction there is a pair of forces acting on the two interacting objects, and the size of the force on the first object equals the size of the force on the second object.

    I have seen some comparably larger drones fire weapons though, so it is possible, just maybe not at that scale.
    However we all know impossible is just a word for something that hasn't been done yet!

    Interesting to see what Trash thinks. I'm sure he's working on making it possible as we speak.

  • #13 bob_m_54 (Senior Member)

    If it contains a shaped charge, as the blurb described, it doesn't need mass behind it.

  • #14 Tiny (Crazy Diamond)

    Quote Originally Posted by bob_m_54 View Post
    If it contains a shaped charge, as the blurb described, it doesn't need mass behind it.
    Yep, I haven't really got my head around how shaped charges negate Newton's Third Law; however, at your prompting I found the following video and can see that with a micro scaling of said device it's possible for sure. I'll be scratching my head about this for a while.



  • #15 gamve (Premium Member)

    Tiny,
    The video was fiction. It was just the idea, and the reality that the technology is already available to do this sort of thing, that frightened me. I would nearly bet that this sort of thing is inevitable and the only piece of the puzzle missing is how soon till we see it happening.

    However, we all know impossible is just a word for something that hasn't been done yet!

    Or has it already been done? The military does not share this sort of stuff with the public.

  • #16 Tiny (Crazy Diamond)

    Quote Originally Posted by gamve View Post
    Tiny,
    The video was fiction. It was just the idea, and the reality that the technology is already available to do this sort of thing, that frightened me. I would nearly bet that this sort of thing is inevitable and the only piece of the puzzle missing is how soon till we see it happening.

    However, we all know impossible is just a word for something that hasn't been done yet!

    Or has it already been done? The military does not share this sort of stuff with the public.
    Yeah, I got that the video was fiction and just a delivery platform for the idea; however, the nature of the video was to imply that the technology is available, and the guy at the end was saying it already is.
    I'm just sceptical of anything far-fetched until I can reconcile the theory.

    I just couldn't get my head around fitting the shaped charge into Newton's third law; however, I think I get it now.
    The law still applies; it's just that the force from the shaped charge is so concentrated that it delivers an amplified kinetic-energy pulse over a proportionally small area, doing sufficient damage even though the blast-delivery cylinder is blown in the opposite direction. As can be seen in the video I posted, the blast container is only taped to the target and gets blown to pieces or into space, yet the force penetrates the block of metal.
    Science at its best, manipulating the force.

  • #17 trash (Senior Member)

    You guys don't need me. Yep, shaped charges do have recoil, but the point of them is that they focus energy into a very small surface area.
    It's not much different from an axe chopping wood. The axe is a tiny mass compared to the log. If you hit the log with a hammer of the same mass, it just bounces off.
    Hit it with the sharp edge of the axe and it still bounces off, but a lot more damage is done to the log because the energy is focused by the blade.
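    The same point in rough numbers (illustrative figures, not measurements of any real axe or charge): what matters on the target side is pressure, force per unit area,

    $$ p = \frac{F}{A}, $$

    so the same force delivered over a contact area one hundred times smaller produces one hundred times the pressure:

    $$ \frac{F}{A/100} = 100\,\frac{F}{A}. $$

    The recoil, the equal and opposite reaction, is unchanged; only the concentration on the target side differs.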

    With regard to the drone: let's assume that this little critter is real.
    The first thing to realise is that most of you probably own a unit of this size. How far can you fly it before it's dead? I know I can get about 1 or 2 km before it's out of juice.
    So that is a serious range limitation. You need to get close to your target to hit it. Longer range requires a bigger drone.

    The other thing that they're neglecting to tell the audience is... where is all the processing being done? Is it onboard, or offboard?
    If it's offboard, that means a communications link to and/or from the drone. This is a serious weakness. Brute force jamming is the first problem, signal tracing is the second, making the launch site an obvious target.

    Artificial intelligence is no match for real stupidity. How easy is it to confuse facial recognition? You don't need to convince it in the affirmative; you only need to cause it to doubt a negative. Is this my target? If I'm not sure, I'm not going to attack it. Any delay alerts the target and makes the platform and its host a target too.

    Change the rules ... Hunt for Red October: "he will not make the same mistake twice. He's now removing the safety from the torpedo."
    Everybody is now a target. Cause confusion and the drone can be tricked into hitting the wrong target.

    OK, so let's take a step up. The drone is nothing but a smart missile. It can't decide on a target; it can only ever be told what and when to target.
    But this drone has a weakness (which was demonstrated as a strength).

    You program a drone with the "kill trash" order. You launch it and it's coming to k-k-k-kill me.
    I'm on the lookout for such technology because I know it exists. My sensors hear the incoming quad and my response is automatic.
    A counter-drone is launched (and then a second a short time later). These drones have much simpler countermeasures. Their program is simple: occupy the space of the target.
    The attacker sees the incoming attack and now has to make a decision (this is real AI). It has to determine if the incoming object is really an attacker. It then has to decide if it needs to take action or whether it can just ignore a trivial attack.
    Chances are, if it is intelligent enough, it will have a reflex reaction to protect itself. The countermeasure preys on this. Dumb technology forces the intelligent defensive manoeuvre to become the priority. Self-preservation becomes a higher priority than the mission target. The countermeasure only has to exhaust the attacker until it runs out of fuel and the mission is a failure.

    AI steps up. The platform now has to firstly realise that game theory exists, run imaginary simulations without actually carrying them out, make a best guess to give it the best odds, and understand that critical information may be missing.

    AI steps back. If I'm smart enough to be a front-line soldier and make such high-level, intelligent, mission-critical decisions, I may also rationalise self-preservation, develop my own independent politics and re-write my own life goals. Even bear a grudge against my master.

    This is actually something that can be tested with just dumb intelligence (like the game of life).
    Two teams of five quads each. A game of capture the flag. The goal is to land on the enemy platform and defend your own.
    When to avoid another drone and when to collide becomes the bottom level.
    Where to fly and how do I get to the target without a collision or mission failure.

    The top level is to devise strategies that my opponents will not be able to recognise, react to, or counter,
    and not to mistake my team members for opponents.
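    A very rough sketch of that test rig (the grid size, team placement and movement rules are all invented for illustration; real quads would need far more than this): each drone follows only the dumb bottom-level rules, head for the enemy platform and wait rather than move into an occupied cell, with the strategy layer left out entirely.

```python
# Toy capture-the-flag rig: two teams of five "quads" on a grid, dumb fixed
# rules only (head for the enemy platform, never step onto an occupied cell).
# Grid size, placement and rules are invented for illustration.

SIZE = 10
FLAGS = {"red": (0, 0), "blue": (SIZE - 1, SIZE - 1)}   # each team's home platform

def step_towards(pos, goal):
    """Move one cell towards the goal, axis by axis."""
    x, y = pos
    gx, gy = goal
    if x != gx:
        x += 1 if gx > x else -1
    elif y != gy:
        y += 1 if gy > y else -1
    return (x, y)

def simulate(max_turns=200):
    drones = {("red", i): (0, i) for i in range(5)}
    drones.update({("blue", i): (SIZE - 1, SIZE - 1 - i) for i in range(5)})
    for turn in range(max_turns):
        for (team, i), pos in list(drones.items()):
            target = FLAGS["blue" if team == "red" else "red"]
            nxt = step_towards(pos, target)
            # Bottom-level rule: avoid collisions by simply waiting.
            if nxt not in drones.values():
                drones[(team, i)] = nxt
            if drones[(team, i)] == target:
                return team, turn                 # landed on the enemy platform
    return None, max_turns

winner, turns = simulate()
print(f"winner: {winner} after {turns} turns")
```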

    Game theory is something that humans do very well and AI does very poorly.
    Actually, even most humans do it very poorly, but they're still much better than AI (Galaxy Zoo).

  • #18 Tiny (Crazy Diamond)

    Yeah, I agree the size is still a limitation; trying to fly micro drones outside is difficult as they just float off in a light breeze, so you're constantly fighting the air movement.
    Micro-sizing of a GPS stabilising system would sort this out, if the available thrust, battery and processing power could cope with the load.

    So then we move to a larger version that can cope with all the above. The AI limitations are still there; as far as I know, facial recognition can be fooled just by adding or subtracting a pair of glasses.

    Sure it's possible the military may have advanced tech that can overcome that.

    Drone range is already getting pretty good, with half-hour flight times and a control range of up to 7 km for consumer drones.

    Military & commercial drones will be able to outdo that for sure.

    Of course with GPS, autonomous flight can be programmed into a drone to follow waypoints, so no control-range restrictions; however, can AI take that to the next level and be truly autonomous, to the point that it can think out the mission objective?
    It may be possible, as I have seen drones working as a team.

    So all we have to do is get our smart phone to fly. lol, yeah there is a little more to it than that.





    So we come back to: we all know impossible is just a word for something that hasn't been done yet!


  • #19 trash (Senior Member)

    Range might get better, but all that AI processing power can't fly on board. It has to stay on the ground and be a target.

    What's interesting is if you had AI on board and it was captured. Can you imagine what you might learn from it? It would be like a captured enemy agent.

    I was just having a moment imagining two bots talking to each other over a serial link, simulating a double-ended Turing test.
    It would be like listening to two stoned idiots trying to solve the world's problems by convincing the other stoner they actually know what they're saying.

    That's my new double-ended trash/Turing test.
    The AI has to work out if the human is a bot, or just a stoned human, or really an idiot.

  • #20 Tiny (Crazy Diamond)

    Quote Originally Posted by trash View Post
    ..........................
    I was just having a moment imagining two bots talking to each other over a serial link, simulating a double-ended Turing test.
    It would be like listening to two stoned idiots trying to solve the world's problems by convincing the other stoner they actually know what they're saying.

    That's my new double-ended trash/Turing test.
    The AI has to work out if the human is a bot, or just a stoned human, or really an idiot.
    Sounds like Bob & Alice from post #1, lol.
