That article is behind a paywall. Please quote it if you want a response; I don't have time to hack my way through it (I'd do that for Forbes, but not for some rando yellow-journalism page).
I saw some radical headline about your cats wanting to kill you, citing a "study" by a psychologist. Hopefully you don't take that as an expert in the relevant fields of animal behavior and neuroscience saying cats don't love their owners.
That said, consensus doesn't mean 100%, it means an overwhelming majority. There are always fringe and even dishonest scientists looking for publicity who are willing to say something stupid or exaggerated to get it (and funding).
How the hell do you so radically misrepresent everything I say?
Informatics rules out the possibility of PLANT sentience; the same cannot be said for fish or even insects. Plants have too few channels of communication between processing nodes, and those channels are not discrete enough. They aren't capable of anything like sentience.
That doesn't mean fish do feel pain. If you want to interpret "we can't rule it out" as "it's possible", then fine, but don't complain or misrepresent me.
Anybody who isn't a complete idiot will agree that it's at least *possible* that fish feel pain. Ruling something impossible is a strong claim to make, and no, scientists are not in the habit of making it. I'm sure you can find a scientist who is a strong advocate of fishing and isn't shy about making an idiot of himself by going on the record with strong claims against the possibility of fish pain, but that's not the consensus.
Consensus is that fish probably feel pain, or at least something we would reasonably call pain. This has nothing to do with informatics, which can't comment on the situation beyond not ruling it out.
You do not understand correctly.
That was a rendering error that caused a crash. You can't use a text string to take over a computer like that.
Sure, if the program that simulates the SI is badly written, it could crash the computer, but that has nothing to do with the SI escaping.
That's not possible here for two reasons, answered in the first couple paragraphs of the Wikipedia article on buffer overflow:
https://en.wikipedia.org/wiki/Buffer_overflow

Wikipedia wrote:
Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer.
The SI would have to be aware of the surrounding memory layout of the computer, to which it has no access, and it would have to be capable of intentionally overflowing into specified locations in memory.
While the program itself (if very badly designed) may have some accidental overflows somewhere that stem from data the SI controls, the SI would not be aware of those bugs or of where the overwritten bytes end up.
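To make that concrete, here's a minimal C sketch (my own illustration, nothing from the article) of the classic exploit pattern the quote describes. Deliberately flipping the flag requires knowing exactly where it sits relative to the buffer; without that layout knowledge, all an overflow gets you is corruption:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[8];
    int authorized = 0;  /* lives somewhere near buf on the stack */

    /* An unchecked copy: anything longer than 8 bytes spills past buf.
     * To flip `authorized` on purpose, the attacker must know where it
     * sits relative to buf -- exactly the layout knowledge an SI
     * confined to its own data set would not have. */
    strcpy(buf, "AAAAAAAAAAAAAAAA");  /* 16 'A's: undefined behavior */

    if (authorized)
        printf("escaped the sandbox\n");
    else
        printf("just corrupted memory (likely a crash)\n");
    return 0;
}
```

A real exploit crafts the overflowing bytes to overwrite something like a return address with a chosen value, which is precisely the kind of targeted, layout-aware write being ruled out here.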
Can a billion monkeys on a billion typewriters accidentally compose Shakespeare, given a billion years to do so? Perhaps. But creating an escape hatch by dumb luck would be astronomically improbable, and it's not the only possible outcome: far more likely, the bug would simply destabilize the operating system or program and "kill" the SI.
The monkeys have a real shot at typing Shakespeare over a long enough time span only because typing a paragraph of Chaucer before they finish Macbeth doesn't destroy the planet and end the endeavor.
It's much more likely that a free AI will simply evolve organically from bugs and viruses that are already out in the wild than that a created one could even accidentally escape. Or, like I said, somebody might just free it.
SI just does not have the control over its environment that you assume. It's just a bunch of mutating neural weights acting kind of like simulated logic gates that produce an output from an input. It's not a script; it doesn't deal with machine language, access memory, or anything like that. And it's a data set of a pre-established size, so any buffer overflow would probably only come about as a result of a bug in the analytics, which the SI wouldn't be aware of.
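A toy sketch of the distinction, assuming nothing about any particular SI implementation: the "SI" here is nothing but a fixed-size block of numbers, and a separate program does all the reading, multiplying, and memory addressing:

```c
#include <stdio.h>

#define N_IN  3
#define N_OUT 2

/* The "SI" is just this data: a pre-established, fixed-size block of
 * weights. It contains no instructions and addresses no memory. */
static const float weights[N_OUT][N_IN] = {
    {0.2f, -1.3f, 0.7f},
    {1.1f,  0.4f, -0.5f},
};

/* The program (the "universe") does all the computing. The weight
 * values can't influence which memory this loop touches. */
static void forward(const float in[N_IN], float out[N_OUT]) {
    for (int i = 0; i < N_OUT; i++) {
        out[i] = 0.0f;
        for (int j = 0; j < N_IN; j++)
            out[i] += weights[i][j] * in[j];
    }
}

int main(void) {
    float in[N_IN] = {1.0f, 0.5f, -2.0f};
    float out[N_OUT];
    forward(in, out);
    printf("%f %f\n", out[0], out[1]);
    return 0;
}
```

No configuration of the weight values changes which instructions run or which addresses get touched; the data can only change the numbers that come out.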
It's basically like this: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
It's program output.
You seriously think the right configuration of cells in a simulation like that could escape? Even if it were self-aware, that grid is its universe. It's not code; it's a data set manipulated by code, and that's a big difference.
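For anyone who hasn't seen it, a bare-bones Game of Life update in C looks like this (my sketch, using a small wrap-around grid). The cells are pure data, and the rule in this function is the only "physics" they ever experience:

```c
#include <stdio.h>

#define W 8
#define H 8

/* One generation of Conway's Game of Life. Nothing the grid evolves
 * into can modify this rule or reach outside the two arrays. */
static void step(const unsigned char cur[H][W], unsigned char next[H][W]) {
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            int n = 0;  /* count the eight neighbors, wrapping at edges */
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    if (dx == 0 && dy == 0) continue;
                    n += cur[(y + dy + H) % H][(x + dx + W) % W];
                }
            /* Live cell survives with 2-3 neighbors; dead cell is born with 3. */
            next[y][x] = cur[y][x] ? (n == 2 || n == 3) : (n == 3);
        }
}

int main(void) {
    unsigned char a[H][W] = {0}, b[H][W];
    a[1][2] = a[2][3] = a[3][1] = a[3][2] = a[3][3] = 1;  /* a glider */
    step(a, b);
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) putchar(b[y][x] ? '#' : '.');
        putchar('\n');
    }
    return 0;
}
```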
I don't want to waste time explaining this; I don't care if you believe SI can escape their sandboxes. Like I said, it's probably not a harmful belief, and maybe it'll even help prevent people from developing them. I'm supportive of this irrational fear because of its good outcomes, and I have no motivation to argue against it. It's like a religion without any bigotry in it, whose adherents just think god wants them to be nice to each other and not hurt animals -- why fight against that? Just because it's incorrect?
I do wonder, though, whether bad arguments will hold back the anti-SI movement and weaken the good ones... but at the same time, people are clearly too stupid to understand *why* they're bad arguments. Even people who claim to win programming competitions.
For my part, I'm going to be honest and advocate against developing them on the basis of ethics. But you do you if you want to advance this silly argument.
So yeah, Teo, you win. SI are magical, and despite being essentially program output they can perform [insert bullshit technobabble here], modify the code they're bound by, and escape to take over the world. OK. Not interested in discussing this further.