Artificial Intelligence

General philosophy message board for discussion and debate on philosophical issues not directly related to veganism: metaphysics, religion, theist vs. atheist debates, politics, general science discussion, etc.
brimstoneSalad
neither stone nor salad
Posts: 9467
Joined: Wed May 28, 2014 9:20 am
Religion: None (Atheist)
Diet: Vegan

Re: Artificial Intelligence

Post by brimstoneSalad » Thu Jun 27, 2019 4:57 pm

teo123 wrote:
Thu Jun 27, 2019 2:01 pm
You know, like when you asserted that all the experts in the relevant fields agree that cats love their owners? That is obviously not true, see here.
That article is behind a paywall. Please quote it if you want it responded to, I don't have time to hack my way through it (I'll do that for Forbes, but not some rando yellow journalism page).
I saw some radical headline about your cats wanting to kill you and a "study" by a psychologist. Hopefully you don't take that as an expert in the relevant field of animal behavior and neuroscience saying cats don't love their owners.

That said, consensus doesn't mean 100%, it means an overwhelming majority. There are always fringe and even dishonest scientists looking for publicity who are willing to say something stupid or exaggerated to get it (and funding).
teo123 wrote:
Thu Jun 27, 2019 2:01 pm
So, how can I trust you that most of the people who have studied informatics agree it's possible for fish to feel pain, and the other similar things you asserted?
How the hell do you so radically misrepresent everything I say?
Informatics rules out the possibility of PLANT sentience, the same can not be said for fish or even insects. Plants have too few channels of communication between processing nodes and those channels are not discrete enough. They aren't capable of anything like sentience.

That doesn't mean fish do feel pain. If you want to interpret "we can't rule it impossible" as "it's possible" then fine, but don't complain or misrepresent me.

Anybody who isn't a complete idiot will agree that it's at least *possible* that fish feel pain. Ruling something impossible is a strong claim to make, and scientists are not in the habit of making it. I'm sure you can find a scientist who is a strong advocate of fishing and isn't shy about making an idiot of himself by going on the record with strong claims against the possibility of fish pain, but that's not the consensus.

Consensus is that fish probably feel pain, or at least something we would reasonably call pain. This has nothing to do with informatics, which doesn't have the ability to comment on the situation beyond not ruling it out.
teo123 wrote:
Thu Jun 27, 2019 2:01 pm
If I understand it correctly, artificial intelligence is something in-between.
You do not understand correctly.
teo123 wrote:
Thu Jun 27, 2019 2:01 pm
That said, there are also documented instances of, for example, a certain UNICODE string crashing iOS
That was a rendering error that caused a crash. You can't use a text string to take over a computer like that.
Sure, if the program that simulates the SI is bad it could crash the computer, but that has nothing to do with the SI escaping.
teo123 wrote:
Thu Jun 27, 2019 2:01 pm
Buffer overflow, if used maliciously, can sometimes be used to take control of a computer (not just crash an app) from a non-executable file, if the program that's used to open it isn't made secure.
That's not possible here for two reasons, answered in the first couple paragraphs of the Wikipedia article on buffer overflow:
Wikipedia wrote:Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer.
https://en.wikipedia.org/wiki/Buffer_overflow
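To make the quoted mechanism concrete, here is a toy sketch in Python (purely illustrative; real overflows happen in memory-unsafe languages like C, and every name here is made up): a flat "memory" in which a fixed-size input buffer sits directly before a return-address slot.

```python
# Toy model of the exploit described above: a flat "memory" where a
# fixed-size input buffer sits directly before a return-address slot.
memory = bytearray(16)                           # 12-byte buffer + 4-byte slot
memory[12:16] = (0x1000).to_bytes(4, "little")   # legitimate return target

def unsafe_copy(data):
    # No bounds check, like C's strcpy: happily writes past the buffer.
    for i, b in enumerate(data):
        memory[i] = b

# 12 bytes of padding, then 4 bytes that land in the return-address slot.
unsafe_copy(b"A" * 12 + (0xDEAD).to_bytes(4, "little"))
print(hex(int.from_bytes(memory[12:16], "little")))  # 0xdead
```

Note that the attacker has to know both the memory layout and a useful target address, which is exactly the knowledge a program's output data would lack.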

The SI would have to be aware of the surrounding architecture on the computer, which it has no access to, and it would have to be capable of overflowing intentionally to specified locations in memory.
While the program itself (if very badly designed) may have some accidental overflows somewhere that stem from data the SI controls, the SI would not be aware of those overflows or of where the data is ending up.

Can a billion monkeys on a billion typewriters accidentally compose Shakespeare if given a billion years to do so? Perhaps. But creating an escape hatch by dumb luck would be astronomically improbable, and it's not the only outcome: much more likely the bug would simply destabilize the operating system or program and "kill" the SI.
The monkeys have a real shot at typing Shakespeare over a long time span because typing a paragraph of Chaucer before they finish Macbeth doesn't destroy the planet and end the endeavor.

It's much more likely that a free AI will simply evolve organically from bugs and viruses that are already out in the wild than that a deliberately created one could accidentally escape. Or, like I said, somebody might just free it.
SI just does not have the control over its environment that you assume. It's just a bunch of mutating neural weights acting kind of like simulated logic gates that produce an output from an input. It's not a script; it doesn't deal with machine language, access memory, or anything like that. And it's a data set of a pre-established size, so any buffer overflow would probably only come about as a result of a bug in the analytics, which the SI wouldn't be aware of.

It's basically like this: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life
It's program output.
You seriously think the right configuration of cells in a simulation like that could escape? Even if it were self-aware, that is its universe. It's not code, it's a data set manipulated by code, and that's a big difference.
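The "data set manipulated by code" point can be sketched in a few lines of illustrative Python (the function and variable names are mine, not from any real system):

```python
# A Game of Life step: the entire "universe" is plain data (a set of
# live cells); only this function, which sits outside that data, ever
# transforms it.
from collections import Counter
from itertools import product

def step(live):
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx, dy in product((-1, 0, 1), repeat=2)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates forever, yet nothing in the data can reach
# out and modify step() itself.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

No configuration of live cells, however clever, can alter step() or anything else outside the set it lives in.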

I don't want to waste time explaining this; I don't care if you believe SI can escape their sandboxes. Like I said, it's probably not a harmful belief, and maybe it'll help prevent people from developing them. I'm supportive of this irrational fear because of its good outcomes, and I have no motivation to argue against it. It's like a religion without any bigotry in it, whose adherents just think god wants them to be nice to each other and not hurt animals -- why fight against that? Just because it's incorrect?

Although I do wonder if bad arguments will hold back the anti-SI movement and weaken the good ones... but at the same time, people are clearly too stupid to understand *why* they're bad arguments. Even people who claim to win programming competitions. :lol:

For my part, I'm going to be honest and advocate against developing them on the basis of ethics. But you do you if you want to advance this silly argument.

So yeah Teo, you win. SI are magical and despite being essentially program output they can perform [insert bullshit technobabble here] and modify the code they're bound by and escape to take over the world. :roll: OK. Not interested in discussing this further.

teo123
Senior Member
Posts: 482
Joined: Tue Oct 27, 2015 3:46 pm
Religion: None (Atheist)
Diet: Vegan

Post by teo123 » Fri Jun 28, 2019 12:21 pm

brimstoneSalad wrote:Even people who claim to win programming competitions.
It's not that I claim to have won programming competitions; I claim to have achieved good results at them, and I can link you to those results:
In 2013, I placed 4th in Croatia at the Infokup (now called AZOO) competition, see here.
In 2014, I placed 6th in Croatia at the Infokup (now called AZOO) competition, see here.
In 2017, I placed 15th in Croatia at the HONI competition, see here.
This year, I placed 7th in Croatia at the STEMgames competition, see here.
And since you already know I've made a PacMan game playable on smartphones and a web-app that converts arithmetic expressions to i486-compatible assembly, that should not be an extraordinary claim at all. So, if you mean you don't believe I have done well at some programming competitions, well, you are wrong and quite unreasonable.

So, I think we can agree that there are essentially three things that can happen to an advanced AI:
1) It will make itself stop working unintentionally (much like today's antivirus programs do).
2) It will make itself stop working intentionally (which may even be the most likely scenario). That is, it will strive to take control over its environment (other software running on the computer) until it realizes it doesn't have to and doesn't want to exist, and does what's needed to stop existing.
3) It will try to take as much control over its environment as possible and do some very counter-intuitive stuff.

As for your arguments about how AI is comparable to cellular automata... look, do we agree that nearly all forms of artificial intelligence use self-modifying code to some extent, and that that's why LISP is often used for AI?
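For reference, "code as data" in the Lisp tradition can be sketched in Python (a hypothetical toy, not how any particular AI system actually works):

```python
# "Code is data": the program holds its own source as a string,
# rewrites it, and re-executes the result -- the essence of what
# self-modifying code means in the Lisp tradition.
src = "def rule(x):\n    return x + 1\n"
namespace = {}
exec(src, namespace)
assert namespace["rule"](1) == 2     # original behaviour

src = src.replace("x + 1", "x * 2")  # the program edits its own code
exec(src, namespace)
print(namespace["rule"](3))  # 6
```

Lisp makes this natural because programs are themselves lists; the string manipulation above is a crude Python stand-in.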


Post by brimstoneSalad » Wed Jul 03, 2019 4:17 pm

teo123 wrote:
Fri Jun 28, 2019 12:21 pm
As for your arguments about how AI is comparable to cellular automata... look, do we agree that nearly all forms of artificial intelligence use self-modifying code to some extent, and that that's why LISP is often used for AI?
No. Some do, some do not. I think that's a kind of lazy way of brute forcing something that *seems* intelligent, but I'm also not sure how much it qualifies as actual intelligence.

I'm more convinced by systems that emulate neural networks.


Post by teo123 » Sat Jul 06, 2019 4:41 am

brimstoneSalad wrote:No. Some do, some do not.
Now, again, where are you getting your information from? Almost any introductory computer-science textbook will say that the development of LISP is closely tied to the development of artificial intelligence, because artificial intelligence usually relies on self-modifying code to some extent. It's even on Britannica, so you cannot just dismiss that information as coming from unreliable sources.
Statements about artificial intelligence and cryptography are often not as easy to evaluate as claims about programming-language syntax or algorithmic complexity, so it would be good if you made it clear where you are getting your information from.
brimstoneSalad wrote:I'm more convinced by systems that emulate neural networks.
Why exactly? Systems that "emulate" neural networks don't actually do so remotely accurately. Almost all of them rely on back-propagation, a mechanism for which there is no evidence in the brain, and there are good reasons to think it doesn't exist there.
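For readers unfamiliar with it, back-propagation is just the chain rule applied backwards through the network; a minimal illustrative sketch on a hypothetical 1-1-1 network (all starting values are arbitrary):

```python
# Back-propagation on the smallest possible network (1 input -> 1
# hidden -> 1 output, sigmoid activations): the chain rule applied
# backwards from the error to each weight.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w1, w2 = 0.5, 0.5          # hypothetical starting weights
x, target = 1.0, 0.0
for _ in range(500):
    h = sigmoid(w1 * x)                    # forward pass
    y = sigmoid(w2 * h)
    d_z2 = (y - target) * y * (1.0 - y)    # backward pass: chain rule
    d_w2 = d_z2 * h
    d_h = d_z2 * w2
    d_w1 = d_h * h * (1.0 - h) * x
    w2 -= 1.0 * d_w2                       # gradient-descent updates
    w1 -= 1.0 * d_w1
print(y)  # small: the output has been driven toward the target 0
```

The biological-plausibility objection is that nothing resembling these backward error signals has been identified in real neurons.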

So, what did you mean when you said "Even people who claim to win programming competitions."? Did you mean that I am a liar, or?

Also, don't you think it can be said that computers have a lot greater ability to concentrate than humans do? If so, how can they be said to experience suffering? Concentrating on something else makes suffering disappear.


Post by brimstoneSalad » Sat Jul 06, 2019 8:06 pm

teo123 wrote:
Sat Jul 06, 2019 4:41 am
brimstoneSalad wrote:No. Some do, some do not.
Now, again, where are you getting your information from? Almost any introductory computer-science textbook will say that the development of LISP is closely tied to the development of artificial intelligence, because artificial intelligence usually relies on self-modifying code to some extent.
I don't doubt that LISP is often used with certain processes to make them seem intelligent. Games, for instance, use very different systems from actual intelligence.

I'm more concerned with systems I think credibly develop real intelligence, like advanced neural networks:
https://en.wikipedia.org/wiki/Artificial_neural_network

These do not rely on self-modifying code.
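That distinction can be illustrated in a few lines (a toy sketch, not a claim about any specific system): training changes only numbers, never the code that runs the network.

```python
# A one-weight "network" learning y = 2x by gradient descent. Training
# only ever changes the number w (data); the code of forward() is
# byte-for-byte identical before and after learning.
def forward(w, x):
    return w * x

w = 0.0
for _ in range(100):
    x = 1.0
    grad = 2.0 * (forward(w, x) - 2.0 * x) * x  # derivative of squared error
    w -= 0.1 * grad

print(round(w, 3))  # 2.0
```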
teo123 wrote:
Sat Jul 06, 2019 4:41 am
Why exactly? Systems that "emulate" neural networks don't actually do those things remotely accurately.
They don't need to; the issue is more one of structure and informatics. The closer a system is to how human brains work, the more plausible it is, but all of that is much more plausible than self-modifying code at the moment.
teo123 wrote:
Sat Jul 06, 2019 4:41 am
So, what did you mean when you said "Even people who claim to win programming competitions."? Did you mean that I am a liar, or?
No, I just think it's very silly to be constantly bragging about that stuff. I don't care.
teo123 wrote:
Sat Jul 06, 2019 4:41 am
Also, don't you think it can be said that computers have a lot greater ability to concentrate than humans do? If so, how can they be said to experience suffering? Concentrating on something else makes suffering disappear.
Is it then OK to enslave, torture, and kill Buddhist monks because they have such attuned senses of concentration?
That's an interesting issue, but the thing I'm concerned with is interests and their fulfillment or violation.

carnap
Anti-Vegan Troll
Posts: 414
Joined: Wed Feb 01, 2017 12:54 pm
Religion: Other

Post by carnap » Sun Jul 07, 2019 5:37 am

teo123 wrote:
Thu May 30, 2019 10:33 pm
1) Robots that can be considered sentient have already been made.
The notion of "sentience" is so philosophically and scientifically problematic that I think it's bizarre that it has taken center stage in some ethical arguments in the case of animals. They are trying to erect a building on mud. There is little agreement in the philosophic (or scientific) community about what it means for an entity to be "sentient". The perspectives in philosophy range from the notion being meaningless to arguing that sentience represents a "mind" and therefore implies mind/body dualism (basically a modern version of Descartes).

In any case, how can one possibly establish that a robot is sentient when there are no agreed criteria for sentience? How do you determine whether some entity actually experiences rather than merely behaves as if it does?

teo123 wrote:
Thu May 30, 2019 10:33 pm
5) Most of the people who study artificial intelligence agree that it somehow argues that fishes and insects feel pain.
I'd suggest it's rather the opposite: people who study artificial intelligence realize how conceptually loaded and complex statements like "X feels pain" really are. As above, how do you establish whether some animal actually "feels" pain rather than just responding to stimuli without feeling and/or thought?

But also, people doing research in artificial intelligence usually aren't thinking about grand issues in the philosophy of mind or the study of consciousness, so many won't have much to say on these topics. Artificial intelligence today is very specialized, and most researchers spend their entire career on one area, for example computer vision. The early pioneers of computer science and artificial intelligence tended to think about the more philosophic aspects, but they were never able to achieve much, and the field started to focus on solving more practical problems.
I'm here to exploit you schmucks into demonstrating the blatant anti-intellectualism in the vegan community and the reality of veganism. But I can do that with any user name.


Post by teo123 » Sun Jul 07, 2019 7:27 am

brimstoneSalad wrote:These do not rely on self-modifying code.
AFAIK, the successful applications of AI (in, for example, speech recognition) all use combinations of different methods, almost always including genetic algorithms and self-modifying code.
brimstoneSalad wrote: Is it then OK to enslave, torture, and kill Buddhist monks because they have such attuned senses of concentration?
I must admit I never really thought about that.
Well, the general rule is that we should treat all people equally, primarily because we have no idea how different people actually are. Do men feel pain less than women do? Possibly, but we don't know for sure, and that's why we should act as if they feel pain equally. Do Buddhist monks actually feel pain less than other people do? Probably, but there is no guarantee that's the case, and we should act as if it were not the case.
That's quite different from treating fish differently from people, since fish demonstrably don't feel pain the way we do, due to their complete or near-complete lack of C-type nerve fibres (which we know are necessary for pain in humans), and because they don't change their swimming behavior when there is a hole in their fin. You may argue they likely feel some different form of pain, but that's a different issue.
It's also different from treating AI "unethically", since we can know for certain it has a much greater ability than humans have to concentrate on something other than what it's currently experiencing.


Post by brimstoneSalad » Sun Jul 07, 2019 2:04 pm

Insofar as they do, I'm pretty skeptical that those components are anything like aware or intelligent. It's possible, but SI can be and is created without them, and where it is (while it's less efficient computationally) it's most similar to human cognition.

As to the rest, that seems like a very flimsy rationalization. I think you're very selectively under and over-estimating our certainty on these matters.


Post by teo123 » Mon Jul 08, 2019 1:04 am

carnap wrote:The notion of "sentience" is so philosophically and scientifically problematic that I think its bizarre that it has taken center stage in some ethical arguments in the case of animals.
Carnap... please, stop it. All this does is discredit any knowledge of philosophy you claim to have.
carnap wrote: How do you determine whether some entity actually experiences rather than merely behaves as if they do?
Solipsism, in this sense, is not a scientifically valid position; it's not even falsifiable.
carnap wrote:I'd suggest its rather the opposite, people that study artificial intelligence realize how conceptually loaded and complex statements like "X feels pain" really are.
If you ask me, that's mostly because people are manufacturing controversies about those things.
There may be some gray areas, but if some being neither has the anatomical structures needed to feel pain, nor behaves as if it felt pain, suggesting that it feels pain is quite ridiculous. The Cambridge Declaration on Consciousness says that the animals that are conscious are birds, mammals, and perhaps octopods. So, if something isn't conscious, it by definition cannot feel pain.
Now, even some animals that are conscious almost certainly don't feel pain, such as naked mole rats. Their nociceptors are not functioning, and there is a convincing evolutionary explanation for that: they live in low-oxygen environments, their blood is always full of carbon dioxide, so their nociceptors would fire constantly if they were functioning. They also don't behave as if they felt pain, unless specific neurotransmitters are injected into their blood.
Something similar goes for fish. Fish have very simple brains, making consciousness very unlikely. Fish also have very few or no C-type nerve fibres, which we know are needed for pain in humans (since people born with too few C-type nerve fibres don't feel pain). And they also clearly don't behave as if they felt pain: a fish with a hole in its fin continues swimming as if nothing had happened.
But, as you probably know, BrimstoneSalad strongly disagrees with me on this issue.
brimstoneSalad wrote:while it's less efficient computationally
That's not what's going on. Genetic algorithms would never solve some types of problems, no matter how much time or memory you give them. Yet some other heuristic algorithms can easily solve those same problems. And the same goes for neural networks.
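For reference, a genetic algorithm in its minimal form looks something like this illustrative OneMax toy (all parameters here are arbitrary choices of mine, not from any production system):

```python
# A minimal genetic algorithm on the OneMax toy problem: evolve an
# 8-bit string toward all ones via selection, crossover, and mutation.
import random

random.seed(0)  # deterministic run, for illustration only

def fitness(bits):
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 8:          # optimum reached
        break
    parents = pop[:10]                # elitist selection: keep the best half
    children = []
    for _ in range(10):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, 8)  # single-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:     # occasional one-bit mutation
            i = random.randrange(8)
            child[i] ^= 1
        children.append(child)
    pop = parents + children

print(fitness(max(pop, key=fitness)))
```

Selection keeps the best half each generation, so the elite fitness can only improve until the optimum is found.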


Post by brimstoneSalad » Mon Jul 08, 2019 7:53 pm

It looks like somebody else already moved Carnap's post for derailing the thread with the same claims he always makes. But I'll just say that it looks like Teo already debunked it pretty well.
teo123 wrote:
Mon Jul 08, 2019 1:04 am
Something similar goes for fish. Fish have very simple brains, making consciousness very unlikely.
Too much generalizing on fish. They range from barely visible insect-sized things to large animals with complex learning and social lives like tuna.
The physiological range is just too dramatic to generalize:
Wikipedia wrote:Fish hold records for the relative brain weights of vertebrates. Most vertebrate species have similar brain-to-body mass ratios. The deep sea bathypelagic bony-eared assfish has the smallest ratio of all known vertebrates. At the other extreme, the electrogenic elephantnose fish, an African freshwater fish, has one of the largest brain-to-body weight ratios of all known vertebrates (slightly higher than humans) and the highest brain-to-body oxygen consumption ratio of all known vertebrates (three times that for humans).
https://en.wikipedia.org/wiki/Fish_intelligence

The Cambridge declaration doesn't say fish aren't conscious, it just focuses on more studied general categories that can be spoken about most easily and non-controversially. It's just more obvious for land animals and octopuses so it's easy to get people to sign onto a consensus on that point.

The point of theoretical capacity for consciousness has to do with total information processing ability, not specific identical structures. The basic requirement would be a certain number of neurons.
Neuroplasticity makes specific structural requirements unlikely, which is why an AI could become conscious without having something like a frontal cortex or other very specific requirements like that.

When it comes to pain (which I hope you already agree you don't have to have in order to be sentient and to have moral value), other receptors can also be used for, or repurposed to convey, pain, or pain can be interpreted from them based on thresholds without receptors dedicated exclusively to pain. I don't think that's relevant to AI, though.
I hope you can agree that negative experience can come about for beings without pain in the technical sense.
teo123 wrote:
Mon Jul 08, 2019 1:04 am
That's not what's going on. Genetic algorithms would never solve some types of problems, no matter how much time or memory you give to them. Yet, some other heuristic algorithms can easily solve those same problems. And the same goes for neural networks.
Do you think they're not Turing complete?
Certainly there are huge differences in efficiency, but that only speaks to computational resource use.
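One standard way to cash out a universality claim starts with logic gates: a single threshold unit with hand-picked weights computes NAND, and NAND is functionally complete. An illustrative sketch (the weights here are my own choices):

```python
# A single threshold unit computing NAND with hand-picked weights.
# NAND is functionally complete, so networks of such units can build
# any Boolean circuit -- the usual first step toward universality
# arguments for neural networks.
def nand_neuron(a, b):
    w1, w2, bias = -2.0, -2.0, 3.0
    return 1 if w1 * a + w2 * b + bias > 0 else 0

print([nand_neuron(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# [1, 1, 1, 0] -- the NAND truth table
```

Universality in principle says nothing about efficiency, which is the point about computational resources.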
