Renzatic on 11/4/2016 at 20:55
Quote Posted by Yakoob
var emotion = Math.random() < 0.5 ? "happy" : "psychopathic";
;p
Ha! Nerd humor always makes me laugh! I HATE NERD HUMOR! I WILL KILL YOU! Nah, just funnin' with you. OR AM I? :mad:
Vae on 24/9/2019 at 15:01
Boston Dynamics Spot, now available for early adopters!
[video=youtube;wlkCQXHEgjA]https://www.youtube.com/watch?v=wlkCQXHEgjA&list=TLGGk4Nns8V5vtwyNDA5MjAxOQ[/video]
(https://www.bostondynamics.com/spot)
henke on 24/9/2019 at 16:27
Ok, now we're living in the future.
Vae on 13/12/2023 at 23:58
Tesla Optimus is coming along...
[video=youtube;cpraXaw7dyc]https://www.youtube.com/watch?v=cpraXaw7dyc[/video]
mxleader on 14/12/2023 at 04:03
AI can already beat chess masters, so it has already won.
Nicker on 14/12/2023 at 06:16
Also missing from the list: We won't even know when AI achieves sentience and it will not care.
Many people still deny the sentience of other members of our species because of variations in skin pigmentation. We have only just begun to recognize sentience in other mammals and birds, and they share brain anatomy and functions with us. There are ant species which build nests with structures devoted to ventilation and cooling. Are hive-minds sentient? Are we even allowed to speculate on that?
In the novel "Terminal Cafe", a self-aware being emerges within the ecology of the World Wide Web from mutated virus programs. It has the mind of a genius, the resources of the entire globe, and the emotional volatility of a toddler. In another book, whose title I forget, a colony of robots on the moon is given a handful of qualities by their human inventor: a limited life span for their chassis (death), the need to build a replacement body in which to transfer their programming (reproduction), and the random alteration of a tiny part of their coding during each life cycle (mutation). In the story these robots develop human-like emotions and motivations, but in reality, would we even recognize a sentient machine?
A big part of the confusion is the conflation of concepts and the over-extension of the term AI, using it to mean cybernetic extension of humans, sentient machines, and simulated humans. But AI outstripped the human ability to manipulate information decades ago. While Homo sapiens are the most capable organic information processors (that we know of), that's not what makes us human. Dema mentioned will, and I think that is the key. It's not how much information you can process but what you make of it. And you can't make anything out of it without a motivation to do so. It's not just about computing power, it's about abstraction and invention. If necessity is the mother of invention, the inventor must have needs, even artificial ones.
And if the abstractions and inventions of a sentient machine (I prefer "self-willed-construct") don't serve our needs or expectations, will our hubris blind us to the existence of artificial humanity, not if but when it arises?
Sulphur on 14/12/2023 at 06:54
For an AI to be determined as sentient, we have the definitional problem of what sentience is in the first place. If it is the most literal meaning, 'to sense', then anything that processes input as stimulus and responds to it is sentient. If it's the more philosophical meaning of 'being able to feel', then we have the problem of how you distinguish an entity with conscious feeling from an entity that simulates the input-response process but has no internal state that corresponds to something we traditionally define as sentience and cognition (see: P-zombie, The Chinese Room).
What this means is that our definitions are missing or have elided something, or at the very least, are ill-suited to answer the question of how you ascribe consciousness and life to an entity. They may even be irrelevant to the question.
Robots and AI organically developing emotions has always seemed like a far-fetched idea to me. Of what practical use would an emotion be to a computer? Human beings need them because of the way they link mental and physical processes together, as a sort of codified shorthand response to situations we've evolved over time to recognise one way or the other. An artificial intelligence would have no need for this: given enough computational power, it could make fine-grained assessments of any situation in an instant. So beyond ascribing mental states to human beings if it had to interact with them, and replicating emotions to assuage our human feelings if it had to, I don't see why it would want to, or would even suddenly happen to, develop such a feature as emotionality.
Nicker on 14/12/2023 at 07:40
Sentience is indeed a problematic term in this context. At least as problematic as AI, since we conflate AI with sentience in popular culture, and many of the debates around the ethics and dangers of AI circle around us imagining that it might develop its own motives (Robot Overlords, and all that).
I am using it to describe beings with a theory of mind and self-awareness. But even those refinements are problematic. If all it means is to experience emotions, then it's inadequate.
As for emotions: we are again projecting human expectations onto artificial beings. Firstly, we elevate our emotions, giving them great value which may not be deserved. We create art and literature about our feels, build monuments to sentimentality. But it's just biochemistry, the earliest form of data processing by our lizard brains.
Why would an artificial being need the same emotions as we have? The same value judgements? They could have their own "emotions", their own background colours, informing them whether to proceed cautiously or enthusiastically, whether something is a threat or a benefit. We probably wouldn't recognize them as emotions, mostly because we believe that, as the pinnacle of creation, we are the measure of what a being is.
Must sleep...
Qooper on 20/12/2023 at 12:29
I don't have much time, but I had an excellent cup of coffee so I had to write a bit.
Quote Posted by Nicker
Are hive-minds sentient?
It could be that car traffic as a system is sentient, and the global cash streams (plural singular, erm... plingular) is reasoning about 11-dimensional hypermorality. Jeff buys a Minimoog on ebay? That payment was going to be one of its thoughts, but because Jeff cancelled his purchase, it forgot what it was thinking about.
Quote:
Are we even allowed to speculate on that?
What do you mean "allowed"? What type of "allowed" are you referring to here?
demagogue on 20/12/2023 at 17:01
I can draw on my Philosophy of Mind days to note that these days I'm a partisan of Higher Order Thought (HOT) theory, in that I don't believe just any complex system becomes "sentient" simply because it's complex. And I don't think a system that's designed to have sense-like inputs matched to outputs is sentient either, which covers both classic categories of AI: Good Old Fashioned state-machine AI and Deep Learning setups like Large Language Models.
I think a perception or "affect" has to be literally modeled as an affect for the decision-making apparatus. That is, there's a first layer where attention is put onto a percept, and maybe a decision is made about it or some orientation formed, but it doesn't become an affect until a second layer represents that relationship explicitly.
To use a simple example of a Convolutional Neural Net identifying images with labels--where it breaks the image down into features, then links the arrangement of features to activation channeling to the right label--at the end of that chain, there's going to be activation of a label, which the system can then output outright like "chair".
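To make that pipeline concrete, here's a minimal sketch of such a classifier, assuming PyTorch and a made-up three-label set; the model, label names, and random input are purely illustrative, not any particular system demagogue has in mind:
[code]
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        # feature extraction: convolutions break the image down into local features
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # classification head: the arrangement of features channels activation to a label
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x).flatten(1)   # (batch, 32)
        return self.head(feats)               # logits, one per label

labels = ["chair", "table", "lamp"]            # hypothetical label set
model = TinyClassifier(num_classes=len(labels))
image = torch.randn(1, 3, 64, 64)              # stand-in for a real RGB image
logits = model(image)
print(labels[logits.argmax(dim=1).item()])     # activation of a label, output outright
[/code]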
But in HOT theory, the system isn't yet sentient of the fact that it saw a chair. If you wanted to make that sentient, you'd have to re-represent that outcome as affects: that is, e.g., as a set of impulses to articulate the word "chair", strongly paired with a set of impulses connecting those to a re-representation of the image, its features, its proprioceptive place in space, etc. You have to re-represent everything in some affective packaging.
The moment you're just blindly activating some state or output, and not re-representing that activation on the content as an affect itself, so that the system doesn't get any affective representation of what's happening internally, you've dropped the sentience ball. Anyway, that's HOT theory in a nutshell, and I think it's the strongest argument for why AI systems, as they're designed right now, aren't going to be sentient no matter how sophisticated they get, because they're not designed to be sentient.
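If it helps to see the structural difference rather than the philosophy, here's a toy sketch in Python contrasting the two setups; the Affect class and reify() function are names I'm inventing purely for illustration, and obviously nothing here is sentient, it just contrasts emitting a label directly with re-representing the outcome as linked "impulses" the system could itself attend to:
[code]
from dataclasses import dataclass, field

@dataclass
class Affect:
    content: str                               # what the impulse is about, e.g. "articulate 'chair'"
    strength: float                            # how strongly the impulse presses
    links: list = field(default_factory=list)  # ties to other affects (image features, place in space, ...)

def first_order(label):
    # first layer only: activation flows straight to an output, nothing is re-represented
    return label

def reify(label, feature_summary):
    # second layer: the outcome itself gets packaged as affects the system can represent,
    # relate to other affects, and make further decisions about
    speech = Affect(content=f"articulate '{label}'", strength=0.9)
    percept = Affect(content=f"re-representation of {feature_summary}", strength=0.7)
    speech.links.append(percept)
    return speech

print(first_order("chair"))                    # blind activation of an output
print(reify("chair", "image features + proprioceptive place in space"))  # affective packaging
[/code]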