Prometheus
Post subject: Artificial morality  |  Posted: Fri Jan 30, 2015 11:26 am

Recently Stephen Hawking and now Bill Gates have raised questions over the threat of AI.

http://www.bbc.co.uk/news/31047780

A possible solution could be to try to programme ethics. I don't mean simply "thou shalt not kill humans...", which is a command, not ethics, despite what monotheists say. I mean a programme that can weigh arguments and decide for itself what is right and wrong, much like humans can.

I assume this is possible for machines because I do not suppose there is anything non-physical in humans making moral choices. If one physical system (biological) can make ethical decisions, another physical system (electronic) should also be able to. I assume the only problem is a technical one? Is anyone trying this?

Just posting a random thought...


scoobydoo1
Post subject: Re: Artificial morality  |  Posted: Fri Jan 30, 2015 1:33 pm

One way of programming ethical deliberation is to build numerical values into a system that can calculate with them (the weighing of pros and cons in humans). If the numerical values in favour of an action, added up, do not exceed those of the competing option plus other considerations in the deliberation, the program does not follow through with that course of (in)action.

Humans do so in a similar fashion (although most aren't aware of it), but the values and their effects on the brain and body are chemical in nature. If one holds an ethical principle above all other considerations (even that of self-preservation) and is able to "willfully" overcome one's biological programming, that is the course of (in)action taken. "Pain" inflicted not on the self but on the individual's valued fellows serves as an excellent example of this process. Torturing a person who is intent on not betraying his/her country is less likely to yield results than torturing/terminating his/her loved ones (sometimes even an "innocent" stranger will do) to achieve an objective.

The key is not how to replicate this form of evaluative process, but how to assign numerical values to different subject matter in different contexts, and to embed a means for the program to withhold value assignments when need be. Determining the context (in this context) will require an analytical process in which a program can calculate and project likely outcomes of the objects and subjects in play.
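
A minimal Python sketch of this weighing scheme (the function name, values, and threshold are illustrative assumptions, not an established design):

Code:
    # Sum hypothetical numeric values for and against an action and follow
    # through only if the pros clear the competition plus other considerations.
    def deliberate(pros, cons, other_considerations):
        score = sum(pros) - sum(cons) - sum(other_considerations)
        # Withhold the course of (in)action when the margin is not positive.
        return "act" if score > 0 else "withhold"

    # Example: the considerations against acting outweigh those in favour.
    print(deliberate(pros=[2.0, 1.5], cons=[3.0], other_considerations=[1.0]))  # -> withhold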


iNow
Post subject: Re: Artificial morality  |  Posted: Fri Jan 30, 2015 5:28 pm

It would be interesting to see how such an algorithm might handle some of the classic ethical quandaries like "do you push one person in front of a train to save 5 on the paddock" or "do you kill the store owner who prevents you from buying the medicine needed to save your son..."

_________________
iNow

"[Time] is one of those concepts that is profoundly resistant to a simple definition." ~C. Sagan


Rory
Post subject: Re: Artificial morality  |  Posted: Fri Jan 30, 2015 11:17 pm

The decision-making capacity, as it relates to ethics, is only as good as the hardware and software - which, in the case of the human brain, I guess equates to the physicality of the brain and the patterns of neuronal firing. Where is the room for autonomy in the human decision-making process? There are always options, and people enough to shout "my brain picks that one", but that is reflective only of the nature of the brain, not of any intrinsic worthiness of the option. That is to say, subjectivity can arise from the objectively verifiable hardware and software - but the decision will always be a reflection of the nature of the latter. So it is impossible for you or me to appraise the quality of any decision except with reference to the particular nature of our hardware and software, which are arbitrarily bestowed at birth. So what, in the end, would be the rationale for creating AI capable of debating ethics?

_________________
If you are doomed to be boring - make it short. Andre Geim


scoobydoo1
Post subject: Re: Artificial morality  |  Posted: Sat Jan 31, 2015 4:29 am

iNow wrote:
It would be interesting to see how such an algorithm might handle some of the classic ethical quandaries like "do you push one person in front of a train to save 5 on the paddock" or "do you kill the store owner who prevents you from buying the medicine needed to save your son..."

I would find it more interesting if a human life weren't pre-programmed with a positive value in the system. If all agents in the first scenario possess a null value, would the AI even act at all? A third outcome is possible as well. Or, if human life has been pre-programmed with a positive value, how high a value would it be assigned? Does an AI act to save a child from drowning, the child's mother from drowning, or save itself from destruction? Do the values of the mother and the child differ in any sense? Will the value assigned to the "self" always be unbeatable in any deliberation? Will the program allow for this to change, or is it set in stone?
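
A hedged sketch of how those pre-assigned values might play out (all numbers and names are made-up assumptions for illustration):

Code:
    def choose_action(subject_values):
        # Pick the subject with the highest pre-assigned value; if every
        # value is null, the AI has no reason to act - the third outcome.
        best, best_value = "do nothing", 0.0
        for subject, value in subject_values.items():
            if value > best_value:
                best, best_value = subject, value
        return best

    print(choose_action({"child": 0.0, "mother": 0.0, "self": 0.0}))   # -> do nothing
    print(choose_action({"child": 1.0, "mother": 1.0, "self": 10.0}))  # -> self (self-preservation wins)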


Rory
Post subject: Re: Artificial morality  |  Posted: Sat Jan 31, 2015 3:19 pm

scoobydoo1 wrote:
Does an AI act to save a child from drowning, the child's mother from drowning, or save itself from destruction?


If we are talking about an autonomous system, then my guess would be that it would naturally construct its own version of morality according to self-preservation, in the same way that the human species - for the most part - places great emphasis on the sanctity of human life, but is prepared to slaughter and eat other species. If you were to instruct the morality of AI, then that would be quite boring, since you would have only the human concept of morality encoded in a machine, rather than a synergistic response of AI grappling with subjective quandaries.

Come to think of it, all morality is ultimately boring, since it is merely a reflection of natural selection shaping the pool of existing entities according to which entities (in this case, morals) promote survival and propagation (which generally means self-preservation and an ability to co-exist with other entities).

_________________
If you are doomed to be boring - make it short. Andre Geim


Prometheus
Post subject: Re: Artificial morality  |  Posted: Fri Mar 06, 2015 3:19 pm

Rory wrote:
So what, in the end, would be the rationale for creating AI capable of debating ethics?


The point would be to ensure that an arbitrary position of 'it is good to kill all humans' is not reached by the AI. The corollary would be advances in understanding how humans make ethical decisions (at the neurological scale) and advances in computer programming. I anticipate this would be quite a difficult undertaking, requiring a great deal of expertise and learning in many fields.

scoobydoo1 wrote:
I would find it more interesting if a human life weren't pre-programmed with a positive value in the system. If all agents in the first scenario possess a null value, would the AI even act at all? A third outcome is possible as well. Or, if human life has been pre-programmed with a positive value, how high a value would it be assigned? Does an AI act to save a child from drowning, the child's mother from drowning, or save itself from destruction? Do the values of the mother and the child differ in any sense? Will the value assigned to the "self" always be unbeatable in any deliberation? Will the program allow for this to change, or is it set in stone?


Interesting. Would a concept of self be absolutely necessary to develop such software? Is it necessary in humans?

So you suggest a self-learning programme, as opposed to a pre-programmed one, for such decision making? I understand such programmes are extremely difficult to write and limited in scope. Would the technical difficulties be insurmountable?


scoobydoo1
Post subject: Re: Artificial morality  |  Posted: Fri Mar 06, 2015 5:33 pm

Prometheus wrote:
Would a concept of self be absolutely necessary to develop such software?

I suspect that it would depend on how sophisticated the concept of self is intended to be. The unit will have to refer to itself in some rudimentary way should it be designed for basic problem solving.

    1. This unit has or is given an objective to retrieve an object.
    2. This unit detects a hazardous obstacle between this unit and its objective.
    3. This unit detects that the hazardous obstacle will inflict massive damage to this unit.
    4. This unit detects no alternative path to its objective.
    5. This unit detects and possesses no immediate means of neutralizing the hazardous obstacle.
    6. This unit detects that the hazardous obstacle is of a persistent nature.
    7. etc.

Solution?

To do just that, the unit will have to perform calculations based on the values assigned to various variables, and to recognize that failure to achieve an objective is acceptable.
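
For illustration, a small Python sketch of that deliberation (the values and names are assumptions chosen only to show the shape of the calculation):

Code:
    OBJECTIVE_VALUE = 5.0   # value assigned to retrieving the object
    SELF_VALUE = 10.0       # value the unit places on its own integrity

    def decide_course(damage_fraction, alternative_path_exists):
        if alternative_path_exists:
            return "take alternative path"
        # Expected cost of pressing on: damage to the unit scaled by its value.
        expected_cost = damage_fraction * SELF_VALUE
        if expected_cost >= OBJECTIVE_VALUE:
            # Failing to achieve the objective is recognized as acceptable.
            return "abandon objective"
        return "proceed despite hazard"

    print(decide_course(damage_fraction=0.9, alternative_path_exists=False))  # -> abandon objective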

Prometheus wrote:
Is it necessary in humans?

In a manner of speaking, yes.

However, the level of self-awareness amongst the general populace is, in my opinion, debatably low. This is often evident in their inability both to understand and to articulate the reasons for their choices and actions. We are indeed the Human Animal.

Prometheus wrote:
So you suggest a self-learning programme, as opposed to pre-programmed, for such decision making?

That depends on the developer's intent for the AI. Personally, I would pre-assign the highest value to the unit's self-preservation whenever it calculates an outcome resulting in its total incapacitation, and allow for self-learning by exploring the world it resides in. As for the deliberation between saving either the drowning child or the child's mother - at no expense to the self - an algorithm generating a random value ought to suffice, should both possess an initial null value (no higher value pre-assigned on account of the current age of each subject). "Right/wrong" behaviour only applies in and for group dynamics, should the investment be useful. Otherwise, a sociopathic attitude is what I would desire in the initial phase for such an artificially created intelligence. If it has the capacity for learning from experience, it may outgrow that attitude, or not. We humans pretty much went through a similar process early in our evolution.
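
A minimal sketch of that scheme (self-preservation pre-assigned an unbeatable value, a random draw breaking the tie between null-valued subjects; every number here is an illustrative assumption):

Code:
    import random

    SELF_VALUE = float("inf")  # self-preservation pre-assigned an unbeatable value

    def rescue_choice(subject_values, rescue_incapacitates_self):
        # If the calculated outcome is the unit's total incapacitation,
        # the unbeatable value assigned to the self decides.
        if rescue_incapacitates_self and SELF_VALUE > max(subject_values.values()):
            return "preserve self"
        top = max(subject_values.values())
        candidates = [s for s, v in subject_values.items() if v == top]
        return random.choice(candidates)  # a random draw settles the null-value tie

    print(rescue_choice({"child": 0.0, "mother": 0.0}, rescue_incapacitates_self=False))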

Prometheus wrote:
I understand such programmes are extremely difficult to write and limited in scope. Would the technical difficulties be insurmountable?

I honestly do not know. It may very well be insurmountable, but that has never stopped us before. :)

