Would a concept of self be absolutely necessary to develop such software?
I suspect that it would depend on how sophisticated the concept of self is intended to be. The unit will have to refer to itself in some rudimentary way should it be designed for basic problem solving. For example:
1. This unit has or is given an objective to retrieve an object.
2. This unit detects a hazardous obstacle between this unit and its objective.
3. This unit detects that the hazardous obstacle will inflict massive damage on this unit.
4. This unit detects no alternative path to its objective.
5. This unit detects that it possesses no immediate means of neutralizing the hazardous obstacle.
6. This unit detects that the hazardous obstacle is of a persistent nature.
To do just that, the unit will have to perform calculations based on values assigned to various variables, and to recognize that failure to achieve an objective is an acceptable outcome.
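A minimal sketch of that kind of calculation, in Python; every name, value, and threshold below is my own assumption for illustration, not a reference to any actual design:

```python
# Minimal sketch of the deliberation above. All names and numbers
# are assumed purely for illustration.

TOTAL_INCAPACITATION = 100  # assumed damage level that destroys the unit
OBJECTIVE_VALUE = 40        # assumed value of retrieving the object

def deliberate(hazard_damage, alternative_path_exists,
               can_neutralize_hazard, hazard_is_persistent):
    """Weigh the hazardous obstacle (steps 2-6) against the objective."""
    unavoidable = (not alternative_path_exists
                   and not can_neutralize_hazard
                   and hazard_is_persistent)
    # Failure to achieve the objective is an acceptable outcome when an
    # unavoidable hazard costs more than the objective is worth.
    if unavoidable and hazard_damage > OBJECTIVE_VALUE:
        return "abandon objective"
    return "pursue objective"

print(deliberate(hazard_damage=TOTAL_INCAPACITATION,
                 alternative_path_exists=False,
                 can_neutralize_hazard=False,
                 hazard_is_persistent=True))
# -> abandon objective
```

The point is only that "this unit" reduces to a handful of state variables the program can weigh against one another; the particular weights are beside the point.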
Is it necessary in humans?
In a manner of speaking, yes.
However, the level of self-awareness amongst the general populace is, in my opinion, debatably low. This is often evident in people's inability to understand and articulate the reasons for their own choices and actions. We are indeed the Human Animal.
So you suggest a self-learning programme, as opposed to a pre-programmed one, for such decision making?
That depends on the developer's intent for the AI. Personally, I would pre-assign the highest value to the unit's self-preservation whenever it calculates an outcome resulting in total incapacitation, and allow for self-learning by letting it explore the world it resides in. As for the deliberation between saving the drowning child or the child's mother (at no expense of the self), an algorithm generating a random value ought to suffice, provided both subjects carry an initial null value (no higher value pre-assigned on account of either subject's age).

"Right/wrong" behaviour only applies in and to group dynamics, where the investment is useful. Otherwise, a sociopathic attitude is what I would want in the initial phase of such an artificially created intelligence. If it has the capacity for learning from experience, it may outgrow that attitude, or not. We humans went through much the same process early in our evolution.
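To make the tie-break concrete, here is a rough sketch; the value table and function below are hypothetical, with self-preservation given an effectively infinite pre-assigned value:

```python
import random

# Hypothetical value table, assumed purely for illustration. The unit's
# self-preservation is pre-assigned the highest value; both rescue
# candidates start at a null value (no age-based priority).
values = {
    "self_preservation": float("inf"),
    "save_child": 0,
    "save_mother": 0,
}

def choose_rescue(candidates):
    """Pick the highest-valued option; break exact ties at random."""
    best = max(values[c] for c in candidates)
    tied = [c for c in candidates if values[c] == best]
    return random.choice(tied)

# With the self at stake, self-preservation always wins:
print(choose_rescue(["self_preservation", "save_child", "save_mother"]))
# -> self_preservation

# At no expense of the self, the null-value tie is broken at random:
print(choose_rescue(["save_child", "save_mother"]))
# -> save_child or save_mother, with equal probability
```

Giving self-preservation an effectively infinite value guarantees the deliberation never trades the unit away; a learning phase would presumably revise the other entries over time.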
I understand such programmes are extremely difficult to write and limited in scope. Would the technical difficulties be insurmountable?
I honestly do not know. It may very well be insurmountable, but that has never stopped us before.