
Neil Jordan: Artificial General Intelligence and Reasoning


Reasoning Robots

It was announced recently that developers of artificial intelligence models at Meta are working on the next stage of ‘artificial general intelligence’, with a view to eliminating mistakes and moving closer to human-level cognition. This will allow chatbots and virtual assistants to complete sequences of related tasks, and the work was described as the beginning of efforts to enable AI models to ‘reason’, or to uncover their ability to do so.

This represents a significant advance. One example given, of a digital personal assistant being able to organise a trip from an office in Paris to another in New York, would require the AI model to seek, store, retrieve and process data recognised as relevant to a task. It would have to integrate ‘given’ information (for instance, the destination and the planned time of arrival) with stored information (such as the traveller’s home address) and search for new information (flight durations, train timetables and perhaps the morning’s traffic reports, for example), which it would have to select as relevant to the task of organising the journey. Following processing of the data, it would then have to instigate further processes (such as making reservations or purchasing tickets with third parties).
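For readers who think in code, that workflow can be caricatured as a simple pipeline. The sketch below is purely illustrative: the data, function names and bookings are hypothetical placeholders rather than a description of any real system, and it shows only the pattern of integrating given, stored and newly searched information before instigating further processes.

# A minimal, purely illustrative sketch of the pattern described above.
# Everything here (profile data, lookups, bookings) is a hypothetical stand-in.

from dataclasses import dataclass

@dataclass
class TripRequest:
    # 'Given' information supplied by the user.
    origin: str
    destination: str
    arrive_by: str

# Stored information the assistant already holds about the traveller (hypothetical).
STORED_PROFILE = {"home_address": "12 Rue Exemple, Paris"}

def search_new_information(request: TripRequest) -> dict:
    # Stand-in for live lookups (flight durations, timetables, traffic reports).
    return {
        "flight": {"code": "XX123", "departs": "09:00", "arrives": "11:30"},
        "train_to_airport": {"departs": "07:10"},
    }

def plan_trip(request: TripRequest) -> list[str]:
    # Integrate given, stored and newly searched information into a plan,
    # then list the follow-up processes (reservations, purchases) to instigate.
    found = search_new_information(request)
    return [
        f"Book taxi from {STORED_PROFILE['home_address']} "
        f"for the {found['train_to_airport']['departs']} train",
        f"Purchase ticket for flight {found['flight']['code']} "
        f"({request.origin} to {request.destination}), "
        f"arriving {found['flight']['arrives']}, ahead of {request.arrive_by}",
    ]

if __name__ == "__main__":
    for step in plan_trip(TripRequest("Paris", "New York", "18:00")):
        print(step)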


Human Reasoning and Moral Problems

Remarkable as such a development is, the concept of ‘reasoning’ in play is limited. Compare this to the kind of reasoning of which human beings are capable, particularly in morally ‘difficult’ situations.  

Consider the case of someone who, while shopping in a supermarket, catches sight of a toy that he knows his daughter would love for her birthday, but which, down on his luck, he is unable to pay for. The thought crosses his mind that he could simply make off with it. He might think about how this is to be done. Maybe he could run through the door with an armful of goods before the security personnel have time to intervene. This is not certain to succeed, particularly if there are staff outside who might be alerted. Even if he escapes, he is still likely to have been recorded by in-store security systems and, having drawn attention to himself by his flight, he risks identification and subsequent arrest. Perhaps, then, he could just hide the toy in his coat, or even use a self-service checkout but fail to scan some of the items that he wishes to leave with. He might then wonder whether he should steal. On the one hand, having lost his job some weeks ago and having been unable to find work, he is very short of money and he has children to feed – and there is a birthday coming; on the other, should he be arrested, he risks punishment under the law, which will in all likelihood worsen his family’s situation. Perhaps he will then consider whether he has any right to act in the manner that he has been considering, reflecting on whether his level of poverty justifies what he has been intending to do. The supermarket chain makes millions in profits every year, while he is only trying to provide for his family and make his daughter happy – and with an item that the shop probably won’t even miss. Ultimately, a consideration of principle alone – a conviction about the general immorality of theft – leads to a resolve to return some of the food items to the shelf, to put the toy in his basket and to pay for everything he has selected.


Recognising Reasons

In this case, the subject has undoubtedly engaged in reasoning of an interesting and complex variety. First, it should be noted, the man has not processed data, but considered reasons. Moreover, he has considered not only a number of reasons, but reasons of different kinds. He has moved from an initial motive based on a desire (to please his daughter), through considerations of possible means and outcomes, to issues of familial obligation, distributive justice and moral principle. He has even considered both prudential and moral questions in relation to the matter of whether he ‘should’ steal. Just as easily, he could also have reflected on questions of political obligation and his duty to obey the law.

There are two points of importance here. The first is that in spite of their differences – a concern about the consequences of arrest for one’s family being different in kind from a speculation about the material injustice of one’s situation – the agent was able to recognise the relevance of these considerations as reasons. The second is that, of all of the reasons under consideration, his action was ultimately motivated by a single consideration of a moral nature.

It is clear from this example that reasoning (at least as it relates to questions about how to act) extends far beyond calculation or processing data in order to reach a defined end. The man did not follow a set process to reach a pre-determined goal. Indeed, part of his dilemma involved reflection on what the goal should properly be, such that there was even reasoning about the desirable outcome. (He might, for instance, have decided to complete his shopping and leave the toy on the shelf, in the hope that it might soon be reduced in price.) Moreover, owing to the ‘qualitative’ differences between the reasons that he considered, he did not simply ‘pile them up’ and reach a decision based on a form of ‘difference calculation’. Indeed, it is not obvious that there is any intelligible sense in which one could ascribe a calculable ‘value’ either to the man’s conviction that stealing was wrong, or to his desire to please his daughter.


The Requirements of Reasoning

These two points reveal some important characteristics of human reasoning about action. The ability to deliberate about what constitutes a (good) reason for acting and to decide which reasons ultimately matter requires consciousness. This is surely one of the differences between data and reasons. Unlike machines processing data, we are aware of our reasons. Moreover, they have meaning for us, usually in relation to a projected aim or in light of our values – and quite often both. (The fact that a shop has CCTV, for instance, becomes meaningful if one is considering or planning theft.) In order for the man to reach the decision he did, he had to be aware of various reasons and to admit the overwhelming salience of the moral conviction that ultimately held sway because it meant more than the others. (To say that it ‘meant more’ is not to say that each reason has a measure of ‘meaning’ and that the moral consideration ‘weighed’ more than the others. This would be to reintroduce the idea of ‘calculation’. Rather, the moral consideration was a reason of a different kind and had a particular importance. Indeed, it is often the case that moral convictions will limit the courses of action that we are prepared to consider, such that a different person in a similar situation would not even have entertained the possibility of stealing.)

Thus, we are conscious of reasons and adopt them as the ground of our actions, such that they are usually realised or expressed in what we do. The fact that we are conscious of reasons and acquiesce in them when we act is what makes them reasons rather than causes: they are our reasons and make our actions meaningful. This is what renders our actions actions, rather than mere ‘behaviour’. In addition, they make our actions ours, such that we are responsible for them. This is why AI models functioning on data, without consciousness, cannot be considered agents and are not themselves deemed to be morally responsible.


Reasoning and Processing

Following a process and executing a calculation are of course central to certain types of reasoning, but it is doubtful whether, in themselves, they should be described as reasoning. Properly speaking, this surely requires consciousness. Where a student successfully solves an equation using a prescribed formula, but without understanding what is being done, what the formula achieves, why the answer is right or what it means, would we say that he or she had ‘reasoned’? This is debatable, but the student is at least conscious, has a concept of what equations are and grasps the notion of number. The tasks of which the most advanced AI models are soon to be capable will be completed without any conscious awareness of the task to be fulfilled or the meaning of the data deployed. This is not to denigrate what artificial intelligence is able to achieve: the technology is advancing quickly and with impressive results. In the absence of consciousness, however, its remarkable capacities might better be referred to as processing or calculation rather than as ‘reasoning’ at all – and without reasons, it remains a long way from true human-level intelligence.



Neil Jordan is Senior Editor at the Centre for Enterprise, Markets and Ethics.