The Dual Aspects of Legal Reasoning in the Era of Artificial Intelligence - Defeasible Reasoning and Argumentation Scheme


Abstract

The age of artificial intelligence highlights the justificatory and dialectical aspects of legal reasoning. The defeasibility of legal reasoning stems mainly from the existence of exceptions to rules and conflicts between rules. Formal logic may well account for exceptions to rules and thus characterise defeasible reasoning. The article focuses on legal issues related to Artificial Intelligence (AI) that are being discussed in the scientific community because of their importance for understanding the mechanisms of law realisation. Some of the most pressing issues in the application of artificial intelligence include: transparency of algorithms, cybersecurity vulnerabilities, unfairness, bias and discrimination, lack of adversariality, legal personality issues, intellectual property issues, adverse effects on employees, privacy and data protection issues, liability for damages and lack of liability. Recognising the importance of artificial intelligence in the field of law, and acknowledging that the field requires constant reassessment and flexibility, this article develops a discussion that is important given the seriousness of the impact of artificial intelligence technologies on legal actors.

Full Text

The application of artificial intelligence technology in the legal field is not a new issue. As early as 1970, Bruce G. Buchanan and Thomas E. Headrick published the article "Some Speculation about Artificial Intelligence and Legal Reasoning" in the Stanford Law Review [1], one of the earliest attempts to apply computer technology to legal reasoning. From its inception, legal artificial intelligence has carried a common expectation: to improve the quantity, quality and availability of judicial services through technology. Rules are general, so exceptions often need to be created for specific cases. Take the punishment of murderers as an example: there are many exceptions to this rule, such as justifiable defense and emergency measures. Only if these exceptions do not apply can the conclusion of the general rule be tentatively deduced; if an exception is applicable, the preliminary conclusion should not be drawn. This phenomenon can be described as the defeasibility of legal reasoning. Defeasibility means that if additional information is taken into account, the original conclusion may change: based on added facts, rules and other information, a justified conclusion may become unjustified, and a preliminary conclusion is thus defeated by this additional information [2]. In this situation, traditional deductive logic, as a monotonic and monological mode of reasoning, can only subsume the facts of the case under legal rules and infer the judgment conclusion; it cannot explain the defeasible nature of legal reasoning. Defeasibility arises for two basic reasons: exceptions to rules and conflicting rules. Exceptions to rules appear in many forms: a norm may itself contain exceptions, exceptions may be located in different norms of the same statute, exceptions may be located in different statutes, and so on. Conflicts of rules mainly arise within the legal system, for example between special laws and general laws.
Since the mid-to-late 20th century, with progress in fields such as epistemology, artificial intelligence and argumentation theory, new logical tools for dealing with defeasible reasoning have emerged in a steady stream. Compared with classical logic, these new tools portray legal reasoning better [3], but they still have shortcomings. Most of them are based on non-monotonic logic, whose starting point is to describe reasoning under insufficient information: when people have insufficient information, they make reasonable guesses and withdraw them when they encounter counterexamples. Non-monotonic logic can handle exceptions to rules well. However, it confines defeasibility to a single argument, which makes it difficult to reason with inconsistent information, that is, with conflicts of rules, because once rules conflict, what matters is the comparison between multiple arguments rather than the withdrawal of a single argument. This article therefore introduces argumentation logic to describe the relationships of conflict, defeat and reinstatement between different arguments, and combines non-monotonic logic with argumentation logic to describe defeasible reasoning. This adjustment will bring legal reasoning in artificial intelligence systems closer to the real human thinking process.
This article first explains the specific aspects of legal reasoning in the era of artificial intelligence (Part 2), namely defeasibility and dialectics; second, it argues that the defeasibility of legal reasoning requires a logic that can handle defeasibility (Part 3); third, it argues that the dialectical nature of legal reasoning requires that legal reasoning be shaped in an open, dialogical process model (Part 4); finally, it gives a brief summary (Part 5).
Legal reasoning is the process of deriving unknown legal propositions (conclusions) from known legal propositions or factual propositions. There are many differences of understanding in the academic community regarding the nature of legal reasoning. What is certain, however, is that the rise of legal artificial intelligence has not created a kind of legal reasoning different in nature from traditional law; it has only highlighted specific aspects that legal reasoning already possessed, namely defeasibility and dialectics. Characterizing these aspects, however, requires more logical and symbolic processing so that intelligent systems can learn and replicate them. Defeasibility is everywhere in the law. Lawmakers do not know everything and therefore cannot reliably foresee what the future will hold. Thus legal rules, if taken literally or applied faithfully, sometimes produce outcomes that are absurd, unfair, inefficient or otherwise suboptimal. When absurd conclusions arise as an inevitable consequence of the under- and over-inclusiveness of rules, a sound legal system will usually provide a mechanism to correct them, and that mechanism relies on defeasible reasoning.
Corresponding to artificial intelligence algorithms, we need to re-understand the nature of adjudication in judicial activities. Judicial adjudication can be seen as a process in which the judge compares and analyzes the statements, arguments, reasons and assumptions put forward by the parties and then chooses the best solution. If these statements, arguments, reasons and assumptions are regarded as information or knowledge in a broad sense, judicial adjudication is a selection process based on legal and factual information. This requires "complete justification", which means that the judge must not only give the reasons supporting the chosen option but also show that there are no better alternatives, so as to prove that the final decision is the relatively best one [4]; otherwise it cannot be called "complete justification", and this is very demanding and difficult for judges. From the perspective of artificial intelligence, however, a comprehensive capture of legal rules and factual information makes "complete justification" possible. In this way, judicial decisions can and should be understood as based on multiple possible normative assumptions [5]. The obligation of the court is to select one of them as the best decision in light of the context of the case [6], which can greatly increase the persuasiveness of judicial decisions. The nature of defeasible reasoning matches the conditions of artificial intelligence. In traditional judicial reasoning, which is based on monotonic logic, the conclusion introduces no new information, because all necessary information is already contained in the premises. In defeasible reasoning, however, the conclusion can contain more than the information provided by the premises.
In particular, the combination of big data and artificial intelligence has greatly increased the possibility of discovering normative conflicts and exceptions to rules within the same legal system; in traditional, manual working environments, the ability to detect these conflicts and exceptions is relatively limited. The application of artificial intelligence technology has brought new opportunities to the legal field, allowing legal reasoning to deal with complex situations more comprehensively. Compared with legal reasoning in general, legal reasoning in the context of legal artificial intelligence should be more dialectical. This does not mean that traditional legal reasoning lacks this characteristic, only that the characteristic becomes more prominent in the context of legal artificial intelligence. Dialectics here refers to the fact that legal reasoning remains open to skeptical and opposing viewpoints, which in turn gives legal reasoning its uncertain or probabilistic character. In most court trials, the parties present different hypotheses during the proceedings in preparation for the final judgment. This process can be understood as a debate in which both parties engage in order to win the support of the judge. Virtually every version of the case advanced by one party is inherently in conflict with the versions advanced by the other parties. In order to make a decision, the court must re-examine the dialogue between the parties, carefully compare and evaluate the arguments of both sides, and weigh the different opinions in order to choose the most credible answer to the factual and legal issues of the case and to ensure that the final verdict is reasonable and just. From the perspective of artificial intelligence legal reasoning, this dialecticality is reflected in three main aspects. The first is dialecticality at the ontological level, reflected in the defeasibility of law; Jaap Hage, a representative figure in legal artificial intelligence, analyzes related issues from this ontological standpoint [7]. The second is dialecticality at the epistemological level, arising from problems such as the open structure of legal concepts: there is a sharp tension between the generality of conceptual expression and the particularity of application in practice, so this dialectic exists both in legal theory and in legal practice. The third is dialecticality reflected in the logical expression of legal rules and legal principles: rules often have exceptions, and conflicts and oppositions may arise between different rules, so a dialectical approach is needed to determine which rule is ultimately applicable.
The defeasibility of legal reasoning requires a special kind of logic. Obviously, monotonic logic with a linear structure cannot handle exceptions to rules or conflicting rules; non-monotonic logic, which can accommodate exceptions, becomes the better choice. Using non-monotonic logic also allows the steps of reasoning to be formalized and symbolized. In monotonic logic, legal rules are mainly expressed as conditional sentences, that is, as a material implication between a logical antecedent and a logical consequent, so the legal conclusion is necessarily contained in its antecedent. Logical reasoning that follows this linear structure is monotonic.
It can ensure that the conclusion is not contaminated by newly added information and that a final conclusion can be reached without exhausting all knowledge. However, one cannot be sure that newly added information will not shake the final conclusion, nor that a conclusion declared without exhausting the knowledge search will be true. Monotonic logic fails in three respects. First, it cannot accommodate exceptions within the structure of reasoning, nor can it clearly determine the extent of the impact that exceptions have on the original rules. Second, its over-reliance on substantive reasoning and corrections such as the weighing of interests and value judgments to resolve the open structure of legal concepts makes it difficult to complete one of the core tasks of research on artificial intelligence and law, namely to symbolize and formalize the open structure of legal concepts. Third, it cannot handle reasoning with inconsistent information, that is, it cannot handle rule conflicts in an orderly manner (for example, between newly enacted and previously enacted law, superordinate and subordinate law, or special and general law), and it cannot support an open-structured form of argumentation about the law. Here the function of non-monotonic logic comes to the fore. On the one hand, it can accommodate exceptions, so that previously obtained conclusions are invalidated by the addition of other information. On the other hand, it allows every step of logical reasoning and legal argumentation to be formalized and symbolized.
The application of artificial intelligence in law plays an extremely important role in advancing normative and logical research on the concept of defeasibility; it requires the use of logical symbols to algorithmize legal arguments in the manner of everyday reasoning. In logic, John L. Pollock gave a classic account of defeasibility: "P prima facie justifies S" means that if S believes or comes to believe P, and S has no reason to think P is false, then S is or will be justified in believing P [8]. Pollock calls the reasons that justify a thesis "logically good reasons", which differ from "conclusive reasons" that guarantee the truth of the conclusion. Logically good reasons can be further divided into "prima facie reasons": a prima facie reason is a reason that is by itself a good reason for believing something and that would justify the belief, but that may no longer be a good reason when combined with some other belief. Here, defeasibility means that a prima facie reason loses its force because of "some other belief". Such contrary beliefs are called "defeaters": if P is a logical reason for S to believe Q, then R defeats this reason if and only if the conjunction of P and R is not a logical reason for S to believe Q. Pollock also distinguished between rebutting defeaters and undercutting defeaters: the former attack the belief (or claim) itself, while the latter attack the connection between the belief and its supporting reasons. If we have concluded C on the basis of a particular foundation G, a rebutting defeater provides a reason for believing that C is false, while an undercutting defeater provides a reason for believing that G is not a good foundation for C [9].
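To make Pollock's distinction concrete, the following minimal Python sketch (our own illustration, not drawn from the cited works; the propositions are hypothetical) represents a prima facie reason and checks both kinds of defeaters against the currently accepted beliefs.

```python
from dataclasses import dataclass, field

@dataclass
class PrimaFacieReason:
    """Foundation G offered as a prima facie reason for conclusion C."""
    foundation: str
    conclusion: str
    rebutting: list[str] = field(default_factory=list)     # reasons to believe C is false
    undercutting: list[str] = field(default_factory=list)  # reasons to doubt that G supports C

    def justified(self, beliefs: set[str]) -> bool:
        """C is provisionally justified iff G is believed and no defeater is believed."""
        if self.foundation not in beliefs:
            return False
        if any(d in beliefs for d in self.rebutting):
            return False  # rebutted: the conclusion itself is attacked
        if any(d in beliefs for d in self.undercutting):
            return False  # undercut: the G-to-C link is attacked
        return True

# Hypothetical legal illustration:
reason = PrimaFacieReason(
    foundation="an eyewitness testifies that A struck V",
    conclusion="A struck V",
    rebutting=["A has a verified alibi for the time of the incident"],
    undercutting=["the eyewitness could not see the scene clearly"],
)
print(reason.justified({"an eyewitness testifies that A struck V"}))   # True
print(reason.justified({"an eyewitness testifies that A struck V",
                        "the eyewitness could not see the scene clearly"}))  # False (undercut)
```

The sketch makes the asymmetry visible: a rebutting defeater would also block the conclusion if it were reached by some other route, whereas the undercutting defeater only disables this particular foundation.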
It remains questionable, however, whether exceptions to rules must be handled by defeasible reasoning. Alchourron famously claimed that defeasibility can be dealt with through deductive reasoning plus belief revision, so that there is no need for non-monotonic reasoning [10]. Alexy believes that material implication can likewise capture the defeasibility of legal arguments [11]. The following uses a concrete example to show why treating exceptions to rules with non-monotonic reasoning is preferable to belief revision and material implication. Under normal circumstances, starting from the legal rule "whoever intentionally injures others shall be sentenced to fixed-term imprisonment" and a fact satisfying its antecedent, "A intentionally injured another person", one can conclude that "A should be sentenced to fixed-term imprisonment". The following reasoning holds: (P1) whoever intentionally injures others shall be sentenced to fixed-term imprisonment; (P2) A intentionally injured another person; (Q) A should be sentenced to fixed-term imprisonment. But when A's act is justifiable defense, this reasoning no longer stands: although the rule and the fact remain, the conclusion cannot be drawn from them once the character of the act as justifiable defense is considered. In a legal syllogism, then, people can withdraw the conclusion without withdrawing any premise, and this is not consistent with deductive logic: in truth-preserving deductive reasoning, to withdraw the conclusion one must withdraw at least one premise, because the truth of the conclusion is implied by the conjunction of the premises.
The belief-revision approach holds that when A's act is justifiable defense, people realize that the rule in the above reasoning is wrong, or at least inaccurate, and should be revised to "whoever intentionally injures others shall be sentenced to fixed-term imprisonment, unless the act is justifiable defense". In this sense legal rules are not defeasible but merely amendable: when an exception occurs, it is not that the antecedent of the rule is satisfied and the conclusion fails to follow, but that the antecedent is not satisfied at all. If we adopt this understanding, however, how should we view reasoning from the rule under normal circumstances? If the premise of the reasoning is still the original rule, the reasoning starts from a wrong or inaccurate premise and is therefore not trustworthy. If the premise is taken to be the revised rule, then the factual premises must include not only "A intentionally injured another person" but also "A's act does not constitute justifiable defense". The problem is that this understanding is inconsistent with the actual reasoning process: under normal circumstances, people do not consider whether A's act is justifiable defense but draw the conclusion directly from the premise of intentional injury. Moreover, the rule that whoever intentionally injures others shall be sentenced to fixed-term imprisonment admits not only the exception of justifiable defense but also emergency avoidance, official acts, the victim's consent, incapacity, and so on. A complete reconstruction of the rule would have to include all exceptions, and such a reconstruction is unlikely to be complete, since the exceptions to a rule cannot in principle be enumerated exhaustively. In fact, even where the exceptions are finite and could be listed in advance, listing them is constrained by limited resources such as time and energy. In many cases, people can only rely on rules that do not list all their exceptions.
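The non-monotonic treatment of this example can be sketched as follows. The Python fragment below is a minimal illustration of our own (with hypothetical predicate names; it is not a full default logic): it draws the tentative conclusion from the rule and withdraws it as soon as an exception fact such as justifiable defense is added, without revising the rule itself.

```python
# Known exception predicates for the rule "intentional injury -> fixed-term imprisonment".
EXCEPTIONS = {"justifiable_defense", "emergency_avoidance", "official_act",
              "victims_consent", "incapacity"}

def tentative_conclusion(facts):
    """Defeasible rule application: conclude unless a known exception is among the facts."""
    if "intentional_injury" in facts and not (facts & EXCEPTIONS):
        return "sentence_to_fixed_term_imprisonment"
    return None

facts = {"intentional_injury"}
print(tentative_conclusion(facts))        # tentative conclusion: imprisonment
facts.add("justifiable_defense")          # new information arrives
print(tentative_conclusion(facts))        # None: the conclusion is withdrawn
```

Note that the premises P1 and P2 are never retracted; only the conclusion is withdrawn when the premise set expands, which is exactly the non-monotonic behaviour described above.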
The material-implication approach means reinterpreting the rule and reconstructing the rule and rule-based reasoning in another way. It does not write every exception into the rule, but reinterprets the rule as containing a "normality" condition. Regarding the rule and facts of the example above, under normal circumstances people can draw the inference; when A's act is justifiable defense, the conclusion is withdrawn, but this is because the premise P is withdrawn: justifiable defense lacks the social harmfulness that intentional injury ordinarily has, so withdrawing the conclusion does not conflict with the truth-preservation requirement of deductive reasoning. The question is whether, in such a reasoning process, P is assumed to be true or proven to be true. If P must be proven true, then all exceptions still need to be examined, whether justifiable defense, emergency avoidance and so on; such an understanding merely pushes part of the reasoning behind the scenes and obscures the complexity of the scheme. If P is merely assumed to be true, then, since nothing in the reasoning discharges that assumption, the conclusion Q must also be merely assumed to be true. The reason people want to reconstruct legal reasoning as deductively valid, truth-preserving reasoning is precisely to be able to establish that the conclusion is true when the premises hold, so that the entire reasoning is reliable. The pursuit of deductive truth-preservation, however, inevitably makes it impossible to establish whether a given premise is true, so the reliability of legal reasoning remains out of reach and this kind of deductive reconstruction becomes meaningless. Faced with the limitations of monotonic approaches such as belief revision and material implication, legal practice shows that treating rule-based legal reasoning as non-monotonic, defeasible reasoning is an effective solution. The essence is that an expansion of the premise set can lead to a change in the conclusion, and this expansion is permitted by the open structure of the law. In the process of constructing legal arguments, the rules are continually revised as the legal interpretations applied to the case are updated, and the legal facts of the case change as new evidence is added, so that the original legal conclusions may be changed or even rebutted. From the preceding analysis we can see that legal arguments based on legal reasoning are defeasible and can be defeated by stronger arguments, that is, by refuting their premises, conclusions or inference relations and by continually introducing new counter-arguments, which invalidates the original argument.
When reconstructing the form of the judicial reasoning model, the general construction process is as follows. Suppose there is a legal rule:
A1: A person who has reached the age of 18 has full capacity for civil conduct.
Then add an exception to the rule:
A2: A person who is unable to recognize his or her own conduct is not considered to have full capacity for civil conduct.
The rule and its exception can be combined into a new rule:
A3: A person who has reached the age of 18 and is not unable to recognize his or her own conduct has full capacity for civil conduct.
For a norm with clear rules and few exceptions, we can create a final rule accommodating all exceptions by incorporating the exceptions into the initial rule a limited number of times. Continuing, add another exception to the rule above:
A4: A minor over the age of 16 whose main source of livelihood is his or her own labor income is regarded as a person with full capacity for civil conduct.
A new rule can then be derived:
A5: A person who has reached the age of 18 and is not unable to recognize his or her own conduct, or a minor over the age of 16 whose main source of livelihood is his or her own labor income, has full capacity for civil conduct.
In such a simple situation, one can quickly decide whether someone has full civil capacity. Of course, this is not a complete formalization: if new exceptions to the rule are later identified, new branches must be added to the previously complete rule, until finally a legal rule accommodating all exceptions is obtained. In simple cases this model of reasoning with revised premises is unproblematic, but even for simple applications it takes considerable effort to shape the rules. If the form of defeasible reasoning is adopted instead, and the premises need not be guaranteed true, the relevant rules can be organized as a rule together with a set of exceptions:
A1: A person who has reached the age of 18 has full capacity for civil conduct, provided that none of the exceptions in Bn = {B1, B2, B3} is satisfied:
B1: A person with a mental illness is not considered to have full capacity for civil conduct.
B2: A minor over the age of 16 whose main source of livelihood is his or her own labor income is regarded as a person with full capacity for civil conduct.
B3: An adult who is unable to recognize his or her own conduct is not considered a person with full capacity for civil conduct.
At the level of argumentation, rules of type A1 are elements to be proved, while rules such as those in Bn, which represent exceptions, are elements that must remain unrefuted [12]. If the constitutive requirements of A1 are met and the conditions in the exception set are not sufficient to trigger a refutation, a conclusion supporting the premise can be drawn. Since we do not need to write out the complete rule that ultimately applies, we only need to list the set of exceptions to the rule separately for judgment. As this argument shows, the characteristic of a defeasibility-based argument is that the addition of an exception means that the original inference result can no longer be logically deduced. Compared with applying a complete rule, this mode of argument does not fix in advance the unique conclusion that follows once every element is supplied; instead it emphasizes the adversarial relationship between the elements to be proved and the elements that have not been refuted. If a subsequent argument is supported by better reasons, the conclusion of the earlier argument no longer receives equal support.
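A minimal sketch of how such a rule-plus-exception-set representation might be evaluated is given below. It is our own illustration, and the attribute names are hypothetical: the rule A1 supplies the element to be proved, while the set Bn supplies the elements that must remain unrefuted.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    can_recognize_own_conduct: bool = True
    has_mental_illness: bool = False
    self_supporting_by_labor: bool = False

def has_full_civil_capacity(p: Person) -> bool:
    # A1 (element to be proved): a person who has reached 18 has full capacity,
    # unless one of the exceptions in Bn is activated.
    if p.age >= 18:
        b1 = p.has_mental_illness                 # B1
        b3 = not p.can_recognize_own_conduct      # B3
        if not (b1 or b3):
            return True
    # B2: a minor over 16 whose main livelihood is his or her own labor income
    # is regarded as having full capacity.
    if 16 <= p.age < 18 and p.self_supporting_by_labor:
        return True
    return False

print(has_full_civil_capacity(Person(age=19)))                                  # True
print(has_full_civil_capacity(Person(age=19, can_recognize_own_conduct=False))) # False (B3)
print(has_full_civil_capacity(Person(age=17, self_supporting_by_labor=True)))   # True  (B2)
```

The point of the defeasible formulation is that new exceptions can be added to Bn without rewriting A1, whereas the "complete" rule A5 would have to be reformulated each time a new exception is recognized.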
In order to realize this hierarchical model of legal reasoning, a corresponding argumentation scheme must be introduced. At argumentation level L0, the existence of premise P supports our conclusion Q; at level L1, the introduction of an exception defeats the argument at level L0; subsequently, the introduction of a second-order exception may defeat the argument at level L1 and thereby reinstate the conclusion of the L0 argument, and so on. In other words, during the operation of the defeasible model temporary vetoes of validity may occur, but the negation of an earlier conclusion does not mean the end of the argument: an argument result negated at one stage may still be used as a reason in subsequent argumentation. Until the final conclusion of the argument is reached, all earlier judgment nodes, including those defeated along the way, must remain open to truth judgments that may turn out to be right or wrong, and confrontation must be allowed between the levels of argumentation. Once a level of argumentation is "sealed", one risks working backward from the final level, deleting all earlier objections, and ending up with an argument that contains only supporting reasons.
Defeasibility must be demonstrated through reasoning and dialogue, which requires the participation of multiple subjects rather than the monologue of a single arguer. The argument is completed when intersubjective agreement is reached, either because the opponent withdraws the challenge or because the arguer revises the argument. The entire process of judicial adjudication is not one in which a single subject unilaterally states his or her views, but one in which two or more parties to the case argue with each other. Legal reasoning in the traditional sense is regarded only as reasoning from premises to conclusion, but this is only one part, one stage, of the process of legal argumentation. The process also includes the opponent's refutation of the arguer's argument, the arguer's response to the refutation, the arguer's improvement of the initial argument, the opponent's further questioning, and so on. Legal argumentation is thus not a monotonic, monological process but a reasoning process in which information constantly increases and inferences are constantly revised. Research on such non-monological legal reasoning systems is promising; they can fully embody the spirit of legal discourse theory, but many problems remain. The first to be solved is how to weigh different arguments against one another and to conclude which argument is best. Some scholars believe that a non-monological legal reasoning system can ultimately weigh the correctness of different viewpoints and choose among them; in fact, this is a very difficult goal to achieve. When rules conflict with rules or arguments with arguments, higher-level non-rule standards such as policies, principles and basic values must be resorted to. At this point, dialectical logic should be added to defeasible reasoning so that different arguments can be balanced within an open, dialogical debate system.
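The layered defeat-and-reinstatement behaviour described above (L0 defeated by L1, L1 defeated by L2, L0 reinstated) can be sketched with a simple argument-labelling procedure. The following Python fragment is our own minimal illustration of a grounded-style labelling over an attack relation; it is not drawn from the cited literature, and the argument names are hypothetical.

```python
def grounded_labels(arguments, attacks):
    """Iteratively label arguments IN/OUT; attacks is a set of (attacker, target) pairs.

    An argument is IN when all of its attackers are OUT (in particular, when it has
    no attackers), and OUT when at least one attacker is IN. Arguments caught in
    unresolved cycles simply remain unlabelled (undecided).
    """
    labels = {}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in labels:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if all(labels.get(x) == "OUT" for x in attackers):
                labels[a] = "IN"      # no surviving attacker: accepted
                changed = True
            elif any(labels.get(x) == "IN" for x in attackers):
                labels[a] = "OUT"     # defeated by an accepted attacker
                changed = True
    return labels

# L0: premise P supports Q; L1: an exception attacks L0; L2: a second-order
# exception attacks L1. Result: L2 and L0 are IN, L1 is OUT, so Q is reinstated.
print(grounded_labels({"L0", "L1", "L2"}, {("L1", "L0"), ("L2", "L1")}))
```

Comparing labelled arguments in this way is one simple realization of the idea that defeat at one level can be undone at the next, rather than the final word on how competing arguments should be weighed.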
From the perspective of modeling legal reasoning with artificial intelligence, this dialecticality is mainly reflected in the following aspects. First, with respect to the modeling process, dialecticality is reflected in the non-monotonicity of the logic used to express it; the main source of this non-monotonicity is the dialectical relationship between new facts or information about the context of the dispute and the existing premises on which normative conclusions are based [13]. Second, with respect to the logic of the model, dialecticality is reflected in the defeasibility of its reasoning, that is, in using a dialectical approach to handle the defeasibility of the underlying logic. Third, with respect to debate within the model, dialecticality appears as a debatability and defeasibility that must be realized through a dialogue model, presented as argument games between two (or more) participants with different debate roles (proponent and opponent) over different propositions [14].
Put simply, argumentation schemes are reasoning patterns that have solidified in everyday linguistic debate; they are methods of reasoning that are neither deductive nor inductive. Although an argumentation scheme may take the form of affirming the antecedent, it differs essentially from the modus ponens of classical logic. The treatment by non-monotonic logic described in Part 3, which handles all types of defeasibility within a single argument, does not really correspond to the reality of legal reasoning. In legal practice, people often reach conclusions by comparing the strength of competing arguments, and the earlier formalism cannot characterize this thought process. For a legal rule, its exceptions always outweigh it, whereas a conflicting rule does not necessarily do so; conversely, a conflicting rule can generally be used to reach the opposite conclusion, whereas the rule's exceptions cannot. Non-monotonic logic handles exceptions at the logical level, while argumentation schemes can handle conflicts between rules at the argumentation level, so that the correct understanding of these rule relationships can be properly characterized. We now describe the general structure of argumentation schemes from a logical perspective. Generally speaking, an argumentation scheme consists of a conclusion, a set of premises, a set of conditions for the use of the scheme, and a set of exceptions that block its use. Its general structure is:
Conclusion;
Premises: Premise 1, Premise 2, ..., Premise n;
Conditions: Condition 1, Condition 2, ..., Condition k;
Exceptions: Exception 1, Exception 2, ..., Exception i.
This logically oriented approach deviates from traditional logic in some respects because it is a concrete, dialectical logical approach. "Concrete" means that instances of the scheme may belong to specific debate situations and are not necessarily universally applicable and context-independent; "dialectical" means that the scheme may encounter counter-arguments, that is, there may be situations in which the scheme does not lead to its conclusion even though its premises are given. In what follows we review the argument structures proposed by several scholars and then establish the argument form used in this article. The argument structure described by Joel Katzav and Chris Reed is [15]: (1) the form of the argument's premise(s); (2) the form of the argument's warrant; (3) the conclusion. Typically, a warrant is expressed as a conditional: the antecedent of this conditional corresponds to the premises given in the form, and the consequent corresponds to one or more facts being conveyed.
For example, an argument from cause to effect runs: (1) A; (2) if A, then A causes B; therefore, B. One might be inclined to represent the warrant of a cause-to-effect argument by the form "A causes B" rather than by the conditional used in (2). But a proposition of the form "A causes B" would already entail the premise given by the form "A", making (1) redundant; to avoid this, the conditional form is commonly used to represent warrants in argumentation schemes. A prominent example of an argumentation scheme as a defeasible modus ponens rule can be described as: (1) P; (2) if P, then usually Q; therefore (presumably), Q. The scheme represented by this structure can be attacked by arguing that there are exceptions to the rule (e.g., P and R, and if P and R, then usually not Q). Each scheme has its own typical method of critical examination, so schemes are a worthwhile supplement to the study of abstract inference forms alone [16]. Giovanni Sartor holds that the general structure of reasoning schemata is [17]: A1, ..., An are reasons for B1, ..., Bm. Such reasons may be reasons for drawing a final conclusion or reasons for drawing a defeasible conclusion. If the character of argumentation schemes as defeasible reasoning patterns is highlighted, a general pattern that can cover almost all argumentation schemes is [18]: Major premise: α ⇒ β; Minor premise: α; Conclusion: β. The connective "⇒" represents defeasible implication, and the major premise means "if α, then presumably β": the conclusion β is derived from α and holds only if there is no exception and no prevailing reason for the opposite conclusion (rebuttal). An argumentation scheme can be recast in this defeasible modus ponens form, in which the antecedent of the major premise is a conjunction of statements, each representing a prerequisite of the original scheme. For example, the scheme of appeal to expert opinion becomes: Major premise: (E is an expert and E says A) ⇒ A; Minor premise: E is an expert and E says A; Conclusion: A is true. Critical questions can then be reformulated as counter-evidence that undercuts the scheme (so that it no longer applies) or rebuts its premises. For example, the critical question about the reliability of the expert can be rephrased as the undercutter: E is unreliable ⇒ ¬[(E is an expert and E says A) ⇒ A].
However one characterizes the general structure of an argumentation scheme, two elements must be captured, premises (arguments or reasons) and a conclusion (or claim), together with a defeasible inference rule. Specific argumentation schemes are all developed on this basis and are embodiments of premises, conclusions and inference rules. In fact, a formula proposed by Robert C. Pinto best reflects the essence of the argumentation scheme: for any s, t and x, if s thinks (expects, affirms, assumes, etc.) at t that "x is F", then, in the absence of weakening or overriding evidence at t, it is reasonable for s to believe (expect, affirm, assume, etc.) at t that "x is G" [19]. In this way, argument evaluation shifts from a truth-preserving norm to an entitlement-preserving norm: the former means that when the premises of a valid argument are true the conclusion cannot be false; the latter means that when the premises are acceptable (reasonable), people are entitled, according to the argument, to draw a defeasible conclusion.
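As an illustration of how such a scheme (premises, conditions of use, exceptions, defeasible conclusion) might be encoded, the following Python sketch is our own simplified rendering of the expert-opinion example; the field names and the particular condition and critical question used here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ArgumentationScheme:
    """Conclusion, premises, conditions of use, and exceptions blocking the scheme."""
    conclusion: str
    premises: list[str]
    conditions: list[str] = field(default_factory=list)
    exceptions: list[str] = field(default_factory=list)

    def applies(self, accepted: set[str]) -> bool:
        """Defeasible modus ponens: conclude if premises and conditions hold and no exception does."""
        return (all(p in accepted for p in self.premises)
                and all(c in accepted for c in self.conditions)
                and not any(e in accepted for e in self.exceptions))

# Appeal to expert opinion, recast as a defeasible scheme:
expert_opinion = ArgumentationScheme(
    conclusion="A is true",
    premises=["E is an expert", "E says A"],
    conditions=["A falls within E's field"],   # condition of use (hypothetical)
    exceptions=["E is unreliable"],            # critical question recast as an undercutter
)

accepted = {"E is an expert", "E says A", "A falls within E's field"}
print(expert_opinion.applies(accepted))        # True: tentatively conclude A
accepted.add("E is unreliable")
print(expert_opinion.applies(accepted))        # False: the scheme is undercut
```

In the entitlement-preserving spirit described above, `applies` does not establish that the conclusion is true; it only records that, given the accepted statements, one is provisionally entitled to draw it.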
We can propose a simplified general structure of the argumentation scheme [20]: usually, whoever satisfies condition P can be considered (expected, affirmed, assumed, etc.) to satisfy C; in the case at hand, exceptions (evidence that weakens or overturns C) have been ruled out; therefore, it is reasonable to think (expect, affirm, assume, etc.) that the conclusion C holds, presumptively though fallibly [21]. To sum up, given limited capacity we can so far construct only a very rough and simple argumentation scheme, and a more detailed characterization must await further research; from another perspective, however, such a simple model may be better suited to artificial intelligence technology [22]. The process of defeasible legal reasoning is a demonstration, through argumentation, of the relative strength and weakness of reasons. As long as rules can be analyzed correspondingly at the level of reasons, they can enter the judgment logic of an intelligent system. In the judicial context, however, adjudication must not only pursue substantive justice but also attend to the appearance of justice, that is, to the justification of the argued result. The arguer's conclusion derives its support from premises and reasons, and the legitimacy of those premises or reasons becomes the focus of argumentative dispute. A single defeasible inference can reflect only one level of the argument presented by the arguer, so further dialectical discussion of the disputes at different levels is needed to complete the confirmation of the legitimacy of the premises or reasons [23]. Making argumentative rationality explicit imposes on the arguer an obligation to respond to criticism at the dialectical level; if criticisms are ignored, the argument will appear unreasonable. The argumentation process therefore also attends to the attack and defence of arguments, which can be related to the critical questions of the argumentation scheme and is closely bound up with the evaluation of arguments.
In addition, judging from current technological development, there are still technical difficulties in converting legal language into computer language flawlessly [24]. More importantly, intelligent systems are still unable to make value judgments as human beings do. It remains doubtful whether the many propositions that are not yet settled within human ethical knowledge can be handed over to machines that estimate the probable verdict of a case from past materials, and such machines also run into a series of problems concerning value, dignity and the subject status of human beings. It is worth noting that the limits of artificial intelligence's use of defeasible reasoning deserve our vigilance: defeasible reasoning can at best provide warrant and cannot provide the kind of confirmation that deductive reasoning provides. The greater difficulty is that artificial intelligence algorithms are generally not public. In this situation, the combination of non-public algorithms, closed systems and merely plausible conclusions will not only increase the difficulty of holding anyone accountable for wrong conclusions but will also shake the credibility of the content of judicial judgments.

About the authors

Ze Li

Shanghai University of Political Science and Law

Email: strawberry7576@hotmail.com
ORCID iD: 0000-0002-0815-8537

Doctor of Law, Associate Professor of the School of Law

Shanghai, China

Feiping Lei

Shanghai University of Political Science and Law

Email: leifeiping1997@163.com
ORCID iD: 0009-0005-3934-0605

PhD student

Shanghai, China

Dmitry N. Ermakov

RUDN University

Author for correspondence.
Email: ermakov-dn@rudn.ru
ORCID iD: 0000-0002-0811-0058
SPIN-code: 6835-3155

Doctor of Economics, Doctor of Political Sciences, Professor of the Department of Innovation Management in Industries, Engineering Academy

Moscow, Russia

Nina V. Symaniuk

Ural Federal University

Email: n.v.symaniuk@urfu.ru
ORCID iD: 0000-0002-8446-857X
SPIN-code: 6130-5695

Associate Professor, Department of Public Law, Graduate School of Economics and Management

Ekaterinburg, Russia

Ilya V. Poletaev

RUDN University

Email: 1142220172@rudn.ru
ORCID iD: 0000-0002-3767-6659
SPIN-code: 1110-4456

PhD student of the Department of Innovation Management and Foreign Economic Activity in Industry, Graduate School of Industrial Policy and Entrepreneurship

Moscow, Russia

Naofal M.H. Aziz

RUDN University

Email: 1042208064@rudn.ru
ORCID iD: 0009-0004-5014-6195

PhD student of the Department of Innovation Management in Industries of the Engineering Academy

Moscow, Russia

Ahmed Obaid

RUDN University

Email: 1042218171@rudn.ru
ORCID iD: 0009-0001-1556-6985

PhD student of the Department of Innovation Management in Industries of the Engineering Academy

Moscow, Russia

Pavel I. Ivanov

RUDN University

Email: 1142220389@rudn.ru
ORCID iD: 0009-0002-5456-6934

PhD student of the Department of Innovation Management in Industries of the Engineering Academy

Moscow, Russia

References

  1. Buchanan BG, Headrick TE. Some Speculation about Artificial Intelligence and Legal Reasoning. Stanford Law Review; 1970.
  2. Lodder AR. Dialaw: On Legal Justification and Dialogical Models of Argumentation. Dordrecht, Boston, London: Kluwer Academic Publ.; 1999.
  3. Reiter R. A logic for default reasoning. Artificial Intelligence. 1980;13(1-2):81-132.
  4. Prakken H, Sartor G. Modeling Reasoning with Precedents in a Formal Dialogue Game. Artificial Intelligence and Law. 1998;6(2). https://doi.org/10.1023/A:1008278309945
  5. Gardner A. von der L. An Artificial Intelligence Approach to Legal Reasoning. MIT Press; 1987. Available from: https://archive.org/details/artificialintell00vond (accessed: 14.09.2023).
  6. Taruffo M. Judicial Decisions and Artificial Intelligence. Artificial Intelligence and Law. 1998;6:311-324. https://doi.org/10.1023/A:1008230426783
  7. Hage J. Studies in legal logic. Springer; 2005.
  8. Pollock JL. Knowledge and Justification. Princeton University Press; 1972.
  9. Pollock JL. Knowledge and Justification. Princeton University Press; 1974.
  10. Alchourron C, Gardenfors P, Makinson D. On the Logic of Theory Change: Partial Meet Contraction and Revision Functions. Journal of Symbolic Logic. 1985;50(2):510-530. https://doi.org/10.2307/2274239
  11. Alexy R. Book Review of Logical Tools for Modelling Legal Argument. Argumentation. 2000;14:66-72.
  12. Sartor G. Defeasibility in Legal Reasoning. In: Bankowski Z, White I, Hahn U (eds.) Informatics and the Foundations of Legal Reasoning. Law and Philosophy Library, (vol. 21). Dordrecht: Springer. 1995. https://doi.org/10.1007/978-94-015-8531-6_4
  13. Stone Sweet A. The Judicial Construction of Europe. Oxford University Press; 2004.
  14. De Maat E, Winkels R, Van Engers T. Automated detection of reference structures in law. IOS Press; 2006.
  15. Katzav J, Reed C. A Classification System for Arguments. University of Dundee, Scotland UK, 2010.
  16. Prakken H. AI & Law, Logic and Argument Schemes. Argumentation. 2005;19:303-320. https://doi.org/10.1007/s10503-005-4418-7
  17. Sartor G. Legal Reasoning: A Cognitive Approach to the Law. Heidelberg: Springer; 2005.
  18. Walton D, Sartor G. Teleological Justification of Argumentation Schemes. Argumentation. 2013;27:111-142. https://doi.org/10.1007/s10503-012-9262-y
  19. Pinto RC. Evaluating Inferences: The Nature and Role of Warrants. In: Hitchcock D, Verheij B (eds.) Arguing on the Toulmin Model. Dordrecht: Springer Publ.; 2006. p. 115-143. https://doi.org/10.1007/978-1-4020-4938-5_9
  20. Garcia AJ, Simari GR. Defeasible logic programming: an argumentative approach. Theory and Practice of Logic Programming. 2004. p. 95-136.
  21. Sovrano F, Sapienza S, Palmirani M, Vitali F. Metrics, Explainability and the European AI Act Proposal. J - Multidisciplinary Scientific Journal. 2022;5(1):126-138. https://doi.org/10.3390/j5010010
  22. Pagallo U, Durante M. The Good, the Bad, and the Invisible with Its Opportunity Costs: Introduction to the ‘J’ Special Issue on “the Impact of Artificial Intelligence on Law”. J - Multidisciplinary Scientific Journal. 2022;5(1):139-149. https://doi.org/10.3390/j5010011
  23. Casanovas P, de Koker L, Hashmi M. Law, SocioLegal Governance, the Internet of Things, and Industry 4.0: A Middle-Out/Inside-Out Approach. J - Multidisciplinary Scientific Journal. 2022;5(1):64-91. https://doi.org/10.3390/j5010005
  24. Bentzen B, Liao B, Liga D, Markovich R, Wei B, Xiong M, Xu T (eds.) Logics for AI and Law. Joint Proceedings of the Third International Workshop on Logics for New-Generation Artificial Intelligence and the International Workshop on Logic, AI and Law. September 8-9 and 11-12, 2023. Hangzhou: College Publications Publ.; 2023.

Copyright (c) 2024 Li Z., Lei F., Ermakov D.N., Symaniuk N.V., Poletaev I.V., Aziz N.M., Obaid A., Ivanov P.I.

License URL: https://creativecommons.org/licenses/by-nc/4.0/legalcode
