This website displays experiments conducted with a non-monotonic reasoning mechanism called logical lateration.
**Citation:** If you use this website, please cite the following publications:

- H. R. Schmidtke. A canvas for thought. Procedia Computer Science, 145:805–812, 2018.
- H. R. Schmidtke. A survey on verification strategies for intelligent transportation systems. Journal of Reliable Intelligent Environments, 4(4):211–224, 2018.
- H. R. Schmidtke. Logical lateration – a cognitive systems experiment towards a new approach to the grounding problem. Cognitive Systems Research, 52:896–908, 2018.

Connecting logic -- or language -- and perception is a key step towards understanding human cognition as well as building AI systems with human-like intelligence. It is also a key component for explainable AI systems to be deployed in critical domains, such as autonomous transportation systems. Understanding this interface has proven more difficult than anticipated in the 1980s. We know that we can extract symbols from images with machine learning (ML) techniques, using a trained classifier. The opposite direction, however, has so far seemed more elusive. It is, of course, possible to construct software or ontologies that, given linguistic or qualitative descriptions of a layout, produce an image. However, a human translator needs to construct such software or ontologies, whereas ML is a meta-mechanism for which we also find evidence in natural cognitive systems in the form of neural networks. We also have a rich tradition in logical reasoning and linguistics, but since there has been no way back from logic to action/perception that is as generic in nature as ML, these fields are currently further removed from the perceivable world, and proposals are discussed to reduce the level of logic and language to ML. This has a multitude of ramifications; in particular, some might argue that associative reasoning is the *new* reasoning, with logical reasoning portrayed as the *old* reasoning, soon to be replaced. In evolutionary terms, however, associative reasoning -- leveraged, e.g., in recommender systems for social media, shopping, and advertisement -- is certainly the older form of reasoning, which we share, e.g., with most mammals, while language and logical reasoning appeared only very late and fully developed only in human cognition.

Objectively, there is an astonishing gap between human beings and their closest relatives among the primates. We have, for instance, airplanes, laws, markets, and operas, and what fundamentally and obviously distinguishes us cognitively from our evolutionarily close relatives are language and logic. The creativity we exhibit, and the accuracy and precision with which we can relay our findings even in purely verbal communication, allow us to spread the findings of a single individual over space and time. Bridging the gap between logical reasoning and language, on the one side, and association learning and perception, on the other, has therefore received much interest in AI and Cognitive Systems research.

One way to bridge the gap -- conjectured in the 1980s as a possible solution -- would be for logic/language to have an analogous property. We can extract linguistic symbols from input images via classification; can we, conversely, extract images from logical formulae? Is there a systematic relationship between a sentence such as "A is north of B" and a map that depicts A and B? This project studies a candidate for such a mechanism, called "Logical Lateration" (LL). LL is a purely logical reasoning mechanism that converts formulae into a logical format with analogous properties, i.e., one that can be drawn.

To understand how this works, we need some understanding of equivalences between logical languages. We would usually formalize "A is north of B" in terms of predicate logic, with "north of" as a binary predicate or binary relation: N(A,B). The relation N for "north of" belongs, at least within smaller regions, to the partial order relations. These relations are particularly interesting for reasoning and have been studied extensively. The subset relation over sets is a familiar example, as is the subsumption relation between concepts in taxonomic reasoning, e.g., with description logics. The logical entailment relation over propositional formulae, which can be mapped to the subset relation between the models of the formulae -- the rows in the truth table that evaluate to true -- is another example. "North of" is an ordering relation, and thus also a partial ordering relation.
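
The correspondence between entailment and the subset relation over models can be checked by direct enumeration of truth tables. The sketch below is illustrative only (the helper name `models` is not part of the implemented system): it shows that A ⋀ B entails A precisely because the models of A ⋀ B form a subset of the models of A.

```python
from itertools import product

def models(formula, variables):
    """The rows of the truth table (variable assignments) on which
    `formula` evaluates to true."""
    return {values for values in product([False, True], repeat=len(variables))
            if formula(dict(zip(variables, values)))}

VARS = ["A", "B"]
conj = lambda row: row["A"] and row["B"]  # A ∧ B
just_a = lambda row: row["A"]             # A

# A ∧ B entails A, and accordingly its models form a subset of the models of A:
print(models(conj, VARS) <= models(just_a, VARS))  # True
```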

Given those systematic equivalences, we can represent simple partial order relational statements in a simple propositional logic format: "A is north of B" can be logically represented as "B ⋀ N → A." This is intuitively similar to the natural language statement in the sense that the verb "is" separates the two operands of "→", and "B ⋀ N" expresses the prepositional phrase "north of B." This formula has the truth table shown below:

N | A | B | B ⋀ N → A | A ⋀ N | B ⋀ N | CN(A) | CN(B) |
---|---|---|---|---|---|---|---|
0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 |
0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 |
1 | 1 | 0 | 1 | 1 | 0 | 1 | 0 |
1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
  |   |   |   |   |   | sum: 2 | sum: 1 |

The truth table shows which truth value a logical formula obtains given the truth values for its components. This is the standard method to determine the semantics, i.e., meaning, of a propositional logic formula.

In order to obtain the north-coordinate for A, Logical Lateration first calculates the truth values of the formula A ⋀ N in each row and then counts the rows in which both the statement B ⋀ N → A and A ⋀ N are true (the column CN(A) in the table). The sums in the last row yield the north-coordinates CN(A) = 2 and CN(B) = 1, correctly placing A north of B.
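
This model-counting step can be sketched in a few lines of Python. The function `coordinate` below is a hypothetical brute-force helper for illustration, not the actual implementation:

```python
from itertools import product

VARS = ["N", "A", "B"]
# "A is north of B" as the propositional statement B ∧ N → A
north_of = [lambda row: not (row["B"] and row["N"]) or row["A"]]

def coordinate(obj, relation, statements):
    """Count the truth-table rows that satisfy every statement and
    assign 1 to both `obj` and `relation` (the model count CN(obj))."""
    count = 0
    for values in product([False, True], repeat=len(VARS)):
        row = dict(zip(VARS, values))
        if all(stmt(row) for stmt in statements) and row[obj] and row[relation]:
            count += 1
    return count

print(coordinate("A", "N", north_of))  # CN(A) = 2
print(coordinate("B", "N", north_of))  # CN(B) = 1
```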

We can have arbitrarily many independent relations. If we add a relation E (east of) in a statement "A is east of C" -- formally: "C ⋀ E → A" -- the number of rows in the truth table quadruples (doubling for E, then doubling for C), each copy replicating the same properties as before regarding N. The conjunction of the two statements has a 0 wherever either of the two has a 0 entry; that is, the new statement only reduces the number of 1s. Of the four replicated copies of the truth table above, distinguished by their values in C and E, only the copy with C=1 and E=1 is modified by "C ⋀ E → A", and only in the four rows where A=0; all other rows stay the same. The rows thus removed are those where C=1, E=1, and A=0, with B and N either 0 or 1. Only the rows with N=1 are decisive for the north-coordinates. Among these, the row C=1, E=1, N=1, A=0, B=1 had already been removed by the first statement, so the only newly removed row is C=1, E=1, N=1, A=0, B=0, which contributes to neither CN(A) nor CN(B); neither A nor B loses a point. The ordering for N between the values for A and B remains untouched in all copies where C=0 or E=0, so the absolute values simply multiply: we obtain coordinates CN(A) = 4*2 = 8 and CN(B) = 4*1 = 4. Since nothing constrains the north-coordinate of C directly, it deviates from the maximum 8 only as a side effect: of the eight rows with C=1 and N=1, the first statement removes the two rows with B=1 and A=0 (E either 0 or 1), and the second statement additionally removes the row with E=1, A=0, B=0, so that CN(C) = 8 - 3 = 5. This positions C, about whose north-coordinate nothing is known, at a latitude between B and A, one possible realization.
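
The two-statement example can be recounted by brute force over all 2^5 = 32 rows, reproducing CN(A) = 8 and CN(B) = 4 and letting you check the side-effect value obtained for CN(C). The helper below is illustrative, not part of the implemented system:

```python
from itertools import product

# "A is north of B" (B ∧ N → A) and "A is east of C" (C ∧ E → A)
statements = [
    lambda row: not (row["B"] and row["N"]) or row["A"],
    lambda row: not (row["C"] and row["E"]) or row["A"],
]
VARS = ["N", "E", "A", "B", "C"]

def coordinate(obj, relation):
    # count the truth-table rows that satisfy both statements
    # and assign 1 to both obj and the relation variable
    count = 0
    for values in product([False, True], repeat=len(VARS)):
        row = dict(zip(VARS, values))
        if all(stmt(row) for stmt in statements) and row[obj] and row[relation]:
            count += 1
    return count

print(coordinate("A", "N"))  # 8
print(coordinate("B", "N"))  # 4
print(coordinate("C", "N"))
```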

Generally speaking, we can see that unrelated relations do not impact each other, as adding a statement about any relation Ri will always equally remove rows for every permutation of Rj=1, Rj=0 in the other relations Rj, i ≠ j. Also, adding a sentence "x ⋀ Ri → y" removes models where x=1 and Ri=1 but y=0, i.e., it reduces the Ri-coordinate of x while keeping that of y as before. Any z>y in Ri will also remain unaffected. Any w<x in Ri, however, will also be modified correctly, as all positions where w=1, x=1, and y=0 are removed, thus retaining the relation between w and x and positioning w below y as well. In underdetermined cases, the process chooses a position that tends to move an object towards the higher coordinates.
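
The claim that "x ⋀ Ri → y" lowers the coordinate of x while leaving that of y untouched can be checked directly on the small example (an illustrative brute-force helper, not the system's implementation): before adding "B ⋀ N → A", both A and B have north-coordinate 2; afterwards only B's drops.

```python
from itertools import product

VARS = ["N", "A", "B"]
stmt = [lambda row: not (row["B"] and row["N"]) or row["A"]]  # "B ∧ N → A"

def coordinate(obj, relation, statements):
    # count truth-table rows satisfying all statements with obj = relation = 1
    count = 0
    for values in product([False, True], repeat=len(VARS)):
        row = dict(zip(VARS, values))
        if all(s(row) for s in statements) and row[obj] and row[relation]:
            count += 1
    return count

# Adding the statement lowers the N-coordinate of B but leaves A's untouched:
print(coordinate("A", "N", []), coordinate("A", "N", stmt))  # 2 2
print(coordinate("B", "N", []), coordinate("B", "N", stmt))  # 2 1
```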

As this tiny example shows, truth tables very quickly get very large. The simple example of three regions and two directions already takes 2^5 = 32 rows. However, human cognition, too, is severely restricted in the number of items we can retain in working memory at a time, and employs sophisticated compression strategies. From a cognitive systems perspective, this is thus not a downside, but a limitation that we could expect a cognitively realistic model to have. To handle the task of reasoning and model counting efficiently, the implemented system uses a compressed format of the truth table -- logically, a special disjunctive normal form representation -- in which we leave out all non-models (rows where the truth table has a 0) and collect several rows into one by abbreviating positions that can be either 0 or 1 with an asterisk (*). The above table for B ⋀ N → A thus becomes: 0**, 100, 11*. An entry with *n* asterisks thus represents 2^n models, and model counting takes the asterisks into account accordingly. We get particularly good images for geographic purposes if we focus on the objects only, leaving out asterisks in the relations, which appear when incomplete information is provided.
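
Counting over the compressed representation can be sketched as follows (again an illustrative helper, not the system's implementation). Each '*' doubles the count unless the position is required to be 1, as when computing a coordinate:

```python
# Compressed truth-table rows over (N, A, B) for B ∧ N → A; '*' matches 0 or 1.
COMPRESSED = ["0**", "100", "11*"]
VARS = "NAB"

def count_models(patterns, require=()):
    """Count the models represented by compressed rows. Variables named in
    `require` must be 1 (e.g. the object and the relation whose coordinate
    we are computing); every remaining '*' doubles the count."""
    total = 0
    for pat in patterns:
        row = dict(zip(VARS, pat))
        if all(row[v] in ("1", "*") for v in require):
            free = sum(1 for v, c in row.items() if c == "*" and v not in require)
            total += 2 ** free
    return total

print(count_models(COMPRESSED))                # 7 models of B ∧ N → A in total
print(count_models(COMPRESSED, require="NA"))  # CN(A) = 2
print(count_models(COMPRESSED, require="NB"))  # CN(B) = 1
```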

In the Drawing experiment, the system generates drawings from natural language descriptions. In order to do this quickly – and to make sure realizable descriptions are generated – a simple drawing tool was developed that allows the user to quickly generate textual descriptions of realizable layouts by drawing them. These descriptions (not the drawings) are sent to the server, which applies the logical lateration method to the descriptions, creating what it "imagines" the layout to look like. You can try the experiment for yourself: create a layout and check the description. Send it to the server and compare the description of the original with the drawing generated by the system. Did the mechanism generate a correct realization of the description?

In the Mental Map experiment, we leveraged the system to demonstrate its use for cognitive science experiments. During an undergraduate research project, a student described his mental map of landmarks in Eugene in textual format. We created a map of all locations through logical lateration from his descriptions. As you can see, the generated map shows – apart from the three anchor points at the boundary, which we used to calculate the linear transformation – a significant fisheye-lens effect whose focal point, the university, is positioned with astonishing accuracy by the system.

For the Text Map experiment, the language was slightly expanded to allow real landmark names, such as *Eugene* or *Skinner's Butte*, instead of numbers, such as 1, 2, 3. This allows us to use the system for cognitive science experiments about survey knowledge. The system leverages a simple pseudo-natural-language parser implemented using David Beazley's PLY. The latest implementation allows the user to inspect the truth table (as in the example above, but with models shown as columns instead of rows, and leaving out all non-models, that is, rows with value 0) and see in detail how a coordinate in logical lateration is computed: in the table, click on a row for a relation, e.g., *north-south*, and an object, e.g., *Skinner's Butte*. The columns where both have an entry 1 are highlighted. If you count these columns (taking asterisks in the object rows into account as described above), you should arrive at the same value as calculated and indicated above the table. Note that for large examples, you may only see a part of the table.

For more information on this and other research or to contact me, please visit my Google Scholar page or Homepage.

© System by Hedda R. Schmidtke; Design by InfoGraphics Lab.