This website presents experiments conducted with a novel Vector Symbolic Architecture mechanism called logical lateration. Logical lateration builds on a logical language with an analogous semantics, and it allows the construction of intelligent systems with imagery and (rudimentary) access consciousness.
Publications:

  • H. R. Schmidtke. Logical lateration – a cognitive systems experiment towards a new approach to the grounding problem. Cognitive Systems Research, 52:896–908, 2018. (DOI)
  • H. R. Schmidtke. Logical rotation with the Activation Bit Vector Machine. Procedia Computer Science, 169:568–577, 2020.
  • H. R. Schmidtke. Textmap: A general purpose visualization system. Cognitive Systems Research, 59:27–36, 2020. (DOI)
  • H. R. Schmidtke. The TextMap general purpose visualization system: Core mechanism and case study. In A. Samsonovich, editor, Biologically Inspired Cognitive Architectures 2019, volume 948 of Advances in Intelligent Systems and Computing, pages 455–464, Cham, Switzerland, 2020. Springer.
  • H. R. Schmidtke. Multi-modal actuation with the Activation Bit Vector Machine. Cognitive Systems Research, 66:162–175, 2021. (DOI, PDF available from 12/22)
  • H. R. Schmidtke. Reasoning and learning with context logic. Journal of Reliable Intelligent Environments, 2021. (DOI)
  • H. R. Schmidtke. Towards a fuzzy context logic. In Fuzzy Systems. IntechOpen, 2021.
  • J. D. Kralik, J. H. Lee, P. S. Rosenbloom, P. C. Jackson, S. L. Epstein, O. J. Romero, R. Sanz, O. Larue, H. R. Schmidtke, S. W. Lee, and K. McGreggor. Metacognition for a common model of cognition. Procedia Computer Science, 145:730–739, 2018.
Full list

Research

Connecting logic – or language – and perception is a key step towards understanding human reasoning. A key obstacle has been the symbol grounding problem: the question of how the symbols of human reasoning, words for abstract ideas we know only through inference and language, can be grounded in perception. For instance, how can we mentally build a map after being told about spatial relations between locations?

A key component for solving the symbol grounding problem was the development of Vector Symbolic Architectures (VSA) by Kanerva and his colleagues since the 1990s. VSA are a plausible abstraction of neuronal architectures: over a simplified representation of the brain as a large binary or phasor vector, they provide ways to represent conventional symbolic data types, such as lists, and algorithms, such as iteration through a list. Over the years, many aspects of animal and human cognition, from bee navigation (Kleyko et al.) to analogy formation via graph matching (Gayler), have been implemented in this way. However, these implementations did not resolve the rift between what the symbolic structures represent and what an agent can perceive, that is, the actual semiotic relation of meaning.
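To make the idea of symbolic data types over large binary vectors concrete, here is a minimal, self-contained sketch of the two standard VSA operations on binary hypervectors: XOR binding, which associates a role with a filler, and majority-rule bundling, which superposes several bound pairs into one vector. The dimensionality, tie-breaking rule, and record contents are illustrative choices, not those of any particular system discussed on this page.

```python
import random

D = 10_000  # hypervector dimensionality (illustrative choice)

def rand_hv():
    """Random binary hypervector; nearly orthogonal to any other by chance."""
    return [random.randint(0, 1) for _ in range(D)]

def bind(a, b):
    """XOR binding: self-inverse, so bind(bind(a, b), b) recovers a."""
    return [x ^ y for x, y in zip(a, b)]

def bundle(vs):
    """Majority-rule bundling (ties broken toward 0): the result stays
    measurably similar to every bundled input."""
    return [1 if sum(bits) * 2 > len(vs) else 0 for bits in zip(*vs)]

def hamming(a, b):
    """Normalized Hamming distance: about 0.5 for unrelated vectors."""
    return sum(x != y for x, y in zip(a, b)) / D

# Encode a tiny record {colour: red, shape: circle} as a single vector.
colour, shape, red, circle = (rand_hv() for _ in range(4))
record = bundle([bind(colour, red), bind(shape, circle)])

# Unbinding the record with a role vector yields a noisy copy of its filler.
probe = bind(record, colour)
print(hamming(probe, red))     # well below 0.5: recognizably 'red'
print(hamming(probe, circle))  # near 0.5: unrelated
```

Because binding is self-inverse and bundling preserves similarity, a noisy unbinding result can be cleaned up by comparing it against a memory of known vectors, which is how list-like structures are traversed in VSA.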

The Context Logic language, a logical language similar to natural language, discovered and studied by the author of this website, has a semantics that can be implemented on a VSA. Moreover, this semantics gives rise to a unique, inherent analogous semantics, such as an image semantics. This means that for any formula in Context Logic, there is an image or image sequence that arises automatically as its meaning. With the language itself being close to natural language, any (unambiguous) natural language text thus corresponds to an image or image sequence, which moreover can be extracted by neuron-like structures.

Systems and Experiments

In the log2img application, the system generates drawings from triples provided in CSV format.
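The actual triple format and relation vocabulary of log2img are not specified here; as a purely illustrative sketch, assume CSV rows of the form `A,relation,B` with a small set of directional relations, from which a drawing can be laid out by assigning each named place a grid coordinate. The relation names and offsets below are hypothetical.

```python
import csv
import io

# Hypothetical relation vocabulary: each relation maps to a unit offset
# applied to the first argument relative to the second.
OFFSETS = {"north-of": (0, 1), "south-of": (0, -1),
           "east-of": (1, 0), "west-of": (-1, 0)}

def layout(csv_text):
    """Assign 2D coordinates to places mentioned in relational triples."""
    coords = {}
    for a, rel, b in csv.reader(io.StringIO(csv_text)):
        dx, dy = OFFSETS[rel]
        if b not in coords:
            coords[b] = (0, 0)          # anchor unseen reference points
        bx, by = coords[b]
        coords.setdefault(a, (bx + dx, by + dy))
    return coords

triples = "church,north-of,market\nriver,east-of,market\n"
print(layout(triples))  # {'market': (0, 0), 'church': (0, 1), 'river': (1, 0)}
```

The resulting coordinate dictionary could then be handed to any plotting routine to produce the drawing; the sketch only shows the triple-to-geometry step.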

In the log2mov application, the system generates sequences of images upon reading a set of Context Logic statements.

Contact

For more information on this and other research or to contact the developer, please visit the Google Scholar page, Homepage, or LinkedIn page.