In this paper, we review recent approaches for explaining concepts in neural networks. Concepts can act as a natural link between learning and reasoning: once the concepts a neural learning system uses have been identified, one can integrate them with a reasoning system for inference, or use a reasoning system to act upon them and thereby improve or enhance the learning system. Conversely, concept knowledge can not only be extracted from neural networks but also be inserted into neural network architectures. Since integrating learning and reasoning is at the core of neuro-symbolic AI, the insights gained from this survey can serve as an important step towards realizing neuro-symbolic AI based on explainable concepts.
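To make the learning-reasoning link concrete, the following is a minimal, hypothetical sketch in the spirit of concept-bottleneck models: a neural network predicts human-interpretable concepts, and a small symbolic rule base then reasons over those extracted concepts to produce a decision. All names here (ConceptBottleneck, CONCEPTS, the toy rule) are illustrative assumptions, not taken from any specific system covered in the survey.

```python
# Illustrative sketch only: concepts as the interface between a neural learner
# and a symbolic reasoner. Names and the rule base are hypothetical.
import torch
import torch.nn as nn

CONCEPTS = ["has_wings", "has_beak", "can_fly"]  # assumed concept vocabulary

class ConceptBottleneck(nn.Module):
    """Maps raw inputs to concept activations (the 'learning' side)."""
    def __init__(self, in_dim: int, n_concepts: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_concepts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.encoder(x))  # concept probabilities in [0, 1]

def symbolic_reasoner(concept_probs: torch.Tensor, threshold: float = 0.5) -> str:
    """A toy rule base (the 'reasoning' side) acting on extracted concepts."""
    active = {c for c, p in zip(CONCEPTS, concept_probs.tolist()) if p > threshold}
    # Example rule: has_wings AND has_beak -> bird
    return "bird" if {"has_wings", "has_beak"} <= active else "not_bird"

if __name__ == "__main__":
    model = ConceptBottleneck(in_dim=10, n_concepts=len(CONCEPTS))
    x = torch.randn(1, 10)          # stand-in for a real input
    concepts = model(x)[0]          # extracted concept activations
    print(dict(zip(CONCEPTS, concepts.tolist())))
    print("decision:", symbolic_reasoner(concepts))
```

The same interface also works in the other direction discussed above: known concept rules can constrain or supervise the concept layer, which is one way concept knowledge is inserted into a neural architecture.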