We made a first approach to the effect of datasets on neural networks in [1], where the size of the dataset is reduced to speed up the training process while preserving the performance of the neural network. Furthermore, we recently took part in a European FET-OPEN project [2] devoted to the development of explainable artificial intelligence inspired by the way the human brain explains its memories; one of the research lines led by our team in that project is precisely knowledge representation and explainability of neural networks.
Recently, the extraction of knowledge from artificial neural networks has received considerable attention from the research community, and the problem is being attacked from different angles. In this research line, we plan to deploy a new topology-based framework to establish a relationship between the internal knowledge representation and “knowledge primitives”. Following this aim, the main challenges we want to address are to preserve ontological knowledge representation and axiomatization, to provide enough levels of knowledge/information abstraction, and to ground acquired knowledge in both real-world concepts and physical or geometrical properties. Specific goals are listed below:
In [3] we provided an effective proof of the universal approximation theorem through a constructive method for finding the weights of a two-hidden-layer feed-forward network that approximates a given continuous function between two triangulable metric spaces. The method depends only on the desired level of approximation to the given function. Our approach is based on the classical result from algebraic topology known as the simplicial approximation theorem. Roughly speaking, our result rests on two observations: first, triangulable spaces can be “modelled” using simplicial complexes, and a continuous function between two triangulable spaces can be approximated by a simplicial map between simplicial complexes; second, a simplicial map between simplicial complexes can be “modelled” as a two-hidden-layer feed-forward network.
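To make the first observation concrete, the following sketch (not the construction of [3], just an illustrative example with assumed geometry) shows how a simplicial map is determined by a vertex assignment alone: on each simplex it extends linearly via barycentric coordinates, so evaluating it is a piecewise-linear operation, the same kind of computation a feed-forward network performs.

```python
import numpy as np

# Vertices of a 2-simplex (triangle) K in the plane -- assumed example data.
K = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

# Images of those vertices under a vertex map into a target complex L.
# The simplicial map is the unique piecewise-linear extension of this table.
L_images = np.array([[0.0, 0.0],
                     [2.0, 1.0],
                     [1.0, 3.0]])

def barycentric(p, simplex):
    """Barycentric coordinates of a point p with respect to a 2-simplex."""
    # Solve p - v0 = l1*(v1 - v0) + l2*(v2 - v0) for (l1, l2).
    T = np.column_stack([simplex[1] - simplex[0], simplex[2] - simplex[0]])
    l12 = np.linalg.solve(T, p - simplex[0])
    return np.array([1.0 - l12.sum(), l12[0], l12[1]])

def simplicial_map(p):
    """Evaluate the linear extension of the vertex map at a point p in |K|."""
    lam = barycentric(p, K)
    return lam @ L_images  # convex combination of the vertex images

# The barycenter of K is sent to the average of the vertex images.
center = K.mean(axis=0)
print(simplicial_map(center))  # ≈ [1.0, 1.3333]
```

The two linear steps (computing barycentric coordinates, then combining vertex images) hint at why two hidden layers suffice to realize such a map as a network.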
In this research line, we plan to develop a complete framework of neural networks entirely based on simplicial complexes. To do so, two main blocks must be studied and developed: