Graduation date: 1990
Under certain conditions, a neural network may be trained to perform a specific task by altering the weights of only a portion of the synapses. Specifically, it has been noted that certain three-layer feed-forward networks may be trained on certain tasks by adjusting only the synapses to the output unit. This paper investigates the conditions under which this hobbling of the process may be possible. The investigation assumes that the existence of a set of weights which performs a task with low error implies that the task is learnable. Thus an algorithm is developed which attempts to find such a set, given the weights at the hidden units as fixed. Success of the method is equated with the abilit...
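The setup described above, training only the synapses to the output unit while the hidden-unit weights stay fixed, can be sketched as follows. This is a minimal illustration under assumed details (random fixed hidden weights, tanh hidden units, a least-squares fit of the output weights), not the paper's exact algorithm:

```python
# Sketch: a three-layer feed-forward network where only the
# hidden-to-output weights are trained; the input-to-hidden weights
# are fixed at their random initial values and never updated.
import numpy as np

rng = np.random.default_rng(0)

# Toy task (assumed for illustration): XOR with 2 inputs,
# 8 hidden units, 1 output.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

W_hidden = rng.normal(size=(2, 8))  # fixed, not adjusted during training
b_hidden = rng.normal(size=8)

H = np.tanh(X @ W_hidden + b_hidden)  # hidden-unit activations

# Train only the output weights: least-squares fit of H onto y.
w_out, *_ = np.linalg.lstsq(H, y, rcond=None)

pred = H @ w_out
error = np.mean((pred - y) ** 2)
```

With enough random hidden units, the fixed hidden layer alone typically provides a representation in which a low-error set of output weights exists, which is the condition the paper's algorithm searches for.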
In this paper we present a foundational study on a constrained method that defines learning problems...
Abstract- We propose a novel learning algorithm to train networks with multi-layer linear-threshold ...
In this paper, we study supervised learning in neural networks. Unlike the common practice of ba...
It is widely believed that end-to-end training with the backpropagation algorithm is essential for l...
Performance metrics are a driving force in many fields of work today. The field of constructive neur...
This article extends neural networks to the case of an uncountable number of hidden units, in severa...
The universal approximation capability exhibited by one-hidden layer neural networks is explored to ...
A pruning method is presented that is applicable to a one hidden layer classification network, where...
A new method of pruning away hidden neurons in neural networks is presented in this paper. The hidde...
The number of required hidden units is statistically estimated for feedforward neural networks that ...
The performance of an Artificial Neural Network (ANN) strongly depends on its hidden layer architect...
The authors study neural network models in which the synaptic efficacies are restricted to have a pr...
When a large feedforward neural network is trained on a small training set, it typically performs po...
A linearly separable Boolean function is learned by a diluted perceptron with optimal stability. A d...