Abstract
Deep neural networks (NNs) have proven to be effective learning systems for supervised and unsupervised tasks. However, learning complex data representations with deep NNs can be difficult due to problems such as scarce data, exploding or vanishing gradients, high computational cost, and poor parameter initialization. Deep representation and transfer learning (RTL) can ease the learning of data representations by reusing transferable features learned by an NN model in a source domain and adapting the model to a new domain.
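The core idea stated in the abstract, reusing features learned in a source domain while adapting only part of the model to a target domain, can be sketched with a toy example. Everything below is illustrative and not from the article: a frozen "source-domain" feature extractor (here just a fixed random nonlinear projection standing in for pretrained layers) combined with a newly trained linear head.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "source-domain" feature extractor: its weights are
# assumed to have been learned already, so they stay frozen here.
W_frozen = rng.standard_normal((8, 16))

def features(x):
    # Fixed nonlinear features "transferred" from the source task.
    return np.tanh(x @ W_frozen)

# Toy target-domain regression task (illustrative data only).
X = rng.standard_normal((200, 8))
y = X.sum(axis=1)

# Transfer learning step: train only the new linear head on top of
# the frozen features; the extractor itself is never updated.
w = np.zeros(16)
b = 0.0
lr = 0.1

def mse():
    return float(np.mean((features(X) @ w + b - y) ** 2))

mse_before = mse()
for _ in range(1000):
    F = features(X)
    err = F @ w + b - y
    w -= lr * (F.T @ err) / len(X)   # gradient step on the head only
    b -= lr * err.mean()
mse_after = mse()
```

In a realistic setting the frozen part would be the early layers of a pretrained network and the head a new output layer sized for the target task; the pattern of freezing shared parameters and fitting only the new ones is the same.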
| Original language | English |
| --- | --- |
| Pages (from-to) | 464-465 |
| Number of pages | 2 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 32 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 4 Feb 2021 |
Keywords
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications
- Software
ASJC Scopus subject areas
- Software
- Artificial Intelligence
- Computer Networks and Communications
- Computer Science Applications