LEADER |
03015nma a2200661 u 4500 |
001 |
EB002049647 |
003 |
EBX01000000000000001193313 |
005 |
00000000000000.0 |
007 |
cr||||||||||||||||||||| |
008 |
220822 ||| eng |
020 |
|
|
|a 9783036508023
|
020 |
|
|
|a 9783036508030
|
100 |
1 |
|
|a Geiger, Bernhard
|
245 |
0 |
0 |
|a Information Bottleneck
|h Electronic resource
|b Theory and Applications in Deep Learning
|
260 |
|
|
|a Basel, Switzerland
|b MDPI - Multidisciplinary Digital Publishing Institute
|c 2021
|
300 |
|
|
|a 1 electronic resource (274 p.)
|
653 |
|
|
|a machine learning
|
653 |
|
|
|a information theory
|
653 |
|
|
|a conspicuous subset
|
653 |
|
|
|a hand-crafted priors
|
653 |
|
|
|a deep networks
|
653 |
|
|
|a semi-supervised classification
|
653 |
|
|
|a representation learning
|
653 |
|
|
|a deep learning
|
653 |
|
|
|a neural networks
|
653 |
|
|
|a learnable priors
|
653 |
|
|
|a variational inference
|
653 |
|
|
|a compression
|
653 |
|
|
|a deep neural networks
|
653 |
|
|
|a learnability
|
653 |
|
|
|a latent space representation
|
653 |
|
|
|a information bottleneck principle
|
653 |
|
|
|a classifier
|
653 |
|
|
|a Information technology industries / bicssc
|
653 |
|
|
|a information bottleneck
|
653 |
|
|
|a stochastic neural networks
|
653 |
|
|
|a ensemble
|
653 |
|
|
|a classification
|
653 |
|
|
|a decision tree
|
653 |
|
|
|a regularization methods
|
653 |
|
|
|a optimization
|
653 |
|
|
|a mutual information
|
653 |
|
|
|a information
|
653 |
|
|
|a regularization
|
653 |
|
|
|a bottleneck
|
700 |
1 |
|
|a Kubin, Gernot
|
041 |
0 |
7 |
|a eng
|2 ISO 639-2
|
989 |
|
|
|b DOAB
|a Directory of Open Access Books
|
500 |
|
|
|a Creative Commons Attribution 4.0 International (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/
|
028 |
5 |
0 |
|a 10.3390/books978-3-0365-0803-0
|
856 |
4 |
2 |
|u https://directory.doabooks.org/handle/20.500.12854/76429
|z DOAB: description of the publication
|
856 |
4 |
0 |
|u https://www.mdpi.com/books/pdfview/book/3864
|7 0
|x Publisher
|3 Volltext
|
082 |
0 |
|
|a 000
|
082 |
0 |
|
|a 600
|
520 |
|
|
|a The celebrated information bottleneck (IB) principle of Tishby et al. has recently enjoyed renewed attention due to its application in the area of deep learning. This collection investigates the IB principle in this new context. The individual chapters in this collection:
• provide novel insights into the functional properties of the IB;
• discuss the IB principle (and its derivatives) as an objective for training multi-layer machine learning structures such as neural networks and decision trees; and
• offer a new perspective on neural network learning via the lens of the IB framework.
Our collection thus contributes to a better understanding of the IB principle specifically for deep learning and, more generally, of information-theoretic cost functions in machine learning. This paves the way toward explainable artificial intelligence.
|
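For reference alongside the 520 abstract above: the IB objective it mentions is commonly written as the following Lagrangian (standard notation from the IB literature of Tishby et al., not drawn from this record; T denotes the learned bottleneck representation and \beta the trade-off parameter):

\min_{p(t \mid x)} \; I(X; T) - \beta \, I(T; Y)

where I(\cdot\,; \cdot) denotes mutual information and the minimization runs over stochastic encoders p(t \mid x); smaller \beta favors compressing the input X, while larger \beta favors preserving information about the target Y.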