Coding Theorems of Information Theory


Main Author: Wolfowitz, Jacob
Corporate Author: SpringerLink (Online service)
Format: eBook
Language: English
Published: Berlin, Heidelberg : Springer Berlin Heidelberg, 1964
Edition: 2nd ed. 1964
Series:Ergebnisse der Mathematik und ihrer Grenzgebiete. 2. Folge, A Series of Modern Surveys in Mathematics
Subjects: Coding theory; Information theory; Coding and Information Theory; Mathematics
Online Access: https://doi.org/10.1007/978-3-662-00237-7
Collection: Springer Book Archives -2004 - Collection details see MPG.ReNa
LEADER 05336nmm a2200349 u 4500
001 EB000684905
003 EBX01000000000000000537987
005 00000000000000.0
007 cr|||||||||||||||||||||
008 140122 ||| eng
020 |a 9783662002377 
100 1 |a Wolfowitz, Jacob 
245 0 0 |a Coding Theorems of Information Theory  |h Elektronische Ressource  |c by Jacob Wolfowitz 
250 |a 2nd ed. 1964 
260 |a Berlin, Heidelberg  |b Springer Berlin Heidelberg  |c 1964 
300 |a 2 illus  |b online resource 
505 0 |a 1. Heuristic Introduction to the Discrete Memoryless Channel -- 2. Combinatorial Preliminaries -- 2.1. Generated sequences -- 2.2. Properties of the entropy function -- 3. The Discrete Memoryless Channel -- 3.1. Description of the channel -- 3.2. A coding theorem -- 3.3. The strong converse -- 3.4. Strong converse for the binary symmetric channel -- 3.5. The finite-state channel with state calculable by both sender and receiver -- 3.6. The finite-state channel with state calculable only by the sender -- 4. Compound Channels -- 4.1. Introduction -- 4.2. The canonical channel -- 4.3. A coding theorem -- 4.4. Strong converse -- 4.5. Compound d.m.c. with c.p.f. known only to the receiver or only to the sender -- 4.6. Channels where the c.p.f. for each letter is stochastically determined -- 4.7. Proof of Theorem 4.6.4 -- 4.8. The d.m.c. with feedback -- 4.9. Strong converse for the d.m.c. with feedback -- 5. The Discrete Finite-Memory Channel -- 5.1. The discrete channel --  
505 0 |a 5.2. The discrete finite-memory channel -- 5.3. The coding theorem for the d.f.m.c -- 5.4. Strong converse of the coding theorem for the d.f.m.c -- 5.5. Rapidity of approach to C in the d.f.m.c -- 5.6. Discussion of the d.f.m.c -- 6. Discrete Channels with a Past History -- 6.1. Preliminary discussion -- 6.2. Channels with a past history -- 6.3. Applicability of the coding theorems of Section 7.2 to channels with a past history -- 6.4. A channel with infinite duration of memory of previously transmitted letters -- 6.5. A channel with infinite duration of memory of previously received letters -- 6.6. Indecomposable channels -- 6.7. The power of the memory -- 7. General Discrete Channels -- 7.1. Alternative description of the general discrete channel -- 7.2. The method of maximal codes -- 7.3. The method of random codes -- 7.4. Weak converses -- 7.5. Digression on the d.m.c -- 7.6. Discussion of the foregoing -- 7.7. Channels without a capacity --  
505 0 |a 8. The Semi-Continuous Memoryless Channel -- 8.1. Introduction -- 8.2. Strong converse of the coding theorem for the s.c.m.c -- 8.3. Proof of Lemma 8.2.1 -- 8.4. The strong converse with (math) in the exponent -- 9. Continuous Channels with Additive Gaussian Noise -- 9.1. A continuous memoryless channel with additive Gaussian noise -- 9.2. Message sequences within a suitable sphere -- 9.3. Message sequences on the periphery of the sphere or within a shell adjacent to the boundary -- 9.4. Another proof of Theorems 9.2.1 and 9.2.2 -- 10. Mathematical Miscellanea -- 10.1. Introduction -- 10.2. The asymptotic equipartition property -- 10.3. Admissibility of an ergodic input for a discrete finite-memory channel -- 11. Group Codes. Sequential Decoding -- 11.1. Group Codes -- 11.2. Canonical form of the matrix M -- 11.3. Sliding parity check codes -- 11.4. Sequential decoding -- References -- List of Channels Studied or Mentioned 
653 |a Coding theory 
653 |a Mathematics, general 
653 |a Information theory 
653 |a Coding and Information Theory 
653 |a Mathematics 
710 2 |a SpringerLink (Online service) 
041 0 7 |a eng  |2 ISO 639-2 
989 |b SBA  |a Springer Book Archives -2004 
490 0 |a Ergebnisse der Mathematik und ihrer Grenzgebiete. 2. Folge, A Series of Modern Surveys in Mathematics 
856 |u https://doi.org/10.1007/978-3-662-00237-7?nosfx=y  |x Verlag  |3 Volltext 
082 0 |a 003.54 
520 |a The imminent exhaustion of the first printing of this monograph and the kind willingness of the publishers have presented me with the opportunity to correct a few minor misprints and to make a number of additions to the first edition. Some of these additions are in the form of remarks scattered throughout the monograph. The principal additions are Chapter 11, most of Section 6.6 (including Theorem 6.6.2), Sections 6.7, 7.7, and 4.9. It has been impossible to include all the novel and interesting results which have appeared in the last three years. I hope to include these in a new edition or a new monograph, to be written in a few years when the main new currents of research are more clearly visible. There are now several instances where, in the first edition, only a weak converse was proved, and, in the present edition, the proof of a strong converse is given. Where the proof of the weaker theorem employs a method of general application and interest it has been retained and is given along with the proof of the stronger result. This is wholly in accord with the purpose of the present monograph, which is not only to prove the principal coding theorems but also, while doing so, to acquaint the reader with the most fruitful and interesting ideas and methods used in the theory. I am indebted to Dr