LDR    05321nmm a2200337 u 4500
001    EB000684905
003    EBX01000000000000000537987
005    00000000000000.0
007    cr|||||||||||||||||||||
008    140122 ||| eng
020    $a 9783662002377
041 07 $a eng $2 ISO 639-2
082 0  $a 003.54
100 1  $a Wolfowitz, Jacob
245 00 $a Coding Theorems of Information Theory $h Electronic resource $c by Jacob Wolfowitz
250    $a 2nd ed. 1964
260    $a Berlin, Heidelberg $b Springer Berlin Heidelberg $c 1964
300    $a 2 illus $b online resource
490 0  $a Ergebnisse der Mathematik und ihrer Grenzgebiete. 2. Folge, A Series of Modern Surveys in Mathematics
505 0  $a 1. Heuristic Introduction to the Discrete Memoryless Channel -- 2. Combinatorial Preliminaries -- 2.1. Generated sequences -- 2.2. Properties of the entropy function -- 3. The Discrete Memoryless Channel -- 3.1. Description of the channel -- 3.2. A coding theorem -- 3.3. The strong converse -- 3.4. Strong converse for the binary symmetric channel -- 3.5. The finite-state channel with state calculable by both sender and receiver -- 3.6. The finite-state channel with state calculable only by the sender -- 4. Compound Channels -- 4.1. Introduction -- 4.2. The canonical channel -- 4.3. A coding theorem -- 4.4. Strong converse -- 4.5. Compound d.m.c. with c.p.f. known only to the receiver or only to the sender -- 4.6. Channels where the c.p.f. for each letter is stochastically determined -- 4.7. Proof of Theorem 4.6.4 -- 4.8. The d.m.c. with feedback -- 4.9. Strong converse for the d.m.c. with feedback -- 5. The Discrete Finite-Memory Channel -- 5.1. The discrete channel -- 5.2. The discrete finite-memory channel -- 5.3. The coding theorem for the d.f.m.c -- 5.4. Strong converse of the coding theorem for the d.f.m.c -- 5.5. Rapidity of approach to C in the d.f.m.c -- 5.6. Discussion of the d.f.m.c -- 6. Discrete Channels with a Past History -- 6.1. Preliminary discussion -- 6.2. Channels with a past history -- 6.3. Applicability of the coding theorems of Section 7.2 to channels with a past history -- 6.4. A channel with infinite duration of memory of previously transmitted letters -- 6.5. A channel with infinite duration of memory of previously received letters -- 6.6. Indecomposable channels -- 6.7. The power of the memory -- 7. General Discrete Channels -- 7.1. Alternative description of the general discrete channel -- 7.2. The method of maximal codes -- 7.3. The method of random codes -- 7.4. Weak converses -- 7.5. Digression on the d.m.c -- 7.6. Discussion of the foregoing -- 7.7. Channels without a capacity -- 8. The Semi-Continuous Memoryless Channel -- 8.1. Introduction -- 8.2. Strong converse of the coding theorem for the s.c.m.c -- 8.3. Proof of Lemma 8.2.1 -- 8.4. The strong converse with (math) in the exponent -- 9. Continuous Channels with Additive Gaussian Noise -- 9.1. A continuous memoryless channel with additive Gaussian noise -- 9.2. Message sequences within a suitable sphere -- 9.3. Message sequences on the periphery of the sphere or within a shell adjacent to the boundary -- 9.4. Another proof of Theorems 9.2.1 and 9.2.2 -- 10. Mathematical Miscellanea -- 10.1. Introduction -- 10.2. The asymptotic equipartition property -- 10.3. Admissibility of an ergodic input for a discrete finite-memory channel -- 11. Group Codes. Sequential Decoding -- 11.1. Group Codes -- 11.2. Canonical form of the matrix M -- 11.3. Sliding parity check codes -- 11.4. Sequential decoding -- References -- List of Channels Studied or Mentioned
520    $a The imminent exhaustion of the first printing of this monograph and the kind willingness of the publishers have presented me with the opportunity to correct a few minor misprints and to make a number of additions to the first edition. Some of these additions are in the form of remarks scattered throughout the monograph. The principal additions are Chapter 11, most of Section 6.6 (including Theorem 6.6.2), Sections 6.7, 7.7, and 4.9. It has been impossible to include all the novel and interesting results which have appeared in the last three years. I hope to include these in a new edition or a new monograph, to be written in a few years when the main new currents of research are more clearly visible. There are now several instances where, in the first edition, only a weak converse was proved, and, in the present edition, the proof of a strong converse is given. Where the proof of the weaker theorem employs a method of general application and interest it has been retained and is given along with the proof of the stronger result. This is wholly in accord with the purpose of the present monograph, which is not only to prove the principal coding theorems but also, while doing so, to acquaint the reader with the most fruitful and interesting ideas and methods used in the theory. I am indebted to Dr
653    $a Coding theory
653    $a Mathematics, general
653    $a Information theory
653    $a Coding and Information Theory
653    $a Mathematics
710 2  $a SpringerLink (Online service)
856    $u https://doi.org/10.1007/978-3-662-00237-7?nosfx=y $x Publisher $3 Full text
989    $b SBA $a Springer Book Archives -2004