Learning Theory: 19th Annual Conference on Learning Theory, COLT 2006, Pittsburgh, PA, USA, June 22-25, 2006, Proceedings

Bibliographic Details
Other Authors: Simon, Hans Ulrich (Editor), Lugosi, Gábor (Editor)
Format: eBook
Language: English
Published: Berlin, Heidelberg: Springer Berlin Heidelberg, 2006
Edition: 1st ed. 2006
Series: Lecture Notes in Artificial Intelligence
Collection: Springer eBooks 2005– (for collection details, see MPG.ReNa)
Table of Contents:
  • Aggregation and Sparsity Via ℓ1 Penalized Least Squares
  • A Randomized Online Learning Algorithm for Better Variance Control
  • Online Prediction and Reinforcement Learning I
  • Online Learning with Variable Stage Duration
  • Online Learning Meets Optimization in the Dual
  • Online Tracking of Linear Subspaces
  • Online Multitask Learning
  • Online Prediction and Reinforcement Learning II
  • The Shortest Path Problem Under Partial Monitoring
  • Tracking the Best Hyperplane with a Simple Budget Perceptron
  • Logarithmic Regret Algorithms for Online Convex Optimization
  • Online Variance Minimization
  • Online Prediction and Reinforcement Learning III
  • Online Learning with Constraints
  • Continuous Experts and the Binning Algorithm
  • Competing with Wild Prediction Rules
  • Learning Near-Optimal Policies with Bellman-Residual Minimization Based Fitted Policy Iteration and a Single Sample Path
  • Other Approaches
  • Ranking with a P-Norm Push
  • Subset Ranking Using Regression
  • Invited Presentations
  • Random Multivariate Search Trees
  • On Learning and Logic
  • Predictions as Statements and Decisions
  • Clustering, Un-, and Semisupervised Learning
  • A Sober Look at Clustering Stability
  • PAC Learning Axis-Aligned Mixtures of Gaussians with No Separation Assumption
  • Stable Transductive Learning
  • Uniform Convergence of Adaptive Graph-Based Regularization
  • Statistical Learning Theory
  • The Rademacher Complexity of Linear Transformation Classes
  • Function Classes That Approximate the Bayes Risk
  • Functional Classification with Margin Conditions
  • Significance and Recovery of Block Structures in Binary Matrices with Noise
  • Regularized Learning and Kernel Methods
  • Maximum Entropy Distribution Estimation with Generalized Regularization
  • Unifying Divergence Minimization and Statistical Inference Via Convex Duality
  • Mercer’s Theorem, Feature Maps, and Smoothing
  • Learning Bounds for Support Vector Machines with Learned Kernels
  • Query Learning and Teaching
  • On Optimal Learning Algorithms for Multiplicity Automata
  • Exact Learning Composed Classes with a Small Number of Mistakes
  • DNF Are Teachable in the Average Case
  • Teaching Randomized Learners
  • Inductive Inference
  • Memory-Limited U-Shaped Learning
  • On Learning Languages from Positive Data and a Limited Number of Short Counterexamples
  • Learning Rational Stochastic Languages
  • Parent Assignment Is Hard for the MDL, AIC, and NML Costs
  • Learning Algorithms and Limitations on Learning
  • Uniform-Distribution Learnability of Noisy Linear Threshold Functions with Restricted Focus of Attention
  • Discriminative Learning Can Succeed Where Generative Learning Fails
  • Improved Lower Bounds for Learning Intersections of Halfspaces
  • Efficient Learning Algorithms Yield Circuit Lower Bounds
  • Online Aggregation
  • Optimal Oracle Inequality for Aggregation of Classifiers Under Low Noise Condition
  • Active Sampling for Multiple Output Identification
  • Improving Random Projections Using Marginal Information
  • Open Problems
  • Efficient Algorithms for General Active Learning
  • Can Entropic Regularization Be Replaced by Squared Euclidean Distance Plus Additional Linear Constraints?