

Book Information

First Course in Machine Learning, 2/Ed
Price: 55,000 KRW
Author: Rogers
Category: Artificial Intelligence
Book Type: Foreign (imported) book
Publisher: Taylor & Francis
Language: English
Publication Date: 2016-08
Pages: 427
ISBN: 9781498738484
Purchase Information: Available from both online and offline bookstores.


  • Book Details

    Detailed Description

    Linear Modelling: A Least Squares Approach
    LINEAR MODELLING
    Defining the model
    Modelling assumptions
    Defining a good model
    The least squares solution—a worked example
    Worked example
    Least squares fit to the Olympic data
    Summary
    MAKING PREDICTIONS
    A second Olympic dataset
    Summary
    VECTOR/MATRIX NOTATION
    Example
    Numerical example
    Making predictions
    Summary
    NON-LINEAR RESPONSE FROM A LINEAR MODEL
    GENERALISATION AND OVER-FITTING
    Validation data
    Cross-validation
    Computational scaling of K-fold cross-validation
    REGULARISED LEAST SQUARES
    EXERCISES
    FURTHER READING
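
    To give a flavour of the opening chapter, here is a minimal sketch of the
    closed-form least squares solution w = (X^T X)^{-1} X^T t. This is
    illustrative code, not the book's; the synthetic data and noise level are
    assumptions.

        import numpy as np

        # Minimal least squares sketch (synthetic data; not the book's Olympic dataset).
        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 30)
        t = 2.0 - 1.5 * x + rng.normal(scale=0.1, size=x.shape)  # assumed linear trend plus noise

        # Design matrix for a first-order polynomial: columns [1, x].
        X = np.column_stack([np.ones_like(x), x])

        # Closed-form solution w = (X^T X)^{-1} X^T t, via a linear solve
        # rather than an explicit matrix inverse.
        w = np.linalg.solve(X.T @ X, X.T @ t)
        print("intercept and slope estimates:", w)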

    Linear Modelling: A Maximum Likelihood Approach
    ERRORS AS NOISE
    Thinking generatively
    RANDOM VARIABLES AND PROBABILITY
    Random variables
    Probability and distributions
    Adding probabilities
    Conditional probabilities
    Joint probabilities
    Marginalisation
    Aside—Bayes' rule
    Expectations
    POPULAR DISCRETE DISTRIBUTIONS
    Bernoulli distribution
    Binomial distribution
    Multinomial distribution
    CONTINUOUS RANDOM VARIABLES—DENSITY FUNCTIONS
    POPULAR CONTINUOUS DENSITY FUNCTIONS
    The uniform density function
    The beta density function
    The Gaussian density function
    Multivariate Gaussian
    SUMMARY
    THINKING GENERATIVELY...CONTINUED
    LIKELIHOOD
    Dataset likelihood
    Maximum likelihood
    Characteristics of the maximum likelihood solution
    Maximum likelihood favours complex models
    THE BIAS-VARIANCE TRADE-OFF
    Summary
    EFFECT OF NOISE ON PARAMETER ESTIMATES
    Uncertainty in estimates
    Comparison with empirical values
    Variability in model parameters—Olympic data
    VARIABILITY IN PREDICTIONS
    Predictive variability—an example
    Expected values of the estimators
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
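
    A hedged sketch of the chapter's central result: under a Gaussian noise
    model, the maximum likelihood weights coincide with the least squares
    solution, and the maximum likelihood noise variance is the mean squared
    residual. The data below are synthetic assumptions, not the book's.

        import numpy as np

        # Maximum likelihood under Gaussian noise (illustrative, synthetic data):
        # w_ML equals the least squares solution, and the ML noise variance is
        # sigma^2 = (1/N) * sum_n (t_n - w^T x_n)^2.
        rng = np.random.default_rng(1)
        x = np.linspace(0, 1, 50)
        X = np.column_stack([np.ones_like(x), x])
        t = 1.0 + 0.5 * x + rng.normal(scale=0.2, size=x.shape)

        w_ml = np.linalg.solve(X.T @ X, X.T @ t)
        sigma2_ml = np.mean((t - X @ w_ml) ** 2)
        print("w_ML:", w_ml, " sigma^2_ML:", sigma2_ml)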

    The Bayesian Approach to Machine Learning
    A COIN GAME
    Counting heads
    The Bayesian way
    THE EXACT POSTERIOR
    THE THREE SCENARIOS
    No prior knowledge
    The fair coin scenario
    A biased coin
    The three scenarios—a summary
    Adding more data
    MARGINAL LIKELIHOODS
    Model comparison with the marginal likelihood
    HYPERPARAMETERS
    GRAPHICAL MODELS
    SUMMARY
    A BAYESIAN TREATMENT OF THE OLYMPIC 100m DATA
    The model
    The likelihood
    The prior
    The posterior
    A first-order polynomial
    Making predictions
    MARGINAL LIKELIHOOD FOR POLYNOMIAL MODEL
    ORDER SELECTION
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
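
    The coin game runs on the Beta-Binomial conjugate pair, for which the exact
    posterior is available in closed form. A minimal sketch, with the prior and
    toss counts chosen purely for illustration:

        # Conjugate coin-game sketch (prior and toss counts are illustrative assumptions):
        # with a Beta(alpha, beta) prior on the probability of heads r, and y heads
        # observed in N tosses, the exact posterior is Beta(alpha + y, beta + N - y).
        alpha, beta = 2.0, 2.0            # assumed prior, mildly favouring a fair coin
        y, N = 7, 10                      # assumed observations
        alpha_post, beta_post = alpha + y, beta + N - y
        posterior_mean = alpha_post / (alpha_post + beta_post)
        print(f"posterior: Beta({alpha_post:g}, {beta_post:g}), mean {posterior_mean:.3f}")
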
    Bayesian Inference
    NON-CONJUGATE MODELS
    BINARY RESPONSES
    A model for binary responses
    A POINT ESTIMATE—THE MAP SOLUTION
    THE LAPLACE APPROXIMATION
    Laplace approximation example: Approximating a gamma density
    Laplace approximation for the binary response model
    SAMPLING TECHNIQUES
    Playing darts
    The Metropolis–Hastings algorithm
    The art of sampling
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
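
    A minimal sketch of the random-walk Metropolis–Hastings sampler covered in
    this chapter; the target density and proposal width are illustrative
    assumptions, not the book's example:

        import numpy as np

        # Random-walk Metropolis-Hastings sketch (illustrative): the target density
        # and proposal width are assumptions standing in for a non-conjugate posterior.
        rng = np.random.default_rng(0)

        def log_target(theta):
            return -0.5 * theta ** 2      # unnormalised log density of a standard Gaussian

        samples = np.empty(5000)
        theta = 0.0
        for s in range(samples.size):
            proposal = theta + rng.normal(scale=1.0)   # symmetric random-walk proposal
            if np.log(rng.uniform()) < log_target(proposal) - log_target(theta):
                theta = proposal                       # accept with probability min(1, ratio)
            samples[s] = theta

        print("sample mean after burn-in:", samples[1000:].mean())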

    Classification
    THE GENERAL PROBLEM
    PROBABILISTIC CLASSIFIERS
    The Bayes classifier
    Likelihood—class-conditional distributions
    Prior class distribution
    Example—Gaussian class-conditionals
    Making predictions
    The naive-Bayes assumption
    Example—classifying text
    Smoothing
    Logistic regression
    Motivation
    Non-linear decision functions
    Non-parametric models—the Gaussian process
    NON-PROBABILISTIC CLASSIFIERS
    K-nearest neighbours
    Choosing K
    Support vector machines and other kernel methods
    The margin
    Maximising the margin
    Making predictions
    Support vectors
    Soft margins
    Kernels
    Summary
    ASSESSING CLASSIFICATION PERFORMANCE
    Accuracy—0/1 loss
    Sensitivity and specificity
    The area under the ROC curve
    Confusion matrices
    DISCRIMINATIVE AND GENERATIVE CLASSIFIERS
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
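
    Of the classifiers listed above, K-nearest neighbours is the simplest to
    sketch. An illustrative toy example (K and the two-class data are
    assumptions):

        import numpy as np

        # K-nearest-neighbours sketch (illustrative; K and the toy data are assumptions).
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, size=(30, 2)),    # class 0 cloud
                       rng.normal(3.0, 1.0, size=(30, 2))])   # class 1 cloud
        y = np.repeat([0, 1], 30)

        def knn_predict(x_new, K=5):
            distances = np.linalg.norm(X - x_new, axis=1)
            nearest = y[np.argsort(distances)[:K]]
            return np.bincount(nearest).argmax()              # majority vote over K neighbours

        print(knn_predict(np.array([0.0, 0.0])), knn_predict(np.array([3.0, 3.0])))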

    Clustering
    THE GENERAL PROBLEM
    K-MEANS CLUSTERING
    Choosing the number of clusters
    Where K-means fails
    Kernelised K-means
    Summary
    MIXTURE MODELS
    A generative process
    Mixture model likelihood
    The EM algorithm
    Updating πk
    Updating μk
    Updating Σk
    Updating qnk
    Some intuition
    Example
    EM finds local optima
    Choosing the number of components
    Other forms of mixture component
    MAP estimates with EM
    Bayesian mixture models
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
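
    A bare-bones sketch of the K-means loop described in this chapter,
    alternating assignment and mean-update steps until nothing changes; K, the
    toy data, and the initialisation are illustrative assumptions:

        import numpy as np

        # Bare-bones K-means sketch (illustrative; K, data, and initialisation are assumptions).
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 2.0, 4.0)])
        K = 3
        means = X[rng.choice(len(X), size=K, replace=False)]  # initialise at random data points

        for _ in range(100):
            # Assignment step: each point joins its nearest mean.
            z = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2).argmin(axis=1)
            # Update step: each mean moves to the average of its assigned points.
            new_means = np.array([X[z == k].mean(axis=0) if np.any(z == k) else means[k]
                                  for k in range(K)])
            if np.allclose(new_means, means):
                break                                         # assignments have stabilised
            means = new_means

        print("cluster means:\n", means)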

    Principal Components Analysis and Latent Variable Models
    THE GENERAL PROBLEM
    Variance as a proxy for interest
    PRINCIPAL COMPONENTS ANALYSIS
    Choosing D
    Limitations of PCA
    LATENT VARIABLE MODELS
    Mixture models as latent variable models
    Summary
    VARIATIONAL BAYES
    Choosing Q(θ)
    Optimising the bound
    A PROBABILISTIC MODEL FOR PCA
    Qτ(τ)
    Qxn(xn)
    Qwm(wm)
    The required expectations
    The algorithm
    An example
    MISSING VALUES
    Missing values as latent variables
    Predicting missing values
    NON-REAL-VALUED DATA
    Probit PPCA
    Visualising parliamentary data
    Aside—relationship to classification
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
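
    An illustrative PCA sketch (not the book's code): centre the data, take the
    eigendecomposition of the sample covariance, and project onto the top-D
    eigenvectors. D and the correlated toy data are assumptions:

        import numpy as np

        # PCA sketch: project centred data onto the top-D eigenvectors of the
        # sample covariance matrix. D and the toy data are assumptions.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 5))  # correlated toy data
        Xc = X - X.mean(axis=0)                                  # centre each dimension

        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        order = np.argsort(evals)[::-1]                          # eigh returns ascending eigenvalues
        D = 2
        W = evecs[:, order[:D]]                                  # top-D principal directions
        Z = Xc @ W                                               # low-dimensional projection
        print("fraction of variance captured:", evals[order[:D]].sum() / evals.sum())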

    Advanced Topics

    Gaussian Processes
    PROLOGUE—NON-PARAMETRIC MODELS
    GAUSSIAN PROCESS REGRESSION
    The Gaussian process prior
    Noise-free regression
    Noisy regression
    Summary
    Noisy regression—an alternative route
    Alternative covariance functions
    Linear
    Polynomial
    Neural network
    ARD
    Composite covariance functions
    Summary
    GAUSSIAN PROCESS CLASSIFICATION
    A classification likelihood
    A classification roadmap
    The point estimate approximation
    Propagating uncertainty through the sigmoid
    The Laplace approximation
    Summary
    HYPERPARAMETER OPTIMISATION
    EXTENSIONS
    Non-zero mean
    Multiclass classification
    Other likelihood functions and models
    Other inference schemes
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
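
    A minimal Gaussian process regression sketch using an RBF covariance and the
    standard noisy-regression predictive equations; the kernel parameters, noise
    variance, and toy data are assumptions:

        import numpy as np

        # GP regression sketch (illustrative): RBF covariance, noisy targets.
        rng = np.random.default_rng(0)

        def rbf(a, b, lengthscale=0.3):
            return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * lengthscale ** 2))

        x = rng.uniform(0, 1, 12)                         # training inputs
        t = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.shape)
        xs = np.linspace(0, 1, 200)                       # test inputs
        noise_var = 0.1 ** 2

        # Predictive mean: K(xs, x) [K(x, x) + sigma^2 I]^{-1} t.
        C = rbf(x, x) + noise_var * np.eye(len(x))
        Ks = rbf(xs, x)
        mean = Ks @ np.linalg.solve(C, t)
        # Predictive variance: diag(K(xs, xs)) - diag(Ks C^{-1} Ks^T);
        # the RBF prior variance is 1 at every input.
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(C, Ks.T))
        print("predictive mean and variance at the midpoint:", mean[100], var[100])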

    Markov Chain Monte Carlo Sampling
    GIBBS SAMPLING
    EXAMPLE: GIBBS SAMPLING FOR GP CLASSIFICATION
    Conditional densities for GP classification via Gibbs sampling
    Summary
    WHY DOES MCMC WORK?
    SOME SAMPLING PROBLEMS AND SOLUTIONS
    Burn-in and convergence
    Autocorrelation
    Summary
    ADVANCED SAMPLING TECHNIQUES
    Adaptive proposals and Hamiltonian Monte Carlo
    Approximate Bayesian computation
    Population MCMC and temperature schedules
    Sequential Monte Carlo
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
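
    A minimal Gibbs sampling sketch (illustrative, standing in for the chapter's
    GP classification example): a correlated bivariate Gaussian, where both
    conditional densities are available exactly. The correlation rho is an
    assumption:

        import numpy as np

        # Gibbs sampling sketch: alternately sample each coordinate of a
        # correlated bivariate Gaussian from its exact conditional.
        rng = np.random.default_rng(0)
        rho = 0.8
        x = np.zeros((5000, 2))
        for s in range(1, len(x)):
            # x1 | x2 ~ N(rho * x2, 1 - rho^2), and symmetrically for x2 | x1.
            x1 = rng.normal(rho * x[s - 1, 1], np.sqrt(1 - rho ** 2))
            x2 = rng.normal(rho * x1, np.sqrt(1 - rho ** 2))
            x[s] = x1, x2
        print("sample correlation:", np.corrcoef(x[1000:].T)[0, 1])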

    Advanced Mixture Modelling
    A GIBBS SAMPLER FOR MIXTURE MODELS
    COLLAPSED GIBBS SAMPLING
    AN INFINITE MIXTURE MODEL
    The Chinese restaurant process
    Inference in the infinite mixture model
    Summary
    DIRICHLET PROCESSES
    Hierarchical Dirichlet processes
    Summary
    BEYOND STANDARD MIXTURES—TOPIC MODELS
    CHAPTER SUMMARY
    EXERCISES
    FURTHER READING
    Glossary
    Index
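
    To illustrate the Chinese restaurant process from the final chapter:
    customers join an occupied table with probability proportional to its
    occupancy, or open a new one with probability proportional to a
    concentration parameter alpha (the value below is an assumption):

        import numpy as np

        # Chinese restaurant process sketch (illustrative; alpha is an assumption).
        rng = np.random.default_rng(0)
        alpha = 1.0
        counts = []                               # occupancy of each table opened so far
        for n in range(100):
            weights = np.array(counts + [alpha], dtype=float)
            k = rng.choice(len(weights), p=weights / weights.sum())
            if k == len(counts):
                counts.append(1)                  # open a new table
            else:
                counts[k] += 1
        print("table occupancies:", counts)
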
  • Reviews

    There are no reviews yet.

  • Shipping/Exchange Information

    Shipping Information

    Shipping details have not been entered yet.

    Exchanges/Returns

    Exchange/return details have not been entered yet.
