Machine Learning

Description

Mind Map on Machine Learning, created by Chi Wing Yau on 10/02/2015.

Resource summary

Machine Learning
  1. Supervised Learning
    1. Vapnik-Chervonenkis (VC) Dimension
      1. Classification
        1. Bayesian Decision Theory

          Annotations:

          • P(C|x) = P(C) * p(x|C) / p(x)    
          1. posterior p(C | x)

            Annotations:

            • = prior * likelihood / evidence
            1. prior P(C = i)

              Annotations:

              • probability that the class is i, before observing x
              1. likelihood p(x|C)

                Annotations:

                • conditional probability: the probability of observing x given that the class is C
                1. evidence p(x)

                  Annotations:

                  • marginal probability of observing x, obtained by summing P(C = i) p(x|C = i) over all classes; it normalises the posterior (see the Bayes-rule sketch after the outline)
                2. Risk function
                  1. Reject class
                    1. Discriminant Functions g(x)
                      1. Association Rules
                        1. Support(X,Y) ≡ P(X,Y)
                          1. Confidence(X → Y) ≡ P(Y|X)
                            1. Lift(X → Y) = P(X,Y) / (P(X) * P(Y))

                              Annotations:

                              • = P(Y|X) / P(Y); describes the strength of the relationship between X and Y: lift > 1 means they co-occur more often than if they were independent (see the computed example after the outline)
                        2. Regression
                          1. Modeling
                            1. Triple tradeoff
                              1. Complexity

                                Annotations:

                                • [Underfitting] If the model complexity is too low, the hypothesis cannot capture the underlying relation and the classification error stays high on both training and new data.
                                1. Amount of training data
                                  1. Generalisation error

                                    Annotations:

                                    • [Overfitting] If the hypothesis class is too complex, it fits the training points very closely; new data points may not follow the fitted hypothesis, so the classification error on unseen data may rise (the polynomial-fitting sketch after the outline shows both cases).
                                  2. Math notation
                                    1. g(x|θ)

                                      Annotations:

                                      •  where g(·) is the model, x is the input, and θ are the parameters 
                                      1. Loss function, L(·)
                                        1. E(θ|X) = Σₜ L(rᵗ, g(xᵗ|θ)), the total loss over the training set X
                                          1. θ* = argmin_θ E(θ|X) (see the least-squares sketch after the outline)
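
The Bayes-rule decomposition above (posterior = prior × likelihood / evidence) can be made concrete with a small Python sketch. The two classes, their priors and the Gaussian likelihood parameters are invented for illustration; they are not part of the mind map.

# Bayesian decision theory: P(C|x) = P(C) * p(x|C) / p(x).
# Classes, priors and Gaussian likelihood parameters are assumed example values.
import math

priors = {"C1": 0.6, "C2": 0.4}                 # prior P(C = i)
params = {"C1": (0.0, 1.0), "C2": (2.0, 1.0)}   # (mean, std) of p(x|C)

def likelihood(x, c):
    # conditional probability density p(x|C = c), here a Gaussian
    mean, std = params[c]
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def posterior(x):
    # evidence p(x) is the marginal: sum of prior * likelihood over all classes
    evidence = sum(priors[c] * likelihood(x, c) for c in priors)
    return {c: priors[c] * likelihood(x, c) / evidence for c in priors}

post = posterior(1.0)
print(post)                         # posterior P(C|x) for each class
print(max(post, key=post.get))      # decide for the class with the largest posterior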
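
The support, confidence and lift measures can be computed directly from transaction data. The tiny transaction list and the items chosen for X and Y below are invented example values.

# Association-rule measures for a rule X -> Y over a small transaction set.
transactions = [
    {"bread", "butter"},
    {"bread", "butter", "milk"},
    {"bread"},
    {"milk"},
    {"bread", "milk"},
]
N = len(transactions)

def prob(itemset):
    # empirical probability that a transaction contains every item in itemset
    return sum(itemset <= t for t in transactions) / N

X, Y = {"bread"}, {"butter"}
support = prob(X | Y)                       # Support(X,Y)     = P(X,Y)
confidence = prob(X | Y) / prob(X)          # Confidence(X->Y) = P(Y|X)
lift = prob(X | Y) / (prob(X) * prob(Y))    # Lift(X->Y)       = P(X,Y) / (P(X) P(Y))
print(support, confidence, lift)            # lift > 1: X and Y co-occur more than by chance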
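
The triple tradeoff between model complexity, the amount of training data and the generalisation error can be seen by fitting polynomials of different degrees to the same small data set and comparing training error with error on held-out data. The synthetic sine data and the chosen degrees are assumptions made only for this sketch.

# Under- and overfitting: compare training vs. held-out error as complexity grows.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, 15)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, 15)
x_test = rng.uniform(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, 200)

for degree in (1, 3, 12):                   # low, moderate and high complexity
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # degree 1 tends to underfit (both errors high); degree 12 tends to overfit
    # (training error near zero, held-out error rises); degree 3 balances the two.
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")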
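
The notation g(x|θ), the loss L(·), the empirical error E(θ|X) and θ* = argmin_θ E(θ|X) can be illustrated with a least-squares fit of a straight line. The linear model, the squared loss and the small data set are assumed choices; any model/loss pair follows the same template.

# Model g(x|theta), loss L, empirical error E(theta|X), theta* = argmin E(theta|X).
import numpy as np

def g(x, theta):
    # model g(x|theta): a straight line with parameters theta = (w, b)
    w, b = theta
    return w * x + b

def L(r, y):
    # squared-error loss between the desired output r and the prediction y
    return (r - y) ** 2

def E(theta, X, R):
    # empirical error: total loss over the training pairs (x, r) in X
    return sum(L(r, g(x, theta)) for x, r in zip(X, R))

# invented training pairs roughly following r = 2x + 1
X = [0.0, 1.0, 2.0, 3.0]
R = [1.1, 2.9, 5.2, 6.8]

# theta* = argmin_theta E(theta|X); a coarse grid search keeps the sketch simple
grid = [(w, b) for w in np.linspace(0, 4, 81) for b in np.linspace(-2, 2, 81)]
theta_star = min(grid, key=lambda t: E(t, X, R))
print(theta_star, E(theta_star, X, R))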