## Transcribed Text

Chapter 4 - Exercise #3:
This problem relates to the QDA model, in which the observations within each class are drawn from a normal distribution with a class-specific mean vector and a class-specific covariance matrix. We consider the simple case where p = 1; i.e. there is only one feature. Suppose that we have K classes, and that if an observation belongs to the kth class then X comes from a one-dimensional normal distribution, X ~ N(μ_k, σ_k²). Recall that the density function for the one-dimensional normal distribution is given in (4.11). Prove that in this case, the Bayes classifier is not linear. Argue that it is in fact quadratic.
Hint: For this problem, you should follow the arguments laid out in Section 4.4.2, but without making the assumption that σ_1² = ... = σ_K².
Chapter 4 - Exercise #4:
4. When the number of features p is large, there tends to be a deterioration in the performance of KNN and other local approaches that perform prediction using only observations that are near the test observation for which a prediction must be made. This phenomenon is known as the curse of dimensionality, and it ties into the fact that non-parametric approaches often perform poorly when p is large. We will now investigate this curse.
(a) Suppose that we have a set of observations, each with measurements on p = 1 feature, X. We assume that X is uniformly (evenly) distributed on [0, 1]. Associated with each observation is a response value. Suppose that we wish to predict a test observation's response using only observations that are within 10% of the range of X closest to that test observation. For instance, in order to predict the response for a test observation with X = 0.6, we will use observations in the range [0.55, 0.65]. On average, what fraction of the available observations will we use to make the prediction?
(b) Now suppose that we have a set of observations, each with measurements on p = 2 features, X1 and X2. We assume that (X1, X2) are uniformly distributed on [0, 1] × [0, 1]. We wish to predict a test observation's response using only observations that are within 10% of the range of X1 and within 10% of the range of X2 closest to that test observation. For instance, in order to predict the response for a test observation with X1 = 0.6 and X2 = 0.35, we will use observations in the range [0.55, 0.65] for X1 and in the range [0.3, 0.4] for X2. On average, what fraction of the available observations will we use to make the prediction?
(c) Now suppose that we have a set of observations on p = 100 features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation's response using observations within the 10% of each feature's range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?
(d) Using your answers to parts (a)-(c), argue that a drawback of KNN when p is large is that there are very few training observations "near" any given test observation.
(e) Now suppose that we wish to make a prediction for a test observation by creating a p-dimensional hypercube centered around the test observation that contains, on average, 10% of the training observations. For p = 1, 2, and 100, what is the length of each side of the hypercube? Comment on your answer.
Note: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When p = 1, a hypercube is simply a line segment, when p = 2 it is a square, and when p = 100 it is a 100-dimensional cube.
Chapter 4 - Exercise #5:
5. We now examine the differences between LDA and QDA.
(a) If the Bayes decision boundary is linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(b) If the Bayes decision boundary is non-linear, do we expect LDA or QDA to perform better on the training set? On the test set?
(c) In general, as the sample size n increases, do we expect the test prediction accuracy of QDA relative to LDA to improve, decline, or be unchanged? Why?
(d) True or False: Even if the Bayes decision boundary for a given problem is linear, we will probably achieve a superior test error rate using QDA rather than LDA because QDA is flexible enough to model a linear decision boundary. Justify your answer.
Chapter 4 - Exercise #6:
6. Suppose we collect data for a group of students in a statistics class with variables X1 = hours studied, X2 = undergrad GPA, and Y = receive an A. We fit a logistic regression and produce estimated coefficients β̂0 = -6, β̂1 = 0.05, β̂2 = 1.
(a) Estimate the probability that a student who studies for 40 h and has an undergrad GPA of 3.5 gets an A in the class.
(b) How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?
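
A quick check of the plug-in arithmetic, as a sketch that assumes the coefficients reconstructed above (β̂0 = -6, β̂1 = 0.05, β̂2 = 1) are the intended ones:

```r
# (a) P(A | hours = 40, GPA = 3.5) under the fitted logistic model
logit <- -6 + 0.05 * 40 + 1 * 3.5   # linear predictor = -0.5
exp(logit) / (1 + exp(logit))       # probability, roughly 0.38

# (b) a 50% chance requires the linear predictor to equal zero:
#     -6 + 0.05 * h + 1 * 3.5 = 0  =>  h = 50 hours
(6 - 3.5) / 0.05
```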
Chapter 4 - Exercise #7:
7. Suppose that we wish to predict whether a given stock will issue a dividend this year ("Yes" or "No") based on X, last year's percent profit. We examine a large number of companies and discover that the mean value of X for companies that issued a dividend was X̄ = 10, while the mean for those that didn't was X̄ = 0. In addition, the variance of X for these two sets of companies was σ̂² = 36. Finally, 80% of companies issued dividends. Assuming that X follows a normal distribution, predict the probability that a company will issue a dividend this year given that its percentage profit was X = 4 last year.
Hint: Recall that the density function for a normal random variable is f(x) = (1 / (σ√(2π))) exp(−(x − μ)² / (2σ²)). You will need to use Bayes' theorem.
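
A short sketch of the Bayes' theorem computation, using only the values stated in the problem (prior 0.8 for "Yes", class means 10 and 0, common standard deviation √36 = 6):

```r
# Posterior P(dividend = Yes | X = 4) via Bayes' theorem,
# with normal class-conditional densities sharing sd = 6
num <- 0.8 * dnorm(4, mean = 10, sd = 6)
den <- num + 0.2 * dnorm(4, mean = 0, sd = 6)
num / den                           # roughly 0.75
```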
Chapter 4 - Exercise #8:
8. Suppose that we take a data set, divide it into equally-sized training and test sets, and then try out two different classification procedures. First we use logistic regression and get an error rate of 20% on the training data and 30% on the test data. Next we use 1-nearest neighbors (i.e. K = 1) and get an average error rate (averaged over both test and training data sets) of 18%. Based on these results, which method should we prefer to use for classification of new observations? Why?
Chapter 4 - Exercise #9:
9. This problem has to do with odds.
(a) On average, what fraction of people with an odds of 0.37 of defaulting on their credit card payment will in fact default?
(b) Suppose that an individual has a 16% chance of defaulting on her credit card payment. What are the odds that she will default?
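
Both conversions follow from odds = p / (1 − p); a minimal sketch:

```r
# (a) probability implied by odds of 0.37: p = odds / (1 + odds)
0.37 / (1 + 0.37)                   # about 0.27, so roughly 27% default

# (b) odds implied by a 16% chance of default
0.16 / (1 - 0.16)                   # about 0.19
```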
Chapter 4 - Exercise #10:
10. This question should be answered using the Weekly data set, which is part of the ISLR package. This data is similar in nature to the Smarket data from this chapter's lab, except that it contains 1,089 weekly returns for 21 years, from the beginning of 1990 to the end of 2010.
(a) Produce some numerical and graphical summaries of the Weekly data. Do there appear to be any patterns?
(b) Use the full data set to perform a logistic regression with Direction as the response and the five lag variables plus Volume as predictors. Use the summary function to print the results. Do any of the predictors appear to be statistically significant? If so, which ones?
(c) Compute the confusion matrix and overall fraction of correct predictions. Explain what the confusion matrix is telling you about the types of mistakes made by logistic regression.
(d) Now fit the logistic regression model using a training data period from 1990 to 2008, with Lag2 as the only predictor. Compute the confusion matrix and the overall fraction of correct predictions for the held out data (that is, the data from 2009 and 2010). See the sketch after this exercise.
(e) Repeat (d) using LDA.
(f) Repeat (d) using QDA.
(g) Repeat (d) using KNN with K = 1.
(h) Which of these methods appears to provide the best results on this data?
(i) Experiment with different combinations of predictors, including possible transformations and interactions, for each of the methods. Report the variables, method, and associated confusion matrix that appears to provide the best results on the held out data. Note that you should also experiment with values for K in the KNN classifier.
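
As one illustration of the workflow in part (d), a minimal R sketch; it assumes only what the exercise states about the Weekly data (a Year column spanning 1990-2010, a Lag2 predictor, and a Direction response):

```r
library(ISLR)

# Train on 1990-2008 and hold out 2009-2010
train <- Weekly$Year <= 2008
fit <- glm(Direction ~ Lag2, data = Weekly,
           family = binomial, subset = train)

# Predicted probabilities of "Up" on the held-out weeks, thresholded at 0.5
probs <- predict(fit, Weekly[!train, ], type = "response")
pred <- ifelse(probs > 0.5, "Up", "Down")

# Confusion matrix and overall fraction of correct predictions
table(pred, Weekly$Direction[!train])
mean(pred == Weekly$Direction[!train])
```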
Chapter 5 - Exercise #2:
2. We will now derive the probability that a given observation is part of a bootstrap sample. Suppose that we obtain a bootstrap sample from a set of n observations.
(a) What is the probability that the first bootstrap observation is not the jth observation from the original sample? Justify your answer.
(b) What is the probability that the second bootstrap observation is not the jth observation from the original sample?
(c) Argue that the probability that the jth observation is not in the bootstrap sample is (1 − 1/n)^n.
(d) When n = 5, what is the probability that the jth observation is in the bootstrap sample?
(e) When n = 100, what is the probability that the jth observation is in the bootstrap sample?
(f) When n = 10,000, what is the probability that the jth observation is in the bootstrap sample?
(g) Create a plot that displays, for each integer value of n from 1 to 100,000, the probability that the jth observation is in the bootstrap sample. Comment on what you observe (a sketch of this plot appears after the exercise).
(h) We will now investigate numerically the probability that a bootstrap sample of size n = 100 contains the jth observation. Here j = 4. We repeatedly create bootstrap samples, and each time we record whether or not the fourth observation is contained in the bootstrap sample.
> store <- rep(NA, 10000)
> for (i in 1:10000) {
    store[i] <- sum(sample(1:100, rep = TRUE) == 4) > 0
}
> mean(store)
Comment on the results obtained.
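
For part (g), a minimal sketch of the requested plot; from part (c), the probability that the jth observation appears in the bootstrap sample is 1 − (1 − 1/n)^n (the log-scaled x-axis is just a display choice):

```r
# Probability that the jth observation is in a bootstrap sample of size n
n <- 1:100000
prob <- 1 - (1 - 1/n)^n

# The curve drops quickly and levels off near 1 - exp(-1), about 0.632
plot(n, prob, type = "l", log = "x",
     xlab = "n", ylab = "P(jth observation in bootstrap sample)")
```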
Chapter 5 - Exercise #5:
5. In Chapter 4, we used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach. Do not forget to set a random seed before beginning your analysis.
(a) Fit a logistic regression model that uses income and balance to predict default.
(b) Using the validation set approach, estimate the test error of this model. In order to do this, you must perform the following steps (a sketch of them appears after the exercise):
i. Split the sample set into a training set and a validation set.
ii. Fit a multiple logistic regression model using only the training observations.
iii. Obtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.
iv. Compute the validation set error, which is the fraction of the observations in the validation set that are misclassified.
(c) Repeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.
(d) Now consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.
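
A minimal sketch of the validation-set steps in part (b), assuming the Default data from the ISLR package with columns default, income, and balance:

```r
library(ISLR)
set.seed(1)

# i. Split the observations into training and validation halves
train <- sample(nrow(Default), nrow(Default) / 2)

# ii. Fit the logistic regression using only the training observations
fit <- glm(default ~ income + balance, data = Default,
           family = binomial, subset = train)

# iii. Classify a validation individual as "Yes" when the posterior
#      probability of default exceeds 0.5
probs <- predict(fit, Default[-train, ], type = "response")
pred <- ifelse(probs > 0.5, "Yes", "No")

# iv. Validation set error: fraction of validation observations misclassified
mean(pred != Default$default[-train])
```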

The solutions that follow offer step-by-step explanations and are intended for personal skill-building and practice.

# 4.3
# In deriving (4.13) from (4.12), terms that do not depend on k are
# ignored or absorbed into the multiplicative constant, because we just
# seek the class k for which p_k(x) is largest. When sigma_k = sigma for
# all k, the coefficient of x^2 is -1/(2*sigma^2) for every class, so
# the whole x^2 term is absorbed into that constant and the resulting
# discriminant (4.13) is linear in x. But when the sigma_k differ, the
# coefficient -1/(2*sigma_k^2) depends on k, so the x^2 term remains in
# the expression for the Bayes classifier. The quadratic discriminant
# is therefore (4.13) with the extra term -x^2/(2*sigma_k^2); because it
# includes x squared, the classifier is quadratic rather than linear.
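
As a worked form of that argument (a sketch in the notation of Section 4.4.2, with p = 1), taking the log of π_k f_k(x) and dropping the log √(2π) term shared by all classes gives the discriminant

$$
\delta_k(x) = -\frac{x^2}{2\sigma_k^2} + \frac{\mu_k}{\sigma_k^2}\,x - \frac{\mu_k^2}{2\sigma_k^2} - \log \sigma_k + \log \pi_k .
$$

The coefficient $-1/(2\sigma_k^2)$ on $x^2$ varies with $k$, so the boundary $\delta_k(x) = \delta_l(x)$ between any two classes is a quadratic equation in $x$, not a linear one.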

# 4.4
# (a) 10% (0.65 - 0.55 = 0.10)
# (b) (0.65 - 0.55) * (0.4 - 0.3) = 0.01, i.e. 1%
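
A quick Monte Carlo check of (a) and (b), a sketch assuming uniform features and the test points used in the exercise; by the same logic, part (c) gives 0.1^100, an essentially empty neighborhood:

```r
set.seed(1)
n <- 1e6

# (a) p = 1: fraction of uniform draws within the window [0.55, 0.65]
x1 <- runif(n)
mean(abs(x1 - 0.6) <= 0.05)                            # about 0.10

# (b) p = 2: both features must land in their 10% windows
x2 <- runif(n)
mean(abs(x1 - 0.6) <= 0.05 & abs(x2 - 0.35) <= 0.05)   # about 0.01

# (c) p = 100: the fraction is 0.1^100, effectively zero
0.1^100
```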