# Solution to exercises week 8, The Multivariate Gaussian Classifier

Some theory

## Multivariate Gaussian classifier

We are provided with a Landsat satellite image containing 6 image "bands". We will use each of these bands as a feature image. Let's load them; I'm saving them in a so-called cell array.

tm = cell(6,1);
% We are also going to need the training and test masks. These masks
% indicate which pixels are known to belong to each class, and thus
% which pixels we should use for training and which we can use to
% validate (test) the result.
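The post doesn't show the actual loading code. A minimal sketch, assuming each band and mask is stored as a MAT-file (the filenames and the variable name `band` are hypothetical; adjust them to the exercise data):

```matlab
% Hypothetical filenames -- the real data files are not named in the post
for f = 1:6
    s = load(sprintf('tm%d.mat',f));   % assumed: one band per file
    tm{f} = s.band;                    % 'band' is an assumed variable name
end
load('tm_train.mat');   % training mask: 0 = unlabelled, 1..4 = class label
load('tm_test.mat');    % test mask, same convention
```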

% Display the six feature bands, two per figure
for f = 1:6
    figure(ceil(f/2))
    subplot(2,1,mod(f-1,2)+1)
    imshow(tm{f},[]);
    colorbar
    title(sprintf('Feature %d',f));
end

figure(4)
subplot(211)
imagesc(tm_train);
colorbar
axis image

subplot(212)
imagesc(tm_test);
colorbar
axis image
drawnow


## Let's do the classification

[M,N] = size(tm_train);
res_train = zeros(M,N);
res_test = zeros(M,N);

% I have one implementation where I train the classifier and also run
% the classifier on the training data, giving me the accuracy of the
% classifier on the training data.
[class_img,error_train,confusion_train,u,c] = trainMultiGaussClassifier(tm,tm_train);
% The second implementation takes the mean values (u) and the covariance
% matrices (c) from the training part and classifies the image. From this
% I calculate the accuracy on the test part of the image.
[class_img2,error_test,confusion_test] = multiGaussClassifierNoTraining(tm,tm_test,u,c);
% Note that the two classified images (class_img and class_img2) will be
% the same; the difference is which mask I use.
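Neither `trainMultiGaussClassifier` nor `multiGaussClassifierNoTraining` is listed in the post. A minimal sketch of what the training step might do, assuming equal priors and the quadratic Gaussian discriminant from the lectures (all function and variable names here are my own reconstruction, not the actual implementation):

```matlab
function [class_img,error_rate,confusion,u,c] = trainMultiGaussClassifier(features,mask)
    nFeat  = numel(features);
    nClass = max(mask(:));
    [M,N]  = size(mask);
    u = cell(nClass,1); c = cell(nClass,1);
    % Estimate mean vector and covariance matrix per class from the masked pixels
    for i = 1:nClass
        temp = zeros(nnz(mask==i),nFeat);
        for f = 1:nFeat
            temp(:,f) = double(features{f}(mask==i));
        end
        u{i} = mean(temp)';   % nFeat x 1 mean vector
        c{i} = cov(temp);     % nFeat x nFeat covariance matrix
    end
    % Stack all pixels as rows and evaluate the discriminant per class
    X = zeros(M*N,nFeat);
    for f = 1:nFeat
        X(:,f) = double(features{f}(:));
    end
    g = zeros(M*N,nClass);
    for i = 1:nClass
        D = X - u{i}';                        % deviations from the class mean
        g(:,i) = -0.5*log(det(c{i})) ...
                 -0.5*sum((D/c{i}).*D,2);     % quadratic discriminant, equal priors
    end
    [~,labels] = max(g,[],2);
    class_img  = reshape(labels,M,N);
    % Error rate and confusion matrix over the masked pixels only
    idx        = mask(:) > 0;
    confusion  = accumarray([mask(idx) labels(idx)],1,[nClass nClass]);
    error_rate = 1 - sum(diag(confusion))/sum(confusion(:));
end
```

The no-training variant would skip the estimation loop and reuse the `u` and `c` passed in.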

% Let's build the resulting images within the training and test masks
for k = 1:4
    res_train(tm_train==k) = class_img(tm_train==k);
    res_test(tm_test==k)   = class_img(tm_test==k);
end

% And display them
figure(5);
imagesc(class_img);
title('Classified image');
colormap jet
drawnow

figure(6);
imagesc(res_train);
colormap jet
drawnow

figure(7)
imagesc(res_test);
colormap jet
drawnow

figure(8)
imagesc(klassim)
title('Given classification (from tm\_classres)');
colormap jet
drawnow

p = sum(sum(class_img == klassim))*100/(N*M);
fprintf('Comparing my result with tm_classres: %f \n',p);

error_train

confusion_train

error_test

confusion_test

Comparing my result with tm_classres: 100.000000

error_train =

0.0807

confusion_train =

        1340           2           0         310
          43        1253           0           2
           0           0        1738           0
         131           3           0        1266

error_test =

0.1240

confusion_test =

        1474           3           1         251
         513        2311           0           0
          14           0        1953          12
         213           2           0        1390



## Exercise 4 (named Exercise 3); see the note for details

C = [1.2 0.4; 0.4 1.8];

mu1 = [0.1 ; 0.1];
mu2 = [2.1 ; 1.9];
mu3 = [-1.5 ; 2.0];

x = [1.6 ; 1.5];

g1 = -(1/2)*(x-mu1)'*inv(C)*(x-mu1)

g2 = -(1/2)*(x-mu2)'*inv(C)*(x-mu2)

g3 = -(1/2)*(x-mu3)'*inv(C)*(x-mu3)

[a,c] = max([g1 g2 g3]);
c

g1 =

-1.1805

g2 =

-0.1205

g3 =

-4.7095

c =

2
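Since all three classes share the same covariance matrix C and (implicitly) equal priors, the class-independent terms of the full Gaussian discriminant cancel, which is why only the Mahalanobis term is evaluated above:

```latex
g_i(\mathbf{x}) = -\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu}_i)^{\mathsf T} C^{-1} (\mathbf{x}-\boldsymbol{\mu}_i),
\qquad
\hat{\omega} = \arg\max_i \; g_i(\mathbf{x})
```

The maximum is g2, so x = (1.6, 1.5)^T is assigned to class 2, whose mean (2.1, 1.9)^T is closest to x in Mahalanobis distance.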



## Exercise 5 (named Exercise 4); see the note for details

C = [1.1 0.3; 0.3 1.9];

mu1 = [0 ; 0];
mu2 = [3 ; 3];

x = [1.0 ; 2.2];

g1 = -(1/2)*(x-mu1)'*inv(C)*(x-mu1)

g2 = -(1/2)*(x-mu2)'*inv(C)*(x-mu2)

[a,c] = max([g1 g2]);
c


g1 =

-1.4760

g2 =

-1.8360

c =

1



1. Cameron Lowell Palmer says:

1. omrindal says:

It's a part of the task 😉

2. Ni, Hung Chih says:

Hi:
The MATLAB function cov() can only be applied to two matrices at a time, but we have six matrices to handle. Is there another way to use this function, or do we have to compute the covariance matrix with a loop?

1. omrindal says:

Hi,

I don't really understand what you mean. I put all the features from one class in one matrix, for example called temp_matrix, so that temp_matrix has dimensions number_of_pixels_from_each_feature x number_of_features, for example 1652x6. Then you can get the covariance matrix for this class by calling cov(temp_matrix). You have to do this for every class.

3. Arthur Biancarelli says:

Hi,

I have trouble extracting the covariance matrix from the features. Could you tell me again how you get your feature matrix from the features images? (you put them in one matrix called temp_matrix, how ?)
Thanks

1. omrindal says:

I do this
 temp_matrix(:,feature) = double(featureImage{feature}(training_mask==i)); 

where feature is the index of the features going from 1 to 6. and i is the index of the class, going from 1 to 4.
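Putting that snippet in context, the per-class covariance computation might look like this (`featureImage`, `training_mask`, and the 4-class/6-feature counts come from the discussion above; the surrounding loop structure is my reconstruction):

```matlab
% For each class, gather its training pixels from all six feature images
% and estimate the 6x6 covariance matrix with a single call to cov()
for i = 1:4                                   % classes 1..4
    n = nnz(training_mask==i);                % number of pixels in this class
    temp_matrix = zeros(n,6);
    for feature = 1:6
        temp_matrix(:,feature) = double(featureImage{feature}(training_mask==i));
    end
    C = cov(temp_matrix);                     % 6x6 covariance for class i
end
```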

1. Arthur Biancarelli says:

Ok many thanks for the quick reply

4. Ni, Hung Chih says:

Hi:
Can you explain a little bit more about exercise 3?
Why is ||x-u||^2 = x^T*x - 2*x^T*u + u^T*u?
Shouldn't it be x^2 - 2*x*u + u^2?

Or, using (x-u)^T * (x-u), shouldn't it be
x^T*x - x^T*u - u^T*x + u^T*u?

5. omrindal says:

Hello,

Yeah, it is ||x-u||^2 = (x-u)^T(x-u), where ^T means the transpose, since these are vectors. And then:
(x-u)^T(x-u) = x^T*x - x^T*u - u^T*x + u^T*u = x^T*x - 2*x^T*u + u^T*u.

Does it make sense?

6. Ni, Hung Chih says:

(x-u)^T(x-u) = x^T*x - x^T*u - u^T*x + u^T*u
doesn't equal x^T*x - 2*x^T*u + u^T*u,
since x^T*u doesn't equal u^T*x.
Or is it equal?

1. omrindal says:

Hey,

in our case u and x are vectors, so x^T*u is a scalar, and a scalar equals its own transpose:

x^T*u = (x^T*u)^T = u^T*x

so

x^T*x - x^T*u - u^T*x + u^T*u = x^T*x - 2*x^T*u + u^T*u.