
Part 1: ANN Inference (10 marks in total + 1 mark for FIT1053)

As indicated in the overview figure (i.e., Figure 1), an ANN is organised in layers, and each layer consists of a

number of “neurons” that we refer to as vertices. When classifying an input image, the input data flows first

to the vertices in the input layer (i.e., coloured in with pink in Figure 1), from there on through a number of

inner layers (i.e., coloured in with yellow in Figure 1), and finally through an output layer (i.e., coloured in

with blue in Figure 1) that computes the final scores. The vertices in the input and the output layers work a

bit differently than the vertices in the inner layers, so let us focus on those first. The vertices in the input layer are extremely

simple: they just forward the value of a single input pixel to all vertices of the subsequent layer.

Single Output Vertex (1 mark)

The output vertices are a little more complicated: given an input vector x = (x_0, ..., x_{d-1}) of some dimension d, they compute a linear function:

f(x) = w_0 x_0 + ··· + w_{d-1} x_{d-1} + b = w · x + b (1)

defined by a d-dimensional weight vector w and a single number b called the bias. Note that we use the usual notation w · x to denote the dot product of w and x, i.e., the sum w_0 x_0 + ··· + w_{d-1} x_{d-1}.

Write function linear(x, w, b) that can be used to compute the output of an individual vertex in the

output layer by adhering to the following specification:

Input: A list of inputs (x), a list of weights (w) and a bias (b).

Output: A single number corresponding to the value of f(x) in Equation 1.

For instance, for input vector x = (1, 3.5), weight vector w = (3.8, 1.5) and bias b = -1.7, the output of a single vertex (i.e., the output of vertex 4 in Figure 4) is computed as:

>>> x = [1, 3.5]

>>> w = [3.8, 1.5]

>>> b = -1.7

>>> linear(x, w, b)

7.35
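
As a reference point, here is one possible sketch of linear (not necessarily the reference solution): it accumulates the products of corresponding entries of x and w and adds the bias, so its result matches the value above up to floating-point rounding.

def linear(x, w, b):
    # Accumulate w[i] * x[i] for every input/weight pair, starting from the bias b.
    total = b
    for xi, wi in zip(x, w):
        total += wi * xi
    return total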

Figure 4: Visualization of the example ANN that has three layers in total with 2 vertices in its first layer (i.e.,

coloured in with pink), 2 vertices in its last layer (i.e., coloured in with blue), and 2 vertices in its inner layer (i.e.,

coloured in with yellow). The red arrow visualizes the layer-by-layer information flow through the ANN.

Output Layer (1 mark)

Now to compute the combined output of the whole output layer that consists of l vertices, we just have to compute the joint output of many linear functions. That is, given the weight vectors w_0, ..., w_{l-1} and biases b_0, ..., b_{l-1} of the individual vertices and the input data x flowing in from the preceding layer, the output of the output layer is described by the function:

f(x) = (w_0 · x + b_0, ..., w_{l-1} · x + b_{l-1}) (2)

Write function linear_layer(x, w, b) that can be used to compute the output of the whole output layer by

adhering to the following specification:

Input: A list of inputs (x), a table of weights (w) and a list of biases (b).

Output: A list of numbers corresponding to the values of f(x) in Equation 2.

For instance, for input vector x = (1, 3.5), weight matrix w = ((3.8, 1.5), (-1.2, 1.1)) and biases b = (-1.7, 2.5), the combined output of the whole output layer (i.e., the output of vertices 4 and 5 in the example ANN that is visualized in Figure 4) is computed as:

>>> x = [1, 3.5]

>>> w = [[3.8, 1.5], [-1.2, 1.1]]

>>> b = [-1.7, 2.5]

>>> linear_layer(x, w, b)

[7.35, 5.15]
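
A minimal sketch of linear_layer, assuming the linear function from the previous part is in scope: each row of the weight table and the matching bias define one vertex, and their outputs are collected into a list.

def linear_layer(x, w, b):
    # One call to linear per vertex: row w[i] of the weight table pairs with bias b[i].
    return [linear(x, wi, bi) for wi, bi in zip(w, b)]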

Inner Layers (1 mark)

Next we need to compute the output of the vertices in the inner layers. Individually, the output of a single

vertex that is located in an inner layer is computed by the following piecewise linear function:

f(x) = max(w · x + b, 0) (3)

for a given input vector x, weight vector w and bias b. Therefore, given the weight vectors w_0, ..., w_{l-1} and biases b_0, ..., b_{l-1} of the individual vertices and the input data x flowing in from the preceding layer, the output of an inner layer is described by the function:

f(x) = (max(w_0 · x + b_0, 0), ..., max(w_{l-1} · x + b_{l-1}, 0)) (4)

Write function inner_layer(x, w, b) that can be used to compute the output of an inner layer by adhering

to the following specification:

Input: A list of inputs (x), a table of weights (w) and a list of biases (b).

Output: A list of numbers corresponding to the values of f(x) in Equation 4.

For instance, for input vector x = (1, 0), weight matrix w = ((2.1, -3.1), (-0.7, 4.1)) and biases b = (-1.1, 4.2), the combined output of the whole inner layer (i.e., the output of vertices 2 and 3 in the example ANN that is visualized in Figure 4) is computed as:

>>> x = [1, 0]

>>> w = [[2.1, -3.1], [-0.7, 4.1]]

>>> b = [-1.1, 4.2]

>>> inner_layer(x, w, b)

[1, 3.5]
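
A possible sketch of inner_layer, again assuming linear is in scope: it performs the same computation as linear_layer, except that each vertex output is clamped at zero as required by Equation 3.

def inner_layer(x, w, b):
    # Same as linear_layer, but negative outputs are replaced by 0 (Equation 3).
    return [max(linear(x, wi, bi), 0) for wi, bi in zip(w, b)]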

Full Inference (2 marks)

Finally, we can put everything together to compute the output of the whole ANN (e.g., scores) given some input

vector (e.g., pixels). Specifically, the output of the ANN is computed layer-by-layer starting from the input

layer, continuing with the inner layer(s) and ending with the output layer. That is, the output of the vertices

in the input layer must be computed first, which is then used in the output computation of the vertices in the

inner layer(s), which is finally used in the output computation of the vertices in the output layer.

Write function inference(x, w, b) that can be used to compute the output of an ANN by adhering to

the following specification:


Input: A list of inputs (x), a list of tables of weights (w) and a table of biases (b).

Output: A list of numbers corresponding to output of the ANN.

The function inference behaves as follows for the example ANN (i.e., the one visualized in Figure 4) with the weights and biases given below:

>>> x = [1, 0]

>>> w = [[[2.1, -3.1], [-0.7, 4.1]], [[3.8, 1.5], [-1.2, 1.1]]]

>>> b = [[-1.1, 4.2], [-1.7, 2.5]]

>>> inference(x,w,b)

[7.35, 5.15]
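
One way inference could be sketched is by chaining the layer functions above, assuming w and b hold one entry per non-input layer and that the last entry belongs to the output layer (as in the example):

def inference(x, w, b):
    # Push the input through every inner layer in order ...
    for layer in range(len(w) - 1):
        x = inner_layer(x, w[layer], b[layer])
    # ... and finish with the output layer, which applies no clamping.
    return linear_layer(x, w[-1], b[-1])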

Next, we will focus on implementing functions that read in and store data from text files.

Reading Weights (1 mark)

The weights of the ANN will be stored in a text file and must be read in and stored. For example, the weights

of the example ANN that are visualized in Figure 4 (also included in 'example_weights.txt') will have the

following format:

#

2.1,-3.1

-0.7,4.1

#

3.8,1.5

-1.2,1.1

where the character # is used to separate two adjacent weight matrices.

Write function read_weights(file_name) that can be used to read in the weights of the ANN by adhering

to the following specification:

Input: A string (file_name) that corresponds to the name of the file that contains the weights of the ANN.

Output: A list of tables of numbers corresponding to the weights of the ANN.

For example, the function read_weights behaves as follows:

>>> w_example = read_weights('example_weights.txt')

>>> w_example

[[[2.1, -3.1], [-0.7, 4.1]], [[3.8, 1.5], [-1.2, 1.1]]]

>>> w = read_weights('weights.txt')

>>> len(w)

3

>>> len(w[2])

10

>>> len(w[2][0])

16
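
One way to sketch read_weights, based on the file format shown above: a line containing only '#' starts a new weight matrix, and every other non-empty line is one comma-separated row of the current matrix.

def read_weights(file_name):
    weights = []
    with open(file_name) as f:
        for line in f:
            line = line.strip()
            if line == '#':
                weights.append([])  # start a new weight matrix
            elif line:
                weights[-1].append([float(value) for value in line.split(',')])
    return weights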

Reading Biases (1 mark)

Similarly, the biases of the ANN will be stored in a text file and must be read in and stored. For example, the

biases of the example ANN that are visualized in Figure 4 (also included in 'example_biases.txt') will have

the following format:

#

-1.1,4.2

#

-1.7,2.5

where the character # is used to separate two adjacent bias vectors.

Write function read_biases(file_name) that can be used to read in the biases of the ANN by adhering to

the following specification:

Input: A string (file_name) that corresponds to the name of the file that contains the biases of the ANN.


Output: A table of numbers corresponding to the biases of the ANN.

For example, the function read_biases behaves as follows:

>>> b_example = read_biases('example_biases.txt')

>>> b_example

[[-1.1, 4.2], [-1.7, 2.5]]

>>> b = read_biases('biases.txt')

>>> len(b)

3

>>> len(b[0])

16
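
A corresponding sketch of read_biases: since each '#' line is followed by a single comma-separated line of biases, every non-separator line simply becomes one bias vector.

def read_biases(file_name):
    biases = []
    with open(file_name) as f:
        for line in f:
            line = line.strip()
            if line and line != '#':
                biases.append([float(value) for value in line.split(',')])
    return biases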

Reading the Image File (1 mark)

The image in Figure 2 visualizes a black-and-white image of a handwritten digit using zeros for white pixels

and ones for black pixels over 28x28 pixels (i.e., 784 in total). Write function read_image(file_name) that can

be used to read in the image visualized by Figure 2 by adhering to the following specification:

Input: A string (file_name) that corresponds to the name of the file that contains the image.

Output: A list of numbers corresponding to input of the ANN.

For example, the function read_image behaves as follows:

>>> x = read_image('image.txt')

>>> len(x)

784
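
A sketch of read_image under the assumption that the image file stores the 28x28 pixels row by row, one line per row, with the 0/1 values separated by commas; if the actual file uses a different separator, only the parsing of each line needs to change.

def read_image(file_name):
    pixels = []
    with open(file_name) as f:
        for line in f:
            line = line.strip()
            if line:
                # ASSUMED format: one image row per line, comma-separated 0/1 values.
                pixels.extend(int(value) for value in line.split(','))
    return pixels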

Finally, we will solve the handwritten digit classification task.

Output Selection (1 mark)

The scores computed by the ANN will be used to predict what number is represented in the image. Specifically, given a vector of computed scores x = (x_0, ..., x_{d-1}) of some dimension d, the predicted number i will be the index of the highest score x_i. Write function argmax(x) that can be used to return the index of an element with the maximum value.

Input: A list of numbers (i.e., x) that can represent the scores computed by the ANN.

Output: A number representing the index of an element with the maximum value in the list x. If there are

multiple elements with the same maximum value, the function should return the minimum index (e.g.,

see the example below).

For example, argmax behaves as follows for input list x:

>>> x = [1.3, -1.52, 3.9, 0.1, 3.9]

>>> argmax(x)

2
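
A possible sketch of argmax: using a strict comparison means an earlier maximum is never replaced by a later equal value, which gives the required minimum-index tie-breaking.

def argmax(x):
    best_index = 0
    for i in range(1, len(x)):
        if x[i] > x[best_index]:  # strictly greater, so ties keep the earlier index
            best_index = i
    return best_index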

Number Prediction (1 mark)

Finally, write function predict_number(image_file_name, weights_file_name, biases_file_name) that solves

the handwritten digit classification task.

Input: A string (i.e., image_file_name) that corresponds to the image file name, a string (i.e., weights_file_name) that corresponds to the weights file name and a string (i.e., biases_file_name) that corresponds to the biases file name.

Output: The number predicted in the image by the ANN.

For example, predict_number behaves as follows for file names 'image.txt', 'weights.txt' and 'biases.txt':

>>> i = predict_number('image.txt', 'weights.txt', 'biases.txt')

>>> print('The image is number ' + str(i))

The image is number 4
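
Finally, a sketch of how predict_number could simply chain the helpers sketched above: read the image, weights and biases, run the inference, and return the index of the highest score.

def predict_number(image_file_name, weights_file_name, biases_file_name):
    x = read_image(image_file_name)          # 784 input pixels
    w = read_weights(weights_file_name)      # one weight matrix per layer
    b = read_biases(biases_file_name)        # one bias vector per layer
    return argmax(inference(x, w, b))        # index of the highest output score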


