Change back-propagation to backpropagation
This commit is contained in:
parent d79f968c6f
commit 3ed52bf88a
@@ -790,8 +790,7 @@ and updating the weights within the network is it possible to gain
 experience $E$ at carrying out a task $T$. How the weights are updated
 depends on the algorithm which is used during the \emph{backward pass}
 to minimize the error. This type of procedure is referred to as
-\emph{back-propagation} (see
-section~\ref{ssec:theory-back-propagation}).
+\emph{backpropagation} (see section~\ref{ssec:theory-backprop}).
 
 One common type of loss function is the \gls{mse} which is widely used
 in regression problems. The \gls{mse} is a popular choice because it
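The passage in this hunk describes the backward pass minimizing an error such as the \gls{mse}. A minimal sketch of that idea (illustrative only, not part of the commit; the data, learning rate, and linear model are assumed for the example):

```python
# Minimal sketch (assumed example): the backward pass updates a weight
# to minimize a loss; MSE is a common choice for regression problems.

def mse(y_true, y_pred):
    """Mean squared error: average of the squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# One gradient-descent step for a linear model y_pred = w * x,
# showing how the error signal flows back to update the weight w.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]      # true relationship: y = 2x
w, lr = 0.0, 0.1

preds = [w * x for x in xs]
loss = mse(ys, preds)     # loss before the update
# dL/dw for the MSE loss, averaged over the samples
grad = sum(2 * (p - t) * x for p, t, x in zip(preds, ys, xs)) / len(xs)
w -= lr * grad            # w moves toward the true slope 2.0
```

Repeating the update step drives `w` toward 2.0 and the MSE toward zero, which is the procedure the text refers to as backpropagation.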