namespace Eigen {

/** \page TutorialReductionsVisitorsBroadcasting Tutorial page 7 - Reductions, visitors and broadcasting
    \ingroup Tutorial

\li \b Previous: \ref TutorialLinearAlgebra
\li \b Next: \ref TutorialGeometry

This tutorial explains Eigen's reductions, visitors and broadcasting and how they are used with
\link MatrixBase matrices \endlink and \link ArrayBase arrays \endlink.

\b Table \b of \b contents
  - \ref TutorialReductionsVisitorsBroadcastingReductions
    - \ref TutorialReductionsVisitorsBroadcastingReductionsNorm
    - \ref TutorialReductionsVisitorsBroadcastingReductionsBool
    - \ref TutorialReductionsVisitorsBroadcastingReductionsUserdefined
  - \ref TutorialReductionsVisitorsBroadcastingVisitors
  - \ref TutorialReductionsVisitorsBroadcastingPartialReductions
    - \ref TutorialReductionsVisitorsBroadcastingPartialReductionsCombined
  - \ref TutorialReductionsVisitorsBroadcastingBroadcasting
    - \ref TutorialReductionsVisitorsBroadcastingBroadcastingCombined


\section TutorialReductionsVisitorsBroadcastingReductions Reductions
In Eigen, a reduction is a function taking a matrix or array and returning a single
scalar value. One of the most commonly used reductions is \link DenseBase::sum() .sum() \endlink,
which returns the sum of all the coefficients inside a given matrix or array.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include tut_arithmetic_redux_basic.cpp
</td>
<td>
\verbinclude tut_arithmetic_redux_basic.out
</td></tr></table>

The \em trace of a matrix, as returned by the function \c trace(), is the sum of the diagonal coefficients and can equivalently be computed as <tt>a.diagonal().sum()</tt>.

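For instance, a minimal sketch (the 2x2 matrix \c a here is just a hypothetical example) showing that both expressions give the same result:

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::Matrix2d a;
  a << 1, 2,
       3, 4;
  // Both lines compute the sum of the diagonal coefficients: 1 + 4 = 5.
  std::cout << "a.trace()          = " << a.trace() << std::endl;
  std::cout << "a.diagonal().sum() = " << a.diagonal().sum() << std::endl;
}
\endcode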

\subsection TutorialReductionsVisitorsBroadcastingReductionsNorm Norm computations

The (Euclidean a.k.a. \f$\ell^2\f$) squared norm of a vector can be obtained with \link MatrixBase::squaredNorm() squaredNorm() \endlink. It is equal to the dot product of the vector with itself, and equivalently to the sum of the squared absolute values of its coefficients.

Eigen also provides the \link MatrixBase::norm() norm() \endlink method, which returns the square root of \link MatrixBase::squaredNorm() squaredNorm() \endlink.

These operations can also operate on matrices; in that case, an n-by-p matrix is seen as a vector of size (n*p), so for example the \link MatrixBase::norm() norm() \endlink method returns the "Frobenius" or "Hilbert-Schmidt" norm. We refrain from speaking of the \f$\ell^2\f$ norm of a matrix because that can mean different things.

If you want other \f$\ell^p\f$ norms, use the \link MatrixBase::lpNorm() lpNorm<p>() \endlink method. The template parameter \a p can take the special value \a Infinity if you want the \f$\ell^\infty\f$ norm, which is the maximum of the absolute values of the coefficients.

The following example demonstrates these methods.

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_norm.out
</td></tr></table>

\subsection TutorialReductionsVisitorsBroadcastingReductionsBool Boolean reductions

The following reductions operate on boolean values:
  - \link DenseBase::all() all() \endlink returns \b true if all of the coefficients in a given Matrix or Array evaluate to \b true .
  - \link DenseBase::any() any() \endlink returns \b true if at least one of the coefficients in a given Matrix or Array evaluates to \b true .
  - \link DenseBase::count() count() \endlink returns the number of coefficients in a given Matrix or Array that evaluate to \b true.

These are typically used in conjunction with the coefficient-wise comparison and equality operators provided by Array. For instance, <tt>array > 0</tt> is an %Array of the same size as \c array , with \b true at those positions where the corresponding coefficient of \c array is positive. Thus, <tt>(array > 0).all()</tt> tests whether all coefficients of \c array are positive. This can be seen in the following example:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_reductions_bool.out
</td></tr></table>

\subsection TutorialReductionsVisitorsBroadcastingReductionsUserdefined User defined reductions

TODO

In the meantime you can have a look at the DenseBase::redux() function.

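For instance, \c redux() folds all the coefficients of a matrix or array with a user-supplied binary functor. A minimal sketch (the functor \c MaxAbsOp below is just a hypothetical example):

\code
#include <algorithm>
#include <cmath>
#include <iostream>
#include <Eigen/Dense>

// Hypothetical binary functor returning the larger absolute value of its two arguments.
struct MaxAbsOp {
  float operator()(float a, float b) const { return std::max(std::abs(a), std::abs(b)); }
};

int main()
{
  Eigen::MatrixXf m(2,2);
  m << 1, -5,
       3, -2;
  // redux() combines all coefficients pairwise with the given functor,
  // here computing the maximum absolute value, i.e. 5.
  std::cout << m.redux(MaxAbsOp()) << std::endl;
}
\endcode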
\section TutorialReductionsVisitorsBroadcastingVisitors Visitors
Visitors are useful when one wants to obtain the location of a coefficient inside
a Matrix or Array. The simplest examples are
\link MatrixBase::maxCoeff() maxCoeff(&x,&y) \endlink and
\link MatrixBase::minCoeff() minCoeff(&x,&y) \endlink, which can be used to find
the location of the greatest or smallest coefficient in a Matrix or
Array.

The arguments passed to a visitor are pointers to the variables where the
row and column position are to be stored. These variables should be of type
\link DenseBase::Index Index \endlink, as shown below:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_visitors.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_visitors.out
</td></tr></table>

Note that both functions also return the value of the minimum or maximum coefficient if needed,
as if it were a typical reduction operation.

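For instance, one can capture both the returned value and its location in a single call. A minimal sketch (the matrix \c m here is just a hypothetical example):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf m(2,2);
  m << 1, 2,
       3, 4;
  Eigen::MatrixXf::Index maxRow, maxCol;
  // The visitor writes the location into maxRow/maxCol and returns the value itself.
  float maxValue = m.maxCoeff(&maxRow, &maxCol);
  std::cout << "Max: " << maxValue
            << ", at: (" << maxRow << "," << maxCol << ")" << std::endl;
}
\endcode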
\section TutorialReductionsVisitorsBroadcastingPartialReductions Partial reductions
Partial reductions are reductions that can operate column- or row-wise on a Matrix or
Array, applying the reduction operation on each column or row and
returning a column or row-vector with the corresponding values. Partial reductions are applied
with \link DenseBase::colwise() colwise() \endlink or \link DenseBase::rowwise() rowwise() \endlink.

A simple example is obtaining the maximum of the elements
in each column in a given matrix, storing the result in a row-vector:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_colwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_colwise.out
</td></tr></table>

The same operation can be performed row-wise:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_rowwise.out
</td></tr></table>

<b>Note that column-wise operations return a 'row-vector', while row-wise operations
return a 'column-vector'.</b>

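To make the orientations explicit, here is a minimal sketch (with a hypothetical 2x3 matrix \c m) that stores both kinds of results:

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf m(2,3);
  m << 1, 2, 3,
       4, 5, 6;
  // One maximum per column: the result is a 1x3 row vector.
  Eigen::RowVectorXf colMax = m.colwise().maxCoeff();
  // One maximum per row: the result is a 2x1 column vector.
  Eigen::VectorXf rowMax = m.rowwise().maxCoeff();
  std::cout << colMax << std::endl << rowMax << std::endl;
}
\endcode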
\subsection TutorialReductionsVisitorsBroadcastingPartialReductionsCombined Combining partial reductions with other operations
It is also possible to use the result of a partial reduction to do further processing.
Here is another example that finds the column whose sum of elements is the maximum
within a matrix. With column-wise partial reductions this can be coded as:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_maxnorm.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_maxnorm.out
</td></tr></table>

The previous example applies the \link DenseBase::sum() sum() \endlink reduction to each column
through \link DenseBase::colwise() colwise() \endlink, obtaining a new matrix whose
size is 1x4.

Therefore, if
\f[
\mbox{m} = \begin{bmatrix} 1 & 2 & 6 & 9 \\
                    3 & 1 & 7 & 2 \end{bmatrix}
\f]

then

\f[
\mbox{m.colwise().sum()} = \begin{bmatrix} 4 & 3 & 13 & 11 \end{bmatrix}
\f]

The \link DenseBase::maxCoeff() maxCoeff() \endlink reduction is finally applied
to obtain the column index where the maximum sum is found,
which is the column index 2 (third column) in this case.

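Putting the steps together, a minimal self-contained sketch of this computation (along the lines of the example above) could look like:

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf m(2,4);
  m << 1, 2, 6, 9,
       3, 1, 7, 2;
  Eigen::MatrixXf::Index maxIndex;
  // Sum each column, then locate the column holding the largest sum.
  float maxSum = m.colwise().sum().maxCoeff(&maxIndex);
  std::cout << "Maximum sum at column " << maxIndex    // column index 2
            << ", with value " << maxSum << std::endl; // value 13
}
\endcode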

\section TutorialReductionsVisitorsBroadcastingBroadcasting Broadcasting
The concept behind broadcasting is similar to partial reductions, with the difference that broadcasting
constructs an expression where a vector (column or row) is interpreted as a matrix by replicating it in
one direction.

A simple example is to add a certain column-vector to each column in a matrix.
This can be accomplished with:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple.out
</td></tr></table>

We can interpret the instruction <tt>mat.colwise() += v</tt> in two equivalent ways. It adds the vector \c v
to every column of the matrix. Alternatively, it can be interpreted as repeating the vector \c v four times to
form a two-by-four matrix which is then added to \c mat:
\f[
\begin{bmatrix} 1 & 2 & 6 & 9 \\ 3 & 1 & 7 & 2 \end{bmatrix}
+ \begin{bmatrix} 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 2 & 6 & 9 \\ 4 & 2 & 8 & 3 \end{bmatrix}.
\f]
The operators <tt>-=</tt>, <tt>+</tt> and <tt>-</tt> can also be used column-wise and row-wise. On arrays, we
can also use the operators <tt>*=</tt>, <tt>/=</tt>, <tt>*</tt> and <tt>/</tt> to perform coefficient-wise
multiplication and division column-wise or row-wise. These operators are not available on matrices because it
is not clear what they would do. If you want to multiply column 0 of a matrix \c mat with \c v(0), column 1 with
\c v(1), and so on, then use <tt>mat = mat * v.asDiagonal()</tt>.

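As a minimal sketch of that last point (the matrix \c mat and vector \c v here are hypothetical examples):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf mat(2,3);
  mat << 1, 2, 3,
         4, 5, 6;
  Eigen::VectorXf v(3);
  v << 10, 100, 1000;
  // Scales column j of mat by v(j), i.e. multiplies mat by the diagonal matrix built from v.
  mat = mat * v.asDiagonal();
  std::cout << mat << std::endl;
}
\endcode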
It is important to point out that the vector to be added column-wise or row-wise must be of type Vector,
and cannot be a Matrix. If this is not met, you will get a compile-time error. This also means that
broadcasting operations can only be applied with an object of type Vector when operating on a Matrix.
The same applies for the Array class, where the equivalent of VectorXf is ArrayXf. As always, you should
not mix arrays and matrices in the same expression.

To perform the same operation row-wise we can do:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_simple_rowwise.out
</td></tr></table>

\subsection TutorialReductionsVisitorsBroadcastingBroadcastingCombined Combining broadcasting with other operations
Broadcasting can also be combined with other operations, such as Matrix or Array operations,
reductions and partial reductions.

Now that broadcasting, reductions and partial reductions have been introduced, we can dive into a more advanced example that finds
the nearest neighbour of a vector <tt>v</tt> within the columns of a matrix <tt>m</tt>. This example uses the Euclidean distance,
computing the squared Euclidean distance with the partial reduction \link MatrixBase::squaredNorm() squaredNorm() \endlink:

<table class="example">
<tr><th>Example:</th><th>Output:</th></tr>
<tr><td>
\include Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.cpp
</td>
<td>
\verbinclude Tutorial_ReductionsVisitorsBroadcasting_broadcast_1nn.out
</td></tr></table>

The line that does the job is
\code
  (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
\endcode

We will go step by step to understand what is happening:

  - <tt>m.colwise() - v</tt> is a broadcasting operation, subtracting <tt>v</tt> from each column in <tt>m</tt>. The result of this operation
is a new matrix whose size is the same as matrix <tt>m</tt>: \f[
  \mbox{m.colwise() - v} =
  \begin{bmatrix}
    -1 & 21 & 4 & 7 \\
     0 & 8  & 4 & -1
  \end{bmatrix}
\f]

  - <tt>(m.colwise() - v).colwise().squaredNorm()</tt> is a partial reduction, computing the squared norm column-wise. The result of
this operation is a row-vector where each coefficient is the squared Euclidean distance between each column in <tt>m</tt> and <tt>v</tt>: \f[
  \mbox{(m.colwise() - v).colwise().squaredNorm()} =
  \begin{bmatrix}
     1 & 505 & 32 & 50
  \end{bmatrix}
\f]

  - Finally, <tt>minCoeff(&index)</tt> is used to obtain the index of the column in <tt>m</tt> that is closest to <tt>v</tt> in terms of Euclidean
distance; a self-contained sketch of the whole computation is given below.

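For reference, a minimal self-contained sketch of the computation, together with one way to use the resulting index (the particular values of \c m and \c v below are hypothetical, chosen to be consistent with the matrices shown in the steps above):

\code
#include <iostream>
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXf m(2,4);
  m << 1, 23, 6, 9,
       3, 11, 7, 2;
  Eigen::VectorXf v(2);
  v << 2, 3;
  Eigen::MatrixXf::Index index;
  // index receives the column of m whose squared distance to v is smallest (column 0 here).
  (m.colwise() - v).colwise().squaredNorm().minCoeff(&index);
  std::cout << "Nearest neighbour is column " << index << ":" << std::endl
            << m.col(index) << std::endl;
}
\endcode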
\li \b Next: \ref TutorialGeometry

*/

}
