\documentclass[reqno]{amsart} \usepackage{graphicx} \AtBeginDocument{{\noindent\small {\em Electronic Journal of Differential Equations}, Vol. 2005(2005), No. 135, pp. 1--10.\newline ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu \newline ftp ejde.math.txstate.edu (login: ftp)} \thanks{\copyright 2005 Texas State University - San Marcos.} \vspace{9mm}} \begin{document} \title[\hfilneg EJDE-2005/135\hfil Controllability of time-varying CNN's] {Controllability of time-varying cellular neural networks} \author[W. Aziz, T. Lara\hfil EJDE-2005/135\hfilneg] {Wadie Aziz, Teodoro Lara} % in alphabetical order \address{Wadie Aziz \hfill\break Departamento de F\'{i}sica y Matem\'{a}ticas, N\'{u}cleo Universitario ``Rafael Rangel", Universidad de los Andes, Trujillo, Venezuela} \email{wadie@ula.ve} \address{Teodoro Lara \hfill\break Departamento de F\'{i}sica y Matem\'{a}ticas, N\'{u}cleo Universitario ``Rafael Rangel", Universidad de los Andes, Trujillo, Venezuela} \email{teodorolara@cantv.net} \date{} \thanks{Submitted April 25, 2005. Published November 30, 2005.} \subjclass[2000]{37N25, 34K20, 68T05} \keywords{Cellular neural network; circulant matrix; sun product}
\begin{abstract}
In this work, we consider the model of Cellular Neural Network (CNN) introduced by Chua and Yang in 1988, but with cloning templates that are $\omega$-periodic in time. By imposing periodic boundary conditions, the matrices involved in the system become circulant and $\omega$-periodic. We show some results on the controllability of the linear model using a theorem by Brunovsky for linear $\omega$-periodic systems. We also use this approach in image detection, specifically detection of the foreground, background and contours of figures in different scales of grey.
\end{abstract}
\maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{definition}[theorem]{Definition} \newtheorem{remark}[theorem]{Remark} \allowdisplaybreaks

\section{\bf Introduction}
Since its invention in 1988 \cite{paper4:chu:yang, paper5:chuyang2}, the Cellular Neural Network (CNN) paradigm has evolved to cover a broad class of problems and frameworks. It provides a powerful analogue non-linear computing structure for a variety of array computations. Array computations can be defined as the parallel execution of complex operations on a large number of processors placed on a geometrically regular grid. An extension of the CNN paradigm is the CNN universal machine, in which distributed, global memories and logic functions support the execution of complex analogical algorithms. The key feature of the CNN architecture is its high operational speed (\cite{paper01:schul}). Several variations of the original CNN have been proposed and used for black and white image processing, edge detection, noise removal, horizontal or vertical line filtering, hole filling, object shadowing and others. An analytical method, based on the comparison principle for differential equations, has been proposed for synthesizing CNNs for simple transformations on bipolar images (\cite{paper02:zampa}). In \cite{paper11:ros:chu} the delay model is introduced and used in image detection; with the introduction of delay, a combination of connected component detector and vertical line detector is achieved in one output.
Results about the dynamics and stability of CNNs (non-linear and delay cases) are shown in \cite{paper13:chai}. Linear delayed and symmetric CNNs are shown to be stable if the delay is suitably chosen \cite{paper6:civ}. Gilli \cite{paper7:gilli} investigates the stability of delayed CNNs by means of a Liapunov functional. Because the CNN model (in any form) involves a control (or input), it makes sense to wonder about its controllability. In \cite{paper8:lara} the controllability of the linear model of CNN (constant cloning templates) with periodic boundary conditions is studied. The introduction of periodic boundary conditions makes the matrices involved in the general equation circulant, which is a great advantage in the study of controllability. Non-autonomous CNNs appear when the interactions among cells of the array are allowed to depend on time; that is, the cloning templates $\widetilde{A}$ and $\widetilde{B}$ are now time-dependent \cite{paper8:lara}. From the point of view of the model, passing to a non-autonomous system is natural: it means that as time goes on, the interactions among cells change. This type of model may be used to achieve particular goals depending on time. The model we consider in this paper is basically the one given in \cite{paper8:lara}; i.e., $\widetilde{A}$ and $\widetilde{B}$ are $3 \times 3$ matrices (equivalently, the neighborhood radius is $1$), and we impose periodic boundary conditions (see \cite{paper4:chu:yang, paper5:chuyang2,paper11:ros:chu}). Now $\widetilde{A} \equiv \widetilde{A}(t)$, $\widetilde{B} \equiv \widetilde{B}(t)$, and of course the $MN \times MN$ matrices $A$ and $B$ are now $A(t)$ and $B(t)$, respectively. Once the problem is set, we shall give conditions under which it is possible to guarantee controllability. Moreover, we generalize the model given in \cite{paper8:lara}, and the controllability condition given there is obtained in a more general form.

\section{Preliminaries}\label{prelimi}
In this section we set up the problem studied in this paper and present the tools that will be used in proving the main results of the next section; these include, among other things, circulant and Vandermonde matrices. As in \cite{paper11:ros:chu}, we consider an $M \times N$ CNN having $MN$ cells arranged in $M$ rows and $N$ columns; the cell in position $ij$ will be denoted by $c_{ij}$.

\begin{definition} \label{def1} \rm The $1$-neighborhood of a cell $c_{ij}$ in a CNN is defined as \begin{equation}\label{arantxa1:eq:vecindad} N^{ij}=\big\{c_{i_1 j_1}: \max\{|i-i_{1}|;|j-j_{1}|\} \leq 1 ; \; 1\leq i_{1} \leq M, \; 1 \leq j_{1} \leq N\big\}. \end{equation} \end{definition}

The dynamics of a CNN has both output feedback and input control mechanisms. The output feedback depends on the interactive parameters $a_{ij}(t)$, continuous functions of time $t \geq 0$, and the input control on the parameters $b_{ij}(t)$, which are also continuous; both families are $\omega$-periodic, $\omega > 0$. They are represented as $3 \times 3$ matrices (time-varying cloning templates), called the feedback and the control operators, respectively, and given as \begin{equation*} \widetilde{A}(t)=(a_{ij}(t))_{3 \times 3}, \quad \widetilde{B}(t)=(b_{ij}(t))_{3 \times 3}, \quad t \geq 0.
\end{equation*} If we consider $v \in \mathbf{R}^{MN}$ as the state voltage vector (the voltages across the array), then $v=(v_{11}, \dots, v_{MN})^T$ ($T$ denotes transpose), $u=(u_{11}, \dots,u_{MN})^T \in \mathbf{R}^{MN}$ is the input (control), and $y=G(v)$ is the output function, where \begin{equation*} G:\mathbf{R}^{MN}\to \mathbf{R}^{MN},\quad G(v)=(g(v_{11}),g(v_{12}), \dots, g(v_{MN}))^{T}, \end{equation*} and $g:\mathbf{R}\to \mathbf{R}$ is assumed to be differentiable. Also, we impose periodic boundary conditions on the state and the input, \begin{equation}\label{cond:perio} \begin{gathered} \left. \begin{gathered} v_{i_{1}k} = v_{i_{1}{(N+k)}} \\ u_{i_{1}k} = u_{i_{1}{(N+k)}} \end{gathered}\right\} \quad i_{1}=0, \dots, M+1, \quad k = 0,1 \\ \left. \begin{gathered} v_{ki_{2}} = v_{{(M+k)}i_{2}} \\ u_{ki_{2}} = u_{{(M+k)}i_{2}} \end{gathered}\right\} \quad i_{2}=0,\dots, N+1, \quad k = 0,1. \end{gathered} \end{equation} Then the whole equation can be written as \begin{equation}\label{eq:inicial} \dot{v}=-v + A(t)G(v) + B(t)u + I, \quad t \geq 0, \end{equation} where $I \in \mathbf{R}^{MN}$ is a constant vector; after a translation it can be put as \begin{equation}\label{eq:inicial:trans} \dot{v}=-v + A(t)G(v) + B(t)u, \end{equation} with \begin{gather*} A(t)G(v) = (\widetilde{A}(t) \odot\widehat{G}(v_{11}),\dots , \widetilde{A}(t) \odot \widehat{G}(v_{MN}))^{T}, \\ B(t)u = (\widetilde{B}(t) \odot \widehat{u}_{11}, \dots , \widetilde{B}(t) \odot \widehat{u}_{MN})^{T}, \\ \widehat{G}(v_{ij}) = \begin{pmatrix} g(v_{i-1,j-1}) & g(v_{i-1,j}) & g(v_{i-1,j+1}) \\ g(v_{i,j-1}) & g(v_{i,j}) & g(v_{i,j+1}) \\ g(v_{i+1,j-1}) & g(v_{i+1,j}) & g(v_{i+1,j+1}) \end{pmatrix}; \end{gather*} the $3 \times 3$ matrices $\widehat{u}_{ij}$, built from the components of the control $u$, are defined analogously. Note that the above matrices are well defined because of the periodic boundary conditions (\ref{cond:perio}). The $\odot$-product (sun product) of matrices is given as \begin{equation*} K \odot L = \sum_{i,j} k_{ij}l_{ij}, \end{equation*} where $K=(k_{ij})$ and $L=(l_{ij})$ are matrices of the same size. As in \cite{paper8:lara}, $A(t)$ and $B(t)$ are $MN \times MN$ block circulant matrices, and each block in turn is also circulant. Following the same procedure as in \cite{paper8:lara}, we find that \begin{gather*} A(t) = \mathop{\rm circ}(A_{1}(t), A_{2}(t),0, \dots, 0, A_{3}(t)), \\ B(t) = \mathop{\rm circ}(B_{1}(t), B_{2}(t),0, \dots, 0, B_{3}(t)), \end{gather*} where $A_{i}(t)$, $B_{i}(t)$, $i=1,2,3$, are $N \times N$ circulant matrices given as \begin{gather*} A_{1}(t) = \mathop{\rm circ}(a_{22}(t), a_{23}(t),0, \dots, 0, a_{21}(t)) \\ A_{2}(t) = \mathop{\rm circ}(a_{32}(t), a_{33}(t),0, \dots, 0, a_{31}(t)) \\ A_{3}(t) = \mathop{\rm circ}(a_{12}(t), a_{13}(t),0, \dots, 0, a_{11}(t)) \\ B_{1}(t) = \mathop{\rm circ}(b_{22}(t), b_{23}(t),0, \dots, 0, b_{21}(t)) \\ B_{2}(t) = \mathop{\rm circ}(b_{32}(t), b_{33}(t),0, \dots, 0, b_{31}(t)) \\ B_{3}(t) = \mathop{\rm circ}(b_{12}(t), b_{13}(t),0, \dots, 0, b_{11}(t)). \end{gather*} Because $\widetilde{A}(t)$ and $\widetilde{B}(t)$ are $\omega$-periodic, so are the matrices $A_i(t)$, $B_i(t)$ and, consequently, $A(t)$ and $B(t)$. In this paper we study the case $G(v)=\alpha v$, $\alpha>0$; that is, the linear case. Then (\ref{eq:inicial}) and, consequently, (\ref{eq:inicial:trans}) may be written as \begin{gather*} \dot{v}=-v + \alpha A(t)v + B(t)u + I,\\ \dot{v}=-v + \alpha A(t)v + B(t)u, \end{gather*} respectively. Setting $\widehat{A}(t)=\alpha A(t)-I_{n}$, where $I_{n}$ is the identity matrix of order $n=MN$, and $\widehat{B}(t)=B(t)$, we rewrite the two foregoing equations, respectively, as \begin{gather}\label{eq:init:defini} \dot{v}=\widehat{A}(t)v + \widehat{B}(t)u + I, \\ \label{eq:init:defini2} \dot{v}=\widehat{A}(t)v + \widehat{B}(t)u.
\end{gather}
We can consider an even more general case than (\ref{eq:init:defini}), namely $I=I(t)$, which means that the independent source depends on time ($I(t)$ is assumed to be continuous). That may be the case in the design of a particular device. Actually, we are able to prove the following result.

\begin{lemma} \label{lem2.2} For $\widehat{A}(t)$ and $\widehat{B}(t)$ as in (\ref{eq:init:defini}) and $I=I(t)$ a continuous function of $t$, the equation \begin{equation*} \dot{v}=\widehat{A}(t)v + \widehat{B}(t)u + I(t) \end{equation*} is equivalent to (\ref{eq:init:defini2}). \end{lemma}

\begin{proof} First we implement the change of variable $v=P(t)y$ (Floquet transformation), where $P(t)$ is $\omega$-periodic and invertible and such that \begin{equation*} \Phi(t)=P(t)e^{Ct} \end{equation*} is the fundamental matrix of \begin{equation*} \dot{x}=\widehat{A}(t)x, \end{equation*} $C$ being a constant $n \times n$ matrix (\cite{paper04:hale}). Since $\dot{\Phi}=\widehat{A}(t)\Phi$ implies $\dot{P}=\widehat{A}(t)P - PC$, substituting $v=P(t)y$ we arrive at the equation \begin{equation}\label{tran:floquet} \dot{y}=Cy + P^{-1}(t)\widehat{B}(t)u + P^{-1}(t)I(t), \end{equation} and we obtain an equation of the form (\ref{eq:init:defini2}) by making the change of variable \begin{equation*} y= z + \int_{0}^{t}e^{C(t-s)}P^{-1}(s)I(s)\,ds \end{equation*} in (\ref{tran:floquet}). \end{proof}

\begin{remark} \label{rmk2.3} \rm We have shown that even when the independent current source is time-dependent, the resulting model can be handled in the same way as (\ref{eq:init:defini2}). \end{remark}

\begin{definition} \label{def2.4} \rm The $n \times n$ matrix \begin{equation*} V_{n}= \begin{pmatrix} 1 & 1 & \dots & 1 \\ \alpha_1 & \alpha_2 & \dots & \alpha_n \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_1^{n-1} & \alpha_2^{n-1} & \dots & \alpha_n^{n-1} \end{pmatrix} \equiv V_{n}(\alpha_{1},\dots, \alpha_{n}) \end{equation*} is called the Vandermonde matrix of order $n$ associated with $\alpha_{1},\dots,\alpha_{n}$. \end{definition}

\begin{proposition} \label{prop2.5} \label{cir:autoval} For $C(t)=\mathop{\rm circ}(c_{1}(t),c_{2}(t),\dots,c_{n}(t))$, an $n \times n$ circulant matrix depending on time, \begin{equation*} \det(C(t))=\prod_{k=1}^{n}h(\varepsilon_{k})(t), \quad h(x)(t)=\sum_{i=1}^{n}c_{i}(t)x^{i-1}, \end{equation*} where, for $k=1, \dots,n$, the $\varepsilon_{k}$ are the distinct $n$th roots of unity, that is, \[ \varepsilon_{k}=\exp\big[\frac{2\pi(k-1)i}{n}\big]. \] \end{proposition}

\begin{proof} We consider $V_{n} \equiv V_{n}(\varepsilon_{1}, \dots, \varepsilon_{n})$, which is a nonsingular matrix; it is easy to show that \begin{equation*} C(t)V_{n}=V_{n}\mathop{\rm diag} [h(\varepsilon_{k})(t) ]_{k=1}^{n}, \end{equation*} and the claim follows by taking determinants on both sides. \end{proof}

\begin{corollary}\label{circ:autovectores} For $C(t)=\mathop{\rm circ}(c_{1}(t),c_{2}(t),\dots,c_{n}(t))$ as before, its eigenvalues and corresponding eigenvectors are given by \begin{equation*} \lambda_{k}(t)=h(\varepsilon_{k})(t),\quad x_{k}=(1,\varepsilon_{k}, \dots, \varepsilon_{k}^{n-1})^{T}, \quad k=1, \dots, n. \end{equation*} \end{corollary}

The proof of the above corollary follows from the fact that, as shown in the proof of Proposition \ref{cir:autoval}, \begin{equation*} V_{n}^{-1}C(t)V_{n}=\mathop{\rm diag}[h(\varepsilon_{k})(t)]_{k=1}^{n}. \end{equation*}
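For instance, and purely as an illustration of the preceding corollary, for $n=3$ and $C(t)=\mathop{\rm circ}(c_{1}(t),c_{2}(t),c_{3}(t))$ the eigenvalues are \begin{equation*} \lambda_{k}(t)=c_{1}(t)+c_{2}(t)\varepsilon_{k}+c_{3}(t)\varepsilon_{k}^{2}, \quad \varepsilon_{k}=\exp\big[\frac{2\pi(k-1)i}{3}\big], \; k=1,2,3; \end{equation*} in particular $\lambda_{1}(t)=c_{1}(t)+c_{2}(t)+c_{3}(t)$, and the eigenvectors $(1,\varepsilon_{k},\varepsilon_{k}^{2})^{T}$ do not depend on $t$.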
Let \begin{gather*} p_{1}(x)(t) = a_{22}(t) + a_{23}(t)x + a_{21}(t)x^{N-1}, \\ p_{2}(x)(t) = a_{32}(t) + a_{33}(t)x + a_{31}(t)x^{N-1}, \\ p_{3}(x)(t) = a_{12}(t) + a_{13}(t)x + a_{11}(t)x^{N-1}, \\ q_{1}(x)(t) = b_{22}(t) + b_{23}(t)x + b_{21}(t)x^{N-1}, \\ q_{2}(x)(t) = b_{32}(t) + b_{33}(t)x + b_{31}(t)x^{N-1}, \\ q_{3}(x)(t) = b_{12}(t) + b_{13}(t)x + b_{11}(t)x^{N-1}; \end{gather*} now, using Proposition \ref{cir:autoval}, Corollary \ref{circ:autovectores}, and the forms of the matrices $A_{i}(t)$, $B_{i}(t)$, $i=1,2,3$, we get that their corresponding eigenvalues are \begin{equation*} \lambda_{j}^{i}(t)=p_{i}(\varepsilon_{j})(t), \quad \eta_{j}^{i}(t)=q_{i}(\varepsilon_{j})(t), \quad i=1,2,3, \; j=1,\dots,N, \end{equation*} respectively.

\begin{lemma}\label{lem:diago} There are block diagonal matrices $D(t)$, $E(t)$ of size $MN \times MN$ such that \begin{equation*} A(t)=(V_{M} \otimes V_{N})D(t)(V_{M}\otimes V_{N})^{-1}, \quad B(t)=(V_{M} \otimes V_{N})E(t)(V_{M}\otimes V_{N})^{-1}, \end{equation*} with \begin{equation*} V_{N} \equiv V_{N}(\varepsilon_{1}, \dots, \varepsilon_{N}), \quad V_{M} \equiv V_{M}(\omega_{1}, \dots, \omega_{M}) \end{equation*} Vandermonde matrices of order $N$ and $M$, respectively; $\varepsilon_{1}, \dots, \varepsilon_{N}$ and $\omega_{1}, \dots, \omega_{M}$ are the corresponding distinct $N$th and $M$th roots of unity. \end{lemma}

\begin{remark} \label{rmk2.8} \rm $V_{M} \otimes V_{N}$ denotes the Kronecker or tensor product of the matrices $V_{M}$ and $V_{N}$. \end{remark}

\begin{proof} We prove the claim for $A(t)$; the same procedure may be used for $B(t)$. It can be shown that \begin{align*} A(t) &= I_{M} \otimes A_{1}(t) + \Pi_{M} \otimes A_{2}(t) + \Pi_{M}^{M-1} \otimes A_{3}(t) \\ &= (V_{M} \otimes V_{N})D(t)(V_{M} \otimes V_{N})^{-1}, \end{align*} where $I_{M}$ is the identity matrix of size $M$, \begin{align*} \Pi_{M} & = V_{M}\mathop{\rm diag}[\omega_{k}]_{k=1}^{M}V_{M}^{-1}, \\ D(t) & = \mathop{\rm diag}[D_{k}(t)]_{k=1}^{M}, \quad \hbox{and, for } 1 \leq k \leq M, \\ D_{k}(t) & = \mathop{\rm diag}[p_{1}(\varepsilon_{j})(t)]_{j=1}^{N} + \omega_{k} \mathop{\rm diag}[p_{2}(\varepsilon_{j})(t)]_{j=1}^{N} + \omega_{k}^{M-1} \mathop{\rm diag}[p_{3}(\varepsilon_{j})(t)]_{j=1}^{N}\\ & = \mathop{\rm diag}[p_{1}(\varepsilon_{j})(t) + \omega_{k} p_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1} p_{3}(\varepsilon_{j})(t)]_{j=1}^{N}. \end{align*} In the case of $B(t)$, we have \begin{align*} E(t) & = \mathop{\rm diag}[E_{k}(t)]_{k=1}^{M}, \quad \hbox{and, for } 1 \leq k \leq M, \\ E_{k}(t) & = \mathop{\rm diag}[q_{1}(\varepsilon_{j})(t) + \omega_{k}q_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t)]_{j=1}^{N}. \end{align*} \end{proof}

\begin{remark} \label{rmk2.9} \rm Note that the results shown so far are independent of $\omega$-periodicity; that is, they hold for time-dependent circulant matrices whether or not these are $\omega$-periodic. \end{remark}

\section{Main Results}
In this section we state and prove the main results of the paper, namely those concerning the controllability of system (\ref{eq:init:defini2}). We assume that the matrices $A(t)$ and $B(t)$ appearing in (\ref{eq:init:defini2}) are continuous, \begin{equation*} A:[0,+\infty)\to \mathbf{R}^{n \times n}, \quad B:[0,+\infty)\to \mathbf{R}^{n \times n}, \quad n=MN, \end{equation*} and, of course, $\omega$-periodic as stated before.
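To fix ideas, and purely as an illustration of the quantities introduced in Section \ref{prelimi}, we record what the diagonal blocks look like for a small array.

\begin{remark} \rm For $M=N=3$ one has $\varepsilon_{j}=\omega_{j}=\exp[2\pi(j-1)i/3]$, $j=1,2,3$, the blocks of $B(t)$ are the $3 \times 3$ circulant matrices $B_{1}(t)=\mathop{\rm circ}(b_{22}(t),b_{23}(t),b_{21}(t))$, $B_{2}(t)=\mathop{\rm circ}(b_{32}(t),b_{33}(t),b_{31}(t))$, $B_{3}(t)=\mathop{\rm circ}(b_{12}(t),b_{13}(t),b_{11}(t))$, and the diagonal entries of the blocks $E_{k}(t)$ are \begin{equation*} q_{1}(\varepsilon_{j})(t)+\omega_{k}q_{2}(\varepsilon_{j})(t)+\omega_{k}^{2}q_{3}(\varepsilon_{j})(t), \quad 1 \leq j,k \leq 3, \end{equation*} with $q_{1},q_{2},q_{3}$ the polynomials of Section \ref{prelimi}. These nine quantities are precisely the ones whose non-vanishing appears in Theorem \ref{teo:main} below. \end{remark}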
\begin{definition} \label{def3.1} \rm System (\ref{eq:init:defini2}) is controllable in $[0,T]$ if for any $v_{0}, v_{1} \in \mathbf{R}^{n}$ there is $u \in C([0,T],\mathbf{R}^{n})$ such that the solution $v(t)$ corresponding to the control $u$ satisfies \begin{equation*} v(0)=v_{0} \quad \hbox{and}\quad v(T)=v_{1}. \end{equation*} \end{definition}

Our first result extends the results given in \cite{paper8:lara}, which concern the constant case, to the periodic case. Here we shall assume that $G(v)= \alpha v$, $\alpha > 0$.

\begin{lemma}\label{matrix:trans} The matrix $Y(t)$ given as \begin{equation*} Y(t)=\exp\Big[\int_{t_{0}}^{t}(\alpha A(s)-I_{n})ds\Big], \quad t \geq 0, \end{equation*} where $t_{0} \geq 0$ is a fixed number and $I_{n}$ is the identity matrix of order $n$, is the fundamental matrix of the system \begin{equation*} \dot{Y}=(\alpha A(t) - I_{n})Y. \end{equation*} \end{lemma}

This follows because, by Lemma \ref{lem:diago}, the matrices $\alpha A(s)-I_{n}$, $s \geq 0$, are all diagonalized by $V_{M} \otimes V_{N}$ and therefore commute with one another.

The following proposition, from \cite[Prop. 3.1]{paper1:brun}, will be used in the proof of our next theorem.

\begin{proposition} Let $H(t)$ and $K(t)$ be $n \times n$ matrices, $\omega$-periodic and integrable over $[0,\omega]$. Then the system \begin{equation*} \dot{X}=H(t)X + K(t)u \end{equation*} is controllable in $[0,\omega]$ if and only if the rows of the matrix function $Z^{-1}(t)K(t)$, $t \in [0,n \omega]$, are linearly independent, where $Z(t)$ is the fundamental matrix of the system \begin{equation*} \dot{Z}=H(t)Z. \end{equation*} \end{proposition}

\begin{theorem}\label{teo:main} Let $A(t)$ and $B(t)$ be $\omega$-periodic matrices which are integrable over $[0,\omega]$. Then the system (\ref{eq:inicial:trans}) with $G(v)=\alpha v$, $\alpha > 0$, is controllable in $[0,\omega]$ if and only if \begin{equation*} q_{1}(\varepsilon_{j})(t) + \omega_{k}q_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t) \neq 0 \end{equation*} for every $t \in [0,n \omega]$ and all $1 \leq j \leq N$, $1 \leq k \leq M$, where $n=MN$ and $q_{1},q_{2},q_{3}$ are the polynomials given in Section \ref{prelimi}. \end{theorem}

\begin{proof} We use the foregoing proposition: system (\ref{eq:inicial:trans}) is controllable if and only if \begin{equation*} \mathop{\rm rank}[Y^{-1}(t)B(t)]=n \quad \hbox{for all } t \in [0,n \omega], \end{equation*} where $Y(t)$ is the matrix given by Lemma \ref{matrix:trans} ($t_{0}=0$). Writing $\widetilde{D}_{k}(s)=\alpha D_{k}(s)-I_{N}$, $1 \leq k \leq M$, for the diagonal blocks of $\alpha D(s)-I_{n}$, we have \begin{align*} Y^{-1}(t) & = \exp\Big[- \int_{t_{0}}^{t}(\alpha A(s) - I_{n})ds\Big] \\ & = (V_{M} \otimes V_{N}) \exp\big\{ \mathop{\rm diag}\Big[- \int_{t_{0}}^{t} \widetilde{D}_{k}(s)ds\Big]_{k=1}^{M}\big\}(V_{M} \otimes V_{N})^{-1}. \end{align*} Therefore, \begin{align*} &Y^{-1}(t)B(t)\\ &= (V_{M} \otimes V_{N})\exp\big\{\mathop{\rm diag}\Big[- \int_{t_{0}}^{t}\widetilde{D}_{k}(s)ds\Big]_{k=1}^{M}\big\} \mathop{\rm diag}[E_{k}(t)]_{k=1}^{M}(V_{M} \otimes V_{N})^{-1}; \end{align*} hence, since the exponential factor is invertible, \begin{equation*} \mathop{\rm rank}[Y^{-1}(t)B(t)]=n, \quad t \in [0,n \omega], \end{equation*} if and only if \begin{equation*} \mathop{\rm rank}[\mathop{\rm diag}[E_{k}(t)]_{k=1}^{M}]=n, \quad t \in [0,n \omega], \end{equation*} if and only if (by using the form of $E_{k}(t)$ given in Section \ref{prelimi}) \[ q_{1}(\varepsilon_{j})(t) + \omega_{k}q_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t) \neq 0 \] for $t \in [0,n\omega]$, $1 \leq j \leq N$, $1 \leq k \leq M$. \end{proof}
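As a simple illustration of how the criterion of Theorem \ref{teo:main} can be used, observe that $|\varepsilon_{j}|=|\omega_{k}|=1$, so each of the nine entries $b_{il}(t)$ of the control template appears in $q_{1}(\varepsilon_{j})(t) + \omega_{k}q_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t)$ multiplied by a complex number of modulus one; by the triangle inequality, \begin{equation*} \big|q_{1}(\varepsilon_{j})(t) + \omega_{k}q_{2}(\varepsilon_{j})(t) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t)\big| \geq |b_{22}(t)| - \sum_{(i,l)\neq(2,2)}|b_{il}(t)|. \end{equation*} Hence, if the central entry of $\widetilde{B}(t)$ dominates, in the sense that $|b_{22}(t)| > \sum_{(i,l)\neq(2,2)}|b_{il}(t)|$ for every $t \geq 0$, then (\ref{eq:inicial:trans}) is controllable in $[0,\omega]$.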
\begin{corollary} \label{coro3.5} Under the hypotheses of Theorem \ref{teo:main}, system (\ref{eq:init:defini2}) is uncontrollable if there is $t^{*} \in [0,n \omega]$ such that \begin{equation*} q_{1}(\varepsilon_{j})(t^{*}) + \omega_{k}q_{2}(\varepsilon_{j})(t^{*}) + \omega_{k}^{M-1}q_{3}(\varepsilon_{j})(t^{*}) = 0 \end{equation*} for some $1 \leq j \leq N$, $1 \leq k \leq M$. \end{corollary}

\section{Numerical Simulations}
In this section we use our CNN model in image detection; $\widetilde{A}$ and $\widetilde{B}$ are taken as \begin{equation*} \widetilde{A}= \begin{pmatrix} \cos(t) & \sin(t) & \cos(t) \\ \sin(t) & -20\cos(t) & \sin(t) \\ \cos(t) & \sin(t) & \cos(t) \end{pmatrix}, \quad \widetilde{B}=\begin{pmatrix} \sin(t) & \cos(t) & \sin(t) \\ \cos(t) & -20\sin(t) & \cos(t) \\ \sin(t) & \cos(t) & \sin(t) \end{pmatrix}, \end{equation*} and $I=- \frac{3}{1000 + \cos(t)}$, where $t\in [0,2\pi]$. First we consider figure \ref{camera1}, with a cameraman image as input, and some iterations. The size of the array is $320 \times 314$. For different values of $t$ and three iterations ($z=3$), we can see the contours and a kind of periodicity in the output images, which is due to the periodicity of the matrices involved.

\begin{figure}[htb] \includegraphics[width=0.7\textwidth]{fig1} % segundo \caption{Input and some iterations by a $330 \times 314$ matrix} \label{camera1} \end{figure}

Note that the main features of the input are preserved; of course, along the whole iteration process different levels of grey are maintained. The similarity between the output images for $t=0$ and $t=2 \pi$, for instance, is remarkable. It is important to mention that we have taken into account not only notable angles, but also angles such as $t=7\pi/9$ and $t=4\pi/3$.

\begin{figure}[htb] \includegraphics[width=0.7\textwidth]{fig2} % quinto \caption{Input and some iterations by a $330 \times 314$ matrix} \label{camera2} \end{figure}

In figure \ref{camera2} we have the same input and the same angles in the different iterations; we change the number of iterations, now $z=5$. When we observe the corresponding output, we notice the striking similarity between the cases $t=0$ and $t=2 \pi$, in the same fashion as in figure \ref{camera1}.

\begin{figure}[htb] \includegraphics[width=0.7 \textwidth]{fig3} % segundo4 \caption{Input and some iterations by a $25 \times 35$ matrix} \label{ideograma} \end{figure}

Our final example, figure \ref{ideograma}, is a Chinese character on a $25 \times 35$ array; we present the input image and some iterations with different angles, the number of iterations being $z=3$. Again we can observe the extraction of particular features along the iteration process, in a similar fashion to the previous input.

\begin{thebibliography}{10}

\bibitem{paper1:brun} P.~Brunovsky, \emph{Controllability and linear closed--loop controls in linear periodic systems}, J. Differential Equations \textbf{6} (1969), no.~6, 296--313.

\bibitem{paper2:churos} L.~Chua and T.~Roska, \emph{Stability of a class of nonreciprocal cellular neural networks}, IEEE Trans. Circuits Syst. \textbf{37} (1990), 1520--1527.

\bibitem{paper3:churos2} \bysame, \emph{The CNN paradigm}, IEEE Trans. Circuits Syst. I: Fundamental Theory Appl. \textbf{40} (1993), no.~3, 147--156.

\bibitem{paper4:chu:yang} L.~Chua and L.~Yang, \emph{Cellular neural networks: Applications}, IEEE Trans. Circuits Syst. \textbf{35} (1988), 1273--1290.

\bibitem{paper5:chuyang2} \bysame, \emph{Cellular neural networks: Theory}, IEEE Trans. Circuits Syst.
\textbf{35} (1988), 1257--1271.

\bibitem{paper01:schul} I.~Szatm\'ari et~al., \emph{Morphology and autowave metric on CNN applied to bubble--debris classification}, IEEE Transactions on Neural Networks \textbf{11} (2000), no.~5, 1385--1393.

\bibitem{paper7:gilli} M.~Gilli, \emph{Stability of cellular neural networks with nonpositive templates and nonmonotonic output functions}, IEEE Trans. Circuits Syst. I: Fundamental Theory Appl. \textbf{41} (1994), no.~8, 518--528.

\bibitem{paper04:hale} J.~Hale, \emph{Ordinary differential equations}, Robert E. Krieger Publishing Company, Malabar, Florida, 1980.

\bibitem{paper14:wanganwei} Q.~Gan, J.~Wang, and Y.~Wei, \emph{Stability of CNN with opposite--sign templates and nonunity gain output functions}, IEEE Trans. Circuits Syst. I: Fundamental Theory Appl. \textbf{42} (1995), no.~7, 404--408.

\bibitem{paper8:lara} T.~Lara, \emph{CNN and controllability}, Differential Equations and Dynamical Systems \textbf{10} (2002), no.~1, 33--51.

\bibitem{paper6:civ} M.~Gilli, P.~P.~Civallieri, and L.~Pandolfi, \emph{On stability of cellular neural networks with delay}, IEEE Trans. Circuits Syst. I: Fundamental Theory Appl. \textbf{40} (1993), no.~3, 157--164.

\bibitem{paper11:ros:chu} T.~Roska and L.~Chua, \emph{Cellular neural networks with nonlinear and delay--type template elements}, Proc. IEEE Int. Workshop on Cellular Neural Networks (1990), 12--25.

\bibitem{paper13:chai} T.~Roska and C.-W. Wu, \emph{Stability and dynamics of delay--type general and cellular neural networks}, IEEE Trans. Circuits Syst. I: Fundamental Theory Appl. \textbf{39} (1992), no.~6, 487--490.

\bibitem{paper03:la} P.~Ecimovic, T.~Lara, and J.~H. Wu, \emph{Delayed CNN: model, applications, implementations, and dynamics}, Differential Equations and Dynamical Systems \textbf{10} (2002), no.~1 and 2, 71--91.

\bibitem{paper10:matchusu} L.~O.~Chua, T.~Matsumoto, and H.~Suzuki, \emph{CNN cloning templates connected component detector}, IEEE Trans. Circuits Syst. \textbf{37} (1990), no.~6, 633--635.

\bibitem{paper12:ros5} T.~Boros, A.~Radv\'anyi, T.~Roska, L.~Chua, and P.~Thiran, \emph{Detecting moving and standing objects using cellular neural networks}, Int. Journal on Circ. Theory and Applications \textbf{20} (1992), 613--628.

\bibitem{paper02:zampa} M.~Zamparelli, \emph{Genetically trained cellular neural networks}, IEEE Transactions on Neural Networks \textbf{10} (1997), no.~6, 1143--1151.

\bibitem{paper16:zounoss} F.~Zou and J.~Nossek, \emph{Stability of cellular neural networks with opposite--sign templates}, IEEE Trans. Circuits Syst. \textbf{38} (1991), no.~6, 675--677.

\end{thebibliography}

\end{document}
% ------------------------------------------------------------------------