\documentclass[reqno]{amsart} \usepackage{hyperref} \AtBeginDocument{{\noindent\small \emph{Electronic Journal of Differential Equations}, Vol. 2009(2009), No. 25, pp. 1--13.\newline ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu \newline ftp ejde.math.txstate.edu} \thanks{\copyright 2009 Texas State University - San Marcos.} \vspace{9mm}} \begin{document} \title[\hfilneg EJDE-2009/25\hfil Higher-order linear matrix descriptor] {Higher-order linear matrix descriptor differential equations of Apostol-Kolodner type} \author[G. I. Kalogeropoulos, A. D. Karageorgos, A. A. Pantelous\hfil EJDE-2009/25\hfilneg] {Grigoris I. Kalogeropoulos, Athanasios D. Karageorgos,\\ Athanasios A. Pantelous} % in alphabetical order
\address{Grigoris I. Kalogeropoulos \newline Department of Mathematics, University of Athens, Athens, Greece} \email{gkaloger@math.uoa.gr} \address{Athanasios D. Karageorgos \newline Department of Mathematics, University of Athens, Athens, Greece} \email{athkar@math.uoa.gr} \address{Athanasios A. Pantelous \newline School of Engineering and Mathematical Sciences, City University,\newline Northampton Square, London, EC1V 0HB, UK} \email{Athanasios.Pantelous.1@city.ac.uk} \thanks{Submitted November 4, 2008. Published February 3, 2009.} \subjclass[2000]{34A30, 34A05, 93C05, 15A21, 15A22} \keywords{Matrix pencil theory; Weierstrass canonical form;\hfill\break\indent linear matrix regular descriptor differential equations} \begin{abstract} In this article, we study a class of linear rectangular matrix descriptor differential equations of higher-order whose coefficients are square constant matrices. Using the Weierstrass canonical form, analytical formulas for the solution of this general class are derived, for both consistent and non-consistent initial conditions.
\end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \newtheorem{definition}[theorem]{Definition} \section{Introduction} Linear Descriptor Matrix Differential Equations (LDMDEs) are inherent in many physical, engineering, mechanical, and financial/actuarial models. LDMDEs, which are also known in the literature as functional matrix differential equations, are a special and commonly used class of matrix differential equations. With the applications of LDMDEs in mind, for instance in finance, we recall the well-known input-output Leontief model and several of its important extensions; see \cite{Ca}. In this article, our long-term purpose is to study the solution of LDMDEs of higher order \eqref{e1.1} within the framework of matrix pencil theory. This effort is significant, since there are numerous applications. Thus, we consider \begin{equation} FX^{(r)} (t) = GX(t) \label{e1.1} \end{equation} where $F,G \in \mathcal{M}({n \times n;\mathbb{F}})$ (i.e. the algebra of square matrices with elements in the field $\mathbb{F}$) with $\det F = 0$ ($0$ is the zero element of ${\mathcal{M}}({1 \times 1;\mathbb{F}})$), and $X \in {\mathcal{C}}^\infty ({\mathbb{F},{\mathcal{M}}({n \times m;\mathbb{F}})})$. For the sake of simplicity we set ${\mathcal{M}}_n = {\mathcal{M}}({n \times n;\mathbb{F}})$ and ${\mathcal{M}}_{nm} = {\mathcal{M}}({n \times m;\mathbb{F}} )$. Matrix pencil theory has been extensively used for the study of Linear Descriptor Differential Equations (LDDEs) with time invariant coefficients; see for instance \cite{Ca}, \cite{Kal}-\cite{KaHa}.
Systems of type \eqref{e1.1} are more general, since in the special case $F=I_n$, where $I_n$ is the identity matrix of $\mathcal{M}_n$, the well-known class of higher-order linear matrix differential equations of Apostol-Kolodner type is derived straightforwardly; see \cite{Ap} for $r=2$, \cite{BeRa} and \cite{Kol}. The paper is organized as follows: In Section 2 some notation and the necessary preliminary concepts from matrix pencil theory are presented. Section 3 treats the case where system \eqref{e1.1} has consistent initial conditions. In Section 4 the non-consistent initial condition case is fully discussed. In this case, the arbitrarily chosen initial conditions, which have physical meaning for descriptor (regular) systems, can in some sense be created or structurally changed at a fixed time $t=t_0$. Hence, \eqref{e1.1} should admit a generalized solution, in the sense of Dirac $\delta$-solutions. \section{Mathematical Background and Notation} This brief section introduces some preliminary concepts and definitions from matrix pencil theory, which are used throughout the paper. Descriptor systems of type \eqref{e1.1} are closely related to matrix pencil theory, since their algebraic, geometric, and dynamic properties stem from the structure of the associated pencil $sF-G$. \begin{definition}\label{de2.1} \rm Given $F,G\in \mathcal{M}_{nm}$ and an indeterminate $s\in\mathbb{F}$, the matrix pencil $sF-G$ is called regular when $m=n$ and $\det(sF-G)\neq 0$. In any other case, the pencil will be called singular. \end{definition} \begin{definition}\label{de2.2} \rm The pencil $sF-G$ is said to be \emph{strictly equivalent} to the pencil $s\tilde F - \tilde G$ if and only if there exist nonsingular $P\in\mathcal{M}_n$ and $Q\in\mathcal{M}_m$ such that \[ P({sF - G})Q = s\tilde F - \tilde G. \] \end{definition} In this article, we consider the case where the pencil is \emph{regular}.
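Definition \ref{de2.1} can be checked symbolically for a concrete pair of matrices. The following minimal sketch (the matrices $F$, $G$ are illustrative assumptions, not taken from the text) verifies that $F$ is singular, as required for a descriptor system, while the pencil $sF-G$ is nevertheless regular:

```python
import sympy as sp

s = sp.symbols('s')

# Illustrative matrices: F is singular (det F = 0), yet the pencil
# sF - G is regular in the sense of Definition 2.1, since det(sF - G)
# is not identically zero as a polynomial in s.
F = sp.Matrix([[1, 0],
               [0, 0]])
G = sp.Matrix([[2, 1],
               [0, 1]])

assert F.det() == 0                        # descriptor case: det F = 0
det_pencil = sp.expand((s * F - G).det())  # here: 2 - s
assert not sp.simplify(det_pencil) == 0    # the pencil is regular
```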
Thus, the strict equivalence relation can be defined rigorously on the set of regular pencils as follows. Here, we regard \eqref{e2.1} as the set of pairs of nonsingular elements of $\mathcal{M}_n$ \begin{equation} g := \{ (P,Q): P, Q \in \mathcal{M}_n ,\; P,Q \text{ nonsingular} \}\label{e2.1} \end{equation} and a composition rule $*$ defined on $g$ as follows: \begin{equation} *:g\times g \to g \text{ such that } ({P_1 ,Q_1 })* ({P_2 ,Q_2 }) := ({P_1 \cdot P_2 ,Q_2 \cdot Q_1 }).\label{e2.3} \end{equation} It can easily be verified that $(g,*)$ forms a \emph{non-abelian group}. Furthermore, an action $\circ$ of the group $(g,*)$ on the set of \emph{regular} matrix pencils $\mathcal{L}^{\rm reg}_n$ is defined as $ \circ :g\times \mathcal{L}^{\rm reg}_n \to \mathcal{L}^{\rm reg}_n$ such that \[ ({({P,Q}),sF - G}) \to ({P,Q} ) \circ ({sF - G}) := P({sF - G})Q.\label{e2.4} \] This action has the following properties: \begin{itemize} \item[(a)] $({P_1 ,Q_1 }) \circ [ {({P_2 ,Q_2 } ) \circ ({sF - G})} ] = ({P_1 ,Q_1 } ) * ({P_2 ,Q_2 }) \circ ({sF - G})$ for every nonsingular $P_1 ,P_2 \in \mathcal{M}_n$ and $Q_1 ,Q_2 \in \mathcal{M}_n$. \item[(b)] $e_g \circ ({sF - G}) = sF - G$ for every $sF - G \in {\mathcal{L}}_n^{\rm reg}$, where $e_g = ({I_n ,I_n })$ is the \emph{identity element} of the group $(g,*)$. \end{itemize} Hence, the action $\circ$ of $(g,*)$ on the set $\mathcal{L}^{\rm reg}_n$ defines a transformation group denoted by $\mathcal{N}$, see \cite{Ga}. For $sF - G \in {\mathcal{L}}_n^{\rm reg} $, the subset \[ g \circ ({sF - G}) :=\left\{ {({P,Q}) \circ ({sF - G}):({P,Q} ) \in g} \right\} \subseteq {\mathcal{L}}_n^{\rm reg} \] will be called the orbit of $sF-G$ under $g$. Also, $\mathcal{N}$ defines an equivalence relation on $\mathcal{L}^{\rm reg}_n$ which is called a \emph{strict-equivalence relation} and is denoted by $\mathcal{E}_{s-e}$.
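Property (a) of the action $\circ$ can be verified numerically for randomly chosen (almost surely nonsingular) matrices. A minimal sketch, applying the action to the coefficient $F$ alone, since $P(sF-G)Q = s(PFQ) - PGQ$ acts on $F$ and $G$ separately:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Random real matrices are almost surely nonsingular, so (P1, Q1) and
# (P2, Q2) may be regarded as elements of the group g.
P1, Q1, P2, Q2 = (rng.standard_normal((n, n)) for _ in range(4))
F = rng.standard_normal((n, n))

# Acting twice: (P1,Q1) o [(P2,Q2) o (sF - G)], restricted to F:
F_twice = P1 @ (P2 @ F @ Q2) @ Q1
# Acting once with the composed pair (P1,Q1)*(P2,Q2) = (P1 P2, Q2 Q1):
F_once = (P1 @ P2) @ F @ (Q2 @ Q1)

assert np.allclose(F_twice, F_once)  # the same identity holds for G
```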
So, $({sF - G}){\mathcal{E}}_{s - e} ({s\tilde F - \tilde G})$ if and only if $P({sF - G})Q = s\tilde F - \tilde G$, where $P,Q$ are nonsingular elements of the algebra $\mathcal{M}_n$. The class of $\mathcal{E}_{s-e}(sF-G)$ is characterized by a uniquely defined element, known as the complex Weierstrass canonical form, $sF_w -Q_w$, see \cite{Ga}, specified by the complete set of invariants of $\mathcal{E}_{s-e}(sF-G)$. This is the set of \emph{elementary divisors} (e.d.) obtained by factorizing the invariant polynomials $f_i (s,\hat{s})$ into powers of homogeneous polynomials irreducible over the field $\mathbb{F}$. In the case where $sF-G$ is regular, we have e.d. of the following types: \begin{itemize} \item e.d. of the type $s^p$ are called \emph{zero finite elementary divisors} (z. f.e.d.) \item e.d. of the type $(s-a)^\pi$, $a \ne 0$, are called \emph{nonzero finite elementary divisors} (nz. f.e.d.) \item e.d. of the type $\hat{s}^q$ are called \emph{infinite elementary divisors} (i.e.d.). \end{itemize} Let $B_1 ,B_2 ,\dots, B_n $ be elements of $\mathcal{M}_n$. Their direct sum, denoted by $B_1 \oplus B_2 \oplus \dots \oplus B_n$, is the block diagonal matrix {block diag}$\{ B_1 ,B_2 ,\dots, B_n \}$. Then, the complex Weierstrass form $sF_w -Q_w$ of the regular pencil $sF-G$ is defined by $sF_w - Q_w := sI_p - J_p \oplus sH_q - I_q $, where the first normal Jordan type element is uniquely defined by the set of f.e.d. \begin{equation} ({s - a_1 })^{p_1 } , \dots ,({s - a_\nu } )^{p_\nu },\quad \sum_{j = 1}^\nu {p_j = p}\label{e2.5} \end{equation} of $sF-G$ and has the form \begin{equation} sI_p - J_p := sI_{p_1 } - J_{p_1 } ( {a_1 }) \oplus \dots \oplus sI_{p_\nu } - J_{p_\nu } ({a_\nu }) .\label{e2.6} \end{equation} Also, the $q$ blocks of the second uniquely defined block $sH_q -I_q$ correspond to the i.e.d.
\begin{equation} \hat s^{q_1} , \dots ,\hat s^{q_\sigma}, \quad \sum_{j = 1}^\sigma {q_j = q}\label{e2.7} \end{equation} of $sF-G$ and has the form \begin{equation} sH_q - I_q := sH_{q_1 } - I_{q_1 } \oplus \dots \oplus sH_{q_\sigma } - I_{q_\sigma}.\label{e2.8} \end{equation} Thus, $H_q$ is a nilpotent element of $\mathcal{M}_n$ with index $\tilde q = \max \{ {q_j :j = 1,2, \ldots ,\sigma } \}$, where \[ H^{\tilde q}_q=\mathbb{O},\label{e2.9} \] and $I_{p_j } ,J_{p_j } ({a_j }),H_{q_j }$ are defined as \begin{equation} \begin{gathered} I_{p_j } = \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & 1 \end{bmatrix} \in {\mathcal{M}}_{p_j } , \quad J_{p_j } ({a_j }) = \begin{bmatrix} {a_j } & 1 & 0 & \dots & 0 \\ 0 & {a_j } & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & {a_j } & 1 \\ 0 & 0 & 0 & 0 & {a_j } \end{bmatrix} \in {\mathcal{M}}_{p_j } \\ H_{q_j } = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix} \in {\mathcal{M}}_{q_j }. \end{gathered} \label{e2.10} \end{equation} In the last part of this section, some elements for the analytic computation of $e^{A ({t-t_0 })} $, $t \in [t_0 ,\infty)$ are provided. To perform this computation, many theoretical and numerical methods have been developed. The interested reader might consult the papers \cite{BeRa,ChYa,Kol, Leo,Ver} and the references therein. In order to obtain the computational formulas of Sections 3 and 4, the following known results should first be mentioned.
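The blocks in \eqref{e2.10} are straightforward to construct and test numerically. A minimal sketch (with illustrative block sizes), confirming that $H_{q_j}$ is nilpotent with index $q_j$ and that direct sums are plain block-diagonal matrices:

```python
import numpy as np
from scipy.linalg import block_diag

def jordan_block(a, p):
    """J_p(a) of (2.10): a on the diagonal, ones on the superdiagonal."""
    return a * np.eye(p) + np.diag(np.ones(p - 1), k=1)

def shift_block(q):
    """H_q of (2.10): the nilpotent Jordan block at eigenvalue 0."""
    return np.diag(np.ones(q - 1), k=1)

H = shift_block(4)
assert np.any(np.linalg.matrix_power(H, 3) != 0)   # H^{q-1} != O
assert np.all(np.linalg.matrix_power(H, 4) == 0)   # H^{q}   == O

# A direct sum (block diag) of two Jordan blocks, as in (2.6):
J = block_diag(jordan_block(2.0, 2), jordan_block(-1.0, 3))
assert J.shape == (5, 5)
```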
\begin{lemma}[\cite{ChYa}]\label{le1} $ e^{J_{p_j } ({a_j }) ({t - t_0 })} = ({d_{k_1 k_2 }})_{p_j }$, %\label{e2.11}
where \[ d_{k_1 k_2 } = \begin{cases} e^{a_j ({t -t_0 })} \dfrac{{({t - t_0 })^{k_2 - k_1 }}}{{({k_2 - k_1 })!}}, &1 \le k_1 \le k_2 \le p_j \\ 0, &\text{otherwise} \end{cases} \] \end{lemma} Another expression for the exponential matrix of a Jordan block, see \eqref{e2.10}, is provided by the following lemma. \begin{lemma}[\cite{Ver}] \label{le2} \begin{equation} e^{J_{p_j } ({a_j }) ({t - t_0 })} = \sum_{i = 0}^{p_j -1} f_i ({t - t_0 })[ J_{p_j} ({a_j }) ]^i \label{e2.12} \end{equation} where the $f_i (t-t_0)$'s are given analytically by the following $p_j$ equations: \begin{equation} f_{p_j-1-k}(t-t_0)=e^{a_j(t-t_0)} \sum_{i = 0}^{k} b_{k,i}a^{k-i}_j \frac{({t - t_0})^{p_j-1-i}}{(p_j-1-i)!}, \ k=0,1,2,\dots, p_j-1 \label{e2.13} \end{equation} where \[ b_{k,i}=\sum_{l = 0}^{k-i} \binom{p_j}{l} \binom{k-l}{i}(-1)^l \] and \begin{equation} [ J_{p_j} ({a_j }) ]^i = (c_{k_1 k_2 }^{(i)})_{p_j} , \text{ for } \ 1\le k_1 ,k_2 \le p_j \label{e2.14} \end{equation} where \[ c_{k_1 k_2 }^{(i)} = \binom{i}{k_2 - k_1} a_j^{i -({k_2 - k_1 })}. \] \end{lemma} \section{Solution space form of consistent initial conditions} In this section, the main results for consistent initial conditions are analytically presented for the regular case. The whole discussion extends the existing literature; see for instance \cite{BeRa}. Moreover, it should be stressed that these results offer the necessary mathematical framework for interesting applications; see also the introduction. Now, in order to obtain a unique solution, we deal with the consistent initial value problem.
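As a numerical sanity check, the closed form of Lemma \ref{le1} can be compared against a general-purpose matrix exponential. A minimal sketch with illustrative values of $a_j$, $p_j$ and $t-t_0$ (indices are 0-based in the code, 1-based in the lemma):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

a, p, t = 1.5, 4, 0.7   # illustrative eigenvalue, block size, t - t_0
J = a * np.eye(p) + np.diag(np.ones(p - 1), k=1)   # Jordan block J_p(a)

# Closed form of Lemma 1: d_{k1 k2} = e^{a(t-t0)} (t-t0)^{k2-k1}/(k2-k1)!
# on and above the diagonal, zero below.
D = np.zeros((p, p))
for k1 in range(p):
    for k2 in range(k1, p):
        D[k1, k2] = np.exp(a * t) * t**(k2 - k1) / factorial(k2 - k1)

assert np.allclose(D, expm(J * t))
```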
More analytically, we consider the system \begin{equation} FX^{(r)}(t)=GX(t),\label{e3.1} \end{equation} with known initial conditions \begin{equation} X ({t_0}),X' (t_0),\dots, X^{(r-1)} ({t_0}).\label{e3.2} \end{equation} From the regularity of $sF-G$, there exist nonsingular matrices $P, Q \in \mathcal{M}_n$ (see also Section 2) such that \begin{gather} PFQ = F_w = I_p \oplus H_q ,\label{e3.3} \\ PGQ = G_w = J_p \oplus I_q ,\label{e3.4} \end{gather} where $I_p ,J_p ,H_q $ and $I_q$ are given by \eqref{e2.10}, with \begin{gather*} I_p = I_{p_1 } \oplus \ldots \oplus I_{p_\nu }, \\ J_p = J_{p_1 } ({a_1 }) \oplus \ldots \oplus J_{p_\nu } ({a_\nu }) , \\ H_q = H_{q_1 } \oplus \ldots \oplus H_{q_\sigma},\\ I_q = I_{q_1 } \oplus \ldots \oplus I_{q_\sigma}. \end{gather*} Note that $\sum_{j = 1}^\nu {p_j = p}$ and $\sum_{j =1}^\sigma {q_j = q} $, where $p+q=n$. \begin{lemma}\label{le3} System \eqref{e3.1} is divided into two subsystems: the so-called slow subsystem \begin{equation} Y_p^{(r)}(t) = J_p Y_p (t), \label{e3.5} \end{equation} and the so-called fast subsystem \begin{equation} H_q Y_q^{(r)}(t) = Y_q (t).\label{e3.6} \end{equation} \end{lemma} \begin{proof} Consider the transformation \begin{equation} X(t)=QY(t).\label{e3.7} \end{equation} Substituting this expression into \eqref{e3.1}, we obtain \[ FQY^{(r)}(t)=GQY(t). \] Multiplying on the left by $P$, we arrive at \[ F_wY^{(r)}(t)=G_w Y(t). \] Moreover, we can write $Y(t)$ as $Y(t) =\begin{bmatrix} Y_p (t) \\ Y_q (t) \end{bmatrix} \in {\mathcal{M}}_{nm}$. Taking into account the above expressions, we arrive easily at \eqref{e3.5} and \eqref{e3.6}. \end{proof} \begin{remark}\label{rmk1} \rm System \eqref{e3.5} is the standard form of higher-order linear matrix differential equations of Apostol-Kolodner type, which may be treated by classical methods; see for instance \cite{Ap, BeRa}, \cite{ChYa}, \cite{Kol} and the references therein.
Moreover, it should also be mentioned that Section 5 of \cite{Ver} describes a method for solving higher-order equations of the form $q(D)X(t)=AX(t)$, where $q$ is a scalar polynomial, $D$ is differentiation with respect to $t$, and $A$ is a square matrix. Such equations clearly include the standard form equations of Apostol-Kolodner type. Thus, it is convenient to define the new variables \begin{gather*} Z_1 (t)=Y_p (t),\\ Z_2 (t)=Y'_p (t),\\ \dots\\ Z_r (t)=Y^{(r-1)}_{p}(t). \end{gather*} Then, we have the system of ordinary differential equations \begin{equation} \begin{gathered} Z'_1 (t)=Z_2 (t),\\ Z'_2 (t)=Z_3 (t),\\ \dots\\ Z'_r (t)=J_{p}Z_1(t). \end{gathered}\label{e3.8} \end{equation} Now, \eqref{e3.8} can be expressed as the vector-matrix equation \begin{equation} \mathbf{Z}'(t)=\mathbf{AZ}(t)\label{e3.9} \end{equation} where $\mathbf{Z}(t) =[ Z_1^T (t)\ Z_2^T(t)\dots Z_r^T (t)]^T$ (where $(\;)^T$ denotes the transpose) and the coefficient matrix $\mathbf{A}$ is given by \begin{equation} \mathbf{A} = \begin{bmatrix} \mathbb{O} & {I_p } & \mathbb{O} & \dots & \mathbb{O} \\ \mathbb{O}& \mathbb{O} & {I_p } & \dots & \mathbb{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \mathbb{O}&\mathbb{O} &\mathbb{O} & \dots & {I_p } \\ {J_p } &\mathbb{O} &\mathbb{O} & \dots & \mathbb{O} \end{bmatrix} \label{e3.10} \end{equation} where the dimensions of $\mathbf{A}$ and $\mathbf{Z}(t)$ are $pr\times pr$ and $pr\times m$, respectively.
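For intuition, the block companion matrix \eqref{e3.10} can be assembled programmatically. A minimal sketch with illustrative data (one $2\times 2$ Jordan block, $r=2$; not tied to any particular application), also checking numerically that the characteristic polynomial is $\det(\lambda^2 I_p - J_p)$, in agreement with \eqref{e3.18} below:

```python
import numpy as np

def companion(J_p, r):
    """Block companion matrix A of (3.10): identity blocks I_p on the
    block superdiagonal and J_p in the lower-left corner."""
    p = J_p.shape[0]
    A = np.zeros((p * r, p * r))
    for i in range(r - 1):
        A[i * p:(i + 1) * p, (i + 1) * p:(i + 2) * p] = np.eye(p)
    A[(r - 1) * p:, :p] = J_p
    return A

# Illustrative data: one 2x2 Jordan block with eigenvalue a = 3, r = 2.
J_p = np.array([[3.0, 1.0],
                [0.0, 3.0]])
A = companion(J_p, r=2)

# det(lambda I - A) = (lambda^2 - 3)^2 = lambda^4 - 6 lambda^2 + 9.
assert np.allclose(np.poly(A), [1, 0, -6, 0, 9], atol=1e-6)
```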
Equation \eqref{e3.9} is a linear ordinary differential system and has a unique solution for any initial condition \begin{equation} {\mathbf{Z}}({t_0 }) = \begin{bmatrix} {Z_1 ({t_0 })} \\ {Z_2 ({t_0 })} \\ \dots \\ {Z_r ({t_0 })} \\ \end{bmatrix} = \begin{bmatrix} {Y_p ({t_0 })} \\ {Y'_p ({t_0 })} \\ \dots \\ {Y_p^{({r - 1})} ({t_0 })} \end{bmatrix} \in {\mathcal{M}}_{pr \times m}.\label{e3.11} \end{equation} It is well known that the solution of \eqref{e3.9} with initial condition \eqref{e3.11} has the form \begin{equation} \mathbf{Z}(t)=e^{\mathbf{A}(t-t_0)}\mathbf{Z}(t_0).\label{e3.12} \end{equation} Then $Y_p (t)=Z_1 (t)=L\mathbf{Z}(t)$, where \begin{equation} L=[ I_p \ \mathbb{O} \dots \mathbb{O} ] \in \mathcal{M}_{p\times pr}.\label{e3.13} \end{equation} Finally, by combining \eqref{e3.11}-\eqref{e3.12} and \eqref{e3.13}, we obtain \begin{equation} Y_p (t)=Le^{\mathbf{A}(t-t_0)}\mathbf{Z}(t_0).\label{e3.14} \end{equation} To obtain a more analytic formula for the solution \eqref{e3.12}, we should compute analytically the matrix $e^{\mathbf{A}(t-t_0)}\in \mathcal{M}_{pr}$; see \cite{BeRa}, \cite{ChYa}, \cite{Kol}, \cite{Leo} and \cite{Ver}. First, considering the Jordan canonical form, there exists a nonsingular matrix $R\in \mathcal{M}_{pr}$ such that $\mathbf{J}=R^{-1}\mathbf{A}R$, where $\mathbf{J}\in \mathcal{M}_{pr}$ is the Jordan canonical form of the matrix $\mathbf{A}$. Afterwards, defining \[ \mathbf{Z}(t)=R\mathbf{\Theta}(t)\label{e3.15} \] and combining \eqref{e3.15} and \eqref{e3.9}, we obtain \[ R\mathbf{\Theta}'(t)=\mathbf{A}R\mathbf{\Theta}(t).
\] Finally, multiplying the above expression by $R^{-1}$, we obtain \[ \mathbf{\Theta}' (t)=\mathbf{J}\mathbf{\Theta}(t).\label{e3.16} \] It is well known that the solution of \eqref{e3.16} is given by \[ \mathbf{\Theta}(t)=e^{\mathbf{J}(t-t_0)}\mathbf{\Theta}(t_0 ),\label{e3.17} \] where \[ \mathbf{\Theta }({t_0}) = R^{-1} \mathbf{Z} ({t_0}) = R^{ - 1} \begin{bmatrix} {Y_p({t_0 })} \\ {Y'_p({t_0 })} \\ \dots \\ {Y_p^{({r - 1})} ({t_0 })} \\ \end{bmatrix} \in {\mathcal{M}}_{pr \times m}. \] \end{remark} \begin{proposition}\label{pro1} The characteristic polynomial of the matrix $\mathbf{A}$ is given by \begin{equation} \varphi (\lambda) = \det({\lambda I_{pr} - {\mathbf{A}}}) = \prod_{j = 1}^\nu {({\lambda ^r - a_j })^{p_j }}, \label{e3.18} \end{equation} where $\sum^{\nu}_{j=1}p_j =p$. \end{proposition} \begin{proof} We compute the characteristic polynomial of the matrix $\mathbf{A}$: \[ \varphi (\lambda ) = \det ({\lambda I_{pr} - {\mathbf{A}}}) = \det \begin{bmatrix} {\lambda I_p } & { - I_p } & \mathbb{O} & \dots & \mathbb{O} \\ \mathbb{O} & {\lambda I_p } & { - I_p } & \dots & \mathbb{O} \\ \mathbb{O} & \mathbb{O} & {\lambda I_p } & \dots & \mathbb{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ { - J_p } &\mathbb{O} & \mathbb{O} & \dots & {\lambda I_p } \end{bmatrix}. \] Afterwards, we consider some simple transformations. Thus, we multiply the first block row by $\lambda$ and add it to the second one. Then, we multiply the second block row by $\lambda$ and add it to the third one.
Continuing as above, we finally obtain \[ \begin{bmatrix} {\lambda I_p } & { - I_p } & \mathbb{O} & \dots & \mathbb{O} \\ \mathbb{O}& {\lambda I_p } & { - I_p } & \dots & \mathbb{O} \\ \mathbb{O}& \mathbb{O} & {\lambda I_p } & \dots & \mathbb{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ { - J_p } & \mathbb{O} & \mathbb{O} & \dots & {\lambda I_p } \\ \end{bmatrix} \sim \begin{bmatrix} {\lambda I_p } & { - I_p } & \mathbb{O} & \dots & \mathbb{O} \\ {\lambda ^2 I_p } & \mathbb{O} & { - I_p } & \dots & \mathbb{O} \\ {\lambda ^3 I_p } & \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\lambda ^r I_p - J_p } & \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} \end{bmatrix} . \] Now we apply $p$ row transformations to the determinant, as follows: \[ ({ - 1})^p \det \begin{bmatrix} {\lambda ^r I_p - J_p } & \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} \\ {\lambda ^2 I_p } & \mathbb{O} & { - I_p } & \dots & \mathbb{O} \\ {\lambda ^3 I_p } & \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\lambda I_p } & { - I_p } & \mathbb{O} & \dots & \mathbb{O} \end{bmatrix}. \] Continuing as above, we transform the above determinant into the form \begin{align*} & \underbrace {({ - 1})^p ({ - 1})^p \dots ({ - 1})^p }_{({r - 1})\text{ times}} \det \begin{bmatrix} {\lambda ^r I_p - J_p } & \mathbb{O} & \mathbb{O} & \dots & \mathbb{O} \\ {\lambda I_p } & { - I_p } & \mathbb{O} & \dots & \mathbb{O} \\ {\lambda ^2 I_p } & \mathbb{O} & { - I_p } & \dots & \mathbb{O}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ {\lambda ^{r - 1} I_p } &\mathbb{O} & \mathbb{O}& \dots & {- I_p } \end{bmatrix} \\ & = ({ - 1})^{({r - 1})p}|{\lambda ^r I_p - J_p }| \underbrace{| { - I_p }| \dots |{-I_p }|}_{({r-1})\text{ times}}\\ & = ({ - 1})^{2({r - 1})p}| {\lambda ^r I_p - J_p }| =| {\lambda ^r I_p - J_p }|. \end{align*} Thus, we obtain the expression \[ \varphi(\lambda)=\det (\lambda I_{pr}-\mathbf{A})= |\lambda^r I_p -J_p |.
\] Moreover, we recall that $ J_p =J_{p_1}(a_1)\oplus \dots \oplus J_{p_\nu}(a_\nu)$. Thus \[ | {\lambda ^r I_p - J_p }| = \prod_{j = 1}^\nu | {\lambda ^r I_{p_j } - J_{p_j } ({a_j })} | . \] Note also that \[ | {\lambda ^r I_{p_j } - J_{p_j }({a_j })}| = \det \begin{bmatrix} {\lambda ^r - a_j } & { - 1} & 0 & \dots & 0 \\ 0 & {\lambda ^r - a_j } & { - 1} & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & { - 1} \\ 0 & 0 & 0 & \dots & {\lambda ^r - a_j } \\ \end{bmatrix} = ({\lambda ^r - a_j })^{p_j }, \] for $j=1,2,\dots, \nu$. Consequently, the characteristic polynomial \eqref{e3.18} is derived. \end{proof} \begin{remark}\label{rmk2} \rm The eigenvalues of the matrix $\mathbf{A}$ are given by \begin{equation} \lambda _{jk} = \sqrt[r]{{| {a_j } |}} \Big({\cos\frac{{2k\pi +\varphi_j}}{r} + z\sin \frac{{2k\pi + \varphi _j }}{r}}\Big),\label{e3.19} \end{equation} where $a_j =|a_j |(\cos \varphi_j +z\sin \varphi_j)$ and $z^2 =-1$, for every $j=1,2,\dots, \nu$ and $k=0,1,2,\dots, r-1$. \end{remark} \begin{remark}\label{rmk3} \rm The characteristic polynomial is $\varphi (\lambda) = \prod_{j = 1}^\nu {({\lambda ^r - a_j })^{p_j } } $, with $a_i \neq a_j$ for $i\neq j$ and $\sum_{j = 1}^\nu {p_j = p} $. Without loss of generality, we assume that \[ d_1 =\tau_1 , d_2 =\tau_2 ,\dots, d_l =\tau_l , \quad\text{and} \quad d_{l+1}<\tau_{l+1},\dots, d_\nu <\tau_\nu \] where $d_j$ and $\tau_j$, $j=1,2,\dots, \nu$, are the geometric and algebraic multiplicities of the eigenvalue $a_j$, respectively. \noindent$\bullet$ Consequently, when $d_j =\tau_j$, \[ J_{jk} ({\lambda _{jk} }) = \begin{bmatrix} {\lambda _{jk} } & {} & {} & {} \\ {} & {\lambda _{jk} } & {} & {} \\ {} & {} & \ddots & {} \\ {} & {} & {} & {\lambda _{jk} } \end{bmatrix} \in {\mathcal{M}}_{\tau _{jk} } , \] is a diagonal matrix whose diagonal elements equal the eigenvalue $\lambda_{jk}$, for $j=1,\dots, l$.
\noindent$\bullet$ When $d_j <\tau_j$, \[ J_{jk,z_j } = \begin{bmatrix} {\lambda _{jk} } & 1 & {} & {} & {} \\ {} & {\lambda _{jk} } & 1 & {} & {} \\ {} & {} & {\lambda _{jk} } & \ddots & {} \\ {} & {} & {} & \ddots & 1 \\ {} & {} & {} & {} & {\lambda _{jk} } \\ \end{bmatrix} \in {\mathcal{M}}_{z_j } \] for $j=l+1,l+2,\dots, \nu$, and $z_j =1,2,\dots, d_j$. \end{remark} \begin{proposition}\label{pro2} The fast subsystem \eqref{e3.6} has only the zero solution. \end{proposition} \begin{proof} By successively taking $r$-th derivatives with respect to $t$ on both sides of \eqref{e3.6} and multiplying on the left by the matrix $H_q$, $q^* -1$ times (where $q^*$ is the index of the nilpotent matrix $H_q$, i.e. $H^{q^*}_{q}=\mathbb{O}$), we obtain the equations \begin{gather*} H_q Y^{(r)}_q (t)=Y_q (t),\\ H^2_q Y^{(2r)}_q (t)=H_q Y^{(r)}_q (t),\\ H^3_q Y^{(3r)}_q (t)=H^2_q Y^{(2r)}_q (t),\\ \dots \\ H^{q^*}_q Y^{(q^* r)}_q (t)=H^{q^* -1}_{q} Y^{((q^* -1)r)}_q (t). \end{gather*} The conclusion, i.e. $Y_q (t)=\mathbb{O}$, is obtained by repeated substitution of each equation into the next one, using the fact that $H^{q^*}_q =\mathbb{O}$.
\end{proof} Hence, the set of consistent initial conditions for the system $F_w Y^{(r)}(t)=G_w Y(t)$ has the form \begin{equation} \Big\{ {Y^{(k)} ({t_0 }) = \begin{bmatrix} Y_p^{(k)} ({t_0 }) \\ {\mathbb{O}_q } \\ \end{bmatrix},\; k = 0,1, \dots ,r - 1} \Big\} .\label{e3.20} \end{equation} \begin{theorem}\label{th1} The analytic solution of \eqref{e3.1} is given by \begin{equation} X(t) = Q_{n,p} LR\mathop \oplus_{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk}({\lambda _{jk} })({t - t_0 })} R^{ - 1} {\mathbf{Z}}({t_0}),\label{e3.21} \end{equation} where $L=[I_p \ \mathbb{O} \dots \mathbb{O} ]\in \mathcal{M}_{p\times pr}$; $R\in \mathcal{M}_{pr}$ is such that $\mathbf{J}=R^{-1}\mathbf{A}R$, where $\mathbf{J}\in \mathcal{M}_{pr}$ is the Jordan canonical form of the matrix $\mathbf{A}$; and \[ \mathbf{Z}(t_0) = \big[ Y^T _p (t_0) \ Y'^T _p (t_0) \dots Y_{p}^{(r - 1)T}(t_0) \big]^T \in \mathcal{M} _{pr \times m} . \] \end{theorem} \begin{proof} Combining \eqref{e3.16}, \eqref{e3.17} and the above discussion, the solution is \[ \mathbf{\Theta }(t) = \mathop \oplus _{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk} ({\lambda_{jk} })({t - t_0})} \mathbf{\Theta}({t_0}). \] Then, multiplying by $R$ and bearing in mind that $\mathbf{\Theta}(t_0) =R^{-1}\mathbf{Z}(t_0)$, we obtain \[ \mathbf{Z}(t) = R\mathbf{\Theta }(t) = R\mathop \oplus _{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk} ({\lambda_{jk} })({t - t_0})} \mathbf{\Theta }({t_0}) = R\mathop \oplus _{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk} (\lambda_{jk})(t - t_0)} R^{ - 1}\mathbf{Z}({t_0}). \] Now, considering \eqref{e3.13}, we obtain \[ Y_p (t) = L{\mathbf{Z}}(t) \Leftrightarrow Y_p (t) = LR\mathop \oplus _{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk} ({\lambda_{jk} })({t - t_0})} R^{ - 1} {\mathbf{Z}}({t_0}).
\] Using the result of Proposition \ref{pro2}, i.e., that the second (fast) subsystem \eqref{e3.6} has only the zero solution, we obtain \[ X(t) = QY(t) = [ {Q_{n,p} } \ {Q_{n,q} } ] \begin{bmatrix} {Y_p (t)} \\ \mathbb{O} \end{bmatrix} = Q_{n,p} Y_p (t) . \] Finally, \eqref{e3.21} is obtained. \end{proof} The next remark presents the set of consistent initial conditions for system \eqref{e3.1}. \begin{remark}\label{rmk4} \rm Combining \eqref{e3.7} and \eqref{e3.20}, we obtain \[ X({t_0}) = QY({t_0}) = [{Q_{n,p} } \ {Q_{n,q} } ] \begin{bmatrix} {Y_p ({t_0})} \\ \mathbb{O} \end{bmatrix} = Q_{n,p} Y_p ({t_0}) . \] Then, the set of consistent initial conditions for \eqref{e3.1} is given by \begin{equation} \big\{ {Q_{n,p} Y_p ({t_0})},\ {Q_{n,p} Y'_p ({t_0})},\ \dots,\ {Q_{n,p} Y_p^{({r - 1})} ({t_0})} \big\}.\label{e3.22} \end{equation} Now, taking into consideration \eqref{e3.2} and \eqref{e3.22}, we conclude that \begin{gather*} X(t_0)=Q_{n,p}Y_p (t_0),\\ X' (t_0)=Q_{n,p}Y'_p (t_0),\\ \dots\\ X^{(r-1)}(t_0)=Q_{n,p}Y^{(r-1)}_p (t_0). \end{gather*} \end{remark} \begin{remark}\label{rmk5} \rm If $\tilde Q_{p,n}$ is the existing left inverse of $Q_{n,p}$, then considering \eqref{e3.11}, we have \begin{align*} {\mathbf{Z}}({t_0}) &= \begin{bmatrix} {Y_p (t_0)} \\ {Y'_p ({t_0})} \\ \vdots \\ {Y_p^{({r - 1})} ({t_0})} \end{bmatrix} = \begin{bmatrix} {\tilde Q_{p,n} X ({t_0})} \\ {\tilde Q_{p,n} X' ({t_0})} \\ \vdots \\ {\tilde Q_{p,n} X^{({r - 1})}({t_0})} \end{bmatrix} \\ & = \begin{bmatrix} {\tilde Q_{p,n} } & & & \\ & {\tilde Q_{p,n} } & & \\ & & \ddots & \\ & & & {\tilde Q_{p,n} } \end{bmatrix} \begin{bmatrix} {X ({t_0})} \\ {X' ({t_0})} \\ \vdots \\ {X^{ ({r - 1})} ({t_0})} \end{bmatrix} \\ &= \tilde Q{\mathbf{\Psi }} ({t_0}).
\end{align*} Finally, the solution \eqref{e3.21} can be written as \begin{equation} X (t) = Q_{n,p} LR\mathop \oplus _{j = 1}^\nu \mathop \oplus _{k = 0}^{r - 1} e^{J_{jk} ({\lambda_{jk} }) ({t - t_0})} R^{ - 1} \tilde{Q}{\mathbf{\Psi }}({t_0}),\label{e3.23} \end{equation} where $\mathbf{\Psi }({t_0}) = [ X^T ({t_0}) \ X'^T ({t_0}) \ \dots \ X^{({r - 1})T} ({t_0}) ]^T \in{\mathcal{M}}_{rn \times m}$ and $\tilde Q_{p,n}$ is the existing left inverse of $Q_{n,p}$. \end{remark} The following two expressions, i.e. \eqref{e3.24} and \eqref{e3.25}, are based on Lemmas \ref{le1} and \ref{le2}, respectively. Thus, two new analytical formulas are derived which are practically very interesting. Their proofs are straightforward exercises based on Lemmas \ref{le1} and \ref{le2} and on \eqref{e3.22}. \begin{lemma}\label{le4} Considering the results of Lemma \ref{le1}, we obtain the expression \begin{equation} \begin{aligned} X(t)&=Q_{n,p}LR \Big[ \Big(\oplus^{l}_{j=1} \oplus^{r-1}_{k=0} e^{\lambda_{jk} {(t-t_0)}}I_{\tau_{jk}}\Big) \\ &\quad \oplus \Big( \oplus^{\nu}_{j=l+1} \oplus^{r-1}_{k=0} \oplus^{d_j}_{z_j =1} (d_{k_1 k_2})_{z_j}\Big) \Big] R^{-1}\tilde{Q}\mathbf{\Psi}(t_0), \end{aligned} \label{e3.24} \end{equation} where \[ d_{k_1 k_2}= \begin{cases} e^{\lambda_{jk}(t-t_0)}\dfrac{(t-t_0)^{k_2 -k_1}}{(k_2 -k_1)!}, & 1\le k_1 \le k_2 \le z_j \\ 0, & \text{otherwise} \end{cases} \] for $j=l+1 ,l+2 ,\dots, \nu$ and $z_j =1,2,\dots, d_j$. \end{lemma} \begin{lemma}\label{le5} Considering the results of Lemma \ref{le2}, we obtain the expression \begin{equation} \begin{aligned} X(t)&=Q_{n,p}LR \Big[ \Big( \oplus^{l}_{j=1} \oplus^{r-1}_{k=0} e^{\lambda_{jk} {(t-t_0)}}I_{\tau_{jk}}\Big)\\ &\quad \oplus \Big( \oplus^{\nu}_{j=l+1} \oplus^{r-1}_{k=0} \oplus^{d_j}_{z_j =1} \sum^{z_j -1}_{i=0} f_i (t-t_0)[J_{z_j}(\lambda_{jk})]^i\Big) \Big] R^{-1}\tilde{Q}\mathbf{\Psi}(t_0).
\end{aligned} \label{e3.25} \end{equation} where the polynomials $f_i (t-t_0)$ satisfy the following system of $z_j$ equations (for $z_j =1,2,\dots, d_j$), \[ f_{z_j-1-\kappa}(t-t_0)=e^{\lambda_{jk}(t-t_0)} \sum_{i = 0}^{\kappa} b_{\kappa,i}\lambda^{\kappa-i}_{jk} \frac{({t - t_0})^{z_j-1-i}}{(z_j-1-i)!}, \quad \kappa=0,1,2,\dots, z_j-1, %\label{e2.13}
\] where \[ b_{\kappa,i}=\sum_{l = 0}^{\kappa-i} \binom{z_j}{l} \binom{\kappa-l}{i}(-1)^l \] and \[ [ J_{z_j} (\lambda_{jk}) ]^i = (c_{k_1 k_2 }^{(i)})_{z_j} ,\quad \text{for } 1\le k_1 ,k_2 \le z_j, %\label{e2.14}
\] where \[ c_{k_1 k_2 }^{(i)} = \binom{i}{k_2 - k_1} \lambda_{jk}^{i -({k_2 - k_1 })}. \] \end{lemma} \section{The case of non-consistent initial conditions} In this short section, we describe the impulse behavior of the original system \eqref{e3.1} at time $t_0$. In that case, we reformulate Proposition \ref{pro2}, so that the impulse solution is finally obtained. \begin{proposition}\label{pro3} The subsystem \eqref{e3.6} has the solution \begin{equation} Y_q (t) = - \sum_{j = 0}^{r - 1} \sum_{k = 0}^{q^* - 1} \delta^{(rk - 1 - j)} (t)H_q^k Y_q^{(j)}({t_0}).\label{e4.1} \end{equation} \end{proposition} \begin{proof} Let us start by observing that, as is well known, there exists a $q^* \in\mathbb{N}$ such that $H^{q^*}_q =\mathbb{O}$; i.e., $q^*$ is the annihilation index of $H_q$.
Taking the Laplace transform of \eqref{e3.6}, see \cite{Do}, we derive the expression \[ H_q \Im \{ {Y_q^{(r)} (t)} \} = \Im \{ {Y_q (t)} \}, \] and by defining $\Im \{ Y_q (t)\} =\mathcal{X}_q (s)$, we obtain \[ H_q \Big(s^r \mathcal{X}_q (s)- \sum^{r-1}_{j=0}s^{r-1-j}Y^{(j)}_{q}(t_0)\Big) =\mathcal{X}_q (s), \] or equivalently \begin{equation} (s^r H_q -I_q)\mathcal{X}_q (s)= H_q \sum^{r-1}_{j=0}s^{r-1-j}Y^{(j)}_{q}(t_0).\label{e4.2} \end{equation} Since $q^*$ is the annihilation index of $H_q$, it is known that \[ (s^r H_q -I_q)^{-1}=-\sum^{q^* -1}_{k=0}(s^r H_q)^k, \] where $H^0 _q =I_q$; see for instance \cite{KaHa} and \cite{Ver}. Thus, substituting the above expression into \eqref{e4.2}, we obtain \[ \mathcal{X}_q (s)=-\sum^{q^* -1}_{k=0} (s^r H_q)^k H_q \sum^{r-1}_{j=0} s^{r-1-j}Y^{(j)}_q (t_0), \] or equivalently, after some algebra, \begin{equation} \mathcal{X}_q (s)=-\sum^{r-1}_{j=0} \sum^{q^* -1}_{k=0} s^{rk-1-j} H^k _q Y^{(j)}_q (t_0).\label{e4.3} \end{equation} Since $\Im \{ \delta^{(k)}(t)\} =s^k$, the expression \eqref{e4.3} can be written as \begin{equation} \mathcal{X}_q (s)=-\sum^{r-1}_{j=0} \sum^{q^* -1}_{k=0}\Im \{ \delta^{(rk-1-j)} (t)\} H^k _q Y^{(j)}_q (t_0).\label{e4.4} \end{equation} Now, by applying the inverse Laplace transform to \eqref{e4.4}, Equation \eqref{e4.1} is derived. \end{proof} \begin{theorem}\label{th2} The solution of \eqref{e3.1} is given by \begin{equation} \begin{aligned} X(t)&= Q_{n,p}LR \oplus^{\nu}_{j=1} \oplus^{r-1}_{k=0} e^{J_{jk}(\lambda_{jk})(t-t_0)} R^{-1}\mathbf{Z}(t_0)\\ &\quad -Q_{n,q} \sum^{r-1}_{j=0} \sum^{q^* -1}_{k=0} \delta^{(rk-1-j)}(t) H^k _q Y^{(j)}_q (t_0).
\end{aligned} \label{e4.5}
\end{equation}
where $L=[I_p \ \mathbb{O} \ \dots \ \mathbb{O} ]\in \mathcal{M}_{p\times pr}$; $R\in \mathcal{M}_{pr}$ is such that $\mathbf{J}=R^{-1}\mathbf{A}R$, where $\mathbf{J}\in \mathcal{M}_{pr}$ is the Jordan canonical form of the matrix $\mathbf{A}$; and
\[
\mathbf{Z}(t_0) = \big[ Y^T _p (t_0) \ Y'^T _p (t_0) \ \dots \ Y_{p}^{(r - 1)T}(t_0)\big ]^T \in \mathcal{M} _{pr \times m} .
\]
\end{theorem}

\begin{proof}
Combining the results of Theorem \ref{th1} with the above discussion, the solution is given by \eqref{e4.5}.
\end{proof}

\begin{remark}\label{rmk6} \rm
For $t>t_0$, \eqref{e3.21} is clearly satisfied. Thus, we stress that the system \eqref{e3.1} exhibits the above impulse behaviour at the time instant where a non-consistent initial value is assumed, while it returns to smooth behaviour at every subsequent time instant.
\end{remark}

\section*{Conclusions}
In this article, we have studied a class of higher-order linear rectangular matrix descriptor differential equations whose coefficients are square constant matrices. Since the relevant pencil is regular, we employ the Weierstrass canonical form to decompose the differential system into two subsystems (the \emph{slow} and the \emph{fast} subsystem). We then derive analytical formulas for this general class of equations of Apostol-Kolodner type, for both consistent and non-consistent initial conditions.

As a further extension of the present work, one could consider the case where the pencil is singular; the Kronecker canonical form is then required. The non-homogeneous case is also of special interest, since it often appears in applications. Research on these topics is in progress.

\subsection*{Acknowledgments}
The authors would like to express their sincere gratitude to Prof. I. G. Stratis for helpful and fruitful discussions that improved this article.
Moreover, the authors are very grateful to the anonymous referee, whose comments greatly improved the quality of the paper. \\
This work is dedicated to our secretary Ms Popi Bolioti.

\begin{thebibliography}{00}

\bibitem{Ap}%1
T. M. Apostol; \emph{Explicit formulas for solutions of the second-order matrix differential equation $Y'' = AY$}, Amer. Math. Monthly 82 (1975), pp. 159-162.

\bibitem{BeRa}%2
R. Ben Taher and M. Rachidi; \emph{Linear matrix differential equations of higher-order and applications}, Electronic Journal of Differential Equations, Vol. 2008 (2008), No. 95, pp. 1-12.

\bibitem{Ca}%3
S. L. Campbell; \emph{Singular systems of differential equations}, Pitman, San Francisco, Vol. 1, 1980; Vol. 2, 1982.

\bibitem{ChYa}%4
H.-W. Cheng and S. S.-T. Yau; \emph{More explicit formulas for the matrix exponential}, Linear Algebra Appl. 262 (1997), pp. 131-163.

\bibitem{Do}%5
G. Doetsch; \emph{Introduction to the theory and application of the Laplace transformation}, Springer-Verlag, 1974.

\bibitem{Ga}%6
F. R. Gantmacher; \emph{The theory of matrices I \& II}, Chelsea, New York, 1959.

\bibitem{Kal}%7
G. I. Kalogeropoulos; \emph{Matrix pencils and linear systems}, Ph.D. Thesis, City University, London, 1985.

\bibitem{Kar}%8
N. Karcanias; \emph{Matrix pencil approach to geometric systems theory}, Proceedings of IEE 126 (1979), pp. 585-590.

\bibitem{KaHa}%9
N. Karcanias and G. E. Hayton; \emph{Generalized autonomous differential systems, algebraic duality, and geometric theory}, Proceedings of IFAC VIII Triennial World Congress, Kyoto, Japan, 1981.

\bibitem{Kol}%10
I. I. Kolodner; \emph{On $\exp(tA)$ with $A$ satisfying a polynomial}, J. Math. Anal. Appl. 52 (1975), pp. 514-524.

\bibitem{Leo}%11
I. E. Leonard; \emph{The matrix exponential}, SIAM Review, Vol. 38, No. 3 (1996), pp. 507-512.

\bibitem{Me}%12
C. D. Meyer, Jr.; \emph{Matrix Analysis and Applied Linear Algebra}, SIAM Publications, Package edition, 2001.

\bibitem{Ver}%13
L. Verde-Star; \emph{Operator identities and the solution of linear matrix difference and differential equations}, Studies in Applied Mathematics 91 (1994), pp. 153-177.

\end{thebibliography}
\end{document}