\documentclass[reqno]{amsart} \usepackage{hyperref} \usepackage{cite,amssymb}
\AtBeginDocument{{\noindent\small \emph{Electronic Journal of Differential Equations}, Vol. 2015 (2015), No. 108, pp. 1--17.\newline ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu \newline ftp ejde.math.txstate.edu} \thanks{\copyright 2015 Texas State University - San Marcos.} \vspace{9mm}}
\begin{document}
\title[\hfilneg EJDE-2015/108\hfil Limit cycles for cubic polynomial differential systems] {Limit cycles for piecewise smooth perturbations of a cubic polynomial differential center}
\author[S. Li, T. Huang \hfil EJDE-2015/108\hfilneg] {Shimin Li, Tiren Huang}
\address{Shimin Li (corresponding author)\newline School of Mathematics and Statistics, Guangdong University of Finance and Economics, Guangzhou 510320, China}
\email{lism1983@126.com}
\address{Tiren Huang \newline Department of Mathematics, Zhejiang Sci-Tech University, Hangzhou 310018, China}
\email{htiren@zstu.edu.cn}
\thanks{Submitted October 24, 2014. Published April 21, 2015.}
\makeatletter \@namedef{subjclassname@2010}{\textup{2010} Mathematics Subject Classification} \makeatother
\subjclass[2010]{34A36, 34C07, 37G15}
\keywords{Limit cycle; piecewise smooth system; averaging method}
\begin{abstract}
In this article, we study the planar cubic polynomial differential system
\begin{gather*} \dot{x}=-yR(x,y)\\ \dot{y}=xR(x,y) \end{gather*}
where $R(x,y)=0$ is a conic and $R(0,0)\neq 0$. We find a bound for the number of limit cycles which bifurcate from the period annulus of the center, under piecewise smooth cubic polynomial perturbations. Our results show that the piecewise smooth cubic system can have at least 1 more limit cycle than the smooth one.
\end{abstract}
\maketitle \numberwithin{equation}{section} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{remark}[theorem]{Remark} \allowdisplaybreaks
\section{Introduction and statement of main results}
In the qualitative theory of real planar differential systems, one of the important problems is the determination and distribution of limit cycles, such as Hilbert's famous 16th problem and its weak form \cite{Li}. Limit cycles can arise by bifurcation in several different ways. One of the main methods is perturbing a system which has a center via Poincar\'e bifurcation, in such a way that limit cycles of the perturbed system bifurcate from the period annulus of the center of the unperturbed system. This has been studied intensively by perturbing the periodic orbits of the center for Hamiltonian systems (e.g. \cite{GLV}) and for non-Hamiltonian integrable systems (e.g. \cite{LV}). In this article, we consider the non-Hamiltonian integrable system
\begin{equation} \label{e1} \begin{gathered} \dot{x}=-yR(x,y),\\ \dot{y}=xR(x,y), \end{gathered} \end{equation}
where $R(0,0)\neq 0$ and the dot denotes derivatives with respect to the variable $t$. Note that system \eqref{e1} has a first integral $H(x,y)=x^2+y^2$; thus the origin is a center. The problem of studying the number of limit cycles which bifurcate from the period annulus surrounding the origin of system \eqref{e1} under smooth perturbations has been considered intensively, see for instance \cite{LZL} and the references quoted therein. Motivated by non-smooth phenomena in the real world, piecewise smooth systems have been widely investigated in the past few decades \cite{BD,F,K}.
According to the extent of discontinuity, piecewise smooth systems can be classified into the following three classes: piecewise smooth continuous systems, Filippov systems and impacting systems, see \cite{BD}. There are several papers \cite{CGP,GT,LHR,LH,YHH} concerning the number of limit cycles bifurcating from piecewise smooth systems. In \cite{CGP,GT}, applying Lyapunov constants, the authors study the center-focus problem as well as the number of limit cycles which bifurcate from a weak focus for piecewise smooth continuous systems. Bifurcation of limit cycles by perturbing piecewise smooth Hamiltonian systems has been studied in \cite{LHR,LH,YHH}. Generally speaking, piecewise smooth differential systems can have more limit cycles than smooth ones. For piecewise smooth continuous linear systems, the authors in \cite{HZ} showed that piecewise linear differential systems can have 2 limit cycles surrounding the origin. Later on, in \cite{HY}, the authors proved that piecewise smooth linear systems can have 3 limit cycles surrounding a unique equilibrium. In \cite{BPT}, the authors considered higher order piecewise smooth perturbations of a linear center. For piecewise smooth quadratic systems, the authors in \cite{CD} showed that there are piecewise quadratic systems with 9 small amplitude limit cycles. In \cite{LM}, the authors showed that at least 5 limit cycles can bifurcate from quadratic isochronous centers under piecewise smooth quadratic perturbations. Recently, the author in \cite{X} showed that piecewise smooth quadratic isochronous systems can have at least 6 limit cycles. To the best of our knowledge, there are only a few papers considering the number of limit cycles for piecewise smooth cubic systems.

The objective of this article is to study the number of limit cycles that bifurcate from the period annulus surrounding the origin of system \eqref{e1} under piecewise smooth cubic polynomial perturbations. More precisely, for $|\varepsilon|>0$ sufficiently small, we want to find a bound for the number of limit cycles of the piecewise smooth system
\begin{equation} \label{e2} \begin{pmatrix}\dot{x}\\ \dot{y}\end{pmatrix} =\begin{cases} \begin{pmatrix}-yR(x,y)+\varepsilon p_1(x,y)\\ xR(x,y)+\varepsilon q_1(x,y)\end{pmatrix}, & y> 0, \\[4pt] \begin{pmatrix}-yR(x,y)+\varepsilon p_2(x,y)\\ xR(x,y)+\varepsilon q_2(x,y)\end{pmatrix}, & y< 0, \end{cases} \end{equation}
where
\begin{equation} \label{e3} \begin{gathered} p_1(x,y)=a_1x+a_2y+a_3x^2+a_4xy+a_5y^2+a_6x^{3} +a_{7}x^2y+a_{8}xy^2+a_{9}y^{3},\\ q_1(x,y)=b_1x+b_2y+b_3x^2+b_4xy+b_5y^2+b_6x^{3} +b_{7}x^2y+b_{8}xy^2+b_{9}y^{3},\\ p_2(x,y)=c_1x+c_2y+c_3x^2+c_4xy+c_5y^2+c_6x^{3}+c_{7}x^2y +c_{8}xy^2+c_{9}y^{3},\\ q_2(x,y)=d_1x+d_2y+d_3x^2+d_4xy+d_5y^2+d_6x^{3} +d_{7}x^2y+d_{8}xy^2+d_{9}y^{3}, \end{gathered} \end{equation}
and $R(x,y)=0$ is one of the following six conics:
\begin{itemize}
\item[(E)] Ellipse $R(x,y)=\frac{x^2}{a^2}+\frac{y^2}{b^2}-1=0$ with $a>0$, $b>0$.
\item[(CE)] Complex ellipse $R(x,y)=\frac{x^2}{a^2}+\frac{y^2}{b^2}+1=0$ with $a>0$, $b>0$.
\item[(H)] Hyperbola $R(x,y)=\frac{x^2}{a^2}-\frac{y^2}{b^2}-1=0$ with $a>0$, $b>0$.
\item[(CL)] Two complex straight lines intersecting at a real point (the origin) $R(x,y)=a^2x^2+y^2=0$ with $a\neq 0$. In this case, we allow that $R(0,0)=0$.
\item[(RPL)] Two real parallel straight lines $R(x,y)=x^2-a^2=0$ with $a>0$.
\item[(CPL)] Two complex parallel straight lines $R(x,y)=x^2+a^2=0$ with $a>0$.
\end{itemize}
Let $H(3)$ denote the maximum number of limit cycles bifurcating from the period annulus surrounding the origin of the piecewise smooth system \eqref{e2}. From the first order averaging method, we have the following theorem.
\begin{theorem}\label{thm1} Consider system \eqref{e2} with $|\varepsilon|>0$ sufficiently small.
\begin{itemize}
\item[(I)] If $R(x,y)=a^2x^2+y^2$, then $H(3)=2$.
\item[(II)] In the other cases above, $5\leqslant H(3)\leqslant 15$.
\end{itemize}
\end{theorem}
Note that if $a_i=c_i$, $b_i=d_i$, $i=1,2,\dots,9$, then $p_1(x,y)\equiv p_2(x,y)$, $q_1(x,y)\equiv q_2(x,y)$, and system \eqref{e2} is a smooth system. Let $\bar{H}(3)$ denote the maximum number of limit cycles bifurcating from the period annulus surrounding the origin of the smooth system \eqref{e2}. From the first order averaging method, we have the following theorem.
\begin{theorem}\label{thm2} Consider \eqref{e2} with $|\varepsilon|>0$ sufficiently small, and suppose that $a_i=c_i$, $b_i=d_i$, $i=1,2,\dots,9$.
\begin{itemize}
\item[(i)] If $R(x,y)=a^2x^2+y^2$, then $\bar{H}(3)=1$.
\item[(ii)] In the other cases above, $\bar{H}(3)=3$.
\end{itemize}
\end{theorem}
\begin{remark} \label{rmk3} \rm From the above two theorems, we know that the piecewise smooth system \eqref{e2} has at least 1 (resp. 2) more limit cycles than the smooth one for the case $R(x,y)=a^2x^2+y^2$ (resp. the other cases above). In \cite{LV}, the authors considered the case $a_i=c_i$, $b_i=d_i$, $i=1,2,6,7,8,9$ and $a_i=b_i=c_i=d_i=0$, $i=3,4,5$. \end{remark}
This paper is organized as follows. In section 2, we introduce the first order averaging method for discontinuous systems derived from \cite{LNT}. In section 3, we compute the averaged function according to the different kinds of conics. In section 4, we prove Theorems \ref{thm1} and \ref{thm2}. In section 5, we give a conclusion.

\section{Averaging method for discontinuous systems}
In this section we summarize the first order averaging method for discontinuous differential systems as obtained in \cite{LNT}. The original theorem is given for a system of differential equations, but since we will use it only for one differential equation, we state it in this case. For a general introduction to the averaging method, see the book \cite{SV}.

We consider the discontinuous differential equation
\begin{equation} \label{e4} \frac{\mathrm{d}r}{\mathrm{d}\theta}=\varepsilon F(\theta,r)+\varepsilon^2R(\theta,r,\varepsilon), \end{equation}
with
\begin{equation} \label{e5} \begin{gathered} F(\theta,r)=F_1(\theta,r)+\operatorname{sign}(h(\theta,r))F_2(\theta,r),\\ R(\theta,r,\varepsilon)=R_1(\theta,r,\varepsilon) +\operatorname{sign}(h(\theta,r))R_2(\theta,r,\varepsilon), \end{gathered} \end{equation}
where $F_1, F_2: \mathbb{R}\times D\to \mathbb{R}$, $R_1, R_2: \mathbb{R}\times D\times (-\varepsilon_{0}, +\varepsilon_{0})\to \mathbb{R}$ and $h:\mathbb{R}\times D\to \mathbb{R}$ are continuous functions, $T$-periodic in the variable $\theta$, and $D$ is an open subset of $\mathbb{R}$. The sign function is defined as
\[ \operatorname{sign}(u)=\begin{cases} 1 & \text{if } u>0,\\ 0 & \text{if } u=0,\\ -1 & \text{if } u<0. \end{cases} \]
We also assume that $h$ is a $C^{1}$ function having $0$ as a regular value. Denote by $\mathcal{M}=h^{-1}(0)$, by $\Sigma=\{0\}\times D \nsubseteqq \mathcal{M}$, by $\Sigma_{0}=\Sigma\setminus\mathcal{M}\neq \emptyset$, and its elements by $z\equiv (0,z)\notin \mathcal{M}$.
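Note that, by \eqref{e5}, the discontinuity of equation \eqref{e4} is concentrated on the set $\mathcal{M}$; away from $\mathcal{M}$ one simply has
\begin{equation*}
F(\theta,r)=\begin{cases} F_1(\theta,r)+F_2(\theta,r), & h(\theta,r)>0,\\ F_1(\theta,r)-F_2(\theta,r), & h(\theta,r)<0, \end{cases}
\end{equation*}
and analogously for $R(\theta,r,\varepsilon)$.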
Define the averaged function $f:D\to \mathbb{R}$ as
\begin{equation} \label{e6} f(r)=\int_{0}^{T}F(\theta,r)\mathrm{d}\theta. \end{equation}
\begin{theorem}[\cite{LNT}]\label{thm4} Assume the following three conditions are satisfied.
\begin{itemize}
\item[(i)] $F_1, F_2, R_1, R_2$ and $h$ are locally $L$-Lipschitz with respect to $r$.
\item[(ii)] For $a\in \Sigma_{0}$ with $f(a)=0$, there exists a neighborhood $V$ of $a$ such that $f(z)\neq 0$ for all $z\in \bar{V}\setminus \{a\}$ and the Brouwer degree $d_{B}(f,V,0)\neq 0$, see \cite[Appendix A]{BL} for the definition of the Brouwer degree.
\item[(iii)] If $\partial h/\partial \theta \not\equiv 0$, then for all $(\theta,r) \in \mathcal{M}$ we have that $(\partial h/\partial \theta)(\theta,r)\neq 0$; and if $\partial h/\partial \theta\equiv 0$, then $\langle \nabla_{r}h, F_1\rangle^2-\langle \nabla_{r}h, F_2\rangle^2>0$ for all $(\theta,z)\in [0,T]\times\mathcal{M}$, where $\nabla_{r}h$ denotes the gradient of the function $h$ restricted to the variable $r$.
\end{itemize}
Then for $|\varepsilon|>0$ sufficiently small, there exists a $T$-periodic solution $r(\theta,\varepsilon)$ of equation \eqref{e4} such that $r(0,\varepsilon)\to a$ (in the sense of Hausdorff distance) as $\varepsilon\to 0$.
\end{theorem}
To verify hypothesis (ii) of Theorem \ref{thm4}, we have the following remark, see for instance \cite{BL}.
\begin{remark} \label{rmk5} \rm Let $f: D\to \mathbb{R}$ be a $C^{1}$ function with $f(a)=0$, where $D$ is an open subset of $\mathbb{R}$ and $a\in D$. Whenever the Jacobian $J_{f}(a)\neq 0$, there exists a neighborhood $V$ of $a$ such that $f(r)\neq 0$ for all $r\in \bar{V}\setminus \{a\}$. Then $d_{B}(f,V,0)\neq 0$. \end{remark}
By Theorem \ref{thm4} and Remark \ref{rmk5}, if system \eqref{e4} satisfies hypotheses (i) and (iii), then every simple zero of the averaged function $f(r)$ defined by \eqref{e6} provides a limit cycle of system \eqref{e4}. In the next section, we will compute the averaged function case by case.

\section{Averaged function}
Taking polar coordinates $x=r\cos\theta$, $y=r\sin\theta$, $\theta\in(0,2\pi)$, and viewing $\theta$ as the new independent variable, for $|\varepsilon|>0$ sufficiently small we can transform the differential system \eqref{e2} into the equivalent differential equation
\begin{equation} \label{e7} \frac{\mathrm{d}r}{\mathrm{d}\theta} =\begin{cases} \varepsilon X^{+}(\theta,r)+\varepsilon^2 Y^{+}(\theta, r, \varepsilon), &\sin\theta>0,\\ \varepsilon X^{-}(\theta, r)+\varepsilon^2 Y^{-}(\theta, r, \varepsilon), & \sin\theta<0, \end{cases} \end{equation}
where
\begin{equation} \label{e8} \begin{gathered} X^{+}(\theta, r)=\frac{\cos\theta p_1(\theta,r)+\sin\theta q_1(\theta,r)}{R(\theta,r)},\\ Y^{+}(\theta, r, \varepsilon)=\frac{\sin\theta p_1(\theta,r)-\cos\theta q_1(\theta,r)}{r R(\theta,r) -\varepsilon\left(\sin\theta p_1(\theta,r)-\cos\theta q_1(\theta,r)\right)},\\ X^{-}(\theta, r)=\frac{\cos\theta p_2(\theta,r)+\sin\theta q_2(\theta,r)}{R(\theta,r)},\\ Y^{-}(\theta, r, \varepsilon)=\frac{\sin\theta p_2(\theta,r)-\cos\theta q_2(\theta,r)}{r R(\theta,r) -\varepsilon\left(\sin\theta p_2(\theta,r)-\cos\theta q_2(\theta,r)\right)}, \end{gathered} \end{equation}
with
\begin{gather*} R(\theta,r)=R(r\cos\theta,r\sin\theta),\\ p_i(\theta,r)=p_i(r\cos\theta,r\sin\theta),\quad i=1,2,\\ q_i(\theta,r)=q_i(r\cos\theta,r\sin\theta),\quad i=1,2. \end{gather*}
Recall that $p_i(x,y)$, $q_i(x,y)$, $i=1,2$, are given by \eqref{e3}.
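To illustrate how \eqref{e7} and \eqref{e8} are obtained, note that along the orbits of \eqref{e2} in the half plane $\sin\theta>0$ a direct computation gives
\begin{gather*}
\dot{r}=\frac{x\dot{x}+y\dot{y}}{r}
=\varepsilon\big(\cos\theta\, p_1(\theta,r)+\sin\theta\, q_1(\theta,r)\big),\\
\dot{\theta}=\frac{x\dot{y}-y\dot{x}}{r^2}
=R(\theta,r)+\frac{\varepsilon}{r}\big(\cos\theta\, q_1(\theta,r)-\sin\theta\, p_1(\theta,r)\big),
\end{gather*}
so that
\begin{equation*}
\frac{\mathrm{d}r}{\mathrm{d}\theta}
=\frac{\varepsilon r\big(\cos\theta\, p_1(\theta,r)+\sin\theta\, q_1(\theta,r)\big)}
{rR(\theta,r)-\varepsilon\big(\sin\theta\, p_1(\theta,r)-\cos\theta\, q_1(\theta,r)\big)},
\end{equation*}
whose expansion in powers of $\varepsilon$ has first order term $\varepsilon X^{+}(\theta,r)$; the computation in the half plane $\sin\theta<0$ is analogous.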
Denote
\begin{gather*} F_1(\theta,r)=\frac{1}{2}\left(X^{+}(\theta,r)+X^{-}(\theta,r)\right),\\ F_2(\theta,r)=\frac{1}{2}\left(X^{+}(\theta,r)-X^{-}(\theta,r)\right),\\ R_1(\theta,r,\varepsilon)=\frac{1}{2}\left(Y^{+}(\theta,r,\varepsilon) +Y^{-}(\theta,r,\varepsilon)\right),\\ R_2(\theta,r,\varepsilon)=\frac{1}{2}\left(Y^{+}(\theta,r,\varepsilon) -Y^{-}(\theta,r,\varepsilon)\right). \end{gather*}
Then the differential equation \eqref{e7} becomes
\begin{equation} \label{e9} \frac{\mathrm{d}r}{\mathrm{d}\theta}=\varepsilon F(\theta,r)+\varepsilon^2R(\theta,r,\varepsilon), \end{equation}
where
\begin{equation} \label{e10} \begin{gathered} F(\theta,r)=F_1(\theta,r)+\operatorname{sign}(\sin\theta)F_2(\theta,r),\\ R(\theta,r,\varepsilon)=R_1(\theta,r,\varepsilon) +\operatorname{sign}(\sin\theta)R_2(\theta,r,\varepsilon). \end{gathered} \end{equation}
It is obvious that $F(\theta,r)$, $R(\theta,r,\varepsilon)$ and $h(\theta,r)=\sin\theta$ are locally $L$-Lipschitz with respect to $r$. Since $\mathcal{M}=\{(\theta,r):\theta=0 \text{ or } \theta=\pi\}$, we have $\partial h(\theta,r)/\partial\theta=\cos\theta\neq 0$ when $(\theta,r)\in\mathcal{M}$. By Theorem \ref{thm4}, we need to estimate the number of simple zeros of the averaged function
\begin{equation} \label{e11} \begin{aligned} f(r) &=\int_{0}^{2\pi}F(\theta,r)\mathrm{d}\theta\\ &=\int_{0}^{\pi}X^{+}(\theta,r)\mathrm{d}\theta +\int_{\pi}^{2\pi}X^{-}(\theta,r)\mathrm{d}\theta. \end{aligned} \end{equation}
Note that for $r>0$, the zeros of the averaged function $f(r)$ coincide with the zeros of the function $F(r)=rf(r)$. Hence, in order to simplify further computations, we will compute $F(r)$ as follows:
\begin{equation} \label{e12} \begin{aligned} F(r) &=\int_{0}^{\pi}\frac{r\cos\theta p_1(\theta,r) +r\sin\theta q_1(\theta,r)}{R(\theta,r)}\mathrm{d}\theta\\ &\quad+\int_{\pi}^{2\pi}\frac{r\cos\theta p_2(\theta,r) +r\sin\theta q_2(\theta,r)}{R(\theta,r)}\mathrm{d}\theta\\ &=\int_{0}^{\pi}\frac{r\cos\theta p_1(r\cos\theta,r\sin\theta) +r\sin\theta q_1(r\cos\theta,r\sin\theta)}{R(r\cos\theta,r\sin\theta)} \mathrm{d}\theta\\ &\quad-\int_{0}^{\pi}\frac{r\cos\theta p_2(-r\cos\theta,-r\sin\theta) +r\sin\theta q_2(-r\cos\theta,-r\sin\theta)}{R(-r\cos\theta,-r\sin\theta)} \mathrm{d}\theta. \end{aligned} \end{equation}
In the following subsections, we will deduce the averaged function \eqref{e12} case by case.

\subsection{Two complex straight lines case (CL)}
In this subsection, we study system \eqref{e2} with $R(x,y)=a^2x^2+y^2$. Without loss of generality, we can assume that $a>1$; otherwise, we change the variables $x=y_1$, $y=x_1$, $t_1=-a^2t$. Substituting $R(\theta,r)=r^2(a^2\cos^2\theta+\sin^2\theta)$ in \eqref{e12}, we have
\begin{equation} \label{e25} F(r)= k_1+k_2r+k_3r^2, \end{equation}
where $r\in(0,+\infty)$ and
\begin{equation} \label{e26} \begin{aligned} k_1&= \frac{(a_1+c_1+ab_2+ad_2)\pi}{a(a+1)},\\ k_2&= \frac{-2(a_4+b_3-b_5-c_4-d_3+d_5)}{a^2-1}\\ &\quad +\frac{2(a_4+b_3-a^2b_5-c_4-d_3+a^2d_5)}{a^2-1} \arctan \big(\sqrt{a^2-1}\big),\\ k_3&= \frac{\pi((2+a)(a_6+c_6)+a(a_{8}+b_{7}+c_{8}+d_{7})+2a^2(b_{9} +d_{9}))}{2a(a+1)^2}. \end{aligned} \end{equation}
From the above analysis, we obtain the following proposition.
\begin{proposition} \label{prop10} Suppose that $R(x,y)=a^2x^2+y^2$. Then the averaged function \eqref{e25} is a linear combination of the 3 linearly independent functions $1$, $r$, $r^2$, and the coefficients $k_i$, $i=1,2,3$, given by \eqref{e26} can be chosen arbitrarily.
Moreover, if $a_i=c_i$, $b_i=d_i$, $i=1,2,\dots,9$, then the averaged function \eqref{e25} is a linear combination of the 2 linearly independent functions $1$, $r^2$, and the coefficients $k_i$, $i=1,3$, given by \eqref{e26} can be chosen arbitrarily.
\end{proposition}
\begin{proof}
It is obvious that the functions $1$, $r$, $r^2$ are linearly independent. The determinant of the corresponding Jacobian matrix satisfies
\begin{equation*} \det \frac{\partial(k_1,k_2,k_3)}{\partial(a_1,a_4,a_6)} =\frac{(a+2)\pi^2(1-\arctan \sqrt{a^2-1})}{a^2(a-1)(a+1)^{4}}\neq 0. \end{equation*}
Hence the coefficients $k_i$, $i=1,2,3$, given by \eqref{e26} can be chosen arbitrarily, since $a_1$, $a_4$, $a_6$ are arbitrary.
\end{proof}

\subsection{Two real parallel straight lines case (RPL)}
In this subsection, we study system \eqref{e2} with $R(x,y)=x^2-a^2$. Substituting $R(\theta,r)=r^2\cos^2\theta-a^2$ in \eqref{e12}, we have
\begin{equation} \label{e27} F(r)= k_1f_1(r)+k_2f_2(r)+k_3f_3(r)+k_4f_4(r)+k_5f_5(r)+k_6f_6(r), \end{equation}
where $r\in(0,a)$,
\begin{equation} \label{e28} \begin{gathered} f_1(r)= \frac{a}{\sqrt{a^2-r^2}}-1,\quad f_2(r)= a-\sqrt{a^2-r^2},\\ f_3(r)= r^2,\quad f_4(r)= r^2\sqrt{a^2-r^2},\\ f_5(r)= 2r-a\ln \Big(\frac{a+r}{a-r}\Big),\quad f_6(r)= r^2\ln \Big(\frac{a+r}{a-r}\Big), \end{gathered} \end{equation}
and
\begin{equation} \label{e29} \begin{aligned} k_1&= -(a_1+c_1+a^2a_6+a^2c_6)\pi,\\ k_2&= \frac{-\pi}{a}(b_2+d_2+a^2(a_{8}+b_{7}-b_{9}+c_{8}+d_{7}-d_{9})),\\ k_3&= \frac{a_6+a_{8}+b_{7}-3b_{9}+c_6+c_{8}+d_{7}-3d_{9}}{2},\\ k_4&= \frac{b_{9}+d_{9}}{a},\\ k_5&= a_4+b_3-b_5-c_4-d_3+d_5,\\ k_6&= \frac{-b_5+d_5}{a}. \end{aligned} \end{equation}
From the above analysis, we obtain the following proposition.
\begin{proposition} \label{prop11} Suppose that $R(x,y)=x^2-a^2$. Then the averaged function \eqref{e27} is a linear combination of the 6 linearly independent functions $f_i(r)$, $i=1,2,3,4,5,6$, defined by \eqref{e28}, and the coefficients $k_i$, $i=1,2,3,4,5,6$, given by \eqref{e29} can be chosen arbitrarily.
Moreover, if $a_i=c_i$, $b_i=d_i$, $i=1,2,\dots,9$, then the averaged function \eqref{e27} is a linear combination of the 4 linearly independent functions $f_i(r)$, $i=1,2,3,4$, defined by \eqref{e28}, and the coefficients $k_i$, $i=1,2,3,4$, given by \eqref{e29} can be chosen arbitrarily.
\end{proposition}
The proof of the above proposition is similar to that of Proposition \ref{prop7}, so we omit it here.

\subsection{Two complex parallel straight lines case (CPL)}
In this subsection, we study system \eqref{e2} with $R(x,y)=x^2+a^2$. Substituting $R(\theta,r)=r^2\cos^2\theta+a^2$ in \eqref{e12}, we have
\begin{equation} \label{e30} F(r)= k_1f_1(r)+k_2f_2(r)+k_3f_3(r)+k_4f_4(r)+k_5f_5(r)+k_6f_6(r), \end{equation}
where $r\in(0,+\infty)$,
\begin{equation} \label{e31} \begin{gathered} f_1(r)= \frac{a}{\sqrt{a^2+r^2}}-1,\quad f_2(r)= \frac{r^2}{\sqrt{a^2+r^2}},\\ f_3(r)= r^2,\quad f_4(r)= \frac{r^{4}}{\sqrt{a^2+r^2}},\\ f_5(r)= r-a\arctan \big(\frac{r}{a}\big),\quad f_6(r)= r^2\arctan \big(\frac{r}{a}\big), \end{gathered} \end{equation}
and
\begin{equation} \label{e32} \begin{aligned} k_1&= -(a_1-b_2+c_1-d_2-a^2(a_6-a_{8}-b_{7}+b_{9}+c_6-c_{8}-d_{7}+d_{9}))\pi,\\ k_2&= \frac{\pi}{a}(b_2+d_2-a^2(a_{8}+b_{7}-2b_{9}+c_{8}+d_{7}-2d_{9})),\\ k_3&= \frac{a_6+a_{8}+b_{7}-3b_{9}+c_6+c_{8}+d_{7}-3d_{9}}{2},\\ k_4&= \frac{b_{9}+d_{9}}{a},\\ k_5&= 2(a_4+b_3-b_5-c_4-d_3+d_5),\\ k_6&= \frac{2(b_5-d_5)}{a}. \end{aligned} \end{equation}
From the above analysis, we obtain the following proposition.
\begin{proposition} \label{prop12} Suppose that $R(x,y)=x^2+a^2$. Then the averaged function \eqref{e30} is a linear combination of the 6 linearly independent functions $f_i(r)$, $i=1,2,3,4,5,6$, defined by \eqref{e31}, and the coefficients $k_i$, $i=1,2,3,4,5,6$, given by \eqref{e32} can be chosen arbitrarily.
Moreover, if $a_i=c_i$, $b_i=d_i$, $i=1,2,\dots,9$, then the averaged function \eqref{e30} is a linear combination of the 4 linearly independent functions $f_i(r)$, $i=1,2,3,4$, defined by \eqref{e31}, and the coefficients $k_i$, $i=1,2,3,4$, given by \eqref{e32} can be chosen arbitrarily.
\end{proposition}
The proof of the above proposition is similar to that of Proposition \ref{prop7}, so we omit it here.

\section{Proof of the main results}
The tools for estimating the number of zeros of the averaged function $F(r)$ are the following two lemmas.
\begin{lemma}[\cite{CGP1}] \label{lem13} Consider $p+1$ linearly independent analytic functions $f_i: U \to \mathbb{R}$, $i=0,1,\dots,p$, where $U \subset \mathbb{R}$ is an interval. Suppose that there exists $j \in \{0,1,\dots,p\}$ such that $f_j$ has constant sign. Then there exist $p+1$ constants $C_i$, $i=0,1,\dots,p$, such that $f(x)=\sum^{p}_{i=0}C_if_i(x)$ has at least $p$ simple zeros in $U$. \end{lemma}
\begin{lemma}[\cite{LZLZ}] \label{lem14} Suppose that the real function $F(h)\in C^1(U)$ satisfies
\begin{equation} \label{e33} \alpha(h)F'(h)-\beta(h)F(h)=\gamma(h), \end{equation}
where $\alpha(h),\beta(h),\gamma(h)\in C^0(U)$ and $U \subset \mathbb{R}$ is an interval. Then
\begin{equation} \label{e34} \#\{F(h)\}\leqslant 1+\#\{\alpha(h)\}+ \#\{\gamma(h)\}. \end{equation}
Here $\#\{F(h)\}$ denotes the number of zeros of $F(h)$ in $U$, and similarly for $\#\{\alpha(h)\}$ and $\#\{\gamma(h)\}$. \end{lemma}
Although the proof of Lemma \ref{lem14} can be found in \cite{LZLZ}, we include it here for the sake of convenience.
\begin{proof}
Let $h_1$ and $h_2$ be two consecutive zeros of $F(h)$. From \eqref{e33}, we have
\begin{equation*} \alpha(h_1)F'(h_1)=\gamma(h_1),\quad \alpha(h_2)F'(h_2)=\gamma(h_2). \end{equation*}
Since $F'(h_1)F'(h_2)\leqslant 0$ at two consecutive zeros of $F(h)$, this shows that $\gamma(h_1)\gamma(h_2)\leqslant 0$ if $\alpha(h)\neq0$ in $(h_1-\varepsilon,h_2+\varepsilon)$, $0<\varepsilon\ll 1$. Therefore, between any two consecutive zeros of $F(h)$, there must exist at least one zero of $\alpha(h)$ or $\gamma(h)$. The result of the lemma follows.
\end{proof}

\begin{proof}[Proof of Theorem \ref{thm1}]
(I) From \eqref{e25}, $F(r)$ is a polynomial of degree at most $2$ in the variable $r$, so when it does not vanish identically it has at most $2$ zeros in the interval $(0,+\infty)$. Since the coefficients are arbitrary, this bound is sharp; for instance, choosing $k_1=2$, $k_2=-3$, $k_3=1$ gives $F(r)=(r-1)(r-2)$, which has the two simple zeros $r=1$ and $r=2$. From the first order averaging method of Theorem \ref{thm4}, system \eqref{e2} has at most $2$ limit cycles bifurcating from the period annulus surrounding the origin, and there are systems \eqref{e2} with exactly $2$ such limit cycles. Thus $H(3)=2$, which completes the proof of statement (I).

(II) Consider first the case $R(x,y)=\frac{x^2}{a^2}+\frac{y^2}{b^2}-1$. We begin with the lower bound for the number of zeros of $F(r)$, $r\in(0,r_{0})$; in this case $r_{0}=a$. By Proposition \ref{prop7}, $F(r)$ is a linear combination of the $6$ linearly independent generating functions $f_i(r)$, $i=1,2,3,4,5,6$, defined by \eqref{e16}, with arbitrary coefficients. All these functions are analytic in $(0, r_{0})$, and some of them are strictly positive in this interval; hence, by Lemma \ref{lem13}, $F(r)$ can have at least $5$ simple zeros in $(0, r_{0})$.
Now we find an upper bound for the number of zeros of $F(r)$ in $(0,r_{0})$. From \eqref{e15}, we have
\begin{align*} F(r)&=k_1f_1(r)+k_2f_2(r)+k_3f_3(r)+k_4f_4(r)+k_5f_5(r)+k_6f_6(r)\\ &=\frac{1}{\sqrt{a^2-r^2}\sqrt{b^2-r^2}}G(r), \end{align*}
where
\begin{equation} \label{e35} \begin{aligned} G(r)&= abk_1+k_2r^2+k_3r^{4}\\ &\quad +\sqrt{a^2-r^2}\sqrt{b^2-r^2}\left(-k_1-2\sqrt{b^2-a^2}k_5r+k_4r^2\right)\\ &\quad +\sqrt{a^2-r^2}\ln \Big(\frac{a\sqrt{b^2-r^2}+\sqrt{b^2-a^2}r}{a\sqrt{b^2-r^2}-\sqrt{b^2-a^2}r}\Big) \left(ab^2k_5+k_6r^2\right). \end{aligned} \end{equation}
Recall that $r\in(0,a)$ and $a