Find Non Trivial Solution of Eigenvalue
Nontrivial Solution
Basic Methods for Image Restoration and Identification
Reginald L. Lagendijk, Jan Biemond, in Handbook of Image and Video Processing (Second Edition), 2005
4.2 Maximum Likelihood Blur Estimation
If the point-spread function does not have characteristic spectral zeros, or if a parametric blur model such as motion or out-of-focus blur cannot be assumed, the individual coefficients of the point-spread function have to be estimated. To this end, maximum likelihood estimation procedures for the unknown coefficients have been developed [9, 12, 13, 18]. Maximum likelihood estimation is a well-known technique for parameter estimation in situations where no stochastic knowledge is available about the parameters to be estimated [15].
Most maximum likelihood identification techniques begin by assuming that the ideal image can be described with the 2D auto-regressive model (20a). The parameters of this image model, that is, the prediction coefficients a_{i,j} and the variance σ_v² of the white noise v(n_1, n_2), are not necessarily assumed to be known.
If we can assume that both the observation noise w(n_1, n_2) and the image model noise v(n_1, n_2) are Gaussian distributed, the log-likelihood function of the observed image, given the image model and blur parameters, can be formulated. Although the log-likelihood function can be formulated in the spatial domain, its spectral version is slightly easier to compute [13]:
(35a) L(θ) = − Σ_{(u,v)} [ log P(u, v) + |G(u, v)|² / P(u, v) ]
where θ symbolizes the set of parameters to be estimated, i.e., θ = {a_{i,j}, σ_v², d(n_1, n_2), σ_w²}, and P(u, v) is defined as
(35b) P(u, v) = σ_v² |D(u, v)|² / |1 − A(u, v)|² + σ_w²
Here A(u, v) is the discrete 2D Fourier transform of a_{i,j}, and D(u, v) and G(u, v) are the transforms of the point-spread function and the observed image, respectively.
The objective of maximum likelihood blur estimation is now to find those values of the parameters a_{i,j}, σ_v², d(n_1, n_2), and σ_w² that maximize the log-likelihood function L(θ). From the perspective of parameter estimation, the optimal parameter values are those that best explain the observed degraded image. A careful analysis of (35) shows that the maximum likelihood blur estimation problem is closely related to the identification of 2D auto-regressive moving-average (ARMA) stochastic processes [13, 16].
The maximum likelihood estimation approach has several problems that require non-trivial solutions. In fact, state-of-the-art blur identification procedures differ mostly in the way they handle these problems [11]. In the first place, some constraints must be enforced in order to obtain a unique estimate for the point-spread function. Typical constraints are:
- the energy conservation principle, as described by Equation (5b);
- symmetry of the point-spread function of the blur, i.e., d(−n_1, −n_2) = d(n_1, n_2).
Secondly, the log-likelihood function (35) is highly nonlinear and has many local maxima. This makes the optimization of (35) difficult, no matter which optimization procedure is used. In general, maximum likelihood blur identification procedures require good initializations of the parameters to be estimated in order to ensure convergence to the global optimum. Alternatively, multi-scale techniques could be used, but no "ready-to-go" or "best" approach has been agreed upon so far.
Given reasonable initial estimates for θ, various approaches exist for the optimization of L(θ). They share the property of being iterative. Besides standard gradient-based searches, an attractive alternative exists in the form of the expectation-maximization (EM) algorithm. The EM algorithm is a general procedure for finding maximum likelihood parameter estimates. When applied to the blur identification problem, an iterative scheme results that consists of two steps [12, 18] (see Fig. 13):
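The spectral log-likelihood can be evaluated directly. The NumPy sketch below assumes the common Whittle-type form L(θ) = −Σ [log P + |G|²/P] with P as in the ARMA spectrum described above; the exact normalization used in [13] may differ, and the function name is ours:

```python
import numpy as np

def log_likelihood(g, d, a, sigma_v2, sigma_w2):
    """Spectral (Whittle-type) log-likelihood for ML blur estimation.

    g        : observed degraded image (2D array)
    d        : point-spread function, zero-padded to g.shape
    a        : AR prediction coefficients, zero-padded to g.shape
    sigma_v2 : variance of the image-model noise v
    sigma_w2 : variance of the observation noise w
    """
    G = np.fft.fft2(g)
    D = np.fft.fft2(d)
    A = np.fft.fft2(a)
    # ARMA power spectrum of the degraded image: the AR image spectrum
    # shaped by the blur, plus the white observation-noise floor.
    P = sigma_v2 * np.abs(D) ** 2 / np.abs(1.0 - A) ** 2 + sigma_w2
    # Gaussian log-likelihood per spectral sample (additive constants
    # dropped, periodogram normalized by the image size).
    return -np.sum(np.log(P) + np.abs(G) ** 2 / (P * g.size))
```

In a gradient search or EM loop, this function would be the objective evaluated at each candidate θ.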
URL:
https://www.sciencedirect.com/science/article/pii/B9780121197926500747
Boundary Control Method and Inverse Problems of Wave Propagation
M.I. Belishev, in Encyclopedia of Mathematical Physics, 2006
Spectral Problem
The Dirichlet homogeneous boundary-value problem is to find nontrivial solutions of the system
[10]
[11]
This problem is equivalent to the spectral analysis of the operator L; it has a discrete spectrum, and the eigenfunctions form an orthonormal basis.
Expanding the solutions of the problem (1)–(3) over the eigenfunctions of the problem [10], [11], one derives the spectral representation of waves:
[12]
where
Thus, for a given control f, the Fourier coefficients of the wave are determined by the spectrum and the derivatives of the eigenfunctions.
URL:
https://www.sciencedirect.com/science/article/pii/B0125126662003473
Variational Methods in Turbulence
F.H. Busse, in Encyclopedia of Mathematical Physics, 2006
Variational Problem for Turbulent Momentum Transport
In order to introduce the variational method for bounds on turbulent transports, we consider the simplest configuration for which a nontrivial solution of the Navier–Stokes equations (NSEs) of motion exists: the configuration of plane Couette flow (Figure 1). The Reynolds number is defined in this case in terms of the constant relative motion between the plates, where i is the unit vector parallel to the plates and ν is the kinematic viscosity of the fluid. Using the distance d between the plates as the length scale and d²/ν as the timescale, the basic equations can be written in the form
[6]
[7]
We use a Cartesian system of coordinates with the x- and z-coordinates in the directions of i and k, respectively, where k is the unit vector normal to the plates, such that the boundary conditions are given by
[8]
After separating the velocity field v into its mean and fluctuating parts, where the bar denotes the average over planes of constant z, we obtain, by multiplying eqn [6] by the fluctuating part and averaging over the entire fluid layer (indicated by angular brackets),
[9]
Here u denotes the component of the fluctuating velocity perpendicular to k, and w is its z-component. We define fluid turbulence under stationary conditions by the property that quantities averaged over planes of constant z are time independent. Accordingly, the equation for the mean flow U can be integrated to yield
[10]
where the boundary condition [8] has been employed. With this relationship, U can be eliminated from the problem, and the energy balance
[11]
is obtained where the identity has been used.
Since the momentum transport in the x-direction between the moving rigid plates is described by M, we can conclude immediately that the momentum transport by turbulent flow always exceeds the corresponding laminar value, because the fluctuation term is positive according to the relationship [11]. Since a lower bound on M thus exists, an upper bound μ on M as a function of Re is of primary interest. Following Howard (1963), it can be shown that M is a monotonic function, and it is therefore equivalent to ask for a lower bound R of Re at a given value μ of M. We are thus led to the following formulation of the variational problem:
Find the minimum of the functional
[12]
among all solenoidal vector fields that satisfy the boundary condition at the plates and the condition
The Euler–Lagrange equations, as necessary conditions for an extremal value of the functional, are given by
[13]
[14]
where is defined by
[15]
and where has been set. When eqns [13]–[15] are compared with the equations for the fluctuating field and for U, a strong similarity can be noticed. The variational problem does not exhibit any time dependence, but the Euler–Lagrange equations may still be regarded as the symmetric analogue of the NSEs for steady flow.
URL:
https://www.sciencedirect.com/science/article/pii/B0125126662002534
Disease Modelling and Public Health, Part A
Stefan Engblom, Stefan Widgren, in Handbook of Statistics, 2017
3.2 Equilibrium Behavior
We start by looking at equilibrium solutions of (29), that is, solutions (S, I, φ) such that the time derivative in (29) vanishes. By assumption φ ≥ 0, and we shall call [S, I, φ] = [Σ, 0, 0] the trivial equilibrium solution. Assuming it is positive,
(32)
is another nontrivial equilibrium solution for which
(33)
There are no other stationary solutions, as is easily checked. The stability of the nontrivial solution can be investigated by inspecting the eigenvalues of the Jacobian; using the invariant to reduce the system simplifies the process considerably. The reduced system is
(34)
The associated Jacobian is given by
(35)
where a stationary solution is to be substituted for [I, φ]. The sum of the eigenvalues is given by the trace,
(36)
and is clearly < 0. The eigenvalue product is given by the determinant,
(37)
By comparing (32) and (36)–(37) we find the following result by determining the signs of the eigenvalues.
Proposition 1
If there is a positive nontrivial stationary solution (32), then the trivial solution φ = 0 is unstable and the nontrivial solution (32) is stable. If no positive nontrivial stationary solution exists, then the trivial solution is stable.
The eigenvalues associated with the nontrivial equilibrium are given by
(38)
where the asymptotic behavior can be used to understand how the parameters set the time scale of the convergence to equilibrium. Samples of the ODE model around the equilibrium are shown in Fig. 4.
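Since Eqs. (34)–(37) are not reproduced in this excerpt, the numerical check below uses a hypothetical reduced system, chosen only to be consistent with the equilibria quoted in the text (I* fixed by the difference 1/β − γ/υ, and φ* = I*/(βΣ)); the trace/determinant stability test itself is general for 2 × 2 Jacobians:

```python
import numpy as np

# Hypothetical reduced SIS-E dynamics (an assumption, not the chapter's
# exact Eq. (34)) with equilibrium I* = Sigma*(1 - beta*gamma/upsilon),
# phi* = I*/(beta*Sigma), matching the text.
def rhs(x, Sigma, upsilon, gamma, alpha, beta):
    I, phi = x
    dI = upsilon * phi * (Sigma - I) - gamma * I
    dphi = alpha * (I / Sigma - beta * phi)
    return np.array([dI, dphi])

def jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

Sigma, upsilon, gamma, alpha, beta = 100.0, 2.0, 1.0, 1.0, 1.0
I_star = Sigma * (1 - beta * gamma / upsilon)   # nontrivial equilibrium
phi_star = I_star / (beta * Sigma)
f = lambda x: rhs(x, Sigma, upsilon, gamma, alpha, beta)
J = jacobian(f, np.array([I_star, phi_star]))
# Stable iff tr(J) < 0 and det(J) > 0 (both eigenvalues in the left
# half-plane), exactly the trace/determinant argument of (36)-(37).
print(np.trace(J) < 0 and np.linalg.det(J) > 0)   # stable
```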
Without further ado we present the surprising result that
Proposition 2
In the stochastic case (26)–(28), for a finite population Σ, let N = N(ω) be the number of distinct occasions on which a sample outcome trajectory [S, I, φ](t; ω), t ≥ 0, satisfies I = 0. Then P(N ≤ n) ≥ 1 − (1 − p)^n for some value p. For Σ = 1, a simple bound is
(39)
Hence, in the stochastic version of the model, contrary to the ODE interpretation, the infection is expected to go extinct in finite time. The trivial equilibrium solution is the only stable solution. Informally, we expect the stochastic model to evolve around the deterministic equilibrium (32) for some time before it eventually hits the state I = 0, where with a certain probability p it gets stuck indefinitely.
Proof
We first prove this for the boundary case Σ = 1, including the estimate (39), and we then make a straightforward generalization to any finite population Σ.
Without loss of generality, we can assume that φ ≤ 1/β. Suppose that at t = 0, I(0) = 0 and φ(0) = 1/β. Let p be the probability that there is no time τ such that I(τ) = 1.
We have that τ is exponentially distributed with an intensity that varies with time. Namely, as long as I = 0, we can solve for φ explicitly and have that τ ∼ Exp(υφ(t)). One can sample such a τ by letting u ∼ U(0, 1) and solving for τ the equation
We find
Clearly, there is no finite τ unless this inequality is satisfied and we can thus estimate the probability that the solution τ is finite,
In summary, with probability p there is no finite τ and the infection goes extinct. With probability 1 − p, instead, I(τ) = 1 for some finite τ, and after an exponentially distributed waiting time we arrive again at the condition that I(0) = 0 and φ(0) ≤ 1/β, shifting units of time appropriately. Because φ is generally smaller for each such "attempt at extinction," the total number of trials is dominated by a negative binomial distribution NB(1, 1 − p) (see Fig. 5).
For Σ > 1 the system will take longer excursions away from the absorbing state I = 0, but will eventually arrive again at the critical conditions that I(0) = 0 and φ(0) ≤ 1/β such that the infection in this case goes extinct with probability greater than
(40)
The lower bound (40) is pessimistic as it assumes that φ attains the upper bound 1/β right before the event I = 0. Relaxing this to φ ∼ 1/(βΣ), the steady-state when I = 1, we get instead the approximation
(41)
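The extinction behavior is easy to reproduce by simulation. The sketch below uses a plain Gillespie SIS model without the environmental compartment φ (a simplifying assumption for brevity, not the chapter's model (26)–(28)); it retains the essential feature that I = 0 is absorbing and is hit in finite time:

```python
import numpy as np

def sis_extinction_time(Sigma, upsilon, gamma, rng):
    """Gillespie simulation of a plain stochastic SIS model, started
    fully infected; returns the time at which I first hits 0."""
    I, t = Sigma, 0.0
    while I > 0:
        rate_inf = upsilon * I * (Sigma - I) / Sigma   # new infection
        rate_rec = gamma * I                           # recovery
        total = rate_inf + rate_rec
        t += rng.exponential(1.0 / total)              # waiting time
        I += 1 if rng.random() < rate_inf / total else -1
    return t

rng = np.random.default_rng(1)
times = [sis_extinction_time(10, 1.5, 1.0, rng) for _ in range(200)]
print(np.mean(times))   # every run terminates: extinction in finite time
```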
It remains to analyze the time until first hitting the infection-free state. Due to the time-dependent φ, this problem is difficult to approach, but a suitable approximation may be constructed as follows. Consider the frozen-coefficient model
(42)
where φ is frozen at its equilibrium value. The rationale here is the assumption that φ relaxes approximately to equilibrium between the infectious events. Since I = 0 is an absorbing state, and by the simplicity of the model, an analytic distribution for the first hitting time can be developed. Define the generator Q of the process (42) (Brémaud, 1999). Then, starting at I(0) = Σ, the distribution of the time τ until first hitting I(τ) = 0 can be obtained as a sum of Σ exponential distributions with parameters λ_i, where the λ_i are the positive eigenvalues of −Q (Gong et al., 2012):
(43)
This result is illustrated in Fig. 6.
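The eigenvalue representation of the hitting time can be verified numerically. The sketch below assumes a birth-death form for the frozen-coefficient model (42) (the transition rates are our assumption, not the chapter's); the mean of the resulting phase-type time, Σ_i 1/λ_i, must agree with the classical mean-hitting-time linear system Q m = −1:

```python
import numpy as np

# Sub-generator on the transient states {1, ..., Sigma} of an assumed
# frozen-coefficient birth-death chain: infection at rate
# upsilon_phi*(Sigma - I), recovery at rate gamma*I; state 0 absorbs.
def sub_generator(Sigma, upsilon_phi, gamma):
    Q = np.zeros((Sigma, Sigma))
    for k in range(1, Sigma + 1):
        up = upsilon_phi * (Sigma - k)     # I -> I + 1
        down = gamma * k                   # I -> I - 1 (to 0 if k == 1)
        i = k - 1
        if k < Sigma:
            Q[i, i + 1] = up
        if k > 1:
            Q[i, i - 1] = down
        Q[i, i] = -(up + down)             # probability mass leaks to 0
    return Q

Sigma, upsilon_phi, gamma = 10, 0.3, 1.0
Q = sub_generator(Sigma, upsilon_phi, gamma)
lam = np.linalg.eigvals(-Q)                # positive eigenvalues of -Q
# tau ~ sum of Exp(lambda_i), so E[tau] = sum(1/lambda_i).  Cross-check
# against the mean hitting times m solving Q m = -1.
mean_from_eigs = np.sum(1.0 / lam).real
m = np.linalg.solve(Q, -np.ones(Sigma))    # m[k-1] = E[tau | I(0) = k]
print(np.isclose(mean_from_eigs, m[-1]))   # True: agreement from I(0)=Sigma
```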
We may finally combine the two estimates developed: the probability of extinction per event I = 0, Eq. (41), and the first hitting time at I = 0, Eq. (43). To obtain an approximate bound on the time until extinction τ, we study the following simplified logic: as initial conditions we take the fully infected case φ(0) = 1/β and I(0) = Σ. Informally, should the first attempt at extinction not be a success, we put (φ, I) ↦ (1/β, Σ) and make a new attempt. Given that we are back at the original situation, the result for the extinction time τ is an infinite series:
(44)
with each τ i independent and distributed according to (43). The distribution for τ can be handled numerically but for the present purposes we are satisfied with the analytic result that
(45)
Analytic formulas for the eigenvalues appear to be difficult to obtain, but a linear dependency on the population size Σ is clearly present in Fig. 7.
3.2.1 Conclusions: the SIS E Model
In the ODE interpretation of the model, the difference 1/β − γ/υ determines the nontrivial equilibrium (32), which is stable if it exists and is positive. The rate at which the equilibrium is approached is in turn determined by (38). Importantly, assuming that most observations are obtained from near the equilibrium, only one of the parameters (β, γ, υ) can be expected to be observed from data without using additional prior information.
Contrary to the ODE case, the stochastic model is stable at the trivial solution, and the model displays extinction of the infection in finite time. Some approximations and the numerical evidence of Fig. 7 reveal that the time until extinction scales linearly with Σ, the total population size. Conversely, a collection of isolated populations which interact via exchange of individuals and which sustain the infection for a certain period of time can be meaningfully understood as a single population of a certain effective size.
URL:
https://www.sciencedirect.com/science/article/pii/S0169716117300056
Constitutive Models for Engineering Materials
Kaspar J. Willam, in Encyclopedia of Physical Science and Technology (Third Edition), 2003
Eigenvalues and Eigenvectors
There exists a nonzero vector x such that the linear transformation σ · x is a multiple of x:
(206) σ · x = λ x
Note: The eigenvectors x_i span the triad of principal directions, and the eigenvalues λ_i define the three principal values of stress.
1. Characteristic polynomial: The eigenvalue problem is equivalent to stating that:
(207) (σ − λ_i I) · x = 0
For a nontrivial solution x ≠ 0, (σ − λ_i I) must be singular. Consequently,
(208) det(σ − λ_i I) = 0
generates the characteristic polynomial
(209) p(λ) = det(λ I − σ) = 0
the roots of which are the eigenvalues λ(σ). According to the fundamental theorem of algebra, a polynomial of degree 3 has exactly 3 roots; thus each matrix σ ∈ ℜ^{3×3} has 3 eigenvalues. Note: All three eigenvalues are real as long as σ = σ^T is symmetric, which is the case for nonpolar materials because of the conjugate shear stresses σ_ij = σ_ji.
2. Cayley–Hamilton theorem: This theorem states that every square matrix satisfies its own characteristic equation. In other words, the scalar polynomial p(λ) = det(λ I − σ) also holds for the stress polynomial p(σ). One important application of the Cayley–Hamilton theorem is to express powers of the stress tensor σ^k, for k ≥ 2, as a linear combination of the irreducible bases I, σ, σ².
3. Spectral properties of a rank-one update of the unit tensor: Spectral analysis of a square matrix generated by a rank-one update of the second-order unit tensor,
(210) B = I + a ⊗ b
reduces to the eigenvalue analysis
(211) (B − λ(B) I) · x = 0
The eigenvalues and eigenvectors of B = I + a ⊗ b are related to the eigenpairs of the update matrix a ⊗ b; that is,
(212) λ_k(B) = 1 + λ_k(a ⊗ b)
In the case of a single rank-one update of the unit matrix, we find λ_1(B) = 1 + λ and λ_k(B) = 1 ∀ k = 2, 3, …, n, with the determinant det(B) = det(I + a ⊗ b) = 1 + a · b and λ = a · b.
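The rank-one spectral result is easy to confirm numerically; a quick NumPy check (ours, not from the chapter) that I + a ⊗ b has one eigenvalue 1 + a · b, the remaining n − 1 equal to 1, and determinant 1 + a · b:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
a, b = rng.standard_normal(n), rng.standard_normal(n)
B = np.eye(n) + np.outer(a, b)     # rank-one update of the unit tensor
lam = np.linalg.eigvals(B)

# Matrix determinant lemma: det(I + a (x) b) = 1 + a.b.
print(np.isclose(np.linalg.det(B), 1 + a @ b))
# One eigenvalue equals 1 + a.b; the other n - 1 equal 1.
print(np.any(np.isclose(lam, 1 + a @ b)))
```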
URL:
https://www.sciencedirect.com/science/article/pii/B0122274105001356
Visual exploration of data through their graph representations
George Michailidis, in Recent Advances and Trends in Nonparametric Statistics, 2003
3 GRAPH LAYOUT
The problem of graph drawing/layout has received a lot of attention from various scientific communities. Simply put, the problem is defined as follows: given a set of nodes connected by a set of edges, calculate the position of the nodes and the curve to be drawn for each edge. This simple description also reveals the intricacy of the problem: which space to use for the positions, and what type of curves. For example, grid layouts position the nodes at points with integer coordinates, while hyperbolic layouts [8] embed the points on a sphere. Most layouts use straight lines for drawing the edges, but some use curves of a certain degree [7]. Many layout algorithms try to impose a set of aesthetic rules on the final drawing: for example, nodes and edges must be evenly distributed, edges should all have the same length, edge crossings should be kept to a minimum, etc. Some of these rules clearly apply to certain graphs and/or are important in certain applications, while others have a more 'absolute' character [9]. Furthermore, each of the rules defines an associated optimization problem, and some of them, such as edge-crossing minimization, are computationally intractable except for very small graphs [7]. In addition, a major problem that needs to be addressed in graph visualization is the size of the graph. Few layout algorithms can deal effectively with thousands of nodes, although graphs of such size appear in a wide variety of applications. Some systems that use a combination of optimization algorithms and heuristics to handle such large graphs are NicheWorks [10], GVF [11] and H3Viewer [12].
A popular multivariate analysis technique that can be used for graph drawing purposes is Multidimensional Scaling (MDS) [13]. Its objective is to embed the vertices in a Euclidean space of appropriate dimensionality so that the Euclidean distances between the points that represent the nodes approximate well the path-length distances defined between the vertices in the original graph. The quality of the resulting representation is measured by an appropriate fit (loss) function.
In our approach, we adopt primarily the adjacency model; i.e., we do not emphasize graph-theoretical distances, but we pay special attention to which vertices are adjacent and which are not. Obviously, this is related to distance, but the emphasis is different. We also use a fit (loss) function to measure the quality of the resulting embedding.
Define the fit function
(1) σ(Z) = Σ_{i<j} a_{ij} d_{ij}(Z)
where Z is an n × s matrix containing the coordinates of the n nodes in the s-dimensional Euclidean space and d_{ij}(Z) denotes the distance between the s-dimensional points z_i and z_j. We are interested in minimizing (1); for a simple graph, the end result is that nodes sharing many interconnections end up close together in the layout, while nodes with few connections are expected to lie on the periphery of the drawing. For a weighted graph, the larger the weights, the stronger the bond between the connected nodes, and hence the closer together they should be drawn. Thus, in this formulation, unlike the MDS one, the number and strength of bonds is the main factor that determines the final layout.
In this study, we restrict attention to squared Euclidean distances, i.e., d_{ij}(Z) = ||z_i − z_j||². Then, some straightforward algebra shows that (1) can be written as
(2) σ(Z) = tr(Z′LZ)
where L is an n × n matrix given by L = D − A, with D containing the row sums of the adjacency matrix A. The matrix L is known in the graph theory literature as the Laplacian matrix of the graph [14]. In [5], the graph drawing problem under other distance functions, such as the ℓ_h, h ∈ [1, 2), is considered.
In order to avoid the trivial solution Z = 0 (all nodes collapsing to the same point in R^s), we impose the following normalization constraint
(3) Z′DZ = I_s, u′DZ = 0
where u is a column vector of ones. The second constraint centers the layout of the graph, while the first one provides non-trivial solutions. Other possible normalizations of the general ϕ fit function are discussed in [5]. Some routine algebra shows that minimizing σ(Z) subject to (3) is equivalent to maximizing
(4) tr(Z′AZ)
The solution to this new problem is known to be given by the s + 1 largest generalized eigenvectors of A, which can be computed in operations [15]; the largest one, corresponding to the eigenvalue λ = 1, is a constant vector and must be discarded. Notice that the solution is not orthogonal in Euclidean space but in the Euclidean space weighted by D. The spectrum of the problem at hand is 1 = λ_1 ≥ λ_2 ≥ … ≥ λ_n ≥ −1, since the matrix D^{−1}A, which possesses the same spectrum, is a stochastic matrix. Other properties of the solution that provide information about the underlying graph (e.g., connected, bipartite, etc.), as well as approximate colorings based on the eigenvectors corresponding to the largest negative eigenvalues, are discussed in [16].
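The construction above amounts to solving the generalized eigenproblem A z = λ D z. A minimal NumPy sketch (the function name is ours, not from the paper), using the standard symmetrization D^{−1/2} A D^{−1/2} and assuming a connected graph with no isolated nodes:

```python
import numpy as np

def spectral_layout(A, s=2):
    """Layout from the s largest nontrivial generalized eigenvectors of
    A z = lambda D z; the top one (constant, lambda = 1) is discarded.
    The result is D-orthonormal rather than Euclidean-orthonormal."""
    d = A.sum(axis=1)                         # degrees (row sums), all > 0
    Dm12 = np.diag(1.0 / np.sqrt(d))
    # Symmetrized problem: D^(-1/2) A D^(-1/2) y = lambda y, z = D^(-1/2) y.
    lam, Y = np.linalg.eigh(Dm12 @ A @ Dm12)
    order = np.argsort(lam)[::-1]             # spectrum lies in [-1, 1]
    return Dm12 @ Y[:, order[1:s + 1]]        # drop trivial lambda = 1

# 6-cycle: the layout recovers a regular hexagon up to scaling/rotation.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1
Z = spectral_layout(A)
print(Z.shape)   # (6, 2)
```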
In Figures 5 and 6, the graph layouts, obtained by the method described above, of the protein interactions data set and of a correlation matrix of protein expression data from three different experimental conditions are shown. It can be seen that the proteins PIB1 and BET1, which have very few interactions, are located on the periphery of the drawing, as expected. Moreover, the solution places the 'hub' proteins TLG1 and YIP1 close to the center of the plot and reveals a clustering structure in the data. The layout of the correlation matrix reveals a strong clustering pattern between the three experimental conditions, but also more variability (smaller correlations and thus looser bonds) within one set of conditions (the one depicted in the upper left corner of the plot).
3.1 The special case of bipartite graphs
As noted earlier, the graph representations of a contingency table and of a categorical data set have a special feature, namely that the set of nodes can be naturally partitioned into two subsets: the categories of each variable in the former case, and the subset of nodes representing the objects (sleeping bags) together with the subset of nodes representing the categories of all the variables in the latter, thus giving rise to a bipartite graph. Let Z = [X′ Y′]′, where X contains the coordinates of one subset of the nodes (e.g., the objects) and Y the coordinates of the remaining subset (e.g., the categories of the variables). The fit function can then be written as (given the special structure of the adjacency matrix A)
(5)
where D_Y is a diagonal matrix containing the column sums of W and D_X another diagonal matrix containing the row sums of W. In the case of a contingency table, both D_Y and D_X contain the marginal frequencies of the two variables, while for a categorical data set D_Y = diag(W′W) contains the univariate marginals of the categories of all the variables and D_X = J·I is a constant multiple of the identity matrix, with J being the number of variables in the data set. By adopting an analogous normalization constraint, namely X′D_X X = I_s, we can solve the resulting optimization problem through a block relaxation algorithm [17] as follows:
- Step 1: update Y for fixed X.
- Step 2: update X for fixed Y.
- Step 3: Orthonormalize X using the Gram–Schmidt procedure [6].
It can be seen that the solution at the optimum satisfies the property that category points are at the center of gravity of the objects belonging to a particular category. This is known in the literature as the centroid principle [6, 17].
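A sketch of the block relaxation loop follows. The update formulas for Steps 1 and 2, not reproduced in the excerpt, are assumed here to be the usual centroid updates of homogeneity analysis, and Gram–Schmidt is replaced by an equivalent Cholesky-based orthonormalization in the D_X inner product:

```python
import numpy as np

def bipartite_layout(W, s=1, iters=200, seed=0):
    """Alternating solver for the bipartite fit function: Y-update,
    X-update, then renormalization so that X' D_X X = I_s and
    u' D_X X = 0.  Steps 1-2 are the assumed centroid updates."""
    rng = np.random.default_rng(seed)
    dx = W.sum(axis=1)                       # diagonal of D_X
    dy = W.sum(axis=0)                       # diagonal of D_Y
    X = rng.standard_normal((W.shape[0], s))
    for _ in range(iters):
        Y = (W.T @ X) / dy[:, None]          # Step 1: category centroids
        X = (W @ Y) / dx[:, None]            # Step 2: object scores
        X -= (dx @ X) / dx.sum()             # center: u' D_X X = 0
        M = (X * dx[:, None]).T @ X          # X' D_X X
        X = X @ np.linalg.inv(np.linalg.cholesky(M)).T   # Step 3
    return X, Y

# Toy indicator matrix: 4 objects x 4 categories (2 binary variables).
W = np.array([[1., 0., 1., 0.],
              [1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 1., 1., 0.]])
X, Y = bipartite_layout(W, s=1)
```

After convergence, each row of Y is the weighted average of the object scores in that category, which is exactly the centroid principle.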
In Figure 7, the graph layout of the sleeping bags data set is shown. The solution captures the presence in the data set of good, expensive sleeping bags filled with down fibers and of cheap, bad-quality sleeping bags filled with synthetic fibers, and the absence of bad, expensive sleeping bags. It also shows that there are some sleeping bags of intermediate quality and price, filled either with down or synthetic fibers. The centroid principle proves useful in the interpretation of the graph layout.
URL:
https://www.sciencedirect.com/science/article/pii/B9780444513786500120
Elements of linear algebra
Giovanni Romeo, in Elements of Numerical Mathematical Economics with Excel, 2020
Theorem 3
Let us build the following bordered matrix from A and B:
Let be the leading principal (upper-leftmost) minor of order h.
i. A necessary and sufficient condition for Q(x) = x^T A x to be positive definite for each nontrivial solution of the system Bx = [0] is that:
ii. A necessary and sufficient condition for Q(x) = x^T A x to be negative definite for each nontrivial solution of the system Bx = [0] is that the sequence:
alternate in sign, with the last element having the same sign as (−1)^n.
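The minor test of Theorem 3 can be written out directly. The sketch below implements the standard bordered-matrix criterion, which is our reading of the condition elided in part (i): for m constraints and n variables, the last n − m leading principal minors of the bordered matrix must all have sign (−1)^m:

```python
import numpy as np

def positive_definite_on_nullspace(A, B):
    """Check whether Q(x) = x'Ax is positive definite for every
    nontrivial solution of Bx = 0 (assumed standard criterion: with m
    constraints and n variables, the leading principal minors of the
    bordered matrix of orders 2m+1, ..., m+n all have sign (-1)^m)."""
    m, n = B.shape
    H = np.block([[np.zeros((m, m)), B],
                  [B.T, A]])
    minors = [np.linalg.det(H[:h, :h]) for h in range(2 * m + 1, m + n + 1)]
    return all(np.sign(d) == (-1) ** m for d in minors)

# A = I is positive definite everywhere, hence on any constraint set.
A = np.eye(3)
B = np.array([[1.0, 1.0, 1.0]])
print(positive_definite_on_nullspace(A, B))   # True
```

As a counterexample, A = diag(1, 1, −1) with the same B vanishes at x = (0, 1, −1) in the nullspace, and the test correctly returns False.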
URL:
https://www.sciencedirect.com/science/article/pii/B9780128176481000037
European Symposium on Computer Aided Process Engineering-12
Costin S. Bildea, ... Piet D. Iedema, in Computer Aided Chemical Engineering, 2002
Isothermal PFR
For an n-th order reaction in an isothermal PFR (θ = 0), integration of Eq. 1 with the boundary conditions Eqs. 3–5 leads to:
(7)
The X vs. Da dependence, which can be obtained from Eq. 7, is presented in Figure 2 for n = 1 and n = 2. For any Da, a trivial, unfeasible solution (X, f_3) = (0, ∞) exists. The non-trivial solution is meaningful if Eq. 6 is satisfied. This is equivalent to the following feasibility condition:
(8)
Da_T represents a transcritical bifurcation point, which acts as a feasibility boundary. A steady state can exist only if the reactor consumes the entire amount of reactant fed to the process. This is impossible for slow kinetics or a small reactor volume, which lead to reactant accumulation and an infinite recycle flow rate. The transcritical bifurcation does not exist in the case of a stand-alone reactor. Interestingly, it occurs at the same value Da_T as in the case of CSTR–Separation–Recycle systems (Bildea et al., 2000).
URL:
https://www.sciencedirect.com/science/article/pii/S1570794602801018
Eigenvalues and Eigenvectors
William Ford, in Numerical Linear Algebra with Applications, 2015
Defining Eigenvalues and Their Associated Eigenvectors
λ is an eigenvalue of the n × n matrix A, and v ≠ 0 is an associated eigenvector if Av = λv; in other words, Av is parallel to v and either stretches or shrinks it. The relationship Av = λv is equivalent to (A − λI)v = 0, and in order for there to be a nontrivial solution, we must have
det(A − λI) = 0
This is called the characteristic equation, and the polynomial
p(λ) = det(A − λI)
is the characteristic polynomial. The eigenvalues are the roots of the characteristic polynomial, and an eigenvector associated with an eigenvalue λ is a solution to the homogeneous system
(A − λI)v = 0
The process of finding the eigenvalues and associated eigenvectors would seem to be
Locate the roots λ_1, λ_2, …, λ_n of p and find a nonzero solution of (A − λ_i I)v_i = 0 for each λ_i.
There is a serious problem with this approach. If p has degree five or more, the eigenvalues must be approximated using numerical techniques, since there is no analytical formula for the roots of such polynomials. We will see in Chapter 10 that polynomial root finding can be difficult: a small change in a polynomial coefficient can cause large changes in its roots.
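Wilkinson's classic example makes the coefficient sensitivity concrete; a small NumPy experiment (ours, not from the book):

```python
import numpy as np

# The polynomial with the well-separated roots 1, 2, ..., 20 has roots
# that are violently sensitive to its coefficients (Wilkinson's example).
p = np.poly(np.arange(1, 21))     # coefficients of prod_k (x - k)

# Perturb a single coefficient (that of x^19, equal to -210) by a tiny
# relative amount.
q = p.copy()
q[1] *= 1 + 1e-7
roots_pert = np.roots(q)

# Several roots are no longer even real: a relative coefficient change
# of 1e-7 moves them well off the real axis.
print(np.abs(roots_pert.imag).max())
```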
URL:
https://www.sciencedirect.com/science/article/pii/B9780123944351000053
Navier–Stokes Equations
In Studies in Mathematics and Its Applications, 1977
4.4 The non-uniqueness theorem
Our purpose is to prove the following result.
Theorem 4.1
For λ sufficiently large, and for suitable values of L, the problem (4.2), (4.3), (4.6) possesses z-periodic solutions of period L which are different from the trivial solution (4.7).
Proof. We will prove that the equation (4.20) has a non-trivial solution in V, when λ is sufficiently large. According to Lemma 4.12, we can choose L so that λ1 is a simple eigenvalue of B in V: these are the values of L mentioned in Theorem 4.1.
It is known from Proposition 4.2 that (4.20) possesses only the trivial solution for λ sufficiently small (λ ⩽ c(r 1, r 2), see (4.24)).
The theorem is already proved if the equation (4.20) has a non trivial solution for some λ ∈ [0, λ1]. Therefore we will assume from now on that
(4.44)
With this assumption, the next lemmas, using degree theory, will show the existence of a nonzero ϕ in V satisfying ϕ = λTϕ, with λ > λ_1.
Lemma 4.14
Let ω be some open ball of V centered at 0.
There exists some δ > 0 such that ϕ = λ T ϕ has no solution on the boundary ∂ω of ω, for each λ in the interval [λ1, λ1 + δ].
Proof. We argue by contradiction. If the statement is false, there exist a sequence λ_n decreasing to λ_1 and a sequence u_n belonging to ∂ω such that
Since the sequence u_n is bounded, the sequence Tu_n is relatively compact (by Lemma 4.4), and there exists a subsequence Tu_{n_i} converging to some limit v in V. Then u_{n_i} = λ_{n_i} Tu_{n_i} converges to λ_1 v. Since T is continuous, we must have
Thus λ_1 v is a solution of ϕ = λ_1 Tϕ, and because of (4.44), λ_1 v = 0, v = 0. This contradicts the fact that ||λ_1 v||_V is equal to the radius of the ball ω (||u_n|| = radius of ω, ∀ n).
Lemma 4.15
Under the assumption (4.44), if ω and δ are as in Lemma 4.14, the equation ϕ = λTϕ has no solution on ∂ω for any λ ∈ [0, λ_1 + δ).
Obvious Corollary of Lemma 4.14.
This lemma allows us to define the degree d(I − λT, ω, 0) for λ ∈ [0, λ_1 + δ).
Lemma 4.16
With δ and ω as before,
Proof. It follows from property (iii) of the degree that d(I − λT, ω, 0) = d(I, ω, 0) = i(I), and this index is equal to one (the index of the identity).
Lemma 4.17
Under the assumption (4.44), there exists for any λ ∈(λ1, λ1 + δ) at least one non-trivial solution of ϕ = λ T ϕ.
Proof. According to Lemma 4.13 and property (iv) of the index, i(I − λB) is equal to 1 on [0, λ_1) and is equal to −1 on (λ_1, λ_1 + δ). According to property (iii) of the index, i(I − λT, 0) is +1 for λ ∈ (0, λ_1) and −1 for λ ∈ (λ_1, λ_1 + δ). If λ ∈ (λ_1, λ_1 + δ) and if zero is the only solution of ϕ = λTϕ in ω, we should have
according to the property (i) of the index. But we proved that
Thus the equation ϕ = λ T ϕ has a non-trivial solution for any λ ∈ (λ1, λ1 + δ).
The proof of Theorem 4.1 is complete.
Remark 4.3
The condition "λ sufficiently large" amounts to saying that the angular velocity a is large or that the viscosity ν is small (for fixed r_1, r_2).
Remark 4.4
Under condition (4.44), there exists for each λ ∈ (λ_1, λ_1 + δ) a non-trivial solution ϕ_λ of (4.20). One can prove that ϕ_λ → 0 in V as λ decreases to λ_1. This is the bifurcation phenomenon. In the case of the Bénard problem the situation is very similar, but it can be proved that only the trivial solution exists for λ ∈ [0, λ_1]. Thus the assumption (4.44) is unnecessary, and one does prove the occurrence of a bifurcation (see V. I. Iudovich [2], Rabinowitz [1], Velte [1]).
A study of the Taylor problem by analytical methods is developed in Rabinowitz [5].
Acknowledgment. The author gratefully acknowledges useful remarks of P. H. Rabinowitz on Section 4.
URL:
https://www.sciencedirect.com/science/article/pii/S0168202409700700
Source: https://www.sciencedirect.com/topics/computer-science/nontrivial-solution