We introduce a class of one-dimensional continuous reflected backward stochastic Volterra integral equations driven by Brownian motion, where the reflection keeps the solution above a given stochastic process (lower obstacle). We prove existence and uniqueness by a fixed point argument and derive a comparison result. Moreover, we show how the solution of our problem is related to a time-inconsistent optimal stopping problem and derive an optimal strategy.
We study relaxed stochastic control problems where the state equation is a one-dimensional linear stochastic differential equation with random and unbounded coefficients. The two main results are existence of an optimal relaxed control and necessary conditions for optimality in the form of a relaxed maximum principle. The main motivation is an optimal bond portfolio problem in a market where there exists a continuum of bonds and the portfolio weights are modeled as measure-valued processes on the set of times to maturity.
We study the optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state of the process. Moreover the cost functional is also of mean-field type, which makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. Under the assumption of a convex action space a maximum principle of local form is derived, specifying the necessary conditions for optimality. These are also shown to be sufficient under additional assumptions. This maximum principle differs from the classical one, where the adjoint equation is a linear backward SDE, since here the adjoint equation turns out to be a linear mean-field backward SDE. As an illustration, we apply the result to the mean-variance portfolio selection problem.
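As a point of reference for the mean-variance application mentioned above, the following sketch solves the static (single-period) Markowitz problem, which is only a simplified stand-in for the paper's continuous-time mean-field formulation; all numbers are hypothetical.

```python
import numpy as np

# Static Markowitz problem: minimize (1/2) w' Sigma w subject to
# w'mu = target and w'1 = 1. First-order conditions give
# w = lam * Sigma^{-1} mu + gam * Sigma^{-1} 1 with Lagrange
# multipliers (lam, gam) solved from the two constraints.
mu = np.array([0.05, 0.08, 0.12])          # hypothetical expected returns
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])     # hypothetical covariance matrix
target = 0.10                              # required portfolio mean

ones = np.ones_like(mu)
Si_mu = np.linalg.solve(Sigma, mu)
Si_1 = np.linalg.solve(Sigma, ones)
A, B, C = ones @ Si_1, ones @ Si_mu, mu @ Si_mu
det = A * C - B**2
lam = (A * target - B) / det
gam = (C - B * target) / det
w = lam * Si_mu + gam * Si_1               # optimal weights

print(w, w @ mu, w.sum())
```

The same two constraints are what the mean-field maximum principle enforces, in expectation, in the dynamic problem.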
We study optimal 2-switching and n-switching problems and the corresponding system of variational inequalities. We obtain results on the existence of viscosity solutions for the 2-switching problem in various setups where the cost of switching is non-deterministic. For the n-switching problem we obtain regularity results for the solutions of the variational inequalities. The solutions are C^{1,1}-regular away from the free boundaries of the action sets.
We propose a stochastic semi-Markovian framework for disability modelling in a multi-period discrete-time setting. The logistic transforms of disability inception and recovery probabilities are modelled by means of stochastic risk factors and basis functions, using counting processes and generalized linear models. The model for disability inception also takes IBNR claims into consideration. We fit various versions of the model to Swedish disability claims data.
This paper introduces a system of SDEs of mean-field type that models pedestrian motion. The system lets the pedestrians spend time at and move along walls by means of sticky boundaries and boundary diffusion. As an alternative to Neumann-type boundary conditions, sticky boundaries and boundary diffusion have a "smoothing" effect on pedestrian motion. When these effects are active, the pedestrian paths are semimartingales with the finite-variation part absolutely continuous with respect to the Lebesgue measure dt rather than an increasing process (which in general induces a measure singular with respect to dt) as is the case under Neumann boundary conditions. We show that the proposed mean-field model for pedestrian motion admits a unique weak solution and that it is possible to control the system in the weak sense, using a Pontryagin-type maximum principle. We also relate the mean-field type control problem to the social cost minimization in an interacting particle system. We study the novel model features numerically, and we confirm empirical findings on pedestrian crowd motion in congested corridors.
We extend the class of pedestrian crowd models introduced by Lachapelle and Wolfram [Transp. Res. B: Methodol., 45 (2011), pp. 1572–1589] to allow for nonlocal crowd aversion and arbitrarily but finitely many interacting crowds. The new crowd aversion feature grants pedestrians a “personal space” where crowding is undesirable. We derive the model from a particle picture and treat it as a mean-field type game. Solutions to the mean-field type game are characterized via a Pontryagin-type maximum principle. The behavior of pedestrians acting under nonlocal crowd aversion is illustrated by a numerical simulation.
This paper suggests a model for the motion of tagged pedestrians: pedestrians moving towards a specified targeted destination, which they are forced to reach. It aims to be a decision-making tool for the positioning of firefighters, security personnel and other services in a pedestrian environment. Taking interaction with the surrounding crowd into account leads to a differential nonzero-sum game model where the tagged pedestrians compete with the surrounding crowd of ordinary pedestrians. When deciding how to act, pedestrians consider crowd distribution-dependent effects, like congestion and crowd aversion. Including such effects in the parameters of the game makes it a mean-field type game. The equilibrium control is characterized, and special cases are discussed. Behavior in the model is studied by numerical simulations.
The present paper studies the stochastic maximum principle in singular optimal control, where the state is governed by a stochastic differential equation with nonsmooth coefficients, allowing both classical control and singular control. The proof of the main result is based on the approximation of the initial problem by a sequence of control problems with smooth coefficients. We then apply Ekeland's variational principle for this approximating sequence of control problems, in order to establish necessary conditions satisfied by a sequence of near-optimal controls. Finally, we prove the convergence of the scheme, using Krylov's inequality in the nondegenerate case and the Bouleau-Hirsch flow property in the degenerate one. The adjoint process obtained is given by means of distributional derivatives of the coefficients.
We establish a stochastic maximum principle in optimal control of a general class of degenerate diffusion processes with global Lipschitz coefficients, generalizing the existing results on stochastic control of diffusion processes. We use distributional derivatives of the coefficients and the Bouleau-Hirsch flow property, in order to define the adjoint process on an extension of the initial probability space.
This paper studies optimal control of systems driven by stochastic differential equations, where the control variable has two components, the first being absolutely continuous and the second singular. Our main result is a stochastic maximum principle for relaxed controls, where the first part of the control is a measure-valued process. To achieve this result, we first establish first-order necessary conditions for optimality of strict controls by using a strong perturbation on the absolutely continuous component of the control and a convex perturbation on the singular one. The proof of the main result is based on the strict maximum principle, Ekeland's variational principle, and some stability properties of the trajectories and adjoint processes with respect to the control variable.
This work examines the solvability of fractional conditional mean-field-type games. The evolution of the state is described by a time-fractional stochastic dynamics driven by jump-diffusion regime-switching Gauss-Volterra processes, which include fractional Brownian motion and multifractional Brownian motion. The cost functional is non-quadratic and includes a fractional integral of a higher-order polynomial. We provide, semi-explicitly, the equilibrium strategies in state-and-conditional mean-field-type feedback form for all decision-makers.
In this paper, we present an approach to neural network mean-field-type control and its stochastic stability analysis by means of adversarial inputs (aka adversarial attacks). This is a class of data-driven mean-field-type control where the distributions of variables such as the system states and control inputs are incorporated into the problem. In addition, we present a methodology to validate the feasibility of the approximations of the solutions via neural networks and to evaluate their stability. Moreover, we enhance the stability by enlarging the training set with adversarial inputs to obtain a more robust neural network. Finally, a worked-out example based on the linear-quadratic mean-field-type control problem (LQ-MTC) is presented to illustrate our methodology.
This article examines mean-field games for marriage. The results support the argument that optimizing long-term well-being through effort and the distribution of social feeling states (the mean field) will help to stabilize marriage. However, if the cost of effort is very high, the couple fluctuates in a bad feeling state or the marriage breaks down. We then examine the influence of society on a couple using mean-field sentimental games. We show that, in mean-field equilibrium, the optimal effort is always higher than the one-shot optimal effort. We illustrate numerically the influence of the couple's network on their feeling states and their well-being.
We study risk-sensitive optimal control of a stochastic differential equation (SDE) of mean-field type, where the coefficients are allowed to depend on some functional of the law as well as the state and control processes. Moreover, the risk-sensitive cost functional is also of mean-field type. We derive optimality equations in infinite dimensions connecting dual functions associated with the Bellman functional to the adjoint process of the Pontryagin maximum principle. The case of linear-exponentiated quadratic cost and its connection with the risk-neutral solution is discussed.
In this article, we study mean-field-type games with jump-diffusion and regime switching, in which the payoffs and the state dynamics depend not only on the state-action profile of the decision-makers but also on a measure of the state-action pair. The state dynamics is a measure-dependent process with jump-diffusion and regime switching. We derive novel equilibrium systems to be solved. Two solution approaches are presented: (i) the dynamic programming principle and (ii) the stochastic maximum principle. Relationships between dual functions and adjoint processes are provided. It is shown that the extension to the risk-sensitive case introduces a nonlinearity into the adjoint process and involves three other processes associated with the diffusion, jump and regime switching, respectively.
We consider the stochastic target problem of finding the collection of initial laws of a mean-field stochastic differential equation such that we can control its evolution to ensure that it reaches a prescribed set of terminal probability distributions at a fixed time horizon. Here, laws are considered conditionally on the path of the Brownian motion that drives the system. This kind of problem is motivated by the limiting behavior of interacting particle systems, with applications in, for example, agricultural crop management. We establish a version of the geometric dynamic programming principle for the associated reachability sets and prove that the corresponding value function is a viscosity solution of a geometric partial differential equation. This provides a characterization of the initial masses that can be almost surely transported toward a given target along the paths of a stochastic differential equation. Our results extend those of Soner and Touzi, Journal of the European Mathematical Society (2002), to our setting.
We study the optimal control of stochastic differential equations (SDEs) of mean-field type, in which the coefficients depend on the state of the solution process as well as its expected value. Moreover, the cost functional is also of mean-field type. This makes the control problem time inconsistent in the sense that the Bellman optimality principle does not hold. For a general action space, a Peng-type stochastic maximum principle (Peng, S.: SIAM J. Control Optim. 28(4), 966-979, 1990) is derived, specifying the necessary conditions for optimality. This maximum principle differs from the classical one in the sense that here the first-order adjoint equation turns out to be a linear mean-field backward SDE, while the second-order adjoint equation remains the same as in Peng's stochastic maximum principle.
Mathematical mean-field approaches play an important role in different fields of physics and chemistry, but recent works have also found applications in economics, finance and game theory. The objective of our paper is to investigate a special mean-field problem in a purely stochastic approach: for the solution (Y, Z) of a mean-field backward stochastic differential equation driven by a forward stochastic differential equation of McKean-Vlasov type with solution X, we study a special approximation by the solution (X^N, Y^N, Z^N) of some decoupled forward-backward equation whose coefficients are governed by N independent copies of (X^N, Y^N, Z^N). We show that the convergence speed of this approximation is of order 1/√N. Moreover, our special choice of the approximation allows us to characterize the limit behavior of √N (X^N - X, Y^N - Y, Z^N - Z). We prove that this triplet converges in law to the solution of some forward-backward stochastic differential equation of mean-field type, which is governed not only by a Brownian motion but also by an independent Gaussian field.
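The O(1/√N) behavior of such particle approximations can be observed numerically. The sketch below uses a hypothetical toy drift b(x, m) = m - x (mean reversion toward the population mean) in place of the general McKean-Vlasov coefficients of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# N-particle Euler scheme for the McKean-Vlasov SDE
#   dX_t = (E[X_t] - X_t) dt + sigma dW_t,
# with the empirical mean standing in for E[X_t].
N, steps, T, sigma = 2000, 200, 1.0, 0.3
dt = T / steps
X = rng.normal(1.0, 0.5, size=N)   # i.i.d. initial particles
m0 = X.mean()
for _ in range(steps):
    m = X.mean()                   # empirical mean replaces the law
    X += (m - X) * dt + sigma * np.sqrt(dt) * rng.normal(size=N)

# For this drift, E[X_t] is conserved in the limit, so the empirical
# mean should fluctuate around its initial value on the scale 1/sqrt(N).
print(abs(X.mean() - m0), X.std())
```

In the limit each particle behaves like an Ornstein-Uhlenbeck process reverting to the (frozen) initial mean, which is what the conservation check above exploits.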
We study a general class of fully coupled backward-forward stochastic differential equations of mean-field type (MF-BFSDE). We derive existence and uniqueness results for such a system under weak monotonicity assumptions and without the non-degeneracy condition on the forward equation. This is achieved by suggesting an implicit approximation scheme that is shown to converge to the solution of the system of MF-BFSDE. We apply these results to derive an explicit form of open-loop Nash equilibrium strategies for nonzero sum mean-field linear-quadratic stochastic differential games with random coefficients. These strategies are valid for any time horizon of the game.
We establish existence of controlled Markov chains of mean-field type with unbounded jump intensities by means of a fixed point argument using the Wasserstein distance. Furthermore, we suggest conditions for existence of an optimal control and a saddle-point for a control problem and a zero-sum differential game, respectively, associated with risk-sensitive payoff functionals of mean-field type. The conditions are derived using a Markov chain entropic backward SDE approach.
We establish existence of Markov chains of mean-field type with unbounded jump intensities by means of a fixed point argument using the total variation distance. We further show existence of nearly optimal controls and, using a Markov chain backward SDE approach, we suggest conditions for existence of an optimal control and a saddle-point for a control problem and a zero-sum differential game, respectively, associated with payoff functionals of mean-field type, under dynamics driven by such Markov chains of mean-field type.
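A loose numerical analogue of the fixed-point constructions above: a two-state mean-field Markov chain whose transition probabilities depend on its own law, iterated to a stationary distribution. The chain and its bounded transition probabilities are hypothetical; the papers work with unbounded intensities and a fixed point on path space.

```python
import numpy as np

# Two-state mean-field chain: the probability of moving to state 1
# increases with the current mass in state 1 (a toy "imitation" effect).
def P(mu):
    p = 0.2 + 0.5 * mu[1]          # law-dependent transition probability
    return np.array([[1 - p, p],
                     [0.3, 0.7]])

# Fixed-point iteration for a stationary law mu* = mu* P(mu*).
mu = np.array([0.5, 0.5])
for _ in range(200):
    mu = mu @ P(mu)

# Total variation distance between mu and one more iteration step.
tv = 0.5 * np.abs(mu @ P(mu) - mu).sum()
print(mu, tv)
```

For this chain the stationary mass in state 1 solves x^2 = 0.4, and the iteration map is a contraction near that point, mirroring (in miniature) the contraction argument of the abstracts.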
Life insurance cash flows become reserve-dependent when contract conditions are modified during the contract term on the condition that actuarial equivalence is maintained. As a result, insurance cash flows and prospective reserves depend on each other in a circular way, and it is a non-trivial problem to resolve that circularity and make cash flows and prospective reserves well-defined. In Markovian models, the (stochastic) Thiele equation and the Cantelli theorem are the standard tools for resolving the circularity issue and for maintaining actuarial equivalence. This paper extends the stochastic Thiele equation and the Cantelli theorem to non-Markovian frameworks and presents a recursive scheme for the calculation of multiple contract modifications.
We consider an infinite horizon control problem for dynamics constrained to remain on a multidimensional junction with entry costs. We derive the associated system of Hamilton-Jacobi equations (HJ), prove the comparison principle and that the value function of the optimal control problem is the unique viscosity solution of the HJ system. This is done under the usual strong controllability assumption and also under a weaker condition, coined 'moderate controllability assumption'.
We suggest an optimality criterion for choosing the best smoothing parameters for an extension of the so-called Hodrick-Prescott Multivariate (HPMV) filter. We show that this criterion admits a whole set of optimal smoothing parameters, to which the widely used noise-to-signal ratios belong. We also propose explicit consistent estimators of these noise-to-signal ratios, which in turn yield a new, well-performing method to estimate the output gap.
The univariate Hodrick-Prescott filter depends on the noise-to-signal ratio that acts as a smoothing parameter. We first propose an optimality criterion for choosing the best smoothing parameters. We show that the noise-to-signal ratio is the unique minimizer of this criterion, when we use an orthogonal parametrization of the trend, whereas it is not the case when an initial-value parametrization of the trend is applied. We then propose a multivariate extension of the filter and show that there is a whole class of positive definite matrices that satisfy a similar optimality criterion, when we apply an orthogonal parametrization of the trend.
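For concreteness, the univariate filter discussed above can be computed directly from its penalized least-squares definition, with the smoothing parameter lam playing the role of the noise-to-signal ratio; the data below are synthetic.

```python
import numpy as np

def hp_filter(y, lam):
    # Trend tau minimizes  sum (y_t - tau_t)^2 + lam * sum (D2 tau)_t^2,
    # where D2 is the second-difference operator, giving the closed form
    # tau = (I + lam * D2' D2)^{-1} y.
    T = len(y)
    D2 = np.diff(np.eye(T), n=2, axis=0)          # (T-2) x T second differences
    tau = np.linalg.solve(np.eye(T) + lam * D2.T @ D2, y)
    return tau, y - tau                           # trend and cycle

t = np.linspace(0, 1, 100)
rng = np.random.default_rng(1)
y = 2 * t + 0.1 * np.sin(20 * t) + 0.05 * rng.normal(size=100)
trend, cycle = hp_filter(y, lam=1600.0)
print(trend[:3], cycle[:3])
```

Larger lam (a higher noise-to-signal ratio) forces a smoother trend, which is exactly the trade-off the optimality criterion of the paper is designed to calibrate.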
We construct a global weak solution to a d-dimensional system of zero-pressure gas dynamics modified by introducing a finite artificial viscosity. We use discrete approximations to the continuous gas and let particles move along trajectories of the normalized simple symmetric random walk with deterministic drift. The interaction of these particles is given by a sticky particle dynamics. We show that a subsequence of these approximations converges to a weak solution of the system of zero-pressure gas dynamics in the sense of distributions. This weak solution is interpreted in terms of a random process solving a nonlinear stochastic differential equation. We obtain a weak solution of the inviscid system by letting the viscosity tend to zero.
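The sticky particle rule underlying the discrete approximation (colliding particles merge, conserving mass and momentum) can be sketched as follows; the merging routine is a minimal one-dimensional illustration, not the paper's construction with random-walk trajectories.

```python
def merge_sticky(particles):
    # particles: list of (mass, position, velocity), sorted by position.
    # Repeatedly merge neighbours whose positions have crossed, keeping
    # total mass and total momentum of each merged cluster.
    out = []
    for m, x, v in particles:
        out.append([m, x, v])
        while len(out) > 1 and out[-2][1] >= out[-1][1]:
            m2, x2, v2 = out.pop()
            m1, x1, v1 = out.pop()
            m = m1 + m2
            out.append([m,
                        (m1 * x1 + m2 * x2) / m,   # centre of mass
                        (m1 * v1 + m2 * v2) / m])  # momentum / mass
    return out

def step(particles, dt):
    # Free transport for one time step, then sticky merging.
    moved = [(m, x + v * dt, v) for m, x, v in particles]
    return merge_sticky(moved)

# Two unit-mass particles on a collision course merge into one cluster.
state = [(1.0, 0.0, 1.0), (1.0, 1.0, -1.0)]
state = step(state, 1.0)
print(state)   # one particle of mass 2 at x = 0.5 with velocity 0
```

Momentum conservation in the merge is what makes the empirical measure of such particles a weak solution candidate for the zero-pressure system.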
We use the stochastic calculus of variations for the fractional Brownian motion to derive formulas for the replicating portfolios for a class of contingent claims in Bachelier and Black-Scholes markets modulated by fractional Brownian motion. An example of such a model is the Black-Scholes process whose volatility solves a stochastic differential equation driven by a fractional Brownian motion that may depend on the underlying Brownian motion.
Under some regularity conditions on P_0 and u_0, we derive a unique local strong solution of the following system of pressureless gas equations with viscosity: [GRAPHICS] with P_t → P_0 and u_t P_t → u_0 P_0 weakly as t → 0+, by constructing a nonlinear diffusion process as solution to the following SDE: [GRAPHICS] We then show that P_t is the probability density of X_t, while the velocity field admits the following stochastic representation: u(t, x) = E[u_0(X_0) | X_t = x].
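The stochastic representation u(t, x) = E[u_0(X_0) | X_t = x] can be checked by simulation in a simplified linear case where the conditional expectation is known in closed form; the model below (X_t = X_0 + W_t with X_0 ~ N(0, 1) and u_0(x) = x) is a hypothetical surrogate for the nonlinear diffusion of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 200_000, 1.0
X0 = rng.normal(size=n)                     # initial positions, N(0, 1)
Xt = X0 + np.sqrt(t) * rng.normal(size=n)   # linear diffusion X_t = X_0 + W_t

# Estimate u(t, x) = E[u_0(X_0) | X_t = x] with u_0(x) = x by averaging
# X0 over a small window around x (a crude kernel regression).
x = 0.8
window = np.abs(Xt - x) < 0.05
estimate = X0[window].mean()

# Gaussian conditioning gives the exact answer E[X_0 | X_t = x] = x / (1 + t).
exact = x / (1 + t)
print(estimate, exact)
```

In the paper the drift of X itself involves this conditional expectation, so the representation is a genuine nonlinear (McKean-Vlasov) feature rather than a post-processing step as in this linear sketch.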
This is a short introduction to some basic aspects of the statistical estimation techniques known as graduation techniques in life and disability insurance.
In this paper we examine mean-field-type games in blockchain-based distributed power networks with several different entities: investors, consumers, prosumers, producers and miners. Under a simple model of jump-diffusion and regime switching processes, we identify risk-aware mean-field-type optimal strategies for the decision-makers.
In this article, a profit optimization problem between electricity producers is formulated and solved. The problem is described by a linear jump-diffusion system of conditional mean-field type, where the conditioning is with respect to common noise, and a quadratic cost functional involving the second moment and the square of the conditional expectation of the control actions of the producers. We provide a semi-explicit solution of the corresponding mean-field-type game problem with common noise. The equilibrium strategies are in state-and-conditional mean-field feedback form, where the mean-field term is the conditional price given the realization of the global uncertainty. The methodology is extended to a situation of an incomplete-information mean-field-type game in which each producer knows its own type but not the types of the other producers. We compute the Bayesian mean-field-type equilibrium in a semi-explicit way and show that it is not ex post resilient.
In this paper, we study a class of reflected backward stochastic differential equations (BSDEs) of mean-field type, where the mean-field interaction, in terms of the distribution of the Y-component of the solution, enters both the driver and the lower obstacle. We consider in detail the case where the lower obstacle is a deterministic function of (Y, E[Y]) and discuss the more general dependence on the distribution of Y. Under mild Lipschitz and integrability conditions on the coefficients, we obtain the well-posedness of such a class of equations. Under further monotonicity conditions, we show convergence of the standard penalization scheme to the solution of the equation, which hence satisfies a minimality property. This class of equations is motivated by applications in pricing life insurance contracts with surrender options.
We consider the life-cycle optimal portfolio choice problem faced by an agent receiving labor income and allocating her wealth to risky assets and a riskless bond subject to a borrowing constraint. In this paper, to reflect a realistic economic setting, we propose a model where the dynamics of the labor income has two main features. First, labor income adjusts slowly to financial market shocks, a feature already considered in Biffis et al. (2015). Second, the labor income gamma^i of an agent i is benchmarked against the labor incomes of a population gamma^n := (gamma^1, gamma^2, ..., gamma^n) of n agents with comparable tasks and/or ranks. This last feature has not yet been considered in the literature and is handled by taking the limit as n → +∞, so that the problem falls into the family of optimal control of infinite-dimensional McKean-Vlasov dynamics, a completely new and challenging research field. We study the problem in a simplified case where, adding a suitable new variable, we are able to solve the associated HJB equation explicitly and find the optimal feedback controls. The techniques are a careful and nontrivial extension of those introduced in the previous papers of Biffis et al. (2015, 0000).
We establish existence of nearly optimal controls, and conditions for existence of an optimal control and a saddle-point for a control problem and a zero-sum differential game, respectively, associated with payoff functionals of mean-field type, under dynamics driven by weak solutions of stochastic differential equations of mean-field type.
We consider a class of stochastic impulse control problems for general stochastic processes, i.e. not necessarily Markovian. Under fairly general conditions we establish existence of an optimal impulse control. We also prove existence of a combined optimal stochastic and impulse control for a fairly general class of diffusions with random coefficients. Unlike in the Markovian framework, we cannot apply quasi-variational inequality techniques. Instead, we derive the main results using techniques involving reflected BSDEs and the Snell envelope.
We study a class of infinite horizon impulse control problems with execution delay when the dynamics of the system is described by a general stochastic process adapted to the Brownian filtration. The problem is solved by means of probabilistic tools relying on the notion of Snell envelope and infinite horizon reflected backward stochastic differential equations. This allows us to establish the existence of an optimal strategy over all admissible strategies.
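In discrete time, the Snell envelope used in the two abstracts above reduces to the backward recursion U_N = X_N, U_n = max(X_n, E[U_{n+1} | F_n]), and stopping at the first time U_n = X_n is optimal. The sketch below applies this to a hypothetical American put on a binomial tree with zero interest rate (all parameters invented for illustration).

```python
# Snell envelope by backward induction on a recombining binomial tree.
# S(n, j) = S0 * u^j * d^(n - j) is the price after j up-moves in n steps;
# p = (1 - d) / (u - d) = 0.5 is the risk-neutral up-probability here.
N, u, d, p, S0, K = 3, 1.2, 0.8, 0.5, 100.0, 100.0
payoff = lambda s: max(K - s, 0.0)          # American put payoff

U = [payoff(S0 * u**j * d**(N - j)) for j in range(N + 1)]   # U_N = X_N
for n in range(N - 1, -1, -1):
    # Continuation value E[U_{n+1} | F_n] at each node of level n,
    # then compare with immediate exercise.
    cont = [p * U[j + 1] + (1 - p) * U[j] for j in range(n + 1)]
    U = [max(payoff(S0 * u**j * d**(n - j)), cont[j]) for j in range(n + 1)]

print(U[0])   # value of the optimal stopping problem at time 0
```

The infinite-horizon problems of the abstracts replace this finite recursion with reflected BSDEs, whose solutions play exactly the role of the envelope U.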