Theses
by students of Jonathan Goodman, since 1999
Ph.D. Theses
by
June, 2000
Download in PostScript format (for PostScript printers), or PDF (for Acrobat readers)
Adaptive refinement algorithms for partial differential equations place
greater resolution in regions of the domain that need it. This leads to
considerable savings in computer time for problems that require high resolution
only in small parts of the domain. Often a solution has thin layers, varying
rapidly across the layer but more slowly along it. Anisotropic refinement
exploits this structure to reduce the computational effort further,
using computational elements aligned with the layers.
This thesis studies two aspects of anisotropic refinement: mesh generation and error estimation. Even if one knew in advance the structures to be resolved, constructing an efficient mesh would be a computational challenge in the anisotropic case (isotropic mesh generation is much easier). This thesis presents a mesh generation algorithm that produces the optimal order of approximation in a model case. Error estimation is harder in the anisotropic case because some error estimation algorithms only work for isotropic meshes, and because more information is needed: directions for refinement in addition to locations for refinement. The thesis presents an error estimation strategy with some theoretical justification that seems to work well in practice. Computational experiments are given.
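To illustrate the kind of directional information anisotropic refinement needs, here is a generic Python sketch (not the algorithm of the thesis; the function names and the threshold are illustrative). It compares second differences of grid data across the two coordinate directions; a large ratio between them signals a layer and suggests a stretched element rather than an isotropic one.

import numpy as np

def directional_indicators(u, hx, hy):
    # Second-difference indicators in x and y for grid data u.
    # A large |u_xx| suggests refining in x; a large |u_yy|, in y.
    uxx = (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / hx**2
    uyy = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1] + u[1:-1, :-2]) / hy**2
    return np.abs(uxx), np.abs(uyy)

def refinement_direction(u, hx, hy, threshold=10.0):
    # 0: refine in x only, 1: refine in y only, 2: refine isotropically.
    ex, ey = directional_indicators(u, hx, hy)
    ratio = (ex + 1e-30) / (ey + 1e-30)
    return np.where(ratio > threshold, 0,
           np.where(ratio < 1.0 / threshold, 1, 2))

# A boundary-layer-like function: thin layer near x = 0, smooth in y.
x, y = np.meshgrid(np.linspace(0, 1, 41), np.linspace(0, 1, 41), indexing="ij")
u = np.exp(-x / 0.05) + 0.1 * y
print(refinement_direction(u, 1.0 / 40, 1.0 / 40)[:3, :3])  # all 0: refine in x

Near the layer the indicator picks the across-layer direction; away from it, both indicators are comparable and the cell would be refined isotropically, if at all.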
Master's Theses
by
September, 2000
Download in PostScript format (for PostScript printers), or PDF (for Acrobat readers)
The usual tree- or PDE-based pricing methods for options use arbitrage
arguments that involve cost-free hedging. It would be very costly to follow the
implied dynamic hedging strategies in an environment with trading costs.
This study addresses the question of how closely one can match the
arbitrage prices in the real world, where there are transaction costs. The optimal
hedging strategies are computed using dynamic programming. Unlike the
arbitrage pricing situation, here the optimal hedging strategy depends
on the trader's utility function and would be different for more or less
risk-averse traders. Even for small transaction costs, the
hedging strategies depend significantly on the degree of risk aversion;
the resulting costs, however, depend less on the utility function.
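A one-period toy model in Python (not the multi-period dynamic program of the thesis; all parameter values are made up) shows how the optimal hedge depends on risk aversion: a trader short one call on a binomial stock chooses the hedge ratio that maximizes expected exponential utility, paying a proportional cost on the shares traded.

import numpy as np

S0, u, d, p = 100.0, 1.1, 0.9, 0.5   # stock price, up/down factors, up probability
K, cost = 100.0, 0.02                # strike; proportional transaction cost rate

def expected_utility(delta, a):
    # E[-exp(-a * W)] for the terminal wealth W of a trader who is short one
    # call, buys delta shares now, and pays a proportional cost on the trade.
    total = 0.0
    for S1, prob in ((S0 * u, p), (S0 * d, 1.0 - p)):
        payoff = max(S1 - K, 0.0)
        wealth = delta * (S1 - S0) - payoff - cost * abs(delta) * S0
        total += prob * -np.exp(-a * wealth)
    return total

deltas = np.linspace(0.0, 1.0, 201)
for a in (0.1, 1.0, 5.0):            # increasingly risk-averse traders
    best = deltas[np.argmax([expected_utility(dl, a) for dl in deltas])]
    print(f"risk aversion {a}: optimal hedge ratio {best:.3f}")

In this toy the frictionless arbitrage delta would be 0.5; with costs, the less risk-averse trader hedges noticeably less, saving on trading costs at the price of more risk.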
by
January, 1999
Download in PostScript format (for PostScript printers), or PDF (for Acrobat readers)
Importance sampling is a technique that improves the efficiency of Monte
Carlo sampling. Monte Carlo computations of Value at Risk (VaR) tend to
be inefficient because they depend on finding the probabilities of rare
events. It is natural to base importance sampling strategies on the theory
of "large deviations" from probability theory. It is not clear at the outset
that this will work, because the probability distributions in VaR computations
are often lognormal, for which the exponential moments are infinite. Nevertheless,
in test computations on a portfolio of puts and calls on eight underlying
correlated lognormal stocks, efficiencies were improved by large factors.
Moreover, the algorithm identifies the most likely ways for large losses
to occur, which can be of interest in itself.
Two technical difficulties treated in this thesis are the constrained maximization of the likelihood function and the "multiple maximum" problem. The maximization requires not only the values of the options, but also some of their sensitivities (Greeks). In real applications there are usually several kinds of market movements that lead to large losses. It is important to identify and sample from all of these to get an efficient estimator. To keep the estimator consistent (in the statistical sense), it is necessary to decompose the state space into disjoint regions, one for each local maximum.
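A minimal Python sketch of the mean-shift idea follows (one lognormal asset rather than the thesis's portfolio of options on eight correlated stocks; all numbers are illustrative). Since the lognormal has no exponential moments, the tilt is applied to the underlying Gaussian: its mean is shifted to the most likely point of the loss region, and the samples are reweighted by the likelihood ratio.

import numpy as np

rng = np.random.default_rng(0)
sigma = 0.3                  # volatility of the underlying Gaussian return
threshold = 0.7              # rare event: the position loses 70% or more

def loss(z):
    return 1.0 - np.exp(z)   # loss of a long position with log-return z

n = 100_000
z = rng.normal(0.0, sigma, n)                     # plain Monte Carlo
plain = np.mean(loss(z) > threshold)

# Shift the Gaussian mean to the most likely loss point z* (closed form here:
# exp(z*) = 1 - threshold), then reweight by the likelihood ratio p(z)/q(z).
z_star = np.log(1.0 - threshold)
zs = rng.normal(z_star, sigma, n)
lr = np.exp((-zs**2 + (zs - z_star)**2) / (2.0 * sigma**2))
est = (loss(zs) > threshold) * lr
print(f"plain MC: {plain:.2e}")
print(f"importance sampling: {est.mean():.2e} +- {est.std() / np.sqrt(n):.2e}")

When several distinct market moves can produce the loss, one would sample from a mixture of such shifted densities, restricting each to its own region of the state space to keep the estimator consistent, as described above.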
by
January, 1999
Download in Microsoft Word format, part 1 and part 2, or in PostScript format, part 1 and part 2.
In the insurance business, "graduation" is the process of constructing reasonable estimates of mortality rates from noisy empirical data. This is often thought of as smoothing, because plots of the data are rough curves while we feel the actual mortality probabilities should be a reasonably smooth function of age. In practice, many graduation methods are in use. Some are just smoothing methods. Others are based on statistical principles. In choosing a graduation method, we must consider accuracy and computational cost. Some of the statistical methods, especially Bayesian methods, are difficult to implement in an efficient way.
This thesis discusses a particular Bayesian estimation method for graduation. We take a simple prior density for the mortality rates for each age group but insist that the "posterior" graduated rates form a convex curve. The convexity constraint makes it difficult to sample from the posterior density. Markov chain Monte Carlo (MCMC) is now commonly used for sampling the posterior density in Bayesian statistics. The thesis presents computational results from the most commonly used MCMC method and shows that it gives disastrous results. The method requires more than a million "resamplings" to produce a single independent sample. It would be very interesting and useful for actuarial applications to develop better MCMC sampling methods for this problem.
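The sketch below (illustrative data, not the thesis's; single-site Metropolis standing in for the most commonly used MCMC method) shows how the convexity constraint enters the sampler: a proposal is rejected outright if the graduated rates would cease to be a convex function of age. It is this rejection step that makes naive samplers mix so slowly.

import numpy as np

rng = np.random.default_rng(1)

# Noisy observed mortality rates by age group, with a Gaussian likelihood
# around them; convexity in age is enforced by rejection.
observed = np.array([0.010, 0.011, 0.014, 0.020, 0.028, 0.040])
noise = 0.002

def log_post(q):
    return -0.5 * np.sum((q - observed)**2) / noise**2

def is_convex(q):
    return np.all(np.diff(q, n=2) >= 0.0)

q = observed.copy()              # start from a feasible (convex) point
assert is_convex(q)
step, accepted = 0.001, 0
for it in range(50_000):
    i = rng.integers(len(q))     # single-site Metropolis update
    prop = q.copy()
    prop[i] += rng.normal(0.0, step)
    # Reject immediately if the proposal leaves the convex cone.
    if is_convex(prop) and np.log(rng.random()) < log_post(prop) - log_post(q):
        q, accepted = prop, accepted + 1
print("acceptance rate:", accepted / 50_000)
print("graduated rates:", np.round(q, 4))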