SUPERCOMPUTING IN MEDICAL IMAGING

Computational Challenges in Medical Imaging

by Paul Thompson

Here is some information on our development of supercomputing approaches to map patterns of brain structure and function in human populations. Please feel free to contact me if you have any questions!
[Updated - September 2000]

The most computationally intensive brain image analysis algorithms we are developing can be grouped into several conceptual categories. The two major categories are:

(1) Statistical Algorithms. These include randomizations, permutation tests, Monte Carlo simulations, and Bayesian methods. Bayesian methods use statistical information in a brain image database to guide the behavior of other algorithms. This class of algorithms also includes methods that model patterns in brain data as Gaussian random fields; these are now becoming standard techniques in image analysis for detecting abnormal brain structure or function. All of these algorithms are generally designed to recover patterns from large brain image databases. Applications include detecting and mapping abnormalities in a new patient or group, as shown here --

-- identifying regions of abnormal brain structure or function, comparing a subject's cortical structure with a database as shown here --

-- mapping brain changes in a population, as shown here --

-- resolving group trends in gray matter distribution, cortical organization, or gyral patterning, as shown here --

(in schizophrenic subjects)

-- and computing how these trends are affected by genetic, demographic, cognitive, or therapeutic factors. Several of the most computationally intensive algorithms, developed this year (Thompson et al., 2000), have detected previously unseen patterns of gray matter loss in Alzheimer's disease and schizophrenia, based on MRI databases of 46 subjects and 96 scans (48 subjects scanned at two time-points), respectively. Examples of these patterns can be found here:

Alzheimer's Disease (N=46 subjects):


[more information]

Schizophrenia (N=96 scans, 48 subjects scanned at two time-points):



(2) Numerical Optimizations. These computations include a vast array of energy minimization and fluid flow problems that are now routinely used for pathology detection, and for mapping patterns of abnormal brain structure. These optimization algorithms are detailed elsewhere (Thompson et al., 2000). There are two main categories of energy minimization approach; both typically involve the successive deformation of a 3D object to match an image of brain structures. The first class, used for structure extraction, applies time-dependent (hyperbolic) partial differential equations to deform a mesh of 130,000 triangular tiles into the configuration of the anatomical structure. A second class includes algorithms that refer to large databases of images, and often elastically warp, or deform, one brain image to match a large number of others in a database. These typically use elliptic partial differential equations, or variational minimizations, similar to the procedures used in continuum mechanics and general relativity. This scheme is shown here:


[more information]
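For orientation, here is a generic sketch (not the exact functional used in our algorithms) of the kind of variational problem involved. Given a target scan T and a source scan S, the warping field u(x) is chosen to minimize an energy of the form

    E(\mathbf{u}) = \int_{\Omega} \big( T(\mathbf{x} - \mathbf{u}(\mathbf{x})) - S(\mathbf{x}) \big)^{2} \, d\mathbf{x}
                  \;+\; \lambda \int_{\Omega} \| L\,\mathbf{u}(\mathbf{x}) \|^{2} \, d\mathbf{x}

where the first term measures the intensity mismatch after warping, the second term penalizes physically implausible deformations through a differential operator L (such as the Cauchy-Navier operator discussed below), and lambda balances the two.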

Growth patterns during brain development, for example, can be recovered by deforming one brain to match another -- illustrative maps can be found here.

The results of these studies were widely covered in the international media. A typical set of growth maps, detecting disease-specific changes in the cortex of 24 subjects, takes 100 CPU hours on an R10000 SGI workstation, or just over 3 hours when run on our networked cluster of 32 such workstations, each processor using 512MB of RAM. For each subject, this growth mapping algorithm calculates a three-dimensional elastic deformation vector field with 384 x 384 x 256 x 3 (approximately 0.1 billion) degrees of freedom, reconfiguring the anatomy at the earlier time point into the shape of the anatomy in the later scan. From this volumetric deformation field, local measures of three-dimensional tissue dilation (growth) or contraction are quantified. The deformation field driving the earlier anatomy onto the later one is extended to the full volume using a continuum-mechanical model based on the Cauchy–Navier operator of linear elasticity. The system of equations to optimize is shown here:

and in 3D here:

and here:
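For reference, the Cauchy-Navier equilibrium equations of linear elasticity take the standard form

    \mu \nabla^{2} \mathbf{u}(\mathbf{x}) + (\lambda + \mu)\, \nabla\big( \nabla \cdot \mathbf{u}(\mathbf{x}) \big) + \mathbf{F}(\mathbf{x}) = \mathbf{0}

where u(x) is the displacement field, lambda and mu are the Lame elasticity coefficients, and the body force F(x) is typically derived from the image-matching term. Once u is recovered, local tissue change is read off from the Jacobian of the mapping x -> x + u(x): a Jacobian determinant det(I + grad u) greater than 1 indicates local dilation (growth), and a value less than 1 indicates contraction.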

The resulting system of 0.1 billion second-order elliptic partial differential equations is currently solved by successive over-relaxation methods, with multi-grid acceleration. The principle (and the mathematics) is shown here:
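As a simplified illustration of the relaxation principle, here is a sketch in Python/NumPy of successive over-relaxation (SOR) applied to a scalar 3D Poisson analogue of the problem, rather than the full coupled elastic system; the variable names and the omission of multi-grid acceleration are ours:

    import numpy as np

    def sor_poisson_3d(f, h=1.0, omega=1.7, tol=1e-5, max_iters=10000):
        """Solve the discretized Poisson equation  laplacian(u) = f  on a cubic
        grid with zero (Dirichlet) boundaries by successive over-relaxation.
        A scalar stand-in for the coupled Cauchy-Navier system."""
        u = np.zeros_like(f)
        nx, ny, nz = f.shape
        for _ in range(max_iters):
            max_change = 0.0
            # Gauss-Seidel sweep, over-relaxed by omega (1 < omega < 2).
            # Pure-Python loops are used here only for clarity.
            for i in range(1, nx - 1):
                for j in range(1, ny - 1):
                    for k in range(1, nz - 1):
                        gs = (u[i-1, j, k] + u[i+1, j, k] +
                              u[i, j-1, k] + u[i, j+1, k] +
                              u[i, j, k-1] + u[i, j, k+1] -
                              h * h * f[i, j, k]) / 6.0
                        new = (1.0 - omega) * u[i, j, k] + omega * gs
                        max_change = max(max_change, abs(new - u[i, j, k]))
                        u[i, j, k] = new
            if max_change < tol:      # stop when a full sweep changes little
                break
        return u

    # Toy example: a point source in a 32^3 volume
    rhs = np.zeros((32, 32, 32))
    rhs[16, 16, 16] = -1.0
    solution = sor_poisson_3d(rhs)

The full problem couples three displacement components at every voxel, and in practice the sweeps are vectorized or ordered red-black and accelerated with multi-grid cycles.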


As documented in a recent tutorial chapter (Thompson et al., 2000), we have spent several years accelerating these algorithms using Green's functions, accelerated over-relaxation, radial basis function neural networks, parallelization of the boundary conditions, and multi-grid approaches. We have also developed new data structures that store statistical data derived from a large population more effectively. An example is our recent development of new tensor field representations to store information on structural variability in a population, so that a whole database of brain models does not need to be read in by the algorithms that use this information.
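As an illustration of the data-structure idea (not our exact representation), per-voxel variability can be accumulated as a compact field of symmetric 3x3 covariance tensors, so that each brain model is streamed through once and then discarded; the class and file names below are hypothetical:

    import numpy as np

    class VariabilityTensorField:
        """Accumulates, at every voxel, the mean and covariance of 3D
        displacement vectors across a population, so the statistics can be
        reused without re-reading the whole database of brain models."""

        def __init__(self, shape):
            self.n = 0
            self.mean = np.zeros(shape + (3,))        # running mean displacement
            self.m2 = np.zeros(shape + (3, 3))        # running outer-product sum

        def add(self, displacement):
            """displacement: array of shape (X, Y, Z, 3) for one subject."""
            self.n += 1
            delta = displacement - self.mean
            self.mean += delta / self.n
            delta2 = displacement - self.mean
            # Welford-style update of the per-voxel covariance accumulator
            self.m2 += delta[..., :, None] * delta2[..., None, :]

        def covariance(self):
            """Per-voxel 3x3 covariance tensors, shape (X, Y, Z, 3, 3)."""
            return self.m2 / max(self.n - 1, 1)

    # Usage: stream subjects one at a time (hypothetical file names)
    field = VariabilityTensorField((128, 128, 128))
    for subject_file in ["subj01.npy", "subj02.npy"]:
        field.add(np.load(subject_file))
    tensors = field.covariance()

A database of hundreds of deformation fields then reduces to one mean field and one tensor field, which the downstream statistical algorithms can read instead of the raw database.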

In many ways, we may be operating near the limits of what can be improved by modifying the speed of the algorithms themselves. However, increases in the number of CPUs not only increase the routine throughput of these computations, but will, in the future, allow hypotheses to be tested that are not currently accessible. Hypotheses about groups of 500-1000 diseased subjects, whose data are now online at the lab, could then be tested. Our initial experiments on 50-100 subjects suggest that such studies are likely to succeed.

New Algorithms Currently Limited by Computational Resources

Many novel brain image analyses combine both of these types of algorithms, using both statistical simulations and numerical optimizations to map brain structure and function in healthy and diseased subjects. Some of these are pushing the limits of our current resources; others are vastly beyond them. These computational hybrids include the simulated annealing used in solving our sets of millions of coupled partial differential equations, and the evolutionary and genetic programming approaches we are using for template matching. Multi-start methods and meta-algorithms are also being used to seed optimizations in different regions of parameter space, and to break down previously intractable problems. These approaches should recover previously undetected anomalies, but they currently tax our available resources and run for many weeks even when parallelized across 32 processors.
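For readers unfamiliar with the approach, here is a minimal, generic sketch of the simulated-annealing acceptance rule, applied to a toy one-dimensional objective rather than to our PDE systems or template-matching problems:

    import math, random

    def simulated_annealing(cost, initial, neighbor, t0=1.0, cooling=0.995, steps=20000):
        """Generic simulated annealing: uphill moves are accepted with
        probability exp(-delta / T), letting the search escape local minima."""
        x, fx = initial, cost(initial)
        best, fbest = x, fx
        t = t0
        for _ in range(steps):
            y = neighbor(x)
            fy = cost(y)
            delta = fy - fx
            if delta < 0 or random.random() < math.exp(-delta / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            t *= cooling          # geometric cooling schedule
        return best, fbest

    # Toy example: minimize a bumpy 1D function
    cost = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
    neighbor = lambda x: x + random.gauss(0.0, 0.5)
    print(simulated_annealing(cost, initial=0.0, neighbor=neighbor))

Multi-start methods follow a similar pattern, launching the optimizer from many random seeds in parameter space and keeping the best result.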

Permutation tests now allow us to use data mining to search for unsuspected patterns, which are subsequently assessed with a rigorous analysis on independent data. Our current data mining experiments assess whether brain structure and function are affected by genetic, demographic, and therapeutic factors. They run all the time, including during the night when most processors are less busy. A typical recent test assessed the significance of the detected effects on gray matter distribution in schizophrenia by running 100,000 randomizations. This procedure tests whether the results could have occurred by chance: subjects are randomly reassigned to groups, and the resulting effects are compared to the detected effect. The process is extremely CPU-intensive: 100,000 null regressions at 32,000 points with 42 subjects take 34-36 hours in a load-balancing queue on an SGI RealityMonster with 32 internal CPUs. This algorithm parallelizes easily. At the end of each iteration it reports how many null simulations have beaten the real map and how many fell below it; it then tabulates a reference distribution and estimates a significance value for the real grouping based on the random groupings.
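A stripped-down sketch of the permutation logic, shown for a single measurement point with the difference in group means as the statistic (the real test repeats this at tens of thousands of cortical points, using regression statistics):

    import numpy as np

    def permutation_test(group_a, group_b, n_permutations=100000, seed=0):
        """Permutation test on the difference in group means. Returns the
        observed statistic and the fraction of random relabelings that give
        an effect at least as large (the estimated p-value)."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([group_a, group_b])
        n_a = len(group_a)
        observed = group_a.mean() - group_b.mean()

        beats = 0  # null simulations whose effect meets or beats the real one
        for _ in range(n_permutations):
            rng.shuffle(pooled)                       # random reassignment to groups
            null_stat = pooled[:n_a].mean() - pooled[n_a:].mean()
            if abs(null_stat) >= abs(observed):
                beats += 1

        p_value = (beats + 1) / (n_permutations + 1)  # add-one to avoid p = 0
        return observed, p_value

    # Hypothetical toy data: gray matter measures for two groups
    patients = np.random.default_rng(1).normal(0.40, 0.05, 24)
    controls = np.random.default_rng(2).normal(0.45, 0.05, 18)
    print(permutation_test(patients, controls, n_permutations=10000))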

Most of our current analyses apply numerical optimization and statistical algorithms in series, or recursively, to datasets that are either large in the number of subjects covered (routinely hundreds) or enormous in their spatial and temporal resolution. The image below shows the system load for our RealityMonster with 32 internal CPUs while running a permutation test in parallel.

[click for larger image]

Why are there such Large CPU Requirements? Won't these Problems go away with Better Algorithms?

The reasons for the large CPU (processor speed) requirements differ from case to case. Large memory requirements (main memory/RAM/cache) for brain databases are considerably easier to understand: data storage and RAM requirements are often linear or polynomial in the image size or database size, irrespective of the analysis performed. For the algorithms themselves, however, CPU time (computational complexity) can be paradoxically high. For example, deforming one brain to match another is a typical operation in measuring structural and functional differences between two subjects:

[more information]

In this warping computation, you may have only two images, and a deformation field with 256-cubed degrees of freedom (actually times 3, as there is a value for each warping vector component at each point in the brain). In other words, the solution is describable using 256 x 256 x 256 x 3, or roughly 50 million, floating point parameters. The CPU requirements, however, are vast in relation to this number: the number of possible solutions is about 1000 to the power 16 million, and exhaustively searching even a tiny fraction of them would occupy the fastest available CPU for longer than the age of the universe (see the arithmetic below). Some of the most advanced routines in numerical analysis have been applied to this problem of finding solutions described by many millions of variables. Conjugate gradient methods, Newton-type methods, and spectral methods search the solution space in a principled fashion, but are prohibitively slow at this scale. Recently we have used randomized methods, which traverse the solution space using jump-diffusion and Monte Carlo sampling, to vastly expand the number of subjects assessed. An increase in the number of CPUs, or in their speed, by a factor of 10 will, in the future, allow us to detect disease-specific patterns in thousands of subjects, rather than a hundred or so.
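Concretely, assuming each grid point's displacement vector can take roughly 1000 distinguishable values (a back-of-the-envelope sketch):

    256^{3} \times 3 \approx 5 \times 10^{7} \ \text{parameters}, \qquad
    1000^{\,256^{3}} = 10^{\,3 \times 256^{3}} \approx 10^{\,5 \times 10^{7}} \ \text{candidate deformation fields}

whereas a processor testing a billion candidates per second for the entire age of the universe (about 4 x 10^17 seconds) would examine fewer than 10^27 of them.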

To determine where specialized hardware is most beneficial, we have made advances in the mathematics, algorithm construction, software libraries, parallel array-processing operations, and parallelization routines that accelerate the processing of images in large databases. Considerable further advances will be made with greater computational resources, allowing vast databases of images to be screened. Encouraging results have already been found with test datasets of up to 1000 subjects, and new cohorts of subjects arrive daily for analysis and meta-analysis.

