Fundamental Numerical
Methods and Data Analysis
by George W. Collins, II



© George W. Collins, II 2003

Table of Contents
List of Figures....................................................................................................................................vi
List of Tables......................................................................................................................................ix
Preface............................................................................................................................xi
Notes to the Internet Edition...................................................................................xiv
1. Introduction and Fundamental Concepts.......................................................................... 1
1.1 Basic Properties of Sets and Groups.......................................................................... 3
1.2 Scalars, Vectors, and Matrices................................................................................... 5
1.3 Coordinate Systems and Coordinate Transformations.............................................. 8
1.4 Tensors and Transformations.................................................................................... 13
1.5 Operators................................................................................................................... 18
Chapter 1 Exercises............................................................................................................... 22
Chapter 1 References and Additional Reading..................................................................... 23
2. The Numerical Methods for Linear Equations and Matrices........................................ 25
2.1 Errors and Their Propagation.................................................................................... 26
2.2 Direct Methods for the Solution of Linear Algebraic Equations............................. 28
a. Solution by Cramer's Rule............................................................................ 28
b. Solution by Gaussian Elimination................................................................ 30
c. Solution by Gauss Jordan Elimination......................................................... 31
d. Solution by Matrix Factorization: The Crout Method................................. 34
e. The Solution of Tri-diagonal Systems of Linear Equations........................ 38
2.3 Solution of Linear Equations by Iterative Methods................................................. 39
a. Solution by The Gauss and Gauss-Seidel Iteration Methods...................... 39
b. The Method of Hotelling and Bodewig ..................................................... 41
c. Relaxation Methods for the Solution of Linear Equations.......................... 44
d. Convergence and Fixed-point Iteration Theory........................................... 46
2.4 The Similarity Transformations and the Eigenvalues and Vectors of a Matrix........................................................................................................................ 48
Chapter 2 Exercises............................................................................................................... 53
Chapter 2 References and Supplemental Reading................................................................ 54
3. Polynomial Approximation, Interpolation, and Orthogonal Polynomials................... 55
3.1 Polynomials and Their Roots.................................................................................... 56
a. Some Constraints on the Roots of Polynomials........................................... 57
b. Synthetic Division......................................................................................... 58
c. The Graffe Root-Squaring Process.............................................................. 60
d. Iterative Methods.......................................................................................... 61
3.2 Curve Fitting and Interpolation................................................................................. 64
a. Lagrange Interpolation................................................................................. 65
b. Hermite Interpolation.................................................................................... 72
c. Splines........................................................................................................... 75
d. Extrapolation and Interpolation Criteria...................................................... 79
3.3 Orthogonal Polynomials........................................................................................... 85
a. The Legendre Polynomials........................................................................... 87
b. The Laguerre Polynomials........................................................................... 88
c. The Hermite Polynomials............................................................................. 89
d. Additional Orthogonal Polynomials............................................................ 90
e. The Orthogonality of the Trigonometric Functions..................................... 92
Chapter 3 Exercises................................................................................................................93
Chapter 3 References and Supplemental Reading.................................................................95
4. Numerical Evaluation of Derivatives and Integrals.........................................................97
4.1 Numerical Differentiation..........................................................................................98
a. Classical Difference Formulae......................................................................98
b. Richardson Extrapolation for Derivatives...................................................100
4.2 Numerical Evaluation of Integrals: Quadrature......................................................102
a. The Trapezoid Rule.....................................................................................102
b. Simpson's Rule.............................................................................................103
c. Quadrature Schemes for Arbitrarily Spaced Functions..............................105
d. Gaussian Quadrature Schemes....................................................................107
e. Romberg Quadrature and Richardson Extrapolation..................................111
f. Multiple Integrals.........................................................................................113
4.3 Monte Carlo Integration Schemes and Other Tricks...............................................115
a. Monte Carlo Evaluation of Integrals...........................................................115
b. The General Application of Quadrature Formulae to Integrals.................117
Chapter 4 Exercises .............................................................................................................119
Chapter 4 References and Supplemental Reading...............................................................120
5. Numerical Solution of Differential and Integral Equations..........................................121
5.1 The Numerical Integration of Differential Equations.............................................122
a. One Step Methods of the Numerical Solution of Differential Equations......................................................................................................123
b. Error Estimate and Step Size Control.........................................................131
c. Multi-Step and Predictor-Corrector Methods.............................................134
d. Systems of Differential Equations and Boundary Value Problems.......................................................................................................138
e. Partial Differential Equations......................................................................146
5.2 The Numerical Solution of Integral Equations........................................................147
a. Types of Linear Integral Equations.............................................................148
b. The Numerical Solution of Fredholm Equations........................................148
c. The Numerical Solution of Volterra Equations..........................................150
d. The Influence of the Kernel on the Solution...............................................154
Chapter 5 Exercises..............................................................................................................156
Chapter 5 References and Supplemental Reading..............................................................158
6. Least Squares, Fourier Analysis, and Related Approximation Norms.......................159
6.1 Legendre's Principle of Least Squares.....................................................................160
a. The Normal Equations of Least Squares.....................................................161
b. Linear Least Squares....................................................................................162
c. The Legendre Approximation.....................................................................164
6.2 Least Squares, Fourier Series, and Fourier Transforms..........................................165
a. Least Squares, the Legendre Approximation, and Fourier Series..............165
b. The Fourier Integral.....................................................................................166
c. The Fourier Transform................................................................................167
d. The Fast Fourier Transform Algorithm......................................................169
6.3 Error Analysis for Linear Least-Squares.................................................................176
a. Errors of the Least Square Coefficients......................................................176
b. The Relation of the Weighted Mean Square Observational Error to the Weighted Mean Square Residual......................................................178
c. Determining the Weighted Mean Square Residual....................................179
d. The Effects of Errors in the Independent Variable.....................................181
6.4 Non-linear Least Squares.........................................................................................182
a. The Method of Steepest Descent.................................................................183
b. Linear approximation of f(aj,x)...................................................................184
c. Errors of the Least Squares Coefficients.....................................................186
6.5 Other Approximation Norms...................................................................................187
a. The Chebyschev Norm and Polynomial Approximation...........................188
b. The Chebyschev Norm, Linear Programming, and the Simplex Method.........................................................................................................189
c. The Chebyschev Norm and Least Squares.................................................190
Chapter 6 Exercises..............................................................................................................192
Chapter 6 References and Supplementary Reading.............................................................194
7. Probability Theory and Statistics.....................................................................................197
7.1 Basic Aspects of Probability Theory.......................................................................200
a. The Probability of Combinations of Events................................................201
b. Probabilities and Random Variables...........................................................202
c. Distributions of Random Variables.............................................................203
7.2 Common Distribution Functions.............................................................................204
a. Permutations and Combinations..................................................................204
b. The Binomial Probability Distribution........................................................205
c. The Poisson Distribution.............................................................................206
d. The Normal Curve.......................................................................................207
e. Some Distribution Functions of the Physical World..................................210
7.3 Moments of Distribution Functions.........................................................................211
7.4 The Foundations of Statistical Analysis..................................................................217
a. Moments of the Binomial Distribution.......................................................218
b. Multiple Variables, Variance, and Covariance...........................................219
c. Maximum Likelihood..................................................................................221
Chapter 7 Exercises.............................................................................................................223
Chapter 7 References and Supplemental Reading...............................................................224
8. Sampling Distributions of Moments, Statistical Tests, and Procedures......................225
8.1 The t, χ2, and F Statistical Distribution Functions..................................226
a. The t-Density Distribution Function...........................................................226
b. The χ2-Density Distribution Function........................................227
c. The F-Density Distribution Function..........................................................229
8.2 The Level of Significance and Statistical Tests......................................................231
a. The "Students" t-Test...................................................................................232
b. The χ2-test....................................................................................................233
c. The F-test.....................................................................................................234
d. Kolmogorov-Smirnov Tests........................................................................235
8.3 Linear Regression, and Correlation Analysis..........................................................237
a. The Separation of Variances and the Two-Variable Correlation Coefficient....................................................................................................238
b. The Meaning and Significance of the Correlation Coefficient..................240
c. Correlations of Many Variables and Linear Regression............................242
d. Analysis of Variance...................................................................243
8.4 The Design of Experiments.....................................................................................246
a. The Terminology of Experiment Design....................................................249
b. Blocked Designs..........................................................................................250
c. Factorial Designs.........................................................................................252
Chapter 8 Exercises...........................................................................................................255
Chapter 8 References and Supplemental Reading.............................................................257
Index.....................................................................................................................................257
List of Figures
Figure 1.1 shows two coordinate frames related by the transformation angles φij. Four coordinates are necessary if the frames are not orthogonal.................................................. 11
Figure 1.2 shows two neighboring points P and Q in two adjacent coordinate systems X and X'. The differential distance between the two is the vector dx. The vectorial distance to the two points is X(P) or X'(P) and X(Q) or X'(Q) respectively.................. 15
Figure 1.3 schematically shows the divergence of a vector field. In the region where the arrows of the vector field converge, the divergence is positive, implying an increase in the source of the vector field. The opposite is true for the region where the field vectors diverge............................................................................................. 19
Figure 1.4 schematically shows the curl of a vector field. The direction of the curl is determined by the "right hand rule" while the magnitude depends on the rate of change of the x- and y-components of the vector field with respect to y and x.................. 19
Figure 1.5 schematically shows the gradient of the scalar dot-density in the form of a number of vectors at randomly chosen points in the scalar field. The direction of the gradient points in the direction of maximum increase of the dot-density, while the magnitude of the vector indicates the rate of change of that density. ................. 20
Figure 3.1 depicts a typical polynomial with real roots. Construct the tangent to the curve at the point xk and extend this tangent to the x-axis. The crossing point xk+1 represents an improved value for the root in the Newton-Raphson algorithm. The point xk-1 can be used to construct a secant providing a second method for finding an improved value of x.......................................................................... 62
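The construction in this caption is the geometric picture behind the Newton-Raphson and secant iterations of Chapter 3. The following minimal sketch is not taken from the text; the cubic p(x) = x³ - 2x - 5 and the starting guesses are arbitrary illustrative choices.

```python
# Newton-Raphson: replace x_k by the x-intercept of the tangent at (x_k, f(x_k)).
# Secant: use the line through the two most recent points instead of the tangent.
def newton_raphson(f, fprime, x, steps=6):
    for _ in range(steps):
        x = x - f(x) / fprime(x)        # tangent crosses the x-axis at the new x
    return x

def secant(f, x_prev, x, steps=8):
    for _ in range(steps):
        f_prev, f_cur = f(x_prev), f(x)
        if f_cur == f_prev:             # secant is horizontal; stop iterating
            break
        x_prev, x = x, x - f_cur * (x - x_prev) / (f_cur - f_prev)
    return x

p  = lambda x: x**3 - 2.0*x - 5.0       # illustrative polynomial, not from the text
dp = lambda x: 3.0*x**2 - 2.0
print(newton_raphson(p, dp, 2.0))       # ~2.0945515 (the real root)
print(secant(p, 1.0, 2.0))              # converges to the same root without dp
```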
Figure 3.2 shows the behavior of the data from Table 3.1. The results of various forms of interpolation are shown. The approximating polynomials for the linear and parabolic Lagrangian interpolation are specifically displayed. The specific results for cubic Lagrangian interpolation, weighted Lagrangian interpolation and interpolation by rational first degree polynomials are also indicated. ......................... 69
Figure 4.1 shows a function whose integral from a to b is being evaluated by the trapezoid rule. In each interval Δxi the function is approximated by a straight line........................................................................................................................................103
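The straight-line construction in Figure 4.1 is the composite trapezoid rule of Section 4.2a. A minimal sketch, assuming an illustrative integrand and interval that are not from the text:

```python
import math

# Composite trapezoid rule: join the sampled points by straight lines and sum
# the trapezoid areas over the n subintervals of [a, b].
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return h * total

print(trapezoid(math.sin, 0.0, math.pi, 100))   # ~1.9998; the exact integral is 2
```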
Figure 4.2 shows the variation of a particularly complicated integrand. Clearly it is not a polynomial and so could not be evaluated easily using standard quadrature formulae. However, we may use Monte Carlo methods to determine the ratio of the area under the curve to the area of the enclosing rectangle................................................117
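The area-ratio trick described in this caption is the hit-or-miss Monte Carlo scheme of Section 4.3a. The sketch below is not the book's example; the integrand and the bounding rectangle height are arbitrary stand-ins.

```python
import math, random

# Hit-or-miss Monte Carlo: throw random points into a rectangle that encloses
# the curve and scale the rectangle's area by the fraction landing under f.
def monte_carlo_integral(f, a, b, f_max, n=100_000):
    hits = 0
    for _ in range(n):
        x = random.uniform(a, b)
        y = random.uniform(0.0, f_max)
        if y <= f(x):
            hits += 1
    return (b - a) * f_max * hits / n

# A deliberately non-polynomial integrand, in the spirit of Figure 4.2.
f = lambda x: math.exp(-x) * abs(math.sin(5.0 * x))
print(monte_carlo_integral(f, 0.0, 2.0, 1.0))
```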
Figure 5.1 shows the solution space for the differential equation y' = g(x,y). Since the initial value is different for different solutions, the space surrounding the solution of choice can be viewed as being full of alternate solutions. The two dimensional Taylor expansion of the Runge-Kutta method explores this solution space to obtain a higher order value for the specific solution in just one step....................127
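The single-step sampling of the solution space described in this caption can be illustrated with the second-order (midpoint) Runge-Kutta step, which evaluates g(x, y) at a trial point inside the step before committing to the update. This is a minimal sketch, not the book's derivation; the test equation y' = y is an arbitrary choice.

```python
# One second-order (midpoint) Runge-Kutta step for y' = g(x, y).
def rk2_step(g, x, y, h):
    k1 = g(x, y)                        # slope at the start of the step
    k2 = g(x + 0.5*h, y + 0.5*h*k1)     # slope at a trial midpoint in the solution space
    return y + h * k2

g = lambda x, y: y                      # test equation with exact solution e^x
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):                     # integrate from x = 0 to x = 1
    y = rk2_step(g, x, y, h)
    x += h
print(y)                                # ~2.714, close to e = 2.71828...
```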
Figure 5.2 shows the instability of a simple predictor scheme that systematically underestimates the solution leading to a cumulative build up of truncation error..............135
Figure 6.1 compares the discrete Fourier transform of the function e-│x│ with the continuous transform for the full infinite interval. The oscillatory nature of the discrete transform largely results from the small number of points used to represent the function and the truncation of the function at t = ±2. The only points in the discrete transform that are even defined are denoted by ...............................173
Figure 6.2 shows the parameter space defined by the φj(x)'s. Each f(aj,xi) can be represented as a linear combination of the φj(xi) where the aj are the coefficients of the basis functions. Since the observed variables Yi cannot be expressed in terms of the φj(xi), they lie out of the space.........................................................................180
Figure 6.3 shows the χ2 hypersurface defined on the aj space. The non-linear least square seeks the minimum regions of that hypersurface. The gradient method moves the iteration in the direction of steepest descent based on local values of the derivative, while surface fitting tries to locally approximate the function in some simple way and determines the local analytic minimum as the next guess for the solution......................................................................................................................184
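The gradient method mentioned in this caption amounts to steepest descent on the χ2 surface. A minimal sketch follows, assuming a simple quadratic surface and a fixed step size that are illustrative only and not taken from the text.

```python
# Steepest descent: repeatedly step the parameters a_j opposite the local gradient.
def grad_descent(grad, a, rate=0.1, steps=200):
    for _ in range(steps):
        g = grad(a)
        a = [ai - rate * gi for ai, gi in zip(a, g)]
    return a

# Illustrative surface chi2(a) = (a0 - 3)^2 + 4*(a1 + 1)^2, minimized at (3, -1).
grad_chi2 = lambda a: [2.0 * (a[0] - 3.0), 8.0 * (a[1] + 1.0)]
print(grad_descent(grad_chi2, [0.0, 0.0]))   # approaches [3.0, -1.0]
```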
Figure 6.4 shows the Chebyschev fit to a finite set of data points. In panel a the fit is with a constant a0 while in panel b the fit is with a straight line of the form f(x) = a1 x + a0. In both cases, the adjustment of the parameters of the function can only produce n+2 maximum errors for the (n+1) free parameters...............................188
Figure 6.5 shows the parameter space for fitting three points with a straight line under the Chebyschev norm. The equations of condition denote half-planes which satisfy the constraint for one particular point.......................................................................189
Figure 7.1 shows a sample space giving rise to events E and F. In the case of the die, E is the probability of the result being less than three and F is the probability of the result being even. The intersection of circle E with circle F represents the probability of E and F [i.e. P(EF)]. The union of circles E and F represents the probability of E or F. If we were to simply sum the area of circle E and that of F we would double count the intersection...........................................................................202
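The die example in this caption can be checked by direct enumeration. The short sketch below is not from the text; it simply verifies that P(E or F) = P(E) + P(F) - P(EF), i.e. that adding the two areas alone would double count the intersection.

```python
from fractions import Fraction

# Enumerate the six equally likely outcomes of a fair die.
outcomes = range(1, 7)
E = {x for x in outcomes if x < 3}       # result less than three: {1, 2}
F = {x for x in outcomes if x % 2 == 0}  # result even: {2, 4, 6}

P = lambda event: Fraction(len(event), 6)
print(P(E), P(F), P(E & F), P(E | F))            # 1/3, 1/2, 1/6, 2/3
assert P(E | F) == P(E) + P(F) - P(E & F)        # inclusion-exclusion holds
```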
Figure 7.2 shows the normal curve approximation to the binomial probability distribution function. We have chosen the coin tosses so that p = 0.5. Here μ and σ can be seen as the most likely value of the random variable x and the 'width' of the curve respectively. The tail end of the curve represents the region approximated by the Poisson distribution............................................................................209
Figure 7.3 shows the mean of a function f(x) as <x>. Note this is not the same as the most likely value of x as was the case in figure 7.2. However, in some real sense σ is still a measure of the width of the function. The skewness is a measure of the asymmetry of f(x) while the kurtosis represents the degree to which the f(x) is 'flattened' with respect to a normal curve. We have also marked the location of the values for the upper and lower quartiles, median and mode.....................................................................................................................................214
Figure 8.1 shows a comparison between the normal curve and the t-distribution function for N = 8. The symmetric nature of the t-distribution means that the mean, median, mode, and skewness will all be zero while the variance and kurtosis will be slightly larger than their normal counterparts. As N → ∞, the t-distribution approaches the normal curve with unit variance...........................................227
Figure 8.2 compares the χ2-distribution with the normal curve. For N=10 the curve is quite skewed near the origin with the mean occurring past the mode (χ2 = 8). The Normal curve has μ = 8 and σ2 = 20. For large N, the mode of the χ2-distribution approaches half the variance and the distribution function approaches a normal curve with the mean equal to the mode.................................................228
Figure 8.3 shows the probability density distribution function for the F-statistic with values of N1 = 3 and N2 = 5 respectively. Also plotted are the limiting distribution functions f(χ2/N1) and f(t2). The first of these is obtained from f(F) in the limit of N2 → ∞. The second arises when N1 = 1. One can see the tail of the f(t2) distribution approaching that of f(F) as the value of the independent variable increases. Finally, the normal curve which all distributions approach for large values of N is shown with a mean equal to the mean of f(F) and a variance equal to the variance for f(F)....................................................................................................................220
Figure 8.4 shows a histogram of the sampled points xi and the cumulative probability of obtaining those points. The Kolmogorov-Smirnov tests compare that probability with another known cumulative probability and ascertain the odds that the differences occurred by chance...............................................................................237
Figure 8.5 shows the regression lines for the two cases where the variable X2 is regarded as the dependent variable (panel a) and the variable X1 is regarded as the dependent variable (panel b). ........................................................................................240
List of Tables
Table 2.1 Convergence of Gauss and Gauss-Seidel Iteration Schemes................................... 41
Table 2.2 Sample Iterative Solution for the Relaxation Method.............................................. 46
Table 3.1 Sample Data and Results for Lagrangian Interpolation Formulae.......................... 67
Table 3.2 Parameters for the Polynomials Generated by Neville's Algorithm........................ 71
Table 3.3 A Comparison of Different Types of Interpolation Formulae................................. 79
Table 3.4 Parameters for Quotient Polynomial Interpolation.................................................. 83
Table 3.5 The First Five Members of the Common Orthogonal Polynomials........................ 90
Table 3.6 Classical Orthogonal Polynomials of the Finite Interval......................................... 91
Table 4.1 A Typical Finite Difference Table for f(x) = x2........................................................99
Table 4.2 Types of Polynomials for Gaussian Quadrature.....................................................110
Table 4.3 Sample Results for Romberg Quadrature................................................................112
Table 4.4 Test Results for Various Quadrature Formulae.......................................................113
Table 5.1 Results for Picard's Method.....................................................................................125
Table 5.2 Sample Runge-Kutta Solutions................................................................................130
Table 5.3 Solutions of a Sample Boundary Value Problem for Various Orders of Approximation.........................................................................................................145
Table 5.4 Solutions of a Sample Boundary Value Problem Treated as an Initial Value Problem..........................................................................................................145
Table 5.5 Sample Solutions for a Type 2 Volterra Equation..................................................152
Table 6.1 Summary Results for a Sample Discrete Fourier Transform..................................172
Table 6.2 Calculations for a Sample Fast Fourier Transform.................................................175
Table 7.1 Grade Distribution for Sample Test Results............................................................215
Table 7.2 Examination Statistics for the Sample Test.............................................................215
Table 8.1 Sample Beach Statistics for Correlation Example..................................................241
Table 8.2 Factorial Combinations for Two-level Experiments with n=2-4............................253
Preface
• • •
The origins of this book can be found years ago when I was a doctoral candidate working on my thesis and finding that I needed numerical tools that I should have been taught years before. In the intervening decades, little has changed except for the worse. All fields of science have undergone an information explosion while the computer revolution has steadily and irrevocably been changing our lives. Although the crystal ball of the future is at best "seen through a glass darkly", most would declare that the advent of the digital electronic computer will change civilization to an extent not seen since the coming of the steam engine. Computers with the power that could be offered only by large institutions a decade ago now sit on the desks of individuals. Methods of analysis that were only dreamed of three decades ago are now used by students to do homework exercises. Entirely new methods of analysis have appeared that take advantage of computers to perform logical and arithmetic operations at great speed. Perhaps students of the future may regard the multiplication of two two-digit numbers without the aid of a calculator in the same vein that we regard the formal extraction of a square root. The whole approach to scientific analysis may change with the advent of machines that communicate orally. However, I hope the day never arrives when the investigator no longer understands the nature of the analysis done by the machine.
Unfortunately instruction in the uses and applicability of new methods of analysis rarely appears in the curriculum. This is no surprise as such courses in any discipline always are the last to be developed. In rapidly changing disciplines this means that active students must fend for themselves. With numerical analysis this has meant that many simply take the tools developed by others and apply them to problems with little knowledge as to the applicability or accuracy of the methods. Numerical algorithms appear as neatly packaged computer programs that are regarded by the user as "black boxes" into which they feed their data and from which come the publishable results. The complexity of many of the problems dealt with in this manner makes determining the validity of the results nearly impossible. This book is an attempt to correct some of these problems.
Some may regard this effort as a survey and to that I would plead guilty. But I do not regard the word survey as pejorative, for to survey, condense, and collate the knowledge of man is one of the responsibilities of the scholar. There is an implication inherent in this responsibility that the information be made more comprehensible so that it may more readily be assimilated. The extent to which I have succeeded in this goal I will leave to the reader. The discussion of so many topics may be regarded by some to be an impossible task. However, the subjects I have selected have all been required of me during my professional career and I suspect most research scientists would make a similar claim.
Unfortunately few of these subjects were ever covered in even the introductory level of treatment given here during my formal education and certainly they were never placed within a coherent context of numerical analysis.
The basic format of the first chapter is a very wide ranging view of some concepts of mathematics based loosely on axiomatic set theory and linear algebra. The intent here is not so much to provide the specific mathematical foundation for what follows, which is done as needed throughout the text, but rather to establish what I call, for lack of a better term, "mathematical sophistication". There is a general acquaintance with mathematics that a student should have before embarking on the study of numerical methods. The student should realize that there is a subject called mathematics which is artificially broken into sub-disciplines such as linear algebra, arithmetic, calculus, topology, set theory, etc. All of these disciplines are related and the sooner the student realizes that and becomes aware of the relations, the sooner mathematics will become a convenient and useful language of scientific expression. The ability to use mathematics in such a fashion is largely what I mean by "mathematical sophistication". However, this book is primarily intended for scientists and engineers, so while there is a certain familiarity with mathematics that is assumed, the rigor that one expects with a formal mathematical presentation is lacking. Very little is proved in the traditional mathematical sense of the word. Indeed, derivations are resorted to mainly to emphasize the assumptions that underlie the results. However, when derivations are called for, I will often write several forms of the same expression on the same line. This is done simply to guide the reader in the direction of a mathematical development. I will often give "rules of thumb" for which there is no formal proof. However, experience has shown that these "rules of thumb" almost always apply. This is done in the spirit of providing the researcher with practical ways to evaluate the validity of his or her results.
The basic premise of this book is that it can serve as the basis for a wide range of courses that discuss numerical methods used in science. It is meant to support a series of lectures, not replace them. To reflect this, the subject matter is wide ranging and perhaps too broad for a single course. It is expected that the instructor will neglect some sections and expand on others. For example, the social scientist may choose to emphasize the chapters on interpolation, curve-fitting and statistics, while the physical scientist would stress those chapters dealing with numerical quadrature and the solution of differential and integral equations. Others might choose to spend a large amount of time on the principle of least squares and its ramifications. All these approaches are valid and I hope all will be served by this book. While it is customary to direct a book of this sort at a specific pedagogic audience, I find that task somewhat difficult. Certainly advanced undergraduate science and engineering students will have no difficulty dealing with the concepts and level of this book. However, it is not at all obvious that second year students couldn't cope with the material. Some might suggest that they have not yet had a formal course in differential equations at that point in their career and are therefore not adequately prepared. However, it is far from obvious to me that a student’s first encounter with differential equations should be in a formal mathematics course. Indeed, since most equations they are liable to encounter will require a numerical solution, I feel the case can be made that it is more practical for them to be introduced to the subject from a graphical and numerical point of view. Thus, if the instructor exercises some care in the presentation of material, I see no real barrier to using this text at the second year level in some areas. In any case I hope that the student will at least be exposed to the wide range of the material in the book lest he feel that numerical analysis is limited only to those topics of immediate interest to his particular specialty.
Nowhere is this philosophy better illustrated than in the first chapter where I deal with a wide range of mathematical subjects. The primary objective of this chapter is to show that mathematics is "all of a piece". Here the instructor may choose to ignore much of the material and jump directly to the solution of linear equations and the second chapter. However, I hope that some consideration would be given to discussing the material on matrices presented in the first chapter before embarking on their numerical manipulation. Many will feel the material on tensors is irrelevant and will skip it. Certainly it is not necessary to understand covariance and contravariance or the notion of tensor and vector densities in order to numerically interpolate in a table of numbers. But those in the physical sciences will generally recognize that they encountered tensors for the first time too late in their educational experience and that they form the fundamental basis for understanding vector algebra and calculus. While the notions of set and group theory are not directly required for the understanding of cubic splines, they do form a unifying basis for much of mathematics. Thus, while I expect most instructors will heavily select the material from the first chapter, I hope they will encourage the students to at least read through the material so as to reduce their surprise when they see it again.
The next four chapters deal with fundamental subjects in basic numerical analysis. Here, and throughout the book, I have avoided giving specific programs that carry out the algorithms that are discussed. There are many useful and broadly based programs available from diverse sources. To pick specific packages or even specific computer languages would be to unduly limit the student's range and selection. Excellent packages are contained in the IMSL library, and one should not overlook the excellent collection provided along with the book by Press et al. (see reference 4 at the end of Chapter 2). In general, collections compiled by users should be preferred, for they have at least been screened initially for efficacy.
Chapter 6 is a lengthy treatment of the principle of least squares and associated topics. I have found that algorithms based on least squares are among the most widely used and poorest understood of all algorithms in the literature. Virtually all students have encountered the concept, but very few see and understand its relationship to the rest of numerical analysis and statistics. Least squares also provides a logical bridge to the last chapters of the book. Here the huge field of statistics is surveyed with the hope of providing a basic understanding of the nature of statistical inference and how to begin to use statistical analysis correctly and with confidence. The foundation laid in Chapter 7 and the tests presented in Chapter 8 are not meant to be a substitute for a proper course of study in the subject. However, it is hoped that the student unable to fit such a course in an already crowded curriculum will at least be able to avoid the pitfalls that trap so many who use statistical analysis without the appropriate care.
Throughout the book I have tried to provide examples integrated into the text of the more difficult algorithms. In testing an earlier version of the book, I found myself spending most of my time with students giving examples of the various techniques and algorithms. Hopefully this initial shortcoming has been overcome. It is almost always appropriate to carry out a short numerical example of a new method so as to test the logic being used for the more general case. The problems at the end of each chapter are meant to be generic in nature so that the student is not left with the impression that this algorithm or that is only used in astronomy or biology. It is a fairly simple matter for an instructor to find examples in diverse disciplines that utilize the techniques discussed in each chapter. Indeed, the student should be encouraged to undertake problems in disciplines other than his/her own if for no other reason than to find out about the types of problems that concern those disciplines.
Here and there throughout the book, I have endeavored to convey something of the philosophy of numerical analysis along with a little of the philosophy of science. While this is certainly not the central theme of the book, I feel that some acquaintance with the concepts is essential to anyone aspiring to a career in science. Thus I hope those ideas will not be ignored by the student on his/her way to find some tool to solve an immediate problem. The philosophy of any subject is the basis of that subject and to ignore it while utilizing the products of that subject is to invite disaster.
There are many people who knowingly and unknowingly had a hand in generating this book. Those at the Numerical Analysis Department of the University of Wisconsin who took a young astronomy student and showed him the beauty of this subject while remaining patient with his bumbling understanding have my perpetual gratitude. My colleagues at The Ohio State University years ago also saw the need for the presentation of this material and provided the environment for the development of a formal course in the subject. Special thanks are due Professor Philip C. Keenan who encouraged me to include the sections on statistical methods in spite of my shortcomings in this area. Peter Stoychoeff has earned my gratitude by turning my crude sketches into clear and instructive drawings. Certainly the students who suffered through this book as an experimental text have my admiration as well as my thanks.
George W. Collins, II
September 11, 1990
A Note Added for the Internet Edition
A significant amount of time has passed since I first put this effort together. Much has changed in Numerical Analysis. Researchers now seem often content to rely on packages prepared by others even more than they did a decade ago. Perhaps this is the price to be paid by tackling increasingly ambitious problems. Also the advent of very fast and cheap computers has enabled investigators to use inefficient methods and still obtain answers in a timely fashion. However, with the avalanche of data about to descend on more and more fields, it does not seem unreasonable to suppose that numerical tasks will overtake computing power and there will again be a need for efficient and accurate algorithms to solve problems. I suspect that many of the techniques described herein will be rediscovered before the new century concludes. Perhaps efforts such as this will still find favor with those who wish to know if numerical results can be believed.
George W. Collins, II
January 30, 2001
A Further Note for the Internet Edition
Since I put up a version of this book two years ago, I have found numerous errors which largely resulted from the generations of word processors through which the text evolved. During the last effort, not all the fonts used by the text were available in the word processor and PDF translator. This led to errors that were more widespread than I realized. Thus, the main force of this effort is to bring some uniformity to the various software codes required to generate the version that will be available on the internet. Having spent some time converting Fundamentals of Stellar Astrophysics and The Virial Theorem in Stellar Astrophysics to Internet compatibility, I have learned to better understand the problems of taking old manuscripts and setting them in the contemporary format. Thus I hope this version of my Numerical Analysis book will be more error-free and therefore usable. Will I have found all the errors? That is most unlikely, but I can assure the reader that the number of those errors is significantly reduced from the earlier version. In addition, I have attempted to improve the presentation of the equations and other aspects of the book so as to make it more attractive to the reader. All of the software coding for the index was lost during the travels through various word processors. Therefore, the current version was prepared by means of a page comparison between an earlier correct version and the current presentation. Such a table has an intrinsic error of at least ± 1 page and the index should be used with that in mind. However, it should be good enough to guide the reader to the general area of the desired subject.
Having re-read the earlier preface and note I wrote, I find I still share the sentiments expressed therein. Indeed, I find the flight of the student to “black-box” computer programs to obtain solutions to problems has proceeded even faster than I thought it would. Many of these programs such as MATHCAD are excellent and provide quick and generally accurate ‘first looks’ at problems. However, the researcher would be well advised to understand the methods used by the “black-boxes” to solve their problems. This effort still provides the basis for many of the operations contained in those commercial packages and it is hoped will provide the researcher with the knowledge of their applicability to his/her particular problem. However, it has occurred to me that there is an additional view provided by this book. Perhaps, in the future, a historian may wonder what sort of numerical skills were expected of a researcher in the mid twentieth century. In my opinion, the contents of this book represent what I feel scientists and engineers of the mid twentieth century should have known and many did. I am confident that the knowledge-base of the mid twenty first century scientist will be quite different. One can hope that the difference will represent an improvement.
Finally, I would like to thank John Martin and Charles Knox, who helped me adapt this version for the Internet, and the Astronomy Department at the Case Western Reserve University for making the server-space available for the PDF files. As is the case with other books I have put on the Internet, I encourage anyone who is interested to download the PDF files as they may be of use to them. I would only request that they observe the courtesy of proper attribution should they find my efforts to be of use.
George W. Collins, II
April, 2003
Case Western Reserve University
Index
A
Adams-Bashforth-Moulton Predictor-Corrector..
Characteristic values........................................49
.......................................................................136
of a matrix............................................49
Analysis of variance...............................220, 245
Characteristic vectors.......................................49
design matrix for...............................243
of a matrix.............................................49
for one factor.....................................242
Chebyschev polynomials.................................90
Anti-correlation: meaning of..........................239
of the first kind................................................91
Approximation norm......................................174
of the second kind.....................................91
Arithmetic mean.............................................222
recurrence relation...................................91
Associativity defined.........................................3
relations between first and second kind........................................91
Average..........................................................211
Axial vectors....................................................11
Chebyshev norm
and least squares...............................190
B
defined...............................................186
Babbitt................................................................1
Chi square
Back substitution..............................................30
defined...............................................227
Bairstow's method for polynomials
distribution and analysis of variance.244
.........................................................................62
normalized.........................................227
Bell-shaped curve and the normal curve …… 209
statistic for large N............................230
Binomial coefficient …………….……...99, 204
Chi-square test
Binomial distribution function...............204, 207
confidence limits for.........................232
Binomial series...............................................204
defined..............................................232
Binomial theorem...........................................205
meaning of........................................232
Bivariant distribution.....................................219
Cofactor of a matrix.........................................28
Blocked data and experiment design……… 272
Combination
defined..............................................204
Bodewig...........................................................40
Communitative law............................................3
Bose-Einstein distribution function...............210
Complimentary error function.......................233
Boundary value problem................................122
Confidence level
a sample solution...............................140
defined..............................................231
compared to an initial value problem145
defined..............................................139
and percentiles..................................232
for correlation
Bulirsch-Stoer method...................................136
coefficients................................241, 242
for the F-test.....................................234
C
Confounded interactions
defined..............................................250
Cantor, G............................................................3
Constants of integration for ordinary differential
Cartesian coordinates...................................8, 12
equations...........................................122
Causal relationship and correlation 239, 240
Central difference operator
defined................................................99
Contravariant vector…………............... …… 16
Convergence of Gauss-Seidel iteration...........47
Convergent iterative function
Characteristic equation ....................................49
259
criterion for.........................................46
of a matrix..............................................49
Index
Degrees of freedom
Coordinate transformation.................................8
Corrector
Adams-Moulton................................136
and correlation..................................241
defined..............................................221
Correlation coefficient
and causality.....................................241
for binned data..................................236
and covariance..................................242
for the F-statistic...............................230
for the F-test.....................................233
for the t-distribution..........................227
and least squares...............................242
in analysis of variance......................244
defined..............................................239
Del operator.....................................................19
for many variables.............................241
(see Nabula)
for the parent population...................241
meaning of................................239, 240
symmetry of......................................242
Derivative from Richardson extrapolation....100
Descartes's rule of signs...................................57
Design matrix
Covariance.....................................................219
and the correlation coefficient..........241
coefficient of.....................................219
for analysis of variance.....................243
Determinant
calculation by Gauss-Jordan
of a symmetric function....................220
Covariant vectors
definition.............................................17
Method................................................33
of a matrix.............................................7
transformational invariance of……… 47
Cramer's rule....................................................28
Deviation
Cross Product...................................................11
from the mean...................................238
Crout Method...................................................34
statistics of........................................237
example of...........................................35
Cubic splines
constraints for.....................................75
Difference operator
definition.............................................19
Differential equations
Cumulative probability and KS tests.............235
and linear 2-point boundary
Cumulative probability distribution
value problems..................................139
of the parent population....................235
Curl..................................................................19
definition of.........................................19
Bulirsch-Stoer method...................................136
error estimate for..............................130
Curve fitting
defined................................................64
ordinary, defined...............................121
with splines.........................................75
partial................................................145
solution by one-step methods...........122
solution by predictor-corrector
D
methods.............................................134
solution by Runga-Kutta method...…126
Degree
step size control................................130
of a partial differential equation........146
systems of.........................................137
of an ordinary differential equation..121
Degree of precision
defined..............................................102
Dimensionality of a vector.................................4
Dirac delta function
as a kernel for an integral
for Gaussian quadrature....................106
equation...........................................155
for Simpson's rule.............................104
Directions cosines..............................................9
for the Trapezoid rule........................103
Dirichlet conditions
    for Fourier series, 166
Dirichlet's theorem, 166
Discrete Fourier transform, 169
Distribution function
    for chi-square, 227
    for the t-statistic, 226
    of the F-statistic, 229
Divergence, 19
    definition of, 19
Double-blind experiments, 246
E
Effect
    defined for analysis of variance, 244
Eigen equation, 49
    of a matrix, 49
Eigen-vectors, 49
    of a matrix, 49
    sample solution for, 50
Eigenvalues
    of a matrix, 48, 49
    sample solution for, 50
Equal interval quadrature, 112
Equations of condition
    for quadrature weights, 106
Error analysis
    for non-linear least squares, 186
    for the corrector in ODEs, 136
Error function, 232
Euler formula for complex numbers, 168
Expectation value, 221
    defined, 202
Experiment design, 245
    terminology for, 249
    using a Latin square, 251
Experimental area, 249
Extrapolation, 77, 78
F
F-distribution function
    defined, 227
F-statistic, 230
    and analysis of variance, 244
    for large N, 230
F-test
    and least squares, 234
    defined, 233
    for an additional parameter, 234
    meaning of, 234
Factor
    in analysis of variance, 242
    of an experiment, 249
Factored form
    of a polynomial, 56
Factorial design, 249
Fast Fourier Transform, 92, 168
Fermi-Dirac distribution function, 210
Field
    definition, 5
    scalar, 5
    vector, 5
Finite difference calculus
    fundamental theorem of, 98
Finite difference operator
    use for numerical differentiation, 98
First-order variances
    defined, 237
Fixed-point
    defined, 46
Fixed-point iteration theory, 46
    and integral equations, 153
    and non-linear least squares, 182, 186
    and Picard's method, 123
Fourier analysis, 164
Fourier integral, 167
Fourier series, 92, 160
    and the discrete Fourier transform, 169
    coefficients for, 165
    convergence of, 166
Fourier transform, 92, 164
    defined, 167
    for a discrete function, 169
    inverse of, 168
Fredholm equation
    defined, 146
    solution by iteration, 153
    solution of Type 1, 147
    solution of Type 2, 148
Freedom
    degrees of, 221
Fundamental theorem of algebra, 56
G
Galton, Sir Francis, 199
Gauss, C.F., 106, 198
Gauss elimination
    and tri-diagonal equations, 38
Gauss Jordan Elimination, 30
Gauss-Chebyshev quadrature
    and multi-dimension quadrature, 114
Gauss-Hermite quadrature, 114
Gauss-iteration scheme
    example of, 40
Gauss-Jordan matrix inversion
    example of, 32
Gauss-Laguerre quadrature, 117
Gauss-Legendre quadrature, 110
    and multi-dimension quadrature, 115
Gauss-Seidel Iteration, 39
    example of, 40
Gaussian Elimination, 29
Gaussian error curve, 210
Gaussian quadrature, 106
    compared to other quadrature formulae, 112
    compared with Romberg quadrature, 111
    degree of precision for, 107
    in multiple dimensions, 113
    specific example of, 108
Gaussian-Chebyshev quadrature, 110
Gegenbauer polynomials, 91
Generating function for orthogonal polynomials, 87
Gossett, 233
Gradient, 19
    definition of, 19
    of the Chi-squared surface, 183
H
Heisenberg Uncertainty Principle, 211
Hermite interpolation, 72
    as a basis for Gaussian quadrature, 106
Hermite Polynomials, 89
    recurrence relation, 89
Hermitian matrix
    definition, 6
Higher order differential equations as systems of first order equations, 140
Hildebrandt, 33
Hollerith, 1
Hotelling, 40
Hotelling and Bodewig method
    example of, 42
Hyper-efficient quadrature formula
    for one dimension, 103
    in multiple dimensions, 115
Hypothesis testing and analysis of variance, 245
I
Identity operator, 99
Initial values for differential equations, 122
Integral equations
    defined, 146
    homogeneous and inhomogeneous, 147
    linear types, 147
Integral transforms, 168
Interaction effects and experimental design, 251
Interpolation
    by a polynomial, 64
    general theory, 63
Interpolation formula as a basis for quadrature formulae, 104
Interpolative polynomial
    example of, 68
Inverse, 3
    of a Fourier Transform, 168
Iterative function
    convergence of, 46
    defined, 46
    multidimensional, 46
Iterative Methods
    and linear equations, 39
J
Jacobi polynomials, 91
    and multi-dimension Gaussian quadrature, 114
Jacobian, 113
Jenkins-Taub method for polynomials, 63
K
Kernel of an integral equation, 148
    and uniqueness of the solution, 154
    effect on the solution, 154
Kolmogorov-Smirnov tests, 235
    Type 1, 236
    Type 2, 236
Kronecker delta, 9, 41, 66
    definition, 6
Kurtosis, 212
    of a function, 213
    of the normal curve, 218
    of the t-distribution, 226
L
Lagrange Interpolation, 64
    and quadrature formulae, 103
Lagrange polynomials
    for equal intervals, 66
    relation to Gaussian quadrature, 107
    specific examples of, 66
Lagrangian interpolation
    and numerical differentiation, 99
    weighted form, 84
Laguerre Polynomials, 88
    recurrence relation, 89
Laplace transform
    defined, 168
Latin square
    defined, 251
Least square coefficients
    errors of, 176, 221
Least Square Norm
    defined, 160
Least squares
    and analysis of variance, 243
    and correlation coefficients, 236
    and maximum likelihood, 222
    and regression analysis, 199
    and the Chebyshev norm, 190
    for linear functions, 161
    for non-linear problems, 181
    with errors in the independent variable, 181
Legendre, A., 160, 198
Legendre Approximation, 160, 164
    for unequally spaced data, 165
    for weighted data, 163
    matrix development for tensor product, 162
Legendre Polynomials, 87
    for Gaussian quadrature, 108
    recurrence relation, 87
Lehmer-Schur method for polynomials, 63
Leibnitz, 97
Levels of confidence
    defined, 231
Levi-Civita Tensor, 14
    definition, 14
Likelihood
    defined, 221
    maximum value for, 221
Linear correlation, 236
Linear equations
    formal solution for, 28
Linear Programming, 190
    and the Chebyshev norm, 190
Linear transformations, 8
Logical 'and', 200
Logical 'or', 200
M
Macrostate, 210
Main effects and experimental design, 251
Matrix
    definition, 6
    factorization, 34
Matrix inverse
    improvement of, 41
Matrix product
    definition, 6
Maximum likelihood
    and analysis of variance, 243
    of a function, 222
Maxwell-Boltzmann statistics, 210
Mean, 211, 212
    distribution of, 225
    of a function, 211, 212
    of the F-statistic, 230
    of the normal curve, 218
    of the t-distribution, 226
Mean square error
    and Chi-square, 227
    statistical interpretation of, 238
Mean square residual (see mean square error)
    determination of, 179
Median
    defined, 214
    of the normal curve, 218
Microstate, 210
Milne predictor, 136
Mini-max norm, 186
    (see also Chebyshev norm)
Minor of a matrix, 28
Mode, 222
    defined, 213
    of a function, 214
    of chi-square, 227
    of the F-statistic, 230
    of the normal curve, 218
    of the t-distribution, 226
Moment of a function, 211
Monte Carlo methods, 115
    quadrature, 115
Multi-step methods for the solution of ODEs, 134
Multiple correlation, 245
Multiple integrals, 112
Multivariant distribution, 219
N
Nabla, 19
Natural splines, 77
Neville's algorithm for polynomials, 71
Newton, Sir I., 97
Newton-Raphson
    and non-linear least squares, 182
    for polynomials, 61
Non-linear least squares
    errors for, 186
Non-parametric statistical tests
    (see Kolmogorov-Smirnov tests), 236
Normal curve, 209
    and the t-, F-statistics, 230
Normal distribution, 221
    and analysis of variance, 245
Normal distribution function, 209
Normal equations, 161
    for non-linear least squares, 181
    for orthogonal functions, 164
    for the errors of the coefficients, 175
Normal matrices
    defined, 7
    for least squares, 176
Null hypothesis, 230
    for correlation, 240
    for the K-S tests, 235
Numerical differentiation, 97
Numerical integration, 100
O
Operations research, 190
Operator, 18
    central difference, 99
    difference, 19
    differential, 18
    finite difference, 98
    finite difference identity, 99
    identity, 19
    integral, 18
    shift, 19, 99
    summation, 19
    vector, 19
Optimization problems, 199
Order
    for an ordinary differential equation, 121
    of a partial differential equation, 146
    of an approximation, 63
    of convergence, 64
Orthogonal polynomials
    and Gaussian quadrature, 107
    as basis functions for interpolation, 91
    some specific forms for, 90
Orthogonal unitary transformations, 10
Orthonormal functions, 86
Orthonormal polynomials
    defined, 86
Orthonormal transformations, 10, 48
Over relaxation for linear equations, 46
P
Parabolic hypersurface and non-linear least squares, 184
Parametric tests, 235
    (see t-, F-, and chi-square tests)
Parent population, 217, 221, 231
    and statistics, 200
    correlation coefficients in, 239
Partial correlation, 245
Partial derivative
    defined, 146
Partial differential equation, 145
    and hydrodynamics, 145
    classification of, 146
Pauli exclusion principle, 210
Pearson correlation coefficient, 239
Pearson, K., 239
Percent level, 232
Percentile
    defined, 213
    for the normal curve, 218
Permutation
    defined, 204
Personal equation, 246
Photons, 229
Picard's method, 123
Poisson distribution, 207
Polynomial
    factored form for, 56
    general definition, 55
    roots of, 56
Polynomial approximation, 97
    and interpolation theory, 63
    and multiple quadrature, 112
    and the Chebyshev norm, 187
Polynomials
    Chebyshev, 91
    for splines, 76
    Gegenbauer, 90
    Hermite, 90
    Jacobi, 90
    Lagrange, 66
    Laguerre, 89
    Legendre, 87
    orthonormal, 86
    Ultraspherical, 90
Polytope, 190
Power Spectra, 92
Precision of a computer, 25
Predictor
    Adams-Bashforth, 136
    stability of, 134
Predictor-corrector
    for solution of ODEs, 134
Probability
    definition of, 199
Probability density distribution function, 203
    defined, 203
Probable error, 218
Product polynomial
    defined, 113
Proper values, 49
    of a matrix, 49
Proper vectors, 49
    of a matrix, 49
Protocol for a factorial design, 251
Pseudo vectors, 11
Pseudo-tensor, 14
    (see tensor density)
Pythagoras theorem and least squares, 179
Q
Quadrature, 100
    and integral equations, 148
    for multiple integrals, 112
    Monte Carlo, 115
Quadrature weights
    determination of, 105
Quartile
    defined, 214
    upper and lower, 214
Quotient polynomial, 80
    interpolation with, 82
    (see rational function), 80
R
Random variable
    defined, 202
    moments for, 212
Rational function, 80
    and the solution of ODEs, 137
Recurrence relation
    for Chebyshev polynomials, 91
    for Hermite polynomials, 90
    for Laguerre polynomials, 89
    for Legendre polynomials, 87
    for quotient polynomials, 81
    for rational interpolative functions, 81
Recursive formula for Lagrangian polynomials, 68
Reflection transformation, 10
Regression analysis, 217, 220, 236
    and least squares, 199
Regression line, 237
    degrees of freedom for, 241
Relaxation Methods
    for linear equations, 43
Relaxation parameter
    defined, 44
    example of, 44
Residual error
    in least squares, 176
Richardson extrapolation, 99
    or Romberg quadrature, 111
Right hand rule, 11
Romberg quadrature, 111
    compared to other formulae, 112
    including Richardson extrapolation, 112
Roots of a polynomial, 56
Rotation matrices, 12
Rotational Transformation, 11
Roundoff error, 25
Rule of signs, 57
Runge-Kutta algorithm for systems of ODEs, 138
Runge-Kutta method, 126
    applied to boundary value problems, 141
S
Sample set and probability theory, 200
Sample space, 200
Scalar product
    definition, 5
Secant iteration scheme for polynomials, 63
Self-adjoint, 6
Shift operator, 99
Significance
    level of, 230
    meaning of, 230
    of a correlation coefficient, 240
Similarity transformation, 48
    definition of, 50
Simplex method, 190
Simpson's rule
    and Runge-Kutta, 143
    as a hyper-efficient quadrature formula, 104
    compared to other quadrature formulae, 112
    degree of precision for, 104
    derived, 104
    running form of, 105
Singular matrices, 33
Skewness, 212
    of a function, 212
    of chi-square, 227
    of the normal curve, 218
    of the t-distribution, 226
Splines, 75
    specific example of, 77
Standard deviation
    and the correlation coefficient, 239
    defined, 212
    of the mean, 225
    of the normal curve, 218
Standard error of estimate, 218
Statistics
    Bose-Einstein, 210
    Fermi-Dirac, 211
    Maxwell-Boltzmann, 210
Steepest descent for non-linear least squares, 184
Step size
    control of for ODE, 130
Stirling's formula for factorials, 207
Student's t-Test, 233
    (see t-test)
Symmetric matrix, 6
Synthetic Division, 57
    recurrence relations for, 58
T
t-statistic
    defined, 225
    for large N, 230
t-test
    defined, 231
    for correlation coefficients, 242
    for large N, 231
Taylor series
    and non-linear least squares, 183
    and Richardson extrapolation, 99
    and Runge-Kutta method, 126
Tensor densities, 14
Tensor product
    for least square normal equations, 162
Topology, 7
Trace
    of a matrix, 6
    transformational invariance of, 49
Transformation, rotational, 11
Transpose of the matrix, 10
Trapezoid rule, 102
    and Runge-Kutta, 143
    compared to other quadrature formulae, 112
    general form, 111
Treatment and experimental design, 249
Treatment level
    for an experiment, 249
Trials
    and experimental design, 252
    symbology for, 252
Tri-diagonal equations, 38
    for cubic splines, 77
Triangular matrices
    for factorization, 34
Triangular system
    of linear equations, 30
Trigonometric functions
    orthogonality of, 92
Truncation error, 26
    estimate and reduction for ODE, 131
    estimate for differential equations, 130
    for numerical differentiation, 99
U
Unit matrix, 41
Unitary matrix, 6
V
Vandermonde determinant, 65
Variance, 211, 212, 220
    analysis of, 242
    for a single observation, 227
    of a function, 212
    of a single observation, 220
    of chi-square, 227
    of the F-statistic, 230
    of the mean, 220, 225
    of the normal curve, 218
    of the t-distribution, 226
Variances
    and Chi-squared, 227
    first order, 238
    of deviations from the mean, 238
Vector operators, 19
Vector product
    definition, 6
Vector space
    for least squares, 179
Vectors
    contravariant, 16
Venn diagram for combined probability, 202
Volterra equations
    as Fredholm equations, 150
    defined, 146
    solution by iteration, 153
    solution of Type 1, 150
    solution of Type 2, 150
W
Weight function, 86
    for Chebyshev polynomials, 90
    for Gaussian quadrature, 109
    for Gegenbauer polynomials, 90
    for Hermite polynomials, 89
    for Jacobi polynomials, 90
    for Laguerre polynomials, 88
    for Legendre polynomials, 87
Weights for Gaussian quadrature, 108
Y
Yield for an experiment, 249
Z
Zeno's Paradox, 197
Fundamental Numerical Methods and Data Analysis by George W. Collins, II

Downloading it to take a look first. Thanks, OP!
Thanks for sharing. Keep it up!
Great book! Thanks to the OP for sharing!
Thanks very much.
Good book, I'll grab a copy.
Good book. Downloading it to have a look. Thanks, OP!
Bump.
Downloading it to study.
Downloading to take a look first. Thanks for your effort, OP!
Thanks.
Really a great book. Many thanks to the OP.
The OP has gone to a lot of trouble. Thank you; I have a lot to learn from you.
Thanks to the OP for sharing.
Thanks.
A solid professional book and really good material. Thanks a lot for sharing.
I'll have a look and study it. Thanks.
Good material. I'd like to take a look. Thanks for sharing!
Good!
Bump.
Super bump.
Many thanks.
Thank you very much.
Thanks to the OP for sharing.
Great book! Thanks to the OP for sharing it!
Great book. Thanks, OP.
Having read the OP's post, I just want to say: very good and very impressive!
Taking a look... thanks for sharing.