Abstract: One of the most popular means of addressing premature declaration of convergence due to low error gradients in parameter identification is Tikhonov regularisation. Tikhonov regularisation utilises additional a priori information to modify the objective surface, and thus is not a true least-squares approach. The Dimensional Reduction Method is a framework that can be placed around existing parameter identification approaches, restricting iteration to hyperplanes where low error gradients exist, thus allowing iteration within these regions without modification of the objective surface. A comparison of the ability of the two methods to accurately identify parameters of the highly non-linear pulmonary recruitment model was undertaken. The Dimensional Reduction Method produced lower residuals in the majority of cases, and statistically significant improvements in both parameter error and model residuals. Additionally, the two approaches can be implemented simultaneously, which further improved model residuals and parameter errors, demonstrating the modularity of the two approaches and that each addresses ill-posed problems in a distinct manner. Overall, this provides a strong case for the advantages of the Dimensional Reduction Method over current parameter identification methods designed to address ill-posed problems.
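As background (not drawn from the study itself), Tikhonov regularisation is commonly written as a least-squares objective augmented with a penalty towards an a priori parameter estimate; a generic form, with prior estimate $\theta_0$ and regularisation weight $\lambda$ assumed here purely for illustration, is:

$$
\hat{\theta} = \arg\min_{\theta} \; \left\| y - f(\theta) \right\|_2^2 + \lambda \left\| \theta - \theta_0 \right\|_2^2
$$

The penalty term is what modifies the objective surface away from a pure least-squares criterion; by contrast, the Dimensional Reduction Method described above leaves the objective unchanged and instead restricts which directions in parameter space are iterated over.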