
TotalFitAIC

This class provides methods that take objects of the TotalFit class (saved as fitting sessions) in which the data were fit with different models, and compare the results.

NOTE: There are some theoretical aspects here that need clarification beyond the original publications. We use a WEIGHTED sum of squares instead of the plain sum of squares required by the original test, because dataset priorities re-weight the contributions of different pieces of the data to the final sum of squared residuals. The number of points from these datasets (the n outside of the log()) probably also has to be weighted by these priorities, but that needs a theoretical proof.
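For orientation, here is a minimal sketch (not part of the toolbox) of the standard least-squares form of AICc; whether the weighted sum of squares and a priority-weighted point count can be substituted directly into it is exactly the open question above.

% Minimal sketch of the standard least-squares AICc (assumed form, for reference only)
% n  - total number of data points (here: possibly priority-weighted, see note above)
% k  - number of fitted parameters (conventions differ on whether k also counts
%      the estimated residual variance)
% SS - (weighted) sum of squared residuals
AICc = n*log(SS/n) + 2*k + 2*k*(k+1)/(n - k - 1);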

Intuitively, it should not make a difference, because both compared models utilize the same data and the same formula for the sum of squares. However, the sensitivity of the test is likely to be unequal with respect to the contributions of different datasets. The most accurate results will probably be achieved if one (1) does not use priorities and (2) includes datasets of similar total size.

Therefore, if you fit only a set of line shapes, the AICc calculation will be accurate. But if you mix in CPMG or ITC data, which have far FEWER points than line shapes (which are normally oversampled), you will probably bias model selection toward what the line shapes dictate. In such cases the recipe is to remove any additional zero-filling from the spectral data and keep baseline regions to a minimum: all to reduce the total number of points in the line shape datasets and to balance their size against the data sizes of the other types in the analysis.

NOTE 2: if any dataset has its priority set to 0 to exclude it from fitting, its points are also NOT COUNTED.

COMPUTING AICc
To compute AICc, run the compute_AICc(dataset) method. The method returns a text report and the AICc value.
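A minimal usage sketch, assuming the saved fitting session has been loaded into a variable named fit_model_A (a hypothetical name) and that the outputs come back in the order stated above (text report first, then the AICc value):

% Compute AICc for one fitting session (static call, see Methods below)
[report, AICc_value] = TotalFitAIC.compute_AICc(fit_model_A);
disp(report);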

COMPARING TWO FITTING RESULTS
To compare the results of fitting two identical TotalFit objects with different models, call the compare_datasets() method with these two datasets as parameters. The method returns a text report on the comparison and the value of the Evidence Ratio. ER > 1 favors the first of the two models.
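A minimal usage sketch with hypothetical variable names; in the standard AICc framework the evidence ratio between two models is exp((AICc_2 - AICc_1)/2), so ER > 1 corresponds to the first model having the lower AICc, but check TotalFitAIC.m for the exact definition used here.

% Compare two fitting sessions of the same data, fit with different models
[report, ER] = TotalFitAIC.compare_datasets(fit_model_A, fit_model_B);
disp(report);
% ER > 1 favors fit_model_A (the first argument)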

 

Methods

These are static methods, so they may be called directly using the class name.

For more information see TotalFitAIC.m

Example of application: code/Control_scripts_archive/hypothesis_testing.m