After complex NTA data have been reduced to meaningful results using Data Processing & Analysis tools, those results should be presented as clearly communicated data outputs with supporting evidence. Here, we focus on two key categories of data outputs and provide recommendations regarding the evidence that should be reported: (1) the results of statistical & chemometric analyses, and (2) the results of compound annotation & identification efforts (specifically, communicating the confidence of the annotations/identifications).
Statistical and Chemometric Outputs
Statistical and chemometric analyses of nontargeted data are powerful tools for interpreting complex results and identifying relationships among samples. The precision and accuracy of the statistical approaches should be reported in the study to demonstrate the validity of the approach. These metrics can include the variability of PCA scores (using replicate sample analyses) or the accuracy of categorical analyses assessed using standards or spiked matrix samples. In Table 4.1, we recommend data/outputs that should be reported for specific statistical analyses. However, we note that many (if not most) of the statistical analyses used by NTA researchers are not specific to NTA studies, and the examples provided here are not comprehensive. For basic statistical calculations and analyses (e.g., standard deviation, t-tests), researchers should provide the results of all relevant calculated metrics, with sufficient accompanying interpretation to allow a reader to understand the meaning and importance of the analysis and results.
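For example, the within-replicate spread of PCA scores is one concrete precision metric. The sketch below (Python with NumPy; the data matrix, group labels, and noise level are all hypothetical) computes scores via SVD of the autoscaled data matrix and then the standard deviation of scores within each replicate group:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD of the autoscaled (mean-centered, unit-variance) matrix.

    X: samples x features intensity matrix.
    Returns sample scores for the first n_components.
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # autoscaling
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    return U[:, :n_components] * S[:n_components]      # scores = U * S

def replicate_score_spread(scores, groups):
    """Standard deviation of PCA scores within each replicate group --
    one way to report the precision of a scores plot."""
    groups = np.asarray(groups)
    return {g: scores[groups == g].std(axis=0, ddof=1)
            for g in np.unique(groups)}

# Hypothetical dataset: 3 replicate injections of 2 samples, 50 features.
rng = np.random.default_rng(0)
base = rng.normal(size=(2, 50))
X = np.repeat(base, 3, axis=0) + rng.normal(scale=0.05, size=(6, 50))
scores = pca_scores(X)
spread = replicate_score_spread(scores, ["A"] * 3 + ["B"] * 3)
```

A small within-group spread relative to the between-group separation on the scores plot supports the claim that observed clustering is not driven by analytical variability.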
We also propose questions to consider when evaluating the reporting about statistical analyses in NTA studies. We emphasize that although NTA researchers can use the information in this section to evaluate the quality and performance of their statistical analyses, the focus of the proposed questions is on the quality and completeness of reporting about the analyses and associated assessments, rather than the quality of the approaches and analyses themselves.
If NTA researchers generate new statistical packages or code, we recommend that these be made available through open-source platforms such as GitHub (https://github.com/). Further, researchers should provide clear documentation within their code (covering everything from variable naming to the analytical steps) so that other researchers can interpret and use the code.
Table 4.1 – Results Reporting: Statistical & Chemometric Outputs

Statistical or Chemometric Analysis | Recommended Data to Assess and Report | Proposed Questions for Evaluation
---|---|---
Principal Components Analysis (PCA) | Visual display of the individual sample scores for one or more principal components, and loading plots showing the effect of individual variables (often features) within each principal component. Loading plots may be overlaid on the scores plot as bi-plots (Ngo, 2018) or shown separately. Reported lists of the individual features/samples that contribute most to observed trends. Impact of data pre-processing (e.g., standardization, min-max scaling, or log-scaling) on results. Results specific to the scores plot, such as: ● Description of any distinct (visual) separation or pattern within the scores plot ● Uncertainty about the scores plot ● Precision of replicates within the scores plot (if replicates were used for the PCA) | Did the authors provide visuals and supporting statistics to allow the reader to understand and interpret the results of the PCA? If feature lists were extracted, did the authors provide the complete list(s)? (While it can be difficult to include all of this information, reporting it, likely in supplementary information, is recommended.) Did the authors communicate any uncertainty in the analysis that may impact interpretation of their results, and discuss the possible source of such uncertainty? Did the authors report the steps used in pre-processing (e.g., if/how features were scaled or standardized, whether any features were removed) and the possible impacts of such pre-processing steps on the results (e.g., sensitivity of the results to the scale of the dataset, standardization method, etc.)? |
Non-metric Multidimensional Scaling Analysis (MDS) | Visual display of MDS axes and loading plots. Reported lists of individual features/samples that contribute most to observed trends. Impact of data pre-processing (e.g., standardization, min-max scaling or log-scaling) on results. | Did the authors provide visuals and supporting statistics to allow the reader to understand and interpret the results of the MDS? If feature lists were extracted, did the authors provide the complete list(s)? Did the authors communicate any uncertainty in the analysis that may impact interpretation of their results, and discuss the possible source of such uncertainty? Did the authors report steps used in pre-processing (e.g., if/how features were scaled or standardized, whether any features were removed) and possible impacts of such pre-processing steps on the results (e.g., sensitivity of the results to the scale of the dataset, standardization method, etc.)? |
Partial Least Squares Discriminant Analysis (PLS-DA) | Visual display of the individual sample scores for one or more latent variables, and loading plots showing the effect of individual variables (often features) within each latent variable. Reported lists of individual features/samples that contribute most to observed trends. Impact of data pre-processing (e.g., standardization, min-max scaling or log-scaling) on results. Method and results for assignment of classes within the PLS-DA. Results specific to the scores plot, such as: ● Description of any distinct (visual) separation or pattern within the scores plot; ● Uncertainty about the scores plot; or ● Precision of replicates within the scores plot (if replicates were used for the PLS-DA). | Did the authors provide visuals and supporting statistics to allow the reader to understand and interpret the results of the PLS-DA? If feature lists were extracted, did the authors provide the complete list(s)? Did the authors report the results of any class assignments? Did the authors communicate any uncertainty in the analysis that may impact interpretation of their results, and discuss the possible source of such uncertainty? Did the authors report steps used in pre-processing (e.g., if/how features were scaled or standardized, whether any features were removed) and possible impacts of such pre-processing steps on the results (e.g., sensitivity of the results to the scale of the dataset, standardization method, etc.)? |
Differential Analysis | Using one or more statistical comparisons of the processed data, a visual display of the differences and/or a calculation of statistical similarity/difference (e.g., adjusted p-values). Results of any validation of the differential analysis using standards and/or spiked matrix samples. Demonstration of reproducibility of the results across replicates. | Did the authors communicate whether there was a distinct (visual) separation between the case and control sample sets? Did the authors describe the results of any validation efforts, and communicate any possible biases or errors that were observed? If replicates were used to demonstrate reproducibility of these differences, did the authors communicate the results of the analysis? |
Hierarchical Cluster Analysis | Visual display of cluster analysis. Reported classifications or groupings of features, identifications, or samples. Assessment of whether the linkage method, method to measure distance, or number of selected clusters was appropriate for the dataset. | Did the authors communicate whether sample replicates, QC samples, and/or sample types grouped as expected? Did the authors discuss whether the feature-level clustering provided any evidence that blank correction/removal was insufficient and/or if there were large sections of the HCA that are identical/low-abundance across all samples; and did they describe any subsequent analyses used to correct for such insufficiencies? Did the authors discuss how the number of clusters was selected (e.g., based on a priori information, by use of a supplementary technique such as the silhouette score, etc.) and possible impact on the reported results? Did the authors provide a rationale for their selection of a linkage method and method to measure distance, and evaluate whether the selected methods were appropriate for their dataset? |
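As one illustration of the adjusted p-values recommended for differential analysis in Table 4.1, the Benjamini-Hochberg false-discovery-rate correction can be sketched as follows (plain Python; the raw p-values are hypothetical and assumed to come from an upstream per-feature statistical test):

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg FDR adjustment of a list of raw p-values.

    Returns adjusted p-values (q-values) in the original feature order.
    """
    n = len(pvalues)
    order = sorted(range(n), key=lambda i: pvalues[i])  # indices by ascending p
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity of q-values.
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        q = pvalues[i] * n / rank
        running_min = min(running_min, q)
        adjusted[i] = min(running_min, 1.0)
    return adjusted

# Hypothetical raw p-values from 8 per-feature case-vs-control comparisons.
raw = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205]
adj = benjamini_hochberg(raw)
```

Reporting both the raw and adjusted values (with the correction method named) lets readers judge how many "significant" features survive multiple-testing control.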
Identification and Confidence Levels
Definitions for annotation, identification, compound databases, and spectral libraries (and additional information on these concepts) can be found in the section on Annotation & Identification.
To report the results of annotation and identification efforts, the evidence used to attribute molecular information, properties, and/or identities to specific observed mz@RTs and/or features should be presented to communicate the confidence of identification. Confidence of identification is closely tied to the concept of identification scope; for example, an identification narrowed to a single isomer carries a higher confidence than one whose scope spans all possible isomers.
Levels, scales, and/or scoring systems can be used to clearly communicate confidence, with specific types of annotated properties or molecular information provided as supporting evidence. Examples of properties/molecular information, recommended supporting data to report, and proposed questions to evaluate the clarity and completeness of reporting on the annotation & identification results are provided in Table 4.2.
There are numerous examples of discrete confidence levels for compound identification, such as the levels created by Schymanski et al. (2014) and Sumner et al. (2007). Typically, confidence of compound identification is communicated by clearly listing the annotated properties that led to the identification.
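A simplified sketch of such a scheme, loosely modeled on the Schymanski et al. (2014) levels, might map reported evidence to a discrete level. The evidence labels below are illustrative (not part of any published scheme), and real-level assignment requires expert review of the underlying data:

```python
def confidence_level(evidence):
    """Assign a simplified Schymanski-style confidence level (1-5) from a
    set of evidence labels. Illustrative only; actual assignment requires
    expert judgment on the underlying spectra and data.
    """
    if "reference_standard_match" in evidence:
        return 1   # confirmed structure (RT, MS, MS/MS vs. authentic standard)
    if "library_spectrum_match" in evidence or "diagnostic_fragments" in evidence:
        return 2   # probable structure
    if "tentative_candidates" in evidence:
        return 3   # tentative candidate(s)
    if "molecular_formula" in evidence:
        return 4   # unequivocal molecular formula
    return 5       # exact mass of interest only

# A feature with a formula and diagnostic fragment evidence, but no
# standard or library match, would sit at "probable structure".
level = confidence_level({"molecular_formula", "diagnostic_fragments"})
```

Listing the evidence set alongside the assigned level, rather than the level alone, lets readers audit each assignment.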
Table 4.2 – Results Reporting: Annotation & Identification

Annotated Property / Molecular Information | Recommended Annotation Data to Assess and Report | Proposed Questions for Evaluation
---|---|---
(Pseudo)molecular ion m/z | Mass-to-charge ratio (m/z) for the intact molecular ion of the feature, with instrumental accuracy (in ppm). Observed adduct(s) (e.g., [M]+, [M+Na]+, [M-H]-) for the precursor ion. | Did the authors report the measured m/z (with mass accuracy in ppm) and observed adduct(s) for the precursor ion? |
Molecular formula for intact molecule and for fragments/neutral losses | Molecular formula (or formulas) for the feature and/or individual fragment ions or neutral loss masses with error (in ppm) from the intact molecular ion. Associated information to consider: ● Is the error (in ppm) of the predicted formula within the instrument accuracy? ● Does the fragment formula fit within the bounds of the intact molecular formula? ● Was the isotopic distribution considered when predicting the molecular formula (see next row)? | Did the authors report the mass error (in ppm) of the predicted formula? Did the authors provide a comparison between the error (in ppm) of the predicted formula relative to the instrument accuracy? Did the authors evaluate and report whether the fragment formula(s) fit within the bounds of the intact molecular formula? |
Elemental composition from isotopic distribution | Match score for the measured isotopic distribution to the predicted isotopic distribution based on the expected molecular formula. The number of isotopic peaks used to calculate the match score (if possible to determine using the software). | Did the authors report any element or number boundaries used to predict elemental composition/formula? Did the authors report the number of observed isotopic peaks and the match score for the measured vs. predicted isotopic distribution? Did the authors consider/discuss whether the method for calculating the isotopic distribution and/or match score impacted their ability to predict a formula? |
Functional Groups | List of functional groups associated with the feature and supporting data (fragment m/z, neutral loss mass, etc.). | Did the authors report the identified functional groups, and state which fragment ion was used to identify the specific functional group? |
Chemical Class | List of chemical classes associated with the feature. | Did the authors report the chemical class(es) and provide a description/discussion of their rationale for the assignment? If screening tools (such as Kendrick Mass Defect) were used to group known chemical class members with the feature, did the authors discuss their ability to verify chemical class membership for standards or known compounds? |
Reference mass spectrum match | Score for the degree of similarity between unknown compound mass spectrum and a reference mass spectrum in a library or database. | Did the authors report the similarity score, and communicate the associated confidence? Did the authors report the type of similarity score used? |
in silico mass spectrum match | Score for the degree of similarity between unknown compound mass spectrum and an in silico generated mass spectrum for the suspected compound. | Did the authors report the similarity score, and communicate the associated confidence? If the in silico approach was validated within the study, did the authors report on and discuss the results of the validation? |
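Several rows in Table 4.2 call for mass error in ppm relative to instrument accuracy; the calculation can be sketched as follows (the observed m/z and the 5 ppm tolerance are hypothetical; the theoretical [M+H]+ m/z shown is that of caffeine, C8H10N4O2):

```python
def ppm_error(observed_mz, theoretical_mz):
    """Mass error in parts-per-million between an observed m/z and the
    theoretical m/z of a candidate formula/adduct."""
    return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

# Hypothetical observation vs. the theoretical [M+H]+ m/z of caffeine.
obs, theo = 195.0880, 195.0877
err = ppm_error(obs, theo)
within_tolerance = abs(err) <= 5.0  # e.g., a 5 ppm instrument accuracy window
```

Reporting the error alongside the instrument's stated accuracy (rather than the error alone) lets readers judge whether a formula assignment is plausible.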
A compound database or spectral library (defined in Annotation & Identification) may itself be a data output of a study, although the interpretation of, and conclusions drawn from, database or library searching results depend on the properties of the database or library. To clearly communicate these results, Table 4.3 presents information to consider when reporting results from and/or sharing the contents of a database or library.
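The similarity scores used for the spectrum-match rows of Table 4.2 are often cosine (dot-product) scores between peak lists. A minimal sketch, assuming centroided spectra represented as m/z-to-intensity dicts and a simple greedy peak pairing (not a full library-search algorithm; the spectra below are hypothetical):

```python
import math

def cosine_similarity(spec_a, spec_b, tol=0.01):
    """Cosine similarity between two centroided spectra given as
    {m/z: intensity} dicts, matching peaks within an absolute m/z
    tolerance via a simple greedy pairing."""
    matched, used = [], set()
    for mz_a, int_a in spec_a.items():
        best = None
        for mz_b in spec_b:
            if mz_b in used or abs(mz_a - mz_b) > tol:
                continue
            if best is None or abs(mz_a - mz_b) < abs(mz_a - best):
                best = mz_b
        if best is not None:
            used.add(best)
            matched.append((int_a, spec_b[best]))
    dot = sum(a * b for a, b in matched)
    norm = (math.sqrt(sum(i * i for i in spec_a.values()))
            * math.sqrt(sum(i * i for i in spec_b.values())))
    return dot / norm if norm else 0.0

# Hypothetical query and reference spectra.
query = {91.0542: 100.0, 65.0386: 20.0, 39.0229: 10.0}
reference = {91.0545: 95.0, 65.0388: 25.0, 51.0229: 5.0}
score = cosine_similarity(query, reference)
```

Because many score variants exist (raw cosine, intensity-weighted, scaled to 0-1000, etc.), the type of score and its range should be reported along with the value.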
Table 4.3 – Results Reporting: Compound Database or Spectral Library

Feature | Recommended Considerations and Questions for Evaluation
---|---
Instrumental conditions | Inclusion of instrumental conditions is highly recommended when sharing a spectral library; conditions should be mapped to spectra. Minimum conditions to include are separation type, ionization type, and ion analyzer type. Find additional details in Data Acquisition and example requirements from a spectral repository (MassBank) at: https://github.com/MassBank/MassBank-web/blob/main/Documentation/MassBankRecordFormat.md |
Curation Level | A high-level description of the curation level of the spectra in a library and/or the chemical structure information in a database should be provided when a compound database or spectral library is a data output. For compounds and chemical structures: ● Was any standardization applied? ● What compound and structural identifiers were reviewed and mapped? For spectra: ● Were spectra recalibrated? ● Were noise and/or base peak thresholds applied? |
Output location | Where is the database or library output going to be stored? Will it be shared publicly or with limited permissions? |
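One lightweight way to keep the minimum instrumental conditions and curation flags from Table 4.3 mapped to each shared spectrum is a structured per-spectrum record. The field names and values below are illustrative only and do not follow the MassBank record format linked above:

```python
import json

# Illustrative per-spectrum record: the minimum instrumental conditions
# (separation, ionization, and ion analyzer type) plus curation flags,
# stored alongside the peak list so conditions travel with the spectrum.
record = {
    "compound": "hypothetical analyte",
    "conditions": {
        "separation_type": "reversed-phase LC",
        "ionization_type": "ESI+",
        "ion_analyzer_type": "Q-TOF",
    },
    "curation": {"recalibrated": True, "noise_threshold_applied": True},
    "peaks": [[91.0542, 100.0], [65.0386, 20.0]],
}
serialized = json.dumps(record, indent=2)
```

A plain-text serialization like this keeps the library usable even where a formal repository format is not adopted, though depositing in an established repository remains preferable.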
References & Other Relevant Literature
Ngo, L. (2018, June 18). How to read PCA biplots and scree plots. Retrieved from https://blog.bioturing.com/2018/06/18/how-to-read-pca-biplots-and-scree-plots/
Schymanski, E. L., Jeon, J., Gulde, R., Fenner, K., Ruff, M., Singer, H. P., & Hollender, J. (2014). Identifying small molecules via high resolution mass spectrometry: Communicating confidence. Environmental Science & Technology, 48(4), 2097-2098. doi:10.1021/es5002105
Sumner, L. W., Amberg, A., Barrett, D., Beale, M. H., Beger, R., Daykin, C. A., . . . Viant, M. R. (2007). Proposed minimum reporting standards for chemical analysis Chemical Analysis Working Group (CAWG) Metabolomics Standards Initiative (MSI). Metabolomics, 3(3), 211-221. doi:10.1007/s11306-007-0082-2