Near Earth Objects Cover-Up
What’s Wrong with NEOWISE
In a 2016 preprint and two papers in the journal Icarus — “Asteroid thermal modeling in the presence of reflected sunlight,” published online in December 2017, and “An empirical examination of WISE/NEOWISE asteroid analysis and results,” published in June 2018 — I raised a number of scientific, methodological, and ethical problems with the NEOWISE asteroid research project and its published papers and results. In this article, I summarize the problems less formally than in the papers, and I explain how anyone can verify the scientific misconduct that I found. Unlike in a formal research paper, I can also include the NEOWISE group’s responses, which similarly simple methods show to be specious. This is a follow-up to my earlier article “A Simple Guide to NEOWISE Problems,” which I posted to Medium two years ago.
The NEOWISE research project was funded by NASA and based at its Jet Propulsion Laboratory (JPL). The project used infrared (IR) observations of asteroids made by the NASA WISE spacecraft mission to estimate the physical properties of asteroids: chiefly the diameter D, but also the albedo (reflectivity) in visible light pv, and the albedo in IR light pIR. These parameters are estimated by fitting a model (i.e., by fitting a curve) to the observational data, which in the case of NEOWISE is the amount of IR light flux in each of four bands W1, W2, W3, W4, each of which measures different wavelengths of IR light.
NEOWISE observed vastly more asteroids in the IR than all previous studies combined. No currently scheduled or proposed mission will observe a comparable number of asteroids in four IR bands, so the WISE/NEOWISE data set is the best information on asteroid sizes and albedos that science will have for many years. As a result, it ought to be a treasure trove of information about asteroids that is widely used by the community with full confidence in its reliability and a full understanding of how the numerical results were derived.
In order to assess the accuracy of the asteroid diameters that NEOWISE estimated, the established method is to compare the diameters obtained by thermal modeling to diameters determined independently for the same asteroids by other methods — notably by bouncing radar beams off them, by observing them from spacecraft, and by timing the dip in starlight that occurs when an asteroid passes directly in front of a star (a so-called occultation). For simplicity, let’s refer to these gold-standard radar, occultation, and spacecraft measurements as ROS diameters. The NEOWISE group did compare model diameters to ROS diameters in this way in a preliminary calibration paper in which they analyzed and criticized previous work, including estimates made using the IRAS space telescope and estimates published by Ryan and Woodward. But if the NEOWISE team ever performed this obvious, reliable test for accuracy on their own results, they didn’t publish the outcome. Instead, they claimed great accuracy without much support, and they simply copied the ROS diameters out of previous papers by others and presented them as NEOWISE modeling results, making it impossible for third parties to check the actual accuracy.
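To make the comparison concrete, here is a minimal sketch of how model-fit diameters can be checked against ROS diameters for the same asteroids. The function name and the tiny sample data are my own illustration, not NEOWISE code; a real analysis would use hundreds of asteroids and a more careful treatment of measurement uncertainties.

```python
import statistics

def accuracy_summary(model_d, ros_d):
    """Compare model-fit diameters against independent radar/occultation/
    spacecraft (ROS) diameters for the same asteroids.  Returns the mean
    fractional deviation (a bias estimate) and its standard deviation
    (a scatter, i.e. accuracy, estimate)."""
    deviations = [m / r - 1.0 for m, r in zip(model_d, ros_d)]
    return statistics.mean(deviations), statistics.stdev(deviations)

# Hypothetical example: three asteroids, model and ROS diameters in km
bias, scatter = accuracy_summary([10.5, 48.0, 205.0], [10.0, 50.0, 200.0])
```

The point of the test is precisely that it requires genuinely independent numbers in both lists; copying ROS diameters into the model-diameter column drives the apparent scatter to zero and renders the comparison meaningless.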
The problems I have identified with NEOWISE can be divided into three categories:
- scientific errors in the NEOWISE papers published from 2011 through 2014,
- scientific misconduct evident in those papers,
- and further misconduct, apparently to cover up the issues, which occurred after I first pointed out these problems in a preprint article in 2016.
Virtually all scientists abhor the idea of misconduct, so I do not make this charge lightly. What I mean by “misconduct” are actions that violate common standards of the ethical practice of science. These include ethical guidelines for statistical practice (e.g., guidelines adopted by the American Statistical Association), as well as the ethical standards of scientific journals and institutions. NASA has its own regulations about research misconduct. The most basic of these rules can be summarized very simply:
- don’t present other people’s work as your own,
- describe the methods used to get the results so others can replicate them,
- and avoid anything that is misleading or deceptive to readers.
It’s important to keep in mind that misconduct of these kinds can and does occur unintentionally, just as customers sometimes inadvertently take goods out of stores without paying. That’s still shoplifting, even if the intent was not to steal, and it’s wrong. Similarly, publishing misleading information in a scientific paper is wrong, regardless of what led to it. The appropriate response for an author is to publicly acknowledge and correct the misstep immediately, retracting the paper if the misconduct affected the results in a material way.
As it happens, my recent papers and other publications have documented strong evidence — including statements by some NEOWISE researchers themselves — that the issues I am calling misconduct in the NEOWISE papers were not inadvertent. They appear to have been deliberate choices made repeatedly by the NEOWISE team over a long period of time.
These actions have caused the astronomical community to work under the false belief that the NEOWISE results are more accurate (have smaller errors) than the evidence warrants. They have also allowed the NEOWISE group to effectively monopolize the use of the asteroid diameter and albedo data, which is required by both NASA policy and scientific norms to be shared openly with the scientific community.
Since I first brought these issues to light, the NEOWISE group has reacted extremely deceptively. To return to the shoplifting example, it’s like they claimed they did pay when confronted, and then bolted out the door. Had the NEOWISE team shown any good intentions whatsoever in this matter when I initially raised the issues, I would now be pursuing this matter quite differently. But further acts of misconduct seem to have been committed to cover up the original issues, suggesting that at least some of the researchers involved have been acting in bad faith.
Below I summarize the problems briefly. I then provide more detailed explanation and evidence for each in turn. While the scientific errors require some detailed arguments and calculations (which can be found in my two Icarus papers), most of the misconduct problems — including the worst ones — can be easily verified by anybody; instructions below show how. Those matters generally don’t require detailed scientific knowledge or subjective judgment. All one need do is compare two numbers or two sorted lists to see whether or not they match up.
Scientific and methodological errors (2011 to 2014)
The worst examples of scientific error in the NEOWISE papers, all quite serious, are listed below. My two Icarus papers and the Notes section below discuss them in more detail.
1. Violation of Kirchhoff’s law of thermal radiation. The NEOWISE analysis violates this fundamental and simple law, which is taught in every freshman physics course.
2. Incorrect albedos and diameters. A simple formula defines the mathematical relationship among visible-band albedo, absolute visible magnitude, and diameter. This formula is listed in each of the NEOWISE result papers. Yet about 14,000 of the NEOWISE results violate the formula.
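This consistency check can be run mechanically. The sketch below (my own illustration) encodes the standard relation D = (1329 km / √pv) × 10^(−H/5), where H is the absolute visible magnitude, and tests whether a tabulated (D, H, pv) triple satisfies it; the 10% tolerance is an arbitrary choice for illustration.

```python
import math

def diameter_km(H, p_v):
    """Standard asteroid relation: D = 1329 / sqrt(p_v) * 10**(-H/5), in km."""
    return 1329.0 / math.sqrt(p_v) * 10 ** (-H / 5.0)

def is_consistent(D, H, p_v, tol=0.10):
    """Check whether a tabulated (D, H, p_v) triple satisfies the relation
    to within a fractional tolerance (10% here, an illustrative choice)."""
    expected = diameter_km(H, p_v)
    return abs(D - expected) / expected <= tol

# Example: H = 15, p_v = 0.2 gives D of roughly 3 km
```

Running a loop of this kind over the published NEOWISE result tables is how the roughly 14,000 violations can be counted.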
3. No proper error analysis. A basic part of any scientific study of this kind is to analyze the various sources of error in the results — that is called error analysis. In addition, one would like to compare new results to those previously obtained by using other methods, which could be called accuracy analysis. The NEOWISE error analysis is very inadequate (more on this below). The accuracy analysis is essentially non-existent.
4. Nonstandard and unjustified data analysis. The NEOWISE data analysis relied heavily on nonstandard methods of several kinds. Some of these methods involved discarding much of the observational data. The NEOWISE papers give no valid reasons for discarding that much data.
5. Exceptions to data-processing rules. In multiple instances, the NEOWISE papers describe an approach to data analysis — for example, an assumption that they employ for a group of asteroids — that upon inspection turns out not to have been reliably followed. In addition to using many ad hoc rules, NEOWISE seems to have broken many of these rules in numerous undocumented exceptions.
6. Underestimated observational errors. A fundamental input to the NEOWISE analysis is the estimated observational error in flux. Hanus et al. (2015) showed that the WISE/NEOWISE analysis systematically underestimated the errors. They found that the true errors were at least 40% larger than claimed in the W3 band and 30% larger in the W4 band.
My paper corroborates those findings and, using a much larger sample of WISE data, shows that the errors were 150% larger in the W1 band and 50% larger in the W2 band than previously claimed. This finding invalidates the NEOWISE error analysis and has an effect on every NEOWISE result.
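One simple way to test quoted flux errors, sketched below with hypothetical numbers of my own invention, is to compare the empirical scatter of repeated observations of the same source against the quoted per-measurement uncertainty. (This is a deliberate simplification: real asteroids rotate and vary in brightness, so an actual analysis must first account for light-curve variation.)

```python
import statistics

def error_underestimate_factor(fluxes, quoted_sigma):
    """For repeated observations of a non-varying source, the empirical
    standard deviation of the measurements should roughly match the quoted
    per-measurement uncertainty.  A ratio well above 1 indicates the
    quoted errors are underestimated."""
    return statistics.stdev(fluxes) / quoted_sigma

# Hypothetical repeated fluxes with a quoted sigma of 0.1 flux units
factor = error_underestimate_factor([10.0, 10.3, 9.8, 10.25, 9.7], 0.1)
```

A factor of roughly 2.5, as in this made-up sample, would mean the true errors are about 150% larger than claimed.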
7. Poor-quality model fits. Many NEOWISE results fit the data very poorly. In some cases, the fit is so bad that the result appears to have been effectively fabricated. In other cases, the fits are unnecessarily poor.
Deception and misconduct, part I: The original papers, 2011 to 2014
The initial round of NEOWISE papers contains numerous breaches of the normal rules of science. The worst offenses are listed below.
8. Irreproducible results. The NEOWISE papers do not describe their methods in sufficient detail to allow them to be replicated (i.e., to perform the same calculations on observational data and get the same results). NEOWISE team leaders have consistently refused to explain these details to any researchers outside the NEOWISE group.
This is a clear violation of the normal practice of science. It is also a violation of NASA and JPL rules regarding disclosure of relevant science to the scientific community. One can speculate on at least two possible motives for the secrecy: it could be to prevent discovery of other problems discussed below, and it might also be intended to allow the NEOWISE group to monopolize the data set so that they can dominate the field in this area.
9. Exaggerated accuracy. The NEOWISE papers repeatedly and systematically exaggerate the level of accuracy in their diameter estimates. The initial NEOWISE paper on accuracy claims (erroneously, as it turns out) that the minimum systematic error is 10%. Subsequent papers make far more aggressive claims, however, such as “Using a NEATM thermal model fitting routine, we compute diameters for over 100,000 main belt asteroids from their IR thermal flux, with errors better than 10%” (Masiero et al. 2011). Yet that claim is not supported by independent analysis; instead, they simply reference the earlier paper, which claims a minimum error of 10%. It is unethical to make unsubstantiated accuracy claims and to cite papers in a deceptive manner.
10. Conflating accuracy across multiple models and data. The NEOWISE papers use 10 different models and 12 different combinations of data from the W1 through W4 bands, with 47 different combinations of models and bands. Each combination ought to have different accuracy and error properties, yet the information as to which result was calculated with each combination was not disclosed until 2016. It is deceptive to claim an overall accuracy based on a best-case model and to imply that this is typical of all of the model/data combinations.
11. Copied ROS diameters presented as NEOWISE results. In more than 100 cases, the NEOWISE group intermingled previously published ROS diameters for asteroids in tables of NEOWISE model-fit diameters, without explaining that they were doing so or referencing the sources of the copied diameters. That is plagiarism. It also created a false impression that NEOWISE had excellent accuracy, and it prevented any third party from making their own assessment of the accuracy.
12. Unfair criticism of work by others. One NEOWISE paper (Mainzer et al. 2011c) compared ROS asteroid diameters to model-fit diameters from two other research projects (IRAS and Ryan and Woodward). Based on this comparison, the NEOWISE group criticized those earlier studies as “biased” and argued that NEOWISE is superior. Yet they failed to publish the same comparison for NEOWISE diameters. Furthermore, by copying the same ROS diameters into their tables and misrepresenting those measurements as model fits, they made it impossible for third parties to make the comparison themselves.
Adding insult to injury, the group has made it impossible to replicate their calculations by refusing direct requests to share required details. It is unethical for the NEOWISE team to criticize other scientists’ results on the basis of a metric that they did not apply to their own work and rendered impossible for others to apply to their results.
13. Fabricated results. The NEOWISE model curves completely miss all of the data points they claim to fit in 30% to 50% of cases, depending on band and model combination. In effect, these results are fabricated — they clearly do not depend on the data and instead are an artifact of the nonstandard analytical approaches the group used. These results were nevertheless presented as best fits to the data.
Beyond the complete misses, many other curves are very poor fits. The NEOWISE team must have known this; a failure to disclose such a serious issue is unethical. It should go without saying that the quality of model fits is an absolutely essential part of the presentation of results from a study based on model fitting.
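One way to operationalize a fit that “completely misses” its data is the check sketched below. This is my own illustrative formulation, not the exact criterion used in my Icarus papers; the threshold of three error bars is an arbitrary choice for the example.

```python
def misses_all_points(model_flux, obs_flux, sigma, k=3.0):
    """Return True if every observed point lies more than k error bars
    from the model curve -- one way to flag a 'fit' that cannot have
    been constrained by the data it claims to fit.  The threshold
    k = 3 is an illustrative choice."""
    return all(abs(m - o) > k * s
               for m, o, s in zip(model_flux, obs_flux, sigma))

# Hypothetical fluxes: here the model is far outside every error bar
complete_miss = misses_all_points([1.0, 2.0, 3.0], [2.0, 3.0, 4.0],
                                  [0.1, 0.1, 0.1])
```

Applying a check like this to each published fit is what yields the 30% to 50% complete-miss rates cited above.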
Deception and misconduct, part II: The cover-up, 2016 to present
From June 2015 through May 2016, I made the NEOWISE group aware of the Kirchhoff’s law issue. Amy Mainzer replied that I was confused; after that, I received no further comments from her. Undeterred, I periodically sent drafts and updates of my findings to Mainzer, as well as to Edward Wright at UCLA (who is affiliated with NEOWISE) and to Tom Statler of NASA (who is not part of NEOWISE). Wright and Statler did reply to some emails, but they were unable to answer most of my questions.
During this period, I repeatedly sought comments or clarifications. The NEOWISE group had every opportunity to clarify, explain, or show me that I was wrong. Instead they simply refused to answer.
On May 20, 2016, I posted a draft manuscript on arXiv.org (https://arxiv.org/abs/1605.06490v2). This preprint service is commonly used by scientists, especially in physics and astronomy, as a way to get comments and feedback from the scientific community on preliminary results prior to submission for formal peer review.
It was then clear to me that NEOWISE would never respond to private questions. My hope in releasing the manuscript was that it would yield some answers to the questions I had posed to the NEOWISE group — either from them or from others in the field. This preprint, and an earlier paper of mine, were covered by the New York Times on May 24, 2016, and subsequently by other media outlets.