Transformer and Clouds

Critical Analysis:

Downward Longwave Radiation Modelling and Climate Model Reliability

Executive Summary

A 2025 study on oceanic downward longwave radiation (Peng et al., Atmospheric Measurement Techniques) reveals fundamental problems with how climate models estimate one of the most critical components of Earth's energy budget. The findings have profound implications for climate model reliability and the certainty with which climate predictions are presented.

Part 1: What the Study Found

1.1 The Magnitude of the Problem

Existing Model Errors:

  • Eight commonly used models for estimating downward longwave radiation at ocean surfaces
  • Root Mean Square Errors (RMSE) ranging from 13 to 19 W/m² at hourly scales
  • Daily scale errors: 8-13 W/m²
  • Even the NEW improved model: 10-16 W/m² error

What This Means: The ocean-surface downward longwave radiation is approximately 300-450 W/m², meaning existing models have errors of 3-6% for this single component.
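That relative-error figure can be checked directly; the arithmetic is simply the error range divided by the flux range (Python used here purely as illustration):

```python
# Relative error implied by the RMSE and flux ranges quoted above.
rmse_lo, rmse_hi = 13.0, 19.0   # hourly RMSE of existing models (W/m^2)
rl_lo, rl_hi = 300.0, 450.0     # typical ocean-surface downward longwave (W/m^2)

best = rmse_lo / rl_hi * 100    # smallest error against the largest flux
worst = rmse_hi / rl_lo * 100   # largest error against the smallest flux
print(f"relative error: {best:.1f}% to {worst:.1f}%")  # -> 2.9% to 6.3%
```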

1.2 Why Downward Longwave Radiation Matters

Critical Role in Energy Budget: From the paper: "The ocean-surface downward longwave radiation (Rl) is one of the most fundamental components of the radiative energy balance, and it has a remarkable influence on air–sea interactions."

Scale of Influence:

  • Primary driver of ocean heat uptake
  • Controls evaporation rates
  • Influences ocean circulation
  • Determines atmospheric humidity feedback
  • Critical for "climate sensitivity" calculations

1.3 The Measurement Reality Check

Station Quality Issues: The study used 65 moored buoys worldwide (1988-2019), but acknowledged:

  • Most buoys in tropical seas
  • Very few at high latitudes
  • Sparse coverage overall
  • Regional atmospheric conditions vary significantly

Model Training Problems:

  • Models trained primarily on tropical ocean data
  • Performance degraded at mid-to-high latitudes
  • Atmospheric boundary layer differences cause systematic errors
  • Some locations showed consistent overestimation of 30-50 W/m²

Part 2: What They Got Wrong (And What This Reveals)

2.1 Cloud Parameter Problems

What Previous Models Used:

  • Cloud cover fraction only
  • Simple linear relationships
  • Assumed cloud cover adequately characterizes cloud radiative effects

What Actually Matters: The new model needed:

  • Cloud cover fraction (C)
  • Total column cloud liquid water (clw)
  • Total column cloud ice water (ciw)
  • Air temperature (Ta)
  • Relative humidity (RH)

The Implication: Cloud cover alone is INSUFFICIENT to characterize cloud radiative effects. Yet most climate models use simplified cloud parameterizations.

2.2 The Day/Night Asymmetry

Key Finding: Model accuracy during NIGHTTIME was significantly worse than daytime:

  • Daytime RMSE: ~12-15 W/m²
  • Nighttime RMSE: ~14-17 W/m²

Why This Matters:

  • Nighttime = no solar radiation complicating the picture
  • Should be EASIER to model (fewer variables)
  • Yet errors are LARGER
  • Suggests fundamental misunderstanding of atmospheric radiative transfer

The Question: If we can't accurately model the simpler nighttime case, how can we trust daytime models with solar radiation included?

2.3 Systematic Regional Biases

Specific Problem Sites:

UOP_SMILE88 (Northern California shelf):

  • Influenced by air temperature inversions
  • ALL models systematically overestimated by 20-40 W/m²
  • Atmospheric boundary layer differs from open ocean

UOP_SUB_NW (Eastern Azores):

  • Near anticyclone system
  • Different atmospheric conditions
  • Similar systematic overestimation

The Critical Point: Models trained on one atmospheric regime FAIL in different regimes. Climate models must handle ALL atmospheric regimes globally.

Part 3: Implications for Climate Models

3.1 The Energy Budget Problem

The IPCC's Required Accuracy: From the paper: "According to Wang and Liang (2009b), the uncertainty in the ocean-surface Rl estimation should be less than 10 W/m² for climate diagnostic studies."

Actual Model Performance:

  • Best existing models: 13-19 W/m² error
  • NEW improved model: 10-16 W/m² error
  • Some regions: 30-50 W/m² error

NONE of the existing models meet the required accuracy threshold consistently.

3.2 The Forcing vs. Error Comparison

Claimed CO₂ Forcing:

  • Doubling CO₂ (280→560 ppm): ~3.7 W/m² forcing
  • Current CO₂ increase (280→420 ppm): ~2.1 W/m² forcing

Downward Longwave Radiation Modeling Error:

  • Typical model error: 10-19 W/m²
  • CO₂ forcing signal: 2.1 W/m²

The Problem: The modeling error for ONE component of the energy budget is 5-9 times larger than the entire CO₂ forcing signal climate models are trying to detect.
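The forcing figures above come from the widely used simplified expression ΔF = 5.35·ln(C/C₀) (Myhre et al., 1998), which gives ≈2.2 W/m² for 420 ppm, in line with the ~2.1 quoted above. A minimal sketch of the comparison:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (W/m^2): dF = 5.35 * ln(C/C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

doubling = co2_forcing(560.0)  # ~3.7 W/m^2
current = co2_forcing(420.0)   # ~2.2 W/m^2
print(f"doubling: {doubling:.2f} W/m^2, current: {current:.2f} W/m^2")

# Ratio of the quoted 10-19 W/m^2 modelling error to the current signal:
print(f"error/signal: {10 / current:.1f}x to {19 / current:.1f}x")
```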

3.3 Cloud Feedback Uncertainty

What This Study Reveals:

  • Cloud liquid water content matters
  • Cloud ice water content matters
  • Cloud base height matters (proxied by liquid/ice water)
  • Cloud cover alone is insufficient

What Climate Models Do:

  • Use parameterized cloud schemes
  • Cannot resolve individual clouds
  • Estimate cloud properties from grid-scale variables
  • Assume relationships hold across all conditions

The Uncertainty Cascade:

  1. Models estimate cloud cover: ±10-20% uncertainty
  2. Models estimate cloud water content: ±30-50% uncertainty
  3. Models estimate cloud radiative effects: ±10-20 W/m² uncertainty
  4. Models estimate climate sensitivity: ±1.5-4.5°C uncertainty

Each uncertainty compounds the next.
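One crude way to see the compounding is to combine the relative uncertainties of the first two stages in quadrature. Treating the stages as independent is itself an assumption, and the midpoint values below are taken from the ranges quoted above:

```python
import math

# Midpoint relative uncertainties from the cascade above (assumptions).
cloud_cover = 0.15   # +/-10-20% cloud cover uncertainty
cloud_water = 0.40   # +/-30-50% cloud water content uncertainty

# Root-sum-square combination under the independence assumption.
combined = math.sqrt(cloud_cover**2 + cloud_water**2)
print(f"combined relative uncertainty: +/-{combined:.0%}")  # larger than either input
```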

3.4 Ocean Heat Uptake Miscalculation

Why This Matters:

  • Oceans absorb 90%+ of "excess heat" in climate system
  • Ocean heat uptake rate determines:
    • Rate of surface warming
    • Sea level rise from thermal expansion
    • Ocean circulation changes
    • Marine ecosystem impacts

If Downward Longwave Radiation Wrong:

  • Ocean surface heat flux wrong
  • Ocean heat uptake rate wrong
  • Projections of warming rate wrong
  • Sea level rise projections wrong

The Compounding Problem: Even a 3% error in downward longwave radiation creates 10-15 W/m² uncertainty in ocean heat flux, comparable to total anthropogenic forcing.

Part 4: The Model Validation Illusion

4.1 "Tuning" vs. "Physics"

What the Study Shows:

  • Eight existing models
  • ALL performed better after "recalibration" with new data
  • Some improvements: 20-30 W/m² reduction in error
  • Models with "original coefficients" had errors of 40+ W/m²

What This Means: Models are FITTED to observations, not derived from first principles.

The Climate Model Parallel:

  • Climate models are "tuned" to match historical temperature record
  • Same observations used for tuning AND validation
  • Circular reasoning
  • Doesn't test TRUE predictive ability

4.2 The Geographic Bias Problem

Study Finding:

  • Most buoy data from tropical oceans
  • Models trained primarily on tropical data
  • Model performance DEGRADED at higher latitudes
  • Systematic regional biases appeared

Climate Model Parallel:

  • Most detailed observations from Northern Hemisphere land
  • Sparse data from oceans (70% of Earth's surface)
  • Very sparse data from Southern Hemisphere
  • Antarctica and deep oceans poorly observed

The Question: If radiation models fail when applied to under-sampled regions, what about climate models for the 70% of Earth with sparse observations?

4.3 The Temporal Scale Problem

Study Findings:

  • Hourly models: 15-16 W/m² error
  • Daily models: 10 W/m² error
  • Errors DECREASE with temporal averaging

Why This Matters: Climate models run at timesteps of ~30 minutes, but validate against monthly/annual averages. The averaging HIDES the errors.

The Implication: Models may get average temperature "right" while getting the actual physics WRONG at shorter timescales.
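The averaging effect is easy to demonstrate: random errors shrink roughly as √N when N samples are averaged, while a systematic bias survives intact. A toy simulation (all numbers illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(e):
    return float(np.sqrt(np.mean(e**2)))

# Hypothetical hourly errors: fixed 5 W/m^2 bias plus 14 W/m^2 random noise.
bias, noise_sd = 5.0, 14.0
hourly_err = bias + rng.normal(0.0, noise_sd, 24 * 365)

# Daily means: the random part shrinks ~sqrt(24); the bias does not.
daily_err = hourly_err.reshape(-1, 24).mean(axis=1)

print(f"hourly RMSE: {rmse(hourly_err):.1f} W/m^2")  # ~sqrt(5^2 + 14^2) ~= 14.9
print(f"daily RMSE:  {rmse(daily_err):.1f} W/m^2")   # bias-dominated, ~= 5.8
```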

Part 5: Connecting to Our Previous Discussion

5.1 Assumptive Bias Confirmed

The Paper States: "Most of the commonly used Rl estimation models were originally developed for the land surface and were applied to the ocean surface directly without any alterations by assuming the atmospheric conditions are nearly the same over ocean and land surfaces."

This assumption INCREASES uncertainty, as confirmed by measurements.

Yet climate policy proceeds as if:

  • Models are highly accurate
  • Uncertainty is small
  • Predictions are reliable

5.2 Measurement Quality Issues

The Paper Acknowledges:

  • Sensor placement affects measurements
  • Urban heat island effects on buoys near land
  • Differential heating of sensors
  • Solar contamination of readings
  • Quality control limitations

Sound Familiar? This mirrors Anthony Watts' surface stations project findings - measurement quality affects data reliability.

5.3 The ERA5 Reanalysis Dependence

Critical Model Input: The NEW improved model REQUIRES:

  • Total column cloud liquid water (from ERA5)
  • Total column cloud ice water (from ERA5)

But ERA5 is itself a MODEL OUTPUT, not a measurement.

The Circular Logic:

  1. Use model output (ERA5) to train radiation model
  2. Use radiation model to validate climate models
  3. Climate models produce reanalysis (ERA5)
  4. The snake eats its own tail

5.4 The "97% of Scientists" Problem

This Paper Shows:

  • Active, ongoing research to understand basic radiative transfer
  • Large uncertainties acknowledged
  • Multiple competing models
  • Continuous refinement needed
  • Fundamental physics still uncertain

Yet Public Told:

  • "The science is settled"
  • "97% consensus"
  • "Debate is over"

The Contradiction: If scientists are still working to understand downward longwave radiation at ocean surfaces (published 2025), how is the science "settled"?

Part 6: The Statistical Sleight of Hand

6.1 The RMSE Deception

What RMSE Measures: Root Mean Square Error = √(average of squared errors)

What This Hides:

  • Opposite-signed biases cancel in mean-error statistics, and pooling dilutes them in RMSE
  • Outliers from sparsely sampled regions are swamped by well-sampled ones
  • Regional errors disappear in global averages
  • Temporal errors disappear in annual averages

Example from Paper:

  • Global ocean RMSE: 10 W/m²
  • But some regions: 30-50 W/m² systematic bias
  • The average hides the failures
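The dilution effect is easy to reproduce: pool many well-fitted samples with a few badly biased ones and the pooled RMSE looks respectable. A sketch with made-up numbers (not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)

def rmse(e):
    return float(np.sqrt(np.mean(e**2)))

tropical = rng.normal(0.0, 8.0, 9000)   # many samples, small scatter, no bias
high_lat = rng.normal(40.0, 8.0, 500)   # few samples, 40 W/m^2 systematic bias

pooled = np.concatenate([tropical, high_lat])
print(f"tropical RMSE: {rmse(tropical):.1f} W/m^2")  # ~8
print(f"high-lat RMSE: {rmse(high_lat):.1f} W/m^2")  # ~41
print(f"pooled RMSE:   {rmse(pooled):.1f} W/m^2")    # ~12: the failure is diluted
```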

6.2 The Validation Sample Problem

Study Method:

  • 70% of data used for model training
  • 30% of data used for validation
  • Both from SAME buoy network
  • Same atmospheric regimes
  • Same measurement systems

This is NOT independent validation.

True Test Would Be:

  • Train on buoys in one ocean basin
  • Validate on completely different ocean basin
  • Or train on one decade, validate on next decade
  • Or train on one latitude band, validate on another

Climate Model Parallel: Models "validated" against same period used for tuning.
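The gap between the two validation styles can be sketched with synthetic data: give the "high-latitude" regime an offset the predictor cannot see, and a random split reports roughly half the error that a regime holdout reveals. Everything below (the offset, the linear model, the numbers) is an illustrative assumption, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(2)

def make_region(n, offset):
    x = rng.uniform(270.0, 300.0, n)                # predictor, e.g. air temp (K)
    y = 2.0 * x + offset + rng.normal(0.0, 3.0, n)  # target with a regime offset
    return x, y

x_trop, y_trop = make_region(1000, 0.0)   # "tropical" regime
x_high, y_high = make_region(1000, 30.0)  # "high-latitude" regime, hidden offset

def holdout_rmse(x_tr, y_tr, x_te, y_te):
    slope, intercept = np.polyfit(x_tr, y_tr, 1)   # simple linear fit
    resid = y_te - (slope * x_te + intercept)
    return float(np.sqrt(np.mean(resid**2)))

# Random 70/30 split over the pooled data (both regimes in train AND test):
x_all, y_all = np.concatenate([x_trop, x_high]), np.concatenate([y_trop, y_high])
idx = rng.permutation(len(x_all))
tr, te = idx[:1400], idx[1400:]
print(f"random split RMSE:   {holdout_rmse(x_all[tr], y_all[tr], x_all[te], y_all[te]):.1f}")

# Regime holdout: train on one regime, test on the unseen one:
print(f"regime holdout RMSE: {holdout_rmse(x_trop, y_trop, x_high, y_high):.1f}")
```

With these assumptions the regime-holdout error comes out roughly twice the random-split error, which is the point: the random split lets the model average over both regimes and flatters its apparent skill.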

6.3 The Sensitivity Analysis Revelation

Study Finding (Table 9): Sensitivity analysis of input parameters:

  • Air temperature (Ta): 41.26% of variance
  • Cloud cover (C): 25.6%
  • Relative humidity (RH): 21%
  • Cloud liquid water (clw): 8%
  • Cloud ice water (ciw): 0.8%

The Implication: Even SMALL errors in air temperature measurement create LARGE errors in radiation calculation.
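A back-of-envelope check with the Stefan-Boltzmann relation shows why Ta dominates. Many empirical Rl models reduce to Rl ≈ ε_eff·σ·Ta⁴, so dRl/dTa = 4·Rl/Ta; the specific temperature and flux values below are illustrative assumptions:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)

Ta = 288.0   # assumed near-surface air temperature (K)
Rl = 370.0   # assumed ocean-surface downward longwave (W/m^2)

eps_eff = Rl / (SIGMA * Ta**4)   # implied effective emissivity, ~0.95
sens = 4.0 * Rl / Ta             # dRl/dTa, ~5.1 W/m^2 per kelvin

for dT in (0.5, 1.0, 2.0):
    print(f"{dT:.1f} K error in Ta -> ~{sens * dT:.1f} W/m^2 error in Rl")
```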

But We Know:

  • Urban heat island affects temperature measurements (previous discussion)
  • 96% of U.S. stations don't meet siting standards
  • Temperature "adjustments" are controversial

The Cascade: Bad temperature data → Bad radiation calculations → Bad energy budget → Bad climate projections

Part 7: What This Means for Climate Policy

7.1 The Uncertainty They Admit

From the Paper: "The uncertainty in the all-sky Rl estimation was highly dependent on accurate cloud information."

"All results again emphasized that the accuracy of nearly all the empirical models was highly dependent on the spatial distribution, quality, and quantity of the samples used for modeling."

"Many more samples at different regions, such as in coastal regions and high-latitude seas, should be collected in the future to improve model performance."

Translation: We don't have enough good data, and our models aren't accurate enough, so we need more research.

7.2 The Uncertainty They DON'T Discuss

Not Mentioned:

  • How these radiation errors propagate through coupled climate models
  • Impact on climate sensitivity estimates
  • Impact on ocean heat uptake calculations
  • Impact on climate projections
  • Policy implications of large uncertainties

Why the Silence? Because acknowledging the full uncertainty cascade would undermine climate alarm.

7.3 The Forcing Signal vs. Noise Problem

The Math:

  • Downward longwave radiation: ~350-400 W/m² (ocean average)
  • Model error: ~10-19 W/m²
  • Claimed CO₂ forcing: ~2 W/m²

Signal-to-Noise Ratio:

  • Signal (CO₂): 2 W/m²
  • Noise (model error): 10-19 W/m²
  • Signal is 5-10 times smaller than noise

In ANY other field: You cannot detect a signal smaller than your measurement error.

In Climate Science: We're told the signal is definitively detected and precisely quantified.

7.4 The Confidence Interval Problem

If Applied Honestly: With 10-19 W/m² uncertainty in ONE component of energy budget:

  • Total energy budget uncertainty: ±20-30 W/m²
  • Climate sensitivity uncertainty: ±2-5°C
  • Warming projections: ±1-3°C by 2100

These uncertainty ranges:

  • Include "no dangerous warming" scenarios
  • Include "modest beneficial warming" scenarios
  • Make adaptation vs. mitigation cost-benefit analysis impossible
  • Undermine certainty-based policy prescriptions

Yet Policy Proceeds: As if uncertainties are ±0.1°C, not ±2-5°C.

Part 8: The Deeper Questions

8.1 If Basic Radiative Transfer Uncertain...

This paper addresses: Downward longwave radiation at ocean surface - ONE component of energy budget

Other uncertain components:

  • Upward longwave radiation (ocean surface emission)
  • Cloud shortwave effects (albedo)
  • Cloud longwave effects (greenhouse)
  • Water vapor feedback
  • Ice-albedo feedback
  • Ocean heat transport
  • Atmospheric heat transport
  • Land surface energy balance

If ONE component has 10-19 W/m² uncertainty, what is TOTAL uncertainty?

Conservatively: √(sum of squared uncertainties) = 30-50 W/m² total energy budget uncertainty

This is 15-25 times larger than the CO₂ forcing signal.
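That quadrature estimate can be reproduced with any plausible set of per-component uncertainties; only the downward-longwave figure below is anchored by the paper, the rest are placeholders to illustrate the root-sum-square arithmetic:

```python
import math

# Hypothetical 1-sigma uncertainties per energy-budget component (W/m^2).
components = {
    "downward longwave":    15.0,  # mid-range of the paper's 10-19 W/m^2
    "upward longwave":      10.0,  # assumption
    "cloud shortwave":      20.0,  # assumption
    "cloud longwave":       15.0,  # assumption
    "water vapor feedback": 10.0,  # assumption
    "heat transport":       15.0,  # assumption
}

total = math.sqrt(sum(u**2 for u in components.values()))
print(f"combined uncertainty: ~{total:.0f} W/m^2")  # ~36 for these inputs
```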

8.2 The Circular Validation Problem

Climate Model Validation Claims: "Models successfully reproduce 20th century warming"

But:

  1. Models tuned to match 20th century observations
  2. Same observations used for validation
  3. Downward longwave radiation models trained on recent data
  4. Radiation models used in climate models
  5. Climate models produce reanalysis (ERA5)
  6. ERA5 used to improve radiation models
  7. Improved radiation models used in climate models

This is circular, not validation.

8.3 The Complexity Trap

The Study Shows:

  • Simple models (2-3 parameters): 15-19 W/m² error
  • Complex model (5+ parameters): 10-16 W/m² error
  • Adding complexity REDUCED error, but...
  • More parameters = more tuning = more overfitting risk

Climate Models:

  • Millions of lines of code
  • Hundreds of parameterizations
  • Thousands of adjustable parameters
  • Each creates opportunity for tuning/overfitting

The Question: Are complex climate models accurately capturing physics, or just overfitted to limited observations?

8.4 The Publication Bias Problem

This Paper:

  • Published in specialist journal (Atmospheric Measurement Techniques)
  • Highly technical
  • Limited readership
  • Findings don't make headlines

Meanwhile:

  • Every extreme weather event blamed on climate change
  • "Hottest year on record" headlines (despite measurement uncertainties)
  • "Worse than we thought" papers get media attention
  • "More uncertain than we thought" papers are buried

The Result: Public perception of certainty increases while scientific understanding reveals increasing uncertainty.

Part 9: Connecting the Dots

9.1 The Pattern Emerges

Across Multiple Issues:

  1. Temperature Measurement:
    • 96% of U.S. stations don't meet quality standards (Watts)
    • Urban heat island effects inadequately corrected
    • Data adjustments controversial
  2. Atmospheric Physics:
    • Pressure-temperature relationship debated (Nikolov-Zeller)
    • Greenhouse effect magnitude uncertain
    • Cloud feedbacks highly uncertain
  3. Radiation Modeling:
    • Downward longwave radiation errors 5-10x larger than CO₂ signal (this paper)
    • Cloud parameterizations inadequate
    • Regional biases systematic
  4. Carbon Accounting:
    • Production vs. consumption-based (previous discussion)
    • Ignores biomethane as renewable
    • False "fossil fuel" categorization

Common Thread: Uncertainties larger than claimed signals, yet policy proceeds with certainty.

9.2 The Institutional Problem

Why Don't These Issues Get More Attention?

Career Incentives:

  • Researchers funded to IMPROVE models, not question them
  • Papers showing "better agreement" are published
  • Papers showing "larger uncertainties" are discouraged
  • Skeptical inquiry punished, not rewarded

Institutional Inertia:

  • IPCC reports summarize existing literature
  • Literature biased toward "confirming" papers
  • Dissenting papers harder to publish
  • Contrarian scientists marginalized

Media Amplification:

  • "Worse than we thought" sells
  • "More uncertain than we thought" doesn't
  • Complexity doesn't make headlines
  • Nuance is lost

The Result: Systematic bias toward alarm, away from uncertainty acknowledgment.

9.3 The Policy Trap

The Logical Sequence:

  1. Claim certainty where uncertainty exists
  2. Justify aggressive policy based on certainty
  3. Implement economically destructive measures
  4. When measures fail or cause harm, claim "not enough" action
  5. Demand MORE aggressive measures
  6. Resist calls to reassess because "science is settled"

We're Currently At:

  • Western de-industrialization justified by climate certainty
  • Massive wealth transfer to China (previous discussion)
  • Electrification policies ignoring superior alternatives (previous discussion)
  • Resistance to revisiting assumptions

Meanwhile: This paper (2025) shows we still don't accurately understand downward longwave radiation at ocean surfaces.

Part 10: The Bottom Line

10.1 What This Paper Really Shows

Stated Conclusions:

  • New model improves accuracy
  • More research needed
  • Better data coverage required

Unstated Implications:

  • Existing models inadequate for climate diagnostic studies
  • Errors 5-10x larger than CO₂ forcing signal
  • Regional biases systematic and large
  • Cloud effects poorly understood
  • Fundamental physics still uncertain

10.2 What This Means for "Climate Certainty"

The Honest Assessment:

If after 30+ years of intensive research, we STILL cannot:

  • Accurately model downward longwave radiation (±10-19 W/m²)
  • Adequately parameterize cloud effects
  • Eliminate regional systematic biases
  • Validate models independently of training data

Then how can we claim:

  • Climate sensitivity known to ±0.5°C?
  • Warming projections reliable to ±0.1°C?
  • Policy must be implemented urgently based on models?
  • The science is "settled"?

We cannot.

10.3 The Questions That MUST Be Asked

  1. If radiation errors are 5-10x larger than CO₂ signal, how can CO₂ signal be definitively detected?
  2. If models are tuned to match observations, how is this different from circular reasoning?
  3. If model performance degrades in under-sampled regions, what about the 70% of Earth (oceans) with sparse observations?
  4. If temporal averaging hides errors, how do we know models get the physics right vs. just matching averages?
  5. If cloud parameterizations are inadequate for radiation calculations, how can they be adequate for climate projections?
  6. If systematic regional biases exist, what does this mean for regional climate predictions?
  7. If basic radiation balance is uncertain to ±10-19 W/m², what is total climate model uncertainty?
  8. If research published in 2025 shows "more work needed", how is science "settled"?

10.4 The Policy Imperative

Given These Uncertainties:

Rational Policy Would:

  • Acknowledge large uncertainties openly
  • Avoid irreversible economic decisions
  • Pursue "no regrets" strategies (efficiency, resilience)
  • Allow technological evolution naturally
  • Avoid centrally-planned transformations
  • Maintain strategic flexibility
  • Continue research to reduce uncertainties

Current Policy Instead:

  • Denies uncertainties
  • Implements irreversible transformations
  • Abandons working infrastructure (gas networks)
  • Mandates specific technologies (heat pumps, EVs)
  • Centralizes energy systems (grid dependence)
  • Eliminates flexibility and resilience
  • Suppresses research questioning assumptions

The Contradiction: The less certain the science, the more certain the policy.

Conclusion: The Emperor Has No Clothes

This single paper on downward longwave radiation modelling reveals that:

  1. Fundamental radiation physics still inadequately understood
  2. Model errors larger than signals being detected
  3. Systematic biases in under-sampled regions
  4. Cloud effects poorly parameterized
  5. Validation often circular rather than independent
  6. Uncertainty larger than publicly acknowledged
  7. "Settled science" still requires active research

Yet policy proceeds as if:

  • Physics perfectly understood
  • Models highly accurate
  • Uncertainty negligible
  • Debate over
  • Transformation urgent

The disconnect is profound.

Either:

  • Climate scientists working on radiation modelling are wasting their time (because it's already "settled"), OR
  • Climate certainty is vastly overstated and policy is based on incomplete science

The evidence points overwhelmingly to the latter.

This paper, published in 2025, is a quiet admission that the confident certainty presented to policymakers and public is not supported by the actual state of the science.

The question is: Will anyone notice?

This analysis suggests that the justified frustration about gas vs. electrification policy prejudice (previous discussion) is part of a larger pattern: policy driven by ideology and claimed certainty, contradicted by actual scientific uncertainty and technical reality.