“Public Policy over Massive Time Scales”
David and I are law professors focused on public policy: how to make the world better through it. My first reaction was that the idea was not useful at all, but it is a topic we can discuss. There is a related issue of massive time scales.
Amortization rate of benefits: economists assume this
Examples: bridge (ca 30 years)
Reform of judiciary (ca. 100 years)
Radioactive waste (lasts 10 000 to 1 million years)
Power plants (climate change--indefinite time scale)
3% growth >> 0.97 annual discount factor (1/1.03 ≈ 0.97)
Calculate social value by subtracting costs from discounted benefits
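The arithmetic in the two lines above can be sketched as follows. The project, its cost, and its benefit stream are illustrative assumptions, not the speaker's numbers; only the 3% rate and the "discounted benefits minus costs" rule come from the notes.

```python
# Social value = discounted benefits - costs, at a 3% discount rate.

def present_value(amount, years, rate=0.03):
    """Value today of `amount` received `years` from now, discounted at `rate`."""
    return amount / (1 + rate) ** years

# Hypothetical bridge-like project: $10m cost today, $1m of benefits
# per year for 30 years (the ~30-year bridge example above).
cost = 10e6
benefits = sum(present_value(1e6, t) for t in range(1, 31))
social_value = benefits - cost
print(f"discounted benefits: ${benefits:,.0f}, social value: ${social_value:,.0f}")
```

Note that the undiscounted benefits are $30m, but discounting at 3% shrinks them to under $20m; for horizons of 300 years or more, that shrinkage dominates everything else.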
If a project has effects 100 years out, you have to take into account that you might get a better outcome by simply investing the money instead.
Suppose we are considering a project with a payoff of $1bn in the future, and are deciding how much to spend today. If you assume the discount rate is very low, then the $1bn is worth a few hundred million today; a high discount rate >> far lower payoff.
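To make the $1bn example concrete: the rates and the 100-year horizon below are illustrative choices, not figures from the notes, but they show how strongly the present value depends on the rate.

```python
# Present value today of $1bn received 100 years from now,
# under three illustrative discount rates.

def present_value(amount, years, rate):
    return amount / (1 + rate) ** years

payoff = 1e9
for rate in (0.01, 0.03, 0.07):
    pv = present_value(payoff, 100, rate)
    print(f"{rate:.0%} discount rate -> ${pv:,.0f} today")
```

At 1% the payoff is indeed worth a few hundred million today; at 7% it is worth barely a million, i.e. almost nothing relative to plausible project costs.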
The government uses a 300-year estimate.
CAFE standards (vehicle fuel economy), fluorescent and incandescent lamp standards, small electric motor standards, and at least eleven others
Every ton of carbon has a social cost to be factored in.
EPA will issue major regulations that include the social cost of carbon.
The same thing can happen in any economic model of climate change.
Base case: climate change reduces usable output, but growth continues; errors have no long-term effects. The idea is that not that much will change: growing corn tomorrow looks much like growing corn today.
Base scenario: from 30 to 27 times richer in 300 years. Why would we sacrifice today to help people who will be that much richer no matter what?
Anywhere from 25 times richer to dark ages!
How important is discounting?
How does today's uncertainty affect estimates?
Efforts are highly sensitive to initial assumptions (Weisbach).
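The sensitivity point, and the "25 times richer vs. dark ages" spread above, can be illustrated with compound growth: over 300 years, small differences in the assumed growth rate produce wildly different futures. The specific rates below are illustrative, not Weisbach's.

```python
# How much richer is society after 300 years of compound growth?
# Tiny changes in the assumed rate swing the answer by orders of magnitude.

def wealth_multiple(rate, years=300):
    """Multiple of today's wealth after `years` of growth at `rate`."""
    return (1 + rate) ** years

for rate in (0.01, 0.02, 0.03):
    print(f"{rate:.0%} growth for 300 years -> {wealth_multiple(rate):,.0f}x richer")
```

One percentage point of growth, compounded for 300 years, is the difference between being roughly 20 times richer and thousands of times richer, which is why long-horizon estimates are so fragile.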
Deep uncertainty, but also: the climate change problem is one of energy transition.
The source of our wealth is fossil fuels >> an energy transition over roughly 100 years; that is an engineering problem.
[but ironically, though you wish to avoid it, “the bad level” is built into the default well-being utilitarianism used in this argument! loop!]