Human behavior creating global warming is metaphysically immoral, and veganism is a moral solution.

In the previous post we established that not changing our behavior in response to global warming is immoral. In line with this, a report by two World Bank advisers found that the animal agriculture industry contributes, surprisingly, around fifty-one percent of all global emissions. From this we can conclude that consuming less meat would dramatically reduce our harmful impact on the planet. But why haven’t we heard of this before?

To answer this, the video above shows some of the statistics found in the documentary Cowspiracy, and it explores why this might not be as well-known a cause as the direct burning of fossil fuels. The reasons offered are the reluctance of charities to confront the public about such a large change in behavior, and the power of the animal agriculture industry in stamping out dissent.

But in addition to morally valuing biological life on earth by not suffocating it with inorganic CO2, there is another benefit to not consuming animal meat: the correct valuing of biologically more evolved animals over their less evolved counterparts, plants and grains. As Robert Pirsig writes in Lila:

An evolutionary morality,.. would say [eating meat is] scientifically immoral for everyone because animals are at a higher level of evolution, that is, more Dynamic, than are grains and fruits and vegetables.. It would add, also, that this moral principle holds only where there is an abundance of grains and fruits and vegetables. It would be immoral for Hindus not to eat their cows in a time of famine, since they would then be killing human beings in favor of a lower organism.

Robert Pirsig

Thirdly, there is the growing list of health benefits to be found in reducing the amount of meat in your diet, improving the overall biological quality of the people on the planet.

Together, these three reasons make veganism moral on many levels, supported by the evolutionary hierarchy of the Metaphysics of Quality.

The Evil of Disregarding Climate Science

The MOQ is a beautiful intellectual framework, and it is uniquely able to show that it is both immoral and illogical not to change our behaviour in response to global warming. Traditionally, the argument for changing our behaviour goes something like this:

“We are running a dangerous experiment to see how much CO2 we can pump into our atmosphere. At its worst, global warming threatens the existence of mankind. The right thing to do is to heed the dire warnings of climate scientists. They speak of rising water levels and increasing global temperatures. With these increasing temperatures and rising water levels, mankind may be no longer able to survive. So we should, we must change our behaviour.”

This argument has many opponents, however: from those in power who like things the way they are, to those co-opted by power with bogus arguments about the validity of the science.

That’s because, without the MOQ, climate change opponents and even proponents are easily able to question the validity of truth and scientific fact. They are also easily able to immorally question the content of those facts for their own monetary gain.

With the MOQ, however, we can make the argument for change much stronger. With it, the issue of climate change becomes not only a matter of fact but a matter of quality. The MOQ does this by showing that not only is it moral to change our behaviour, it is evil not to. An MOQ argument for changing our behaviour follows:

“If we don’t value the biological quality of the life in our oceans and allow inorganic particles of CO2 to fill our planet, then we are allowing a lower level to subsume a higher level, and this is immoral. If we allow the social values of money and power to trump the intellectual truths of scientists explaining the threat, then this too is immoral. The threat of CO2 winning its fight against life on earth is dire: biological quality is necessary for the social and intellectual quality of human beings to exist, and without it, the existence of those two levels is at risk. The moral thing to do, then, is to act so that CO2 no longer wins its fight against biological quality. The moral thing to do is to follow what makes sense intellectually and not succumb to social greed. The moral thing to do is to change our behaviour in response to global warming.”

This is the unique thing about the MOQ. With it we can reject excuses of cultural relativism or scepticism about the existence of truth. We can call out paid arguments for the non-existence of global warming as the evil that they are. And we can logically say that responding to global warming is moral, not just for some people in some place and time, but for all people, everywhere. And that’s very powerful.

The Diagnosed Threat Of Artificial Intelligence

With Elon Musk having recently said he will give away a billion USD to fund research into AI to ensure its risks are minimised, I wonder whether there isn’t already a free solution to the unique problem presented by AI in the codes of a moral philosophy we know.

In the Metaphysics of Quality, the Law of the Jungle declares that biological quality should always prevail over inorganic quality. In this spirit, I propose a simple AI rule: if a machine, controlled by software, is capable of taking a life in its day-to-day operation, then the machine must be able to detect life and avoid killing or injuring it where possible, unless of course it is specifically designed to do so (weapons).
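The proposed rule can be sketched in a few lines of code. This is a hypothetical illustration only; the `Machine` class, its fields, and `may_proceed` are names invented here to make the rule's logic concrete, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    can_take_life: bool   # capable of killing or injuring in day-to-day operation
    is_weapon: bool       # specifically designed to take life (the exemption)
    life_detected: bool   # output of an assumed life-detection sensor

def may_proceed(machine: Machine) -> bool:
    """Apply the proposed rule: return True if the machine may act."""
    if not machine.can_take_life:
        return True                   # harmless machines are unconstrained
    if machine.is_weapon:
        return True                   # weapons are explicitly exempted
    return not machine.life_detected  # otherwise: halt when life is detected

# A robotic mower must stop when its sensor detects life in its path:
mower = Machine(can_take_life=True, is_weapon=False, life_detected=True)
print(may_proceed(mower))  # False: stop and avoid
```

Note that the entire moral judgement lives in the programmer's rule, not in the machine, which is exactly the point made later in this post.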

That’s it. Doing scientific research to solve what is fundamentally a philosophical issue looks a lot like declaring war on an international policy issue [the War on Terror]: lots of money spent and bad results. Unless, of course, the research makes the life-detecting capabilities of machines more affordable. I live in hope.

I’ve seen lots of talk recently about the moral threat of AI. So, what does the MOQ have to say about it?

To start with, here is a fact that appears to be lost in much of the discussion.

No computer has ever made a moral judgement that it hasn’t been told to make, and there is no reason to think this will ever change. Believing it will change spontaneously as a result of machines’ improved intelligence is just that: a leap of faith, unsupported by evidence. As it stands, it is the human programmer who makes all moral judgements of consequence. Computers, being 0s and 1s, are simply the inorganic tools of the culturally moral programmer.
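A toy example makes the point concrete. The function name and scenario below are invented for illustration; the point is that what looks like the machine’s “moral judgement” is nothing more than a rule its programmer wrote down in advance.

```python
def lifeboat_decision(seats_left: int) -> str:
    """A machine 'deciding' who boards a lifeboat.

    The moral judgement below (first come, first served when seats
    remain; refuse when full) was made by the human programmer.
    The computer only executes it, as 0s and 1s.
    """
    if seats_left > 0:
        return "board"
    return "refuse"

print(lifeboat_decision(3))  # "board": the programmer's rule, executed
print(lifeboat_decision(0))  # "refuse": still the programmer's rule
```

However sophisticated the rules become, the authorship of the judgement stays with the programmer.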

Unfortunately, this isn’t likely to be appreciated any time soon, because of a philosophical blind spot in our culture. That blind spot is our metaphysics, which neglects the fundamental nature of morality and, in doing so, gets confused both about where morality comes from and about whether machines can make moral judgements independently of being instructed to do so.

For example, in a recent Foreign Affairs article, Nayed Al-Rodhan appears to believe that AI will start making moral judgements as a result of more ‘sophistication’, learning and experience.

“Eventually, a more sophisticated robot capable of writing its own source code could start off by being amoral and develop its own moral compass through learning and experience.”

The MOQ makes no such claim, and as already mentioned, such a claim is contrary to our experience. According to our experience, it is only human beings and the higher primates who can make social moral judgements in response to Dynamic Quality. Machines are simply inorganic tools, and their components only make ‘moral decisions’ at the inorganic level.

That’s not to say, though, that there are no dangers of AI and that all risks are overblown. AI, loosely defined as advanced computational or mechanical decision-making not requiring frequent human input, threatens society if it is either poorly programmed and a catastrophic sequence of decisions occurs, or well programmed by a morally corrupt programmer. Neither of these scenarios, however, is fundamentally technological; they are philosophical, psychological and legal in nature.

The unique threat of AI is this aforementioned increase in the freedom of machines to make decisions without human intervention, which makes them both more powerful and more dangerous. The sooner our culture realises this, the sooner it can start to discuss these moral challenges and stop worrying about the machines ‘taking over’ in some kind of singularity apocalypse. Because unfortunately, if we don’t understand the problem, a solution will be wanting, and therein lies the real threat of AI.