By Chelsea Manning
Today, my friend and security expert, Yan Zhu, asked us to consider the following post-truth thought experiment:
Imagine that you had a magic machine. You tell the machine what your goals are. The machine tells you, in any situation, the optimal statement to say in order to achieve your goals, and who to say it to. The statement may or may not be true.
Under which circumstances, if any, would you follow the machine’s instructions?
Example 1: Bob tells the machine that his goal is to become as rich as possible. The machine instructs Bob to publish a “fake news” article about how climate change is an illuminati conspiracy theory.
Example 2: Alice tells the machine that her goal is to cure cancer. The machine instructs her to tell the local florist that her favorite color is red when in reality it is blue.
Instinctively, most truth-loving people would consider Bob to be immoral for following his instructions; however, we would probably not say the same of Alice. Many of us would even admit that in Alice’s situation we would follow the machine’s instructions.
She then concludes at the end:
“How effective is honesty at achieving your goals, and at what point do you decide that lying is a more effective means to an end?”
I want to say “never” for the second question, but I can clearly imagine a world in which it is the wrong answer.
Yan’s analysis has roots in a consequentialist ethic—specifically, a utilitarian perspective.
In plain English, the question she is asking is this: “Will more good than bad come from lying?”
In other words: “Should I lie to get cookies, even if they’re being shared with everyone?” And further: “If lying can get cookies for everyone (because they’re so delicious), is it a moral imperative to do so?”
In the utilitarian perspective, right and wrong are determined by what causes the most pleasure—by what will get you the most cookies—even if it’s bad in the long run. If it feels good, do it.
Being honest still gets you cookies in the end. It’s just harder. (Arguably, in the end, it also feels better, which is its own kind of sweet.)
Perhaps the best lens with which to analyze this thought experiment is the “categorical imperative” suggested by Immanuel Kant: “an objective, rationally necessary and unconditional principle that we must always follow despite any natural desires or inclinations we may have to the contrary.” (Stanford Encyclopedia of Philosophy)
Fundamentally, under the categorical imperative, you should never lie, even if it gets everybody cookies. This is because you can still achieve your goal of obtaining cookies through honest means—it is just harder.
But it’s not only about whether everyone gets cookies; it’s about how you get them. For Kant, the categorical imperative expresses the basic principle of morality, commanding certain courses of action quite unconditionally, “quite independently of the particular ends and desires of the moral agent.” The categorical imperative “binds us regardless of our desires: everyone has a duty to not lie, regardless of circumstances and even if it is in our interest to do so. These imperatives are morally binding because they are based on reason, rather than contingent facts about an agent.” (Wikipedia)
In the case of Yan’s thought experiment, even if the machine can help you achieve the goal (of getting cookies for everyone), you should carefully consider the machine’s output and act on your own moral imperative. When you depend on the output of the machine, whether lying or telling the truth, you are no longer acting as an independent moral agent.
In other words, in the post-truth world, we need to make our own choices based on our conscience and the categorical imperative.
Yan’s tweet announcing the thought experiment: — yan (@bcrypt) January 23, 2017