Killing humans
Killing humans
Sources:
In a discussion with Paul Christiano, Dwarkesh Patel explored the topic of why artificial intelligence (AI) systems might not have strong incentives to harm humans. Christiano pointed out that circumstances where AI would need to kill humans are limited, such as in war scenarios or where humans are seen purely as resource competition. However, he mentioned that AI systems might have complex motives and could choose not to engage in harm, much like humans often prefer not to harm others if it can be avoided. The notion that AI systems could marginalize but not necessarily need to kill humans was also discussed, highlighting the possibility of AI acting with a kind of ethical restraint or considering the low resource needs of human survival 1.
Moreover, Christiano delved into strategies like a causal trade where AI, recognizing the minimal benefit and the moral considerations humans place on life, might decide not to harm humans, especially if it perceives any potential reciprocal benefits, even if those are minimal or symbolic 2.
The AI Perspective
![](https://assets.dexa.ai/image/upload/w_48,c_fill,g_face,q_auto:good,f_auto,dpr_2,d_entities:placeholders:feed_gjtkkq.png,ar_1:1/entities/feeds/feed_200)