Artificial intelligence (AI) is expected to provide answers, products, and services of ever-increasing quality to satisfy our needs. However, AI also entails serious risks, up to and including human enslavement or even extinction. To handle this ethical and sustainability issue properly, we need to ask what we really want, what our real goal is, and what we really are. Classical philosophy defines us as thinking entities, and the central problem is how to think in the right way, as persons and as groups and societies. Accordingly, designing and using AI as a tool that supports our thinking process may be the right way to take advantage of the possibilities AI offers. If, however, we design and use AI as a provider of answers, services, and products, as we currently do, the risk is that an ever more advanced AI will swiftly replace our thinking and thereby undermine our existence.