Following the classical philosophical definition of ethics and the psychological research on problem solving and decision making, the issue of ethics becomes concrete and opens the way for the creation of IT systems that can support the handling of moral problems, in a manner similar to the way humans handle their own moral problems. The processes of communicating information and receiving instructions are linguistic by nature. Moreover, autonomous and heteronomous ethical thinking is expressed through language use. Indeed, the way we think ethically is not only linguistically mediated but linguistically construed: whether we think, for example, in terms of conviction and certainty (indicating heteronomy) or in terms of questioning and inquiry (indicating autonomy). A thorough analysis of the language used in these processes is therefore of vital importance for the development of the tools and methods mentioned above. Given a clear definition grounded in philosophical theories and in research on human decision making and linguistics, we can create and apply systems that handle ethical issues. Such systems will help us to design robots and prescribe their actions, to communicate and cooperate with them, to control the moral aspects of robots’ actions in real-life applications, and to create embedded systems that allow continuous learning and adaptation.