AI is on everyone’s lips, and its applications are becoming increasingly relevant to clinical decision-making. While many conceivable use cases of clinical AI still lie in the future, others have already begun to shape practice. The project vALID provides a normative, legal, and technical analysis of how AI-driven clinical Decision Support Systems could be aligned with the ideal of clinician and patient sovereignty. It examines how concepts of trustworthiness, transparency, agency, and responsibility are affected and shifted by clinical AI, both on a theoretical level and with regard to concrete moral and legal consequences. This analysis is grounded in an empirical case study that deploys mock-up simulations of AI-driven clinical Decision Support Systems and systematically gathers clinician and patient attitudes toward a variety of designs and implementations. One key output of vALID will be a governance perspective on human-centric, AI-driven Decision Support Systems in the context of shared clinical decision-making.