This paper critically examines the growing use of big data algorithms and AI in science, society, and public policy. While these tools are often introduced with the goal of increasing efficiency, they do not always lead to greater empowerment or fairness for individuals or communities. Persistent issues such as bias, measurement error, and over-reliance on prediction can undermine these goals and produce outcomes that are neither fair nor transparent, especially when automated decisions replace human judgment. Beyond such technical limitations, the widespread use of data-driven methods also shapes the distribution of power, influences public trust, and raises questions about the health of techno-socio-economic institutions. We argue that the pursuit of optimality cannot succeed without careful evaluation of ethical risks and societal side effects. Responsible innovation demands open standards, ongoing scrutiny, and a focus on human values alongside technical performance. Our goal is to encourage a fundamental reorientation of the big data paradigm away from short-term optimization and towards a framework of “systemic resilience” and “participatory oversight”, or even co-creation. We propose specific pathways to achieve this, arguing that responsible innovation requires drawing on complexity science while integrating constitutional and cultural values, so that the resulting technologies are not merely efficient but symbiotic with human self-organization.
