This paper outlines the implications of the adoption of artificial intelligence (AI), and more specifically of machine learning (ML), for personalization by the old ‘gatekeepers’ (the legacy media) and by the new algorithmic media (the digital intermediaries). Data-driven personalization delivers demonstrable commercial benefits for the companies that deploy it, and a purported convenience for consumers, but it can also have individual and societal consequences that convenience cannot counterbalance. Nor are citizens as accepting of targeting as has been suggested. According to an interim report on online targeting released by the UK’s Centre for Data Ethics and Innovation (CDEI), ‘people’s attitudes towards targeting change when they understand more of how it works and how pervasive it is’.
Machine learning (ML)-driven personalization is fast expanding from social media into the wider information space, encompassing legacy media, multinational conglomerates and digital-native publishers. However, this expansion is taking place in a regulatory and oversight vacuum that needs to be addressed as a matter of urgency.
Mass-scale adoption of personalization in communication has serious implications for human rights, societal resilience and political security. Data protection, privacy and wrongful discrimination, as well as freedom of opinion and of expression, are some of the areas impacted by this technological transformation.
Artificial intelligence (AI) and its ML subset are novel technologies that demand novel ways of approaching oversight, monitoring and analysis. Policymakers, regulators, media professionals and engineers need to be able to conceptualize issues in an interdisciplinary way that is appropriate for sociotechnical systems.
Funding needs to be allocated to research into human–computer interaction in information environments, data infrastructure, technology market trends, and the broader impact of ML systems within the communication sector.
Although global, high-level ethical frameworks for AI are welcome, they are no substitute for domain- and context-specific codes of ethics. Legacy media and digital-native publishers need to overhaul their editorial codes to make them fit for purpose in a digital ecosystem transformed by ML. Journalistic principles need to be reformulated and refined for the current information environment so that they can effectively inform the ML models built for personalized communication.