but I’d leave the deeper commentary on work-life datafication and automation to those with that expertise. That being said, the sociology in my socio-legal training still urges me to keep track of the shifting power relations and inherent dependencies between the different parties involved. As our societies become more “platformised” and dependent on large tech companies, we need to understand what that dependency brings and how we can find appropriate balances in that quite fundamental shift.
I think that what is new with the new, so to speak, is the adaptability and possible agency of contemporary and coming machine learning capabilities. Things acquiring agency, albeit an age-old fear, is now very much here in a computerised form. Not as a cognitively aware intelligence, but as a mimicking agency in the adaptive and predictive recommendation systems, the personalised services and the automated decisions. In contemporary AI, this sort of agency relies heavily on the data available for its underlying pattern recognition. And, given that this data is often collected from human behaviour, it is too often a skewed source with biased legacies, leading to lower accuracy for darker skin tones in facial recognition, or recruitment systems that flawedly prefer male candidates over the best-fitting ones, regardless of gender. That “gold standard”, so to speak, is not always so golden after all, posing not only a number of challenging tasks with regard to achieving better data quality but also – and more importantly, I think – a normative challenge: what ought such systems learn from, and what ought they reproduce and amplify in a far from equal society?
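The recruitment point can be made concrete with a deliberately minimal sketch. The data below is entirely hypothetical, and the “model” is just a majority vote over past decisions; the point is only that a learner fitted to a biased historical record reproduces that record’s skew, independent of candidate qualification:

```python
# Illustrative sketch with hypothetical data: a naive "learner" that simply
# reproduces the statistics of past hiring decisions. If the historical
# record favoured one group, the learned rule inherits that skew.
from collections import defaultdict

# Hypothetical historical hiring records: (group, qualified, hired).
# Note the skew: group "a" was mostly hired, group "b" mostly rejected,
# even when equally qualified.
history = [
    ("a", True, True), ("a", True, True), ("a", False, True),
    ("b", True, False), ("b", True, True), ("b", False, False),
]

# "Training": estimate the past hiring rate per group from the biased record.
counts = defaultdict(lambda: [0, 0])  # group -> [times hired, total seen]
for group, _qualified, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def recommend(group: str) -> bool:
    """Recommend hiring if a majority of past cases in this group were hired.
    Qualification never enters the rule - only the biased historical pattern."""
    hired, total = counts[group]
    return hired / total >= 0.5

# Two equally qualified candidates receive different recommendations,
# purely because the training data was skewed:
print(recommend("a"))  # True
print(recommend("b"))  # False
```

Real systems are of course far more complex than a per-group frequency count, but the underlying mechanism – pattern recognition faithfully amplifying the patterns it was given – is the same normative problem described above.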
Automated decision-making plays a growing role in both business and governmental policy, with various impacts on humans and on aspects of their fundamental rights. These range from freedom of speech and of the press in the case of social media newsfeeds, to the right to privacy and data protection, to even fair trial and due process requirements when algorithmic tools such as facial recognition technology are deployed in investigations or judicial procedures. How can governments step up and preserve rights and values in the Digital Age?
Again, a great, but huge, question. As mentioned above, I think many of the challenges are found in the intricate interplay between learning technologies and already present societal structures. The surge of ethics guidelines in the field of AI highlights a number of important issues, emphasising accountability and transparency in order to secure fairer and more trusted applied AI systems. As many besides me have pointed out, a challenge now is to move from principle to process.