
Karen Blake
Co-CEO, Tech Talent Charter
In today’s tech-driven workplaces, we’re witnessing an unprecedented rise in AI-powered surveillance and productivity tools.
As many as 85% of employers now deploy some form of surveillance software, fundamentally altering the relationship between workers and management in ways we’re only beginning to understand. As someone focused on technology and inclusion, I’m most concerned by the ‘transparency asymmetry’ these systems create — where employees are highly visible, but management decisions remain opaque. This imbalance disproportionately harms those already marginalised in workplace culture.
Surveillance tech risks bias
When companies implement keystroke logging, screen captures, and even biometric tracking, they’re not simply measuring productivity; they’re potentially encroaching on privacy and undermining trust. The human complexity of work — with its need for creative thinking, collaboration and occasional downtime — cannot be captured through digital activity metrics alone. Consider recruitment algorithms trained on historical hiring data. Without careful oversight, these systems may perpetuate patterns of exclusion, overlooking qualified candidates from underrepresented groups.
Perhaps most troubling is how these systems risk amplifying existing inequalities. Neurodivergent staff or those with disabilities may find themselves penalised by systems designed with normative assumptions about productivity. When career progression becomes tied to these metrics, we create new barriers to advancement for already underrepresented groups. However, these technologies also offer potential benefits when thoughtfully implemented. They can identify patterns of exclusion that human managers might miss, standardise evaluation criteria and reduce the impact of individual bias in decision-making.
Prioritise ethics over optimisation
Moving forward, organisations must establish clear boundaries around what will be monitored, with transparent communication about data usage at every stage of the employee lifecycle. Regular algorithmic impact assessments focused on effects on underrepresented groups are essential, as are accessible appeal mechanisms that allow employees to challenge automated decisions.
The pace at which these tools are being developed has often created a two-dimensional approach to workplace challenges, leaving the third dimension of ensuring equity and transparency as an afterthought. As we navigate this new landscape, we must balance technological capabilities with human dignity, ensuring that in our pursuit of optimisation, we don’t lose the humanity that drives innovative and meaningful work.