AI in the workplace has become a frequent topic of discussion for many businesses. The capabilities of Artificial Intelligence (AI) technology make it well suited to streamlining tasks, which has fuelled anxiety that AI will replace jobs. The implications of using AI in the workplace have yet to be fully defined, leading to concern that more guidance is needed to integrate AI at work.
Workplaces should be given technology guidance that covers usage, regulation and ethical implications. Clear guidelines will help employees navigate AI, ensuring it is used efficiently for its intended purposes without breaching any existing regulations.
What is AI Technology?
The concept of AI has been around formally since the 1950s. It was initially defined as a machine’s ability to perform tasks previously requiring human intelligence. This broad definition has evolved over decades of research and technological advancement.
When delving into the idea of giving intelligence to a machine, such as a computer, it’s sensible to begin by clarifying the term ‘intelligence.’ This is especially important when determining if an artificial system truly deserves to be labelled as such.
In the realm of AI, technologies like machine learning and natural language processing play significant roles. Each of these technologies is progressing on its own trajectory. When utilized in conjunction with data, analytics, and automation, they can assist businesses in attaining their objectives. Whether it’s enhancing customer service or refining the supply chain, these tools contribute to achieving such goals.
AI in the Workplace and AI in UK Workplaces
The “State of AI at Work” report by US-based management software firm Asana reveals that AI is gaining momentum in workplaces. Currently, 30% of employees use AI for data analysis and 25% for administrative tasks, with 62% and 57% respectively showing eagerness for AI in these roles. Notably, 45% of US employees are enthusiastic about AI in brainstorming, compared to 32% in the UK.
However, a mere 24% of companies have AI usage policies, leaving 26% of employees uncertain about acceptable use; many fear being seen as lazy or deceitful if they use AI. Interestingly, 20% feel like impostors when relying on it.
Concurrently, 60% of employees aim to democratise AI access within organisations, seeking inclusivity.
Surprisingly, some are open to AI assessing their job performance. In the US, 38% accept this, while in the UK, 28% agree.
Disparities emerge in AI training, with 23% of US firms offering programmes, compared to 13% in the UK.
Furthermore, 48% of employees desire more guidance from employers on effective AI use, emphasising the importance of support alongside integration.
One example of a company integrating AI responsibly is IBM, which promotes ethical AI use through its Trusted AI initiative, creating AI solutions that prioritise fairness, transparency and reduced bias.
Guided by internal guidelines and best practices, IBM works to ensure ethical AI development. Its AI Fairness 360 toolkit, an open-source resource, helps identify and address biases in AI systems, making it a valuable aid for developers upholding ethical AI standards.
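To make the idea of "identifying bias" concrete, here is a minimal plain-Python sketch of the kind of metric such toolkits compute: the disparate impact ratio, which compares favourable-outcome rates between an unprivileged and a privileged group (ratios below roughly 0.8 are often flagged under the "four-fifths rule"). The data and groups below are entirely hypothetical illustrations, not IBM's API.

```python
# Sketch of a disparate-impact check, the kind of bias metric that
# toolkits such as AI Fairness 360 automate. All data, group names
# and thresholds here are hypothetical illustrations.

def selection_rate(outcomes):
    """Fraction of favourable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of selection rates; values below ~0.8 suggest bias
    under the common 'four-fifths rule'."""
    return selection_rate(unprivileged) / selection_rate(privileged)

# Hypothetical CV-screening outcomes (1 = shortlisted, 0 = rejected)
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # unprivileged group
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged group

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Potential bias: ratio falls below the four-fifths threshold")
```

A real toolkit adds many more metrics and mitigation algorithms, but the underlying checks are of this shape: compute outcome statistics per protected group and compare them.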
AI Guidance at Work
Here is some guidance for integrating AI in the workplace:
- Prioritising Transparency and Fairness
Ensure fairness and transparency in AI systems by setting evaluation criteria, scrutinising vendors, using explainable AI, communicating where AI is used, conducting bias assessments, forming an ethics committee, and providing AI ethics training for employees. Upholding these ethical principles throughout implementation helps prevent failures in this domain.
- Diversity in AI Development
Promote diversity by broadening talent sourcing, crafting inclusive job descriptions, and implementing blind recruitment. Cultivate an inclusive workplace culture and offer growth opportunities for underrepresented employees. Set and monitor Diversity, Equity, and Inclusion (DEI) goals: diverse teams are more likely to build inclusive software.
- Routine Audit of AI Systems
Regularly auditing AI systems is crucial to keeping them fair and ethical. Scheduled audits track potential biases and refine AI implementation. Key steps include defining performance metrics, monitoring outputs, reviewing training data, engaging external auditors, establishing a feedback loop, and updating AI systems based on findings. This oversight helps ensure that AI tools align with your business values and effectively minimise risks.
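The audit steps above can be sketched as a small routine: define metrics, run them over a batch of model outputs, and collect findings for the feedback loop. Everything here (record format, metric choices, thresholds) is a hypothetical illustration of the pattern, not a prescribed implementation.

```python
# Hypothetical sketch of a scheduled AI audit: evaluate a batch of
# (group, predicted, actual) records against defined metrics and
# return findings for the feedback loop. Thresholds are illustrative.

def audit(records, accuracy_floor=0.8, rate_gap_ceiling=0.2):
    findings = []

    # Metric 1: overall accuracy of predictions against actual outcomes
    correct = sum(1 for _, pred, actual in records if pred == actual)
    accuracy = correct / len(records)
    if accuracy < accuracy_floor:
        findings.append(f"Accuracy {accuracy:.2f} below floor {accuracy_floor}")

    # Metric 2: gap in favourable-prediction rates between groups
    by_group = {}
    for group, pred, _ in records:
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    if gap > rate_gap_ceiling:
        findings.append(f"Selection-rate gap {gap:.2f} exceeds {rate_gap_ceiling}")

    return findings

# Hypothetical batch: (group, model prediction, actual outcome)
batch = [
    ("A", 1, 1), ("A", 0, 0), ("A", 0, 1), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0),
]
for finding in audit(batch):
    print(finding)  # flags both low accuracy and a large rate gap
```

In practice each finding would feed into the review and update steps described above, and the metric definitions would be agreed with stakeholders before the first audit runs.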
- Establish Ethical AI Policies
Establishing ethical AI policies is essential. Structure these guidelines and enforce them to ensure responsible AI use. Conduct risk assessments covering legal, ethical, and social aspects; refer to industry frameworks; involve stakeholders; define boundaries for AI usage; prioritise transparency and accountability; communicate the policies; and review them regularly. With clear policies in place, ethical AI implementation throughout the organisation becomes achievable and sustainable.