- Only 1.8% of the 45,000 individuals analyzed showed interest in AI ethics
- Consumers believe organizations should be held accountable
- EU AI Act brings multimillion-euro penalties
According to Pluralsight, of the 45,000 people who wanted to learn about artificial intelligence, only 1.8% actively searched for how to adopt AI responsibly.
The study revealed rising interest in generative AI, machine learning, and AI for cybersecurity; however, Pluralsight Chief Content Officer Chris Herbert said no significant interest in ethical AI was seen on the platform.
Herbert added: “It’s crucial that learners understand the risks and pitfalls associated with AI so they can adopt it ethically.”
We’re not interested in ethical AI
The report highlights Google DeepMind research showing how AI can be misused, manipulated and exploited. Herbert said we should be focusing on “mitigating its risks and negative consequences while maximizing its positive outcomes.”
Lead Content Strategist Adam Ipsen also noted Accenture research reveals that more than three-quarters (77%) of global consumers believe organizations should be held accountable for AI misuse, highlighting the need for greater awareness.
The reality is that four in five executives and nearly as many IT practitioners (72%) say their organizations often invest in new tech without considering employee training. In a similar vein, only 12% of executives have significant experience working with AI.
The consequences of failing to adopt AI ethically will also carry a financial cost, with the EU AI Act entering into force in August 2024 and enforcement ramping up gradually over the next few years. Maximum fines stand at €35 million or 7% of global turnover.
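To put that penalty ceiling in concrete terms, here is a minimal sketch of how the top-tier fine cap works out. It assumes the Act's "whichever is higher" rule for its most serious violations; the function name and figures below are illustrative, not legal advice.

```python
def max_eu_ai_act_fine(global_turnover_eur: float) -> float:
    """Upper bound on a fine under the EU AI Act's top penalty tier:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher.
    (Illustrative sketch only, not a legal calculation.)"""
    return max(35_000_000.0, 0.07 * global_turnover_eur)

# A firm with EUR 1 billion in global turnover: 7% (EUR 70M) exceeds the EUR 35M floor.
print(max_eu_ai_act_fine(1_000_000_000))
# A smaller firm with EUR 100 million in turnover: the EUR 35M floor applies.
print(max_eu_ai_act_fine(100_000_000))
```

In other words, for large companies the percentage-of-turnover component dominates, which is why the exposure scales with company size rather than stopping at a fixed figure.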
Looking ahead, Ipsen urges businesses not to treat AI as a “one and done” project, but as one that requires constant upskilling. Those who take the time to learn will realize AI’s true benefits rather than see it become a liability that creates legislative and regulatory hurdles.