Kestria recently met with global industry leaders at the AI ethics, regulation and business growth roundtable to discuss where to draw development lines, whether to enforce global or regional rules and if self-regulation and ethical frameworks can preempt legislative action.
Key takeaways:
Ethical concerns and regulation: Participants stressed the need to address the ethical concerns surrounding AI's growth, including job displacement and biased decision-making, through regulation.
Systemic risks and fairness: Industry leaders highlight systemic risks in AI applications, particularly in financial markets, emphasizing the importance of fairness, transparency, and human oversight.
Transparency and accountability: Participants emphasized the importance of transparency in understanding how AI systems work, and of balancing concerns about job losses against the shift toward automating tasks so people can focus on higher-value work, urging both global and regional action.
Balancing innovation and ethics: There's a call for a balanced approach integrating ethical AI design from the start, with collaboration among stakeholders seen as crucial for fostering trust and advancing AI responsibly.
Addressing the concerns behind AI's rapid growth
For Ego Obi from the UK, Head of Operations, Sub-Saharan Africa at Google, concerns about AI's rapid growth are valid. AI's ability to mimic human thinking and learning raises worries about job displacement and unemployment in certain industries. While AI may enhance business efficiency, it may also displace certain jobs; on the other hand, it can create upskilling opportunities. Another factor is the ethical concern surrounding the lack of transparency when AI is used in decision-making, which may produce unintended negative outcomes such as bias. This is often called the 'black box' problem: because of the algorithms involved, it is hard to understand how AI systems make decisions or reach certain outcomes. And if the data fed into a system is biased, the system can perpetuate those biases.
AI applications gather and analyze extensive data, raising privacy concerns as individuals fear the misuse of personal information and breaches of privacy. Regulation may be needed to mitigate these risks. Then there is the fear of losing control, particularly in critical sectors like healthcare and finance, where autonomous decisions made without human oversight could have unintended consequences. Despite its potential for enhancing business efficiency, AI raises significant societal and ethical concerns that demand attention.
For Razvan Cazan from Romania, Head of Finance & Accounting Service Delivery at DB Schenker DBS, the major concern is systemic risk. Beyond individual users and algorithms, systemic interactions between AI applications and humans pose significant challenges. Financial markets, for example, have been transformed by AI algorithms that independently discover investment strategies. Yet this can cause harm, such as AI-driven trading algorithms contributing to market volatility or systemic instability. Addressing these risks requires an ethical framework that considers the broader impact of AI on financial systems.