As the digital age continues to evolve, artificial intelligence (AI) is rapidly becoming a cornerstone of innovation and efficiency. In 2021, the Philippines launched the National Artificial Intelligence Roadmap, which prioritizes inclusive, resilient, and sustainable development. Furthermore, the country’s President believes that AI can uplift the lives of the nation’s citizens, drive enterprise productivity, and increase the Philippine economy’s competitiveness.
According to a recent study from IBM’s Institute for Business Value, three out of four CEOs think that organizations with the most advanced generative AI (GenAI) are at an advantage, with nearly half already utilizing GenAI to guide their strategic decisions. As organizations expand their AI adoption, it is imperative that they adhere to Responsible AI practices, which promote the ethical, transparent, and beneficial use of the technology.
AI adoption in the Philippines
The country’s AI adoption is evident across multiple sectors, each harnessing its capabilities to enhance operations and manage risks.
Financial institutions. Some local universal banks are leveraging AI for risk assessment, fraud detection, and customer service, utilizing solutions provided by tech giants such as Microsoft.
Healthcare. Some healthcare platforms use AI to analyze medical data, improve patient care, and expand telehealth services.
Telecommunications. Local telecom companies employ AI for network optimization, customer service enhancement, and predictive maintenance.
E-commerce/Retail. Online marketplaces and retailers utilize AI-driven recommendations and predictive analytics to refine the customer experience and improve operational efficiency.
AI's impact on risk management
AI is revolutionizing risk management by offering enhanced data analysis, predictive capabilities, real-time risk assessments, and advanced cybersecurity measures. These technologies enable businesses to identify and respond to risks with unprecedented speed and accuracy.
However, the integration of AI into risk management is not without its challenges. Concerns around data privacy, algorithmic bias and fairness, transparency, and regulatory compliance must be addressed to ensure the responsible use of AI.
Data privacy and security. AI systems rely on large volumes of data, creating a risk that sensitive customer or business information could be exposed, particularly if appropriate cybersecurity measures are not in place.
Algorithmic bias and fairness. AI systems are only as good as the data they're trained on. If the data is inaccurate, incomplete, or biased, it can lead to unreliable or discriminatory decisions.
Lack of transparency. Complex AI models may lack transparency, making it challenging for stakeholders to understand how decisions are made. If the reasoning behind an AI-driven decision cannot be explained, legal and ethical issues can arise.
Regulatory compliance. The legal environment for AI is complex, fluid, and still developing. Companies can face risks relating to non-compliance with data protection regulations and other industry-specific laws.
Navigating AI risks with responsible practices
Responsible AI covers transparency, fairness, accountability, ethical use, privacy protection, reliability, safety, sustainability, inclusivity, and governance.
To integrate Responsible AI into risk management, companies can adopt the following best practices:
Ethical framework development. Create a comprehensive ethical framework that aligns with regulatory standards and industry-specific best practices.
Data governance and privacy protection. Implement data governance practices to ensure data privacy and transparency in AI models.
Transparency and explainability. Make AI outputs understandable and provide justifications for AI-generated decisions.
Bias detection and mitigation. Conduct thorough bias assessments to identify and mitigate biases in AI models; a simplified sketch of one such check appears after this list.
Human-AI collaboration. Augment human expertise with AI, promoting collaboration through accessible interfaces like visualizations and interactive dashboards.
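To make the bias assessment practice concrete, the sketch below compares a model's approval rates across groups defined by a sensitive attribute. It is a minimal illustration only: the group names, decision data, and the 0.2 threshold are assumptions for demonstration, not figures from any bank, platform, or regulator cited above.

```python
# Simplified bias check: compare a model's approval rates across groups
# defined by a sensitive attribute. All data here is hypothetical.
from collections import defaultdict

# Hypothetical (group, approved) decision records produced by an AI model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

# Approval rate per group and the largest gap between groups,
# a simple proxy for the demographic parity difference.
rates = {g: approvals[g] / totals[g] for g in totals}
gap = max(rates.values()) - min(rates.values())

print("Approval rates by group:", {g: round(r, 2) for g, r in rates.items()})
print("Parity gap:", round(gap, 2))

# An agreed-upon threshold (0.2 here is an illustrative assumption) flags
# models that warrant closer review of their training data and features.
if gap > 0.2:
    print("Potential bias detected: review the model and its data.")
```

In practice, such checks would run on real model outputs and be paired with mitigation steps such as rebalancing training data or retraining the model.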
Examples of Responsible AI in action
Banks. Major local banks are incorporating AI in risk management, with a focus on fraud detection. Responsible AI usage involves stringent data protection and privacy measures.
Telecommunications. Local providers use AI to manage infrastructure risks and predict outages. Ensuring responsible AI usage means preventing wrongful service denials.
E-commerce. Some platforms employ AI for product recommendations, with a responsibility to avoid discriminatory biases.
Health tech. Certain local companies use AI for disease diagnosis, requiring the protection of sensitive health information.
The trajectory of Responsible AI in the Philippines
The future of Responsible AI in the Philippines includes broader AI adoption across sectors, enhanced regulations, and workforce upskilling, among other developments. With the Philippines set to propose the creation of a Southeast Asian AI regulatory framework to ASEAN in 2026, Responsible AI could become a standard in business operations.
As AI becomes more pervasive in the country’s business landscape, its impact on society will be profound, shaping the future of work, influencing broader socio-economic development, and driving positive change. It is therefore imperative for organizations to embrace Responsible AI principles in risk management and collaborate with stakeholders to navigate the opportunities and challenges presented by local AI-driven innovations.
Christiane Joymiel C. Say-Mendoza and Joseph Ian M. Canlas are Business Consulting Partners of SGV & Co.
This article is for general information only and is not a substitute for professional advice where the facts and circumstances warrant. The views and opinions expressed above are those of the authors and do not necessarily represent the views of SGV & Co.