
AI and Privacy 2024 to 2025: Embracing the Future of Global Legal Developments

Published 04/22/2025


Written by Aashita Jain, Informatica.

 

We are ushering in an exciting new era where Data Privacy and Artificial Intelligence (AI) innovation move beyond guidelines to become powerful catalysts for change. AI's explosive market growth and cutting-edge innovations are revolutionizing business operations, and these innovations are predicted to drive the AI market beyond $3 trillion by 2034.[1] Businesses use AI technologies to streamline operations by automating routine tasks, providing data-driven insights, and enhancing decision-making. By embracing AI, companies can achieve operational efficiency, reduce costs, improve services, and foster innovation across industries. The transformation is facilitated through rapid prototyping, testing, and predictive analytics, ultimately redefining product development cycles.

 

Navigating the Landscape

The explosive growth of AI necessitates careful navigation of the evolving legal and privacy landscape. 2024 marked a pivotal moment in global regulation, as transformative legislation concerning privacy, artificial intelligence (AI), and cybersecurity began a significant overhaul of the compliance landscape. Notably, the introduction of the European Union AI Act (EU AI Act), the revision of privacy laws, and the ongoing expansion of state-level AI and privacy regulations within the United States amounted to a revolution, presenting organizations worldwide with many changes to navigate.

As we advance into 2025, this momentum shows no signs of abating. Four US states implemented new privacy laws effective January 1, 2025, followed by New Jersey's law taking effect on January 15. In the European Union, the Digital Operational Resilience Act (DORA) came into effect for financial services entities on January 17, 2025. Then, on February 2, the EU AI Act's provisions concerning prohibited AI practices took effect, establishing new benchmarks for the ethical use of AI technologies.

This new era marks a pivotal time for rewriting the regulatory playbook for organizations worldwide.

 

The AI-Privacy Conundrum

Today's AI represents an advanced version of earlier technologies, with origins dating back to the 1950s. Historical trends have resurfaced, accompanied by familiar challenges at a greater scale that require careful consideration. In his article Artificial Intelligence and Privacy, Daniel J. Solove argues in depth against AI exceptionalism and explains the privacy issues that have lingered since AI's origin. He aptly warns that a focus on AI exceptionalism deflects attention from residual privacy challenges: "treating AI as so different and special that we fail to see how the privacy problems with AI are the same as existing privacy problems, just enhanced. AI represents a future for privacy that has been anticipated for a long time; AI starkly highlights the deep-rooted flaws and inadequacies in current privacy laws, bringing these issues to the forefront."[2]

AI systems thrive on massive data sets for training and improving performance, often involving personal and sensitive information. Increased reliance on AI raises concerns about data misuse that crosses ethical boundaries and the extent and purpose of data collection. Additionally, AI models, especially deep learning models with opaque algorithms, are often "black boxes," making it difficult to explain how they make decisions. AI algorithms can unintentionally perpetuate biases present in training data, amplifying privacy concerns.

Growing user awareness about data privacy leads to higher expectations for transparency and accountability in how data is collected, stored, and used. Additionally, with the rise of cyber-attacks and data breaches, businesses face increasing pressure to prioritize ethical data practices that demonstrate accountability, fairness, and compliance while still enabling innovation. With emerging technologies such as AI in ubiquitous use, the key question of the hour is how businesses can responsibly harness innovation while meeting privacy expectations. A privacy-first approach is no longer optional, and staying ahead of this dynamic interplay can create a competitive advantage for businesses.

 

Rising Regulatory Landscape

The global privacy regulatory landscape has experienced a profound transformation, anchored by cornerstones like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the Personal Information Protection Law (PIPL). These are complemented by frameworks such as the ISO 27001 and 27701 standards and the NIST Privacy Framework, setting the stage and guiding businesses on effectively handling personal data. Yet these frameworks stop at traditional data practices, leaving critical gaps in addressing the unique challenges AI poses. As AI reshapes industries, 2024 and 2025 promise a wave of global legal developments that will critically influence the interplay between innovation and privacy and shape the foundation of their co-existence. Business leaders are now navigating uncharted territory where they must go beyond compliance by aligning with emerging frameworks to ensure accountability and drive innovation. This approach is not solely about regulatory preparedness; it presents an opportunity to build trust and lead in an AI-powered future.

The privacy landscape in the United States (US) is fragmented and driven by state-level laws such as California's CCPA (as amended by the CPRA), which mandates greater transparency in AI-powered profiling and decision-making. Other states, including Colorado and Virginia, have established their own privacy laws, introducing additional obligations for businesses processing consumer data and creating an environment that demands adaptability. While federal initiatives remain in flux, frameworks like NIST's AI Risk Management Framework (AI RMF) continue to guide organizations in identifying and mitigating risks. What began as a patchwork is now steering discussions toward cohesive standards, with legislation expected to focus on sector-specific areas such as healthcare and autonomous vehicles. Businesses must prepare for these fragmented AI regulations by adopting flexible, scalable governance frameworks while ensuring organizational readiness.

The European Union (EU) continues to assert its position as a global leader in privacy and AI regulation, with GDPR providing a strong foundation and the newly effective EU AI Act setting a risk-based framework for AI governance. The AI Act imposes requirements on high-risk AI systems, such as transparency, bias detection, and human oversight. The EU endeavors to balance union-wide regulations with sector-specific initiatives, such as the Digital Operational Resilience Act (DORA), which targets financial institutions and mandates robust data protection and cybersecurity measures. These developments steer businesses towards proactive readiness: implementing AI ethics policies, investing in Privacy-Enhancing Technologies (PETs), and maintaining detailed documentation for AI systems. For multi-jurisdictional enterprises, it is imperative to prioritize cross-border compliance strategies by aligning AI systems with the strictest applicable EU standards, ensuring operational and legal consistency across regions.

In the Asia Pacific (APAC) region, privacy and AI governance reflect a dynamic interplay of innovation and regulation. India's Digital Personal Data Protection Act (DPDPA) imposes robust consent requirements and significant penalties for non-compliance, emphasizing accountability. China's PIPL enforces strict data localization and mandates transparency in algorithmic decision-making, creating complexities for multinational businesses entering its market. Meanwhile, Singapore's updated Model AI Governance Framework focuses on ethical AI practices, offering guidance on achieving transparency and fairness in AI-driven decisions. As APAC continues to refine its regulatory structures, businesses operating in the region will face challenges in 2025 such as data sovereignty requirements and fragmented compliance demands. The key lies in localized compliance strategies supported by centralized governance structures. To prepare for the upcoming changes, businesses should localize data storage where required, adopt region-specific privacy frameworks, and collaborate with legal experts to address jurisdictional differences. Multi-jurisdictional organizations must integrate localized policies into a unified governance strategy, leveraging automation tools to monitor compliance across APAC regions.

As these regulations mature globally, multi-jurisdictional leaders find themselves at a crossroads: adapt or risk falling behind. The next two years are a defining period of growing regulatory complexity. To thrive, businesses must:

1. Adopt Agile Governance Models to Prepare for Fragmentation: A single global AI regulatory framework is unlikely in the near term. Businesses should implement adaptable, modular compliance strategies. Collaboration with local experts and regulators is critical for navigating a diverse compliance landscape.

2. Adopt Privacy-by-Design: Incorporate privacy considerations into every development and deployment stage to stay ahead of evolving regulations.

3. Invest in Privacy-Enhancing Technologies (PETs): Technologies such as differential privacy and federated learning can address cross-border data concerns while fostering compliance.
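To make the differential-privacy idea above concrete, here is a minimal, illustrative sketch of the classic Laplace mechanism applied to a counting query. The function names (`laplace_noise`, `dp_count`) and the epsilon value are hypothetical choices for illustration, not a vetted production library; real deployments should use an audited PET implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Draw a sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5          # uniform on [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon: float = 1.0) -> float:
    """Answer a counting query with epsilon-differential privacy.

    A counting query changes by at most 1 when one record is added or
    removed (sensitivity 1), so Laplace noise with scale 1/epsilon
    masks any individual's contribution.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon values add more noise and give stronger privacy; the released count is useful in aggregate while no single record can be confidently inferred from it.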

 

Emerging Industry Trends in Global AI Governance

As we herald a new era of structured oversight and ethical innovation at unprecedented speed, governance is no longer just about regulation; it is about crafting a shared vision for innovation, ethics, and accountability. The conversation around AI governance has moved beyond theoretical frameworks to tangible global action, signaling a turning point for businesses. Emerging trends focus on risk-based approaches, operational transparency, and integrating ethical considerations into AI systems.

The EU AI Act exemplifies this shift, introducing a tiered risk classification system that demands stricter controls for high-risk applications, such as healthcare and recruitment. Similarly, ISO/IEC 42001 is poised to set global standards for ethical and sustainable AI practices, aligning operational resilience with regulatory compliance. Meanwhile, DORA in the EU redefines cybersecurity expectations for AI-powered financial systems. Across the Atlantic, the NIST AI Risk Management Framework emphasizes a dynamic, adaptable approach to mitigating AI-related risks in evolving landscapes. Initiatives like the EU-US Data Privacy Framework aim to streamline data transfers for businesses leveraging AI across jurisdictions. Emerging bilateral agreements, such as India-US technology collaborations, focus on harmonizing AI standards globally.

ISO/IEC 27701:2019 is expected to be replaced by ISO/IEC FDIS 27701 within the coming months.[3] This update introduces an exciting interplay of privacy and AI governance, emphasizing the management of AI-related privacy concerns through governance structures and controls tailored to AI environments, in better alignment with ISO 42001 guidelines for responsible AI system management.

Integrating updated ISO/IEC 27701 with ISO 27001:2022 and the newly introduced ISO 42001:2023, which focuses on AI management systems, provides a comprehensive framework for addressing AI-related risk. Furthermore, combining these frameworks with the EU AI Act offers a structured approach to ensuring compliance with privacy and AI regulations while effectively managing risks for long-term success.

By aligning these standards and emphasizing interoperability, organizations can:

  • Enhance Data Privacy Governance by implementing robust controls to protect personal data processed by AI systems and ensure compliance with global privacy regulations.
  • Strengthen Information Security by establishing a unified management system that addresses both information security and privacy risks associated with AI technologies.
  • Demonstrate accountability by exhibiting a commitment to ethical AI practices and responsible data handling through adherence to internationally recognized standards, thereby instilling confidence in stakeholders.

 

What's Next for Business Leaders?

As the global governance landscape evolves, businesses must look beyond mere compliance. The future is poised to favor organizations that adeptly manage the intricacies of fragmented regulations while leveraging governance frameworks as tools for trust and innovation.

A strategic playbook may involve the following:

1. Harmonize forthcoming global changes with the existing Governance Model: Conduct a full review of the existing Privacy Governance Model against the emerging frameworks. Build a unified, dynamic compliance matrix by mapping overlapping regulations and regional laws to the strictest applicable requirements, ensuring alignment while anticipating future changes, especially for cross-border operations. Identify gaps against the harmonized framework to ensure forward compatibility. Organizations must develop a custom model tailored to their needs, as no single approach can effectively address the unique needs of all entities.

2. Operationalize Governance Standards with Risk Management Framework: Integrate frameworks like ISMS (ISO 27001) and PIMS (ISO 27701) to implement risk-based governance models. Use ISO/IEC 42001 and the EU AI Act to create risk tiers for AI applications, aligning oversight with the impact level. Leverage the risk-based governance models to mitigate the gaps identified in the unified dynamic compliance matrix. Consider fragmenting risk and accountability models at the enterprise and application/system levels where possible to assign appropriate mitigating controls and data classification. Partner with accredited auditors to validate your alignment with these frameworks, boosting credibility with stakeholders.

3. Prepare for Regional Adaptations: Use local consultants and legal and technical experts to verify your regional strategies and avoid unintended breaches of localized requirements.

4. Technology as a driver of compliance: Invest in Privacy-Enhancing Technologies (PETs), such as federated learning and differential privacy, as mitigating controls that ensure compliance and drive innovation. Deploy AI explainability tools to interpret AI models' decisions and maintain a documentary audit trail, which is critical for meeting transparency and explainability requirements under frameworks like the EU AI Act and for demonstrating accountability. Test high-risk applications in controlled environments to preempt regulatory scrutiny while refining governance models.

5. Organizational Alignment for Execution: Continue to invest in leadership support to ensure commitment to implementation and maintenance; such support is crucial for allocating resources, defining roles and responsibilities, and driving the implementation process. Establish cross-functional governance committees combining legal, technical, and ethical expertise. Invest in employee training on these frameworks, making compliance an organization-wide responsibility. Monitor adherence to global standards continuously and in real time to maintain forward compatibility.
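The "strictest applicable requirement" logic behind the unified compliance matrix in step 1 can be sketched as a simple data structure. The regulation names, requirement categories, and obligation levels below are purely illustrative placeholders, not legal guidance; a real matrix would be built with counsel for each jurisdiction.

```python
# Hypothetical compliance matrix: obligation level per requirement,
# per regulation (3 = strictest). All entries are illustrative.
REQUIREMENTS = {
    "consent": {"GDPR": 3, "CCPA": 2, "DPDPA": 3},
    "data_localization": {"PIPL": 3, "GDPR": 1},
    "algorithmic_transparency": {"EU_AI_Act": 3, "CCPA": 2},
}

def strictest_baseline(requirements: dict, applicable: set) -> dict:
    """For each requirement, keep the highest obligation level among
    the regulations that actually apply to this organization, yielding
    a single baseline to design controls against."""
    baseline = {}
    for requirement, by_regulation in requirements.items():
        levels = [level for reg, level in by_regulation.items()
                  if reg in applicable]
        if levels:
            baseline[requirement] = max(levels)
    return baseline
```

Designing controls once, against this strictest baseline, is what allows a multi-jurisdictional organization to stay consistent across regions instead of maintaining a separate control set per law.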

The intersection of AI and privacy is no longer a mere regulatory requirement; it has evolved into an organization's strategic imperative. As businesses confront the complexities of dynamic global frameworks, their capacity to align innovation with governance will delineate industry leaders. Beyond 2025, organizational success will increasingly belong to those enterprises that perceive governance not as a barrier but as a catalyst for growth. By transforming regulatory challenges into opportunities, they will invest in sustainable innovation by building trust, credibility, and resilience in an increasingly fragmented landscape. Ultimately, the focus should not solely be on what you build but on how you build it for long-term success.

 

[1] Grand View Research. Artificial Intelligence Market Size, Share & Trends Analysis Report by Solution, By Technology (Deep Learning, Machine Learning, NLP, Machine Vision, Generative AI), By Function, By End-use, By Region, And Segment Forecasts, 2024–2030.

[2] Solove, Daniel J., Artificial Intelligence and Privacy (February 1, 2024). 77 Florida Law Review (forthcoming Jan 2025), GWU Legal Studies Research Paper No. 2024-36, GWU Law School Public Law Research Paper No. 2024-36, Available at SSRN: https://ssrn.com/abstract=4713111 or http://dx.doi.org/10.2139/ssrn.4713111

[3] ISO/IEC 27701:2019 (2022) ISO. Available at: https://www.iso.org/standard/71670.html (Accessed: 21 January 2025).
