Supporting data sovereignty on AI platforms: A primer


Introduction

One of the critical challenges organizations face while building a new AI platform is ensuring that the platform meets the data sovereignty and data protection regulations of each country it operates in. This is more crucial now than ever, as most organizations are in the midst of their cloud adoption journeys.

In this blog, I will explain:

  • What is data sovereignty?
  • Why is data sovereignty essential?
  • What are the business and architectural challenges in meeting data sovereignty regulations?
  • How does Refract, the Insight Designer module of the Fosfor Decision Cloud, help clients build AI platforms in line with data sovereignty requirements?

What is data sovereignty?

Data sovereignty is the principle that a nation or legal jurisdiction has the right to govern data originating within its borders. Under this principle, the government holds the authority to regulate how data created within the country's geographical boundaries is collected, stored, processed, and shared.

For example, a multi-national corporation's operations typically span countries with varying levels of data regulation, some of which even require compute to run locally within the country.

Let’s assume a client has operations across Europe, where all the Schengen countries except one (say, country 3) can share the same infrastructure, storage, and compute. Since country 3 has stricter data sovereignty regulations, data collected, stored, or processed there cannot be accessed from outside country 3.

If an enterprise has operations in three countries, where country 1 and country 2 are Schengen countries governed by the same data regulations, and country 3 has stricter laws mandating that all data storage, compute, and consumption happen locally, the enterprise will have to adopt a distributed architecture. Let’s assume we have three users: users 1 and 2 belong to countries 1 and 2, respectively, while user 3 belongs to country 3. As illustrated in Figure 1 below, the workloads of users 1 and 2 can be pushed to a shared infrastructure, while user 3’s workload must be pushed to a dedicated infrastructure.
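The routing decision described above can be sketched in a few lines of Python. This is a minimal illustration, not product code; the country codes and pool names are hypothetical assumptions:

```python
# Hypothetical sketch: route each user's workload to shared or dedicated
# infrastructure based on the data sovereignty rules of the user's country.
# Country codes and pool names are illustrative, not from any real deployment.

# Countries whose regulations require local storage and compute.
STRICT_SOVEREIGNTY = {"country_3"}

def select_infrastructure(user_country: str) -> str:
    """Return the infrastructure pool a user's workload must run on."""
    if user_country in STRICT_SOVEREIGNTY:
        # Data collected here may not leave the country, so the workload
        # runs on a cluster dedicated to that country.
        return f"dedicated-{user_country}"
    # Countries under a common regulatory regime share one cluster.
    return "shared-eu"

print(select_infrastructure("country_1"))  # shared-eu
print(select_infrastructure("country_3"))  # dedicated-country_3
```

In practice this decision would be made by the platform's scheduler or ingress layer rather than application code, but the mapping from country to infrastructure pool is the essential idea.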


Figure 1: Shared & dedicated infrastructure architecture

Why is data sovereignty essential?

As enterprises generate vast volumes of data via channels such as eCommerce, mobile devices, and social media, they bear a considerable responsibility for safeguarding it. With laws and regulations evolving across nations, data sovereignty ensures that sensitive data, such as personal information or trade secrets, isn’t easily abused by cybercriminals.
Data sovereignty also gives companies willing to comply with local regulations a competitive advantage over their peers. Compliance demonstrates a commitment to protecting customer data, building trust with customers, and gaining an edge over those who disregard data security.

Enterprises must ensure they meet the data sovereignty requirements of the countries they operate in, or they face the risk of heavy penalties and reputational damage.

What are the business and architectural challenges in meeting data sovereignty regulations?

Meeting data sovereignty regulations poses significant challenges for businesses operating in an era where data has become a global currency. These regulations, which require data to be stored and processed within specific geographic boundaries, have far-reaching implications for organizations of all sizes and industries. In an increasingly interconnected world, businesses must navigate a complex web of legal, operational, and compliance issues. This section examines those challenges and the considerations and strategies needed to address them while staying competitive in a data-driven economy.

The following are some of the challenges associated with meeting data sovereignty regulations:

  • Data localization: Regulations often require data to be stored within specific geographic boundaries. This can be costly as it may necessitate setting up local data centers or using cloud providers with data centers in the relevant region.
  • Data management: Managing data in compliance with various regulations can be complex and resource-intensive. Businesses must implement robust data governance, encryption, and access control mechanisms.
  • Compliance costs: Achieving compliance often involves substantial financial investments in technology, legal counsel, and compliance audits, which can strain a company’s budget.
  • Legal and regulatory complexities: Data sovereignty laws and regulations can vary widely from one jurisdiction to another. Understanding and navigating this legal landscape can be daunting, especially for businesses with an international presence.
  • Business disruption: Complying with data sovereignty regulations can lead to disruptions, including service downtime or changes in data processing practices, which may impact customer experience and revenue.
  • Data transfer restrictions: Regulations can limit the cross-border transfer of data, which can hinder global business operations and disrupt supply chains.
  • Data security: Businesses must implement robust security measures to protect data within specific regions, as breaches can result in severe penalties and reputation damage.
  • Vendor selection: Choosing the proper data storage and processing vendors that comply with local regulations can be challenging, as not all cloud service providers may have a presence in every region.
  • Privacy concerns: Meeting data sovereignty requirements often involves addressing privacy concerns and ensuring customer data is handled per local privacy laws.
  • Data portability: Regulations may require businesses to enable data portability, allowing individuals to move their data between service providers, which can be technically challenging.
  • Contractual obligations: Businesses may need to renegotiate contracts with vendors and customers to ensure compliance with data sovereignty laws, which can be time-consuming and costly.
  • Risk management: Companies must develop risk mitigation strategies to address the potential legal and financial risks associated with non-compliance.
  • International expansion challenges: Expanding into new markets means dealing with additional data sovereignty regulations, creating complexities for global expansion strategies.
  • Data residency and backup: Ensuring the data is always accessible and recoverable, even when subjected to local regulations, can be a considerable technical challenge.
  • Monitoring and reporting: Meeting compliance often requires continuous monitoring and reporting on data handling practices, which can be resource-intensive.
  • Employee training: Businesses must ensure that employees are aware of and trained in compliance with data sovereignty regulations, which may require ongoing education programs.

Navigating these challenges is essential for businesses to thrive in a data-driven world while complying with the complex and ever-evolving landscape of data sovereignty regulations.

Here are some architectural approaches that can help companies mitigate these challenges:

Option 1: A separate cluster for each country.

Option 2: A common cluster for a group of countries and a separate cluster for countries with stricter regulations, with a separate domain name and separate metadata.

Option 3: A common cluster for a group of countries and a separate cluster for countries with stricter regulations, with a common domain name and common metadata.

The following are the pros and cons of each approach:

Option 1: A separate cluster for each country

Pros

  • Stricter data isolation for each country, as each country has its own database for managing metadata.

Cons

  • An expensive solution as enterprises need to procure more VMs and clusters.
  • Maintenance overhead as more clusters need to be maintained.
  • No central discoverability of assets across the enterprise.
  • No common domain name, and hence, user experience might be slightly different for users from different countries.

Option 2: A common cluster for a group of countries and a separate cluster for countries with stricter regulations, using a separate domain name and separate metadata

Pros

  • Less expensive than option 1, as less hardware is required.
  • Less maintenance than option 1, as there are fewer clusters.

Cons

  • No central discoverability of assets across the enterprise.
  • No common domain name, and hence, user experience might be slightly different for users from different countries.

Option 3: A common cluster for a group of countries and a separate cluster for countries with stricter regulations, using a common domain name and common metadata

Pros

  • Less expensive than option 1, as less hardware is required.
  • Less maintenance than option 1, as there are fewer clusters.
  • Central discoverability of models and other assets as we maintain common metadata.
  • Common domain name, and hence, the user experience will be the same for all users.

Cons

  • None of significance.

As you can see, option 3 offers the most advantages to enterprises operating across multiple countries with varying data sovereignty regulations.
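Option 3's key property, one metadata store shared by all clusters while each cluster keeps its own data and compute, can be illustrated with a small Python sketch. The class, model names, and cluster names below are hypothetical, chosen only to show the separation between metadata and data:

```python
# Hypothetical sketch of option 3: a single metadata registry shared by all
# clusters, so assets stay discoverable enterprise-wide while each model's
# data and compute remain in its home cluster. All names are illustrative.

from dataclasses import dataclass

@dataclass
class ModelRecord:
    name: str
    cluster: str  # cluster that stores the data and serves the model

class SharedRegistry:
    """Common metadata store; holds records only, never the data itself."""
    def __init__(self):
        self._records = []

    def register(self, name, cluster):
        self._records.append(ModelRecord(name, cluster))

    def discover(self):
        # Central discoverability: every model is listed, regardless of
        # which country's cluster actually hosts it.
        return [r.name for r in self._records]

registry = SharedRegistry()
registry.register("churn-model", cluster="shared-eu")
registry.register("fraud-model", cluster="dedicated-country-3")
print(registry.discover())  # ['churn-model', 'fraud-model']
```

Because the registry stores only pointers to where each asset lives, a search from any country returns the full catalog without moving any regulated data across borders.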

How does Refract help clients build AI platforms in line with data sovereignty requirements?

Refract, the Insight Designer module of the Fosfor Decision Cloud, is an enterprise-grade AI platform that can manage the complete lifecycle of an AI project, from data discovery and data extraction to model deployment and model monitoring.

Since Refract is built using microservices, the platform can be hosted on-premises or on any cloud platform.

The following are some of the key features of Refract:

    • Data extraction: It has a massive collection of connectors and a built-in SDK called Refractio, which can be used to extract data from various data sources.
    • Data profiling: Out-of-the-box data profiling capabilities like completeness, accuracy, basic statistics, missing values, etc.
    • Data preparation (feature engineering): It has 100+ out-of-the-box functions for data preparation and feature engineering.
    • Model development: It supports multiple development environments like JupyterLab, VS Code, Spark, R, Python, etc.
    • Model registration: Built-in SDK for model registration.
    • Model deployment: One-click deployment of models and applications.
    • Model consumption: Models can be consumed via API or a Streamlit application.
    • Model monitoring: Out-of-the-box capabilities for model monitoring.

Why is Refract well-suited for building AI platforms in line with data sovereignty requirements?

Refract has a microservices architecture with common application metadata, so even if multiple instances of the application are running, we can still have a common discoverability feature across the platform.

Refract also leverages options like Azure Front Door, which ensures all the users have a common domain name, even though there might be multiple instances of the application running.

How does the solution work?

Refract uses the architecture shown below (Figure 2) to ensure that workloads for all Schengen countries (as discussed in the earlier example, excluding country 3) are scheduled on the common shared infrastructure, storage, and compute, while country 3’s workloads are scheduled on dedicated infrastructure, storage, and compute.


Figure 2: Shared & dedicated infrastructure architecture that Refract implements.

Note: In the above image, we have referenced Azure as an example, but the solution can be implemented with any other cloud provider as well.

The following is the sequence of events when a user tries to log in:

    1. Whenever a user logs in to the portal, the domain name is the same for users from Country A and all other countries.
    2. The application identifies the origin of the traffic: if the traffic originates from Country A, the workload is pushed onto the nodes sitting in Country A; if it originates from any other country, the workload is pushed to the shared infrastructure.
    3. The Fosfor Decision Cloud maintains application metadata in a common DB, so all models are visible in a central location. If a client needs separate metadata, the Fosfor Decision Cloud’s architecture is flexible enough to maintain separate metadata for individual countries.
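The login sequence above can be sketched as a small dispatch function. This is a simplified illustration under stated assumptions, not the product's actual API; the country codes, node names, and database names are hypothetical:

```python
# Hypothetical sketch of the login flow: one domain name for everyone, with
# the compute backend chosen from the origin of the request, and metadata
# kept common by default or separated per country on request. All names
# here are illustrative assumptions, not product APIs.

DEDICATED_COUNTRIES = {"country_a"}

def dispatch(origin_country: str, separate_metadata: bool = False):
    """Return (compute backend, metadata DB) for an incoming login."""
    if origin_country in DEDICATED_COUNTRIES:
        # Traffic from a strictly regulated country stays on local nodes.
        backend = f"nodes-{origin_country}"
    else:
        # Everyone else shares the common infrastructure.
        backend = "nodes-shared"
    # By default all instances share one metadata DB for central visibility;
    # clients needing isolation can opt into per-country metadata instead.
    metadata_db = (
        f"metadata-{origin_country}" if separate_metadata
        else "metadata-common"
    )
    return backend, metadata_db

print(dispatch("country_a"))        # ('nodes-country_a', 'metadata-common')
print(dispatch("country_b"))        # ('nodes-shared', 'metadata-common')
print(dispatch("country_a", True))  # ('nodes-country_a', 'metadata-country_a')
```

In a real deployment, the geo-detection step would be handled at the edge (for example, by a global entry point such as Azure Front Door), so the application only sees requests already routed to the correct backend.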

Conclusion

In conclusion, the Fosfor Decision Cloud emerges as a pivotal partner for global enterprises navigating the intricate terrain of AI implementation amidst the stringent demands of data sovereignty regulations.

Its commitment to ensuring data compliance without sacrificing a seamless user experience underscores its significance in the AI landscape. By providing centralized asset discoverability, the Fosfor Decision Cloud transcends geographic boundaries and regulatory complexities, uniting disparate assets under a unified platform. Moreover, its adaptable architectural design facilitates the management of separate metadata for countries where strict data sovereignty rules prevail, offering a tailored solution when business requirements necessitate it.

Ultimately, the Fosfor Decision Cloud empowers businesses to concentrate on AI model development and management, alleviating concerns surrounding data sovereignty and thus facilitating the uninterrupted pursuit of innovation and excellence.

Go to fosfor.com to learn more.

Author

Ravikumar S Haligode

Senior Specialist – Data Science, Fosfor

With over 15 years of IT experience, Ravikumar has worked closely with senior stakeholders from business, operations, and system owners to identify opportunities for cost reduction, revenue enhancement, and customer experience using a data-driven approach. He has worked on multiple AI/ML projects, with extensive experience in building and evaluating models, tuning hyperparameters for optimum performance, and retraining models.
