An online retailer always gets users’ explicit consent before sharing customer data with its partners. A navigation app anonymizes activity data before analyzing it for travel trends. A school asks parents to verify their identities before giving out student information.

These are just some examples of how organizations support data privacy, the principle that people should have control of their personal data, including who can see it, who can collect it, and how it can be used.

One cannot overstate the importance of data privacy for businesses today. Far-reaching regulations like Europe’s GDPR levy steep fines on organizations that fail to safeguard sensitive information. Privacy breaches, whether caused by malicious hackers or employee negligence, can destroy a company’s reputation and revenues. Meanwhile, businesses that prioritize information privacy can build trust with consumers and gain an edge over less privacy-conscious competitors. 

Yet many organizations struggle with privacy protections despite the best intentions. Data privacy is more of an art than a science, a matter of balancing legal obligations, user rights, and cybersecurity requirements without stymying the business’s ability to get value from the data it collects. 

An example of data privacy in action

Consider a budgeting app that people use to track spending and other sensitive financial information. When a user signs up, the app displays a privacy notice that clearly explains the data it collects and how it uses that data. The user can accept or reject each use of their data individually. 

For example, they can decline to have their data shared with third parties while allowing the app to generate personalized offers. 
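As a rough illustration, granular consent like this can be modeled as a per-purpose record that every processing step checks before it touches the data. The sketch below is hypothetical Python, not the app’s actual implementation; the purpose names and the ConsentRecord class are assumptions made for the example.

```python
from dataclasses import dataclass, field

# Illustrative purposes a budgeting app might ask consent for (hypothetical names)
PURPOSES = ("personalized_offers", "third_party_sharing", "product_analytics")

@dataclass
class ConsentRecord:
    """Tracks a single user's opt-in decision for each processing purpose."""
    user_id: str
    choices: dict = field(default_factory=lambda: {p: False for p in PURPOSES})

    def grant(self, purpose: str) -> None:
        self.choices[purpose] = True

    def revoke(self, purpose: str) -> None:
        self.choices[purpose] = False

def can_process(record: ConsentRecord, purpose: str) -> bool:
    """Every processing step checks consent before using the data."""
    return record.choices.get(purpose, False)

# Example: the user allows personalized offers but declines third-party sharing
consent = ConsentRecord(user_id="user-123")
consent.grant("personalized_offers")
assert can_process(consent, "personalized_offers")
assert not can_process(consent, "third_party_sharing")
```

Note that every purpose defaults to off, which also reflects the privacy-by-default principle discussed later in this article.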

The app heavily encrypts all user financial data. Only administrators can access customer data on the backend. Even then, the admins can only use the data to help customers troubleshoot account issues, and only with the user’s explicit permission.

This example illustrates three core components of common data privacy frameworks:

  • Complying with regulatory requirements: By letting users granularly control how their data is processed, the app complies with consent rules that are imposed by laws like the California Consumer Privacy Act (CCPA).
  • Deploying privacy protections: The app uses encryption to protect data from cybercriminals and other prying eyes. Even if the data is stolen in a cyberattack, hackers can’t use it.
  • Mitigating privacy risks: The app limits data access to trusted employees who need it for their roles, and employees can access data only when they have a legitimate reason to. These access controls reduce the chances that the data is used for unauthorized or illegal purposes.

Learn how organizations can use IBM Guardium® Data Protection software to monitor data wherever it is and enforce security policies in near real time.

Examples of data privacy laws

Compliance with relevant regulations is the foundation of many data privacy efforts. While data protection laws vary, they generally define the responsibilities of organizations that collect personal data and the rights of the data subjects who own that data.

Learn how IBM OpenPages Data Privacy Management can improve compliance accuracy and reduce audit time.

The General Data Protection Regulation (GDPR)

The GDPR is a European Union privacy regulation that governs how organizations in and outside of Europe handle the personal data of EU residents. In addition to being perhaps the most comprehensive privacy law, it is among the strictest. Penalties for noncompliance can reach up to EUR 20,000,000 or 4% of the organization’s worldwide revenue in the previous year, whichever is higher.

The UK Data Protection Act 2018

The Data Protection Act 2018 is, essentially, the UK’s version of the GDPR. It replaces an earlier data protection law and implements many of the same rights, requirements, and penalties as its EU counterpart. 

The Personal Information Protection and Electronic Documents Act (PIPEDA)

Canada’s PIPEDA governs how private-sector businesses collect and use consumer data. PIPEDA grants data subjects a significant amount of control over their data, but it applies only to data used for commercial purposes. Data used for other purposes, like journalism or research, is exempt.

US data protection laws

Many individual US states have their own data privacy laws. The most prominent of these is the California Consumer Privacy Act (CCPA), which applies to virtually any organization with a website because of the way it defines the act of “doing business in California.” 

The CCPA empowers Californians to prevent the sale of their data and have it deleted at their request, among other rights. Organizations face fines of up to USD 7,500 per violation, and the price tag can add up quickly: if a business sells user data without consent, each record it sells counts as a separate violation. Selling 10,000 records without consent could, in principle, expose the business to as much as USD 75 million in fines.

The US has no broad data privacy regulations at a national level, but it does have some more targeted laws. 

Under the Children’s Online Privacy Protection Act (COPPA), organizations must obtain a parent’s permission before collecting and processing data from anyone under 13. Rules for handling children’s data might become even stricter if the Kids Online Safety Act (KOSA), currently under consideration in the US Senate, becomes law. KOSA would require online services to default to the highest privacy settings for users under 18.

The Health Insurance Portability and Accountability Act (HIPAA) is a federal law that governs how healthcare providers, health insurers, and other businesses safeguard protected health information (PHI).

The Payment Card Industry Data Security Standard (PCI DSS)

The Payment Card Industry Data Security Standard (PCI DSS) is not a law, but a set of standards developed by a consortium of credit card companies, including Visa and American Express. These standards outline how businesses must protect customers’ payment card data.

While the PCI DSS isn’t a legal requirement, credit card companies and financial institutions can fine businesses that fail to comply or even prohibit them from processing payment cards.

Examples of data privacy principles and practices

Privacy compliance is only the beginning. While following the law can help avoid penalties, it may not be enough to fully protect personally identifiable information (PII) and other sensitive data from hackers, misuse, and other privacy threats.

Some common principles and practices organizations use to bolster data privacy include:

Data visibility

For effective data governance, an organization needs to know the types of data it has, where the data resides, and how it is used. 

Some kinds of data, like biometrics and social security numbers, require stronger protections than others. Knowing how data moves through the network helps track usage, detect suspicious activity, and put security measures in the right places. 

Finally, full data visibility makes it easier to comply with data subjects’ requests to access, update, or delete their information. If the organization doesn’t have a complete inventory of data, it might unintentionally leave some user records behind after a deletion request. 

Example

A digital retailer catalogs all the different kinds of customer data it holds, like names, email addresses, and saved payment information. It maps how each type of data moves between systems and devices, who has access to it (including employees and third parties), and how it is used. Finally, the retailer classifies data based on sensitivity levels and applies appropriate controls to each type. The company conducts regular audits to keep the data inventory up to date.
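One simple way to picture this kind of inventory is a catalog of data types mapped to sensitivity levels, with controls derived from the classification. The Python sketch below is illustrative only; the field names, systems, and control names are assumptions, not the retailer’s real catalog.

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4  # for example, payment details or government IDs

# Hypothetical inventory: each data type, where it lives, and its classification
DATA_INVENTORY = [
    {"field": "email_address", "system": "crm", "sensitivity": Sensitivity.CONFIDENTIAL},
    {"field": "saved_payment_card", "system": "billing", "sensitivity": Sensitivity.RESTRICTED},
    {"field": "product_catalog", "system": "storefront", "sensitivity": Sensitivity.PUBLIC},
]

def controls_for(entry: dict) -> list[str]:
    """Map each sensitivity level to the minimum controls applied to it."""
    baseline = ["access_logging"]
    if entry["sensitivity"].value >= Sensitivity.CONFIDENTIAL.value:
        baseline += ["encryption_at_rest", "role_based_access"]
    if entry["sensitivity"] is Sensitivity.RESTRICTED:
        baseline += ["tokenization", "quarterly_access_review"]
    return baseline

for entry in DATA_INVENTORY:
    print(entry["field"], "->", controls_for(entry))
```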

User control

Organizations can limit privacy risks by granting users as much control over data collection and processing as possible. If a business always gets a user’s consent before doing anything with their data, it’s hard for the company to violate anyone’s privacy.

That said, organizations must sometimes process someone’s data without their consent. In those instances, the company should make sure that it has a valid legal reason to do so, like a newspaper reporting on crimes that perpetrators would rather conceal.

Example

A social media site creates a self-service data management portal. Users can download all the data they share with the site, update or delete their data, and decide how the site can process their information.
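In code, such a portal ultimately comes down to a pair of operations: export everything held about a user and erase it on request. The sketch below is a hypothetical, in-memory illustration; a real implementation would have to reach every database, backup, and third-party processor listed in the data inventory.

```python
import json

# Hypothetical in-memory store standing in for the site's databases
USER_DATA = {
    "user-123": {"profile": {"name": "A. Jones"}, "posts": ["hello"], "settings": {"ads": False}},
}

def export_user_data(user_id: str) -> str:
    """Access request: return everything held about the user in a portable format."""
    return json.dumps(USER_DATA.get(user_id, {}), indent=2)

def delete_user_data(user_id: str) -> bool:
    """Deletion request: erase the user's records across all systems in the inventory."""
    return USER_DATA.pop(user_id, None) is not None

print(export_user_data("user-123"))  # the user downloads their data
delete_user_data("user-123")         # the user asks to be forgotten
```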

Data limitation

It can be tempting to cast a wide net, but the more personal data a company collects, the more exposed it is to privacy risks. Instead, organizations can adopt the principle of limitation: identify a specific purpose for data collection and collect the minimum amount of data needed to fulfill that purpose. 

Retention policies should also be limited. The organization should dispose of data as soon as its specific purpose is fulfilled.

Example

A public health agency is investigating the spread of an illness in a particular neighborhood. The agency does not collect any PII from the households it surveys. It records only whether anyone is sick. When the survey is complete and infection rates determined, the agency deletes the data. 
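Purpose-bound retention like this can be enforced mechanically: tag each record with the purpose it was collected for, then purge it once that purpose’s retention window has passed. The following Python sketch is illustrative; the purposes and retention periods are assumptions made for the example.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention rules: purpose -> how long records may be kept
RETENTION_PERIODS = {
    "outbreak_survey": timedelta(days=30),
    "billing": timedelta(days=365 * 7),
}

def is_expired(purpose: str, collected_at: datetime) -> bool:
    """A record becomes disposable once its purpose's retention window has passed."""
    return datetime.now(timezone.utc) - collected_at > RETENTION_PERIODS[purpose]

def purge(records: list[dict]) -> list[dict]:
    """Keep only records still within their retention window."""
    return [r for r in records if not is_expired(r["purpose"], r["collected_at"])]

records = [
    {"purpose": "outbreak_survey", "collected_at": datetime.now(timezone.utc) - timedelta(days=90)},
    {"purpose": "billing", "collected_at": datetime.now(timezone.utc) - timedelta(days=10)},
]
print(len(purge(records)))  # 1: the 90-day-old survey record is dropped
```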

Transparency

Organizations should keep users updated about everything they do with their data, including anything their third-party partners do.

Example

A bank sends annual privacy notices to all of its customers. These notices outline all the data that the bank collects from account holders, how it uses that data for things like regulatory compliance and credit decisions, and how long it retains the data. The bank also alerts account holders to any changes to its privacy policy as soon as they are made.

Access control

Strict access control measures can help prevent unauthorized access and use. Only people who need the data for legitimate reasons should have access to it. Organizations should use multi-factor authentication (MFA) or other strong measures to verify users’ identities before granting access to data. Identity and access management (IAM) solutions can help enforce granular access control policies across the organization.

Example

A technology company uses role-based access control policies to assign access privileges based on employees’ roles. People can access only the data that they need to carry out core job responsibilities, and they can only use it in approved ways. For example, the head of HR can see employee records, but they can’t see customer records. Customer service representatives can see customer accounts, but they can’t see customers’ saved payment data. 
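A deny-by-default, role-based check of this kind can be sketched in a few lines. The roles, resources, and actions below are hypothetical; in practice, organizations would typically delegate this to an IAM platform rather than a hand-rolled mapping.

```python
# Hypothetical role-to-resource permissions (illustrative, not any vendor's IAM model)
ROLE_PERMISSIONS = {
    "hr_manager": {"employee_records": {"read", "update"}},
    "support_rep": {"customer_accounts": {"read"}},  # no access to saved payment data
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(resource, set())

assert is_allowed("hr_manager", "employee_records", "read")
assert not is_allowed("hr_manager", "customer_accounts", "read")
assert not is_allowed("support_rep", "customer_accounts", "delete")
```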

Data security measures

Organizations must use a combination of tools and tactics to protect data at rest, in transit, and in use. 

Example

A healthcare provider encrypts patient data at rest and uses an intrusion detection system to monitor all traffic to the database. It also uses a data loss prevention (DLP) tool to track how data moves and how it is used. If the DLP tool detects illicit activity, such as an employee account moving patient data to an unknown device, it raises an alarm and cuts the connection.
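Encryption at rest can be as simple as encrypting sensitive fields before they are written to storage. The sketch below uses the open source Python cryptography package’s Fernet recipe as one possible approach; the field contents are made up, and a real deployment would pull keys from a key management service rather than generating them inline.

```python
# Requires the open source "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key comes from a key management service, not inline generation
key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_field(value: str) -> bytes:
    """Encrypt a sensitive field before it is written to storage."""
    return cipher.encrypt(value.encode("utf-8"))

def decrypt_field(token: bytes) -> str:
    """Decrypt only when an authorized workflow needs the plaintext."""
    return cipher.decrypt(token).decode("utf-8")

stored = encrypt_field("patient-456: blood type O+")
print(decrypt_field(stored))
```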

Privacy impact assessments

Privacy impact assessments (PIAs) determine how much risk a particular activity poses to user privacy. PIAs identify how data processing might harm user privacy and how to prevent or mitigate those privacy concerns.

Example

A marketing firm conducts a PIA before every new market research project. The firm uses this opportunity to clearly define processing activities and close any data security gaps. This way, the data is only used for a specific purpose and protected at every step. If the firm identifies serious risks it can’t reasonably mitigate, it retools or cancels the research project.

Data privacy by design and by default

Data privacy by design and by default is the philosophy that privacy should be a core component of everything the organization does—every product it builds and every process it follows. The default setting for any system should be the most privacy-friendly one.

Example

When users sign up for a fitness app, the app’s privacy settings automatically default to “don’t share my data with third parties.” Users must change their settings manually to allow the organization to sell their data. 
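In code, privacy by default often amounts to making the most restrictive choice the zero-effort one. The sketch below is a hypothetical settings object for such an app; every optional data use starts switched off and changes only if the user explicitly opts in.

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    """Illustrative defaults: every optional use of data starts switched off."""
    share_with_third_parties: bool = False
    public_profile: bool = False
    personalized_ads: bool = False

# New users get the most privacy-friendly configuration without doing anything
settings = PrivacySettings()
assert not settings.share_with_third_parties

# Sharing happens only if the user later opts in
settings.share_with_third_parties = True
```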

Examples of data privacy violations and risks

Complying with data protection laws and adopting privacy practices can help organizations avoid many of the biggest privacy risks. Still, it is worth surveying some of the most common causes and contributing factors of privacy violations so that companies know what to look out for.

Lack of network visibility

When organizations don’t have complete visibility of their networks, privacy violations can flourish in the gaps. Employees might move sensitive data to unprotected shadow IT assets. They might regularly use personal data without the subject’s permission because supervisors lack the oversight to spot and correct the behavior. Cybercriminals can sneak around the network undetected.

As corporate networks grow more complex—mixing on-premises assets, remote workers, and cloud services—it becomes harder to track data throughout the IT ecosystem. Organizations can use tools like attack surface management solutions and data protection platforms to help streamline the process and secure data wherever it resides.

Learn how IBM data privacy solutions implement key privacy principles like user consent management and comprehensive data governance.

AI and automation

Some regulations set special rules for automated processing. For example, the GDPR gives people the right to contest decisions made through automated data processing.

The rise of generative artificial intelligence can pose even thornier privacy problems. Organizations cannot necessarily control what these platforms do with the data they feed them. Putting customer data into a platform like ChatGPT might help garner audience insights, but the AI may use that data to train its models. If data subjects didn’t consent to have their PII used to train an AI, this constitutes a privacy violation.

Organizations should clearly explain to users how they process their data, including any AI processing, and obtain subjects’ consent. However, even the organization may not know everything the AI does with its data. For that reason, businesses should consider working with AI apps that let them retain the most control over their data. 

Overprovisioned accounts

Stolen accounts are a prime vector for data breaches, according to the IBM Cost of a Data Breach report. Organizations tempt fate when they give users more privileges than they need. The more access permissions that a user has, the more damage a hacker can do by hijacking their account.

Organizations should follow the principle of least privilege. Users should have only the minimum amount of privilege they need to do their jobs. 

Human error

Employees can accidentally violate user privacy if they are unaware of the organization’s policies and compliance requirements. They can also put the company at risk by failing to practice good privacy habits in their personal lives. 

For example, if employees overshare on their personal social media accounts, cybercriminals can use this information to craft convincing spear phishing and business email compromise attacks.

Data sharing

Sharing user data with third parties isn’t automatically a privacy violation, but it can increase the risk. The more people who have access to data, the more avenues there are for hackers, insider threats, or even employee negligence to cause problems.

Moreover, unscrupulous third parties might use a company’s data for their own unauthorized purposes, processing data without subject consent. 

Organizations should ensure that all data-sharing arrangements are governed by legally binding contracts that hold all parties responsible for the proper protection and use of customer data. 

Malicious hackers 

PII is a major target for cybercriminals, who can use it to commit identity theft, steal money, or sell it on the black market. Data security measures like encryption and DLP tools are as much about safeguarding user privacy as they are about protecting the company’s network.

Data privacy fundamentals

Privacy regulations are tightening worldwide, the average organization’s attack surface is expanding, and rapid advancements in AI are changing the way data is consumed and shared. In this environment, an organization’s data privacy strategy can be a preeminent differentiator that strengthens its security posture and sets it apart from the competition.

Take, for instance, technology like encryption and identity and access management (IAM) tools. These solutions can help lessen the financial blow of a successful data breach, saving organizations upwards of USD 572,000 according to the Cost of a Data Breach report. Beyond that, sound data privacy practices can foster trust with consumers and even build brand loyalty.

As data protection becomes ever more vital to business security and success, organizations must count data privacy principles, regulations, and risk mitigation among their top priorities.
