Data and AI drive our world; safety and security are paramount. Absent adequate safeguards, AI exposes organizations to threats that can lead to unintended harm and leave sensitive data accessible to malicious actors. Yet, many organizations overlook the vulnerabilities inherent in using AI tools. Merav Yuravlivker, Chief Learning Officer at Data Society Group, emphasizes the urgency of addressing these risks.
“One of the top priorities in AI literacy is safety and security. It’s easy to put information into a third-party platform and not think about what happens next. That data can be repurposed, reused, or even exploited by others, and we’ve already seen real-world examples of this happening,” she explains.
Notable breaches have exposed internal data, as when hackers gained access to OpenAI's internal messaging systems. Other security failures have compromised the privacy of customer and patient data, as in this year's Kaiser Permanente breach, which exposed the personal information of 13.4 million members.
With 61% of companies reporting a third-party data breach or cybersecurity incident within a single year, today’s digitally connected business landscape is clearly fraught with challenges to data protection. Still, reliance on external platforms is only one of many rising trends that can compromise data safety and security.
Fueled by vast quantities of data from a broad range of sources, AI raises the risk of data misuse—whether deliberate or inadvertent—that imperils organizations and the public alike. Generative AI in particular, with its capacity to produce new content, elevates AI-related risks such as misinformation and identity theft. Seventy-one percent of senior IT leaders share this concern, reporting that generative AI is introducing new data safety and security challenges for their organizations.
When sensitive information is mishandled, the consequences can be severe: financial losses, reputational damage, and legal challenges. The global average cost of a data breach reached a record USD 4.88 million in 2024, a 10% increase over 2023. This financial toll includes lost business and the expenses of responding to breaches. Harder to measure are the costs of eroded trust and unmet ethical standards.
To protect themselves and the public, organizations must develop robust data governance structures that clearly outline the acceptable use of AI tools. Data governance provides the policies and procedural guardrails that help teams handle data safely and responsibly at every phase of the data lifecycle, from collection and access to storage and destruction. These frameworks should also address the critical question of employee accountability.
Responsibility requires accountability, and accountability plays a crucial role in reducing AI-related risks. As Yuravlivker notes, “If people know they’ll be held liable for harm to constituents or clients, they’ll ask more questions. And that’s exactly where we need to be as a society.” Clear accountability mechanisms prompt employees to think critically before acting, reducing the risks associated with mishandling sensitive data.
“Safety isn’t just about having the right tools,” Yuravlivker explains. “It’s about building a culture of responsibility. People need to understand the risks they’re taking with AI and be held accountable for how they use these tools.”
The rapid rise of generative AI use in the workplace compounds the challenge of protecting organizational data. A recent report found that the volume of corporate data employees feed into AI tools grew by 485% between March 2023 and March 2024, and that the share of this data that is sensitive also increased by 10.7% over the same period. Guidelines for the safe and responsible use of AI tools across the organization are among the workplace protocols that must be in place to protect data.
Safety must extend beyond technical measures for organizations to succeed in today’s data-driven environment. It requires education, communication, and a shared understanding of why safety protocols are in place. “We need to move beyond assuming safety and start fostering accountability at every level of AI adoption,” Yuravlivker adds.
As organizations extend AI technologies across their teams and out to the public, they must match that progress with initiatives that raise awareness of potential threats and of the steps individuals in every role can take to mitigate them. An informed and accountable workforce is an organization’s most effective defense against threats to data safety and security.
Where there is accountability, organization-wide knowledge, and effective data governance, responsible AI can flourish. Further, as AI technologies proliferate, it must. Building a culture of safety and security in AI adoption is not optional—it’s a fundamental requirement for sustainable, ethical innovation.