The Mask of Legitimacy

In a world increasingly defined by invisible wars—information warfare, cyber warfare, and the silent crushing of dissent—NSO Group has emerged as a quiet giant. Founded in Israel in 2010, the cyber-intelligence firm rose to global infamy for its flagship spyware, Pegasus, a tool that allows covert access to mobile devices with astonishing ease and precision. Marketed as a weapon of last resort for governments fighting terrorism and organized crime, Pegasus is promoted with the rhetoric of public safety and legitimate statecraft. But behind this mask of legitimacy lies a far more troubling legacy: a digital tool that has enabled repression, censorship, and violence in dozens of countries around the world.

Pegasus is not merely malware—it is espionage software of military-grade sophistication. It can penetrate a phone without any interaction from the user. No click. No download. No warning. Once inside, it silently turns a device into a surveillance terminal, extracting messages, emails, call logs, encrypted chats, and even ambient audio and video. NSO insists that it only sells Pegasus to vetted governments for specific, lawful investigations. But time and again, independent researchers and journalists have uncovered evidence that tells a different story: one of surveillance abused to stifle democracy, crush dissent, and eliminate perceived threats to entrenched power.

Forensic analysis published in 2021 found that the phone of Jamal Khashoggi’s fiancée, Hatice Cengiz, had been infected with Pegasus in the days after his brutal murder inside the Saudi consulate in Istanbul in October 2018—a killing widely believed to have been orchestrated by the Saudi regime. The spyware was also reportedly deployed against Khashoggi’s close associates, fellow dissidents, and journalists connected to his work. Though NSO denies involvement, the incident starkly illustrates the proximity between Pegasus and extrajudicial violence.

Mexico offers another haunting case. Between 2016 and 2017, at least 25 journalists, lawyers, and anti-corruption activists were targeted with Pegasus spyware. Among them was Carmen Aristegui, one of the country’s most respected investigative journalists, who had uncovered high-level corruption implicating then-President Enrique Peña Nieto. Others targeted included the legal team investigating the disappearance of 43 students in Ayotzinapa, a national tragedy still shrouded in unanswered questions and state obstruction. Rather than aiding justice, the spyware was seemingly used to sabotage it.

Cartels, too, enter the equation—not directly as clients of NSO, but as indirect beneficiaries. In some cases, journalists and human rights workers monitoring cartel violence have found themselves under Pegasus surveillance. The targeting of Mexican journalist Cecilio Pineda Birto, who was gunned down shortly after reporting on cartel activity and police collusion, has drawn suspicion due to circumstantial Pegasus-related indicators surrounding his final days. Although direct attribution remains elusive, the climate of fear and vulnerability is palpable.

Even democratic countries are not immune. In Hungary, the administration of Viktor Orbán has been accused of deploying Pegasus to monitor opposition figures, journalists, and legal professionals. In India, the 2021 Pegasus Project investigation revealed that dozens of phones belonging to opposition leaders, election strategists, and independent journalists appeared on a leaked list of potential targets, with forensic analysis confirming infections on several of them. The Modi government has refused to confirm or deny whether it purchased or used the software, but the implications are chilling: spyware meant for national security repurposed to surveil political challengers.

The NSO Group claims it is a vendor of lawful surveillance. It frames its spyware as a scalpel in the war against terror. But the record shows a sledgehammer: indiscriminate, unaccountable, and terrifyingly silent. In an era where dissent is often criminalized and transparency punished, Pegasus has become a preferred instrument not of law enforcement, but of authoritarian consolidation. The story of NSO is not merely a cautionary tale about technology—it is a symptom of global democratic backsliding, a mirror held up to states that no longer trust their citizens enough to leave them unmonitored.

The Mechanics of Pegasus Spyware

The defining characteristic of Pegasus is its invisibility. Unlike conventional malware that relies on user mistakes—clicking malicious links, downloading tainted files—Pegasus requires nothing. It often enters through what cybersecurity experts call “zero-click” exploits, which allow it to compromise a device without the user’s awareness or interaction. These are among the most sophisticated and expensive attack vectors in existence, typically sold for millions on the exploit black market. That NSO Group has repeatedly obtained and deployed such exploits is a testament not only to its engineering prowess, but also to the resources and legal protections it enjoys under Israeli defense export laws.

One of the most notorious attack methods leveraged by Pegasus targeted iPhones through Apple’s iMessage service. In 2021, researchers at Amnesty International and The Citizen Lab uncovered evidence of a zero-click exploit later dubbed “FORCEDENTRY.” It allowed Pegasus to bypass Apple’s BlastDoor security framework—a major internal redesign meant to prevent such attacks—by sending maliciously crafted PDF files disguised as GIFs, exploiting an integer overflow in Apple’s image-rendering code. Apple quickly issued a patch, but the damage was already done. Dozens of journalists and activists had been compromised long before the vulnerability became known to the public.

This wasn’t NSO’s first dance with messaging platforms. In 2019, WhatsApp revealed that Pegasus had exploited a critical flaw in its call handling process. The attack required nothing more than placing a video call to a target; the spyware could be installed even if the call was never answered. The scope was staggering: some 1,400 users across 20 countries were reportedly targeted, prompting WhatsApp’s parent company, Meta (then Facebook), to file a lawsuit against NSO. The case is still working its way through the U.S. legal system, its outcome likely to set a precedent for cross-border accountability in cyberspace.

Yet Pegasus doesn’t stop at app-level vulnerabilities. It also descends into the most arcane and poorly understood layers of mobile device architecture: the baseband firmware. This is the low-level software that governs how a phone communicates with cellular networks. Unlike operating systems or apps, baseband firmware is almost entirely opaque. It’s closed-source, vendor-controlled, and rarely audited. NSO and similar actors have allegedly exploited this blind spot by injecting malicious commands through silent SMS messages or malformed signaling packets, allowing them to track users, intercept calls, or even manipulate network behavior—all without alerting the device’s operating system.

Some of these methods appear to rely on undocumented debugging features such as DIAG mode, originally intended for manufacturers and telecom providers. When manipulated by an attacker, these features become invisible backdoors. Unlike OS-level exploits that might crash an app or leave behind logs, baseband intrusions are notoriously difficult to detect or attribute. For Pegasus, this means a deep foothold that persists across reboots, sometimes even across SIM card swaps, without triggering security alerts or antivirus software.

Pegasus also employs a rotating arsenal of browser and kernel exploits. It has been known to exploit vulnerabilities in Safari’s WebKit engine and in Chrome’s V8 JavaScript engine, gaining remote code execution that enables full device compromise. Kernel-level exploits can grant the spyware root access, allowing it to evade app sandboxing, extract encrypted messages from apps like Signal and WhatsApp, and activate microphones and cameras at will. Once installed, Pegasus runs silently in memory, without creating traditional files on disk. It’s designed to self-destruct if it senses a forensic investigation, leaving behind minimal traces—fragments in RAM, hints in crash logs, or traces in network traffic.
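Those residual traces are exactly what forensic tooling hunts for. The sketch below shows the general shape of log triage for fileless spyware: matching process names in system log entries against a set of indicators. Every indicator name here is a hypothetical placeholder invented for illustration; real investigations draw indicators from published threat-intelligence reports.

```python
import re

# HYPOTHETICAL indicators of compromise (IoCs) for illustration only;
# in practice these come from published threat-intelligence feeds.
SUSPICIOUS_PROCESSES = {"examplepayloadd", "fakebridged"}

# Matches a simplified syslog-style line: "date time process[pid]: message"
LOG_LINE = re.compile(r"^\S+ \S+ (?P<proc>\S+)\[\d+\]: (?P<msg>.*)$")

def triage(log_lines):
    """Return log entries whose process name matches a known indicator."""
    hits = []
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m and m.group("proc") in SUSPICIOUS_PROCESSES:
            hits.append(line)
    return hits

sample = [
    "2023-01-02 10:00:01 springboardd[101]: launch ok",
    "2023-01-02 10:00:05 examplepayloadd[999]: exiting",
]
print(triage(sample))  # flags only the second entry
```

The real difficulty is not the matching but the fragility of the evidence: if the spyware has already scrubbed its log entries, there is simply nothing left to triage.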

The sheer modularity of Pegasus is part of its danger. Its deployment can be tailored to the target: a specific exploit chain for Android, a different one for iOS; a minimal payload for stealth, or a more aggressive one for complete data exfiltration. It can be updated remotely, configured to operate only during certain hours, or instructed to delete itself after a fixed period. It is, in effect, a surveillance platform, not just a tool—a professional service with customer support and backend infrastructure.
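Time-gated operation of the kind described above reduces, at runtime, to a simple configuration check. The sketch below is a generic illustration of that pattern (an operating-hours window test of the sort found in any scheduled software), not actual NSO code:

```python
from datetime import time

def within_window(now: time, start: time, end: time) -> bool:
    """True if `now` falls inside a configured operating window.

    Handles windows that wrap past midnight (e.g. 22:00-06:00),
    which is useful when activity should occur only while a
    device's owner is likely asleep.
    """
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end

print(within_window(time(23, 30), time(22, 0), time(6, 0)))  # True
print(within_window(time(12, 0), time(22, 0), time(6, 0)))   # False
```

The point is how little machinery such "features" require: the sophistication lies in the exploit chains, while the operational behavior is ordinary configurable software.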

In its technical execution, Pegasus represents the convergence of espionage tradecraft and consumer technology. It is engineered not merely to observe but to control, not merely to infiltrate but to dominate. It thrives in the shadows of modern computing—on the unpatched vulnerability, the undocumented feature, the unseen layer beneath our daily use. And it does so in the name of security, while placing millions at risk.

The Surveillance-for-Profit Ecosystem

While the NSO Group remains the most publicly reviled player in the spyware industry, it is by no means alone. In the shadowy marketplace of digital surveillance, NSO is simply the most visible apex of a much deeper and more lucrative pyramid. Across Europe, the Middle East, and beyond, a growing constellation of private firms now offers espionage-as-a-service to state actors, often with few questions asked and even fewer consequences. These companies operate under the cover of national security imperatives, but their true clientele often includes the world’s most repressive regimes.

Cytrox, a North Macedonian company now operating under the umbrella of Intellexa, has made headlines for its Predator spyware—a Pegasus alternative that also employs zero-click exploits. Predator was reportedly used to target opposition journalists and political figures in Egypt and Greece. In the Greek scandal, dubbed “Predatorgate” by local media, several investigative reporters and even the head of the intelligence service were implicated. The government denied responsibility, but public confidence in the separation between commercial surveillance and political power suffered a permanent wound.

Then there is Candiru, another Israeli firm, whose portfolio includes tools designed to exploit web browsers and hijack encrypted communications. Though less well-known than NSO, Candiru has reportedly sold its products to multiple authoritarian governments. A 2021 Microsoft report detailed how Candiru’s spyware targeted over 100 victims, including politicians, human rights defenders, and academics, through precisely crafted spear-phishing campaigns. Like NSO, Candiru claims to restrict its sales to vetted customers—yet its victims consistently fall outside the realm of terrorism or organized crime.

Germany’s now-defunct FinFisher is another cautionary tale. Originally touted as a tool for lawful interception, FinFisher software ended up in the hands of regimes in Egypt, Bahrain, and Turkey. Internal leaks and investigative reports revealed that the company’s executives were aware of the human rights abuses linked to their clients. In 2022, FinFisher declared insolvency after being raided by German authorities over alleged illegal exports. It was a rare instance of actual accountability—but only after years of abuse had already occurred.

These firms operate with a striking lack of transparency. They do not participate in bug bounty programs, which are designed to reward ethical vulnerability disclosures and improve public cybersecurity. Instead, they accumulate zero-day vulnerabilities like war chests—stockpiling flaws that no one else knows exist, and selling them to the highest bidder. This hoarding of digital weapons erodes the collective security of the global internet. Every zero-day that remains undisclosed is a silent threat to billions of users.

The ethical vacuum in which these companies thrive is reinforced by the legal ambiguity surrounding cyberweapons. Export controls are weak or inconsistently applied. Oversight is minimal, often perfunctory. In many cases, firms simply operate under national defense exemptions or diplomatic cover, allowing them to skirt accountability even when their products are used for torture, intimidation, or extrajudicial killings.

What unites NSO, Cytrox, Candiru, and their peers is not just their technical sophistication—it is their business model. They are not rogue actors but profit-driven contractors embedded in a larger global architecture of surveillance. Their clients range from corrupt democracies to autocracies in decline, all seeking the same thing: omniscient control over information, over perception, over people.

This is the surveillance-for-profit economy. It exists not in the shadows, but in plain sight—licensed, marketed, funded, and quietly integrated into the workflows of law enforcement agencies and intelligence bureaus. Its tools are crafted by engineers who once might have worked for antivirus firms or tech startups. Its success is measured not in lives saved, but in devices compromised and secrets extracted. And its greatest danger lies in the fact that, for many governments, it has become not a scandal, but a standard operating procedure.

Resistance Through Reinforcement: How Tech Companies Respond

The rise of Pegasus and its kin has forced major technology firms—especially Apple and Google—into an unfamiliar posture of defensive warfare. Traditionally, companies like these have competed on features, performance, and aesthetics. But in the shadow of elite spyware, they are increasingly judged by how well they can withstand nation-grade intrusion. In this adversarial landscape, device manufacturers are no longer mere vendors; they are guardians of last resort for users targeted by billion-dollar surveillance programs.

Apple, long a favored target of Pegasus due to the global ubiquity and security centralization of iOS, responded most directly with the rollout of Lockdown Mode in iOS 16. The feature is not for everyone—it drastically limits functionality by disabling preview links, restricting JavaScript in Safari, and blocking most incoming attachments and service requests. It is, by design, an experience-degrading measure, a digital panic room for high-risk individuals. And yet its very existence acknowledges a disturbing truth: that the threat model for iPhone users now includes professional spyware operators working at the behest of governments.

The impact of Lockdown Mode was more than symbolic. Researchers at The Citizen Lab later reported that it had blocked attempted Pegasus exploits in real time and notified the targeted users. Some previously reliable Pegasus vectors—particularly those involving WebKit rendering or iMessage parsing—became inert when the mode was active. Still, the mode offers protection only when deliberately enabled and works best when combined with behavioral caution. It does not address lower-level threats like baseband or kernel exploits.

Android, with its decentralized update system and broad hardware diversity, faces even greater challenges. Google’s Pixel line receives monthly security updates, and the company has pushed for faster patch adoption among OEMs. But the reality is stark: most Android devices in the wild remain unpatched for months, sometimes years. This patching lag creates fertile ground for attackers, especially those with access to unreported zero-days.
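The scale of that lag is measurable on any given device: Android reports its security patch level as a date string (the `ro.build.version.security_patch` system property), so staleness is a date subtraction. A minimal sketch:

```python
from datetime import date

def patch_age_days(patch_level: str, today: date) -> int:
    """Days elapsed since the device's Android security patch level.

    `patch_level` is the YYYY-MM-DD string Android reports via the
    ro.build.version.security_patch system property.
    """
    y, m, d = map(int, patch_level.split("-"))
    return (today - date(y, m, d)).days

# A device last patched in May 2022, checked in May 2023:
print(patch_age_days("2022-05-01", date(2023, 5, 1)))  # 365
```

Every one of those days is a window in which publicly known vulnerabilities remain exploitable on that device, before even counting the zero-days no patch yet exists for.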

Google’s Project Zero team and its Threat Analysis Group, among the most respected vulnerability research groups in the world, have documented exploit chains used by Pegasus and Predator, including zero-click attacks against mobile targets. In response, Google has hardened key components of Android's architecture—sandboxing, memory safety, and process isolation. Yet these measures, while valuable, are not impervious. Attackers with deep pockets and advanced knowledge continue to pierce even the most well-defended phones.

But perhaps the most alarming blind spot in the mobile security ecosystem is the baseband firmware layer. This is the component that governs cellular radio communication—calls, texts, data handoff, GPS triangulation. Because baseband firmware is tightly controlled by chip manufacturers like Qualcomm and MediaTek, it remains largely closed to public scrutiny. It is rarely updated, almost never audited, and poorly understood even by security professionals. The opacity of baseband code makes it an ideal target for spyware developers. Exploiting baseband flaws allows for deep and persistent access to a device, bypassing operating system defenses entirely.

Security researchers have long warned that the industry’s neglect of baseband security could have catastrophic implications. Some have demonstrated proof-of-concept attacks using malformed signaling messages to crash or hijack mobile modems. Others have reverse-engineered portions of DIAG mode—an undocumented diagnostic protocol embedded in many Android basebands—and shown how it could be exploited for silent tracking or data exfiltration. But without open access to the source code, the full scale of these vulnerabilities remains hidden.

This is where the ecosystem shows its fragility. Apple can issue software patches and introduce features like Lockdown Mode. Google can re-architect Android and harden its kernel. But neither can compel Qualcomm or MediaTek to open their baseband source or adopt a transparent security posture. As long as baseband firmware remains a black box, attackers like NSO will continue to exploit it, armed with state funding and shielded by diplomatic secrecy.

The race is asymmetrical. Tech companies must protect billions of users, across hundreds of device types and software builds, against attackers who need only one unpatched hole. But if anything good has emerged from Pegasus’ deployment, it is a shift in mindset: security is no longer a backend priority. It is now a central front in the battle for human rights, press freedom, and personal autonomy. The devices we carry are no longer just phones. They are potential liabilities, vulnerable battlegrounds in a global war of information. And the burden of defense increasingly falls not on governments—but on the engineers who design the glass rectangles in our hands.

Protecting the Vulnerable: Activists, Journalists, and Civil Society

The people most frequently targeted by Pegasus are not terrorists or hardened criminals, as NSO Group’s marketing would have the public believe. They are often journalists, dissidents, human rights defenders, lawyers, academics, and opposition politicians—individuals whose only crime is telling the truth, organizing for justice, or questioning authority. For them, the threat of spyware is not hypothetical. It is personal, daily, and destabilizing. Pegasus turns the very device they rely on for communication and safety into a silent traitor.

Amnesty International and The Citizen Lab have documented dozens of such cases across multiple continents. In Morocco, the phones of investigative journalists Omar Radi and Maati Monjib were reportedly infected with Pegasus. Radi, who reported on government corruption and land expropriation, was later imprisoned on charges widely seen as politically motivated. In Rwanda, a country with a polished global image but a ruthless record of silencing dissent, Pegasus was allegedly used to target exiled activists and opposition members. Several Rwandan journalists living abroad found traces of the spyware on their devices.

The case of Saudi Arabia is perhaps the most chilling. Even after the brutal murder of Jamal Khashoggi, Saudi dissidents abroad continued to face surveillance and harassment. Lina al-Hathloul, the sister of jailed women’s rights activist Loujain al-Hathloul, discovered that her phone had been hacked with Pegasus. Her only offense had been speaking out on behalf of her sister—a cause that apparently justified state-level espionage. The spyware, in these cases, is not just a tool of surveillance but of psychological warfare, inducing paranoia, silence, and self-censorship.

Legal professionals and human rights lawyers have also been caught in the dragnet. In Mexico, attorneys working on behalf of families of the disappeared were Pegasus targets, including those investigating the Ayotzinapa case. In India, a forensic analysis revealed that activists and lawyers associated with the Bhima Koregaon case had their phones compromised. Some of them were later arrested under sweeping anti-terror laws, with alleged evidence against them surfacing in questionable digital form—raising the specter that surveillance may have been used to plant or manipulate evidence.

In authoritarian contexts, the mere suspicion of surveillance can be corrosive. It can sow distrust within activist circles, isolate leaders, and collapse fragile movements. Secure communication becomes nearly impossible when one cannot determine whether any device is safe. Organizing retreats into physical spaces, paper notes, or whispered conversation—forms of communication that modern movements are often ill-equipped to sustain.

Security experts and digital rights organizations have responded with practical, though limited, advice. High-risk users are encouraged to use “burner” phones—cheap, disposable devices used only briefly and without linking to personal identifiers. Disabling iMessage and FaceTime can reduce some attack surfaces, especially against known Apple exploits. Encrypted messaging apps like Signal remain valuable, but they cannot protect against root-level spyware. In extreme cases, air-gapped laptops and offline workflows are revived—tools more common in spycraft than journalism.

These mitigations, however, are imperfect and exclusionary. Not everyone has the knowledge, time, or resources to manage multiple devices, understand threat models, or engage in rigorous operational security. In many countries, journalists work alone, unsupported by large newsrooms or digital security teams. They are expected to protect their sources, report the truth, and remain reachable, all while avoiding the gaze of a state that may be listening from their own pockets.

Moreover, while companies like Apple and Google can issue patches and bolster defenses, they cannot force transparency from governments. The identity of NSO’s clients remains largely undisclosed, protected by national security claims. Even when evidence of abuse is presented, official denials are swift and consequence-free. The targets have no clear path to recourse; they often don’t even know who ordered their surveillance. There is no global oversight body for spyware, no international court for digital rights violations. The architecture of abuse is transnational, but the victims are left to seek justice in domestic systems often complicit or powerless.

Pegasus has become a symbol not only of technical mastery but of moral failure. It exposes the vulnerability of civil society in the digital age—a world where courage and connection have become liabilities. The question is no longer whether activists are being targeted. It is how long we can continue to call these abuses “exceptions” rather than acknowledge the new norm: that truth-telling itself is being criminalized, and that spyware is its most efficient jailer.

The Quest to Reverse Engineer Pegasus

For all its notoriety, Pegasus remains largely a black box—feared, studied, but only partially understood. While cybersecurity researchers have traced its footprints, dissected its behavior, and mapped some of its attack surfaces, the spyware’s full architecture has never been publicly reverse-engineered. This is no accident. Pegasus is built to vanish, not just to surveil. It’s engineered with layers of self-destruction, obfuscation, and legal shielding that make deep analysis exceptionally difficult. And yet, the global research community continues to probe its structure, driven not only by curiosity, but by necessity: without understanding the weapon, it cannot be countered.

Among the most important players in this effort is The Citizen Lab at the University of Toronto’s Munk School of Global Affairs. Since 2016, the lab has led the forensic charge against Pegasus, identifying infections on the phones of journalists, lawyers, and dissidents across more than forty countries. By analyzing leaked SMS messages, abnormal device behaviors, and encrypted traffic patterns, Citizen Lab has helped uncover how Pegasus infiltrates devices and what it does once inside. Their findings have informed legal challenges, corporate countermeasures, and policy debates worldwide.

Amnesty International’s Security Lab has complemented this work with technical rigor. Their landmark 2021 forensic methodology report documented how traces of Pegasus could be found in system logs, crash reports, and backup metadata on iOS devices. These signatures—while ephemeral—provided enough substance to build a detection toolkit, now used by at-risk users and journalists. Yet the report also underscored how fragile and partial this evidence can be. Pegasus often deletes itself upon execution, scrubbing indicators from memory, or shutting down if it detects a debugging tool or forensic probe.

Other research teams have stepped into the fight. Kaspersky, Lookout, and Google’s Project Zero have published analyses of zero-day exploits linked to NSO infrastructure. In particular, the FORCEDENTRY and KISMET vulnerabilities—used in zero-click attacks against iPhones—have been dissected in detail. These technical post-mortems have proven invaluable in closing specific attack vectors. Apple’s Lockdown Mode and security patches directly resulted from this kind of forensic backtracking. But this is reactive defense. The spyware has already done its work by the time the vulnerabilities are patched.

Why hasn’t Pegasus itself been fully reverse-engineered, as so many other malware tools have? First, there’s the issue of access. Pegasus is not available on underground forums. It doesn’t circulate among casual hackers or script kiddies. It is deployed selectively by state-linked actors through secured infrastructure. Obtaining a live, intact sample is exceedingly rare. Even when captured, the spyware is programmed to self-terminate under suspicious conditions. It rarely leaves persistent files on disk; instead, it executes in memory, extracts data, and cleans itself from the device. This ephemeral nature renders traditional reverse engineering tools—like file disassemblers and static analyzers—largely ineffective.

Then there are the legal barriers. Pegasus is classified under Israeli law as a “military export”—a designation normally reserved for missiles and firearms. Any attempt to publish its internal code, if successful, could be prosecuted as the illegal distribution of a weapons system. Researchers who operate in jurisdictions with security treaties or commercial ties to Israel face a chilling effect, even when working independently. Companies that do engage in analysis must tread carefully to avoid accusations of espionage or export law violations.

Still, not all hope is lost. While the full codebase remains hidden, the behavior of Pegasus has been extensively documented. Forensic tools like MVT (Mobile Verification Toolkit), developed in part by Amnesty, allow for the detection of known infection artifacts. Indicators of Compromise (IoCs) are published in threat intelligence reports and shared among civil society organizations. Even without dissecting the entire engine, the outlines of the machine have become visible.
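Indicator matching of this kind is conceptually simple even when the spyware itself stays opaque. The sketch below mirrors the spirit of network-indicator checks in such toolkits: testing observed hostnames against a set of IoC domains, including their subdomains. The domains here are hypothetical placeholders, not real NSO infrastructure.

```python
# HYPOTHETICAL IoC domains for illustration; real indicators are
# published in threat-intelligence reports and shared among NGOs.
IOC_DOMAINS = {"exfil-example.net", "update-check-example.com"}

def is_ioc_match(hostname: str) -> bool:
    """True if hostname equals an IoC domain or is a subdomain of one."""
    hostname = hostname.lower().rstrip(".")  # normalize case, trailing dot
    return any(
        hostname == ioc or hostname.endswith("." + ioc)
        for ioc in IOC_DOMAINS
    )

observed = ["cdn.apple.com", "a1.exfil-example.net", "exfil-example.net"]
print([h for h in observed if is_ioc_match(h)])
# → ['a1.exfil-example.net', 'exfil-example.net']
```

The hard part is not this comparison but obtaining trustworthy indicators in the first place, and doing so faster than operators rotate their infrastructure.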

And yet, the deeper architecture—the command-and-control protocols, the modular payload structure, the full library of exploits—remains shrouded. NSO Group has mastered not only offensive software design but counter-forensics. Each infection is customized, each target isolated. The spyware adapts, updates, and erases faster than it can be mapped.

The effort to reverse-engineer Pegasus is as much an arms race as a research initiative. While independent researchers chip away at its edges, NSO continues to evolve its capabilities behind closed doors, shielded by contracts, state secrecy, and proprietary advantage. It’s not simply a battle between code and code—it’s a contest of will, resources, and political resolve. The spyware has no face, no signature, no confession. Only symptoms. Only victims. And the unfinished work of making it visible.

Ethical and Legal Quagmire

The existence of Pegasus raises a profound ethical dilemma that stretches beyond code and contracts into the murky territory of sovereignty, accountability, and global digital governance. NSO Group positions itself as a vendor of last resort—one that equips "trusted" governments with tools necessary to combat terror, child exploitation, and organized crime. This framing, rooted in a national security narrative, affords NSO both legal latitude and moral cover. But the company’s actions—and the actions of its clients—reveal a darker calculus: one where the benefits of surveillance are privatized, the harms are distributed, and the mechanisms of abuse are cloaked in diplomatic immunity.

At the heart of this problem lies the Israeli government’s export control regime. Pegasus, like other dual-use technologies, is classified as a military product under Israeli law. Every sale must be approved by the Ministry of Defense. This creates a system in which NSO’s business decisions are inextricably linked to Israel’s foreign policy interests. In practice, this means that NSO is not just a private company—it is a quasi-state actor, advancing soft power under the guise of cybersecurity. The spyware’s sale to countries like Saudi Arabia, the United Arab Emirates, and Hungary occurred alongside significant diplomatic or trade relationships. When abuses surface, the response is often muted or deflected as a matter of national discretion.

Internationally, the legal framework around spyware is fragmented at best and nonexistent at worst. The Wassenaar Arrangement, a multilateral export control regime designed to prevent the proliferation of military-grade technology, includes "intrusion software" in its purview. But its guidelines are vague and nonbinding. States interpret them differently, and enforcement is inconsistent. Some countries, like the United States, have moved unilaterally. In 2021, the U.S. Department of Commerce added NSO Group to its Entity List—a designation typically reserved for arms traffickers and sanctioned regimes—effectively banning American firms from doing business with it. The move was unprecedented, signaling that at least some governments are willing to treat spyware vendors as global threats. But it remains an outlier.

Civil suits have emerged as a secondary battleground. Meta’s lawsuit against NSO over the WhatsApp hack was groundbreaking. Apple followed with its own legal action, seeking to permanently bar NSO from using any of its software or services. These suits do not simply demand damages; they challenge the notion that private companies can act with sovereign-level impunity in cyberspace. NSO has attempted to claim immunity, arguing that its clients are nation-states and thus shielded by international law. So far, courts have been skeptical, but definitive rulings remain elusive. The legal system is struggling to keep pace with the hybrid nature of digital warfare, where the lines between public and private, commercial and strategic, are increasingly blurred.

For victims, the situation is even more bleak. Those who discover they have been surveilled by Pegasus often have no recourse. Their governments may be the perpetrators, or unwilling to confront the governments that are. There are no international mechanisms for redress, no Hague tribunal for digital violations of privacy and free expression. Even documenting the abuse is a herculean task. Victims must rely on small teams of researchers with limited funding and legal exposure. The result is a grotesque inversion: the surveilled must fight uphill battles to prove they’ve been harmed, while the surveillers move on with impunity, protected by trade secrets and shielded by national interests.

The ethical questions extend beyond law. Should any company be allowed to traffic in undisclosed, military-grade exploits? Is it acceptable for a private firm to stockpile vulnerabilities that threaten global digital infrastructure? What responsibilities do democracies have when their allies or trading partners deploy spyware to crush dissent? These are not hypothetical questions. They are the moral terrain of the twenty-first century, where power flows through fiber-optic cables and control is exercised not with guns, but with invisible code.

NSO Group is not alone in testing these boundaries, but it has become the emblem of their erosion. Its very existence challenges the assumption that cyberspace can remain a domain of civilian trust. It shows how the tools of war can be privatized and sold at scale. And it exposes a gaping void in global governance—a place where states are eager to exploit digital power but unwilling to constrain it.

Until that void is addressed, Pegasus will not be the last weapon of its kind. It will simply be the best-known.

A Shadow Industry Unrestrained

Pegasus is not just software. It is a blueprint for a new form of power—one that transcends borders, undermines rights, and leaves no fingerprints. Its deployment is not loud or dramatic. There is no knock at the door, no broken lock, no missing file. Just a phone, silent and glowing, as it siphons off the contents of a life. And behind that glow stands a multibillion-dollar industry built on secrecy, manipulation, and control.

NSO Group has spent over a decade refining this model: a private enterprise that sells military-grade spyware to state clients under the pretense of fighting crime and terror. But what its track record shows is something far more dangerous: Pegasus has been used to monitor political opponents, silence journalists, sabotage investigations, intimidate activists, and possibly even aid in assassinations. Despite public outrage, blacklists, and lawsuits, the firm continues to operate, protected by legal ambiguity and a network of geopolitical interests that view surveillance not as a threat—but as a tool.

What makes Pegasus especially dangerous is not simply its technical sophistication—though that is considerable. It is its convergence with the weaknesses of our political systems. Autocracies use it to suppress resistance. Democracies are tempted to use it to preempt threats. Corporations try to counter it, but can only do so much when the spyware targets the very operating systems their devices run on. And the global public—citizens, voters, writers, whistleblowers—remains caught in the middle, too often unaware that such a weapon exists at all.

Despite scattered progress—Apple’s Lockdown Mode, the U.S. blacklist, investigative reporting—there is still no cohesive international framework to regulate this kind of surveillance. Export controls lag behind technology. Human rights law has not caught up to digital abuse. Technical countermeasures are piecemeal and reactive. Meanwhile, firms like Cytrox, Candiru, and others continue to develop their own Pegasus-like platforms, emboldened by the precedent NSO has set: profit is possible, prosecution is rare.

There are two futures ahead. One is the path we are already on: a future where mobile phones become permanent liabilities for anyone who questions authority, where spyware firms operate like arms dealers, and where the architecture of surveillance is woven into the fabric of everyday life. The other is harder to build but essential to imagine—a world where digital privacy is treated as a fundamental right, where surveillance is transparent and accountable, and where the global community draws a hard line against commodified repression.

To choose the second path, governments must act—not merely with statements, but with legislation, oversight, and sanctions that match the gravity of the threat. Technology companies must continue to harden their platforms while pushing back, loudly and legally, against those who seek to exploit their ecosystems. Civil society must be supported, not merely by NGOs and journalists, but by the resources and protections necessary to survive in a weaponized information age.

Pegasus has taught us what is possible when the digital world is turned against its users. The lesson should not be lost. It is not enough to name and shame. It is not enough to patch and forget. The spyware industry thrives on our apathy, on the gaps between disciplines and jurisdictions. If we are to reclaim the tools of communication as instruments of liberation—not instruments of fear—then we must begin by treating Pegasus not as an anomaly, but as a warning. One that arrived on our phones, and stayed too long.


om tat sat