The dark web is often associated with the “dark side” of the internet, but it’s easy to forget its origin story. As we start to see an increase in phishing kits and malware available for purchase on its illicit marketplaces, I want to provide some education on the dark web’s history and evolution.
To access the dark web, you need to use a browser that encrypts traffic and anonymizes users by routing data through multiple servers. There are a handful of options one can use to accomplish this, but for simplicity’s sake, I’m going to focus on the most prevalent one, The Onion Router (commonly referred to as Tor).
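To illustrate the “onion” idea, here is a toy sketch of layered encryption in Python using the third-party cryptography package (pip install cryptography). The relay keys and message are hypothetical, and this deliberately ignores Tor’s real machinery (circuit negotiation, per-hop key exchange, TLS); it only shows why each relay along the path can peel exactly one layer and never sees the whole picture.

```python
# Toy onion-routing sketch: NOT Tor's actual protocol, just the layering concept.
from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own symmetric key.
relay_keys = [Fernet.generate_key() for _ in range(3)]

def wrap_onion(message: bytes, keys: list[bytes]) -> bytes:
    """Encrypt in layers: the exit relay's layer goes on first,
    the entry relay's layer last (outermost)."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel_onion(ciphertext: bytes, keys: list[bytes]) -> bytes:
    """Each relay peels exactly one layer; only the final hop
    ever recovers the plaintext."""
    for key in keys:
        ciphertext = Fernet(key).decrypt(ciphertext)
    return ciphertext

onion = wrap_onion(b"hello, hidden service", relay_keys)
assert peel_onion(onion, relay_keys) == b"hello, hidden service"
```

The key design point: the entry relay knows who you are but not what you said, and the exit relay knows what was said but not who said it. That separation is what gives onion routing its anonymity properties.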
Tor traces its roots to the U.S. Naval Research Laboratory (NRL), where onion routing was developed in the mid-1990s as part of a project to protect U.S. government communications online. Tor itself, the second-generation implementation of onion routing, saw its first public release in 2002 and aimed to provide secure, anonymous communication for both military and civilian use. Tor was created to enable a more secure, private world.
To this day, browsers like Tor offer benefits that traditional browsers cannot match. They offer the closest form of pure privacy one can find on the web. This is particularly important for individuals living under oppressive regimes where internet access is censored or monitored. Journalists and whistleblowers also use Tor to communicate securely, protecting their sources and sensitive information. Even everyday users can benefit from Tor’s anonymity features to protect their data from online tracking, profiling, and invasive advertising practices.
Information regarding traffic and usage of dark web sites is tough to pinpoint (for obvious reasons). According to the Tor Project, the dark web hosted anywhere from 30,000 to 100,000 active sites at its peak in 2016 and 2017. The dark web is a small fraction of the deep web (i.e., all web content not indexed by search engines); some estimates put it at less than 0.1% of the entire internet. That footprint is wildly disproportionate to the impact this corner of the web has had on the world.
While the dark web can support free speech and secure communication, it also harbors obvious risks. Over the last decade it has transformed from niche forums into complex marketplaces enabling a wide array of illegal activities. This shift is now amplified by AI-driven tools built with malicious intent, such as WormGPT, FraudGPT, and DarkBERT, which the media has recently highlighted.
The evolution of dark web marketplaces shows just how resilient today’s cybercriminals have become. Their continuous adaptation to technological advancements and law enforcement efforts has left many governing bodies questioning the efficacy of their own countermeasures. As generative AI technology advances, the line between innovation and exploitation grows thinner, which leads us to a conversation around ethical AI.
Mainstream AI platforms such as ChatGPT operate under strict ethical and compliance protocols designed to prevent misuse. AI tools on the dark web operate without these safeguards, so you can imagine the many malicious ways threat actors have been using them. Deepfake creation, phishing automation, malware development, counterfeit documentation, social engineering bots—the list goes on.
For example, I asked ChatGPT to make me a phishing kit. It didn’t go so well. Instead, it recommended resources for improving my security protocols to prevent phishing.
Anyone who has seen The Terminator has a pretty good idea of what happens when machines are given full authority, access, and control. Is it an extreme example of a perfect AI storm? Of course. Does the film do a decent job of representing AI left unchecked? I’d say so.
For AI to maintain a useful and practical role in society, there must be guardrails. This brings us to ethical AI: a methodology for creating AI for the betterment of society with security in mind. Balancing innovation with security is the challenge every ethical AI developer faces.
Ethical AI development requires a core set of rules: principles such as transparency, fairness, accountability, privacy, and safety by design.
Let me be very clear: this is not an all-inclusive list of principles to consider when creating AI in all its applications. Despite my Terminator reference, I am writing this as a basis for consumer-focused GenAI and that alone.
Many AI tools continue to elude conventional law enforcement, keeping the legal system locked in a constant cat-and-mouse struggle over their operations. Jurisdictional challenges on the dark web enable the creators of tools like FraudGPT to operate anonymously and continue their sales. Even when identified, perpetrators often evade legal consequences due to international barriers and weak cybercrime legislation. This raises the question for many legal bodies: "Are we truly making a difference?"
Governments and international bodies have tried to attack the problem with strategies such as coordinated marketplace takedowns (the Hydra and Genesis Market operations, for example), updated cybercrime legislation, and emerging AI-specific regulation like the EU AI Act.
Unfortunately, the technology is advancing too quickly for reactive regulation to keep pace. As malicious AI tools grow in adoption, the window for businesses to implement preventive measures is narrowing.
Businesses can protect against AI-driven attacks by adopting advanced security practices, AI-based defenses, and employee training. AI-powered tools for anomaly detection, threat intelligence, and automated incident response can detect and mitigate threats in real time (see my blog on AI fighting AI). A zero-trust approach with least-privilege access, multi-factor authentication, and continuous identity verification limits exposure. Data protection through encryption and regular backups helps defend against ransomware, while monitoring for AI abuses like deepfake impersonation and botnet activity adds another layer of defense.
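To make the anomaly-detection idea concrete, here is a minimal sketch using scikit-learn’s IsolationForest on synthetic login telemetry. The feature set (login hour, failed-attempt count, megabytes transferred) is a hypothetical simplification; a real deployment would draw far richer signals from your logs and tune the contamination threshold carefully.

```python
# Minimal sketch of AI-assisted anomaly detection on login telemetry.
# Assumes scikit-learn is installed; all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior: business-hours logins, few failures, modest transfers.
normal = np.column_stack([
    rng.normal(13, 2, 500),   # login hour (roughly 9-17)
    rng.poisson(1, 500),      # failed login attempts
    rng.normal(50, 15, 500),  # MB transferred per session
])

# Train on the baseline; flag the rarest ~1% of patterns as anomalous.
model = IsolationForest(contamination=0.01, random_state=42).fit(normal)

# A 3 a.m. login with many failures and an exfiltration-sized transfer.
suspicious = np.array([[3, 12, 900]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

In practice this kind of model would sit behind an alerting pipeline that feeds automated incident response, which is where the real-time mitigation described above comes from.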
Employee awareness and strong cybersecurity policies are nothing new in a defense strategy, but they must include education on modern threats. Regular training on tactics like phishing and deepfakes, along with security drills and penetration testing, improves preparedness. Businesses should secure supply chains by vetting partners and consider incorporating AI into their third-party risk management process. Finally, collaboration with industry groups (ISACs, for example), compliance with regulations, and cyber insurance bolster organizational resilience against these evolving threats.
These threats will grow in sophistication, but so can our defenses against them. Stay current on emerging attacks and, if you can, keep an eye on what’s available for purchase on the dark web. It never hurts to do some firsthand investigation.
Without ethical guardrails, AI risks becoming a tool of widespread harm rather than progress. Developers, regulators, and businesses have made real progress in building AI systems that respect privacy, fairness, and human dignity. The dark web’s AI-driven threats will not wait for legislation to catch up, but neither should our defenses.
Businesses must adopt a mindset of resilience and adaptability. By staying informed, investing in cutting-edge AI defenses, and fostering a culture of security awareness, organizations can protect themselves against these dark web GenAI threats.
The battle between innovation and exploitation is far from over, but by leading with responsibility and vigilance, we can shape a future where AI serves as a force for good.