Our Rambling Rant....By Phoenix_Jones

We are dedicated to providing cutting-edge security solutions at no cost to the community, and since our source code is protected speech, we are not going anywhere. Criminals make millions every year selling tools designed to be point-and-disrupt. So we decided to do the same with security tools, except at no cost to the home user. Until now, governments, criminal organizations, and other groups have paid hackers thousands of dollars to buy what are known as 0-day exploits: flaws in the software you use every day that have no fix or patch. Others report them to the manufacturer for money in bounty programs. We use them to create tools that protect YOU and your family in real time from these 0-days, and to advance the right-to-repair movement and the homebrew scene by exploiting these same flaws for good (and fun).

If you are asking yourself why we would do this, it's because we are the hackers who still hold the core belief that, just as anarchy is not about violence and smashing windows, hacking is not about damaging lives, stealing data, or making money. It's about pushing boundaries, exploring, finding new and better ways of doing things, and improving people's lives. And for the longest time, hackers were at the forefront of the tech world. They didn't have to buy their own platforms or pay people to like them. Hackers didn't care how many people followed them. Instead of using their real names, they had monikers like Grandmaster Ratte, Mudge, Sid Vicious... and yes, even Lord British.

They taught us that hacking was more a mentality, like punk, than an adjective describing an action. They taught us that just because we can doesn't mean we should, and that if someone tells us we can't, we will prove them wrong... just so we can say we did it. For us, it's about having fun, a very important part of living, as long as you're not hurting other people. And that's what the original hackers from MIT, Berkeley, and Caltech taught us, dating all the way back to the 1950s, when computers were more mechanical machines that looked nothing like what a computer looks like today, let alone functioned like one.

But everything changed after 9/11. It was important that people like the members of the Cult of the Dead Cow and other groups came to the aid of those fighting a war in a brand-new world, one the government knew nothing about (through its own fault). But as the war dragged on and computers evolved, hackers did not find the balance between going too far and remembering what the word hacker once meant. They forgot what the core of being one was about. While making money is fine, those tools ended up on the phones and computers of dissidents and reporters, and they have led to the deaths of people who sought nothing more than a better life or who tried to report on war crimes. They have become the go-to tool for dictators controlling their populations. And those tools have continued to evolve. With the dawn of a new AI era, surveillance spyware, crypto-jackers, and info stealers are being created faster than ever. And with only a handful of the old guard still active, working on projects such as Veilid and trying to undo the damage that was done, we are losing the war on safety, privacy, and freedom.

While the immediate effect of these tools was not known to many, and it took years of court cases and FOIA requests to reveal just how they were being used by the US government and others, the real damage was already done. Then, when these tools were leaked, instead of helping on the front lines to stop the damage being done, the people who created them slipped into C-suite jobs or government advisor roles, making millions with their true backgrounds completely hidden.

That is why we formed this group. As the old guard moved on without looking back, no one stepped up to take their place, and the next generation was left to learn on their own. And while some of these groups had the right idea, they had the wrong execution. You know the saying: "The road to hell is paved with good intentions."

Besides making tools to help stop the current war online, we also hope to lead by example: to show the current generation that there are better ways than being malicious, such as releasing tools that will protect you from 0-day exploits. Tools that will outsmart the spyware, malware, and ransomware that have infected millions of computers. But also how to still have fun with it.

No, we are not legion. And some of us are getting old, so we might forget. But it's time "hacker" stopped being a bad word again. For a full history of the hacker revolution, there are some great books; I suggest reading Cult of the Dead Cow: How the Original Hacking Supergroup Might Just Save the World by Joseph Menn. (When I was just a little script kiddie myself in the early '90s, I spent countless hours on their BBS, reading and learning everything I could, so I'm a little biased. And a little traumatized.)

This is not some manifesto; it's just a lesson in history and a plea to other hackers. If we don't want history to repeat itself at the dawn of this new computing era we have just entered, we need hackers on the side of... well, chaotic good. If you want to join us, find us (we really are not hiding).

Welcome to darkHal Security Group's website. Our goal is to keep you safe, even when you don't know we are there. Our team consists of some of the smartest minds in America, building smart tools for the intelligent threats of tomorrow. We work to keep you safe every day, and we solve problems no one else can, for no other reason than that we can. As long as there are people selling script-kiddie kits that can steal millions in crypto and run sophisticated, coordinated attacks with the click of a button, we will provide enterprise-grade defense tools that can be run with the click of a button to protect Grandma's retirement (and her cookie recipe!).

The public model we are releasing will be on Hugging Face in the next few weeks. It is trained for defensive use only and will not perform offensive tasks. Due to US export law, it will only be available to US residents via the Hugging Face application process. It is available in multiple model formats; the recommended VRAM is 16 GB, but it just barely loads in 8 GB.
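For anyone planning to try the smaller builds, the sketch below shows one way to load an 8-bit quantized checkpoint with Hugging Face Transformers so it fits in roughly 8 GB of VRAM. The repo id is a placeholder, since the model has not been published yet, and the exact loading steps may differ once it is.

```python
# Minimal sketch: load an 8-bit quantized checkpoint so it fits in ~8 GB of VRAM.
# "darkHal/defensive-model" is a placeholder repo id; the real one is not published yet.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "darkHal/defensive-model"  # hypothetical Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # 8-bit weights via bitsandbytes
    device_map="auto",  # spread layers across available GPU/CPU memory
)

prompt = "List common indicators that a browser extension is stealing credentials."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```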

Our offensive model is only available to US CEHs and entities. Due to possible ITAR export laws affecting this product, you must provide documentation of your role and your CEH credentials and prove you are in compliance with US regulations. The offensive model has been trained and fine-tuned over the last 6 months and is updated once a year. The current version is 2.0.

A limited model is available as 4-bit and 8-bit quantized safetensors and PyTorch checkpoints. The recommended VRAM is 16 GB.

The full model is available in Granite and is the recommended solution. A minimum of 40 GB of VRAM is required; 80 GB is recommended. As this tool is not a defensive tool and is not meant for public use, it is not free.

Update 9/4: New version of darkHal model manager releasing next Friday.

Changelog: Fixed the universal agent. Some people were hitting an error where the agent tried to use the wrong commands to open a shell or PowerShell window. This was caused either by some models telling the agent to skip opening a new shell, or by the agent trying to run the commands through the "agent shell", which fails because that channel is only the pipeline the agent and model use to communicate.
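For readers curious about that failure mode, here is an illustrative sketch (not the actual darkHal code; the function and class names are invented) of the distinction the changelog describes: OS commands get a freshly spawned system shell, while the "agent shell" pipe is reserved for agent-to-model messages.

```python
# Illustrative sketch only: route OS commands to a real shell instead of the internal
# agent<->model pipe ("agent shell"), which is the failure the changelog describes.
import platform
import subprocess

def run_in_system_shell(command: str) -> str:
    """Run a command in a freshly spawned OS shell and return its output."""
    if platform.system() == "Windows":
        argv = ["powershell", "-NoProfile", "-Command", command]
    else:
        argv = ["/bin/sh", "-c", command]
    result = subprocess.run(argv, capture_output=True, text=True, timeout=60)
    return result.stdout or result.stderr

class AgentPipe:
    """Stand-in for the agent<->model communication channel. It only carries
    messages between the model and the agent; it must never execute OS commands."""
    def send(self, message: dict) -> None:
        print("agent-pipe message:", message)

def dispatch(action: dict, pipe: AgentPipe) -> None:
    """Send system-targeted actions to a real shell; everything else stays on the pipe."""
    if action.get("target") == "system":
        print(run_in_system_shell(action["command"]))
    else:
        pipe.send(action)

dispatch({"target": "system", "command": "echo hello"}, AgentPipe())
```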

WIP: Agent Dhal's logic has been updated and is now basically its own JSON-powered mini-model. It still needs tweaking before it works.
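As a purely hypothetical illustration of what "JSON-powered" agent logic can look like (none of these rule names come from darkHal), a rule table loaded from JSON can map model-emitted intents to concrete agent actions without retraining anything:

```python
# Hypothetical sketch of JSON-driven agent logic; the intents and actions are invented.
import json

RULES_JSON = """
{
  "open_shell":  {"target": "system", "command": "powershell -NoProfile"},
  "scan_memory": {"target": "plugin", "plugin": "mem_scan", "args": {"depth": "full"}},
  "ask_user":    {"target": "ui", "prompt": "Confirm this action?"}
}
"""

RULES = json.loads(RULES_JSON)

def resolve(intent: str) -> dict:
    """Map a model-emitted intent string to an executable agent action."""
    # Unknown intents fall back to asking the user rather than guessing.
    return RULES.get(intent, RULES["ask_user"])

print(resolve("scan_memory"))
print(resolve("format_disk"))  # unknown intent -> falls back to ask_user
```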

Improved loading times, added more CLI functions, and fixed random crashes caused by memory mismanagement.

------------------


The Git repo for our pre-release alpha is up: https://github.com/DigiJLinux/DarkHal_2.0. Remember, this tool is only a few weeks old, so don't expect much out of it yet.

Update 9/4: Our 4th Amendment protection tool is set to be released at the end of September. We are currently working out a few issues with Google Play's new ToS for creators.

darkHal 2.0

About darkHal Security Group

"There's a reason you separate military and the police. One fights the enemies of the state, the other serves and protects the people. When the military becomes both, then the enemies of the state tend to become the people". -- Commander Adama, Battlestar Galactia

AI and Liberty: Who Gets to Decide?

By: the darkHal Security Group. Published: 8/26/2025

Artificial intelligence can erode freedoms or strengthen them—the outcome depends on who controls it and how we respond to it. People need to remember: if we don't like the laws being passed or how the technology is being used, we should let our representatives know, and if they don't listen, replace them. Their job is to represent us, not make decisions for us. darkHal is an apolitical group and does not support either party. We do not give a shit about your politics; we only care about the 1's and 0's.

Artificial intelligence is often presented as a tool of progress—streamlining services, analyzing massive datasets, and empowering individuals. Yet, like any technology, AI is neutral in essence, except when it is deliberately trained not to be. Its ethical impact depends not only on how it is deployed, but also on who deploys it. When placed in the hands of governments, corporations, or malicious actors, AI systems can be weaponized against the very constitutional rights designed to protect citizens. Understanding these risks is essential if liberty is to be preserved in an increasingly automated world.

One of the main areas of concern lies in the freedom of speech and expression. AI-driven content moderation and recommendation systems, while designed to maintain civility online and recommend material a person may relate to, have the potential to silence dissent and reinforce messages of distrust, hate, and violence. Algorithms, trained to identify harmful or “unsafe” speech, may suppress valid opinions or target certain groups to take their voice away. Citizens who suspect they are being monitored because their posts have been flagged may begin to self-censor, creating a chilling effect that undermines open debate—the cornerstone of American democracy. At the same time, AI-generated deepfakes and manipulated media make it more difficult for the public to separate fact from fiction, creating an environment where truth can be drowned out by manufactured lies. For example, imagine a local election in which a convincing AI-generated video surfaces online showing a candidate making inflammatory remarks they never actually said. Even if the video is later debunked, the damage is already done: news cycles amplify the clip, and social media spreads it widely to millions in a matter of seconds. Voters’ trust in the candidate is shaken. The false narrative competes with reality, leaving citizens unsure whom to believe and undermining the democratic process itself. This risk, however, can be mitigated through rapid-response verification systems—such as forcing micro-watermarking in manufactured media at the time of creation, embedded in the pixels, or deploying independent fact-checking networks that can authenticate content before it spreads. Public education campaigns that teach citizens how to identify digital manipulation can also help blunt the impact, ensuring that truth has a fighting chance against falsehoods.
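To make the "embedded in the pixels" idea concrete, here is a toy sketch of a least-significant-bit watermark. Production provenance schemes are far more robust and are designed to survive re-encoding; treat this only as an illustration of marking media at creation time.

```python
# Toy pixel-level watermark: hide a short bit pattern in an image's least significant
# bits. Purely illustrative; real provenance watermarks are built to survive edits.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # illustrative 8-bit tag

def embed(pixels: np.ndarray) -> np.ndarray:
    """Write MARK into the least significant bits of the first few pixel values."""
    out = pixels.copy()
    flat = out.ravel()
    flat[: MARK.size] = (flat[: MARK.size] & 0xFE) | MARK
    return out

def detect(pixels: np.ndarray) -> bool:
    """Check whether the first pixels carry MARK in their least significant bits."""
    lsbs = pixels.ravel()[: MARK.size] & 1
    return bool(np.array_equal(lsbs, MARK))

frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in grayscale frame
print(detect(frame), detect(embed(frame)))  # usually: False True
```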

Yet it is worth acknowledging that many of these defenses have been tried before—and they often fall short. Watermarking and authentication tools can be circumvented or stripped away. Fact-checking networks, while valuable, rarely match the speed and reach of viral misinformation. Public education campaigns struggle against the sheer realism of today’s generative tools and against widespread ignorance of AI capabilities. I still hear people saying that AI cannot create applications on its own, even when the evidence is in front of them. We live in a time when a human voice can be convincingly cloned in less than thirty seconds, and a fifteen-minute training sample can reproduce not just words but the subtle cues of emotion and tone that even skilled listeners may find impossible to separate from fabrication. This raises a profound question: if any statement can be manufactured and any artifact explained away, how do we defend truth in a world where authentic voices can be replicated and reshaped at will?

Some argue that forcing “guardrails” onto AI systems is the only way to prevent harm. Yet this collides with a deeper constitutional question that we must also consider: do programmers have a First Amendment right to express themselves through source code? In American courts, the answer is yes. The courts have recognized that computer code is a form of speech protected under the First Amendment. In Bernstein v. U.S. Department of State (1999), the Ninth Circuit held that publishing encryption code was protected expression, striking down government attempts to license and restrict its dissemination. The Sixth Circuit echoed this in Junger v. Daley (2000), reinforcing that code is not just functional—it communicates ideas. Earlier battles, from United States v. Progressive, Inc. (1979), where the government unsuccessfully tried to block publication of an article describing how to build a hydrogen bomb, to the Pentagon Papers case (1971), where the Supreme Court rejected government efforts to stop newspapers from printing a classified history of the Vietnam War, established how rarely the state can justify restraining the publication of technical or sensitive information without a direct threat to national security. These cases highlight the judiciary’s consistent skepticism toward prior restraint, especially when national security is invoked as justification. That said, the current Supreme Court has shown it has no issue favoring the rights of specific groups while abridging the rights of others. It is also no secret that courts have been using AI more and more to research and write rulings, with little understanding of how LLMs work.

That same tension between liberty and security also extends beyond speech into the realm of personal privacy. The right to privacy was enshrined in the Fourth Amendment because the framers of the Bill of Rights did not want the government to become like the British crown, empowered to search, seize, and surveil without restraint. AI has enabled exactly that, with the assistance of companies like Google, Meta, and our cellphone providers, who have given real-time access to our location, search history, and everything else our phones collect to anyone who could pay—including the government—regardless of whether they had a warrant. Not that long ago, that realization would have led to mass protests over surveillance. And it did. A government program known as PRISM was exposed, and it was headline news for months. People were outraged for years. But when the news broke about T-Mobile, Verizon, and AT&T selling real-time information to anyone with money, the only one that got upset was the FTC. Republicans in Congress ranged from annoyed to furious—at the FTC’s “overreaching powers.” Only a few cared about the companies themselves, and for specific reasons. The Democrats demanded that CEOs answer their questions and called a few hearings, but did nothing. Most people do not even know this happened. The outcome? A fine. This was far worse than PRISM, and nobody cared. With the help of AI, that information has been used to create targeted ads and complete profiles of U.S. citizens that include everything from where you go every day to what kind of underwear you buy.

Sadly, too many people fail to understand that by the time you notice your rights have been stripped away—because they have already been used against you—it is too late to do anything. They do not understand that the argument isn’t about whether you have something to hide, or about accepting it with a shrug because “that’s just how it is.” It’s about not letting the government erode our rights. Today’s tools, such as instant-match facial recognition, predictive policing software, and real-time geolocation tracking, allow authorities to monitor citizens on a scale once unimaginable outside East Germany—all without a warrant ever being issued. And until the courts rule in the cellphone-provider case, it all appears legal as long as a private company is doing it. When these systems claim to forecast behavior—predicting who might commit a crime or who might pose a security risk—they open the door to pre-emptive action that undermines the presumption of innocence, and they are being relied on more and more. These are systems prone to failures such as hallucination (“daydreaming”) and agreeing with their user simply because the user wants them to.

Some technologists argue that the only way to defend against such surveillance is to fight algorithms with algorithms. One emerging approach is the use of a tool we are planning on releasing: darkHal’s “Fourth Amendment Protection Plugin,” a system designed not merely to obfuscate, but to actively shield users from AI-driven profiling. Rather than attempting the impossible task of disappearing from the digital landscape, darkHal generates layers of synthetic data—fake GPS coordinates, fabricated browsing histories, fake messages, simulated app usage, and false forensic metadata. By blending authentic activity with thousands of AI-generated content items, it prevents surveillance algorithms from producing reliable conclusions about an individual’s behavior or location.
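As a sketch of the blending idea only (this is not the actual plugin; every name and number here is invented for illustration), synthetic location points and browsing entries can be generated and shuffled in with real events so that no single record can be trusted by a profiler:

```python
# Illustrative sketch of decoy-data blending; not the darkHal plugin, just the concept.
import random
import datetime

DECOY_SITES = ["news.example", "weather.example", "recipes.example", "maps.example"]

def decoy_gps() -> dict:
    """Random but plausible coordinates within the continental US."""
    return {"lat": round(random.uniform(25.0, 49.0), 5),
            "lon": round(random.uniform(-124.0, -67.0), 5)}

def decoy_visit(now: datetime.datetime) -> dict:
    """A fabricated browsing-history entry with a believable recent timestamp."""
    offset = datetime.timedelta(minutes=random.randint(1, 60 * 24))
    return {"url": "https://" + random.choice(DECOY_SITES),
            "time": (now - offset).isoformat(timespec="seconds"),
            **decoy_gps()}

def blend(real_events: list, decoys_per_real: int = 50) -> list:
    """Interleave each real event with many synthetic ones and shuffle the result."""
    now = datetime.datetime.now()
    noise = [decoy_visit(now) for _ in range(len(real_events) * decoys_per_real)]
    mixed = list(real_events) + noise
    random.shuffle(mixed)
    return mixed

real = [{"url": "https://bank.example", "time": "2025-09-04T10:15:00"}]
print(len(blend(real)), "records, only 1 of which is real")
```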

The idea reframes privacy as an act of digital resistance. Instead of passively accepting that AI will map and monitor every action, tools like darkHal inject uncertainty into the system itself. Critics caution that this tactic could complicate legitimate investigations or erode trust in digital records. Yet supporters argue that when the state deploys AI to surveil without warrants or probable cause, citizens may be justified in using AI-driven counter-surveillance tools to defend their constitutional protections. In effect, darkHal embodies a technological assertion of the Fourth Amendment—restoring the principle that people should be secure in their “persons, houses, papers, and effects,” even when those papers now exist as data logs and metadata streams.

These tools then create concerns about due process and equal protection under the law. Courts and law enforcement agencies increasingly turn to algorithmic decision-making to guide bail, sentencing, and parole decisions. Police use AI-driven tools to create reports that have zero oversight, with no way to verify if an error in the facts was due to a malfunctioning AI or a dishonest law enforcement officer. According to Ars Technica, some of these models are trained on biased data, reinforcing the very disparities they are meant to reduce. Their reasoning is often hidden inside opaque “black box” systems, leaving defendants and their attorneys unable to challenge or even understand the basis for adverse rulings. In extreme cases, predictive models raise the specter of “pre-crime” scenarios, where individuals are treated as guilty not for what they have done, but for what a machine predicts they might do.

If the courtroom illustrates how AI can erode individual rights, the public square shows how it can chill collective ones. The right to assemble and associate freely is another area where AI can become a tool of control. Advanced computer vision allows drones and surveillance cameras to identify and track participants at protests, while machine learning applied to metadata can map entire networks of activists. Leaders may be singled out and pressured, while participants may face intimidation simply for exercising their right to gather. In some countries, AI-based “social scoring” systems already penalize individuals for their associations, and similar mechanisms could emerge elsewhere—such as in the U.S.—if left unchecked.

The erosion of assembly rights highlights a broader truth: democracy depends not only on the ability to gather and speak, but also on the ability to participate fully in elections. If the public square is vulnerable to AI manipulation, the ballot box is equally at risk. Even the most fundamental democratic right—the right to vote—is not immune. Generative AI makes it easier than ever to flood social media with targeted disinformation, tailoring falsehoods to specific demographics with surgical precision. Automated campaigns can discourage turnout among targeted groups, spread confusion about polling locations or dates, or erode faith in electoral outcomes altogether. If applied to electronic voting systems themselves, AI could exploit vulnerabilities at a scale that would threaten confidence in the legitimacy of elections.

These risks do not mean that AI is inherently incompatible with constitutional democracy. Rather, they highlight the need for deliberate safeguards, such as equal access. If the police can monitor us without warrants in ways the founding fathers could not even fathom, and clearly would never have approved of, what is to stop them from taking our other rights away simply because the enabling technology did not exist 249 years ago? Transparency laws can give citizens the right to know when AI is being used, how it was trained, and how it arrives at its conclusions. Independent oversight boards and technical audits can ensure accountability in government deployments. Most importantly, humans must retain ultimate judgment in matters of liberty, justice, and political participation. And if citizens are being monitored with these tools, so should law enforcement be, and, when possible, the military. Finally, promoting access and digital literacy among the public—how LLMs are created, how they are used, and how to use them—is essential, so that citizens recognize manipulation when they see it and understand both the power and the limits of these systems.

Yet, if left unchecked, artificial intelligence risks becoming a silent but powerful tool to erode constitutional protections without the end user even realizing it is happening. However, if governed wisely, the same technology can help safeguard rights by exposing corruption, enhancing transparency, and empowering individuals. The real question is not whether AI will shape our constitutional order; it is how we will let it.

