AI isn’t the future. It’s the present.
AI is already writing code, passing bar exams, faking video evidence, and mimicking human voices. And it’s improving fast—faster than most people expected. But while the technology is advancing, the rules around it aren’t.
Right now, there are almost no real limits on what companies can build or release. There’s no requirement to test AI for safety, no law that says humans have to stay in charge, and no system in place to hold anyone accountable if things go wrong.
That’s the gap we’re trying to close.
What’s at risk?
- Autonomous weapons that can make lethal decisions without human input
- AI-generated propaganda used to manipulate elections or public opinion
- Deepfakes that destroy reputations or create false evidence
- Bias in decision-making systems, affecting everything from hiring to criminal justice
- Systems we don’t fully understand making decisions we can’t reverse
Tech isn’t evil
AI doesn’t care what it’s used for. It will reflect the values, and the blind spots, of whoever builds and trains it. When the people building these systems are also the only ones overseeing them, those blind spots go unchecked. That’s why we can’t rely on corporations to self-regulate. The stakes are too high.
We’ve seen what happens
Social media changed the world before most people understood how it worked. By the time the problems showed up (disinformation, surveillance, algorithmic addiction), they were already baked in.
With AI, we still have a chance to act before the damage is done.
This bill isn’t about fear
The Ethical AI Act doesn’t ban innovation. It doesn’t shut down research. It simply says:
If you’re building something that can impact lives at scale, you need to build it carefully. And you need to answer for it.
That’s how every other high-risk industry works—from aviation to medicine to nuclear energy. AI shouldn’t be the exception.