Why It Matters

AI isn’t the future

It’s the present

AI is already writing code, passing bar exams, faking video evidence, and mimicking human voices. And it’s improving fast—faster than most people expected. But while the technology is advancing, the rules around it aren’t.

Right now, there are almost no real limits on what companies can build or release. There’s no requirement to test AI for safety, no law that says humans have to stay in charge, and no system in place to hold anyone accountable if things go wrong.

That’s the gap we’re trying to close.

What’s at risk?

This isn’t just about job loss or automation. It’s bigger than that. We’re talking about AI that fakes video evidence, mimics human voices, and makes decisions that affect lives at scale, with no one accountable when things go wrong.

If that sounds like science fiction, it’s not. It’s already happening in small pieces, and it’s scaling fast.

Tech isn’t evil

But it isn’t neutral either

AI doesn’t care what it’s used for. It will reflect the values—or the blind spots—of whoever builds and trains it. That’s why we can’t rely on corporations to self-regulate. The stakes are too high.

We’ve seen what happens

Social media changed the world before most people understood how it worked. By the time the problems showed up—disinformation, surveillance, algorithmic addiction—it was already baked in.

With AI, we still have a chance to act before the damage is done.

This bill isn’t about fear

It’s about responsibility

The Ethical AI Act doesn’t ban innovation. It doesn’t shut down research. It simply says:

If you’re building something that can impact lives at scale, you need to build it carefully. And you need to answer for it.

That’s how every other high-risk industry works—from aviation to medicine to nuclear energy. AI shouldn’t be the exception.

If we don’t set the rules, someone—or something—else will

That’s why we’re acting now. That’s why this bill matters.