Anyone else worried about AI risk in their SaaS?

I’ve built an ERP SaaS that’s live and being used. Like everyone else, I’ve added a lot of AI features to keep up with competitors: support, analytics, and some decision-related stuff.

It’s working from a features point of view, but honestly, I’m still uneasy about it.

Accuracy is one thing, but I’m more worried about hallucinations, data leaks, compliance, and what happens if the AI gives a wrong answer and a customer acts on it. Right now, we have some guardrails and limited human checks, but it doesn’t feel fully figured out.
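For context on what I mean by "guardrails plus limited human checks": the simplest version is a confidence gate that routes low-confidence or ungrounded answers to a human queue instead of straight to the customer. A rough sketch (all names, fields, and the threshold are made up, not from any specific framework):

```python
# Hypothetical confidence-gated human-in-the-loop routing.
# CONFIDENCE_THRESHOLD and the Answer fields are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case

@dataclass
class Answer:
    text: str
    confidence: float   # model-reported or heuristic score in [0, 1]
    cites_sources: bool # whether the answer is grounded in retrieved docs

def route_answer(ans: Answer) -> str:
    """Return 'auto' to send directly, or 'review' to queue for a human."""
    if ans.confidence < CONFIDENCE_THRESHOLD or not ans.cites_sources:
        return "review"
    return "auto"

print(route_answer(Answer("Q3 revenue was $1.2M", 0.92, True)))      # auto
print(route_answer(Answer("You should drop vendor X", 0.60, False)))  # review
```

The uneasy part is exactly what this sketch glosses over: the confidence score itself can be miscalibrated, which is why it never feels fully figured out.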

For others building AI-powered SaaS:

  • How are you thinking about AI risk?
  • Are investors or enterprise customers pushing you on this?
  • Are you doing human-in-the-loop or just trusting guardrails?

Not trying to scare anyone, just trying to build this responsibly. Curious how others are handling it.

Author: Front_Bodybuilder105