
Proudly not AI-powered

Fletcher Heisler
CEO at Authentik Security Inc

authentik is an open source Identity Provider that unifies your identity needs into a single platform, replacing Okta, Auth0, Ping, and Entra ID. Authentik Security is a public benefit company building on top of the open source project.


We recently updated our list of upcoming Enterprise features to more accurately reflect the requests we’ve been hearing from our customers and community. One of the changes you may notice: we are no longer spending precious cycles on brainstorming ways to inject AI into our product and user experience.

We had briefly considered ways that “AI”, and specifically LLMs, might enhance our platform, including an AI-based risk assessment option that appeared in our previous Enterprise features list. On reflection, we realized we could probably get most of the way to the same outcome with some custom expression templates and a few if statements, which raised the question of whether it was worth the effort to pursue at all. So on our current website, you'll no longer see that mention of AI.

We did not want to push a feature just for the sake of being able to say we are AI-powered.

It seems that nearly every tech startup or product is trying to shoehorn in “AI” lately, whether it makes sense to do so or not. Most of these companies are trying to boost valuation (equity funding of generative AI startups peaked at $21.8B in 2023) or prove to a board or executives that the company has an “AI strategy” (for better or worse, as I’ll get into below).

Fortunately, our board and executives consist of myself and our founding CTO. Only one of us was briefly swayed by the siren song of AI (and it certainly wasn’t Jens).

Real utility versus hype

The sudden and incredible progress we’ve seen over the past few years with LLMs and diffusion models is promising, and in some (rare) cases has been able to deliver tremendous business value. I also have no doubt that there are many genuinely useful applications for AI that we haven’t even conceived of yet.

However, at least at present, the most common reaction from end users when they are offered, nudged, tricked, or sometimes outright forced into using AI-enabled features is: Why?

Let me search, or talk to a human, or write code, rather than squeeze my inputs through a poorly-implemented, poorly-understood, and poorly-tested black box that’s been layered on top of existing features.

Often what actually gets accomplished by adding an LLM on top of an existing product is user frustration, costly overhead, and opaque results.

The current state of AI-powered everything

In benign cases, we’ve seen non-AI companies making a lot of noise about AI integration and adoption in their products, in a manner reminiscent of WeWork positioning itself as a tech company instead of a real estate holding company.

One study found consumers were actually put off by the presence of generative AI in product and service descriptions, which tracks with the notion that such campaigns are targeted more at investors than consumers. In customer support, ill-advised and unwanted AI is being rolled out to great fanfare without adequate guardrails (just ask Air Canada, Chevrolet, or DPD).

Earlier this year, Meta replaced the search bar on Instagram with “Ask Meta AI” with no opt-out, which went down about as well as you would expect. Google has been testing AI overviews in search results for users who have not opted in to the Google Search Labs SGE feature, specifically to seek feedback from these unwilling participants.

We have no desire to inflict AI that nobody asked for on our users.

When considering integrating something like an LLM into self-hosted software (as authentik is), it’s even more important to weigh the cost of adding a multi-gigabyte, memory-hungry model when the same outcome could often be achieved with a few well-crafted if statements.
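To make that comparison concrete, here is a rough Python sketch of the kind of deterministic check we mean. Every signal, threshold, and name in it is invented for illustration; this is not authentik's policy API, just plain if statements doing the job an “AI risk score” is often sold to do.

```python
# Illustrative only: a deterministic risk check built from plain if statements.
# The signals and thresholds below are invented for this sketch; this is not
# authentik's policy API or an actual authentik feature.
from dataclasses import dataclass


@dataclass
class LoginAttempt:
    known_device: bool       # device has completed MFA for this account before
    country_matches: bool    # same country as recent successful logins
    failed_attempts: int     # recent failed attempts for this account
    impossible_travel: bool  # geo-velocity check failed between last two logins


def require_mfa(attempt: LoginAttempt) -> bool:
    """Return True if the login should be stepped up to MFA."""
    if attempt.impossible_travel:
        return True
    if not attempt.known_device and not attempt.country_matches:
        return True
    if attempt.failed_attempts >= 3:
        return True
    return False


# A new device logging in from an unfamiliar country gets stepped up.
print(require_mfa(LoginAttempt(known_device=False, country_matches=False,
                               failed_attempts=0, impossible_travel=False)))  # True
```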

AI is not a panacea

“AI” is also being wheeled out in marketing and press materials without actually explaining how it is being integrated into products or what the tangible benefits will be. Such announcements may temporarily impress investors, but don’t actually benefit customers. Okta AI was announced a year ago, but still appears to consist of a handful of capabilities that are “coming soon” and other vague improvements, often with no obvious connection to AI in any case.

One survey aimed at security executives illustrates how AI is too often lumped together with “the next big things”, such as blockchain technology and quantum computing. We aren't going to try to sell you on any of those three; we are focused on building more immediately useful features for our users.

LLMs are also getting deployed as an “easy button”—a short-term shortcut or crutch for actually providing a good user experience. Why improve the organization of your documentation if you can simply throw the whole thing into a chatbot and return best-guess responses?

To paraphrase Stephen Merity, if the product wouldn’t work without free human labor, AI certainly won’t save it.

So, for now, authentik remains proudly deterministic, efficient, and not AI-powered. Will we use “AI” in the future? Almost certainly. Will such models or methods even be considered AI at that point? Given how goalposts continue to shift over time, hard to say. But we will not be rushing to embed an LLM into our product unless it is clearly the most helpful, efficient, and user-friendly solution to a specific problem.

We’re investing our efforts elsewhere (for now)

We are certainly not averse to integrating cutting-edge technology into authentik when the opportunity makes sense. For instance, we will soon be rolling out support for the Shared Signals Framework, an open standard for securely sharing security events between systems through efficient, push-based webhooks. It’s currently implemented by Apple but likely to become more widely adopted in the coming years.
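For a sense of what a Shared Signals event looks like on the wire, here is a minimal sketch of a Security Event Token (SET, RFC 8417) carrying a CAEP “session revoked” event, which is the kind of payload an SSF transmitter pushes to a receiver. The issuer, audience, subject, and signing key are all placeholders, the field layout follows one draft of the CAEP spec, and this is not authentik's implementation:

```python
# Minimal sketch of a Security Event Token (SET, RFC 8417) carrying a CAEP
# "session revoked" event. All identifiers and the signing key are placeholders;
# field layout follows one draft of the CAEP spec, so check the current spec
# before relying on it.
import time

import jwt  # PyJWT

set_payload = {
    "iss": "https://idp.example.com",        # placeholder transmitter
    "aud": "https://receiver.example.com",   # placeholder receiver
    "iat": int(time.time()),
    "jti": "example-set-id-0001",
    "events": {
        "https://schemas.openid.net/secevent/caep/event-type/session-revoked": {
            "subject": {
                "format": "email",
                "email": "user@example.com",  # placeholder subject
            },
            "event_timestamp": int(time.time()),
        }
    },
}

# Real transmitters sign SETs with their published keys (typically RS256);
# a shared secret with HS256 keeps this sketch self-contained.
signed_set = jwt.encode(set_payload, "not-a-real-key", algorithm="HS256")
print(signed_set)
```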

But our primary mission is to make authentication simple for everyone. And to do so, we need to deliver a product that our community can trust and rely on, not an LLM-powered paperclip. To that end, we are looking forward to providing more of the functionality that our community and customers actually need in order to make their jobs easier and improve their lives.

As always, we want to hear your thoughts on this blog and our “proudly not AI-powered” stance. How do you feel about AI in general, and specifically within the security field and for Identity Providers? Reach out to us on GitHub, Discord, or with an email to [email protected].