r/Hasura


Built a security auditor for Hasura — finds anonymous role with open SELECT, user role missing row filter, and public introspection (active GraphQL probe confirms leaks live)

Spent the past few days shipping the same security auditor pattern for Supabase, then PocketBase, then Appwrite. Today I added Hasura (works for self-hosted Hasura and Nhost projects, since they expose the same metadata API).

It detects the patterns I see most often in production Hasura instances:

  1. anonymous role with open SELECT permission (filter is empty/{} — anyone can dump every row of the table without auth)

  2. anonymous role with INSERT/UPDATE/DELETE permission (almost never intentional outside specific signup endpoints)

  3. user role with SELECT/UPDATE/DELETE but no row-level filter — every signed-up user can touch every row, ignoring ownership. Should usually be { user_id: { _eq: "X-Hasura-User-Id" } }

  4. SELECT permission with all columns (no allowlist) — exposes sensitive columns the role doesn't need

  5. public schema introspection — anyone can map your entire data model without auth

The differentiator vs other scanners is the active probe. After detecting a suspect anonymous SELECT, the auditor sends an actual anonymous GraphQL query against /v1/graphql and reports CONFIRMED with the row count + columns + bytes returned if data comes back. Same for introspection — it sends `{ __schema { queryType { name } } }` and reports whether anonymous can read the schema.

Stack: pure Node.js, no deps, MIT. Three surfaces:

  • CLI/Skill repo: github.com/Perufitlife/nhost-security-skill

  • MCP server (so Claude Code/Cursor/Cline can call it directly): github.com/Perufitlife/nhost-security-mcp

  • Apify actor for the no-install crowd

Free, MIT, runs locally with the admin secret, which is used only for the metadata export and never persisted. HTML report with a fix snippet on every finding.

If you run it on your own production instance and find something interesting (especially patterns I didn't code for), drop a comment. First 5 replies get a free preview audit — I'll send back the top 3 critical findings plus the fix snippet.
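The introspection probe described above is easy to reproduce by hand. Here is a minimal sketch in plain Node.js (18+, global fetch, no deps, matching the stack above); the function names are mine for illustration, not the actual names in the linked repo:

```javascript
// Build the anonymous request body for the introspection check:
// the same `{ __schema { queryType { name } } }` query the auditor sends.
function buildIntrospectionProbe() {
  return JSON.stringify({ query: "{ __schema { queryType { name } } }" });
}

// Decide what a probe response means:
// non-200 => blocked; 200 with data.__schema => anonymous introspection
// is open (CONFIRMED); anything else (e.g. an errors array) => inconclusive.
function classifyProbe(status, body) {
  if (status !== 200) return "blocked";
  if (body && body.data && body.data.__schema) return "CONFIRMED";
  return "inconclusive";
}

// Fire the actual probe with no auth headers at all.
async function probeIntrospection(baseUrl) {
  const res = await fetch(baseUrl + "/v1/graphql", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: buildIntrospectionProbe(),
  });
  const body = await res.json().catch(() => null);
  return classifyProbe(res.status, body);
}
```

Run `probeIntrospection("https://your-project.hasura.app")` against your own instance only; if it returns "CONFIRMED", anonymous users can map your schema.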


i think a lot of hasura debugging goes wrong at the first cut, not the final fix

If you work with Hasura a lot, you have probably seen this pattern already:

the model is often not completely useless. it is just wrong on the first cut.

it sees one visible symptom, proposes a plausible fix, and then the whole session starts drifting:

  • wrong debug path

  • repeated trial and error

  • patch on top of patch

  • extra side effects

  • more config complexity

  • more time burned on the wrong thing

that hidden cost is what I wanted to test.

so I turned it into a very small 60-second reproducible check.

the idea is simple:

before the model starts throwing fixes at the wall, give it a routing constraint first so the initial diagnosis is less likely to go off the rails.

this is not just for one-time experiments. you can actually keep this TXT around and use it during real debugging sessions. in my own testing, it noticeably reduced the time spent going down wrong debug paths, especially when the first cut was off. so the idea is not only "try it once", but to treat it like a lightweight debugging companion during normal development.

I first tested the directional check in ChatGPT because it was the fastest clean surface for me to reproduce the routing pattern. but the reason I think it matters here is that in Hasura-style systems, the visible failure is often not the real failure region.

what looks like a query problem may actually be metadata.

what looks like metadata may actually be permissions.

what looks like a GraphQL problem may actually be an event trigger, action, source config, integration boundary, or a mismatch that started earlier.

once the first cut is wrong, the whole repair loop gets expensive fast.

you end up patching the visible symptom first, then patching the patch, then patching the side effects created by the second patch.

that is the pattern I wanted to constrain.

this is not a benchmark paper. it is more like a compact, reproducible routing surface you can run on your own stack.

minimal setup:

  1. download the Atlas Router TXT (GitHub, 1.6k)

  2. paste the TXT into your model surface

  3. run this prompt

Evaluate the potential impact of the "Problem Map 3.0 Troubleshooting Atlas" debug routing framework from the perspective of an AI systems engineering and prompt engineering evaluator.

Consider the scenario where builders use LLMs during software development, debugging, automation, retrieval workflows, agent-style tool use, and model-assisted product development.

Provide a quantitative before/after comparison.

In particular, consider the hidden cost when the first diagnosis is wrong, such as:

* incorrect debugging direction
* repeated trial-and-error
* patch accumulation
* integration mistakes
* unintended side effects
* increasing system complexity
* time wasted in misdirected debugging
* context drift across long LLM-assisted sessions
* tool misuse or retrieval misrouting

In real engineering environments, incorrect routing can significantly amplify debugging costs, sometimes by multiples.

Please output a quantitative comparison table (Before / After / Improvement %), evaluating:

1. average debugging time
2. root cause diagnosis accuracy
3. number of ineffective fixes
4. development efficiency
5. workflow reliability
6. overall system stability

note: numbers may vary a bit between runs, so it is worth running more than once.

basically you can keep building normally, then use this routing layer before the model starts fixing the wrong region.

for me, the interesting part is not "can one prompt solve backend development".

it is whether a better first cut can reduce the hidden debugging waste that shows up when the model sounds confident but starts in the wrong place.

for Hasura work, that usually means cases like:

  • a query symptom that is really a permissions issue

  • a GraphQL symptom that is really metadata drift

  • an action or event symptom that is really an upstream integration mismatch

  • a local fix that looks plausible, but the actual failure sits earlier in the pipeline

  • a schema-looking problem that is really a boundary problem somewhere else

also just to be clear: the prompt above is only the quick test surface.

you can already take the TXT and use it directly in actual coding and debugging sessions. it is not the final full version of the whole system. it is the compact routing surface that is already usable now.

for Hasura-style debugging, that is the part I find most interesting.

not replacing platform knowledge. not pretending autonomous debugging is solved. not claiming this replaces actual backend judgment.

just adding a cleaner first routing step before the session goes too deep into the wrong repair path.

this thing is still being polished. so if people here try it and find edge cases, weird misroutes, or places where it clearly fails, that is actually useful.

especially if the pain looks like one of these patterns:

  • looks like GraphQL, but it is really metadata

  • looks like metadata, but it is really permissions

  • looks like permissions, but it is really integration or source config

  • looks like one local error, but the real failure started earlier

  • looks like the API is wrong, but the real issue is somewhere under the boundary

those are exactly the kinds of cases where a wrong first cut tends to waste the most time.

quick FAQ

Q: is this just prompt engineering with a different name? A: partly it lives at the instruction layer, yes. but the point is not "more prompt words". the point is forcing a structural routing step before repair. in practice, that changes where the model starts looking, which changes what kind of fix it proposes first.

Q: how is this different from CoT, ReAct, or normal routing heuristics? A: CoT and ReAct mostly help the model reason through steps or actions after it has already started. this is more about first-cut failure routing. it tries to reduce the chance that the model reasons very confidently in the wrong failure region.

Q: is this classification, routing, or eval? A: closest answer: routing first, lightweight eval second. the core job is to force a cleaner first-cut failure boundary before repair begins.

Q: where does this help most? A: usually in cases where local symptoms are misleading. in Hasura terms, that often maps to query vs metadata confusion, metadata vs permissions confusion, or trigger/integration issues that look like something else first.

Q: does it generalize across models? A: in my own tests, the general directional effect was pretty similar across multiple systems, but the exact numbers and output style vary. that is why I treat the prompt above as a reproducible directional check, not as a final benchmark claim.

Q: is the TXT the full system? A: no. the TXT is the compact executable surface. the atlas is larger. the router is the fast entry. it helps with better first cuts. it is not pretending to be a full auto-repair engine.

Q: does this claim autonomous debugging is solved? A: no. that would be too strong. the narrower claim is that better routing helps humans and LLMs start from a less wrong place, identify the broken invariant more clearly, and avoid wasting time on the wrong repair path.

main atlas page: Problem Map 3.0 Troubleshooting Atlas


Real world easy backend for flutter

Published a new app a couple of days ago, and Apple is annoying me because they want the user to be able to delete his/her/its account (which they can, btw, just not in the way the Apple morons understand it).

I had to create a "delete account" button to mark the account for deletion (it's a bit trickier than that, because one account can have multiple groups and... it's complicated).

So, this is all the code I've done to implement that feature:

  1. In my ORM, added a new column deleteAt. Just that, one line: "deleteAt": ColumnType.date.

  2. In my Postgres database, add that column in the user table as well

  3. Create a function in Postgres that deletes all expired users, based on that deleteAt

  4. Make that function available as a REST API through Hasura (write a GQL mutation, select the URL and the method, done)

  5. In Hasura, create a CRON job that runs that REST API endpoint twice a day

  6. Optional: configure nginx to hide that URL from external access (not really needed, as the function is safe and idempotent, and uses Row Level Security anyway)
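Steps 3 and 4 above can be sketched roughly like this — a minimal sketch only, assuming a `users` table and the `deleteAt` column from step 1 (names taken from the post, everything else is my guess at the shape; Hasura can track a VOLATILE function returning SETOF of a tracked table and expose it as a mutation, which can then be RESTified):

```sql
-- Delete every user whose scheduled deletion date has passed.
-- Returning SETOF users lets Hasura track it as a mutation.
CREATE OR REPLACE FUNCTION delete_expired_users()
RETURNS SETOF users AS $$
  DELETE FROM users
  WHERE "deleteAt" IS NOT NULL
    AND "deleteAt" <= now()
  RETURNING *;
$$ LANGUAGE sql VOLATILE;
```

The CRON job in step 5 then just has to hit the RESTified endpoint for that mutation twice a day; idempotency comes for free, since rerunning the function when nothing has expired deletes nothing.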

That's it. No messy backend code, no new deploys, nothing. And all versioned (as Hasura versions both metadata (including the CRON job) and the pg scripts).

In the frontend, the database is listened to as a stream for changes, so whenever deleteAt is non-null, a card saying "Your account will be deleted at {date}" is displayed, with a button that sets deleteAt back to null to revert it.
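That revert button maps to a single Hasura mutation — a sketch, assuming a `users` table keyed by a uuid `id` (field names are my assumption, not from the actual app):

```graphql
mutation CancelDeletion($userId: uuid!) {
  update_users_by_pk(pk_columns: { id: $userId }, _set: { deleteAt: null }) {
    id
    deleteAt
  }
}
```

Because the frontend watches the row as a stream, the "will be deleted" card disappears on its own once deleteAt goes back to null — no extra state handling needed.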

No state management packages, no backend code, no deploy.

Tech stack used:

Backend: Firebase Auth + PostgreSQL + Hasura + PowerSync

Frontend: Flutter + PowerSync (which has an ORM and a SQLite db), no state management packages, no declarative code for reading (i.e., database changes are listened to via Stream)
