
Keep backend and frontend daemonized in system mode#663

Open
alicomert wants to merge 4 commits into siteboon:main from alicomert:codex/daemon-cli-support

Conversation


@alicomert alicomert commented Apr 17, 2026

Summary

  • Make Linux daemon mode system-first and keep startup messaging focused on system service operations.
  • Extend daemon management to control two services together:
    • cloudcli.service (backend on 3001)
    • cloudcli-frontend.service (frontend on 5173)
  • Add --frontend-port support and include frontend state/port visibility in status and doctor outputs.
  • Add server/vite-daemon.js to run Vite through Node API as a long-running service entrypoint.
  • Update dev scripts so npm run dev uses daemon install/restart flow with both ports (3001 and 5173).
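
For reference, the reviewed dev scripts wire npm run dev into the daemon install flow roughly like this (reconstructed from the review comments below, not copied verbatim from the diff):

```json
{
  "scripts": {
    "dev": "npm run server:dev",
    "server:dev": "node server/cli.js daemon install --mode system --port 3001 --frontend-port 5173"
  }
}
```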

Root Cause

The frontend was not managed as a persistent daemon service, and under a non-interactive service context the Vite process could exit quickly, so in practice only the backend kept running. This created a mismatch: users expected both frontend and backend to survive terminal close/restart, but daemon behavior was applied consistently only to the backend.

Impact

  • Frontend (5173) and backend (3001) are both daemon-managed and auto-restart with systemd.
  • Services remain active after terminal exit and are configured for boot-time persistence in system mode.
  • Operators get explicit English guidance for status/stop/logs commands in daemon output.

Validation

  • npm run typecheck
  • npm run dev
  • node server/cli.js daemon status --mode system
  • HTTP checks:
    • http://85.235.74.198:3001 -> 200
    • http://85.235.74.198:5173 -> 200

Notes

  • Branch includes earlier daemon CLI groundwork plus this persistence fix set.

Summary by CodeRabbit

  • New Features

    • Add daemon management: run CloudCLI as a background service on Linux with install/start/stop/restart/status/logs/doctor controls, auto-start behavior, and CLI options to force foreground or restart the daemon after updates.
    • Add a Vite dev-mode daemon and strict frontend port binding.
  • Configuration

    • Documented new commented env var for shell session timeout (default 30 minutes) and updated dev/start scripts to use daemon-friendly workflows.
  • Documentation

    • New Linux daemon guide with install, status, logs, lifecycle, mode, and update instructions.
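
Based on the summary above, the commented entry in .env.example presumably looks something like this (the comment wording is illustrative):

```ini
# Shell session timeout in minutes (default: 30). Uncomment to override.
# SHELL_SESSION_TIMEOUT_MINUTES=30
```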


coderabbitai Bot commented Apr 17, 2026

📝 Walkthrough


Adds daemon management to CloudCLI: new daemon CLI command, cross-platform daemon manager, Linux/systemd installer/auto-bootstrap on start, Vite daemon helper, doc and script updates, and a commented SHELL_SESSION_TIMEOUT_MINUTES entry in .env.example.

Changes

Cohort / File(s) Summary
Configuration & Documentation
.env.example, README.md
Added commented SHELL_SESSION_TIMEOUT_MINUTES (default 30) and documented cloudcli daemon usage, install/status/doctor/logs/restart/stop/disable/uninstall commands, auto-daemon behavior and mode selection, and linger guidance.
Build & Runtime Scripts
package.json
Reworked npm scripts to remove concurrently/tsx; dev/server scripts now drive CLI daemon commands or node entrypoints with fixed ports for dev paths.
CLI Interface & Dispatch
server/cli.js
Added top-level daemon subcommand, --no-daemon and --restart-daemon flags; updated argument parsing to lock first positional command; updated update flow to optionally restart daemon and to call auto-daemon bootstrap on start.
Systemd Daemon Management
server/daemon-manager.js
New Linux/systemd-focused module implementing install/start/stop/restart/status/logs/enable/disable/uninstall/doctor for backend/frontend units, unit file generation, health polling, and mode resolution (`auto`/`user`/`system`).
Cross-Platform Daemon Runtime
server/daemon/manager.js
New cross-platform daemon manager under ~/.cloudcli/daemon: start/stop/status, PID/state persistence, log access, autostart enable/disable for Linux/macOS/Windows, and exported lifecycle APIs.
Server Bootstrap & Health Probing
server/index.js
Added TCP port probing (isPortOpen, waitForPortOpen) and maybeAutoDaemonBootstrapFromIndex() to attempt system daemon install on Linux before standard startup, with fallback and port-reachability detection logic.
Vite Dev Daemon
server/vite-daemon.js, vite.config.js
New vite-daemon.js script to run Vite as a managed daemon with CLI parsing and signal handling; Vite config set strictPort: true.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant CLI as server/cli.js
    participant DaemonMgr as server/daemon-manager.js
    participant Systemd
    participant Backend
    participant Health as HTTP /health

    User->>CLI: cloudcli daemon install --mode=system
    CLI->>DaemonMgr: handleDaemonCommand(['install','--mode=system'])
    DaemonMgr->>DaemonMgr: Validate Linux & systemctl
    DaemonMgr->>DaemonMgr: Render unit files (backend + frontend)
    DaemonMgr->>Systemd: systemctl daemon-reload
    DaemonMgr->>Systemd: systemctl enable --now cloudcli.service
    DaemonMgr->>Systemd: systemctl enable --now cloudcli-frontend.service
    DaemonMgr->>Backend: Wait for startup
    DaemonMgr->>Health: Poll /health on backend port
    Backend-->>Health: 200 OK
    DaemonMgr-->>CLI: Success
    CLI-->>User: Installed and started
sequenceDiagram
    participant User
    participant CLI as server/cli.js
    participant AutoBoot as maybeAutoDaemonBootstrapFromIndex
    participant DaemonMgr as server/daemon-manager.js
    participant PortProbe as Port Reachability
    participant Server as startServer()

    User->>CLI: cloudcli start
    CLI->>AutoBoot: maybeAutoDaemonBootstrapFromIndex()
    AutoBoot->>DaemonMgr: handleDaemonCommand(['install','--mode=system'])
    alt install succeeds
        DaemonMgr-->>AutoBoot: installed & started
        AutoBoot-->>CLI: return (bootstrap complete)
        CLI-->>User: Daemon running (background)
    else install fails
        AutoBoot->>PortProbe: waitForPortOpen(timeout)
        alt port opens
            PortProbe-->>AutoBoot: port reachable
            AutoBoot-->>CLI: log "daemon active"
            CLI-->>User: Daemon appears active
        else
            AutoBoot-->>CLI: throw error suggesting sudo or --no-daemon
            CLI-->>User: Error with recovery steps
        end
    end

    alt User used --no-daemon
        CLI->>Server: startServer() (foreground)
        Server-->>CLI: server running
        CLI-->>User: Foreground server started
    end

Suggested Reviewers

  • viper151

Poem

🐰 I burrowed through config and code tonight,
Laid tiny unit files out of sight,
Now CloudCLI naps and hops at dawn,
Daemons hum softly on the lawn. ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 7.69%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately summarizes the main objective, implementing system-mode daemonization for both backend and frontend services.


@alicomert alicomert changed the title from "[codex] Keep backend and frontend daemonized in system mode" to "Keep backend and frontend daemonized in system mode" Apr 17, 2026
@alicomert alicomert marked this pull request as ready for review April 17, 2026 09:04

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
server/cli.js (1)

718-756: ⚠️ Potential issue | 🟡 Minor

Top-level parser silently ignores --frontend-port and --mode.

cloudcli start --frontend-port 5174 --mode user passes these intending to override the daemon's defaults, but:

  • --frontend-port / --mode don't match any branch in parseArgs — they're silently dropped.
  • The following value token (5174, user) hits the !arg.startsWith('-') branch; because commandSet is already true it's also dropped, but if the user omits start it would be wrongly parsed as the command name.

Either parse them properly (and forward to maybeAutoDaemonStart(options) via e.g. options.frontendPort, options.mode), or reject unknown flags explicitly so users notice. Right now the help text advertises flows like cloudcli daemon install --mode system --port 3001 but nothing prevents a user from mistakenly writing cloudcli start --mode system --frontend-port 5174 and getting no feedback that half of it was ignored.
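
One way to close that gap — accepting both `--flag value` and `--flag=value` forms and rejecting unknown flags, as the comment suggests — is sketched below. The function and option names here are illustrative, not the actual parseArgs implementation from server/cli.js:

```javascript
// Hypothetical sketch: recognize --frontend-port / --mode in both
// "--flag value" and "--flag=value" forms, and fail loudly on unknown flags
// instead of silently dropping them.
function parseDaemonFlags(argv) {
  const options = {};
  const takesValue = { '--frontend-port': 'frontendPort', '--mode': 'mode' };
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    const [flag, inline] = arg.includes('=') ? arg.split(/=(.*)/s) : [arg, undefined];
    if (takesValue[flag]) {
      const value = inline !== undefined ? inline : argv[++i];
      if (value === undefined || value === '') {
        throw new Error(`Missing value for ${flag}`);
      }
      options[takesValue[flag]] = value;
    } else if (arg.startsWith('-')) {
      throw new Error(`Unknown flag: ${arg}`); // surface the mistake to the user
    }
    // bare positionals (e.g. the command name) are left to the real parser
  }
  return options;
}
```

With this shape, cloudcli start --frontend-port 5174 --mode user would either populate options.frontendPort and options.mode for the daemon path or error out — never half-apply.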

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/cli.js` around lines 718 - 756, The parseArgs function currently
ignores --frontend-port and --mode causing those values to be dropped; update
parseArgs to recognize and set parsed.options.frontendPort and
parsed.options.mode (accept both --flag value and --flag=value forms) so these
are forwarded to maybeAutoDaemonStart/options, or alternatively make parseArgs
explicitly error on unknown flags (e.g., throw or set an error field) so callers
get feedback. Modify the branches in parseArgs that handle --port and
--database-path to add analogous handling for --frontend-port and --mode
(including short/equals forms), and ensure parsed.options is passed unchanged
into maybeAutoDaemonStart or the daemon start path. Ensure you reference
parsed.options.frontendPort and parsed.options.mode where the daemon startup
path reads options.
🧹 Nitpick comments (5)
server/daemon-manager.js (2)

615-617: Fallback warning is misleading when user bus is actually available.

resolveDaemonMode returns 'system' whenever subcommand === 'install' (line 319), even if the user bus is healthy. But this branch only guards on !userBus.ok, so the warning "User mode is unavailable; auto mode is falling back to system mode" is in practice correct only when the user bus is truly down. When --mode auto install is used with a working user bus, the auto→system resolution is by design (system-first install) and no warning appears — so this branch never actually misfires. Still, the wording implies user-mode was unavailable, which is untrue in the general auto-install case. Consider clarifying the message, e.g.:

-console.log(`${c.warn('[WARN]')} User mode is unavailable; auto mode is falling back to system mode.`);
+console.log(`${c.warn('[WARN]')} systemd user bus is unavailable (${userBus.detail}); auto mode is installing the system unit.`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon-manager.js` around lines 615 - 617, The warning text is
misleading because resolveDaemonMode forces mode === 'system' for subcommand ===
'install' even when userBus.ok is true; update the logic around the check using
parsed.options.mode, mode and userBus.ok (and the subcommand === 'install' rule)
so the message only claims "User mode is unavailable" when userBus.ok is false,
and either suppress the warning for the install path or emit a different message
(e.g., "auto mode resolved to system for install command") when subcommand ===
'install'; adjust the branch that currently logs using c.warn('[WARN]') so it
checks subcommand (or changes the wording) to accurately reflect why auto→system
was chosen.

698-702: stop aborts on backend failure after already stopping the frontend.

Frontend uses allowFailure: true but backend does not, so systemctl stop cloudcli.service failing (e.g. permission denied in system mode without sudo) leaves the user in a state where the frontend is stopped but the backend is still running — and no success message is printed. Since both services should be treated as a pair, consider symmetric allowFailure + a final state-check, or run stop in a single systemctl stop A B invocation so systemctl handles atomicity/ordering.

-runSystemctl(mode, ['stop', FRONTEND_DAEMON_SERVICE_NAME], { allowFailure: true });
-runSystemctl(mode, ['stop', DAEMON_SERVICE_NAME]);
+runSystemctl(mode, ['stop', FRONTEND_DAEMON_SERVICE_NAME, DAEMON_SERVICE_NAME]);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon-manager.js` around lines 698 - 702, The stop case is
asymmetric: runSystemctl is called with allowFailure: true for
FRONTEND_DAEMON_SERVICE_NAME but not for DAEMON_SERVICE_NAME, which can leave
the frontend stopped while the backend remains running; update the 'stop'
handling in the switch case for 'stop' to either call runSystemctl once with
both service names (e.g., runSystemctl(mode, ['stop',
FRONTEND_DAEMON_SERVICE_NAME, DAEMON_SERVICE_NAME], { allowFailure: true })) or
set allowFailure: true for the DAEMON_SERVICE_NAME call as well and then perform
a final state-check (using the same runSystemctl/status helper) to report the
combined result; refer to runSystemctl, FRONTEND_DAEMON_SERVICE_NAME,
DAEMON_SERVICE_NAME and the 'stop' case when making the change.
server/cli.js (1)

675-715: maybeAutoDaemonStart only passes --port to install, so a custom port on the CLI won't propagate to the frontend unit.

daemonArgs at line 685 only includes --port and --database-path. The frontend daemon picks its port from process.env.VITE_PORT || 5173 inside daemon-manager.js. That's fine for the default case, but if a user ran cloudcli start --port 3002 expecting a coherent pair, the frontend still lands on 5173 and /api proxy expectations implied by vite.config.js won't align.

Consider forwarding options.frontendPort (once the parser supports it — see related comment) and/or documenting that the frontend port must be set via VITE_PORT.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/cli.js` around lines 675 - 715, The install command builds daemonArgs
in maybeAutoDaemonStart but never forwards a frontend port, so the daemon's
frontend will default to VITE_PORT/5173; update maybeAutoDaemonStart to push a
frontend port flag into daemonArgs (e.g. '--frontend-port' with
String(options.frontendPort || process.env.VITE_PORT ||
process.env.FRONTEND_PORT) when present) before calling handleDaemonCommand so
the daemon-manager/frontend can pick up the same port; ensure the new flag name
matches what daemon-manager.js expects and include it alongside '--port' and
'--database-path' in the daemonArgs construction.
server/vite-daemon.js (1)

16-36: Minor: value-flag parsing doesn't validate that a value is present.

parsed.host = argv[++i] / parsed.port = Number(argv[++i]) silently accept undefined if the flag is the last token (e.g. node vite-daemon.js --host). --host becomes host: undefined and Vite will later fail with a less useful error; --port becomes NaN which is then caught by the range check (good), but the host case is not.

Consider validating presence:

Suggested guard
         if (arg === '--host') {
-            parsed.host = argv[++i];
+            const value = argv[++i];
+            if (value === undefined) throw new Error('Missing value for --host');
+            parsed.host = value;
         } else if (arg.startsWith('--host=')) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/vite-daemon.js` around lines 16 - 36, The flag parsing loop allows
--host and --port flags to consume a missing next token (argv[++i]) leading to
parsed.host = undefined or parsed.port = NaN; update the branches handling arg
=== '--host', arg === '--port' || arg === '-p' (and their
startsWith('--host=')/startsWith('--port=') forms) to validate the extracted
value is present and non-empty (for --host) and that Number(value) is a valid
finite number (for --port); if validation fails, print the usage/help message or
throw a clear Error like "Missing value for --host" / "Missing or invalid value
for --port" and exit, so parsed.host and parsed.port are never set to
undefined/NaN.
server/daemon/manager.js (1)

554-564: readDaemonLogs loads the whole log file; there's no rotation anywhere.

startDaemon appends stdout/stderr to daemon.log indefinitely (line 389 opens with 'a') and readDaemonLogs does fs.readFileSync(LOG_FILE, 'utf8') then splits the full buffer just to return the tail. For a service designed to run persistently under systemd, this log will grow unbounded and eventually OOM the "read last N lines" path.

Two independent fixes worth considering:

  1. Stream the tail instead of loading the entire file (e.g. read the final N KB, split, then slice the last lines).
  2. Rely on platform log infrastructure (journalctl --user -u cloudcli-daemon.service, Console.app for launchd, Event Viewer) and/or add size-based rotation when writing.
♻️ Minimal tail-only read
-  const raw = fs.readFileSync(LOG_FILE, 'utf8');
-  const lineList = raw.split(/\r?\n/);
-  const limited = lineList.slice(Math.max(0, lineList.length - lines)).join('\n');
-  return { exists: true, content: limited.trim(), logFile: LOG_FILE };
+  const { size } = fs.statSync(LOG_FILE);
+  const maxBytes = Math.min(size, 256 * 1024); // cap at 256 KiB
+  const fd = fs.openSync(LOG_FILE, 'r');
+  try {
+    const buf = Buffer.alloc(maxBytes);
+    fs.readSync(fd, buf, 0, maxBytes, Math.max(0, size - maxBytes));
+    const lineList = buf.toString('utf8').split(/\r?\n/);
+    const limited = lineList.slice(Math.max(0, lineList.length - lines)).join('\n');
+    return { exists: true, content: limited.trim(), logFile: LOG_FILE };
+  } finally {
+    fs.closeSync(fd);
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon/manager.js` around lines 554 - 564, readDaemonLogs currently
reads the entire LOG_FILE into memory which will OOM for long-running daemons;
change it to read only the file tail by using the file size (fs.statSync) and
reading the last X bytes (e.g. lines * avgLineBytes or a fixed tailBytes) via
fs.open/fs.read or fs.createReadStream with a start offset, then split and slice
the final `lines` entries before returning. Also consider adding a complementary
change in startDaemon (where the file is opened with 'a') to implement
size-based rotation (rotate when LOG_FILE exceeds a threshold) or document
reliance on system logging, but the immediate fix is to modify readDaemonLogs to
compute start = max(0, size - tailBytes) and only read from that offset to avoid
loading the whole file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@package.json`:
- Around line 27-30: The dev script currently runs "server:dev" which calls
"node server/cli.js daemon install --mode system" causing daemon-manager.js to
call ensureLinux(), write system unit files to /etc/systemd/system (requiring
sudo), and install a persistent root service; change package.json so "dev"
remains a cross-platform foreground developer entrypoint (e.g. run server and
client concurrently or run server CLI without "daemon install") and move the
system-wide daemon path into a new script like "dev:daemon" that explicitly uses
"--mode system"; alternatively default the daemon install invocation to "--mode
auto" or "--mode user" so non-root Linux and macOS/Windows contributors aren't
blocked; update any docs or tests referencing "server:dev" and ensure references
to daemon-manager.js.ensureLinux and the "daemon install" flow point at the new
"dev:daemon" script.

In `@server/daemon-manager.js`:
- Around line 159-175: The systemd unit templates emitted by
buildDaemonServiceUnit and buildFrontendDaemonServiceUnit lack User= and Group=
and hardening directives, causing services to run as root; modify both functions
to detect the invoking user (prefer SUDO_USER fallback to process.env.USER or
add a --user flag parsed by the CLI) and inject User=${user} and Group=${user}
into the [Service] section, and also add recommended hardening keys such as
ProtectSystem=full, PrivateTmp=true, NoNewPrivileges=true (and optionally
ReadOnlyPaths=/ etc.) for the system-mode unit generation so installed units do
not run with root privileges.
- Around line 614-688: The install flow can leave the frontend enabled/running
if the backend healthCheckOrThrow(DAEMON_SERVICE_NAME, …) throws; wrap the two
calls to healthCheckOrThrow(...) in a try { ... } catch (err) {
runSystemctl(mode, ['disable','--now', DAEMON_SERVICE_NAME]); runSystemctl(mode,
['disable','--now', FRONTEND_DAEMON_SERVICE_NAME]); optionally remove written
unit files (servicePath, frontendServicePath) or at least log guidance to run
uninstall, then rethrow the error; this ensures any partial start enabled by
runSystemctl(..., ['enable','--now', ...]) is undone on failure.

In `@server/daemon/manager.js`:
- Around line 154-170: The health-check currently treats any 2xx-4xx HTTP code
as healthy which masks client/authorization errors; update the check in
checkHealth (used by getDaemonStatus) to only consider successful responses as
healthy by tightening the predicate to 2xx (or 2xx–3xx if redirects should
count) instead of res.statusCode >= 200 && res.statusCode < 500; alternatively,
if the intent is merely reachability, rename checkHealth to checkReachable and
keep the broader predicate—adjust the status/doctor outputs to match the chosen
semantics.
- Around line 177-190: The generated systemd unit uses Type=oneshot and a
bootstrap that spawns a detached/unref'd child, which breaks systemd
supervision; change the unit generation (the template that returns the string
containing execStart/execStop) to use Type=simple, remove RemainAfterExit, and
add Restart=on-failure so systemd tracks and restarts the real process; also
stop spawning a detached/unref'd child in the daemon start code (the code that
currently does the detached spawn/unref — locate the spawn/unref call around the
start logic) and instead run the daemon in-foreground (do not unref or detach)
so ExecStart launches the long-running process that systemd supervises. Ensure
ExecStop still stops via the same PID handling but rely on systemd restart
semantics rather than orphaning the process.

In `@server/index.js`:
- Around line 2345-2376: The catch block in maybeAutoDaemonBootstrapFromIndex
currently treats a reachable backend (checked by waitForPortOpen(SERVER_PORT))
as full success even though install also starts a frontend unit; update the
catch handling in maybeAutoDaemonBootstrapFromIndex (where handleDaemonCommand
is called) to also probe the frontend service before returning true — e.g., call
waitForPortOpen for the frontend port (use your FRONTEND_PORT or
CLOUDCLI_FRONTEND_PORT constant) or verify the frontend unit status, and only
call printSystemDaemonActiveNotice/return true when both backend and frontend
are healthy; if backend is reachable but frontend is not, log/console.warn the
original error.message (preserve the caught error text) and surface a clear
warning instead of returning early so startServer() can continue the correct
failure path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2a51ad51-0c5a-466e-bfe5-48e468c0e90e

📥 Commits

Reviewing files that changed from the base of the PR and between 25b00b5 and 4bb0d0b.

📒 Files selected for processing (9)
  • .env.example
  • README.md
  • package.json
  • server/cli.js
  • server/daemon-manager.js
  • server/daemon/manager.js
  • server/index.js
  • server/vite-daemon.js
  • vite.config.js

Comment thread server/daemon-manager.js
Comment on lines +159 to +175
return `[Unit]
Description=CloudCLI Server
After=network.target

[Service]
Type=simple
WorkingDirectory=${appRoot}
ExecStart=${execStart}
Environment=HOST=0.0.0.0
Environment=CI=true
Environment=CLOUDCLI_DAEMON_MANAGED=1
Restart=always
RestartSec=2

[Install]
WantedBy=default.target
`;


⚠️ Potential issue | 🟠 Major

Security: system-mode systemd units run as root (no User= / Group= directive).

Both buildDaemonServiceUnit and buildFrontendDaemonServiceUnit emit [Service] sections without User=/Group=. When installed in --mode system (now the default path for npm run dev, cloudcli start auto-bootstrap, and the documented install command), systemd runs both Node processes as root. Running the Vite dev server and the Express/WebSocket server as root gives every plugin, PTY shell, and file operation in this app root-level authority on the host — a significant privilege escalation compared to the prior npm run dev foreground workflow.

Recommend:

  • Capture the invoking user (e.g. from $SUDO_USER or a --user flag) and emit User= / Group= in the unit.
  • Consider hardening directives like ProtectSystem=full, PrivateTmp=true, NoNewPrivileges=true for the system unit.
Sketch of change
 [Service]
 Type=simple
 WorkingDirectory=${appRoot}
 ExecStart=${execStart}
+User=${serviceUser}
+Group=${serviceGroup}
 Environment=HOST=0.0.0.0
 Environment=CI=true
 Environment=CLOUDCLI_DAEMON_MANAGED=1
+NoNewPrivileges=true
+PrivateTmp=true
 Restart=always
 RestartSec=2

Also applies to: 221-237


Comment thread server/daemon-manager.js
Comment on lines +614 to +688
case 'install': {
if (parsed.options.mode === 'auto' && mode === 'system' && !userBus.ok) {
console.log(`${c.warn('[WARN]')} User mode is unavailable; auto mode is falling back to system mode.`);
}

try {
fs.mkdirSync(path.dirname(servicePath), { recursive: true });
fs.mkdirSync(path.dirname(frontendServicePath), { recursive: true });

const backendUnitContent = buildDaemonServiceUnit({
appRoot,
serverPort: portNum,
databasePath,
nodeExecPath: context.nodeExecPath,
cliEntry: context.cliEntry,
});
fs.writeFileSync(servicePath, backendUnitContent, 'utf8');

const frontendUnitContent = buildFrontendDaemonServiceUnit({
appRoot,
frontendPort: frontendPortNum,
nodeExecPath: context.nodeExecPath,
cliEntry: context.cliEntry,
});
fs.writeFileSync(frontendServicePath, frontendUnitContent, 'utf8');
} catch (fileError) {
if (mode === 'system' && (fileError.code === 'EACCES' || fileError.code === 'EPERM')) {
throw new Error(
`Permission denied writing daemon unit files (${servicePath}, ${frontendServicePath}). ` +
`Try: sudo cloudcli daemon install --mode system --port ${portNum} --frontend-port ${frontendPortNum}`
);
}
throw fileError;
}

runSystemctl(mode, ['daemon-reload']);
runSystemctl(mode, ['enable', '--now', DAEMON_SERVICE_NAME]);
runSystemctl(mode, ['enable', '--now', FRONTEND_DAEMON_SERVICE_NAME]);

if (mode === 'user') {
const lingerResult = runCommand('loginctl', ['enable-linger', os.userInfo().username]);
if (lingerResult.status !== 0) {
console.log(`${c.warn('[WARN]')} Could not enable linger automatically.`);
console.log(` ${c.dim(extractCommandError(lingerResult, 'Unknown linger error'))}`);
console.log(` ${c.tip('[TIP]')} Run with sufficient privileges: ${c.bright(`sudo loginctl enable-linger ${os.userInfo().username}`)}`);
}
}

const installedPort = getPortFromServiceUnit(servicePath) || portNum;
const installedFrontendPort = getPortFromServiceUnit(frontendServicePath) || frontendPortNum;
await healthCheckOrThrow(mode, DAEMON_SERVICE_NAME, installedPort, c);
await healthCheckOrThrow(mode, FRONTEND_DAEMON_SERVICE_NAME, installedFrontendPort, c);

const backendState = getServiceState(mode, DAEMON_SERVICE_NAME);
const frontendState = getServiceState(mode, FRONTEND_DAEMON_SERVICE_NAME);
console.log(`\n${c.ok('✔')} Daemon installed and started.`);
console.log(` Mode: ${c.bright(mode)}`);
console.log(` Backend Unit: ${c.dim(servicePath)}`);
console.log(` Frontend Unit: ${c.dim(frontendServicePath)}`);
console.log(` Backend Active: ${c.bright(backendState.active)}`);
console.log(` Backend Enabled:${c.bright(backendState.enabled)}`);
console.log(` Frontend Active:${c.bright(frontendState.active)}`);
console.log(` Frontend Enabled:${c.bright(frontendState.enabled)}`);
console.log(` Backend URL: ${c.bright(`http://localhost:${installedPort}`)}`);
console.log(` Frontend URL: ${c.bright(`http://localhost:${installedFrontendPort}`)}\n`);
if (mode === 'system') {
console.log(`${c.ok('[OK]')} System daemon is active for backend and frontend.`);
console.log(`${c.info('[INFO]')} Backend health: ${c.bright(`http://localhost:${installedPort}/health`)}`);
console.log(`${c.info('[INFO]')} Frontend: ${c.bright(`http://localhost:${installedFrontendPort}/`)}`);
console.log(`${c.info('[INFO]')} Status: ${c.bright('cloudcli daemon status --mode system')}`);
console.log(`${c.info('[INFO]')} Stop: ${c.bright('sudo cloudcli daemon stop --mode system')}`);
console.log(`${c.info('[INFO]')} Logs: ${c.bright('sudo cloudcli daemon logs --mode system')}\n`);
}
break;
}

⚠️ Potential issue | 🟡 Minor

install leaves the frontend service running (and marked enabled) when the backend health check throws.

healthCheckOrThrow(DAEMON_SERVICE_NAME, …) at line 664 runs first. If it throws, the subsequent healthCheckOrThrow for the frontend never executes, but enable --now FRONTEND_DAEMON_SERVICE_NAME on line 651 has already started/enabled the frontend unit. The error propagates, but the user now has a half-installed system with an enabled frontend service they did not ask to keep.

Consider wrapping the two health checks in a try { ... } catch { stop + disable both units; rethrow } to give install atomic-ish semantics, or at least emit guidance to run cloudcli daemon uninstall --mode system on failure.
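A sketch of that rollback shape, with `installWithRollback`, `healthCheck`, and `systemctl` as hypothetical stand-ins for the flow's real `healthCheckOrThrow`/`runSystemctl` helpers:

```javascript
// Hypothetical wrapper giving install atomic-ish semantics: if any health
// check fails, both units are disabled before the error propagates, undoing
// the earlier `enable --now` calls.
async function installWithRollback({ services, healthCheck, systemctl }) {
  try {
    for (const service of services) {
      await healthCheck(service);
    }
  } catch (error) {
    for (const service of services) {
      systemctl(['disable', '--now', service]);
    }
    throw error;
  }
}
```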

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon-manager.js` around lines 614 - 688, The install flow can leave
the frontend enabled/running if the backend
healthCheckOrThrow(DAEMON_SERVICE_NAME, …) throws; wrap the two calls to
healthCheckOrThrow(...) in a try { ... } catch (err) { runSystemctl(mode,
['disable','--now', DAEMON_SERVICE_NAME]); runSystemctl(mode,
['disable','--now', FRONTEND_DAEMON_SERVICE_NAME]); optionally remove written
unit files (servicePath, frontendServicePath) or at least log guidance to run
uninstall, then rethrow the error; this ensures any partial start enabled by
runSystemctl(..., ['enable','--now', ...]) is undone on failure.

Comment thread server/daemon/manager.js
Comment on lines +154 to +170
const req = http.get({
host: '127.0.0.1',
port,
path: '/health',
timeout: 1200
}, (res) => {
complete(Boolean(res.statusCode && res.statusCode >= 200 && res.statusCode < 500));
res.resume();
});

req.on('error', () => complete(false));
req.on('timeout', () => {
req.destroy();
complete(false);
});
});
}

⚠️ Potential issue | 🟡 Minor

checkHealth treats 4xx as healthy, which masks real failures.

With res.statusCode >= 200 && res.statusCode < 500, a response of 401, 403, or 404 (e.g. /health route never registered) will be reported as healthy in getDaemonStatus. For a liveness-only probe that's defensible, but the naming and the status/doctor output implies semantic health.

Either rename to something like checkReachable, or tighten the predicate to 2xx (optionally 3xx).

🛠️ Proposed tightening
-      complete(Boolean(res.statusCode && res.statusCode >= 200 && res.statusCode < 500));
+      complete(Boolean(res.statusCode && res.statusCode >= 200 && res.statusCode < 400));
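Under the tightened semantics, the predicate behaves like this standalone sketch (the function name is illustrative):

```javascript
// Tightened liveness predicate: only 2xx/3xx counts as healthy, so a 401,
// 403, or 404 from a never-registered /health route reads as a failure.
function isHealthyStatus(statusCode) {
  return Boolean(statusCode && statusCode >= 200 && statusCode < 400);
}
```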
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon/manager.js` around lines 154 - 170, The health-check currently
treats any 2xx-4xx HTTP code as healthy which masks client/authorization errors;
update the check in checkHealth (used by getDaemonStatus) to only consider
successful responses as healthy by tightening the predicate to 2xx (or 2xx–3xx
if redirects should count) instead of res.statusCode >= 200 && res.statusCode <
500; alternatively, if the intent is merely reachability, rename checkHealth to
checkReachable and keep the broader predicate—adjust the status/doctor outputs
to match the chosen semantics.

Comment thread server/daemon/manager.js
Comment on lines +177 to +190
return `[Unit]
Description=CloudCLI daemon bootstrap
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=${execStart}
ExecStop=${execStop}

[Install]
WantedBy=default.target
`;

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

(Verification scripts elided. They confirmed that server/daemon-manager.js and server/daemon/manager.js are separate implementations: daemon-manager.js defines cloudcli.service and cloudcli-frontend.service with Type=simple and Restart=always, while daemon/manager.js emits the Type=oneshot bootstrap unit shown above and spawns a detached, unref'd child in startDaemon.)

Type=oneshot bootstrap with detached child forfeits systemd supervision.

The bootstrap service (lines 177-190) uses Type=oneshot to invoke cloudcli daemon start, which spawns a detached, unref'd child process (lines 391-403) and exits. This means:

  • The real daemon runs outside systemd's tracked main process; Restart=on-failure cannot recover it or auto-restart on crashes
  • ExecStop relies entirely on the PID file; if state files are removed manually, systemd has no way to stop the orphaned process
  • systemctl status will always show "active (exited)" regardless of actual daemon status

The codebase already demonstrates the correct pattern: server/daemon-manager.js defines cloudcli.service and cloudcli-frontend.service using Type=simple with Restart=always. Run the server in foreground under Type=simple with Restart=on-failure instead of this bootstrap-and-spawn pattern for proper systemd supervision and consistency.
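The recommended shape can be sketched as a unit builder (a sketch under the stated assumptions, not the repository's actual template; `buildSupervisedUnit` is a hypothetical name, while `appRoot`/`execStart` mirror the template variables used by the existing builders):

```javascript
// Sketch of a supervised unit: Type=simple with the long-running process in
// ExecStart, so systemd owns the main PID and Restart=on-failure applies.
function buildSupervisedUnit({ appRoot, execStart }) {
  return `[Unit]
Description=CloudCLI daemon
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
WorkingDirectory=${appRoot}
ExecStart=${execStart}
Restart=on-failure
RestartSec=2

[Install]
WantedBy=default.target
`;
}
```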

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon/manager.js` around lines 177 - 190, The generated systemd unit
uses Type=oneshot and a bootstrap that spawns a detached/unref'd child, which
breaks systemd supervision; change the unit generation (the template that
returns the string containing execStart/execStop) to use Type=simple, remove
RemainAfterExit, and add Restart=on-failure so systemd tracks and restarts the
real process; also stop spawning a detached/unref'd child in the daemon start
code (the code that currently does the detached spawn/unref — locate the
spawn/unref call around the start logic) and instead run the daemon
in-foreground (do not unref or detach) so ExecStart launches the long-running
process that systemd supervises. Ensure ExecStop still stops via the same PID
handling but rely on systemd restart semantics rather than orphaning the
process.

Comment thread server/index.js
Comment on lines +2345 to +2376
async function maybeAutoDaemonBootstrapFromIndex() {
if (process.platform !== 'linux') return false;
if (process.env.CLOUDCLI_DAEMON_MANAGED === '1') return false;
if (process.env.CLOUDCLI_NO_DAEMON === '1') return false;
if (process.env.CLOUDCLI_DAEMON_ATTEMPTED === '1') return false;

process.env.CLOUDCLI_DAEMON_ATTEMPTED = '1';

try {
console.log(`${c.info('[INFO]')} Linux detected. Enforcing system daemon mode for CloudCLI...`);
await handleDaemonCommand(['install', '--mode=system', '--port', String(SERVER_PORT)], {
appRoot: APP_ROOT,
defaultPort: String(SERVER_PORT),
color: c,
cliEntry: path.join(APP_ROOT, 'server', 'cli.js'),
});
return true;
} catch (error) {
const healthySoon = await waitForPortOpen(SERVER_PORT);
if (healthySoon) {
console.log(`${c.warn('[WARN]')} System daemon health check was delayed, but port ${SERVER_PORT} is now reachable.`);
printSystemDaemonActiveNotice(SERVER_PORT);
return true;
}

throw new Error(
`System daemon bootstrap failed.\n` +
`${error.message}\n` +
`Run with privileges: sudo cloudcli daemon install --mode system --port ${SERVER_PORT}`
);
}
}

⚠️ Potential issue | 🟡 Minor

Fallback only probes the backend port; a dead frontend daemon goes undetected here.

When handleDaemonCommand(install) throws, waitForPortOpen(SERVER_PORT) only checks the backend port. Since install now brings up both cloudcli.service and cloudcli-frontend.service, a failure where the backend is reachable but the frontend unit is crash-looping will still be reported as "daemon is active", and startServer() returns early. The user loses the UI at 5173 without any warning from bootstrap.

Consider probing both ports before declaring bootstrap success, or surfacing the original error.message as a warning even when the backend is reachable.
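A minimal sketch of the both-ports probe, assuming the existing `waitForPortOpen`-style helper signature (`allPortsReachable` is a hypothetical name; the port list matches the two services install brings up):

```javascript
// Sketch: only declare bootstrap success when every expected port answers.
// `probe` stands in for the existing waitForPortOpen helper.
async function allPortsReachable(probe, ports) {
  const results = await Promise.all(ports.map((port) => probe(port)));
  return results.every(Boolean);
}
```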

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/index.js` around lines 2345 - 2376, The catch block in
maybeAutoDaemonBootstrapFromIndex currently treats a reachable backend (checked
by waitForPortOpen(SERVER_PORT)) as full success even though install also starts
a frontend unit; update the catch handling in maybeAutoDaemonBootstrapFromIndex
(where handleDaemonCommand is called) to also probe the frontend service before
returning true — e.g., call waitForPortOpen for the frontend port (use your
FRONTEND_PORT or CLOUDCLI_FRONTEND_PORT constant) or verify the frontend unit
status, and only call printSystemDaemonActiveNotice/return true when both
backend and frontend are healthy; if backend is reachable but frontend is not,
log/console.warn the original error.message (preserve the caught error text) and
surface a clear warning instead of returning early so startServer() can continue
the correct failure path.


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@server/daemon-manager.js`:
- Around line 105-129: The arg-parsing loop in server/daemon-manager.js (the
block that sets parsed.options.serverPort, frontendPort, mode, and databasePath)
currently consumes args[++i] without validating the next token; update each
value-taking branch (the '--port'/'-p', '--frontend-port', '--mode'/'-m', and
'--database-path' branches) to first check that args[i+1] exists and does not
startWith('-') before incrementing/consuming it, and if the check fails return
or surface a clear error (for example set an error on the parsed result or
throw) indicating the missing value for that specific flag rather than silently
falling back to defaults or swallowing the next flag; keep the existing
--flag=value parsing branches unchanged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 90c90b80-b0e7-438c-be93-cad694283e61

📥 Commits

Reviewing files that changed from the base of the PR and between 4bb0d0b and 03ae672.

📒 Files selected for processing (3)
  • server/cli.js
  • server/daemon-manager.js
  • server/index.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/cli.js

Comment thread server/daemon-manager.js
Comment on lines +105 to +129
for (; i < args.length; i++) {
const arg = args[i];
if (arg === '--port' || arg === '-p') {
parsed.options.serverPort = args[++i];
} else if (arg.startsWith('--port=')) {
parsed.options.serverPort = arg.split('=')[1];
} else if (arg === '--frontend-port') {
parsed.options.frontendPort = args[++i];
} else if (arg.startsWith('--frontend-port=')) {
parsed.options.frontendPort = arg.split('=')[1];
} else if (arg === '--mode' || arg === '-m') {
parsed.options.mode = (args[++i] || '').toLowerCase();
} else if (arg.startsWith('--mode=')) {
parsed.options.mode = (arg.split('=')[1] || '').toLowerCase();
} else if (arg === '--database-path') {
parsed.options.databasePath = args[++i];
} else if (arg.startsWith('--database-path=')) {
parsed.options.databasePath = arg.split('=')[1];
} else if (arg === '--help' || arg === '-h') {
parsed.subcommand = 'help';
} else {
parsed.options.extraArgs = parsed.options.extraArgs || [];
parsed.options.extraArgs.push(arg);
}
}

⚠️ Potential issue | 🟡 Minor

Value-taking flags silently fall back to defaults when the value is missing or is another flag.

--port, --frontend-port, --mode, and --database-path all read args[++i] without checking that the next token exists and isn’t itself a flag. For example:

  • cloudcli daemon install --mode system --port leaves parsed.options.serverPort === undefined, so configuredPort silently falls back to 3001 (line 501) and installs on the default port instead of surfacing the mistake.
  • cloudcli daemon install --port --mode user assigns serverPort = '--mode', which does get caught by the numeric check at line 506, but with a confusing error message and --mode is lost.

Recommend validating that the consumed token is present and not another flag, and erroring out otherwise.

🛠️ Sketch
+    const takeValue = (flag) => {
+        const next = args[++i];
+        if (next === undefined || next.startsWith('-')) {
+            throw new Error(`Missing value for ${flag}`);
+        }
+        return next;
+    };
     for (; i < args.length; i++) {
         const arg = args[i];
         if (arg === '--port' || arg === '-p') {
-            parsed.options.serverPort = args[++i];
+            parsed.options.serverPort = takeValue(arg);
         } else if (arg.startsWith('--port=')) {
             parsed.options.serverPort = arg.split('=')[1];
         } else if (arg === '--frontend-port') {
-            parsed.options.frontendPort = args[++i];
+            parsed.options.frontendPort = takeValue(arg);
         } else if (arg.startsWith('--frontend-port=')) {
             parsed.options.frontendPort = arg.split('=')[1];
         } else if (arg === '--mode' || arg === '-m') {
-            parsed.options.mode = (args[++i] || '').toLowerCase();
+            parsed.options.mode = takeValue(arg).toLowerCase();
         } else if (arg.startsWith('--mode=')) {
             parsed.options.mode = (arg.split('=')[1] || '').toLowerCase();
         } else if (arg === '--database-path') {
-            parsed.options.databasePath = args[++i];
+            parsed.options.databasePath = takeValue(arg);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/daemon-manager.js` around lines 105 - 129, The arg-parsing loop in
server/daemon-manager.js (the block that sets parsed.options.serverPort,
frontendPort, mode, and databasePath) currently consumes args[++i] without
validating the next token; update each value-taking branch (the '--port'/'-p',
'--frontend-port', '--mode'/'-m', and '--database-path' branches) to first check
that args[i+1] exists and does not startWith('-') before incrementing/consuming
it, and if the check fails return or surface a clear error (for example set an
error on the parsed result or throw) indicating the missing value for that
specific flag rather than silently falling back to defaults or swallowing the
next flag; keep the existing --flag=value parsing branches unchanged.
