20 Mistakes That Quietly Destroy JavaScript/TypeScript Codebases (Part 2)
Common JS/TS patterns that feel fine until they don't. 9 mistakes, before/after code for each. (11 min)
In Part 1, I covered the foundational mistakes: type safety, error handling, and architecture. The kind that shapes how your codebase grows.
This post covers the rest: the runtime and code quality mistakes that don't break your build but do break production. Code Hygiene, Async & Performance, Testing & Validation.
Share this post & I’ll send you some rewards for the referrals.
Give Your AI Agent Eyes on the Web (Partner)
MCP servers eat 72% of your agent’s context window before it reads a single user message. There’s a simpler way.
Bright Data CLI gives coding agents like Claude Code, Cursor, and Copilot direct access to real-time web data - from the terminal. No MCP schema bloat. No server setup. Just one command:
brightdata scrape https://any-website.com → structured JSON
Scrape any URL with automatic CAPTCHA bypass. Search Google/Bing/Yandex. Extract structured data from 40+ platforms (Amazon, LinkedIn, Instagram, TikTok, YouTube, Reddit, and more).
One install. Works with 46+ AI agents. 10-32x cheaper than MCP for the same tasks.
(Thanks to BrightData for partnering on this post.)
12. Mutating Function Parameters
interface Order {
id: string;
total: number;
discountApplied?: boolean;
}
// ❌ Surprise mutation
function applyDiscount(order: Order, discount: number) {
order.total *= 1 - discount / 100; // Mutates the original!
order.discountApplied = true;
return order;
}
The caller passes an order object and gets it back mutated. Every other reference to that object now sees the changed values. This creates action-at-a-distance bugs that are nearly impossible to trace.
The fix:
// ✅ Return a new object
function applyDiscount(order: Order, discount: number): Order {
return {
...order,
total: order.total * (1 - discount / 100),
discountApplied: true,
};
}
Functions that return new values instead of mutating inputs are easier to test, easier to reason about, and compose naturally.
Use readonly parameter types to enforce this at compile time.
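A minimal sketch of what that enforcement looks like: `Readonly<Order>` marks every property read-only at the type level, so accidental mutation becomes a compile error while the function still returns a fresh object.

```typescript
interface Order {
  id: string;
  total: number;
  discountApplied?: boolean;
}

// Readonly<Order> makes `order.total *= ...` a compile error
// instead of a silent mutation of the caller's object.
function applyDiscount(order: Readonly<Order>, discount: number): Order {
  // order.total = 0; // ❌ TS2540: Cannot assign to 'total' because it is a read-only property
  return {
    ...order,
    total: order.total * (1 - discount / 100),
    discountApplied: true,
  };
}

const original: Order = { id: "o-1", total: 100 };
const discounted = applyDiscount(original, 20);
console.log(original.total, discounted.total); // 100 80
```

One caveat: `Readonly<T>` is shallow, so nested objects need `Readonly` applied to their own types (or a deep-readonly helper) to get the same protection.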
13. Leaking Memory with Uncleared Listeners, Timers, and Subscriptions
// ❌ Event listener that outlives the thing it's attached to
class WebSocketManager {
private ws?: WebSocket;
private handleMessage = (event: MessageEvent) => { /* ... */ };
private handleError = (event: Event) => { /* ... */ };
connect(url: string) {
this.ws = new WebSocket(url);
this.ws.addEventListener("message", this.handleMessage);
this.ws.addEventListener("error", this.handleError);
// Health check every 30 seconds — but nobody ever stops it
setInterval(() => this.ping(), 30_000);
}
ping() { /* ... */ }
}
The setInterval runs forever. The event listeners hold references to this, keeping the entire class instance (and everything it references) alive in memory.
Multiply this by reconnections and you’ve got a slow leak that crashes your Node.js process at 3am on a Saturday.
The tricky part: memory leaks don’t show up in tests. They show up after hours or days of uptime.
If your Node.js process memory keeps climbing when traffic is flat, run node --inspect and take heap snapshots 5 minutes apart — growing object counts point straight at the leak.
The fix:
// ✅ Track everything. Clean up everything.
class WebSocketManager {
private ws?: WebSocket;
// Arrow-function class fields: auto-bound `this` AND stable references
// so removeEventListener can actually find them.
private handleMessage = (event: MessageEvent) => { /* ... */ };
private handleError = (event: Event) => { /* ... */ };
private pingInterval?: ReturnType<typeof setInterval>;
connect(url: string) {
this.ws = new WebSocket(url);
this.ws.addEventListener("message", this.handleMessage);
this.ws.addEventListener("error", this.handleError);
this.pingInterval = setInterval(() => this.ping(), 30_000);
}
disconnect() {
this.ws?.removeEventListener("message", this.handleMessage);
this.ws?.removeEventListener("error", this.handleError);
clearInterval(this.pingInterval);
this.ws?.close();
}
ping() { /* ... */ }
}
Every addEventListener needs a removeEventListener. Every setInterval needs a clearInterval. Every subscription needs an unsubscribe.
If your class has a connect or start, it needs a disconnect or stop. No exceptions.
One snag worth flagging: removeEventListener only removes a handler when you pass the same function reference that was added. Regular methods don’t auto-bind.
If you write addEventListener("message", this.handleMessage.bind(this)), the .bind returns a new function each time and the removal silently no-ops.
Arrow-function class fields (shown above) give you both a stable reference and a this that points at the instance.
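A tiny runnable demonstration of that silent no-op, using a plain EventTarget so no WebSocket is needed:

```typescript
// ❌ Each .bind() creates a brand-new function, so the removal matches nothing.
const leakyTarget = new EventTarget();
let leakedCalls = 0;
const handler = { onPing() { leakedCalls++; } };

leakyTarget.addEventListener("ping", handler.onPing.bind(handler));
leakyTarget.removeEventListener("ping", handler.onPing.bind(handler)); // silent no-op
leakyTarget.dispatchEvent(new Event("ping"));
console.log(leakedCalls); // 1 — the "removed" listener still fired

// ✅ Bind once, keep the reference, and removal actually works.
const cleanTarget = new EventTarget();
let cleanCalls = 0;
const handler2 = { onPing() { cleanCalls++; } };
const stable = handler2.onPing.bind(handler2);

cleanTarget.addEventListener("ping", stable);
cleanTarget.removeEventListener("ping", stable); // same reference → removed
cleanTarget.dispatchEvent(new Event("ping"));
console.log(cleanCalls); // 0
```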
14. Never Cancelling Async Operations
// ❌ Fetch that can't be stopped
async function searchUsers(query: string) {
const response = await fetch(`/api/users?q=${query}`);
return response.json();
}
// User types "al", "ali", "alic", "alice" — 4 requests fly out
// The response for "al" might arrive AFTER "alice"
// Now your UI shows results for "al" while the search box says "alice"
No cancellation means wasted requests, race conditions, and stale data rendering on screen.
In React, this is the #1 cause of "my component shows old data" bugs.
The fix:
// ✅ AbortController — the native cancellation primitive
function searchUsers(query: string, signal?: AbortSignal) {
return fetch(`/api/users?q=${query}`, { signal }).then((r) => r.json());
}
// In the caller — cancel the previous request before starting a new one
let controller: AbortController | null = null;
function onSearchChange(query: string) {
controller?.abort(); // Cancel whatever's in flight
controller = new AbortController();
searchUsers(query, controller.signal)
.then(setResults)
.catch((err) => {
if (err.name !== "AbortError") throw err; // Ignore expected aborts
});
}
AbortController works with fetch, Node.js streams, database drivers, and most async APIs.
In React, use it inside useEffect cleanup.
In Node.js, pass it to long-running operations so callers can cancel them.
For CPU-bound or polling code that doesn't natively accept a signal, check signal.aborted between iterations and bail out early. If your async function doesn't accept a signal, it's a foot-gun waiting to fire.
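A sketch of that check-between-iterations pattern, using a hypothetical batch job (the per-item work is a stand-in, but the `signal.aborted` check is the real technique):

```typescript
// Hypothetical CPU-bound batch job that doesn't natively accept a signal:
// check `signal.aborted` at each safe stopping point and bail out early.
function processBatch(items: number[], signal: AbortSignal): number[] {
  const results: number[] = [];
  for (const item of items) {
    if (signal.aborted) break; // cooperative cancellation point
    results.push(item * item); // pretend this is expensive work
  }
  return results;
}

const controller = new AbortController();
const partial = processBatch([1, 2, 3], controller.signal); // [1, 4, 9]
controller.abort();
const none = processBatch([1, 2, 3], controller.signal); // [] — bails immediately
```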
15. No HTTP / fetch Timeouts
// ❌ A fetch with no timeout is a fetch that can hang forever
async function getUser(id: string) {
const response = await fetch(`/api/users/${id}`);
return response.json();
}
fetch has no default timeout. If the upstream service is slow or hung, the request waits indefinitely.
On a server under load, that means every slow request consumes a connection and a chunk of your concurrency budget — slow upstreams cascade into your service hanging too.
Same shape on the client: the spinner spins forever and the user reloads the tab.
The fix:
// ✅ AbortSignal.timeout — the modern, native way
async function getUser(id: string) {
const response = await fetch(`/api/users/${id}`, {
signal: AbortSignal.timeout(5000), // 5s — fails fast if the server doesn't respond
});
return response.json();
}
AbortSignal.timeout(ms) (Node 17.3+, all modern browsers) returns a signal that aborts itself after the timeout. Combine it with the cancellation pattern from #14 using AbortSignal.any([userSignal, AbortSignal.timeout(5000)]) when you want both user cancellation and a hard ceiling.
Pick a timeout for every outbound HTTP call.
The right value depends on the operation: 2–5 seconds for user-facing reads, 30+ seconds for bulk imports, but never “no limit”.
A “missing” timeout is the silent default that bites you under load.
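Here is that combination spelled out. AbortSignal.any (Node 20+, modern browsers) produces a signal that trips as soon as any of its sources does, so user cancellation and the timeout share one signal. The function name is illustrative, not a library API:

```typescript
// One signal that aborts on EITHER user cancellation OR a 5s ceiling.
function getUserWithCeiling(id: string, userSignal: AbortSignal) {
  const signal = AbortSignal.any([userSignal, AbortSignal.timeout(5000)]);
  return fetch(`/api/users/${id}`, { signal }).then((r) => r.json());
}

// The combined signal trips as soon as a source aborts:
const user = new AbortController();
const combined = AbortSignal.any([user.signal, AbortSignal.timeout(5000)]);
user.abort();
console.log(combined.aborted); // true
```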
16. Running Independent Async Operations Sequentially
// ❌ Sequential — total time = sum of all operations
const user = await getUser(id);
const orders = await getOrders(id);
const notifications = await getNotifications(id);
// If each takes 200ms, total = 600ms
If the operations don’t depend on each other, running them sequentially wastes time.
The fix:
// ✅ Parallel — total time = slowest operation
const [user, orders, notifications] = await Promise.all([
getUser(id),
getOrders(id),
getNotifications(id),
]);
// If each takes 200ms, total = 200ms
Use Promise.all when all operations must succeed. Use Promise.allSettled when some can fail independently — like loading a dashboard where each widget fetches its own data.
Keep sequential await for operations where each step depends on the previous one.
One related foot-gun: don’t use arr.forEach(async ...) for any of this!
forEach ignores the promise its callback returns, so you fire N parallel async calls and the function returns before any of them complete, including writes that haven’t happened yet.
Logs say “done”; the database says otherwise.
Use Promise.all(arr.map(...)) for parallel, or for...of with await for sequential.
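A runnable before/after of exactly that failure, with a setTimeout standing in for a real database write:

```typescript
const saved: number[] = [];
async function save(n: number): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 10)); // fake I/O latency
  saved.push(n);
}

// ❌ forEach drops the returned promises — this resolves before any write lands.
async function seedBroken(items: number[]): Promise<number> {
  items.forEach(async (n) => { await save(n); });
  return saved.length; // what the caller observes at "completion": 0 writes
}

// ✅ map + Promise.all actually waits for every write.
async function seedParallel(items: number[]): Promise<number> {
  await Promise.all(items.map(save));
  return saved.length;
}
```

Calling `seedBroken([1, 2, 3])` resolves with 0: the logs say "done" while all three writes are still in flight.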
17. Blocking the Event Loop
// ❌ Sync I/O on a hot path — the entire process freezes here
import fs from "node:fs";
app.get("/config", (req, res) => {
const config = fs.readFileSync("./config.json", "utf8"); // blocks!
res.json(JSON.parse(config));
});
// ❌ Heavy CPU work on the main thread
app.post("/report", (req, res) => {
const csv = generateMassiveCsv(req.body); // 800ms of pure CPU
res.send(csv);
});
Node.js runs your code on a single thread. While that thread is busy, nothing else runs — no other requests, no timers, no I/O callbacks. A 200ms sync read or a 500ms JSON.parse on a fat payload pauses every concurrent user of your service. Under any real load this looks like random latency spikes that nobody can reproduce locally.
The usual offenders: fs.readFileSync, crypto.pbkdf2Sync, child_process.execSync, parsing huge JSON or XML payloads, regex with catastrophic backtracking, and tight loops over big arrays.
The fix:
// ✅ Async I/O — the event loop keeps serving other requests
import fs from "node:fs/promises";
app.get("/config", async (req, res) => {
const config = await fs.readFile("./config.json", "utf8");
res.json(JSON.parse(config));
});
// ✅ Move CPU-heavy work off the main thread
import { Worker } from "node:worker_threads";
app.post("/report", async (req, res) => {
const worker = new Worker("./csv-worker.js", { workerData: req.body });
worker.once("message", (csv) => res.send(csv));
});
For I/O, the rule is simple: never use the *Sync variant in request-handling code.
For CPU-heavy work, move it to a worker thread (worker_threads) or a background queue (BullMQ, etc.). For regex, audit any pattern that contains nested quantifiers ((a+)+) — those are how a 50-char user input becomes a 30-second freeze.
How to spot it in production: enable Node’s built-in perf_hooks.monitorEventLoopDelay() or watch for event_loop_lag in your APM.
A loop delay above ~50ms during normal traffic means something is blocking, and it’s almost always one of the patterns above.
18. Using Date for Everything
// ❌ Timezone roulette
const meetingTime = new Date("2024-03-15T10:00:00");
// What timezone is this? The answer: it depends on where the code runs.
JavaScript's Date always represents an instant (UTC milliseconds), but parsing a date-time string without an offset uses the local timezone of the machine running the code. Engines agree on this; that's the spec. The problem is that your machines disagree on what "local" means:
// Your server (UTC): parses as 2024-03-15T10:00:00.000Z
// Your laptop (UTC+2): parses as 2024-03-15T10:00:00.000+02:00 → 08:00 UTC
// Your US colleague (UTC-5): parses as 2024-03-15T10:00:00.000-05:00 → 15:00 UTC
// Same string, three different instants in time.
Store that in a database, render it for a user in Tokyo, and you’ve got a meeting that nobody shows up to at the right time.
The fix:
// ✅ Use Temporal (shipping in Firefox, polyfill elsewhere) or date-fns with explicit timezones
import { Temporal } from "@js-temporal/polyfill";
const meeting = Temporal.ZonedDateTime.from({
year: 2024,
month: 3,
day: 15,
hour: 10,
minute: 0,
timeZone: "Europe/Sofia",
});
// ✅ Or with date-fns v3+ (lighter weight, no polyfill needed)
import { TZDate } from "@date-fns/tz";
const meetingInSofia = new TZDate("2024-03-15T10:00:00", "Europe/Sofia");
Check your support matrix before you drop the Temporal polyfill.
If Temporal feels too heavy, date-fns with @date-fns/tz is solid and tree-shakeable.
The point isn't which library. It's that you pick one that forces timezone-awareness instead of letting Date guess.
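Whichever library you pick, the durable storage pattern underneath is the same: persist the instant in UTC plus the IANA zone name, and rebuild the wall-clock time only at display. A plain-Date sketch of that shape (the library's job is to make this conversion impossible to get wrong):

```typescript
// Persist: an unambiguous instant plus the zone it should be shown in.
const stored = {
  instantUtc: new Date(Date.UTC(2024, 2, 15, 8, 0)).toISOString(), // "2024-03-15T08:00:00.000Z"
  timeZone: "Europe/Sofia", // IANA zone name — never a raw offset
};

// Display: reconstruct local wall-clock time for the stored zone.
const display = new Intl.DateTimeFormat("en-GB", {
  timeZone: stored.timeZone,
  dateStyle: "medium",
  timeStyle: "short",
}).format(new Date(stored.instantUtc));
console.log(display); // 10:00 local — Sofia is UTC+2 in mid-March
```

Storing the zone name (not the offset) matters because offsets change with DST; "Europe/Sofia" stays correct year-round.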
19. Testing for Coverage, Not for Value
// ❌ 100% coverage, 0% confidence
it("should create an instance", () => {
const service = new UserService(mockDeps);
expect(service).toBeDefined(); // Yes... constructors construct things
});
it("should call the database", async () => {
await service.getUser("123");
expect(mockDb.findById).toHaveBeenCalledWith("123");
// Congrats, you've tested that your code... calls code
});
These tests verify implementation details, not behavior.
They break on every refactor and catch zero bugs. (Spy assertions like toHaveBeenCalledWith are fine alongside behavior assertions, for example, when you need to verify a side effect on a mocked dependency. They’re not fine as a substitute.)
The fix: Test behavior. What comes out given what goes in?
// ✅ Tests that verify behavior
it("returns the user when found", async () => {
mockDb.findById.mockResolvedValue({ id: "123", name: "Alice" });
const user = await service.getUser("123");
expect(user).toEqual({ id: "123", name: "Alice" });
});
it("throws NotFoundError when user does not exist", async () => {
mockDb.findById.mockResolvedValue(null);
await expect(service.getUser("123")).rejects.toThrow(NotFoundError);
});
it("applies discount correctly", () => {
expect(calculateDiscount(100, "SAVE20")).toBe(80);
expect(calculateDiscount(100, "INVALID")).toBe(100);
expect(calculateDiscount(0, "SAVE20")).toBe(0);
});
Test the contract, not the implementation.
A good test should survive a refactor that doesn't change behavior.
20. Not Validating Input at the Boundary
// ❌ Trust-based programming
app.post("/users", async (req, res) => {
const user = await db
.insertInto("users")
.values(req.body) // Whatever you send, we store
.returningAll()
.executeTakeFirstOrThrow();
res.json(user);
});
req.body could be anything. A missing field crashes your database query. An extra field like { role: 'admin' } silently grants privilege escalation. That’s classic mass assignment, and it’s the single most common way one of these handlers becomes a security incident.
(Prototype pollution via __proto__ is a separate concern with its own mitigations, but mass assignment is what you’ll actually see in production logs.)
The fix: Validate at the edge, trust internally.
import { z } from "zod";
const CreateUserSchema = z.object({
name: z.string().min(1).max(100),
email: z.string().email(),
role: z.enum(["admin", "user"]).default("user"), // in real code, derive "admin" server-side; don't accept it from the client
});
app.post("/users", async (req, res) => {
const input = CreateUserSchema.parse(req.body);
// input is typed and validated — only the fields you defined, nothing extra
const user = await db
.insertInto("users")
.values(input)
.returningAll()
.executeTakeFirstOrThrow();
res.json(user);
});
Zod gives you runtime validation and TypeScript types from a single schema definition. The .parse() call strips unknown fields by default, so no mass assignment attacks.
Validate at every boundary: API routes, queue consumers, webhook handlers, file parsers.
Once data passes validation, trust it downstream: no defensive checks scattered through your business logic.
One pairing worth naming: input validation (this section) and parameterized queries (the Kysely .values(input) call above parameterizes for you) solve different attacks — mass assignment vs. SQL injection.
They’re complementary, not interchangeable; both belong at the boundary, never in your business logic.
Note: SQL injection deserves its own deep dive, which I’ll cover in a future post.
📌 TL;DR
In this post, we covered:
Code Hygiene: parameter mutation
Async & Performance: memory leaks, cancellation, HTTP timeouts, sequential operations, blocking the event loop, Date
Testing & Validation: coverage vs. value, input validation
If you missed Part 1, start there. It covers the foundational mistakes in type safety, error handling, and architecture that shape everything else.
Thanks for reading, and stay awesome!
Follow me on LinkedIn | Twitter(X) | Threads
Thank you for supporting this newsletter.
Consider sharing this post with your friends and get rewards.
You are the best! 🙏