
Performance

Overview

Kdu is designed to be performant for most common use cases without much need for manual optimizations. However, there are always challenging scenarios where extra fine-tuning is needed. In this section, we will discuss what you should pay attention to when it comes to performance in a Kdu application.

First, let's discuss the two major aspects of web performance:

  • Page Load Performance: how fast the application shows content and becomes interactive on the initial visit. This is usually measured using web vital metrics like Largest Contentful Paint (LCP) and First Input Delay (FID).

  • Update Performance: how fast the application updates in response to user input. For example, how fast a list updates when the user types in a search box, or how fast the page switches when the user clicks a navigation link in a Single-Page Application (SPA).

While it would be ideal to maximize both, different frontend architectures tend to affect how easy it is to attain desired performance in these aspects. In addition, the type of application you are building greatly influences what you should prioritize in terms of performance. Therefore, the first step of ensuring optimal performance is picking the right architecture for the type of application you are building:

  • Consult Ways of Using Kdu to see how you can leverage Kdu in different ways.

  • Jason Miller discusses the types of web applications and their respective ideal implementation / delivery in Application Holotypes.

Profiling Options

To improve performance, we need to first know how to measure it. There are a number of great tools that can help in this regard:

For profiling load performance of production deployments:

  • PageSpeed Insights

  • WebPageTest

For profiling performance during local development:

  • The Performance panel in your browser's devtools (e.g. Chrome DevTools)

Page Load Optimizations

There are many framework-agnostic aspects for optimizing page load performance - check out this web.dev guide for a comprehensive round-up. Here, we will primarily focus on techniques that are specific to Kdu.

Bundle Size and Tree-shaking

One of the most effective ways to improve page load performance is shipping smaller JavaScript bundles. Here are a few ways to reduce bundle size when using Kdu:

  • Use a build step if possible.

    • Many of Kdu's APIs are "tree-shakable" if bundled via a modern build tool (see the sketch after this list). For example, if you don't use the built-in <Transition> component, it won't be included in the final production bundle. Tree-shaking can also remove other unused modules in your source code.

    • When using a build step, templates are pre-compiled so we don't need to ship the Kdu compiler to the browser. This saves 14kb of min+gzipped JavaScript and avoids the runtime compilation cost.

  • Be cautious of size when introducing new dependencies! In real world applications, bloated bundles are most often a result of introducing heavy dependencies without realizing it.

    • If using a build step, prefer dependencies that offer ES module formats and are tree-shaking friendly. For example, prefer lodash-es over lodash.

    • Check a dependency's size and evaluate whether it is worth the functionality it provides. Note that if the dependency is tree-shaking friendly, the actual size increase will depend on the APIs you actually import from it. Tools like bundlejs.com can be used for quick checks, but measuring with your actual build setup will always be the most accurate.

  • If you are using Kdu primarily for progressive enhancement and prefer to avoid a build step, consider using petite-kdu (only 6kb) instead.
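
As a rough sketch of the tree-shaking point above (assuming Kdu exposes the usual createApp() and ref() named exports, in the same way other APIs are imported from 'kdu' elsewhere in this guide), importing only the APIs you use lets the bundler drop everything else:

// main.js - only the named exports actually imported (createApp, ref)
// end up in the production bundle; unused exports such as the
// <Transition> component can be dropped by the bundler.
import { createApp, ref } from 'kdu'

createApp({
  setup() {
    // a single piece of reactive state for the demo
    const count = ref(0)
    return { count }
  }
}).mount('#app')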

Code Splitting

Code splitting is where a build tool splits the application bundle into multiple smaller chunks, which can then be loaded on demand or in parallel. With proper code splitting, features required at page load can be downloaded immediately, with additional chunks being lazy loaded only when needed, thus improving performance.

Bundlers like Rollup (which Wite is based upon) or webpack can automatically create split chunks by detecting the ESM dynamic import syntax:

// lazy.js and its dependencies will be split into a separate chunk
// and only loaded when `loadLazy()` is called.
function loadLazy() {
  return import('./lazy.js')
}

Lazy loading is best used on features that are not immediately needed after initial page load. In Kdu applications, this is typically used in combination with Kdu's Async Component feature to create split chunks for component trees:

import { defineAsyncComponent } from 'kdu'

// a separate chunk is created for Foo.kdu and its dependencies.
// it is only fetched on demand when the async component is
// rendered on the page.
const Foo = defineAsyncComponent(() => import('./Foo.kdu'))

If using client-side routing via Kdu Router, it is strongly recommended to use async components as route components. See Lazy Loading Routes for more details.
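
In practice, lazy loading a route usually boils down to pointing the route record at a dynamic import. Below is a minimal sketch; the kdu-router package name, the createRouter() / createWebHistory() helpers, and the ./views/UserDetails.kdu path are assumptions for illustration:

// router.js - the lazily imported view becomes its own chunk,
// fetched only the first time its route is visited.
import { createRouter, createWebHistory } from 'kdu-router'

export const router = createRouter({
  history: createWebHistory(),
  routes: [
    {
      path: '/users/:id',
      // split into a separate chunk, loaded on navigation
      component: () => import('./views/UserDetails.kdu')
    }
  ]
})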

SSR / SSG

Pure client-side rendering suffers from slow time-to-content. This can be mitigated with Server-Side Rendering (SSR) or Static Site Generation (SSG). Check out the SSR Guide for more details.

Update Optimizations

Props Stability

In Kdu, a child component only updates when at least one of its received props has changed. Consider the following example:

<ListItem
  k-for="item in list"
  :id="item.id"
  :active-id="activeId" />

The <ListItem> component uses its id and activeId props to determine whether it is the currently active item. While this works, the problem is that whenever activeId changes, every <ListItem> in the list has to update!

Ideally, only the items whose active status changed should update. We can achieve that by moving the active status computation into the parent, and making <ListItem> directly accept an active prop instead:

<ListItem
  k-for="item in list"
  :id="item.id"
  :active="item.id === activeId" />

Now, for most components the active prop will remain the same when activeId changes, so they no longer need to update. In general, the idea is to keep the props passed to child components as stable as possible.
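
For reference, a <ListItem> written this way only needs to read the pre-computed active prop. This is a hypothetical sketch of the component, not the exact code from the example above:

<!-- ListItem.kdu -->
<script>
export default {
  // `active` is pre-computed by the parent, so this component
  // only updates when its own `id` or `active` prop changes.
  props: ['id', 'active']
}
</script>

<template>
  <li :class="{ active }">
    <slot />
  </li>
</template>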

k-once

k-once is a built-in directive that can be used to render content that relies on runtime data but never needs to update. The entire sub-tree it is used on will be skipped for all future updates. Consult its API reference for more details.
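
For instance, a greeting that interpolates runtime data once and never changes afterwards could be rendered with k-once (a hypothetical snippet; the user object is assumed to exist in the component's state):

<!-- rendered once using the initial runtime data,
     then skipped during all subsequent updates -->
<header k-once>
  <h1>Welcome back, {{ user.name }}</h1>
</header>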

k-memo

k-memo is a built-in directive that can be used to conditionally skip the update of large sub-trees or k-for lists. Consult its API reference for more details.
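
As a sketch of the typical use case (assuming k-memo accepts an array of dependency values as described in its API reference, and that list and selected exist in the component's state), a large k-for list can skip re-rendering items whose memoized values have not changed:

<!-- each item only re-renders when its memoized value
     (whether it is the selected item) changes -->
<div
  k-for="item in list"
  :key="item.id"
  k-memo="[item.id === selected]">
  <p>ID: {{ item.id }} - selected: {{ item.id === selected }}</p>
  <p>...more expensive child nodes</p>
</div>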

General Optimizations

The following tips affect both page load and update performance.

Reduce Reactivity Overhead for Large Immutable Structures

Kdu's reactivity system is deep by default. While this makes state management intuitive, it does create a certain level of overhead when the data size is large, because every property access triggers proxy traps that perform dependency tracking. This typically becomes noticeable when dealing with large arrays of deeply nested objects, where a single render needs to access 100,000+ properties, so it should only affect very specific use cases.

Kdu does provide an escape hatch to opt out of deep reactivity by using shallowRef() and shallowReactive(). Shallow APIs create state that is reactive only at the root level and expose all nested objects untouched. This keeps nested property access fast, with the trade-off being that we must now treat all nested objects as immutable, and updates can only be triggered by replacing the root state:

import { shallowRef } from 'kdu'

const shallowArray = shallowRef([
  /* big list of deep objects */
])

// this won't trigger updates...
shallowArray.value.push(newObject)
// this does:
shallowArray.value = [...shallowArray.value, newObject]

// this won't trigger updates...
shallowArray.value[0].foo = 1
// this does:
shallowArray.value = [
  {
    ...shallowArray.value[0],
    foo: 1
  },
  ...shallowArray.value.slice(1)
]

Avoid Unnecessary Component Abstractions

Sometimes we may create renderless components or higher-order components (i.e. components that render other components with extra props) for better abstraction or code organization. While there is nothing wrong with this, do keep in mind that component instances are much more expensive than plain DOM nodes, and creating too many of them due to abstraction patterns will incur performance costs.

Note that reducing only a few instances won't have a noticeable effect, so don't sweat it if the component is rendered only a few times in the app. The best scenario to consider this optimization is again in large lists. Imagine a list of 100 items where each item component contains many child components. Removing one unnecessary component abstraction here could result in a reduction of hundreds of component instances.
