aumiqx labs
experiments that probably
shouldn't work but do.
Foundational primitives built from scratch. Not components. Not frameworks. Things that remove constraints the web has accepted for decades.
Inspired by Pretext — Cheng Lou's 15KB library that proved text measurement doesn't need the DOM. Each experiment here follows the same pattern: find something the browser treats as a black box, rewrite it as pure computation, and see what becomes possible.
@aumiqx/gesture
Pretext for touch.
the problem
Every gesture library on the web is event-driven and DOM-dependent. They hook into browser events, maintain internal state, and only work inside a browser with real elements. Want to detect gestures on a Canvas? In a game engine? On a server from session replay data? You can't.
the insight
A gesture is just a mathematical pattern in a sequence of (x, y, t) coordinates. A swipe is fast displacement along one axis. A tap is low drift over short time. A flick is high velocity over minimal distance. None of this requires the DOM.
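That idea can be sketched as a pure function over samples. The Sample type, thresholds, and classify() below are illustrative only, not @aumiqx/gesture's actual API:

```typescript
// Illustrative sketch: classify a pointer trace as tap, swipe, or unknown.
// Types, names, and thresholds are made up for this example.
interface Sample { x: number; y: number; t: number } // t in milliseconds

function classify(samples: Sample[]): "tap" | "swipe" | "unknown" {
  if (samples.length < 2) return "unknown";
  const first = samples[0];
  const last = samples[samples.length - 1];
  const dx = last.x - first.x;
  const dy = last.y - first.y;
  const dist = Math.hypot(dx, dy); // total displacement in px
  const dt = last.t - first.t;     // elapsed time in ms

  if (dist < 10 && dt < 300) return "tap"; // low drift over short time
  const speed = dist / dt;                 // px per ms
  if (speed > 0.5 && Math.abs(dx) > 3 * Math.abs(dy)) return "swipe"; // fast, one axis
  return "unknown";
}
```

No DOM, no events, no internal state: the same function runs anywhere coordinates exist.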
what this does
Pure-function gesture recognition. Input raw coordinates, get back classified gestures with confidence scores, velocity, direction, curvature, and predicted endpoints. Works in React, Node.js, Canvas, WebGL, React Native, Deno, Bun, or a server analyzing session replays. Zero dependencies.
features
- 7 gesture types (tap, long-press, swipe, flick, pan, double-tap, unknown)
- Real-time prediction mid-gesture via predict()
- Configurable thresholds for all detection parameters
- Full metadata: confidence, velocity, direction, curvature, predicted endpoint
- ~6KB, 0 dependencies, full TypeScript types
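Mid-gesture prediction works because a decaying velocity has a closed-form stopping point. A minimal sketch of the idea — names, signature, and the friction constant are hypothetical, not how predict() is actually implemented:

```typescript
// Hypothetical sketch: extrapolate a gesture's endpoint from its last two
// samples, assuming velocity decays by `friction` each millisecond.
interface Point { x: number; y: number; t: number }

function predictEndpoint(a: Point, b: Point, friction = 0.95): { x: number; y: number } {
  const dt = b.t - a.t || 1;
  const vx = (b.x - a.x) / dt; // px/ms at the latest sample
  const vy = (b.y - a.y) / dt;
  // A velocity decaying by `friction` per step travels v * friction / (1 - friction)
  // further before stopping (geometric series).
  const scale = friction / (1 - friction);
  return { x: b.x + vx * scale, y: b.y + vy * scale };
}
```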
@aumiqx/scroll
Every website uses the same scroll physics. Why?
the problem
The browser's scroll is a black box. Every website on Earth has identical scroll physics — same mass, same friction, same inertia. You can listen to scroll events, you can smooth them (Lenis), but you cannot program the physics. You can't make scroll feel heavier in a reading section or lighter in a gallery.
the insight
Scroll is just position += velocity; velocity *= friction running 60 times per second. The browser's implementation is a black box you can't modify. But you can replace it with a programmable physics engine where every parameter — mass, damping, friction, magnetism — is yours to control.
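The loop that paragraph describes fits in a few lines. A minimal sketch — friction value and settle threshold are illustrative, not @aumiqx/scroll's defaults:

```typescript
// Minimal sketch of a programmable scroll tick: the entire "black box"
// is two lines of state update, run once per frame.
interface ScrollState { position: number; velocity: number }

function tick(state: ScrollState, friction = 0.92): ScrollState {
  const position = state.position + state.velocity; // integrate velocity (px/frame)
  const velocity = Math.abs(state.velocity) < 0.01
    ? 0                              // settle once nearly stopped
    : state.velocity * friction;     // decay — this is the programmable inertia
  return { position, velocity };
}

// You own the loop: run it at 60fps and apply `position` however you like.
let state: ScrollState = { position: 0, velocity: 40 };
for (let frame = 0; frame < 3; frame++) state = tick(state);
```

Once the loop is yours, friction can change per section, per scroll depth, per anything.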
what this does
A complete scroll physics engine. Define per-section friction zones, magnetic snap points that pull scroll toward them, walls with bounce elasticity, and custom mass/damping curves. The engine computes position — you decide what to do with it.
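Two of those forces can be sketched as small velocity transforms applied inside the tick. Names, signatures, and constants here are hypothetical, not @aumiqx/scroll's API:

```typescript
// Hypothetical sketches of two forces layered on top of plain friction.

// A magnetic target pulls scroll toward it with spring-like force.
function applyMagnet(position: number, velocity: number, target: number, strength = 0.02): number {
  return velocity + (target - position) * strength;
}

// A wall clamps position and reflects velocity with energy loss.
function applyWall(position: number, velocity: number, max: number, elasticity = 0.5): [number, number] {
  if (position <= max) return [position, velocity];
  return [max, -velocity * elasticity];
}
```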
features
- Per-section physics: slow reading zones, fast gallery scroll, sticky snap points
- Magnetic targets that pull scroll toward important content
- Bounce walls with configurable elasticity
- Pure computation — no DOM dependency, you apply the position yourself
- ~4KB, works with any framework or vanilla JS
@aumiqx/pixels
You don't need a browser to see what React renders.
the problem
To render a React component to an image (OG images, PDFs, thumbnails), you need either Puppeteer (spawns a full Chromium, 200MB, slow, breaks in CI) or Satori (only supports ~60% of CSS, no grid, no transforms, limited text). There's no fast, full-featured, pure-JS way to render React to pixels.
the insight
Layout is math (Yoga computes flexbox). Text measurement is math (Pretext computes line breaks). Pixel rendering is math (Skia composites shapes). Chain three battle-tested engines together and you have a complete React-to-image pipeline in pure Node.js — no browser, no Puppeteer, no CSS subset limitations.
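"Layout is math" can be made concrete with a toy calculation — a deliberate simplification of what Yoga actually computes, shown here only to make the point:

```typescript
// Toy illustration of layout-as-math: place fixed-width children in a flex row,
// distributing leftover space between them (justify-content: space-between).
// Real flexbox (Yoga) handles grow/shrink, wrapping, and much more.
function layoutRow(containerWidth: number, childWidths: number[]): number[] {
  const used = childWidths.reduce((sum, w) => sum + w, 0);
  const gaps = childWidths.length - 1;
  const gap = gaps > 0 ? (containerWidth - used) / gaps : 0;
  const xs: number[] = [];
  let x = 0;
  for (const w of childWidths) {
    xs.push(x);   // left edge of this child
    x += w + gap;
  }
  return xs;
}
```

Nothing here needs a browser; the same holds for text measurement and rasterization, which is the whole pipeline.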
what this does
React-to-image in pure Node.js. Takes a React component tree with styles, computes layout, measures text, renders to PNG buffer. Uses Yoga (WASM) + Pretext + Canvaskit (Skia WASM). No browser process, works in CI, runs at build time.
features
- Full flexbox layout via Yoga (not the limited Satori subset)
- Text measurement via Pretext (500x faster than DOM)
- Pixel rendering via Canvaskit/Skia (the same engine Chrome uses)
- Pure Node.js — no spawned browser, no Puppeteer dependency
- Build-time OG images, PDFs, thumbnails from React components
these pages are not indexed. this is a lab, not a product.
built by aumiqx