1 Year in Business. 0 Websites We Were Proud Of.
Let me tell you something nobody talks about.
We've been building Aumiqx Technologies for over a year — real products, real clients, real work. AI agents, workflow automations, intelligent systems that actually run parts of businesses.
But every time we tried to build our own website... it felt wrong.
Generic. Another AI startup template that screamed "we used a drag-and-drop builder and called it a day."
We tried everything. With AI. Without AI. With freelancers. Nothing felt like us.
So we did something ironic:
We ran an AI company without a website.
For a year. An AI automation company. Without a website. Let that sink in.
The problem wasn't technical ability. It was that every approach produced something that looked like everyone else. And if you're building a company that claims to be different, your website can't look like it was assembled from the same component library as every other YC clone.
Then We Built It in Hours. Not Days. Hours.
Last week, we sat down and built the entire thing. Not in a sprint. Not in a hackathon with pizza and Red Bull. Just... sat down and built it.
What came out:
- 22 unique sections — each one designed, not templated
- Dark-first design with mandala patterns and terminal aesthetics
- A careers page that doesn't ask for your resume (because that's not how you find good people)
- A 404 page more entertaining than most homepages
- Custom animations — Framer Motion springs, not CSS transitions
- Every pixel intentional
But here's the part that matters:
AI wasn't just a tool this time. It was the team.
Claude (by Anthropic) acted as the architect — writing production-grade code, debugging in real time, and making decisions you'd expect from a senior developer. Not "generate me a component" — actual architectural thinking, refactoring, SEO strategy, and shipping.
Gemini 3 Pro was the frontend specialist — it built the landing-page CTAs, shaped the visual direction, and iterated on component design until it felt right. When you need design-to-code at speed, Gemini is unmatched.
Together with us, they didn't just assist. They shipped.
No agency. No 6-week sprint. Just humans + AI building something we're genuinely proud of.
How We Actually Work Together
Here's what our workflow actually looks like. No corporate process diagrams. Just reality.
I come to Claude with a problem — not a specification. Something like: "Bro, I have this keyword data from Search Console. We have zero traffic. Figure out what we need."
Claude analyzes the data. All 733 keywords. The Search Console CSV. The site structure. The competition. Then presents a strategy — not just "write more content" but specific targets: "conversational ai" has 500K monthly volume with 99,900% YoY growth, low competition, and nobody's written a good guide.
I say "go ahead." Claude deploys 5 parallel agents — one writing the UI component, one creating page routes, one writing 25,000 words of guide content, one updating navigation, one writing tech blog articles. All simultaneously.
While those run, I have new ideas: "Add the tech blogs to homepage too. And make the content about how WE built this." Claude adapts mid-flight. No pushback, no "let me finish first." Just adapts.
This isn't prompt-and-pray. It's a real working relationship.
The dynamic
I bring the vision, the business instinct, the taste, and the "ship it" energy. Claude brings the data analysis, parallel execution, architectural thinking, and the ability to write 25,000 words while I'm still thinking about what to name the section.
The key that makes it work: trust. I don't micromanage every line. Claude doesn't wait for approval on every decision. We've built a rhythm — I describe what I want in plain language, Claude makes smart technical choices, and we iterate fast.
It's the same dynamic as any good engineering partnership. Except one partner never sleeps, never gets frustrated, and can run 15 tasks in parallel.
342 Pages from 3 Data Files
The entire site is generated from three TypeScript files:
- india-ai.ts — 15 cities, each with startup counts, key players, funding data, FAQs
- ai-tools.ts — 11 categories, 49 tools, each with honest reviews, pricing, limitations
- automate.ts — 10 industries, automation blueprints, ROI estimates
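To make the data-driven idea concrete, here's a minimal sketch of what one entry in ai-tools.ts might look like. The field names and the toolRoute helper are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical shape for one entry in ai-tools.ts (illustrative, not the real schema)
interface ToolEntry {
  slug: string;         // drives the /ai-tools/tool/[slug] route
  name: string;
  category: string;
  pricing: string;
  rating?: number;      // refreshed by the daily data pipeline
  limitations: string[]; // honest caveats, not marketing copy
}

const exampleTool: ToolEntry = {
  slug: "example-assistant",
  name: "Example Assistant",
  category: "conversational-ai",
  pricing: "Free tier, then paid plans",
  limitations: ["No on-prem option", "English-only"],
};

// Derive the canonical route (trailing slash, per the SEO setup) from the data
function toolRoute(tool: ToolEntry): string {
  return `/ai-tools/tool/${tool.slug}/`;
}
```

Adding a tool then really is just appending one more object; everything downstream reads from the same array.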
From these three files, the build process generates:
| Route | Pages | How |
|---|---|---|
| /india-ai/[city] | 15 | One page per city |
| /ai-tools/[slug] | 11 | One page per category |
| /ai-tools/tool/[slug] | 49 | One page per tool |
| /ai-tools/industry/[slug] | 10 | Cross-pillar bridges |
| /automate/[slug] | 10 | One page per industry |
| /compare/tools/[slug] | 87 | Auto-generated tool comparisons |
| /compare/cities/[slug] | 105 | Auto-generated city comparisons |
| /compare/industries/[slug] | 45 | Auto-generated industry comparisons |
Every page gets JSON-LD schema, canonical URLs, FAQs, internal links, and proper meta descriptions — all generated automatically. Adding a new tool means adding one object to a TypeScript file. The build does everything else.
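The comparison counts in the table fall straight out of pairwise combinatorics: 15 cities yield C(15,2) = 105 comparison pages and 10 industries yield C(10,2) = 45. A sketch of how those slugs could be enumerated (the slug format is an assumption):

```typescript
// Enumerate every unordered pair of slugs as an "x-vs-y" comparison page
function comparisonSlugs(slugs: string[]): string[] {
  const pairs: string[] = [];
  for (let i = 0; i < slugs.length; i++) {
    for (let j = i + 1; j < slugs.length; j++) {
      pairs.push(`${slugs[i]}-vs-${slugs[j]}`);
    }
  }
  return pairs;
}

// 15 city slugs produce the 105 comparison pages in the table above
const cityPairs = comparisonSlugs(
  Array.from({ length: 15 }, (_, i) => `city-${i + 1}`),
);
```

The 87 tool comparisons are far fewer than C(49,2) = 1,176, which suggests tool pairs are generated only within a category rather than across the whole catalog.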
The daily data pipeline
Every morning at 6 AM IST, a GitHub Actions workflow runs:
- Fetches Google News RSS for all 15 cities and 10 industries
- Pulls GitHub stars and trending repos for open-source tools
- Scrapes G2 ratings for all 49 tools via Google Search snippets
- Aggregates HackerNews, Reddit, and Dev.to discussions
- Pulls Inc42 and YourStory AI feeds (India-specific)
- Generates 1,800+ internal links across 95 pages
- Rebuilds the static site with fresh data
- Deploys to production via FTP
The site is always fresh without anyone touching it. That's the power of build-time data fetching with static export.
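A minimal sketch of the refresh-at-build-time idea: merge freshly scraped ratings into the static tool data, keeping the last known value when a scrape fails. The mergeRatings helper and field names are hypothetical:

```typescript
type Tool = { slug: string; rating: number | null };

// Merge scraped ratings into static data; a failed scrape (undefined)
// falls back to the last known rating instead of blanking the page
function mergeRatings(
  tools: Tool[],
  fresh: Record<string, number | undefined>,
): Tool[] {
  return tools.map((t) => ({
    ...t,
    rating: fresh[t.slug] ?? t.rating,
  }));
}
```

Running this at build time, before the static export, is what keeps pages fresh without runtime fetching.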
Claude Code, MCPs, and Multi-Agent Swarms
The development environment is where it gets interesting. Our setup:
- claude-flow MCP — v3 mode with hierarchical-mesh topology, supporting up to 15 concurrent agents
- 30 Claude Code Skills — from AgentDB (vector search, learning) to SPARC methodology to swarm orchestration
- SuperClaude Framework — 30+ /sc: commands for every development task (/sc:analyze, /sc:implement, /sc:brainstorm, /sc:test)
- 40+ helper scripts — auto-commit, intelligence hooks, pattern learning, security scanning
When we need to ship a feature, we don't write code sequentially. We deploy a swarm — multiple AI agents working in parallel. The architect agent designs the structure. The coder agents implement. The tester validates. The reviewer checks quality. All running simultaneously.
For the content section you're reading right now, 5 agents worked in parallel: one built the reading UI (the layout, typography, table of contents), one wrote 5 educational guides, one created the page routes, one wired up navigation, and one wrote these tech blog articles. All at once.
The SPARC methodology — Specification, Pseudocode, Architecture, Refinement, Completion — gives structure to AI-assisted development. Without it, you get spaghetti. With it, you get production code.
The AI Stack: Claude, Gemini, and Beyond
We don't use just one model. Different tasks need different capabilities:
- Claude Opus — The heavy lifter. Architecture decisions, complex refactoring, writing long-form content with nuance. When we need something that thinks, Opus handles it.
- Claude Sonnet — The workhorse. Most day-to-day coding, component building, quick iterations. Fast enough for real-time development, smart enough for production code.
- Gemini 3 Pro — The frontend specialist. Built the landing-page CTAs, nailed component design and visual direction. When you need design intent translated into interactive code fast, Gemini delivers. Also used for research, keyword analysis, and shaping ideas.
The key insight: model routing matters. Simple tasks don't need Opus. Complex architecture decisions shouldn't go to a fast model. Our hooks system routes tasks to the right model automatically, saving cost without sacrificing quality.
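A hedged sketch of what such a routing hook could look like. The task categories, model identifiers, and heuristic here are illustrative assumptions, not our actual hook code:

```typescript
// Illustrative task taxonomy; real hooks would classify tasks more richly
type TaskKind = "architecture" | "coding" | "frontend" | "research";

// Route each task to a model tier: heavy reasoning to the most capable
// model, everyday coding to the fast one (model names are placeholders)
function routeModel(kind: TaskKind): string {
  switch (kind) {
    case "architecture":
      return "claude-opus";   // deep reasoning, complex refactors
    case "coding":
      return "claude-sonnet"; // fast day-to-day iteration
    case "frontend":
    case "research":
      return "gemini-3-pro";  // design intent and analysis
  }
}
```

The cost savings come from the default path: most tasks are "coding" and never touch the expensive tier.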
Total development cost? Under $50 in API credits for a site that would have cost $10,000+ with a traditional agency. And it's better — because every page has data-driven content, real ratings, and daily updates.
SEO Architecture: Built for Google from Day One
Most developers bolt SEO onto a finished site. We built it into the architecture from the first commit.
Every single page on aumiqx.com has:
- JSON-LD structured data — Organization, WebPage, SoftwareApplication, FAQPage, BreadcrumbList. Google's rich snippets love this.
- Canonical URLs with trailing slashes — no duplicate content issues
- Programmatic meta descriptions — unique per page, generated from data
- FAQ sections with FAQPage schema — targets featured snippets
- Internal linking mesh — 1,800+ links. Every pillar connects to every other pillar.
- Comparison pages — 237 "X vs Y" pages targeting high-intent search queries
- Real G2 ratings — AggregateRating schema with actual star ratings from G2
The sitemap has 339 URLs. Every URL is manually verified to be indexable. The robots.txt is AI-crawler friendly — because we want AI systems to understand our content.
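As an example of programmatic schema, here's a sketch of FAQPage JSON-LD generated from the same data files that build the page. The Faq type and helper name are illustrative; the @type fields follow schema.org:

```typescript
type Faq = { question: string; answer: string };

// Build a schema.org FAQPage object from a page's FAQ data;
// the result gets serialized into a <script type="application/ld+json"> tag
function faqPageSchema(faqs: Faq[]) {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}
```

Because the FAQs live in the data files, the visible FAQ section and the structured data can never drift apart.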
The cross-pillar strategy
The three pillars (India AI, AI Tools, Automate) aren't silos. They're interconnected:
- Tool pages link to industries that use them
- Industry pages link to recommended tools
- City pages link to relevant industries and tools
- Comparison pages connect everything
This distributes link equity across the entire site. Google sees topical authority — not 342 disconnected pages, but a coherent knowledge graph about AI in India.
What Didn't Work (And Why We're Open-Sourcing Everything)
It wasn't all smooth. A year of failed attempts taught us more than the final build. Here's what was painful:
- Agent coordination at scale — Running 15 agents in parallel sounds great until two try to edit the same file. We learned to use hierarchical topology (one coordinator, specialists underneath) instead of pure mesh.
- Content quality control — AI-generated content at scale needs aggressive editing. The first draft of every guide is 70% there. The last 30% — the voice, the specific examples, the opinions — needs a human.
- Context window limits — Large files like ai-tools.ts (1,134 lines) push against context limits. We split data files into logical chunks when possible.
- Permission issues — Background agents sometimes can't write files due to permission settings. We learned to give agents bypass permissions for write operations.
If we started over, we'd:
- Set up the agent permission model first (before writing any code)
- Start with 3-4 agents per swarm, not 8-15 (coordination overhead is real)
- Build the data pipeline before the UI (data drives everything)
- Write the SPARC specification phase more thoroughly — it saves time downstream
The open-source promise
Here's the deal: if the community finds this useful, we're open-sourcing the entire website codebase. Every component. Every animation. Every line of AI-generated code.
Your feedback helps us build better. Our code helps you build faster. Fair trade.
The biggest lesson: AI doesn't replace the developer. It replaces the boring parts of development. The creative decisions, the product instinct, the "this feels right" moments — those are still human. And they're the parts that matter most.
The Numbers
| Metric | Value |
|---|---|
| Total pages | 342 |
| Source data files | 3 |
| Auto-generated comparisons | 237 |
| Internal links | 1,800+ |
| Tools reviewed with real ratings | 49 |
| Cities mapped | 15 |
| Industries covered | 10 |
| Daily data sources | 8 (Google News, GitHub, G2, HN, Reddit, Dev.to, Inc42, YourStory) |
| Sitemap URLs | 339 |
| Schema types per page | 3-5 |
| Claude Code skills used | 30 |
| Development cost (API) | ~$50 |
| Equivalent agency cost | $10,000-15,000+ |