System Architecture

Deployed SaaS products, experimental prototypes, and open-source contributions.

Cloud Architect | Non-Profit

Hope: AI Crisis Support

A scalable, serverless AI chatbot and administrative dashboard built on GCP for a mental health non-profit. Delivers secure, low-latency crisis support with automated safety intercepts and strict zero-trust IAM architecture.

GCP Cloud Functions · Vertex AI (Gemini) · Firestore · React (Vite) · IAM Security

Project Overview

The Need You Here Foundation required a highly accessible, non-clinical crisis support chatbot ("Hope") to connect users with mental health resources, alongside a secure dashboard for counselors to manage a provider registry and review chat logs. Because the application handles highly sensitive crisis and abuse data, the infrastructure required a robust, HIPAA-conscious architecture that prioritized data security, automated emergency routing, and strict access control.

Engineering Highlights

  • Event-Driven AI & Dynamic Tool Calling: The core chat API runs on a Python-based Cloud Function that integrates directly with Vertex AI (Gemini 2.5 Pro). Equipped with a custom dynamic tool, the LLM autonomously queries Firestore in real time to match users with available, city-specific counselors from the foundation's registry.
  • Automated Safety Intercepts: Implemented database-configurable trigger arrays for "Crisis" and "Abuse" keywords. If triggered, the system intercepts the message pre-LLM, instantly returning emergency hotline info and locking the chat widget. Logs are saved via synchronous Firestore batch writes, guaranteeing zero data loss since GCP throttles a serverless function's CPU as soon as its response is sent.
  • Zero-Trust IAM Architecture: Applied a strict "Principle of Least Privilege" strategy. Upon client handoff, developer database read access was permanently revoked via IAM, retaining only the Cloud Functions and Firebase Hosting Admin roles. This shields the development team from liability exposure to sensitive crisis data while still permitting continuous CI/CD deployments.
  • Portable React Widget: Engineered a portable, embeddable React (Vite) frontend that compiles into a single, lightweight JavaScript file with bundled CSS. This allows seamless deployment via a single HTML script tag, complete with secure UUID session management.
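The pre-LLM intercept in the second highlight can be sketched as a pure function. The production version lives in the Python Cloud Function; the TypeScript below, the trigger words, the reply text, and the type names are all illustrative:

```typescript
type TriggerConfig = { crisis: string[]; abuse: string[] }; // loaded from Firestore config in production

interface Intercept {
  category: "crisis" | "abuse";
  reply: string;       // emergency hotline info shown to the user
  lockWidget: boolean; // instructs the chat widget to lock itself
}

// Scan an incoming message BEFORE it is forwarded to the LLM; on a keyword
// match, return an emergency response instead of calling the model at all.
function checkSafetyIntercept(message: string, triggers: TriggerConfig): Intercept | null {
  const lower = message.toLowerCase();
  const hit = (words: string[]) => words.some((w) => lower.includes(w.toLowerCase()));
  if (hit(triggers.crisis)) {
    return { category: "crisis", reply: "[emergency hotline info]", lockWidget: true };
  }
  if (hit(triggers.abuse)) {
    return { category: "abuse", reply: "[emergency hotline info]", lockWidget: true };
  }
  return null; // safe: hand the message to Gemini
}
```

Because the trigger arrays are read from the database rather than hard-coded, counselors can tune them without a redeploy.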
Lead Architect | Live SaaS

MinisterSuite.com

A comprehensive AI-powered 'Operating System' for modern churches. Consolidates seven distinct enterprise tools into a single workflow, automating 15+ hours of weekly admin work.

GCP Cloud Run · Vertex AI (Gemini) · Stripe API · Google Slides API · Firebase

Project Overview

MinisterSuite is a multi-tenant SaaS ecosystem designed to function as the "Operating System" for modern churches. Built to solve the fragmentation of the "ChurchTech" market, it consolidates seven distinct tools into a single workflow. While competitors sell these services piecemeal ($200–$500/month combined), MinisterSuite leverages LLMs to deliver the full suite for a flat $29/month.

Architecture & Key Features

  • Sermon & Outline Engine: A two-stage generative AI tool. Functions as an ideation partner and expands concepts into fully editable, structured 5-point manuscripts.
  • Deep Dive Exegetical Engine: Utilizes RAG (Retrieval-Augmented Generation) to provide seminary-level lexical analysis (Greek/Hebrew) and historical context.
  • Presentation Engine: A visual automation tool integrating with the Google Slides API. Parses text-based outlines and programmatically generates slide decks saved to Google Drive.
  • Announcement Transformer: A "Create Once, Publish Everywhere" (COPE) tool using distinct prompt chains for Bulletin, Email, SMS, and Social Media outputs.
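The COPE fan-out in the Announcement Transformer can be sketched as follows; the channel rules and prompt wording here are placeholders, not the production prompt chains:

```typescript
type Channel = "bulletin" | "email" | "sms" | "social";

// Per-channel constraints; the real prompt chains are more elaborate.
const CHANNEL_RULES: Record<Channel, string> = {
  bulletin: "Formal tone, third person, under 120 words.",
  email: "Warm tone with a greeting and a clear call to action.",
  sms: "Under 160 characters, no formatting.",
  social: "Casual tone, one or two sentences, end with a question.",
};

// One source announcement fans out into four channel-specific prompts,
// each dispatched to the LLM as an independent request.
function buildChannelPrompts(announcement: string): Record<Channel, string> {
  const prompts = {} as Record<Channel, string>;
  for (const channel of Object.keys(CHANNEL_RULES) as Channel[]) {
    prompts[channel] =
      `Rewrite this church announcement for the ${channel} channel.\n` +
      `Constraints: ${CHANNEL_RULES[channel]}\n` +
      `Announcement: ${announcement}`;
  }
  return prompts;
}
```

Keeping the constraints in a plain data structure means tone rules can be tweaked per channel without touching the dispatch logic.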

Technical Competencies

  • SaaS Architecture: Designed a multi-tenant application handling diverse user workflows within a single subscription model.
  • Advanced Prompt Engineering: Developed complex prompt chains to ensure theological accuracy and tonal consistency.
  • SEO Strategy: Implemented JSON-LD structured data (SoftwareApplication) to maximize rich snippet visibility.
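The SoftwareApplication markup mentioned in the last bullet looks roughly like this; the $29/month price comes from the overview above, while the remaining field values are illustrative:

```typescript
// JSON-LD payload for rich snippets; it is serialized into a
// <script type="application/ld+json"> tag in the page head.
const softwareApplicationLd = {
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  name: "MinisterSuite",
  applicationCategory: "BusinessApplication",
  operatingSystem: "Web",
  offers: {
    "@type": "Offer",
    price: "29.00",
    priceCurrency: "USD",
  },
};

// The tag as it would be emitted into the rendered HTML.
const jsonLdScript =
  `<script type="application/ld+json">${JSON.stringify(softwareApplicationLd)}</script>`;
```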
Lead Engineer | Live Platform

StackNab.io

Automated infrastructure and boilerplate factory. Designed to eliminate 'configuration fatigue' by generating type-safe Next.js environments instantly.

Next.js 15 · TypeScript · Firebase Auth & Firestore · Stripe API · GitHub API

Project Overview

StackNab is a highly scalable, programmatic directory designed to eliminate "configuration fatigue" for developers. Instead of spending hours manually wiring up authentication, databases, and payment webhooks, developers can query a matrix of more than 1,000 modern cloud tools and instantly generate a fully configured, type-safe Next.js 15 boilerplate.

Engineering Achievements

  • Automated Data Pipelines: Engineered custom Node.js/TypeScript scripts that interface with the GitHub REST API to dynamically mine, validate, and generate thousands of unique tool combinations.
  • Edge-Level Security: Built a secure, isolated internal Command Center utilizing Next.js Edge Middleware and Firebase Auth to validate HTTP-only session cookies and protect administrative routes.
  • Serverless Architecture: Leveraged a NoSQL database structure (Firestore) and Next.js 15 App Router to dynamically generate and serve highly optimized programmatic SEO pages at scale.
  • SEO & Crawl Optimization: Designed a dynamic XML sitemap with custom priority-capping algorithms to efficiently manage Google's crawl budget across thousands of programmatically generated routes.
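A minimal sketch of the Edge Middleware decision logic from the second achievement; the route prefix, cookie name, and verifier below are assumptions, and the real check validates the HTTP-only session cookie against Firebase Auth:

```typescript
const PROTECTED_PREFIX = "/admin"; // hypothetical Command Center route

// Pure decision logic, kept free of Next.js request/response types so it can
// run at the edge and be unit-tested in isolation.
function guardRoute(
  pathname: string,
  sessionCookie: string | undefined,
  verify: (cookie: string) => boolean,
): "allow" | "redirect" {
  if (!pathname.startsWith(PROTECTED_PREFIX)) return "allow"; // public route
  if (!sessionCookie) return "redirect";                      // no session at all
  return verify(sessionCookie) ? "allow" : "redirect";        // valid vs stale cookie
}

// In middleware.ts this would be wired roughly as:
//   export function middleware(req: NextRequest) {
//     const decision = guardRoute(req.nextUrl.pathname,
//       req.cookies.get("__session")?.value, verifySessionCookie);
//     return decision === "allow"
//       ? NextResponse.next()
//       : NextResponse.redirect(new URL("/login", req.url));
//   }
```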
Full-Stack Cloud Engineer | Founder

TryBreathing.org

A serverless health-tech platform delivering clinically verified respiratory protocols and real-time AI-driven personalized assessments.

Next.js · Gemini 2.5 Flash · Firebase · Vercel Edge · Stripe Webhooks

Platform Overview

TryBreathing is a full-stack, serverless health-tech platform built on Next.js and Google Cloud. It features secure multi-tenant role-based access, automated subscription handling via Stripe webhooks, and real-time clinical analysis powered by Google's Gemini AI.

Core Engineering Highlights

  • Serverless AI Pipeline: Architected a serverless API route utilizing the Google Gemini 2.5 Flash model to process user biometric data (BOLT scores, interoception metrics). Engineered strict prompt constraints that force the LLM to return validated JSON with slugified technique identifiers, instantly mapping user data onto a repository of over 150 clinical techniques.
  • Event-Driven Payment Automation: Built a secure, event-driven webhook endpoint using the Firebase Admin SDK to verify Stripe cryptographic signatures and automatically upgrade user database privileges without manual intervention.
  • Multi-Tenant Role-Based Access Control (RBAC): Designed a secure dashboard ecosystem utilizing Firebase Authentication synced with Firestore to dynamically route and restrict access across three distinct user personas: standard clients, verified clinicians, and platform sponsors.
  • Real-Time Data Management: Implemented Firebase Cloud Storage for user assets and utilized Firestore’s real-time NoSQL capabilities to log deeply nested data, including session histories, physiological state deltas, and saved protocol libraries.
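After the webhook's signature check (stripe.webhooks.constructEvent in a typical Stripe integration), the event type maps to a privilege update written via the Firebase Admin SDK. The mapping can be sketched as below; the event names are standard Stripe webhook types, while the role field is a hypothetical stand-in:

```typescript
// Map verified Stripe event types to a Firestore user-document update.
function roleUpdateForEvent(eventType: string): { field: string; value: boolean } | null {
  switch (eventType) {
    case "checkout.session.completed":
      return { field: "isSubscriber", value: true };  // upgrade on successful payment
    case "customer.subscription.deleted":
      return { field: "isSubscriber", value: false }; // downgrade on cancellation
    default:
      return null; // ignore unrelated events
  }
}
```

Keeping the mapping pure makes the upgrade path deterministic and trivially testable, with no manual intervention in either direction.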
Cloud Architect | Live Tool

SermonTranscribe.com

Automated media pipeline turning raw audio into polished content. Features an event-driven architecture that handles large file processing without server timeouts.

Cloud Functions · Eventarc · Node.js · FFmpeg · Cloud Storage

Event-Driven Architecture

Handling large audio files (100MB+) in a serverless environment requires bypassing standard HTTP timeout limits.

  • Async Processing: Users upload to a signed Google Cloud Storage URL. This triggers an Eventarc event, which spins up a Cloud Function to process the file in the background.
  • FFmpeg Integration: A custom containerized Cloud Function utilizes FFmpeg to strip audio from video and normalize levels before transcription.
  • Cost Optimization: The system scales to zero when not in use, incurring costs only for the actual processing time.
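The FFmpeg step in the pipeline above can be sketched as an argument builder; the filter and output choices here are assumptions, not the production settings:

```typescript
// Build the FFmpeg argument list that strips any video stream and normalizes
// loudness before the audio is handed off to transcription.
function ffmpegArgs(inputPath: string, outputPath: string): string[] {
  return [
    "-i", inputPath,
    "-vn",             // drop the video stream if one exists
    "-af", "loudnorm", // EBU R128 loudness normalization filter
    "-ac", "1",        // downmix to mono for the transcription API
    "-ar", "16000",    // 16 kHz sample rate
    outputPath,
  ];
}

// Inside the containerized Cloud Function, the Eventarc CloudEvent for a
// storage "object finalized" upload supplies the bucket and object name; the
// file is copied to /tmp and converted with
// child_process.execFile("ffmpeg", ffmpegArgs(localIn, localOut), ...).
```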

// Additional prototypes and private repos available upon request.