Dev Help


Dev Help is an AI-powered developer assistant platform: a multi-provider alternative to modern AI tools, extended and customised on top of the open-source LibreChat to add live code execution, semantic document search (RAG), enterprise auth, and a full microservices backend. Built end-to-end as a solo engineering project.




PREVIEW

Year

2025

Client

Personal Project

Category

Full-Stack Development / AI Engineering

Project Duration

3-4 months
The Problem

Developers increasingly rely on AI assistants for day-to-day work, but face a fundamental tension: sending proprietary code and internal docs to third-party AI providers is a compliance and IP risk many teams cannot accept. Consumer tools like ChatGPT also lack developer-critical features: live code execution, document-grounded answers, and searchable history. Meanwhile, vendor lock-in to a single model provider limits flexibility. The gap: no multi-provider AI workspace existed that was both enterprise-ready and genuinely pleasant to use.

Architecture & Engineering

Dev Help is built on top of LibreChat, an open-source, self-hosted AI chat platform, extended with a fully custom microservices backend written in Node.js / Express. The system adds eight purpose-built services: API Gateway, Auth Service, Chatbot Orchestration, Code Execution, File & Storage, RAG (Retrieval-Augmented Generation), Search, and Email. The frontend is React 18 + TypeScript + Tailwind CSS. Data layers include MongoDB, Redis, PostgreSQL + pgVector, and MeiliSearch. Real-time AI responses stream token-by-token via Server-Sent Events (SSE).
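To illustrate the token-by-token SSE streaming described above, here is a minimal sketch. The `formatSseEvent` helper follows the standard SSE wire format; the `streamTokens` route handler and its `res` wiring are illustrative assumptions, not the platform's actual code.

```javascript
// Encode one payload as a Server-Sent Events frame (standard SSE format).
function formatSseEvent(data, event) {
  const lines = [];
  if (event) lines.push(`event: ${event}`);
  lines.push(`data: ${JSON.stringify(data)}`);
  return lines.join("\n") + "\n\n"; // a blank line terminates each event
}

// Hypothetical Express-style handler: push each model token to the client
// as it arrives, then signal completion with a "done" event.
function streamTokens(res, tokens) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    Connection: "keep-alive",
  });
  for (const token of tokens) res.write(formatSseEvent({ token }));
  res.write(formatSseEvent({}, "done"));
  res.end();
}
```

On the client, an `EventSource` (or a fetch-based reader) consumes these frames and appends tokens to the chat window as they stream in.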

AI & Code Execution

A unified provider abstraction layer routes conversations to Google Gemini (Pro, 1.5 Pro, Flash), OpenAI GPT-4 / GPT-3.5, Anthropic Claude, or Grok, all switchable from a single dropdown. The JDoodle API integration enables fully sandboxed, real-time code execution across 20+ languages (Python, JS, TypeScript, Java, C/C++, Go, Rust, Kotlin, Swift, SQL, R, and more) directly inside the chat window, with no copy-pasting to a terminal. Rate limiting (20 executions / hour per user), timeout management, and structured error parsing are all handled server-side. The custom RAG pipeline chunks uploaded documents, generates 768-dimensional embeddings via Google text-embedding-004, stores vectors in pgVector, and retrieves semantically relevant context for every message in under 100 ms.
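The chunking step of a RAG pipeline like the one above can be sketched as overlapping fixed-size windows over a document; each chunk is then embedded and stored. This is a minimal character-based version for illustration; the real pipeline may split on token or sentence boundaries, and the sizes here are assumed, not the production values.

```javascript
// Split text into overlapping windows so context that straddles a chunk
// boundary still appears whole in at least one chunk.
function chunkText(text, chunkSize = 400, overlap = 50) {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks = [];
  const step = chunkSize - overlap; // advance by chunkSize minus the overlap
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

Each returned chunk would then be sent to the embedding model (text-embedding-004 in this architecture) and the resulting vector inserted into pgVector alongside the chunk text.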

Authentication & Security

Enterprise-grade auth was built in from day one rather than patched on later. Local auth uses bcrypt (12 rounds) with JWT access tokens and refresh-token rotation. OAuth2 is supported via Passport.js strategies across providers including Google, GitHub, and Discord. Two-Factor Authentication uses TOTP (via speakeasy, Google Authenticator compatible). Full password-reset and email-verification flows use Nodemailer with HTML templates. Sessions are stored in Redis. Security hardening includes Helmet.js headers, express-validator input sanitisation on every endpoint, and rate limiting across all routes.

Search & Performance

MeiliSearch powers real-time full-text search across a user's entire conversation history. Redis caches sessions, user data (5-minute TTL), and frequent query results. The stateless backend design is horizontally scalable behind a load balancer. Docker + Docker Compose enable one-command local or cloud deployment. Multi-cloud file storage is supported across Local, AWS S3, Azure Blob, and GCS. The platform exposes 100+ REST API endpoints across 10+ database collections. Academic load testing (200 concurrent users, 2,066 automated test cases) validated production-grade reliability before deployment.
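The 5-minute-TTL caching of user data follows the cache-aside pattern, which can be sketched as: check the cache first, and on a miss (or stale entry) fall back to the data source and repopulate. A Map with timestamps stands in for Redis here, and `loader` is an assumed data-access function, not part of the real API.

```javascript
const cache = new Map(); // key -> { value, expiresAt }

// Cache-aside read: serve fresh cached values, otherwise load and cache.
async function getCached(key, ttlMs, loader) {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
  const value = await loader(key); // miss or expired: go to the source
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}
```

With Redis the expiry bookkeeping disappears: `SET key value EX 300` lets the store evict the entry itself after five minutes.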

Scale & Scope

Built over a 4–5 month development cycle, the platform evolved into a large-scale system involving extensive architectural planning, iterative design decisions, and thousands of lines of code. The custom layer built on top of LibreChat spans 4,000+ lines of service code, structured across 8 microservices and 100+ API endpoints, integrating 3 AI providers, 5 OAuth providers, 4 cloud storage backends, and a full vector-search RAG pipeline.

What began as a final-year engineering project at the University of Benin (Department of Computer Engineering) ultimately developed into a production-grade distributed system with measurable performance benchmarks and a published academic paper, demonstrating both the technical scale and real-world applicability of the architecture.

What This Demonstrates

Systems thinking — a complete distributed architecture including microservices, AI orchestration, vector embeddings, SSE streaming, RAG indexing, and multi-cloud storage, designed as an integrated system rather than a single chatbot route.

Full-stack architecture — React + TypeScript frontend → Node.js / Express service layer → MongoDB, Redis, PostgreSQL + pgVector → MeiliSearch → containerized deployment with Docker.

AI engineering — provider abstraction, custom embedding pipelines, RAG retrieval loops, and streaming responses implemented as part of the application architecture rather than simple API integrations.

Security architecture — 2FA, JWT rotation, bcrypt hashing, rate limiting, and hardened security headers incorporated from the earliest stages of development.

Open-source foundation — built on LibreChat and extended through modular service layers, with custom infrastructure separated and documented to maintain a clear and maintainable architecture.

Impact


Practical impact — designed to support both developers and students in solving programming problems faster through AI-assisted guidance, code explanation, and remote code execution. The system combines LLM-based reasoning with retrieval and execution APIs to help users debug errors, understand unfamiliar concepts, and test code directly within the platform.

Learning acceleration — enables students to move beyond static documentation by receiving context-aware explanations, runnable examples, and step-by-step debugging support, reducing the time required to understand complex programming topics.

Developer productivity — provides engineers with a rapid troubleshooting environment where code snippets, errors, and technical questions can be analyzed in real time, integrating AI responses with external documentation and execution environments.

Reliability and performance — the platform demonstrated strong operational stability under testing conditions, achieving high request success rates, low response latency, and thousands of automated tests across functional, integration, and regression suites, indicating readiness for real-world usage scenarios.

Research foundation — the system architecture and evaluation were documented in the Computer Engineering paper “DevHelp: The Design and Implementation of an AI-Powered Chatbot for Addressing Programming Queries Using LLM-Based NLP and Remote Code Execution APIs”, produced at the University of Benin. The paper is available upon request.
