Open Source · Node.js · Under Construction

Build AI apps without
rebuilding the basics

Snipet is a unified layer for input, context, execution, and knowledge retrieval. Focus on your use case, not the infrastructure.

Every AI app ends up rebuilding the same things:

Developers spend 80% of their time orchestrating data flows, managing context windows, and building RAG pipelines instead of refining the actual AI experience. The complexity of stateful AI interactions leads to fragmented architectures and technical debt.

Input handling
Context & memory
RAG pipelines
Orchestration
Tools/actions
Observability
// The manual way (rebuilding basics)
const context = await db.getMemory(userId);
const docs = await vectorStore.search(query);
const prompt = constructPrompt(query, context, docs);
const response = await llm.generate(prompt);
await db.saveMemory(userId, response);
// ... repeat for every feature
Architecture

Snipet provides a unified architecture for AI apps

A modular, event-driven framework designed to handle the entire lifecycle of an AI interaction.

Input → Context → Execution → Output

Core Concepts

The building blocks of the Snipet ecosystem.

Snipet

The primary unit of execution. Encapsulates logic, model configuration, and state management.

Knowledge Source

Raw data origins. Connectors for PDFs, APIs, databases, or real-time streams.

Knowledge Base

Logical grouping of processed data. Manages how information is partitioned and accessed.

Embedding Pipeline

Automated vectorization flow. Handles chunking, embedding, and indexing into vector stores.

Scope

Shared memory layer. Provides persistence and context across multiple snipet executions.

Skill

External capabilities. Allows the AI to perform actions like sending emails or querying APIs.

Execution Pipeline

The core orchestration logic. Manages the sequential or parallel execution of snipets and skills.
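The concepts above can be sketched as a handful of TypeScript types. This is an illustrative model only: the interface and field names here are assumptions chosen to mirror the descriptions, not Snipet's actual API.

```typescript
// Illustrative type model of the core concepts; all names are hypothetical.
interface KnowledgeSource {
  id: string;
  kind: "pdf" | "api" | "database" | "stream"; // raw data origin
  location: string;
}

interface KnowledgeBase {
  name: string;
  sources: KnowledgeSource[]; // logical grouping of processed data
}

interface Scope {
  // shared memory layer, persisted across snipet executions
  get(key: string): unknown;
  set(key: string, value: unknown): void;
}

interface Skill {
  name: string;
  description: string; // read by the model to decide when to call it
  run(args: Record<string, unknown>): Promise<unknown>;
}

interface Snipet {
  // the primary unit of execution
  name: string;
  model: { provider: string; id: string };
  knowledge?: KnowledgeBase;
  skills?: Skill[];
}

const summarizer: Snipet = {
  name: "summarizer",
  model: { provider: "openai", id: "gpt-4o-mini" },
};
```

The point of the model: a snipet bundles logic and configuration, while knowledge, memory, and skills are separate pieces it composes.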

Ingestion Flow

1. Knowledge Source: Connect your data (S3, Notion, SQL)

2. Embedding Pipeline: Chunk and vectorize using your preferred model

3. Indexing: Store and manage vectors in pgvector (PostgreSQL)
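Step 2 of the flow, chunking, can be sketched in a few lines. This is a minimal sliding-window chunker written for illustration; Snipet's actual chunking strategy, embedding call, and pgvector insert are not shown here.

```typescript
// Minimal sliding-window chunker (illustrative, not Snipet's implementation).
// Each chunk overlaps the previous one so context is not cut mid-thought.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
    start += size - overlap; // advance by the stride, keeping the overlap
  }
  return chunks;
}
```

Each chunk would then be embedded and stored in a pgvector column, e.g. with an `INSERT INTO chunks (content, embedding) VALUES ($1, $2)` statement against PostgreSQL.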

Execution Flow

1. Input & Context: Raw input combined with Scope memory

2. Model & Knowledge: RAG retrieval and LLM inference

3. Skills & Output: Tool execution and final response delivery
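The three stages above compose into a single pipeline. The sketch below stubs each stage with a placeholder (the retrieval and inference steps just echo their inputs) so the shape of the orchestration is visible; none of these functions belong to Snipet's real API.

```typescript
// Stage stubs are placeholders, not real retrieval/inference calls.
type Message = { role: "system" | "user"; content: string };

// Stage 1: combine raw input with Scope memory.
async function gatherContext(input: string, scopeMemory: string[]): Promise<Message[]> {
  return [
    { role: "system", content: `Memory: ${scopeMemory.join("; ")}` },
    { role: "user", content: input },
  ];
}

// Stage 2: RAG retrieval + LLM inference (stubbed: report what was retrieved).
async function retrieveAndInfer(messages: Message[], docs: string[]): Promise<string> {
  return `answer using ${docs.length} docs for "${messages[1].content}"`;
}

// Stage 3: tool execution and final response delivery (stubbed).
async function runSkillsAndRespond(draft: string): Promise<string> {
  return draft.toUpperCase();
}

async function execute(input: string): Promise<string> {
  const messages = await gatherContext(input, ["user prefers short answers"]);
  const draft = await retrieveAndInfer(messages, ["doc-a", "doc-b"]);
  return runSkillsAndRespond(draft);
}
```

Each stage only depends on the previous stage's output, which is what lets a framework run them sequentially or swap any one of them out.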

Built for production

Everything you need to ship enterprise-grade AI features.

Modular Architecture

Swap models, vector stores, and memory adapters without changing your core logic.

Integrated RAG

First-class support for retrieval-augmented generation with built-in hybrid search.

Temporal Memory

Scopes manage context history automatically, ensuring the AI remembers what matters.

Multi-Model Support

Use OpenAI, Anthropic, or local models via Ollama with a unified interface.

Skill Execution

Define tools as simple TypeScript functions and let Snipet handle the orchestration.

Observability

Built-in tracing and logging for every step of the execution pipeline.
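Two of the features above, skills as plain TypeScript functions and a unified multi-model interface, can be sketched together. The adapter and skill shapes below are assumptions for illustration; both "providers" are stubs, and none of this is Snipet's real API.

```typescript
// One interface, many providers: calling code never changes when you swap models.
interface ModelAdapter {
  generate(prompt: string): Promise<string>;
}

// Stub adapters standing in for real OpenAI / Ollama clients.
const openaiAdapter: ModelAdapter = {
  async generate(prompt) { return `[openai] ${prompt}`; },
};
const ollamaAdapter: ModelAdapter = {
  async generate(prompt) { return `[ollama] ${prompt}`; },
};

// A skill: a typed function plus a description the model can read.
const sendEmail = {
  name: "send_email",
  description: "Send an email to a recipient",
  async run(args: { to: string; body: string }): Promise<string> {
    // A real implementation would call an email API; stubbed here.
    return `queued email to ${args.to}`;
  },
};

async function answer(model: ModelAdapter, prompt: string): Promise<string> {
  return model.generate(prompt);
}
```

Because `answer` depends only on the `ModelAdapter` interface, switching from OpenAI to a local Ollama model is a one-argument change.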

Community

Snipet is open source and actively evolving

We believe the future of AI infrastructure should be transparent and community-driven. Join us in building a better foundation for AI developers.

Core Development: 40%

Snipet is currently under construction.
The architecture is being finalized and is not yet usable.

Start building with Snipet

Join the growing community of developers building the next generation of AI applications.