@persistentai/fireflow-vfs
Virtual file system for FireFlow with lakeFS integration.
A versioned file storage layer built on lakeFS for Git-like data versioning, DBOS for durable exactly-once workflow execution, PostgreSQL for metadata, and Redis for real-time event distribution. Provides workspaces, branching, collaboration, role-based access control, and event-driven flow triggers.
Key Capabilities
- Versioned storage — every file operation creates a lakeFS commit (full history, diffs, rollback)
- Durable workflows — all mutations run as DBOS workflows with exactly-once guarantees and automatic retry
- Real-time events — 22 event types published via DBOS streams, PostgreSQL audit log, and Redis pub/sub
- Role-based access control — owner / editor / viewer roles with workspace-level permissions
- Flow subscriptions — VFS events trigger FireFlow executions via pattern-matched subscriptions
- Move with confirmation — multi-file move/copy operations with interactive conflict resolution via DBOS.recv()
- Flow packages — .fflow/ folder structure for storing serialized flows with derived event definitions
- Multipart uploads — server-managed upload sessions with presigned S3 URLs
Architecture
The package follows a strict 3-tier architecture where each layer has a single responsibility:
┌─────────────────────────────────────────────────────────────┐
│ tRPC Procedures (fireflow-trpc/procedures/vfs/) │
│ Auth, input validation, URI conversion │
└──────────────────────────┬──────────────────────────────────┘
│ calls
┌──────────────────────────▼──────────────────────────────────┐
│ DBOS Workflows (fireflow-vfs/workflows/) │
│ Orchestration, permission checks, event emission │
│ @DBOS.workflow() — exactly-once, auto-retry │
└──────────────────────────┬──────────────────────────────────┘
│ calls via DBOS.runStep()
┌──────────────────────────▼──────────────────────────────────┐
│ DBOS Steps (fireflow-vfs/steps/) │
│ Atomic I/O: lakeFS API calls, PostgreSQL queries │
│ Retried independently on failure │
└─────────────────────────────────────────────────────────────┘

Infrastructure dependencies:
| Component | Role |
|---|---|
| lakeFS | Object storage with Git semantics (repos, branches, commits) |
| PostgreSQL | Metadata: workspaces, collaborators, events, subscriptions, upload sessions |
| Redis | Real-time pub/sub for frontend event streaming |
| DBOS | Durable workflow execution with checkpoint/replay |
Package Exports
The package has 6 export paths. Use the narrowest import that satisfies your needs to avoid pulling server dependencies into browser bundles.
| Export Path | Environment | Contents |
|---|---|---|
| @persistentai/fireflow-vfs | Client + Server | Types, URI/path utilities, ID generators |
| @persistentai/fireflow-vfs/server | Server only | DB schema, Redis, lakeFS client, DBOS workflows, ACL, upload sessions |
| @persistentai/fireflow-vfs/types | Client + Server | All type definitions (re-exported from main) |
| @persistentai/fireflow-vfs/path | Client + Server | VfsPath, VfsUri classes, path constants, validation |
| @persistentai/fireflow-vfs/lakefs | Server only | lakeFS client creation, error handling |
| @persistentai/fireflow-vfs/generated/lakefs/types.gen | Client + Server | Auto-generated lakeFS TypeScript types |
// Frontend — safe
import { buildUri, normalizePath, parseUri, type VFSEvent } from '@persistentai/fireflow-vfs'
import { VfsPath, VfsUri } from '@persistentai/fireflow-vfs/path'
// Backend — requires DB, Redis, DBOS
import { FileWorkflows, checkPermission, vfsWorkspacesTable } from '@persistentai/fireflow-vfs/server'

URI Scheme (ff://)
All VFS paths use the ff:// URI scheme (client-graph). The URI contains only the path — workspace and branch context are passed as separate parameters in API calls.
ff:///docs/readme.md Path: /docs/readme.md
ff:///src/components/App.tsx Path: /src/components/App.tsx
ff:///folder/ Directory (trailing slash)
ff:/// Root

VfsPath — Immutable validated paths
import { VfsPath } from '@persistentai/fireflow-vfs/path'
const path = VfsPath.parse('/foo/bar.txt')
path.name // 'bar.txt'
path.extension // '.txt'
path.stem // 'bar'
path.depth // 2
path.isDirectory // false
path.parent() // VfsPath('/foo')
path.join('baz') // VfsPath('/foo/bar.txt/baz')
path.withName('x') // VfsPath('/foo/x')
path.withExtension('.md') // VfsPath('/foo/bar.md')
path.toString() // '/foo/bar.txt'
path.toUri() // 'ff:///foo/bar.txt'
path.toLakeFSPath() // 'foo/bar.txt' ← NO leading slash (required by lakeFS API)

VfsUri — URI wrapper with optional context
import { VfsUri } from '@persistentai/fireflow-vfs/path'
const uri = VfsUri.parse('ff:///foo/bar.txt')
uri.path // VfsPath instance
uri.scheme // 'ff'
uri.toString() // 'ff:///foo/bar.txt'
uri.toPathString() // '/foo/bar.txt'
// With workspace context (future)
const ctx = VfsUri.create('/foo', 'ws-123', 'main')
ctx.toString() // 'ff://ws-123@main/foo'
ctx.workspace // 'ws-123'
ctx.branch // 'main'

Path Constants
| Constant | Value | Notes |
|---|---|---|
| VFS_SCHEME | 'ff' | URI scheme identifier |
| VFS_SCHEME_PREFIX | 'ff://' | Full prefix |
| MAX_FILENAME_LENGTH | 255 | Per filename |
| MAX_PATH_LENGTH | 4096 | Total path |
| MAX_PATH_DEPTH | 100 | Segment count |
| RESERVED_FILENAMES | ['.', '..'] | Cannot be created |
| MARKER_FILENAMES | ['.keep', '.gitkeep'] | Internal directory markers |
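A minimal validation routine built from these constants might look like the following sketch. The function name `validateFilename` and the exact error strings are illustrative, not the package's actual API (its validation lives in path/validation.ts):

```typescript
// Illustrative constants mirroring the table above.
const MAX_FILENAME_LENGTH = 255
const RESERVED_FILENAMES = ['.', '..']
const MARKER_FILENAMES = ['.keep', '.gitkeep']

// Hypothetical helper: returns an error string, or null if the name is valid.
function validateFilename(name: string): string | null {
  if (name.length === 0) return 'empty filename'
  if (name.length > MAX_FILENAME_LENGTH) return 'filename too long'
  if (RESERVED_FILENAMES.includes(name)) return `'${name}' is reserved`
  if (name.includes('/')) return 'filename must not contain path separators'
  return null
}

// Marker files are valid names but are treated as internal (hidden from listings).
const isMarker = (name: string): boolean => MARKER_FILENAMES.includes(name)
```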
Critical: normalizePath() vs toLakeFSPath()
import { normalizePath } from '@persistentai/fireflow-vfs'
import { toLakeFSPath } from '@persistentai/fireflow-vfs/path'
normalizePath('ff:///foo/bar') // '/foo/bar' — VFS path (leading slash)
toLakeFSPath('ff:///foo/bar') // 'foo/bar' — lakeFS path (NO leading slash)

lakeFS treats /foo/bar and foo/bar as different paths. Always use toLakeFSPath() when making lakeFS API calls.
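The distinction can be sketched with a simplified, self-contained reimplementation (the `...Sketch` names are illustrative stand-ins for the package exports, not the real source):

```typescript
const VFS_SCHEME_PREFIX = 'ff://'

// Strip the ff:// scheme and normalize to a leading-slash VFS path.
function normalizePathSketch(input: string): string {
  const raw = input.startsWith(VFS_SCHEME_PREFIX)
    ? input.slice(VFS_SCHEME_PREFIX.length)
    : input
  return raw.startsWith('/') ? raw : `/${raw}`
}

// lakeFS object keys have NO leading slash, so strip it off.
function toLakeFSPathSketch(input: string): string {
  return normalizePathSketch(input).replace(/^\/+/, '')
}
```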
Core Concepts
Workspaces
A workspace is the isolation boundary. Each workspace maps to a dedicated lakeFS repository.
interface Workspace {
id: string // 'WS' + 20 alphanumeric chars
name: string // Display name
lakeFSRepoName: string // Internal (never exposed via API)
ownerId: string // FireFlow user ID
ownerType: 'user' | 'flow' | 'team' | 'system'
slug: string // URL-safe identifier (unique per owner)
visibility: 'private' | 'shared' | 'public'
defaultBranch: string // Usually 'main'
settings: WorkspaceSettings // maxBranches, maxFileSize, allowedMimeTypes
}

Repository naming pattern: ff-{owner_type}-{owner_id_truncated}-{slug}
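Assembling that name can be sketched as follows. Note the truncation length is internal to the package; the 8 characters used here are an assumption for illustration only:

```typescript
// Hypothetical sketch of the ff-{owner_type}-{owner_id_truncated}-{slug} pattern.
// ASSUMPTION: truncation to 8 lowercase characters — the real length is internal.
function repoNameSketch(ownerType: string, ownerId: string, slug: string): string {
  const truncated = ownerId.toLowerCase().slice(0, 8)
  return `ff-${ownerType}-${truncated}-${slug}`
}
```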
Branches
Branches are lakeFS branches. Every workspace starts with a main branch. Create feature branches, commit changes, and merge — just like Git.
Files and Directories
Files are lakeFS objects. Directories are virtual — they're defined by common prefixes, not stored as objects. To make an empty directory visible, the system creates a .keep marker file inside it.
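Because directories are just common prefixes, a listing can derive them from flat object keys; the following is a simplified sketch (the helper name and return shape are illustrative, not the package API):

```typescript
// Derive the immediate children of a prefix from flat lakeFS object keys.
function listChildrenSketch(
  keys: string[],
  prefix: string
): { files: string[]; dirs: string[] } {
  const files = new Set<string>()
  const dirs = new Set<string>()
  for (const key of keys) {
    if (!key.startsWith(prefix)) continue
    const rest = key.slice(prefix.length)
    const slash = rest.indexOf('/')
    if (slash === -1) {
      // Hide internal directory markers from file listings.
      if (rest !== '.keep' && rest !== '.gitkeep') files.add(rest)
    } else {
      // Any key below the prefix implies a virtual child directory — this is
      // exactly how an 'empty/.keep' marker makes the empty 'empty/' visible.
      dirs.add(rest.slice(0, slash))
    }
  }
  return { files: [...files].sort(), dirs: [...dirs].sort() }
}
```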
Events
Every mutation emits a VFSEvent with one of 22 event types across 6 categories:
| Category | Events |
|---|---|
| Workspace | WORKSPACE_CREATED, WORKSPACE_DELETED, WORKSPACE_SETTINGS_UPDATED |
| Collaborator | COLLABORATOR_ADDED, COLLABORATOR_REMOVED, COLLABORATOR_ROLE_CHANGED |
| File | FILE_CREATED, FILE_MODIFIED, FILE_DELETED, FILE_MOVED, FILE_COPIED |
| Directory | DIRECTORY_CREATED, DIRECTORY_DELETED |
| Git | COMMITTED, BRANCH_CREATED, BRANCH_DELETED, BRANCH_MERGED, CHECKOUT |
| Operation | CONFIRMATION_REQUESTED, CONFIRMATION_RESPONSE, OPERATION_STARTED, OPERATION_CANCELLED |
Each event type maps to a routing name via getEventName():
getEventName(VFSEventType.FILE_MODIFIED) // 'vfs:file:modified'
getEventName(VFSEventType.BRANCH_CREATED) // 'vfs:branch:created'

Tree Operations
Events carry treeOperations for incremental frontend state updates:
type TreeOperation =
  | { op: 'add', node: TreeNode }
  | { op: 'remove', path: string }
  | { op: 'move', fromPath: string, toPath: string, node: TreeNode }
  | { op: 'update', path: string, updates: Partial<TreeNode> }

Flow Packages
Flows are stored as .fflow/ folder structures:
workspace/
  workflows/
    my-flow.fflow/
      .keep                       # Directory marker
      manifest.json               # { type: 'fflow', flowId, name, version, ... }
      flow.json                   # Serialized flow definition
      events/
        .keep
        on_file_created.ffevent   # Derived event entry points
        on_sync_complete.ffevent
  .fireflow/
    flow-index.json               # Maps flowId → path across workspace

Flow Subscriptions
Flow subscriptions connect VFS events to flow executions. When a VFS event matches a subscription's filters, the VFSRouterWorkflow spawns a flow execution.
interface FlowSubscription {
workspaceId: string
flowId: string
eventName: string // e.g., 'vfs:file:modified'
eventFilters: {
pathPatterns?: string[] // Glob: '*.md', 'docs/**'
branch?: string // Only match specific branch
metadata?: Record<string, unknown> // Key-value matching
}
contextMode: 'trigger_creator' | 'action_performer' | 'workspace_owner' | 'inherit'
enabled: boolean
}

Database Schema
Six tables with the fireflow_ prefix (Drizzle ORM):
| Table | Purpose | PK Prefix | Key Columns |
|---|---|---|---|
| vfs_workspaces | Workspace → lakeFS repo mapping | WS (20) | lakefs_repo_name (unique), owner_type, visibility, settings (JSONB) |
| vfs_collaborators | Access control entries | CO (16) | workspace_id (FK), user_id, role (owner/editor/viewer), accepted_at |
| vfs_events | Audit log + flow triggers | EV (20) | event_type, event_name, path, commit_id, metadata (JSONB) |
| vfs_shares | Public share links | SH (16) | share_token (unique), permission, path_prefix, expires_at |
| vfs_flow_subscriptions | Event → flow routing | FS (16) | flow_id, event_name, event_filters (JSONB), context_mode, enabled |
| vfs_upload_sessions | Multipart upload state | US (20) | lakefs_upload_id, physical_address (hidden from client), expires_at |
Notable indexes:
- vfs_workspaces_owner_slug_idx — unique slug per owner
- vfs_events_workspace_time_idx — efficient subscription queries
- vfs_flow_subs_unique_idx — one subscription per (workspace, flow, eventName)
- vfs_upload_sessions_expires_idx — cleanup job support
Schema file: src/db/schema.ts
DBOS Workflows
All workflow classes extend ConfiguredInstance and use @DBOS.workflow() for exactly-once execution. Each workflow is injected with a Drizzle DB instance at construction.
FileWorkflows
File: src/workflows/file-workflows.ts
| Method | Description | Decorator |
|---|---|---|
| writeFile(workspaceId, branchName, path, content, userId) | Upload + commit + emit FILE_MODIFIED | @DBOS.workflow() |
| readFile(workspaceId, ref, path, userId) | Read from any ref (branch/commit) | — |
| deleteFile(workspaceId, branchName, path, userId) | Delete file or directory recursively + emit FILE_DELETED | @DBOS.workflow() |
| copyFile(workspaceId, branchName, fromPath, toPath, userId) | Copy + commit + emit FILE_COPIED | @DBOS.workflow() |
| createDirectory(workspaceId, branchName, path, userId) | Create .keep marker + emit DIRECTORY_CREATED (idempotent) | @DBOS.workflow() |
FlowWorkflows
File: src/workflows/flow-workflows.ts
| Method | Description |
|---|---|
| createFlowPackage(workspaceId, branchName, path, name, description, userId) | Creates .fflow/ folder with manifest, flow.json, updates flow index |
| saveFlow(workspaceId, branchName, path, flowData, userId) | Updates flow.json, derives .ffevent files, cleans orphaned events |
| getFlowByRef(workspaceId, ref, path, userId) | Reads manifest + flow.json from any ref |
| getFlowById(workspaceId, ref, flowId, userId) | Looks up path from flow index, delegates to getFlowByRef |
| listFlowEvents(workspaceId, ref, path, userId) | Lists .ffevent files in events/ directory |
| deriveEventsWorkflow(workspaceId, flowPath, branchName, userId) | Re-derives events from stored flow.json |
GitWorkflows
File: src/workflows/git-workflows.ts
| Method | Description | Returns |
|---|---|---|
| commit(workspaceId, branchName, message, userId) | Create lakeFS commit + emit COMMITTED | Commit |
| createBranch(workspaceId, branchName, sourceBranch, userId) | Create branch + emit BRANCH_CREATED | Ref |
| deleteBranch(workspaceId, branchName, userId) | Delete branch + emit BRANCH_DELETED | void |
| mergeBranch(workspaceId, sourceBranch, targetBranch, message, userId) | Merge + emit BRANCH_MERGED | MergeResult \| { conflicts } |
| getStatus(workspaceId, branchName, userId) | Get uncommitted changes | DiffList |
| checkoutFile(workspaceId, branchName, path, commitId, userId) | Restore file from historical commit | void |
MoveWorkflows
File: src/workflows/move-workflow.ts
Implements a 4-phase plan → confirm → execute → commit pattern for file/folder moves with conflict resolution:
Phase 1 (STEP): Create operation plan
├─ List all source files recursively
├─ Detect all conflicts at target
└─ Emit OPERATION_STARTED event
Phase 2 (WORKFLOW): Confirmation loop
├─ For each conflict:
│ ├─ Emit CONFIRMATION_REQUESTED event
│ ├─ DBOS.recv('CONFIRM_RESPONSE', 300s timeout)
│ ├─ Apply resolution (or remembered resolution for similar conflicts)
│ └─ Emit CONFIRMATION_RESPONSE event
└─ User can CANCEL at any point
Phase 3 (STEP): Execute confirmed operations
├─ Copy non-skipped files (parallel)
├─ Handle KEEP_BOTH renames: file(1).txt, file(2).txt, ...
└─ Bulk delete source files
Phase 4 (STEP): Commit and emit
├─ Single lakeFS commit (all-or-nothing)
└─ Emit FILE_MOVED event with tree operations

Conflict resolution options:
| Conflict Type | Available Resolutions |
|---|---|
| folder_exists | MERGE, REPLACE, SKIP, CANCEL |
| file_exists | OVERWRITE, KEEP_BOTH, SKIP, CANCEL |
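The KEEP_BOTH renaming from Phase 3 (file(1).txt, file(2).txt, ...) can be sketched as follows; the function name is illustrative and not the package's exact algorithm:

```typescript
// Produce 'file(1).txt', 'file(2).txt', ... until the name is free at the target.
function keepBothNameSketch(name: string, taken: Set<string>): string {
  if (!taken.has(name)) return name
  const dot = name.lastIndexOf('.')
  const stem = dot > 0 ? name.slice(0, dot) : name   // 'file'
  const ext = dot > 0 ? name.slice(dot) : ''          // '.txt'
  for (let i = 1; ; i++) {
    const candidate = `${stem}(${i})${ext}`
    if (!taken.has(candidate)) return candidate
  }
}
```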
TypedFileWorkflows
File: src/workflows/typed-file-workflows.ts
Creates files with pre-creation hooks that set up associated resources:
| Hook | File Extension | Action |
|---|---|---|
| flow | .fflow | Creates Flow resources (manifest, flow.json, flow index) |
| secret | .ffsecret | Creates Secret resources |
| mcp | .ffmcp | Validates MCP server configuration |
WorkspaceWorkflows
File: src/workflows/workspace-workflows.ts
| Method | Description |
|---|---|
| createWorkspace(name, userId, options?) | Create lakeFS repo + DB record + owner collaborator + webhook config |
| deleteWorkspace(workspaceId, userId) | Delete lakeFS repo + DB records (cascading) |
| updateWorkspaceSettings(workspaceId, userId, updates) | Update name/description/visibility |
| addCollaborator(workspaceId, targetUserId, role, invitedByUserId) | Add editor/viewer |
| removeCollaborator(workspaceId, targetUserId, removedByUserId) | Remove collaborator (owner only) |
VFSRouterWorkflow
File: src/workflows/router-workflow.ts
Routes VFS events to matching flow subscriptions:
- Query vfs_flow_subscriptions for matching (workspaceId, eventName, enabled)
- Apply filters: path glob patterns (via minimatch), branch, metadata
- Resolve context mode to determine runnerId:
  - trigger_creator → subscription creator's permissions
  - action_performer → VFS actor's permissions
  - workspace_owner → workspace owner's permissions
  - inherit → subscription creator's permissions (default)
- Spawn flow executions
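The filter step can be sketched without the real minimatch dependency, using a deliberately simplified glob-to-RegExp conversion (assumption: '**' matches across '/', '*' only within a segment; metadata matching omitted):

```typescript
// Simplified stand-in for minimatch: supports '*' and '**' only.
function globToRegExp(pattern: string): RegExp {
  const escaped = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&')
    .replace(/\*\*/g, '§§')      // placeholder so the '*' rule doesn't clobber '**'
    .replace(/\*/g, '[^/]*')     // '*' stays within one path segment
    .replace(/§§/g, '.*')        // '**' crosses segments
  return new RegExp(`^${escaped}$`)
}

interface FiltersSketch { pathPatterns?: string[]; branch?: string }

// Does an event (path + branch) pass a subscription's filters?
function matchesSketch(ev: { path: string; branch: string }, f: FiltersSketch): boolean {
  if (f.branch && f.branch !== ev.branch) return false
  if (f.pathPatterns && !f.pathPatterns.some(p => globToRegExp(p).test(ev.path))) return false
  return true
}
```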
Key Patterns
Workflow Step Pattern
Every write workflow follows the same 4-step structure:
@DBOS.workflow()
async writeFile(workspaceId, branchName, path, content, userId) {
// Step 1: Permission check → returns repoName
const { repoName } = await DBOS.runStep(async () => {
const perm = await checkPermission(this.db, userId, workspaceId, 'write')
if (!perm.allowed) throw new Error(`Permission denied: ${perm.reason}`)
return { repoName: perm.lakeFSRepoName }
})
// Step 2: lakeFS operation
const result = await DBOS.runStep(async () => {
return uploadObject(repoName, branchName, path, content)
})
// Step 3: Create commit (null if no changes)
const commit = await DBOS.runStep(async () => {
return createCommit(repoName, branchName, `Write: ${path}`, userId, workspaceId)
})
// Step 4: Emit event (only if commit succeeded)
if (commit) {
await DBOS.runStep(async () => {
await DBOS.writeStream('vfs-events', eventData) // Durable
await insertEvent(this.db, eventData) // Audit trail
})
}
return result
}

Dual Event Persistence
Events are persisted through two channels:
- DBOS.writeStream() — durable, used for flow triggers and replay
- insertEvent() — PostgreSQL audit log, used for history queries
Redis pub/sub was previously used for real-time delivery but has been replaced by lakeFS webhooks for most events.
Confirmation Loop (DBOS.recv)
DBOS.recv() can only be called at workflow level (not inside steps). The move workflow uses this to pause execution and wait for user input:
// At WORKFLOW level — DBOS.recv() allowed here
const response = await DBOS.recv<ConfirmationMessage>(
'CONFIRM_RESPONSE', // Topic
CONFIRMATION_TIMEOUT_SECONDS // 300s timeout
)

The client sends the response via DBOS.send() to the workflow's topic.
Permission Model (ACL)
File: src/services/acl.ts
Role Permissions
| Role | read | write | admin |
|---|---|---|---|
| owner | Y | Y | Y |
| editor | Y | Y | — |
| viewer | Y | — | — |
| public (visibility) | Y | — | — |
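The matrix above can be expressed as a plain lookup; a sketch with assumed type names (the real implementation lives in src/services/acl.ts):

```typescript
type RoleSketch = 'owner' | 'editor' | 'viewer' | 'public'
type PermSketch = 'read' | 'write' | 'admin'

// Mirrors the role table: owner has everything, editor can write,
// viewer and public visibility grant read only.
const ROLE_PERMISSIONS: Record<RoleSketch, PermSketch[]> = {
  owner: ['read', 'write', 'admin'],
  editor: ['read', 'write'],
  viewer: ['read'],
  public: ['read'],
}

const roleAllows = (role: RoleSketch, perm: PermSketch): boolean =>
  ROLE_PERMISSIONS[role].includes(perm)
```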
Permission Check Flow
checkPermission(db, userId, workspaceId, permission)
│
├─ 1. Workspace exists? → 'Workspace not found'
├─ 2. Is owner? → allowed (all permissions)
├─ 3. Public + read? → allowed (read only)
├─ 4. Is collaborator? → check role permissions
└─ Returns: { allowed, lakeFSRepoName?, role?, reason? }

Type-Safe Assertion
import { assertPermissionAllowed } from '@persistentai/fireflow-vfs/server'
const perm = await checkPermission(db, userId, workspaceId, 'write')
assertPermissionAllowed(perm)
// perm is now typed as AllowedPermission — perm.lakeFSRepoName is guaranteed non-null

Real-Time Subscriptions
File: src/redis/event-bus.ts
Redis Pub/Sub
Events are published to workspace-specific channels:
Channel: vfs:ws:{workspaceId}
Payload: VFSEventPayload (JSON serialized)

// Publish (from workflow steps)
await publishVFSEvent(eventData)
// Subscribe (async generator)
const bus = getEventBus()
for await (const event of bus.subscribe(workspaceId, signal)) {
console.log(event.eventName, event.path)
}
// Subscribe with callback (multiple workspaces)
const unsubscribe = await bus.subscribeWithCallback(
[workspaceId1, workspaceId2],
(event) => handleEvent(event)
)

VFSEventPayload
interface VFSEventPayload {
id: string
workspaceId: string
eventType: VFSEventType
eventName: string // 'vfs:file:modified'
branchName?: string
path?: string
commitId?: string
userId: string
metadata?: Record<string, unknown>
createdAt: string // ISO timestamp
treeOperations?: TreeOperation[]
}

Configuration
Environment Variables
| Variable | Required | Default | Description |
|---|---|---|---|
| ENABLE_VFS | Yes | 'false' | Set to 'true' to enable VFS |
| LAKEFS_ENDPOINT | If VFS enabled | 'http://localhost:8000' | lakeFS API URL |
| LAKEFS_ACCESS_KEY_ID | If VFS enabled | — | lakeFS access key |
| LAKEFS_SECRET_ACCESS_KEY | If VFS enabled | — | lakeFS secret key |
| VFS_S3_ENDPOINT | No | 'http://localhost:9000' | S3-compatible endpoint for direct access |
| VFS_S3_BUCKET | No | 'fireflow-vfs' | Storage bucket |
| REDIS_URL | If VFS enabled | — | Redis connection URL |
| VFS_LAKEFS_WEBHOOK_URL | No | 'http://localhost:3001/webhook/lakefs' | Webhook URL for lakeFS notifications |
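Reading this configuration might look like the following sketch. The variable names and defaults come from the table; the loader function itself is illustrative, not the package's config.ts:

```typescript
// Illustrative config loader for the variables in the table above.
// Throws only for variables the table marks as required when VFS is enabled.
function loadVfsConfigSketch(env: Record<string, string | undefined>) {
  const enabled = env.ENABLE_VFS === 'true'
  if (!enabled) return { enabled: false as const }
  const mustGet = (name: string): string => {
    const v = env[name]
    if (!v) throw new Error(`${name} is required when ENABLE_VFS=true`)
    return v
  }
  return {
    enabled: true as const,
    lakefsEndpoint: env.LAKEFS_ENDPOINT ?? 'http://localhost:8000',
    lakefsAccessKeyId: mustGet('LAKEFS_ACCESS_KEY_ID'),
    lakefsSecretAccessKey: mustGet('LAKEFS_SECRET_ACCESS_KEY'),
    redisUrl: mustGet('REDIS_URL'),
    s3Endpoint: env.VFS_S3_ENDPOINT ?? 'http://localhost:9000',
    s3Bucket: env.VFS_S3_BUCKET ?? 'fireflow-vfs',
  }
}
```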
VFS Constants
const VFS_CONSTANTS = {
DEFAULT_PAGE_SIZE: 100, // Items per folder page
MAX_PAGE_SIZE: 500, // Maximum items per request
INITIAL_DEPTH: 2, // Initial tree depth for subscriptions
MAX_DEPTH: 10, // Maximum tree traversal depth
}

Integration Points
Frontend (Effector stores)
The VFS frontend integration lives in apps/fireflow-frontend/src/store/vfs/:
- Workspace store — active workspace, personal workspace, workspace list
- File store — current path, breadcrumbs, navigation, CRUD operations
- Subscription store — real-time event streaming (snapshot + event buffering)
- Upload store — queued uploads with progress tracking
- File type store — file type definitions registry
- Confirmation store — pending conflict confirmations and responses
Backend API (tRPC procedures)
VFS tRPC procedures in packages/fireflow-trpc/server/procedures/vfs/:
- workspace.ts — create, delete, list, update settings, manage collaborators
- file.ts — list, read, write, delete, stat, move
- upload.ts — presigned URLs, multipart upload management
- subscription.ts — real-time event streaming (WebSocket)
- confirmation.ts — conflict resolution responses
- git.ts — commit, branch, checkout operations
- file-type.ts — file type registry
Execution Engine
VFS context services in packages/fireflow-executor/server/services/context/:
- VFSContextService — read-only: stat, readFile, listDirectory, getPresignedUrl
- VFSWriteContextService — read-write: writeFile, deleteFile, editFile, glob, grep, commit
These services are injected into the ExecutionContext so flow nodes can access VFS.
VFS Nodes
Pre-built VFS nodes in packages/fireflow-nodes/src/nodes/vfs/:
VFSReadNode, VFSWriteNode, VFSEditNode, VFSDeleteNode, VFSListNode, VFSGlobNode, VFSGrepNode, VFSStatNode, VFSExistsNode, VFSPresignedUrlNode, VFSBatchReadNode, VFSCreateFolderNode, VFSGitCommitNode
Directory Structure
src/
├── index.ts # Client-safe exports
├── server.ts # Server-only exports
├── config.ts # VFSConfig, env vars, constants
│
├── db/ # Database layer (Drizzle ORM)
│ ├── schema.ts # 6 tables
│ ├── locks.ts # Advisory locks
│ └── index.ts
│
├── types/ # Complete type system
│ ├── workspace.ts # Workspace, Collaborator, WorkspaceSettings
│ ├── entry.ts # VFSTreeEntry, TreeNode, ListTreeResult
│ ├── events.ts # VFSEventType (22), VFSEvent, TreeOperation
│ ├── file.ts # VFSFile, VFSDirectory
│ ├── file-action.ts # IFileAction definitions
│ ├── file-type-definition.ts # FileTypeDefinition
│ ├── file-types.ts # File type registry
│ ├── git.ts # Branch, Commit, Diff, Merge types
│ ├── operation-plan.ts # OperationPlan, PlanConflict, ConflictResolution
│ ├── confirmation.ts # Confirmation request/response metadata
│ ├── subscription.ts # FlowSubscription, filters, context modes
│ ├── uri.ts # ParsedCGUri, CGUriNamespace
│ └── vfs-subscription-messages.ts # WebSocket message types
│
├── path/ # Path & URI utilities
│ ├── constants.ts # VFS_SCHEME, limits, reserved names
│ ├── vfs-path.ts # VfsPath class (immutable, validated)
│ ├── vfs-uri.ts # VfsUri class (URI wrapper)
│ ├── validation.ts # Path validation and security
│ ├── flow-package.ts # .fflow folder structure helpers
│ ├── utils.ts # basename, dirname, joinPath, normalizePath, toLakeFSPath
│ └── __tests__/ # 7 test files (security, edge cases, etc.)
│
├── services/ # Business logic
│ ├── acl.ts # Permission checking (checkPermission, assertPermissionAllowed)
│ ├── action-resolver.ts # Resolves available UI actions by context
│ ├── operation-planner.ts # Plans multi-file operations with conflict detection
│ └── upload-session.ts # Multipart upload session management
│
├── steps/ # DBOS step functions (atomic I/O)
│ ├── db-steps.ts # insertWorkspace, getWorkspaceById, insertEvent, etc.
│ ├── lakefs-steps.ts # uploadObject, getObject, deleteObject, createCommit, etc.
│ └── event-derivation-step.ts # Derives .ffevent files from flow data
│
├── workflows/ # DBOS workflows
│ ├── file-workflows.ts # FileWorkflows class
│ ├── flow-workflows.ts # FlowWorkflows class
│ ├── git-workflows.ts # GitWorkflows class
│ ├── move-workflow.ts # MoveWorkflows class (confirmation pattern)
│ ├── typed-file-workflows.ts # TypedFileWorkflows class (hooks)
│ ├── workspace-workflows.ts # WorkspaceWorkflows class
│ ├── router-workflow.ts # VFSRouterWorkflow (event → flow routing)
│ └── hooks/ # Pre-creation hooks
│ ├── flow-hook.ts # Create Flow resources
│ ├── secret-hook.ts # Create Secret resources
│ └── mcp-hook.ts # Validate MCP server config
│
├── redis/ # Real-time event distribution
│ ├── client.ts # Redis connection management
│ └── event-bus.ts # RedisEventBus, publishVFSEvent
│
├── lakefs/ # LakeFS client wrapper
│ ├── client.ts # Client initialization
│ └── index.ts # createLakeFSClient, error handling
│
├── utils/ # Utilities
│ ├── id-generate.ts # ID generators (WS, CO, EV, SH, US, CF, OP, PL, FL)
│ ├── action-config.ts # LakeFS webhook action config generation
│ ├── tree-helpers.ts # Build TreeNode from lakeFS objects
│ └── uri.ts # URI parsing and building
│
└── generated/ # Auto-generated lakeFS SDK (~90 files)
└── lakefs/
├── types.gen.ts
├── sdk.gen.ts
        └── ...

Development
Scripts
pnpm --filter @persistentai/fireflow-vfs dev # TypeScript watch mode
pnpm --filter @persistentai/fireflow-vfs build # Compile
pnpm --filter @persistentai/fireflow-vfs typecheck # Type check
pnpm --filter @persistentai/fireflow-vfs test # Run tests
pnpm --filter @persistentai/fireflow-vfs test:coverage # Coverage report

Regenerate lakeFS SDK
pnpm --filter @persistentai/fireflow-vfs generate:lakefs

This fetches the lakeFS OpenAPI spec (v1.48.0) and generates TypeScript types and client code using @hey-api/openapi-ts.
Tests
8 test files covering path utilities and ID generation:
| Test | Coverage |
|---|---|
| path/__tests__/vfs-path.test.ts | VfsPath parsing, operations, serialization |
| path/__tests__/vfs-uri.test.ts | VfsUri parsing, context, conversion |
| path/__tests__/validation.test.ts | Path validation, reserved names, length limits |
| path/__tests__/utils.test.ts | basename, dirname, joinPath utilities |
| path/__tests__/flow-package.test.ts | .fflow folder structure patterns |
| path/__tests__/security.test.ts | Path traversal prevention, injection checks |
| path/__tests__/edge-cases.test.ts | Unicode, special characters, symlinks |
| utils/__tests__/id-generate.test.ts | ID uniqueness, prefixes, format |
Dependencies
Runtime
| Package | Version | Purpose |
|---|---|---|
| @dbos-inc/dbos-sdk | ^4.7.9 | Durable workflows |
| @persistentai/fireflow-types | workspace:* | Foundation types |
| ioredis | ^5.9.2 | Redis pub/sub |
| minimatch | ^10.1.1 | Glob pattern matching (subscription filters) |
| nanoid | ^5.1.6 | ID generation |
| zod | ^3.25.76 | Runtime validation |
Peer Dependencies
| Package | Version |
|---|---|
| drizzle-orm | ^0.44.5 |
| pg | ^8.16.3 |
| superjson | ^2.2.5 |
License
Business Source License 1.1 (BUSL-1.1) — see LICENSE.txt