33 Commits

Author SHA1 Message Date
geoffsee
9810f67af0 Refine tool call execution logic in chat-stream-provider to prevent duplicates, enhance retries for agentic-rag, and improve incremental processing, including test updates. 2025-07-31 18:26:42 -04:00
geoffsee
6c433581d3 Add Agentic RAG Tool integration with test cases
- Implemented an intelligent retrieval-augmented generation system (`agentic_rag`) that decides dynamically when knowledge retrieval is needed.
- Uses Milvus with a large dataset
- Added comprehensive test cases for query analysis, storage, retrieval, and error handling.
- Integrated `AgenticRAGTools` into `chat-stream-provider`, enabling tool-based responses (see the sketch after this commit).
- Updated dependencies with `@zilliz/milvus2-sdk-node` for Milvus integration.
- Updated lander hero title.
2025-07-31 16:15:23 -04:00
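
As a rough sketch of the integration described above (names and paths follow the chat-stream-provider diff later in this changeset; the error for unknown tools is illustrative, not the provider's actual behavior):

```ts
import { agenticRAG, AgenticRAGTools } from '../tools/agentic-rag.ts';

// Tool definitions advertised to the model on the first completion call.
const tools = [AgenticRAGTools];

// Dispatch a streamed tool call to its implementation by name.
async function callFunction(name: string, args: Record<string, unknown>) {
  if (name === 'agentic_rag') {
    return agenticRAG(args as Parameters<typeof agenticRAG>[0]);
  }
  // Illustrative only: the provider in the diff simply ignores unknown tool names.
  throw new Error(`No implementation registered for tool: ${name}`);
}
```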
geoffsee
ae6a6e4064 Refactor model filtering logic into reusable basicFilters function. 2025-07-31 10:10:35 -04:00
geoffsee
67483d08db Update model path handling logic for FireworksAI and refine supported model filtering. 2025-07-27 12:30:47 -04:00
geoffsee
53268b528d Update hero label for home route in renderer routes. 2025-07-27 09:32:46 -04:00
geoffsee
f9d5fc8282 Remove unused submodules and related scripts 2025-07-27 09:00:25 -04:00
geoffsee
ce9bc4db07 "Swap default states for mapActive and aiActive in LandingComponent" 2025-07-17 14:11:15 -04:00
geoffsee
bd71bfcad3 - Remove unused BevyScene and related dependencies.
- Refactor `InstallButton` and relocate it to `install/`.
- Update `Toolbar` imports to reflect the new `InstallButton` structure.
- Introduce `handleInstall` functionality for PWA installation prompt handling.
2025-07-17 14:04:47 -04:00
Geoff Seemueller
4edee1e191 Potential fix for code scanning alert no. 5: Shell command built from environment values
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Signed-off-by: Geoff Seemueller <28698553+geoffsee@users.noreply.github.com>
2025-07-17 13:47:50 -04:00
geoffsee
734f48d4a7 remove webhost in assistant prompt 2025-07-17 13:47:50 -04:00
geoffsee
66363cdf39 set ai as the default landing 2025-07-17 13:47:50 -04:00
geoffsee
36f8fcee87 Integrate PWA service worker registration using virtual:pwa-register. 2025-07-17 13:47:50 -04:00
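
For reference, registration through `virtual:pwa-register` (from vite-plugin-pwa) typically looks like the sketch below; the callbacks are an assumed illustration, not necessarily what this commit wires up:

```ts
import { registerSW } from 'virtual:pwa-register';

// Registers the generated service worker; both callbacks are optional.
const updateSW = registerSW({
  onNeedRefresh() {
    // A new version is available; calling updateSW(true) reloads the page with it.
  },
  onOfflineReady() {
    console.log('App is ready to work offline.');
  },
});
```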
geoffsee
f055cd39fe Update InputMenu to use clientChatStore.reset() instead of setActiveConversation when closing. 2025-07-17 13:47:50 -04:00
geoffsee
0183503425 Refactored layout components and styling: removed unused imports, adjusted positioning and padding for consistency. 2025-07-17 13:47:50 -04:00
geoffsee
a7ad06093a simplify landing page for my peace 2025-07-17 13:47:50 -04:00
geoffsee
c26d2467f4 sweet lander 2025-07-17 13:47:50 -04:00
geoffsee
818e0e672a chat + maps + ai + tools 2025-07-17 13:47:50 -04:00
geoffsee
48655474e3 mirror error handling behavior in cloudflare worker 2025-07-17 13:47:50 -04:00
geoffsee
ffabfd4ce5 add top level error handler to the router 2025-07-17 13:47:50 -04:00
geoffsee
fa5b7466bc Optimize WASM handling and integrate service worker caching.
Removed unused pointer events in BevyScene, updated the Vite config with Workbox for service-worker caching, and adjusted file paths in generate-bevy-bundle.js. Added WASM size optimization for smaller, more efficient builds, skipping optimization for files under 30 MB (sketched below).
2025-07-17 13:47:50 -04:00
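
A minimal sketch of the kind of size gate described here, assuming Binaryen's `wasm-opt` CLI is on the PATH; the 30 MB threshold comes from the commit message and the file path is hypothetical:

```ts
import { statSync } from 'node:fs';
import { execSync } from 'node:child_process';

const THIRTY_MB = 30 * 1024 * 1024;

// Run wasm-opt only on bundles large enough to be worth optimizing.
function maybeOptimizeWasm(wasmPath: string): void {
  const { size } = statSync(wasmPath);
  if (size < THIRTY_MB) {
    console.log(`Skipping wasm-opt for ${wasmPath} (${size} bytes < 30 MB)`);
    return;
  }
  execSync(`wasm-opt -Oz -o ${wasmPath} ${wasmPath}`, { stdio: 'inherit' });
}
```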
geoffsee
6cc5e038a7 Add visible prop to toggle components and simplify conditional rendering 2025-07-17 13:47:50 -04:00
geoffsee
e72198628c Add "Install App" button to the toolbar using react-use-pwa-install library 2025-07-17 13:47:50 -04:00
geoffsee
c0428094c8 **Integrate PWA asset generator and update favicon and manifest configuration** 2025-07-17 13:47:50 -04:00
geoffsee
3901337163 - Refactor BevyScene to replace script injection with dynamic import.
- Update `NavItem` to provide fallback route for invalid `path`.
- Temporarily stub metric API endpoints with placeholders.
2025-07-17 13:47:50 -04:00
geoffsee
0ff8b5c03e * Introduced BevyScene React component in landing-component for rendering a 3D cockpit visualization.
* Included WebAssembly asset `yachtpit.js` for cockpit functionality.
* Added Bevy MIT license file.
* Implemented a service worker to cache assets locally instead of fetching them remotely.
* Added collapsible functionality to **Tweakbox** and included the `@chakra-ui/icons` dependency.
* Applied the `hidden` prop to the Tweakbox Heading for better accessibility.
* Refactored **Particles** component for improved performance, clarity, and maintainability.

  * Introduced helper functions for particle creation and count management.
  * Added responsive resizing with particle repositioning.
  * Optimized animation updates, including velocity adjustments for speed changes.
  * Ensured canvas size and particle state are cleanly managed on component unmount.
2025-07-17 13:47:50 -04:00
geoffsee
858282929c Refactor chat-stream-provider to simplify tool structure. Optimize WeatherTool implementation with enriched function schema. 2025-07-17 13:47:50 -04:00
geoffsee
06b6a68b9b Enable tool-based message generation in chat-stream-provider and add BasicValueTool and WeatherTool.
Updated dependencies to latest versions in `bun.lock`. Modified development script in `package.json` to include watch mode.
2025-07-17 13:47:50 -04:00
dependabot[bot]
de968bcfbd Bump dotenv from 16.6.1 to 17.0.0
Bumps [dotenv](https://github.com/motdotla/dotenv) from 16.6.1 to 17.0.0.
- [Changelog](https://github.com/motdotla/dotenv/blob/master/CHANGELOG.md)
- [Commits](https://github.com/motdotla/dotenv/compare/v16.6.1...v17.0.0)

---
updated-dependencies:
- dependency-name: dotenv
  dependency-version: 17.0.0
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-01 11:00:15 -04:00
dependabot[bot]
6e8d9f3534 Bump react-streaming from 0.3.50 to 0.4.2
Bumps [react-streaming](https://github.com/brillout/react-streaming) from 0.3.50 to 0.4.2.
- [Changelog](https://github.com/brillout/react-streaming/blob/main/CHANGELOG.md)
- [Commits](https://github.com/brillout/react-streaming/compare/v0.3.50...v0.4.2)

---
updated-dependencies:
- dependency-name: react-streaming
  dependency-version: 0.4.2
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-07-01 10:59:30 -04:00
geoffsee
57ad9df087 fix wrangler config schema ref 2025-06-26 14:21:11 -04:00
geoffsee
610cb711a4 fix eslint version to 8 2025-06-26 12:40:54 -04:00
geoffsee
8cba09e67b - Add cache refresh mechanism for providers in ChatService
- Implemented tests to validate caching logic based on provider changes
- Enhanced caching logic to include a provider signature for more precise cache validation (sketched below)
2025-06-25 19:12:14 -04:00
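
A hedged sketch of what signature-based cache validation can look like; the field names, hashing scheme, and helper are illustrative, since the actual ChatService implementation is not shown in this diff:

```ts
interface ProviderConfig {
  name: string;
  endpoint: string;
}

let cachedModels: string[] | undefined;
let cachedSignature = '';

// Derive a stable signature from the active provider configuration.
function providerSignature(providers: ProviderConfig[]): string {
  return providers.map(p => `${p.name}@${p.endpoint}`).sort().join('|');
}

// Refresh the cache only when the provider set has actually changed.
async function getModels(
  providers: ProviderConfig[],
  fetchModels: () => Promise<string[]>,
): Promise<string[]> {
  const signature = providerSignature(providers);
  if (!cachedModels || signature !== cachedSignature) {
    cachedModels = await fetchModels();
    cachedSignature = signature;
  }
  return cachedModels;
}
```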
geoffsee
c8e6da2d15 Add Docker support with Dockerfile and docker-compose.yml, update build scripts and README for containerized deployment.
- Updated server `Bun.build` configuration: adjusted `outdir`, added `format` as `esm`, and set `@open-gsio/client` to external.
- Expanded README with Docker instructions.
- Added new package `@open-gsio/analytics-worker`.
- Upgraded dependencies (`vite`, `typescript`, `bun`) and locked `pnpm` version in `package.json`.
2025-06-25 18:13:52 -04:00
80 changed files with 3722 additions and 339 deletions

3
.dockerignore Normal file
View File

@@ -0,0 +1,3 @@
/.wrangler/
/.open-gsio/
/node_modules/

17
.gitignore vendored
View File

@@ -7,11 +7,22 @@
**/.idea/
**/html/
**/.env
packages/client/public/static/fonts/*
**/secrets.json
**/.dev.vars
packages/client/public/sitemap.xml
packages/client/public/robots.txt
wrangler.dev.jsonc
/packages/client/public/static/fonts/
/packages/client/public/robots.txt
/packages/client/public/sitemap.xml
/packages/client/public/yachtpit.html
/packages/client/public/yachtpit.js
/packages/client/public/yachtpit_bg.wasm
/packages/client/public/assets/
/packages/client/public/apple-touch-icon-180x180.png
/packages/client/public/icon.ico
/packages/client/public/maskable-icon-512x512.png
/packages/client/public/pwa-64x64.png
/packages/client/public/pwa-192x192.png
/packages/client/public/pwa-512x512.png
packages/client/public/yachtpit_bg*
/project/

51
Dockerfile Normal file
View File

@@ -0,0 +1,51 @@
FROM oven/bun:latest
WORKDIR /app
# Copy package files first for better caching
COPY package.json bun.lock ./
# Create directory structure for all packages
RUN mkdir -p packages/ai packages/ai/src/types packages/client packages/coordinators packages/env packages/router packages/schema packages/scripts packages/server packages/services packages/cloudflare-workers/analytics packages/cloudflare-workers/open-gsio
# Copy package.json files for all packages
COPY packages/ai/package.json ./packages/ai/
COPY packages/ai/src/types/package.json ./packages/ai/src/types/
COPY packages/client/package.json ./packages/client/
COPY packages/coordinators/package.json ./packages/coordinators/
COPY packages/env/package.json ./packages/env/
COPY packages/router/package.json ./packages/router/
COPY packages/schema/package.json ./packages/schema/
COPY packages/scripts/package.json ./packages/scripts/
COPY packages/server/package.json ./packages/server/
COPY packages/services/package.json ./packages/services/
COPY packages/cloudflare-workers/analytics/package.json ./packages/cloudflare-workers/analytics/
COPY packages/cloudflare-workers/open-gsio/package.json ./packages/cloudflare-workers/open-gsio/
# Install dependencies
RUN bun install
# Copy the rest of the application
COPY . .
# Create .env file if it doesn't exist
RUN touch ./packages/server/.env
# Build client and server
RUN bun build:client && bun build:server
# Ensure the client directories exist
RUN mkdir -p ./client/public ./client/dist/client
# Copy client files to the expected locations
RUN cp -r ./packages/client/public/* ./client/public/ || true
RUN cp -r ./packages/client/dist/* ./client/dist/ || true
EXPOSE 3003
# Verify server.js exists
RUN test -f ./packages/server/dist/server.js || (echo "Error: server.js not found" && exit 1)
CMD ["bun", "./packages/server/dist/server.js"]

View File

@@ -14,6 +14,7 @@ This is a full-stack Conversational AI.
- [Installation](#installation)
- [Deployment](#deployment)
- [Docker](#docker)
- [Local Inference](#local-inference)
- [mlx-omni-server (default)](#mlx-omni-server)
- [Adding models](#adding-models-for-local-inference-apple-silicon)
@@ -40,6 +41,59 @@ This is a full-stack Conversational AI.
> Note: Subsequent deployments should omit `bun run deploy:secrets`
## Docker
You can run the server using Docker. The image is large but will be slimmed down in future commits.
### Building the Docker Image
```bash
docker compose build
# OR
docker build -t open-gsio .
```
### Running the Docker Container
```bash
docker run -p 3003:3003 \
-e GROQ_API_KEY=your_groq_api_key \
-e FIREWORKS_API_KEY=your_fireworks_api_key \
open-gsio
```
You can omit any environment variables that you don't need. The server will be available at http://localhost:3003.
### Using Docker Compose
A `docker-compose.yml` file is provided in the repository. You can edit it to add your API keys:
```yaml
version: '3'
services:
open-gsio:
build: .
ports:
- "3003:3003"
environment:
- GROQ_API_KEY=your_groq_api_key
- FIREWORKS_API_KEY=your_fireworks_api_key
# Other environment variables are included in the file
restart: unless-stopped
```
Then run:
```bash
docker compose up
```
Or to run in detached mode:
```bash
docker compose up -d
```
## Local Inference
> Local inference is supported for Ollama and mlx-omni-server. OpenAI-compatible servers can be used by overriding OPENAI_API_KEY and OPENAI_API_ENDPOINT.
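In other words, any OpenAI-compatible server can be targeted by pointing the SDK at it. A minimal sketch, assuming the overridden variables are read as-is (the Ollama endpoint shown is only an example):

```ts
import { OpenAI } from 'openai';

// Point the OpenAI SDK at a local OpenAI-compatible server (e.g. Ollama or mlx-omni-server).
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY ?? 'not-needed-for-local',
  baseURL: process.env.OPENAI_API_ENDPOINT, // e.g. http://localhost:11434/v1 for Ollama
});
```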
@@ -116,6 +170,8 @@ I would like to express gratitude to the following projects, libraries, and indi
- [Vike](https://vike.dev/) - Framework for server-side rendering and routing
- [Cloudflare Workers](https://developers.cloudflare.com/workers/) - Serverless execution environment
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- [Marked.js](https://github.com/markedjs/marked) - Markdown Rendering
- [Shiki](https://github.com/shikijs/shiki) - Syntax Highlighting
- [itty-router](https://github.com/kwhitley/itty-router) - Lightweight router for serverless environments
- [MobX-State-Tree](https://mobx-state-tree.js.org/) - State management solution
- [OpenAI SDK](https://github.com/openai/openai-node) - Client for AI model integration

476
bun.lock

File diff suppressed because it is too large.

13
docker-compose.yml Normal file
View File

@@ -0,0 +1,13 @@
version: '3'
services:
open-gsio:
image: open-gsio:latest
build:
pull: false
context: .
dockerfile: Dockerfile
ports:
- "3003:3003"
env_file:
- ./packages/server/.env
restart: unless-stopped

View File

@@ -15,6 +15,7 @@
"server:dev": "bun build:client && (cd packages/server && bun run dev)",
"build": "(cd packages/cloudflare-workers/open-gsio && bun run deploy:dry-run)",
"build:client": "(cd packages/client && bun run vite build)",
"build:server": "bun --filter=@open-gsio/server run build",
"deploy": "(cd packages/cloudflare-workers/open-gsio && bun run deploy)",
"deploy:secrets": "wrangler secret bulk secrets.json -c packages/cloudflare-workers/open-gsio/wrangler.jsonc",
"openai:local:mlx": "packages/scripts/start_inference_server.sh mlx-omni-server",
@@ -30,7 +31,7 @@
"@types/bun": "^1.2.17",
"@typescript-eslint/eslint-plugin": "^8.35.0",
"@typescript-eslint/parser": "^8.35.0",
"eslint": "^9.29.0",
"eslint": "^8",
"eslint-config-prettier": "^10.1.5",
"eslint-plugin-import": "^2.32.0",
"eslint-plugin-prettier": "^5.5.1",
@@ -39,5 +40,9 @@
},
"peerDependencies": {
"typescript": "^5.8.3"
}
},
"dependencies": {
"@chakra-ui/icons": "^2.2.4"
},
"packageManager": "pnpm@10.10.0+sha512.d615db246fe70f25dcfea6d8d73dee782ce23e2245e3c4f6f888249fb568149318637dca73c2c5c8ef2a4ca0d5657fb9567188bfab47f566d1ee6ce987815c39"
}

View File

@@ -40,6 +40,7 @@
"@open-gsio/env": "workspace:*",
"@open-gsio/schema": "workspace:*",
"@anthropic-ai/sdk": "^0.55.0",
"@zilliz/milvus2-sdk-node": "^2.6.0",
"openai": "^5.0.1",
"wrangler": "^4.18.0",
"vitest": "^3.1.4",

View File

@@ -46,7 +46,6 @@ describe('AssistantSdk', () => {
expect(prompt).toContain('# Assistant Knowledge');
expect(prompt).toContain('### Date: ');
expect(prompt).toContain('### Web Host: ');
expect(prompt).toContain('### User Location: ');
expect(prompt).toContain('### Timezone: ');
});

View File

@@ -22,9 +22,10 @@ export class AssistantSdk {
const currentTime = `${now.getHours()}:${formattedMinutes} ${now.getSeconds()}s`;
return `# Assistant Knowledge
## Assistant Name
### open-gsio
## Current Context
### Date: ${currentDate} ${currentTime}
### Web Host: open-gsio.seemueller.workers.dev
${maxTokens ? `### Max Response Length: ${maxTokens} tokens (maximum)` : ''}
### Lexicographical Format: Markdown
### User Location: ${userLocation || 'Unknown'}

View File

@@ -2,6 +2,7 @@ import { Schema } from '@open-gsio/schema';
import type { Instance } from 'mobx-state-tree';
import { OpenAI } from 'openai';
import type Message from '../../../schema/src/models/Message.ts';
import { AssistantSdk } from '../assistant-sdk';
import { ProviderRepository } from '../providers/_ProviderRepository.ts';
import type {

View File

@@ -1,5 +1,5 @@
import { OpenAI } from 'openai';
import { describe, it, expect, vi } from 'vitest';
import { describe, it, expect, vi, beforeEach } from 'vitest';
import {
BaseChatProvider,
@@ -29,7 +29,7 @@ class TestChatProvider extends BaseChatProvider {
}
// Mock dependencies
vi.mock('../../lib/chat-sdk', () => ({
vi.mock('../../chat-sdk/chat-sdk.ts', () => ({
default: {
buildAssistantPrompt: vi.fn().mockReturnValue('Assistant prompt'),
buildMessageChain: vi.fn().mockReturnValue([
@@ -39,6 +39,26 @@ vi.mock('../../lib/chat-sdk', () => ({
},
}));
vi.mock('../../tools/agentic-rag.ts', () => ({
agenticRAG: vi.fn(),
AgenticRAGTools: {
type: 'function',
function: {
name: 'agentic_rag',
description: 'Test agentic RAG tool',
parameters: {
type: 'object',
properties: {
action: { type: 'string', enum: ['search_knowledge'] },
query: { type: 'string' },
collection_name: { type: 'string' },
},
required: ['action', 'collection_name'],
},
},
},
}));
describe('ChatStreamProvider', () => {
it('should define the required interface', () => {
// Verify the interface has the required method
@@ -50,26 +70,616 @@ describe('ChatStreamProvider', () => {
});
});
describe('BaseChatProvider', () => {
it('should implement the ChatStreamProvider interface', () => {
// Create a concrete implementation
const provider = new TestChatProvider();
describe('BaseChatProvider - Model Tool Calling', () => {
let provider: TestChatProvider;
let mockOpenAI: any;
let dataCallback: any;
let commonParams: CommonProviderParams;
// Verify it implements the interface
beforeEach(() => {
vi.clearAllMocks();
provider = new TestChatProvider();
dataCallback = vi.fn();
mockOpenAI = {
chat: {
completions: {
create: vi.fn(),
},
},
};
commonParams = {
openai: mockOpenAI,
systemPrompt: 'Test system prompt',
preprocessedContext: {},
maxTokens: 1000,
messages: [{ role: 'user', content: 'Test message' }],
model: 'gpt-4',
env: {} as any,
};
});
it('should implement the ChatStreamProvider interface', () => {
expect(provider.handleStream).toBeInstanceOf(Function);
expect(provider.getOpenAIClient).toBeInstanceOf(Function);
expect(provider.getStreamParams).toBeInstanceOf(Function);
expect(provider.processChunk).toBeInstanceOf(Function);
});
it('should have abstract methods that need to be implemented', () => {
// This test verifies that the abstract methods exist
// We can't instantiate BaseChatProvider directly, so we use the concrete implementation
const provider = new TestChatProvider();
it('should handle regular text streaming without tool calls', async () => {
// Mock stream chunks for regular text response
const chunks = [
{
choices: [
{
delta: { content: 'Hello ' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: { content: 'world!' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'stop',
},
],
},
];
// Verify the abstract methods are implemented
expect(provider.getOpenAIClient).toBeDefined();
expect(provider.getStreamParams).toBeDefined();
expect(provider.processChunk).toBeDefined();
mockOpenAI.chat.completions.create.mockResolvedValue({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
await provider.handleStream(commonParams, dataCallback);
expect(mockOpenAI.chat.completions.create).toHaveBeenCalledWith(
expect.objectContaining({
tools: expect.arrayContaining([
expect.objectContaining({
type: 'function',
function: expect.objectContaining({
name: 'agentic_rag',
}),
}),
]),
}),
);
});
it('should handle tool calls in streaming response', async () => {
const { agenticRAG } = await import('../../tools/agentic-rag.ts');
vi.mocked(agenticRAG).mockResolvedValue({
success: true,
data: {
results: ['Test result'],
analysis: { needsRetrieval: false },
},
});
// Mock stream chunks for tool call response
const chunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_123',
type: 'function',
function: {
name: 'agentic_rag',
arguments:
'{"action": "search_knowledge", "query": "test query", "collection_name": "test_collection"}',
},
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
// Second stream for response after tool execution
const secondStreamChunks = [
{
choices: [
{
delta: { content: 'Based on the search results: Test result' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'stop',
},
],
},
];
let callCount = 0;
mockOpenAI.chat.completions.create.mockImplementation(() => {
callCount++;
if (callCount === 1) {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
} else {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of secondStreamChunks) {
yield chunk;
}
},
});
}
});
await provider.handleStream(commonParams, dataCallback);
// Verify tool was called
expect(agenticRAG).toHaveBeenCalledWith({
action: 'search_knowledge',
query: 'test query',
collection_name: 'test_collection',
});
// Verify feedback messages were sent
expect(dataCallback).toHaveBeenCalledWith(
expect.objectContaining({
type: 'chat',
data: expect.objectContaining({
choices: expect.arrayContaining([
expect.objectContaining({
delta: expect.objectContaining({
content: expect.stringContaining('🔧 Invoking'),
}),
}),
]),
}),
}),
);
expect(dataCallback).toHaveBeenCalledWith(
expect.objectContaining({
type: 'chat',
data: expect.objectContaining({
choices: expect.arrayContaining([
expect.objectContaining({
delta: expect.objectContaining({
content: expect.stringContaining('📞 Calling agentic_rag'),
}),
}),
]),
}),
}),
);
});
it('should handle tool call streaming with incremental arguments', async () => {
const { agenticRAG } = await import('../../tools/agentic-rag.ts');
vi.mocked(agenticRAG).mockResolvedValue({
success: true,
data: { results: ['Test result'] },
});
// Mock stream chunks with incremental tool call arguments
const chunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_',
type: 'function',
function: { name: 'agentic_rag', arguments: '{"action": "search_' },
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: '123',
function: { arguments: 'knowledge", "query": "test", ' },
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
function: { arguments: '"collection_name": "test_collection"}' },
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
const secondStreamChunks = [
{
choices: [
{
delta: { content: 'Response after tool call' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'stop',
},
],
},
];
let callCount = 0;
mockOpenAI.chat.completions.create.mockImplementation(() => {
callCount++;
if (callCount === 1) {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
} else {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of secondStreamChunks) {
yield chunk;
}
},
});
}
});
await provider.handleStream(commonParams, dataCallback);
// Verify the complete tool call was assembled and executed
expect(agenticRAG).toHaveBeenCalledWith({
action: 'search_knowledge',
query: 'test',
collection_name: 'test_collection',
});
});
it('should prevent infinite tool call loops', async () => {
const { agenticRAG } = await import('../../tools/agentic-rag.ts');
vi.mocked(agenticRAG).mockResolvedValue({
success: true,
data: {
results: [],
analysis: { needsRetrieval: true },
retrieved_documents: [],
},
});
// Mock stream that always returns tool calls
const toolCallChunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_123',
type: 'function',
function: {
name: 'agentic_rag',
arguments:
'{"action": "search_knowledge", "query": "test", "collection_name": "test_collection"}',
},
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
mockOpenAI.chat.completions.create.mockResolvedValue({
async *[Symbol.asyncIterator]() {
for (const chunk of toolCallChunks) {
yield chunk;
}
},
});
await provider.handleStream(commonParams, dataCallback);
// Should detect duplicate tool calls and force completion (up to 5 iterations based on maxToolCallIterations)
// In this case, it should stop after 2 calls due to duplicate detection, but could go up to 5
expect(mockOpenAI.chat.completions.create).toHaveBeenCalledTimes(2);
});
it('should handle tool call errors gracefully', async () => {
const { agenticRAG } = await import('../../tools/agentic-rag.ts');
vi.mocked(agenticRAG).mockRejectedValue(new Error('Tool execution failed'));
const chunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_123',
type: 'function',
function: {
name: 'agentic_rag',
arguments:
'{"action": "search_knowledge", "query": "test", "collection_name": "test_collection"}',
},
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
const secondStreamChunks = [
{
choices: [
{
delta: { content: 'I apologize, but I encountered an error.' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'stop',
},
],
},
];
let callCount = 0;
mockOpenAI.chat.completions.create.mockImplementation(() => {
callCount++;
if (callCount === 1) {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
} else {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of secondStreamChunks) {
yield chunk;
}
},
});
}
});
await provider.handleStream(commonParams, dataCallback);
// Should still complete without throwing
expect(mockOpenAI.chat.completions.create).toHaveBeenCalledTimes(2);
});
it('should prevent duplicate tool calls', async () => {
const { agenticRAG } = await import('../../tools/agentic-rag.ts');
vi.mocked(agenticRAG).mockResolvedValue({
success: true,
data: { results: ['Test result'] },
});
// Mock the same tool call twice
const chunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_123',
type: 'function',
function: {
name: 'agentic_rag',
arguments:
'{"action": "search_knowledge", "query": "test", "collection_name": "test_collection"}',
},
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
// Second iteration with same tool call
let callCount = 0;
mockOpenAI.chat.completions.create.mockImplementation(() => {
callCount++;
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
});
await provider.handleStream(commonParams, dataCallback);
// Should only execute the tool once, then force completion
expect(agenticRAG).toHaveBeenCalledTimes(1);
});
it('should handle invalid JSON in tool call arguments', async () => {
const chunks = [
{
choices: [
{
delta: {
tool_calls: [
{
index: 0,
id: 'call_123',
type: 'function',
function: {
name: 'agentic_rag',
arguments: '{"action": "search_knowledge", "invalid": json}', // Invalid JSON
},
},
],
},
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'tool_calls',
},
],
},
];
const secondStreamChunks = [
{
choices: [
{
delta: { content: 'I encountered an error parsing the tool arguments.' },
finish_reason: null,
},
],
},
{
choices: [
{
delta: {},
finish_reason: 'stop',
},
],
},
];
let callCount = 0;
mockOpenAI.chat.completions.create.mockImplementation(() => {
callCount++;
if (callCount === 1) {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of chunks) {
yield chunk;
}
},
});
} else {
return Promise.resolve({
async *[Symbol.asyncIterator]() {
for (const chunk of secondStreamChunks) {
yield chunk;
}
},
});
}
});
// Should not throw, should handle gracefully
await expect(provider.handleStream(commonParams, dataCallback)).resolves.not.toThrow();
});
});

View File

@@ -1,6 +1,7 @@
import { OpenAI } from 'openai';
import ChatSdk from '../chat-sdk/chat-sdk.ts';
import { agenticRAG, AgenticRAGTools } from '../tools/agentic-rag.ts';
import type { GenericEnv } from '../types';
export interface CommonProviderParams {
@@ -35,12 +36,296 @@ export abstract class BaseChatProvider implements ChatStreamProvider {
});
const client = this.getOpenAIClient(param);
const streamParams = this.getStreamParams(param, safeMessages);
const stream = await client.chat.completions.create(streamParams);
for await (const chunk of stream as unknown as AsyncIterable<any>) {
const shouldBreak = await this.processChunk(chunk, dataCallback);
if (shouldBreak) break;
const tools = [AgenticRAGTools];
const callFunction = async (name, args) => {
if (name === 'agentic_rag') {
return agenticRAG(args);
}
};
// Main conversation loop - handle tool calls properly
let conversationComplete = false;
let toolCallIterations = 0;
const maxToolCallIterations = 5; // Prevent infinite loops
let toolsExecuted = false; // Track if we've executed tools
const attemptedToolCalls = new Set<string>(); // Track attempted tool calls to prevent duplicates
while (!conversationComplete && toolCallIterations < maxToolCallIterations) {
const streamParams = this.getStreamParams(param, safeMessages);
// Only provide tools on the first call, after that force text response
const currentTools = toolsExecuted ? undefined : tools;
const stream = await client.chat.completions.create({ ...streamParams, tools: currentTools });
let assistantMessage = '';
const toolCalls: any[] = [];
for await (const chunk of stream as unknown as AsyncIterable<any>) {
// console.log('chunk', chunk);
// Handle tool calls
if (chunk.choices[0]?.delta?.tool_calls) {
const deltaToolCalls = chunk.choices[0].delta.tool_calls;
for (const deltaToolCall of deltaToolCalls) {
if (deltaToolCall.index !== undefined) {
// Initialize or get existing tool call
if (!toolCalls[deltaToolCall.index]) {
toolCalls[deltaToolCall.index] = {
id: deltaToolCall.id || '',
type: deltaToolCall.type || 'function',
function: {
name: deltaToolCall.function?.name || '',
arguments: deltaToolCall.function?.arguments || '',
},
};
} else {
// Append to existing tool call
if (deltaToolCall.function?.arguments) {
toolCalls[deltaToolCall.index].function.arguments +=
deltaToolCall.function.arguments;
}
if (deltaToolCall.function?.name) {
toolCalls[deltaToolCall.index].function.name += deltaToolCall.function.name;
}
if (deltaToolCall.id) {
toolCalls[deltaToolCall.index].id += deltaToolCall.id;
}
}
}
}
}
// Handle regular content
if (chunk.choices[0]?.delta?.content) {
assistantMessage += chunk.choices[0].delta.content;
}
// Check if stream is finished
if (chunk.choices[0]?.finish_reason) {
if (chunk.choices[0].finish_reason === 'tool_calls' && toolCalls.length > 0) {
// Increment tool call iterations counter
toolCallIterations++;
console.log(`Tool call iteration ${toolCallIterations}/${maxToolCallIterations}`);
// Execute tool calls and add results to conversation
console.log('Executing tool calls:', toolCalls);
// Limit to one tool call per iteration to prevent concurrent execution issues
// Also filter out duplicate tool calls
const uniqueToolCalls = toolCalls.filter(toolCall => {
const toolCallKey = `${toolCall.function.name}:${toolCall.function.arguments}`;
return !attemptedToolCalls.has(toolCallKey);
});
const toolCallsToExecute = uniqueToolCalls.slice(0, 1);
if (toolCallsToExecute.length === 0) {
console.log('All tool calls have been attempted already, forcing completion');
toolsExecuted = true;
conversationComplete = true;
break;
}
// Send feedback to user about tool invocation
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content: `\n\n🔧 Invoking ${toolCallsToExecute.length} tool${toolCallsToExecute.length > 1 ? 's' : ''}...\n`,
},
},
],
},
});
// Add assistant message with tool calls to conversation
safeMessages.push({
role: 'assistant',
content: assistantMessage || null,
tool_calls: toolCallsToExecute,
});
// Execute each tool call and add results
let needsMoreRetrieval = false;
for (const toolCall of toolCallsToExecute) {
if (toolCall.type === 'function') {
const name = toolCall.function.name;
console.log(`Calling function: ${name}`);
// Track this tool call attempt
const toolCallKey = `${toolCall.function.name}:${toolCall.function.arguments}`;
attemptedToolCalls.add(toolCallKey);
// Send feedback about specific tool being called
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content: `📞 Calling ${name}...`,
},
},
],
},
});
try {
const args = JSON.parse(toolCall.function.arguments);
console.log(`Function arguments:`, args);
const result = await callFunction(name, args);
console.log(`Function result:`, result);
// Check if agentic-rag indicates more retrieval is needed
if (
name === 'agentic_rag' &&
result?.data?.analysis?.needsRetrieval === true &&
(!result?.data?.retrieved_documents ||
result.data.retrieved_documents.length === 0)
) {
needsMoreRetrieval = true;
console.log('Agentic RAG indicates more retrieval needed');
// Add context about previous attempts to help LLM make better decisions
const attemptedActions = Array.from(attemptedToolCalls)
.filter(key => key.startsWith('agentic_rag:'))
.map(key => {
try {
const args = JSON.parse(key.split(':', 2)[1]);
return `${args.action} with query: "${args.query}"`;
} catch {
return 'unknown action';
}
});
if (attemptedActions.length > 0) {
safeMessages.push({
role: 'system',
content: `Previous retrieval attempts: ${attemptedActions.join(', ')}. Consider trying a different approach or more specific query.`,
});
}
}
// Send feedback about tool completion
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content: `\n ${JSON.stringify(result)}`,
},
},
],
},
});
// Add tool result to conversation
safeMessages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: JSON.stringify(result),
});
} catch (error) {
console.error(`Error executing tool ${name}:`, error);
// Send feedback about tool error
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content: ` ❌ Error\n`,
},
},
],
},
});
safeMessages.push({
role: 'tool',
tool_call_id: toolCall.id,
content: `Error: ${error.message}`,
});
}
}
}
// Only mark tools as executed if we don't need more retrieval
if (!needsMoreRetrieval) {
toolsExecuted = true;
}
// Send feedback that tool execution is complete
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content: `\n🎯 Tool execution complete. Generating response...\n\n`,
},
},
],
},
});
// Continue conversation with tool results
break;
} else {
// Regular completion - send final response
conversationComplete = true;
}
}
// Process chunk normally for non-tool-call responses
if (!chunk.choices[0]?.delta?.tool_calls) {
// console.log('after-tool-call-chunk', chunk);
const shouldBreak = await this.processChunk(chunk, dataCallback);
if (shouldBreak) {
conversationComplete = true;
break;
}
}
}
}
// Handle case where we hit maximum tool call iterations
if (toolCallIterations >= maxToolCallIterations && !conversationComplete) {
console.log('Maximum tool call iterations reached, forcing completion');
// Send a message indicating we've hit the limit and provide available information
dataCallback({
type: 'chat',
data: {
choices: [
{
delta: {
content:
'\n\n⚠ Maximum tool execution limit reached. Based on the available information, I can provide the following response:\n\n',
},
},
],
},
});
// Make one final call without tools to get a response based on the tool results
const finalStreamParams = this.getStreamParams(param, safeMessages);
const finalStream = await client.chat.completions.create({
...finalStreamParams,
tools: undefined, // Remove tools to force a text response
});
for await (const chunk of finalStream as unknown as AsyncIterable<any>) {
const shouldBreak = await this.processChunk(chunk, dataCallback);
if (shouldBreak) break;
}
}
}
}

View File

@@ -15,10 +15,21 @@ export class FireworksAiChatProvider extends BaseChatProvider {
let modelPrefix = 'accounts/fireworks/models/';
if (param.model.toLowerCase().includes('yi-')) {
modelPrefix = 'accounts/yi-01-ai/models/';
} else if (param.model.toLowerCase().includes('/perplexity/')) {
modelPrefix = 'accounts/perplexity/models/';
} else if (param.model.toLowerCase().includes('/sentientfoundation/')) {
modelPrefix = 'accounts/sentientfoundation/models/';
} else if (param.model.toLowerCase().includes('/sentientfoundation-serverless/')) {
modelPrefix = 'accounts/sentientfoundation-serverless/models/';
} else if (param.model.toLowerCase().includes('/instacart/')) {
modelPrefix = 'accounts/instacart/models/';
}
const finalModelIdentifier = param.model.includes(modelPrefix)
? param.model
: `${modelPrefix}${param.model}`;
console.log('using fireworks model', finalModelIdentifier);
return {
model: `${modelPrefix}${param.model}`,
model: finalModelIdentifier,
messages: safeMessages,
stream: true,
};

View File

@@ -0,0 +1,259 @@
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { agenticRAG, AgenticRAGTools } from '../agentic-rag';
// Mock the dependencies
vi.mock('@zilliz/milvus2-sdk-node', () => ({
MilvusClient: vi.fn().mockImplementation(() => ({
listCollections: vi.fn().mockResolvedValue({
collection_names: ['family_domestic', 'business_corporate'],
data: [{ name: 'family_domestic' }, { name: 'business_corporate' }],
}),
search: vi.fn().mockResolvedValue({
results: [
{
content: 'Test document about AI and machine learning',
score: 0.85,
metadata: '{"category": "AI", "author": "Test Author"}',
},
{
content: 'Another document about neural networks',
score: 0.75,
metadata: '{"category": "ML", "author": "Another Author"}',
},
],
}),
insert: vi.fn().mockResolvedValue({ success: true }),
createCollection: vi.fn().mockResolvedValue({ success: true }),
createIndex: vi.fn().mockResolvedValue({ success: true }),
})),
DataType: {
VarChar: 'VarChar',
FloatVector: 'FloatVector',
},
}));
vi.mock('openai', () => ({
OpenAI: vi.fn().mockImplementation(() => ({
embeddings: {
create: vi.fn().mockResolvedValue({
data: [{ embedding: new Array(768).fill(0.1) }],
}),
},
})),
}));
// Mock environment variables
vi.stubEnv('FIREWORKS_API_KEY', 'test-api-key');
describe('Agentic RAG System', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('should analyze queries correctly', async () => {
// Test factual query
const factualResult = await agenticRAG({
action: 'analyze_query',
query: 'What is artificial intelligence?',
collection_name: 'family_domestic',
});
expect(factualResult.status).toBe('success');
expect(factualResult.data.needsRetrieval).toBe(true);
expect(factualResult.data.queryType).toBe('factual');
// Test conversational query with multiple conversational keywords
const conversationalResult = await agenticRAG({
action: 'analyze_query',
query: 'Hello, how are you doing today?',
collection_name: 'family_domestic',
});
expect(conversationalResult.status).toBe('success');
expect(conversationalResult.data.needsRetrieval).toBe(false);
expect(conversationalResult.data.queryType).toBe('conversational');
// Test creative query with multiple creative keywords
const creativeResult = await agenticRAG({
action: 'analyze_query',
query: 'Write a story and compose a poem',
collection_name: 'family_domestic',
});
expect(creativeResult.status).toBe('success');
expect(creativeResult.data.needsRetrieval).toBe(false);
expect(creativeResult.data.queryType).toBe('creative');
});
it('should search knowledge base for factual queries', async () => {
const result = await agenticRAG({
action: 'search_knowledge',
query: 'What is machine learning?',
collection_name: 'family_domestic',
top_k: 2,
similarity_threshold: 0.1,
});
expect(result.status).toBe('success');
expect(result.context).toBeDefined();
expect(Array.isArray(result.context)).toBe(true);
expect(result.data.retrieved_documents).toBeDefined();
expect(result.data.analysis.needsRetrieval).toBe(true);
});
it('should not search for conversational queries', async () => {
const result = await agenticRAG({
action: 'search_knowledge',
query: 'Hello there! How are you?',
collection_name: 'family_domestic',
});
expect(result.status).toBe('success');
expect(result.data.analysis.needsRetrieval).toBe(false);
expect(result.data.retrieved_documents).toHaveLength(0);
});
it('should store documents successfully', async () => {
const result = await agenticRAG({
action: 'store_document',
document: {
id: 'test-doc-1',
content: 'This is a test document about neural networks and deep learning.',
metadata: { category: 'AI', author: 'Test Author' },
},
collection_name: 'family_domestic',
});
expect(result.status).toBe('success');
expect(result.data.document_id).toBe('test-doc-1');
expect(result.data.content_length).toBeGreaterThan(0);
});
it('should get context for factual queries', async () => {
const result = await agenticRAG({
action: 'get_context',
query: 'Tell me about vector databases',
collection_name: 'family_domestic',
top_k: 2,
});
expect(result.status).toBe('success');
expect(result.data.analysis.needsRetrieval).toBe(true);
expect(result.context).toBeDefined();
expect(result.data.context_summary).toBeDefined();
});
it('should handle semantic search', async () => {
const result = await agenticRAG({
action: 'semantic_search',
query: 'artificial intelligence concepts',
collection_name: 'family_domestic',
top_k: 3,
});
expect(result.status).toBe('success');
expect(result.data.results).toBeDefined();
expect(Array.isArray(result.data.results)).toBe(true);
});
it('should list collections', async () => {
const result = await agenticRAG({
action: 'list_collections',
collection_name: 'family_domestic',
});
expect(result.status).toBe('success');
expect(result.message).toContain('family_domestic');
});
it('should handle errors gracefully', async () => {
const result = await agenticRAG({
action: 'analyze_query',
collection_name: 'family_domestic',
// Missing query parameter
});
expect(result.status).toBe('error');
expect(result.message).toContain('Query is required');
});
it('should handle invalid actions', async () => {
const result = await agenticRAG({
action: 'invalid_action',
collection_name: 'family_domestic',
});
expect(result.status).toBe('error');
expect(result.message).toContain('Invalid action');
});
it('should have correct tool definition structure', () => {
expect(AgenticRAGTools.type).toBe('function');
expect(AgenticRAGTools.function.name).toBe('agentic_rag');
expect(AgenticRAGTools.function.description).toBeDefined();
expect(AgenticRAGTools.function.parameters.type).toBe('object');
expect(AgenticRAGTools.function.parameters.properties.action).toBeDefined();
expect(AgenticRAGTools.function.parameters.required).toContain('action');
expect(AgenticRAGTools.function.parameters.required).toContain('collection_name');
});
it('should demonstrate intelligent retrieval decision making', async () => {
// Test various query types to show intelligent decision making
const queries = [
{ query: 'What is AI?', expectedRetrieval: true },
{ query: 'Hello world how are you', expectedRetrieval: false },
{ query: 'Write a poem and create a story', expectedRetrieval: false },
{ query: 'Explain machine learning', expectedRetrieval: true },
{ query: 'How are you doing today?', expectedRetrieval: true },
{ query: 'Tell me about neural networks', expectedRetrieval: true },
];
for (const testCase of queries) {
const result = await agenticRAG({
action: 'search_knowledge',
query: testCase.query,
collection_name: 'family_domestic',
});
expect(result.status).toBe('success');
expect(result.data.analysis.needsRetrieval).toBe(testCase.expectedRetrieval);
console.log(
`[DEBUG_LOG] Query: "${testCase.query}" - Retrieval needed: ${result.data.analysis.needsRetrieval}`,
);
}
});
it('should filter results by similarity threshold', async () => {
const result = await agenticRAG({
action: 'search_knowledge',
query: 'What is machine learning?',
collection_name: 'family_domestic',
similarity_threshold: 0.8, // High threshold
});
expect(result.status).toBe('success');
if (result.data.analysis.needsRetrieval) {
// Should only return results above threshold
result.data.retrieved_documents.forEach((doc: any) => {
expect(doc.score).toBeGreaterThanOrEqual(0.8);
});
}
});
it('should handle context window limits', async () => {
const result = await agenticRAG({
action: 'get_context',
query: 'Tell me about artificial intelligence',
collection_name: 'family_domestic',
context_window: 1000,
});
expect(result.status).toBe('success');
if (result.data.analysis.needsRetrieval && result.data.context_summary) {
// Context should respect the window limit (approximate check)
expect(result.data.context_summary.length).toBeLessThanOrEqual(2000); // Allow some flexibility
}
});
});

View File

@@ -0,0 +1,530 @@
import { MilvusClient, DataType } from '@zilliz/milvus2-sdk-node';
import { OpenAI } from 'openai';
import { ProviderRepository } from '../providers/_ProviderRepository.ts';
/**
* Configuration for the Agentic RAG system
*/
export interface AgenticRAGConfig {
milvusAddress?: string;
collectionName?: string;
embeddingDimension?: number;
topK?: number;
similarityThreshold?: number;
}
/**
* Result structure for Agentic RAG operations
*/
export interface AgenticRAGResult {
message: string;
status: 'success' | 'error';
data?: any;
context?: string[];
relevanceScore?: number;
}
/**
* Document structure for knowledge base
*/
export interface Document {
id: string;
content: string;
metadata?: Record<string, any>;
embedding?: number[];
}
/**
* Agentic RAG Tools for intelligent retrieval-augmented generation
* This system makes intelligent decisions about when and how to retrieve information
*/
export const AgenticRAGTools = {
type: 'function',
function: {
name: 'agentic_rag',
description:
'Intelligent retrieval-augmented generation system that can store documents, search knowledge base, and provide contextual information based on user queries. The system intelligently decides when retrieval is needed.',
parameters: {
type: 'object',
properties: {
action: {
type: 'string',
enum: [
'list_collections',
'report_status',
'semantic_search',
'search_knowledge',
'analyze_query',
'get_context',
],
description: 'Action to perform with the agentic RAG system.',
},
query: {
type: 'string',
description: 'User query or search term for knowledge retrieval.',
},
// document: {
// type: 'object',
// properties: {
// content: { type: 'string', description: 'Document content to store' },
// metadata: { type: 'object', description: 'Additional metadata for the document' },
// id: { type: 'string', description: 'Unique identifier for the document' },
// },
// description: 'Document to store in the knowledge base.',
// },
collection_name: {
type: 'string',
// todo: make this fancy w/ dynamic collection
enum: [
'business_corporate',
'civil_procedure',
'criminal_justice',
'education_professions',
'environmental_infrastructure',
'family_domestic',
'foundational_law',
'government_administration',
'health_social_services',
'miscellaneous',
'property_real_estate',
'special_documents',
'taxation_finance',
'transportation_motor_vehicles',
],
description: 'Name of the collection to work with.',
},
top_k: {
type: 'number',
description: 'Number of similar documents to retrieve (default: 5).',
},
similarity_threshold: {
type: 'number',
description: 'Minimum similarity score for relevant results (0-1, default: 0.7).',
},
context_window: {
type: 'number',
description: 'Maximum number of context tokens to include (default: 2000).',
},
},
required: ['action', 'collection_name'],
additionalProperties: false,
},
strict: true,
},
};
/**
* Default configuration for the Agentic RAG system
*/
const DEFAULT_CONFIG: AgenticRAGConfig = {
milvusAddress: 'localhost:19530',
collectionName: 'family_domestic',
embeddingDimension: 768,
topK: 5,
similarityThreshold: 0.5,
};
/**
* Generates an embedding for the given text using the Fireworks embeddings endpoint
* (nomic-ai/nomic-embed-text-v1.5) via the OpenAI SDK.
*/
async function generateEmbedding(text: string): Promise<number[] | undefined> {
const embeddingsClient = new OpenAI({
apiKey: process.env.FIREWORKS_API_KEY,
baseURL: ProviderRepository.OPENAI_COMPAT_ENDPOINTS.fireworks,
}).embeddings;
const embeddings = await embeddingsClient.create({
input: [text],
model: 'nomic-ai/nomic-embed-text-v1.5',
dimensions: 768,
});
return embeddings.data.at(0)?.embedding;
}
/**
* Analyze query to determine if retrieval is needed
*/
function analyzeQueryForRetrieval(query: string): {
needsRetrieval: boolean;
confidence: number;
reasoning: string;
queryType: 'factual' | 'conversational' | 'creative' | 'analytical';
} {
const lowerQuery = query.toLowerCase();
// Keywords that suggest factual information is needed
const factualKeywords = [
'what is',
'who is',
'when did',
'where is',
'how does',
'explain',
'define',
'describe',
'tell me about',
'information about',
'details on',
'facts about',
'history of',
'background on',
];
// Keywords that suggest conversational/creative responses
const conversationalKeywords = [
'hello',
'hi',
'how are you',
'thank you',
'please help',
'i think',
'in my opinion',
'what do you think',
'can you help',
];
// Keywords that suggest creative tasks
const creativeKeywords = [
'write a',
'create a',
'generate',
'compose',
'draft',
'story',
'poem',
'essay',
'letter',
'email',
];
let factualScore = 0;
let conversationalScore = 0;
let creativeScore = 0;
factualKeywords.forEach(keyword => {
if (lowerQuery.includes(keyword)) factualScore += 1;
});
conversationalKeywords.forEach(keyword => {
if (lowerQuery.includes(keyword)) conversationalScore += 1;
});
creativeKeywords.forEach(keyword => {
if (lowerQuery.includes(keyword)) creativeScore += 1;
});
// Determine query type and retrieval need
if (factualScore > conversationalScore && factualScore > creativeScore) {
return {
needsRetrieval: true,
confidence: Math.min(factualScore * 0.3, 0.9),
reasoning:
'Query appears to be asking for factual information that may benefit from knowledge retrieval.',
queryType: 'factual',
};
} else if (creativeScore > conversationalScore && creativeScore > 1) {
// Only skip retrieval for clearly creative tasks with multiple creative keywords
return {
needsRetrieval: false,
confidence: 0.8,
reasoning: 'Query appears to be requesting creative content generation.',
queryType: 'creative',
};
} else if (conversationalScore > 1 && conversationalScore > factualScore) {
// Only skip retrieval for clearly conversational queries with multiple conversational keywords
return {
needsRetrieval: false,
confidence: 0.7,
reasoning: 'Query appears to be conversational in nature.',
queryType: 'conversational',
};
} else {
// Default to retrieval for most cases to ensure comprehensive responses
return {
needsRetrieval: true,
confidence: 0.8,
reasoning: 'Defaulting to retrieval to provide comprehensive and accurate information.',
queryType: 'analytical',
};
}
}
/**
* Main Agentic RAG function that handles intelligent retrieval decisions
*/
export async function agenticRAG(args: {
action: string;
query?: string;
document?: Document;
collection_name?: string;
top_k?: number;
similarity_threshold?: number;
context_window?: number;
user_confirmed?: boolean;
}): Promise<AgenticRAGResult> {
const config = { ...DEFAULT_CONFIG };
const collectionName = args.collection_name || config.collectionName!;
const topK = args.top_k || config.topK!;
const similarityThreshold = args.similarity_threshold || config.similarityThreshold!;
const milvusClient = new MilvusClient({ address: config.milvusAddress! });
try {
switch (args.action) {
case 'analyze_query':
if (!args.query) {
return { status: 'error', message: 'Query is required for analysis.' };
}
// eslint-disable-next-line no-case-declarations
const analysis = analyzeQueryForRetrieval(args.query);
return {
status: 'success',
message: `Query analysis complete. Retrieval ${analysis.needsRetrieval ? 'recommended' : 'not needed'}.`,
data: analysis,
};
case 'list_collections':
// eslint-disable-next-line no-case-declarations
const { collection_names } = (await milvusClient.listCollections()) as any as {
collection_names: string[];
};
return {
status: 'success',
message: JSON.stringify(collection_names),
};
case 'search_knowledge':
if (!args.query) {
return { status: 'error', message: 'Query is required for knowledge search.' };
}
// First, analyze if retrieval is needed
// eslint-disable-next-line no-case-declarations
const queryAnalysis = analyzeQueryForRetrieval(args.query);
if (!queryAnalysis.needsRetrieval) {
return {
status: 'success',
message: 'Query analysis suggests retrieval is not needed for this type of query.',
data: {
analysis: queryAnalysis,
retrieved_documents: [],
context: [],
},
};
}
// Generate embedding for the query
// eslint-disable-next-line no-case-declarations
const queryEmbedding = await generateEmbedding(args.query);
// Search for similar documents
// eslint-disable-next-line no-case-declarations
const searchResult = await milvusClient.search({
collection_name: collectionName,
vector: queryEmbedding,
topk: topK,
params: { nprobe: 8 },
output_fields: ['content', 'metadata'],
});
// Filter results by similarity threshold
// eslint-disable-next-line no-case-declarations
const relevantResults = searchResult.results.filter(
(result: any) => result.score >= similarityThreshold,
);
// eslint-disable-next-line no-case-declarations
const contextDocuments = relevantResults.map((result: any) => ({
content: result.content,
score: result.score,
metadata: result.metadata,
}));
return {
status: 'success',
message: `Found ${relevantResults.length} relevant documents for query.`,
data: {
analysis: queryAnalysis,
retrieved_documents: contextDocuments,
context: contextDocuments.map((doc: any) => doc.content),
},
context: contextDocuments.map((doc: any) => doc.content),
relevanceScore: relevantResults.length > 0 ? relevantResults.at(0)?.score : 0,
};
case 'store_document':
if (!args.document || !args.document.content) {
return { status: 'error', message: 'Document with content is required for storage.' };
}
// Generate embedding for the document
// eslint-disable-next-line no-case-declarations
const docEmbedding = await generateEmbedding(args.document.content);
// eslint-disable-next-line no-case-declarations
const docId =
args.document.id || `doc_${Date.now()}_${Math.random().toString(36).substr(2, 9)}`;
// Store document in Milvus
await milvusClient.insert({
collection_name: collectionName,
fields_data: [
{ name: 'id', values: [docId] },
{ name: 'embedding', values: [docEmbedding] },
{ name: 'content', values: [args.document.content] },
{ name: 'metadata', values: [JSON.stringify(args.document.metadata || {})] },
],
});
return {
status: 'success',
message: `Document stored successfully with ID: ${docId}`,
data: { document_id: docId, content_length: args.document.content.length },
};
case 'manage_collection':
try {
// Check if collection exists
const collections = await milvusClient.listCollections();
const collectionExists =
collections.data.filter(c => c.name.includes(collectionName)).length > 0;
if (!collectionExists) {
// Create collection with proper schema for RAG
const collectionSchema = {
collection_name: collectionName,
fields: [
{
name: 'id',
type: DataType.VarChar,
params: { max_length: 100 },
is_primary_key: true,
},
{
name: 'embedding',
type: DataType.FloatVector,
params: { dim: config.embeddingDimension },
},
{ name: 'content', type: DataType.VarChar, params: { max_length: 65535 } },
{ name: 'metadata', type: DataType.VarChar, params: { max_length: 1000 } },
],
};
await milvusClient.createCollection(collectionSchema as any);
// Create index for efficient similarity search
await milvusClient.createIndex({
collection_name: collectionName,
field_name: 'embedding',
index_type: 'IVF_FLAT',
params: { nlist: 1024 },
metric_type: 'COSINE',
});
return {
status: 'success',
message: `Collection '${collectionName}' created successfully with RAG schema.`,
data: { collection_name: collectionName, action: 'created' },
};
} else {
return {
status: 'success',
message: `Collection '${collectionName}' already exists.`,
data: { collection_name: collectionName, action: 'exists' },
};
}
} catch (error: any) {
return {
status: 'error',
message: `Error managing collection: ${error.message}`,
};
}
case 'semantic_search':
if (!args.query) {
return { status: 'error', message: 'Query is required for semantic search.' };
}
// eslint-disable-next-line no-case-declarations
const semanticEmbedding = await generateEmbedding(args.query);
// eslint-disable-next-line no-case-declarations
const semanticResults = await milvusClient.search({
collection_name: collectionName,
vector: semanticEmbedding,
topk: topK,
params: { nprobe: 8 },
output_fields: ['content', 'metadata'],
});
return {
status: 'success',
message: `Semantic search completed. Found ${semanticResults.results.length} results.`,
data: {
results: semanticResults.results.map((result: any) => ({
content: result.content,
score: result.score,
metadata: JSON.parse(result.metadata || '{}'),
})),
},
};
case 'get_context':
if (!args.query) {
return { status: 'error', message: 'Query is required to get context.' };
}
// This is a comprehensive context retrieval that combines analysis and search
// eslint-disable-next-line no-case-declarations
const contextAnalysis = analyzeQueryForRetrieval(args.query);
if (contextAnalysis.needsRetrieval) {
const contextEmbedding = await generateEmbedding(args.query);
const contextSearch = await milvusClient.search({
collection_name: collectionName,
vector: contextEmbedding,
topk: topK,
params: { nprobe: 8 },
output_fields: ['content', 'metadata'],
});
const contextResults = contextSearch.results
.filter((result: any) => result.score >= similarityThreshold)
.map((result: any) => ({
content: result.content,
score: result.score,
metadata: JSON.parse(result.metadata || '{}'),
}));
return {
status: 'success',
message: `Context retrieved successfully. Found ${contextResults.length} relevant documents.`,
data: {
analysis: contextAnalysis,
context_documents: contextResults,
context_summary: contextResults.map((doc: any) => doc.content).join('\n\n'),
},
context: contextResults.map((doc: any) => doc.content),
};
} else {
return {
status: 'success',
message: 'No context retrieval needed for this query type.',
data: {
analysis: contextAnalysis,
context_documents: [],
context_summary: '',
},
};
}
default:
return { status: 'error', message: 'Invalid action specified.' };
}
} catch (error: any) {
return {
status: 'error',
message: `Integration error: ${error.message}`,
};
}
}
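For orientation, here is a sketch of how a tool call emitted by the model might be routed into this executor from the chat-stream-provider. The ToolCall shape and handler name are assumptions, and the executor is passed in as a parameter to keep the sketch self-contained rather than naming the internal function above.

// Hypothetical shape of a tool call as emitted by the model.
interface ToolCall {
  function: { name: string; arguments: string }; // arguments arrive as a JSON string
}

// Arguments accepted by the switch shown above.
interface AgenticRagArgs {
  action: string;
  query?: string;
  document?: { id?: string; content: string; metadata?: Record<string, unknown> };
}

// Hypothetical router: parse the call and delegate to the injected executor.
async function handleAgenticRagToolCall(
  call: ToolCall,
  execute: (args: AgenticRagArgs) => Promise<unknown>,
) {
  if (call.function.name !== 'agentic_rag') {
    return { status: 'error', message: `Unknown tool: ${call.function.name}` };
  }
  let args: AgenticRagArgs;
  try {
    args = JSON.parse(call.function.arguments) as AgenticRagArgs;
  } catch {
    return { status: 'error', message: 'Tool arguments were not valid JSON.' };
  }
  return execute(args);
}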

View File

@@ -0,0 +1,21 @@
// tools/basicValue.ts
export interface BasicValueResult {
value: string;
}
export const BasicValueTool = {
name: 'basicValue',
type: 'function',
description: 'Returns a basic value (timestamp-based) for testing',
parameters: {
type: 'object',
properties: {},
required: [],
},
function: async (): Promise<BasicValueResult> => {
// generate something obviously basic
const basic = `tool-called-${Date.now()}`;
console.log('[BasicValueTool] returning:', basic);
return { value: basic };
},
};
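A quick vitest-style check of the tool shape could look like the following; the test file location is hypothetical.

// Hypothetical vitest sketch exercising BasicValueTool.function.
import { describe, expect, it } from 'vitest';

import { BasicValueTool } from './basicValue.ts';

describe('BasicValueTool', () => {
  it('returns a timestamp-based value', async () => {
    const { value } = await BasicValueTool.function();
    expect(value).toMatch(/^tool-called-\d+$/);
  });
});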

View File

@@ -0,0 +1,25 @@
export async function getWeather(latitude: number, longitude: number) {
const response = await fetch(
`https://api.open-meteo.com/v1/forecast?latitude=${latitude}&longitude=${longitude}&current=temperature_2m,wind_speed_10m&hourly=temperature_2m,relative_humidity_2m,wind_speed_10m`,
);
const data = await response.json();
return data.current.temperature_2m;
}
export const WeatherTool = {
type: 'function',
function: {
name: 'get_weather',
description: 'Get current temperature for provided coordinates in celsius.',
parameters: {
type: 'object',
properties: {
latitude: { type: 'number' },
longitude: { type: 'number' },
},
required: ['latitude', 'longitude'],
additionalProperties: false,
},
strict: true,
},
};
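As a sketch, a handler for this tool could parse the model-supplied JSON arguments and call getWeather directly; the handler name and the string return convention are assumptions.

// Hypothetical dispatch for the get_weather tool call.
async function handleWeatherToolCall(rawArguments: string): Promise<string> {
  const { latitude, longitude } = JSON.parse(rawArguments) as {
    latitude: number;
    longitude: number;
  };
  const temperature = await getWeather(latitude, longitude);
  // Returned as a string so it can be appended to the conversation as a tool message.
  return `Current temperature: ${temperature}°C`;
}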

View File

@@ -0,0 +1,68 @@
export interface ShipControlResult {
message: string;
status: 'success' | 'error';
data?: any;
}
/**
* A mock interface for controlling a ship.
*/
export const YachtpitTools = {
type: 'function',
description: 'Interface for controlling a ship: set speed, change heading, report status, etc.',
/**
* Mock implementation of a ship control command.
*/
function: {
name: 'ship_control',
parameters: {
type: 'object',
properties: {
action: {
type: 'string',
enum: ['set_speed', 'change_heading', 'report_status', 'stop'],
description: 'Action to perform on the ship.',
},
value: {
type: 'number',
description:
'Numeric value for the action, such as speed (knots) or heading (degrees). Only required for set_speed and change_heading.',
},
},
required: ['action'],
additionalProperties: false,
},
},
};
export async function yachtpitAi(args: { action: string; value?: number }): Promise<ShipControlResult> {
switch (args.action) {
case 'set_speed':
if (typeof args.value !== 'number') {
return { status: 'error', message: 'Missing speed value.' };
}
return { status: 'success', message: `Speed set to ${args.value} knots.` };
case 'change_heading':
if (typeof args.value !== 'number') {
return { status: 'error', message: 'Missing heading value.' };
}
return { status: 'success', message: `Heading changed to ${args.value} degrees.` };
case 'report_status':
// Return a simulated ship status
return {
status: 'success',
message: 'Ship status reported.',
data: {
speed: 12,
heading: 87,
engine: 'nominal',
position: { lat: 42.35, lon: -70.88 },
},
};
case 'stop':
return { status: 'success', message: 'Ship stopped.' };
default:
return { status: 'error', message: 'Invalid action.' };
}
}
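A short usage sketch of the mock controller:

// Hypothetical usage of the mock ship controller inside any async context.
async function demoYachtpit(): Promise<void> {
  const speed = await yachtpitAi({ action: 'set_speed', value: 12 });
  console.log(speed.message); // "Speed set to 12 knots."

  const status = await yachtpitAi({ action: 'report_status' });
  console.log(status.data?.position); // { lat: 42.35, lon: -70.88 }
}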

View File

@@ -8,7 +8,8 @@
"tests:coverage": "vitest run --coverage.enabled=true",
"generate:sitemap": "bun ./scripts/generate_sitemap.js open-gsio.seemueller.workers.dev",
"generate:robotstxt": "bun ./scripts/generate_robots_txt.js open-gsio.seemueller.workers.dev",
"generate:fonts": "cp -r ../../node_modules/katex/dist/fonts public/static"
"generate:fonts": "cp -r ../../node_modules/katex/dist/fonts public/static",
"generate:pwa:assets": "test ! -f public/pwa-64x64.png && pwa-assets-generator --preset minimal-2023 public/logo.png || echo 'PWA assets already exist'"
},
"exports": {
"./server/index.ts": {
@@ -17,19 +18,23 @@
}
},
"devDependencies": {
"@open-gsio/env": "workspace:*",
"@open-gsio/scripts": "workspace:*",
"@chakra-ui/icons": "^2.2.4",
"@chakra-ui/react": "^2.10.6",
"@cloudflare/workers-types": "^4.20241205.0",
"@emotion/react": "^11.13.5",
"@emotion/styled": "^11.13.5",
"@open-gsio/env": "workspace:*",
"@open-gsio/scripts": "workspace:*",
"@testing-library/jest-dom": "^6.4.2",
"@testing-library/react": "^16.3.0",
"@testing-library/user-event": "^14.5.2",
"@types/bun": "^1.2.17",
"@types/marked": "^6.0.0",
"@vite-pwa/assets-generator": "^1.0.0",
"@vitejs/plugin-react": "^4.3.4",
"@vitest/coverage-v8": "^3.1.4",
"@vitest/ui": "^3.1.4",
"bun": "^1.2.17",
"chokidar": "^4.0.1",
"framer-motion": "^11.13.1",
"isomorphic-dompurify": "^2.19.0",
@@ -37,6 +42,7 @@
"jsdom": "^24.0.0",
"katex": "^0.16.20",
"lucide-react": "^0.436.0",
"mapbox-gl": "^3.13.0",
"marked": "^15.0.4",
"marked-extended-latex": "^1.1.0",
"marked-footnote": "^1.2.4",
@@ -44,20 +50,19 @@
"mobx": "^6.13.5",
"mobx-react-lite": "^4.0.7",
"mobx-state-tree": "^6.0.1",
"moo": "^0.5.2",
"qrcode.react": "^4.1.0",
"react": "^19.1.0",
"react-dom": "^19.1.0",
"react-icons": "^5.4.0",
"react-streaming": "^0.3.44",
"react-map-gl": "^8.0.4",
"react-streaming": "^0.4.2",
"react-textarea-autosize": "^8.5.5",
"shiki": "^1.24.0",
"tslog": "^4.9.3",
"typescript": "^5.7.2",
"vike": "^0.4.235",
"vite": "^7.0.0",
"vite-plugin-pwa": "^1.0.0",
"vitest": "^3.1.4",
"bun": "^1.2.17",
"@types/bun": "^1.2.17"
"vite-plugin-pwa": "^1.0.1",
"vitest": "^3.1.4"
}
}

Binary image assets changed (contents not shown): eleven assets removed (9.9 KiB, 23 KiB, 8.8 KiB, 638 KiB, 563 B, 1.2 KiB, 534 KiB, 373 KiB, 1.4 MiB, 1.4 MiB, 165 KiB), one replaced (15 KiB -> 624 B), one added (27 KiB).

View File

@@ -1,19 +0,0 @@
{
"name": "",
"short_name": "",
"icons": [
{
"src": "/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "/android-chrome-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
],
"theme_color": "#fffff0",
"background_color": "#000000",
"display": "standalone"
}

View File

@@ -28,7 +28,7 @@ const Chat = observer(({ height, width }) => {
<GridItem
overflow="auto"
width="100%"
maxH="100%"
maxH="100vh"
ref={scrollRef}
// If there are attachments, use "100px". Otherwise, use "128px" on Android, "73px" elsewhere.
pb={isAndroid ? '128px' : '73px'}

View File

@@ -171,7 +171,7 @@ const InputMenu: React.FC<{ isDisabled?: boolean }> = observer(({ isDisabled })
bg="background.tertiary"
color="text.primary"
onClick={() => {
clientChatStore.setActiveConversation('conversation:new');
clientChatStore.reset();
onClose();
}}
_hover={{ bg: 'rgba(0, 0, 0, 0.05)' }}

View File

@@ -22,7 +22,7 @@ const ChatInput = observer(() => {
const [shouldFollow, setShouldFollow] = useState<boolean>(userOptionsStore.followModeEnabled);
const [couldFollow, setCouldFollow] = useState<boolean>(chatStore.isLoading);
const [inputWidth, setInputWidth] = useState<string>('50%');
const [inputWidth, setInputWidth] = useState<string>('40%');
useEffect(() => {
setShouldFollow(chatStore.isLoading && userOptionsStore.followModeEnabled);
@@ -64,10 +64,10 @@ const ChatInput = observer(() => {
};
const inputMaxWidth = useBreakpointValue(
{ base: '50rem', lg: '50rem', md: '80%', sm: '100vw' },
{ base: '30rem', lg: '50rem', md: '80%', sm: '100vw' },
{ ssr: true },
);
const inputMinWidth = useBreakpointValue({ lg: '40rem' }, { ssr: true });
const inputMinWidth = useBreakpointValue({ lg: '40rem', md: '30rem' }, { ssr: true });
useEffect(() => {
setInputWidth('100%');
@@ -75,9 +75,7 @@ const ChatInput = observer(() => {
return (
<Box
width={inputWidth}
maxW={inputMaxWidth}
minWidth={inputMinWidth}
width={inputMinWidth}
mx="auto"
p={2}
pl={2}

View File

@@ -1,4 +1,4 @@
import { Box, chakra, InputGroup } from '@chakra-ui/react';
import { Box, chakra, InputGroup, useBreakpointValue } from '@chakra-ui/react';
import { observer } from 'mobx-react-lite';
import React, { useEffect, useRef, useState } from 'react';
import AutoResize from 'react-textarea-autosize';
@@ -19,7 +19,7 @@ const InputTextArea: React.FC<InputTextAreaProps> = observer(
useEffect(() => {
if (value.length > 10) {
setHeightConstraint();
setHeightConstraint(parseInt(value));
}
}, [value]);
@@ -38,6 +38,7 @@ const InputTextArea: React.FC<InputTextAreaProps> = observer(
ref={inputRef}
value={value}
height={heightConstraint}
maxH={heightConstraint}
autoFocus
onChange={e => onChange(e.target.value)}
onKeyDown={onKeyDown}
@@ -49,7 +50,13 @@ const InputTextArea: React.FC<InputTextAreaProps> = observer(
borderRadius="20px"
border="none"
placeholder="Free my mind..."
_placeholder={{ color: 'gray.400' }}
_placeholder={{
color: 'gray.400',
textWrap: 'nowrap',
textOverflow: 'ellipsis',
overflow: 'hidden',
width: '90%',
}}
_focus={{
outline: 'none',
}}

View File

@@ -9,7 +9,7 @@ export function formatConversationMarkdown(messages: Instance<typeof IMessage>[]
if (message.role === 'user') {
return `**You**: ${message.content}`;
} else if (message.role === 'assistant') {
return `**Geoff's AI**: ${message.content}`;
return `**open-gsio**: ${message.content}`;
}
return '';
})

View File

@@ -51,7 +51,7 @@ const MessageBubble = observer(({ msg, scrollRef }) => {
const [isEditing, setIsEditing] = useState(false);
const [isHovered, setIsHovered] = useState(false);
const isUser = msg.role === 'user';
const senderName = isUser ? 'You' : "Geoff's AI";
const senderName = isUser ? 'You' : 'open-gsio';
const isLoading = !msg.content || !(msg.content.trim().length > 0);
const messageRef = useRef();

View File

@@ -104,7 +104,7 @@ describe('MessageBubble', () => {
it('should render assistant message correctly', () => {
render(<MessageBubble msg={mockAssistantMessage} scrollRef={mockScrollRef} />);
expect(screen.getByText("Geoff's AI")).toBeInTheDocument();
expect(screen.getByText('open-gsio')).toBeInTheDocument();
expect(screen.getByTestId('message-content')).toHaveTextContent('Assistant response');
});

View File

@@ -0,0 +1,25 @@
import React, { createContext, useContext, useState } from 'react';
type ComponentContextType = {
enabledComponent: string;
setEnabledComponent: (component: string) => void;
};
const ComponentContext = createContext<ComponentContextType>({
enabledComponent: '',
setEnabledComponent: () => {},
});
export const useComponent = () => useContext(ComponentContext);
export const ComponentProvider: React.FC<{ children: React.ReactNode }> = ({ children }) => {
const [enabledComponent, setEnabledComponent] = useState<string>('');
return (
<ComponentContext.Provider value={{ enabledComponent, setEnabledComponent }}>
{children}
</ComponentContext.Provider>
);
};
export default ComponentContext;
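A consumer only needs the useComponent hook; a minimal sketch (the AiOnly component is hypothetical):

// Hypothetical consumer that renders its children only while the AI view is enabled.
import React from 'react';

import { useComponent } from './ComponentContext.tsx';

export function AiOnly({ children }: { children: React.ReactNode }) {
  const { enabledComponent } = useComponent();
  return enabledComponent === 'ai' ? <>{children}</> : null;
}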

View File

@@ -0,0 +1,7 @@
import React from 'react';
function InstallButton() {
// Placeholder install handler; the full beforeinstallprompt flow lives in install/InstallButton.tsx.
const handleInstall = () => {};
return <button onClick={handleInstall}>Install App</button>;
}
export default InstallButton;

View File

@@ -0,0 +1,61 @@
import { IconButton } from '@chakra-ui/react';
import { HardDriveDownload } from 'lucide-react';
import React, { useEffect, useState } from 'react';
import { toolbarButtonZIndex } from '../toolbar/Toolbar.tsx';
function InstallButton() {
const [deferredPrompt, setDeferredPrompt] = useState(null);
const [isInstalled, setIsInstalled] = useState(false);
useEffect(() => {
const handleBeforeInstallPrompt = e => {
// Prevent the default prompt
e.preventDefault();
setDeferredPrompt(e);
};
window.addEventListener('beforeinstallprompt', handleBeforeInstallPrompt);
return () => {
window.removeEventListener('beforeinstallprompt', handleBeforeInstallPrompt);
};
}, []);
const handleInstall = () => {
if (deferredPrompt) {
deferredPrompt.prompt();
deferredPrompt.userChoice.then(choiceResult => {
if (choiceResult.outcome === 'accepted') {
console.log('User accepted the installation prompt');
} else {
console.log('User dismissed the installation prompt');
}
});
setDeferredPrompt(null);
}
};
return (
<IconButton
aria-label="Install App"
title="Install App"
icon={<HardDriveDownload />}
size="md"
bg="transparent"
stroke="text.accent"
color="text.accent"
onClick={handleInstall}
_hover={{
bg: 'transparent',
svg: {
stroke: 'accent.secondary',
transition: 'stroke 0.3s ease-in-out',
},
}}
zIndex={toolbarButtonZIndex}
/>
);
}
export default InstallButton;
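The beforeinstallprompt event is non-standard and not covered by the built-in DOM typings, so a local declaration along these lines (an assumption, not part of the committed code) would give deferredPrompt a concrete type:

// Hypothetical local typing for the non-standard beforeinstallprompt event.
interface BeforeInstallPromptEvent extends Event {
  prompt(): Promise<void>;
  userChoice: Promise<{ outcome: 'accepted' | 'dismissed'; platform: string }>;
}

// Example: useState<BeforeInstallPromptEvent | null>(null) for the deferredPrompt state.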

View File

@@ -0,0 +1,73 @@
import { Box } from '@chakra-ui/react';
import React, { useEffect, useState } from 'react';
import { useComponent } from '../contexts/ComponentContext.tsx';
// import { BevyScene } from './BevyScene.tsx';
import Tweakbox from './Tweakbox.tsx';
export const LandingComponent: React.FC = () => {
const [intensity, setIntensity] = useState(0.99);
const [mapActive, setMapActive] = useState(false);
const [aiActive, setAiActive] = useState(true);
const component = useComponent();
const { setEnabledComponent } = component;
useEffect(() => {
if (mapActive) {
setEnabledComponent('gpsmap');
}
if (aiActive) {
setEnabledComponent('ai');
}
}, []);
return (
<Box as="section" bg="background.primary" overflow="hidden">
<Box position="fixed" right={0} maxWidth="300px" minWidth="200px" zIndex={1000}>
<Tweakbox
sliders={{
intensity: {
value: intensity,
onChange: setIntensity,
label: 'Brightness',
min: 0.01,
max: 0.99,
step: 0.01,
ariaLabel: 'effect-intensity',
},
}}
switches={{
// GpsMap: {
// value: mapActive,
// onChange(enabled) {
// if (enabled) {
// setEnabledComponent('gpsmap');
// setAiActive(false);
// } else {
// setEnabledComponent('');
// }
// setMapActive(enabled);
// },
// label: 'GPS',
// },
AI: {
value: aiActive,
onChange(enabled) {
if (enabled) {
setEnabledComponent('ai');
setMapActive(false);
} else {
setEnabledComponent('');
}
setAiActive(enabled);
},
label: 'AI',
},
}}
/>
</Box>
</Box>
);
};

View File

@@ -0,0 +1,63 @@
import ReactMap from 'react-map-gl/mapbox'; // react-map-gl v8+ uses the /mapbox subpath
import 'mapbox-gl/dist/mapbox-gl.css';
import { Box, HStack, Button, Input, Center } from '@chakra-ui/react';
import { useState, useEffect, useCallback } from 'react';
import MapNext from './MapNext.tsx';
// Types for bevy_flurx_ipc communication
interface GpsPosition {
latitude: number;
longitude: number;
zoom: number;
}
interface VesselStatus {
latitude: number;
longitude: number;
heading: number;
speed: number;
}
interface MapViewParams {
latitude: number;
longitude: number;
zoom: number;
}
interface AuthParams {
authenticated: boolean;
token: string | null;
}
// public key
const key =
'cGsuZXlKMUlqb2laMlZ2Wm1aelpXVWlMQ0poSWpvaVkycDFOalo0YkdWNk1EUTRjRE41YjJnNFp6VjNNelp6YXlKOS56LUtzS1l0X3VGUGdCSDYwQUFBNFNn';
function Map(props: { visible: boolean }) {
return (
/* Full-screen wrapper — fills the viewport and becomes the positioning context */
<Box position={'absolute'} top={0} w="100vw" h={'100vh'} overflow="hidden">
{/* Button bar — absolutely positioned inside the wrapper */}
<MapNext mapboxPublicKey={atob(key)} />
{/*<Map*/}
{/* mapboxAccessToken={atob(key)}*/}
{/* initialViewState={mapView}*/}
{/* onMove={handleMapViewChange}*/}
{/* mapStyle="mapbox://styles/mapbox/dark-v11"*/}
{/* reuseMaps*/}
{/* attributionControl={false}*/}
{/* style={{width: '100%', height: '100%'}} // let the wrapper dictate size*/}
{/*>*/}
{/* /!*{vesselPosition && (*!/*/}
{/* /!* <Source id="vessel-data" type="geojson" data={vesselGeojson}>*!/*/}
{/* /!* <Layer {...vesselLayerStyle} />*!/*/}
{/* /!* </Source>*!/*/}
{/* /!*)}*!/*/}
{/*</Map>*/}
</Box>
);
}
export default Map;

View File

@@ -0,0 +1,172 @@
import { Box } from '@chakra-ui/react';
import { useCallback, useEffect, useMemo, useState } from 'react';
import Map, {
FullscreenControl,
GeolocateControl,
Marker,
NavigationControl,
Popup,
ScaleControl,
} from 'react-map-gl/mapbox';
import PORTS from './nautical-base-data.json';
import Pin from './pin';
export default function MapNext(props: { mapboxPublicKey?: string } = { mapboxPublicKey: '' }) {
const [popupInfo, setPopupInfo] = useState(null);
const [isSearchOpen, setIsSearchOpen] = useState(false);
const [isTokenLoading, setIsTokenLoading] = useState(false);
const [authenticated, setAuthenticated] = useState(false);
useEffect(() => {
setAuthenticated(true);
setIsTokenLoading(false);
}, []);
const [mapView, setMapView] = useState({
longitude: -122.4,
latitude: 37.8,
zoom: 14,
});
const handleNavigationClick = useCallback(async () => {
console.log('handling navigation in map');
}, []);
const handleSearchClick = useCallback(async () => {
console.log('handling click search in map');
}, []);
const handleMapViewChange = useCallback(async (evt: any) => {
const { longitude, latitude, zoom } = evt.viewState;
setMapView({ longitude, latitude, zoom });
}, []);
const pins = useMemo(
() =>
PORTS.map((city, index) => (
<Marker
key={`marker-${index}`}
longitude={city.longitude}
latitude={city.latitude}
anchor="bottom"
onClick={e => {
// If we let the click event propagate to the map, it will immediately close the popup
// with `closeOnClick: true`
e.originalEvent.stopPropagation();
/*
src/MapNext.tsx:34:38 - error TS2345: Argument of type '{ city: string; population: string; image: string; state: string; latitude: number; longitude: number; }' is not assignable to parameter of type 'SetStateAction<null>'.
Type '{ city: string; population: string; image: string; state: string; latitude: number; longitude: number; }' provides no match for the signature '(prevState: null): null'.
*/
// @ts-ignore
setPopupInfo(city);
}}
>
<Pin />
</Marker>
)),
[],
);
return (
<Box justifySelf={'right'} w={'100%'}>
{/*<HStack position="absolute" top={4} right={4} zIndex={1}>*/}
{/* <Box display="flex" alignItems="center">*/}
{/* <Button colorScheme="teal" size="sm" variant="solid" onClick={handleSearchClick} mr={2}>*/}
{/* Search*/}
{/* </Button>*/}
{/* {isSearchOpen && (*/}
{/* <Box*/}
{/* w="200px"*/}
{/* transition="all 0.3s"*/}
{/* transform={`translateX(${isSearchOpen ? '0' : '100%'})`}*/}
{/* opacity={isSearchOpen ? 1 : 0}*/}
{/* color="white"*/}
{/* >*/}
{/* <Input*/}
{/* placeholder="Search..."*/}
{/* size="sm"*/}
{/* _placeholder={{*/}
{/* color: '#d1cfcf',*/}
{/* }}*/}
{/* />*/}
{/* </Box>*/}
{/* )}*/}
{/* </Box>*/}
{/* <Button colorScheme="blue" size="sm" variant="solid" onClick={handleNavigationClick}>*/}
{/* Layer*/}
{/* </Button>*/}
{/*</HStack>*/}
<Map
initialViewState={{
latitude: 40,
longitude: -100,
zoom: 3.5,
bearing: 0,
pitch: 0,
}}
mapStyle="mapbox://styles/geoffsee/cmd1qz39x01ga01qv5acea02y"
attributionControl={false}
mapboxAccessToken={props.mapboxPublicKey}
style={{
position: 'absolute',
width: '100%',
// height: '50%',
bottom: 0,
top: 0,
left: 0,
right: 0,
}}
>
<GeolocateControl position="top-left" style={{ marginTop: '6rem' }} />
<FullscreenControl position="top-left" />
<NavigationControl position="top-left" />
<ScaleControl position="top-left" />
{pins}
{popupInfo && (
<Popup
anchor="top"
/*
src/MapNext.tsx:66:53 - error TS2339: Property 'longitude' does not exist on type 'never'.
66 longitude={Number(popupInfo.longitude)}
*/
// @ts-ignore
longitude={Number(popupInfo.longitude)}
/*
src/MapNext.tsx:67:52 - error TS2339: Property 'latitude' does not exist on type 'never'.
67 latitude={Number(popupInfo.latitude)}
~~~~~~~~
*/
// @ts-ignore
latitude={Number(popupInfo.latitude)}
onClose={() => setPopupInfo(null)}
>
<div style={{ color: 'black' }}>
{/*src/MapNext.tsx:71:40 - error TS2339: Property 'city' does not exist on type 'never'.
71 {popupInfo.city}, {popupInfo.state} |{' '}
~~~~*/}
{/*@ts-ignore*/}
{/*@ts-ignore*/}
{popupInfo.city}, {popupInfo.state}
{/*@ts-ignore*/}
</div>
{/*@ts-ignore*/}
<img width="100%" src={popupInfo.image} />
<br />
<a
style={{ color: 'blue' }}
target="_new"
href={`http://en.wikipedia.org/w/index.php?title=Special:Search&search=${(popupInfo as any).city}, ${(popupInfo as any).state}`}
>
Wikipedia
</a>
</Popup>
)}
</Map>
</Box>
);
}
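The TS2339/TS2345 errors quoted in the comments come from popupInfo being inferred as null, so typing the state against the port data shape would remove the @ts-ignore directives. A sketch, with the Port name being an assumption:

// Hypothetical typing for the popup state, matching the entries in nautical-base-data.json.
interface Port {
  city: string;
  state: string;
  population: string;
  image: string;
  latitude: number;
  longitude: number;
}

// Example: const [popupInfo, setPopupInfo] = useState<Port | null>(null);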

View File

@@ -0,0 +1,124 @@
import { useBreakpointValue, useTheme } from '@chakra-ui/react';
import React, { useEffect, useRef, useMemo } from 'react';
const MATRIX_CHARS =
'アイウエオカキクケコサシスセソタチツテトナニヌネハヒフヘホ0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
interface MatrixRainProps {
speed?: number;
glow?: boolean;
intensity?: number;
visible?: boolean;
}
export const MatrixRain: React.FC<MatrixRainProps> = ({
speed = 1,
glow = false,
intensity = 1,
visible,
}) => {
const fontSize = useBreakpointValue({ base: 14, md: 18, lg: 22 }) ?? 14;
const theme = useTheme();
const canvasRef = useRef<HTMLCanvasElement | null>(null);
const animationRef = useRef<number | null>(null);
const dropsRef = useRef<number[]>([]);
const columnsRef = useRef<number>(0);
const colors = useMemo(
() => ({
background: theme.colors.background.primary,
textAccent: theme.colors.text.accent,
}),
[theme.colors.background.primary, theme.colors.text.accent],
);
const colorsRef = useRef(colors);
colorsRef.current = colors;
useEffect(() => {
const canvas = canvasRef.current;
if (!canvas) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
const resize = () => {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
const newColumns = Math.floor(canvas.width / fontSize);
if (newColumns !== columnsRef.current) {
columnsRef.current = newColumns;
const newDrops: number[] = [];
for (let i = 0; i < newColumns; i++) {
if (i < dropsRef.current.length) {
newDrops[i] = dropsRef.current[i];
} else {
newDrops[i] = Math.random() * (canvas.height / fontSize);
}
}
dropsRef.current = newDrops;
}
};
resize();
window.addEventListener('resize', resize);
if (dropsRef.current.length === 0) {
const columns = Math.floor(canvas.width / fontSize);
columnsRef.current = columns;
for (let i = 0; i < columns; i++) {
dropsRef.current[i] = Math.random() * (canvas.height / fontSize);
}
}
const draw = () => {
if (!ctx || !canvas) return;
const currentColors = colorsRef.current;
ctx.fillStyle = currentColors.background;
ctx.fillRect(0, 0, canvas.width, canvas.height);
ctx.font = `${fontSize}px monospace`;
for (let i = 0; i < dropsRef.current.length; i++) {
const text = MATRIX_CHARS[Math.floor(Math.random() * MATRIX_CHARS.length)];
const x = i * fontSize;
const y = dropsRef.current[i] * fontSize;
ctx.fillStyle = currentColors.textAccent;
if (glow) {
ctx.shadowBlur = 10;
ctx.shadowColor = currentColors.textAccent;
}
ctx.fillText(text, x, y);
if (y > canvas.height) {
dropsRef.current[i] = -Math.random() * 5;
} else {
dropsRef.current[i] += (0.1 + Math.random() * 0.5) * speed * intensity;
}
}
animationRef.current = requestAnimationFrame(draw);
};
animationRef.current = requestAnimationFrame(draw);
return () => {
window.removeEventListener('resize', resize);
if (animationRef.current) {
cancelAnimationFrame(animationRef.current);
}
};
}, [fontSize, speed, glow, intensity, visible]);
return (
<canvas
ref={canvasRef}
style={{ display: visible ? 'block' : 'none', pointerEvents: 'none' }}
/>
);
};
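A usage sketch for the effect; the wrapper component and prop values are illustrative.

// Hypothetical full-screen background using MatrixRain.
import React from 'react';

import { MatrixRain } from './MatrixRain.tsx';

export const RainBackground: React.FC<{ intensity: number }> = ({ intensity }) => (
  <MatrixRain visible glow speed={1.2} intensity={intensity} />
);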

View File

@@ -0,0 +1,162 @@
import { Box, useTheme } from '@chakra-ui/react';
import React, { useEffect, useRef } from 'react';
interface ParticlesProps {
speed: number;
intensity: number;
particles: boolean;
glow: boolean;
visible?: boolean;
}
interface Particle {
x: number;
y: number;
vx: number;
vy: number;
size: number;
}
const Particles: React.FC<ParticlesProps> = ({ speed, intensity, glow, visible }) => {
const canvasRef = useRef<HTMLCanvasElement>(null);
const particlesRef = useRef<Particle[]>([]);
const animationFrameRef = useRef<number | undefined>(undefined);
const theme = useTheme();
// Helper function to create a single particle with proper canvas dimensions
const createParticle = (canvas: HTMLCanvasElement): Particle => ({
x: Math.random() * canvas.parentElement!.getBoundingClientRect().width,
y: Math.random() * canvas.parentElement!.getBoundingClientRect().height,
vx: (Math.random() - 0.5) * speed,
vy: (Math.random() - 0.5) * speed,
size: Math.random() * 3 + 1,
});
// Main animation effect
useEffect(() => {
if (!visible) {
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = undefined;
}
particlesRef.current = []; // Clear particles when disabled
return;
}
const canvas = canvasRef.current;
if (!canvas) return;
const ctx = canvas.getContext('2d');
if (!ctx) return;
const resizeCanvas = () => {
canvas.width = window.innerWidth;
canvas.height = window.innerHeight;
// Reposition existing particles that are outside new bounds
particlesRef.current.forEach(particle => {
if (particle.x > canvas.width) particle.x = Math.random() * canvas.width;
if (particle.y > canvas.height) particle.y = Math.random() * canvas.height;
});
};
const ensureParticleCount = () => {
const targetCount = Math.floor(intensity * 100);
const currentCount = particlesRef.current.length;
if (currentCount < targetCount) {
// Add new particles
const newParticles = Array.from({ length: targetCount - currentCount }, () =>
createParticle(canvas),
);
particlesRef.current = [...particlesRef.current, ...newParticles];
} else if (currentCount > targetCount) {
// Remove excess particles
particlesRef.current = particlesRef.current.slice(0, targetCount);
}
};
const updateParticles = () => {
particlesRef.current.forEach(particle => {
particle.x += particle.vx;
particle.y += particle.vy;
if (particle.x < 0) particle.x = canvas.width;
if (particle.x > canvas.width) particle.x = 0;
if (particle.y < 0) particle.y = canvas.height;
if (particle.y > canvas.height) particle.y = 0;
});
};
const drawParticles = () => {
ctx.clearRect(0, 0, canvas.width, canvas.height);
ctx.fillStyle = theme.colors.text.accent;
ctx.globalCompositeOperation = 'lighter';
if (glow) {
ctx.shadowBlur = 10;
ctx.shadowColor = 'white';
} else {
ctx.shadowBlur = 0;
}
particlesRef.current.forEach(particle => {
ctx.beginPath();
ctx.arc(particle.x, particle.y, particle.size, 0, Math.PI * 2);
ctx.fill();
});
};
const animate = () => {
updateParticles();
drawParticles();
animationFrameRef.current = requestAnimationFrame(animate);
};
const handleResize = () => {
resizeCanvas();
};
window.addEventListener('resize', handleResize);
resizeCanvas(); // Set canvas size first
ensureParticleCount(); // Then create particles with proper dimensions
animate();
return () => {
window.removeEventListener('resize', handleResize);
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current);
animationFrameRef.current = undefined;
}
};
}, [visible, intensity, speed, glow, theme.colors.text.accent]);
// Separate effect for speed changes - update existing particle velocities
useEffect(() => {
if (!visible) return;
particlesRef.current.forEach(particle => {
const currentSpeed = Math.sqrt(particle.vx * particle.vx + particle.vy * particle.vy);
if (currentSpeed > 0) {
const normalizedVx = particle.vx / currentSpeed;
const normalizedVy = particle.vy / currentSpeed;
particle.vx = normalizedVx * speed;
particle.vy = normalizedVy * speed;
} else {
particle.vx = (Math.random() - 0.5) * speed;
particle.vy = (Math.random() - 0.5) * speed;
}
});
}, [speed, visible]);
return (
<Box zIndex={0} pointerEvents={'none'}>
<canvas
ref={canvasRef}
style={{ display: visible ? 'block' : 'none', pointerEvents: 'none' }}
/>
</Box>
);
};
export default Particles;

View File

@@ -0,0 +1,111 @@
import {
Box,
Grid,
GridItem,
Heading,
Slider,
SliderTrack,
SliderFilledTrack,
SliderThumb,
Text,
Switch,
Collapse,
IconButton,
} from '@chakra-ui/react';
import { ChevronDownIcon, ChevronUpIcon } from 'lucide-react';
import { observer } from 'mobx-react-lite';
import React, { useState } from 'react';
interface SliderControl {
value: number;
onChange: (value: number) => void;
label: string;
min: number;
max: number;
step: number;
ariaLabel: string;
}
interface SwitchControl {
value: boolean;
onChange: (enabled: boolean) => void;
label: string;
exclusive?: boolean;
}
interface TweakboxProps {
// Keyed records so callers can pass any subset of controls (e.g. only an intensity slider and an AI switch).
sliders: Record<string, SliderControl>;
switches: Record<string, SwitchControl>;
}
const Tweakbox = observer(({ sliders, switches }: TweakboxProps) => {
const [isCollapsed, setIsCollapsed] = useState(false);
return (
<Box display="flex" alignItems="flex-start">
<IconButton
aria-label="Toggle controls"
borderRadius="lg"
bg="whiteAlpha.300"
backdropFilter="blur(10px)"
boxShadow="xl"
icon={isCollapsed ? <ChevronUpIcon /> : <ChevronDownIcon />}
onClick={() => setIsCollapsed(!isCollapsed)}
size="sm"
marginRight={2}
/>
<Collapse in={!isCollapsed} style={{ width: '100%' }}>
<Box p={4} borderRadius="lg" bg="whiteAlpha.100" backdropFilter="blur(10px)" boxShadow="xl">
<Grid templateColumns="1fr" gap={4}>
<GridItem>
<Heading hidden={true} size="sm" mb={4} color="text.accent">
Controls
</Heading>
</GridItem>
{Object.keys(switches).map(key => {
return (
<GridItem key={key}>
<Text mb={2} color="text.accent">
{switches[key].label}
</Text>
<Switch
isChecked={switches[key].value}
onChange={e => switches[key].onChange(e.target.checked)}
/>
</GridItem>
);
})}
{Object.entries(sliders).map(([key, slider]) => (
<GridItem key={key}>
<Text mb={2} color="text.accent">
{slider.label}
</Text>
<Slider
aria-label={slider.ariaLabel}
value={slider.value}
min={slider.min}
step={slider.step}
max={slider.max}
onChange={slider.onChange}
>
<SliderTrack>
<SliderFilledTrack />
</SliderTrack>
<SliderThumb />
</Slider>
</GridItem>
))}
</Grid>
</Box>
</Collapse>
</Box>
);
});
export default Tweakbox;

View File

@@ -0,0 +1,24 @@
import * as React from 'react';
function ControlPanel() {
return (
<div className="control-panel">
<p>
Data source:{' '}
<a href="https://en.wikipedia.org/wiki/List_of_United_States_cities_by_population">
Wikipedia
</a>
</p>
<div className="source-link">
<a
href="https://github.com/visgl/react-map-gl/tree/8.0-release/examples/mapbox/controls"
target="_new"
>
View Code
</a>
</div>
</div>
);
}
export default React.memo(ControlPanel);

View File

@@ -0,0 +1,22 @@
[
{"city":"New York","population":"8,335,897","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Above_Gotham.jpg/240px-Above_Gotham.jpg","state":"New York","latitude":40.7128,"longitude":-74.0060},
{"city":"Los Angeles","population":"3,822,238","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/5/57/LA_Skyline_Mountains2.jpg/240px-LA_Skyline_Mountains2.jpg","state":"California","latitude":34.0522,"longitude":-118.2437},
{"city":"Long Beach","population":"456,062","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/d/d9/Long_Beach_skyline_from_Shoreline_Village.jpg/240px-Long_Beach_skyline_from_Shoreline_Village.jpg","state":"California","latitude":33.7701,"longitude":-118.1937},
{"city":"Seattle","population":"749,256","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/SeattleI5Skyline.jpg/240px-SeattleI5Skyline.jpg","state":"Washington","latitude":47.6062,"longitude":-122.3321},
{"city":"San Francisco","population":"808,437","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/6/6a/San_Francisco_skyline_from_Coit_Tower.jpg/240px-San_Francisco_skyline_from_Coit_Tower.jpg","state":"California","latitude":37.7749,"longitude":-122.4194},
{"city":"San Diego","population":"1,386,932","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/5/53/US_Navy_110604-N-NS602-574_Navy_and_Marine_Corps_personnel%2C_along_with_community_leaders_from_the_greater_San_Diego_area_come_together_to_commemora.jpg/240px-US_Navy_110604-N-NS602-574_Navy_and_Marine_Corps_personnel%2C_along_with_community_leaders_from_the_greater_San_Diego_area_come_together_to_commemora.jpg","state":"California","latitude":32.7157,"longitude":-117.1611},
{"city":"Norfolk","population":"235,089","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Norfolk_Skyline_from_Portsmouth.jpg/240px-Norfolk_Skyline_from_Portsmouth.jpg","state":"Virginia","latitude":36.8508,"longitude":-76.2859},
{"city":"Miami","population":"449,514","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/4/4b/Miami_skyline_201807_cat.jpg/240px-Miami_skyline_201807_cat.jpg","state":"Florida","latitude":25.7617,"longitude":-80.1918},
{"city":"Boston","population":"675,647","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Boston_skyline_and_Boston_Harbor.jpg/240px-Boston_skyline_and_Boston_Harbor.jpg","state":"Massachusetts","latitude":42.3601,"longitude":-71.0589},
{"city":"Baltimore","population":"585,708","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/Baltimore_Skyline.jpg/240px-Baltimore_Skyline.jpg","state":"Maryland","latitude":39.2904,"longitude":-76.6122},
{"city":"Charleston","population":"151,612","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/a/a7/Charleston_SC_Skyline.jpg/240px-Charleston_SC_Skyline.jpg","state":"South Carolina","latitude":32.7765,"longitude":-79.9311},
{"city":"Savannah","population":"147,780","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/4/40/Savannah_GA%2C_River_Street.jpg/240px-Savannah_GA%2C_River_Street.jpg","state":"Georgia","latitude":32.0809,"longitude":-81.0912},
{"city":"Tampa","population":"403,364","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Tampa_skyline_from_South%2C_2022.jpg/240px-Tampa_skyline_from_South%2C_2022.jpg","state":"Florida","latitude":27.9506,"longitude":-82.4572},
{"city":"Mobile","population":"187,041","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/Mobile_skyline_from_Mobile_River.jpg/240px-Mobile_skyline_from_Mobile_River.jpg","state":"Alabama","latitude":30.6954,"longitude":-88.0399},
{"city":"Anchorage","population":"288,121","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/5/55/Anchorage_skyline_and_susitna.jpg/240px-Anchorage_skyline_and_susitna.jpg","state":"Alaska","latitude":61.2181,"longitude":-149.9003},
{"city":"Portland","population":"68,408","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/4/48/Portland_Maine_skyline.jpg/240px-Portland_Maine_skyline.jpg","state":"Maine","latitude":43.6591,"longitude":-70.2568},
{"city":"Honolulu","population":"349,547","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/1/10/Honolulu_and_Diamond_Head.jpg/240px-Honolulu_and_Diamond_Head.jpg","state":"Hawaii","latitude":21.3069,"longitude":-157.8583},
{"city":"New Orleans","population":"376,971","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/f/fb/New_Orleans_skyline_sunset_Dec_28_2021_PANO_DSC07177-07179.jpg/240px-New_Orleans_skyline_sunset_Dec_28_2021_PANO_DSC07177-07179.jpg","state":"Louisiana","latitude":29.9511,"longitude":-90.0715},
{"city":"Jacksonville","population":"971,319","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/f/f3/Skyline_of_Jacksonville_FL%2C_South_view_20160706_1.jpg/240px-Skyline_of_Jacksonville_FL%2C_South_view_20160706_1.jpg","state":"Florida","latitude":30.3322,"longitude":-81.6557},
{"city":"Houston","population":"2,302,878","image":"https://upload.wikimedia.org/wikipedia/commons/thumb/6/60/Aerial_views_of_the_Houston%2C_Texas%2C_28005u.jpg/240px-Aerial_views_of_the_Houston%2C_Texas%2C_28005u.jpg","state":"Texas","latitude":29.7604,"longitude":-95.3698}
]
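The fixture is plain JSON, so it can be imported and filtered directly (assuming resolveJsonModule is enabled); a small sketch:

// Hypothetical typed use of the port fixture data.
import PORTS from './nautical-base-data.json';

const floridaPorts = PORTS.filter(p => p.state === 'Florida').map(p => p.city);
console.log(floridaPorts); // ['Miami', 'Tampa', 'Jacksonville']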

View File

@@ -0,0 +1,21 @@
import * as React from 'react';
const ICON = `M20.2,15.7L20.2,15.7c1.1-1.6,1.8-3.6,1.8-5.7c0-5.6-4.5-10-10-10S2,4.5,2,10c0,2,0.6,3.9,1.6,5.4c0,0.1,0.1,0.2,0.2,0.3
c0,0,0.1,0.1,0.1,0.2c0.2,0.3,0.4,0.6,0.7,0.9c2.6,3.1,7.4,7.6,7.4,7.6s4.8-4.5,7.4-7.5c0.2-0.3,0.5-0.6,0.7-0.9
C20.1,15.8,20.2,15.8,20.2,15.7z`;
const pinStyle = {
cursor: 'pointer',
fill: '#d00',
stroke: 'none'
};
function Pin({size = 20}) {
return (
<svg height={size} viewBox="0 0 24 24" style={pinStyle}>
<path d={ICON} />
</svg>
);
}
export default React.memo(Pin);

View File

@@ -2,6 +2,7 @@ import { Flex } from '@chakra-ui/react';
import React from 'react';
import BuiltWithButton from '../BuiltWithButton';
import InstallButton from '../install/InstallButton.tsx';
import GithubButton from './GithubButton';
import SupportThisSiteButton from './SupportThisSiteButton';
@@ -17,6 +18,7 @@ function ToolBar({ isMobile }) {
alignItems={isMobile ? 'flex-start' : 'flex-end'}
pb={4}
>
<InstallButton />
<SupportThisSiteButton />
<GithubButton />
<BuiltWithButton />

View File

@@ -6,7 +6,7 @@ import { useIsMobile } from '../components/contexts/MobileContext';
function Content({ children }) {
const isMobile = useIsMobile();
return (
<Flex flexDirection="column" w="100%" h="100vh" p={!isMobile ? 4 : 1}>
<Flex flexDirection="column" w="100%" h="100vh">
{children}
</Flex>
);

View File

@@ -10,16 +10,16 @@ export default function Hero() {
const isMobile = useIsMobile();
return (
<Box p={2}>
<Box p={2} mt={2}>
<Box>
<Heading
textAlign={isMobile ? 'left' : 'right'}
minWidth="90px"
maxWidth={'220px'}
color="text.accent"
as="h3"
// as="h3"
letterSpacing={'tight'}
size="lg"
size="xl"
>
{Routes[normalizePath(pageContext.urlPathname)]?.heroLabel}
</Heading>

View File

@@ -2,6 +2,7 @@ import { observer } from 'mobx-react-lite';
import React, { useEffect, useState } from 'react';
import { Chakra } from '../components/contexts/ChakraContext';
import ComponentContext, { ComponentProvider } from '../components/contexts/ComponentContext.tsx';
import { MobileProvider } from '../components/contexts/MobileContext';
import { PageContextProvider } from '../renderer/usePageContext';
import userOptionsStore from '../stores/UserOptionsStore';
@@ -13,6 +14,7 @@ export { Layout };
const Layout = observer(({ pageContext, children }) => {
const [activeTheme, setActiveTheme] = useState<string>('darknight');
const [enabledComponent, setEnabledComponent] = useState('gpsmap');
useEffect(() => {
if (userOptionsStore.theme !== activeTheme) {
@@ -47,7 +49,9 @@ const Layout = observer(({ pageContext, children }) => {
<PageContextProvider pageContext={pageContext}>
<MobileProvider>
<Chakra theme={getTheme(activeTheme)}>
<LayoutComponent>{children}</LayoutComponent>
<ComponentProvider>
<LayoutComponent>{children}</LayoutComponent>
</ComponentProvider>
</Chakra>
</MobileProvider>
</PageContextProvider>

View File

@@ -5,7 +5,7 @@ function NavItem({ path, children, color, onClick, as, cursor }) {
return (
<Box
as={as ?? 'a'}
href={path}
href={path && path.length > 1 ? path : '/'}
mb={2}
cursor={cursor}
// ml={5}

View File

@@ -1,4 +1,4 @@
import { Box, Collapse, Grid, GridItem, useBreakpointValue } from '@chakra-ui/react';
import { Box, Collapse, Grid, GridItem, useBreakpointValue, useTheme } from '@chakra-ui/react';
import { MenuIcon } from 'lucide-react';
import { observer } from 'mobx-react-lite';
import React, { useEffect } from 'react';
@@ -18,6 +18,8 @@ const Navigation = observer(({ children, routeRegistry }) => {
const currentPath = pageContext.urlPathname || '/';
const theme = useTheme();
const getTopValue = () => {
if (!isMobile) return undefined;
if (currentPath === '/') return 12;
@@ -40,6 +42,7 @@ const Navigation = observer(({ children, routeRegistry }) => {
return (
<Grid templateColumns="1fr" templateRows="auto 1fr">
{/*this is the menu button*/}
<GridItem
p={4}
position="fixed"
@@ -53,9 +56,10 @@ const Navigation = observer(({ children, routeRegistry }) => {
<GridItem>
<MenuIcon
cursor="pointer"
color="text.accent"
w={6}
h={6}
stroke={getTheme(userOptionsStore.theme).colors.text.accent}
stroke={theme.colors.text.accent}
onClick={() => {
switch (menuState.isOpen) {
case true:

View File

@@ -15,8 +15,8 @@ export default {
},
background: {
primary: 'linear-gradient(360deg, #15171C 100%, #353A47 100%)',
// primary: 'linear-gradient(360deg, #15171C 100%, #353A47 100%)',
primary: '#15171C',
secondary: '#1B1F26',
tertiary: '#1E1E2E',
},

View File

@@ -1,4 +1,27 @@
// runs before anything else
import { registerSW } from 'virtual:pwa-register';
import UserOptionsStore from '../stores/UserOptionsStore';
UserOptionsStore.initialize();
try {
const isLocal = window.location.hostname.includes('localhost');
if (!isLocal) {
if ('serviceWorker' in navigator) {
// && !/localhost/.test(window.location)) {
registerSW();
}
// navigator.serviceWorker.register('/service-worker.js');
} else {
(async () => {
const registrations = await navigator.serviceWorker.getRegistrations();
// In local development, remove any previously installed service worker so it doesn't serve stale cached assets.
registrations.forEach(r => r.unregister());
})();
}
} catch (e) {
// fail silent
}

View File

@@ -1,7 +1,10 @@
import { Stack } from '@chakra-ui/react';
import { Box } from '@chakra-ui/react';
import React, { useEffect } from 'react';
import Chat from '../../components/chat/Chat';
import Chat from '../../components/chat/Chat.tsx';
import { useComponent } from '../../components/contexts/ComponentContext.tsx';
import { LandingComponent } from '../../components/landing-component/LandingComponent.tsx';
import ReactMap from '../../components/landing-component/Map.tsx';
import clientChatStore from '../../stores/ClientChatStore';
// renders "/"
@@ -16,9 +19,29 @@ export default function IndexPage() {
}
}, []);
const component = useComponent();
return (
<Stack direction="column" height="100%" width="100%" spacing={0}>
<Chat height="100%" width="100%" />
</Stack>
<Box height="100%" width="100%">
<LandingComponent />
<Box
display={component.enabledComponent === 'ai' ? undefined : 'none'}
width="100%"
height="100%"
overflowY="scroll"
padding={'unset'}
>
<Chat />
</Box>
<Box
display={component.enabledComponent === 'gpsmap' ? undefined : 'none'}
width="100%"
height="100%"
padding={'unset'}
>
<ReactMap visible={component.enabledComponent === 'gpsmap'} />
</Box>
</Box>
);
}

View File

@@ -28,10 +28,9 @@ const onRenderHtml: OnRenderHtmlAsync = async (pageContext): ReturnType<OnRender
<html data-theme="dark" lang="en">
<head>
<title>open-gsio</title>
<link rel="apple-touch-icon" sizes="180x180" href="/apple-touch-icon.png">
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png">
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png">
<link rel="manifest" href="/site.webmanifest">
<link rel="icon" href="/favicon.ico" sizes="48x48">
<link rel="icon" href="/favicon.svg" sizes="any" type="image/svg+xml">
<link rel="apple-touch-icon" href="/apple-touch-icon-180x180.png">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta charset="UTF-8">
<meta name="description" content="Maker Site">

View File

@@ -1,5 +1,5 @@
export default {
'/': { sidebarLabel: 'Home', heroLabel: 'gsio' },
'/': { sidebarLabel: 'Home', heroLabel: 'va-chat' },
'/connect': { sidebarLabel: 'Connect', heroLabel: 'connect' },
'/privacy-policy': {
sidebarLabel: '',

View File

@@ -1,7 +1,7 @@
export const welcome_home_text = `
# welcome!
# open-gsio
---
Please enjoy [responsibly](https://centerforresponsible.ai/the-center)
<br/>
<br/>
`;

View File

@@ -7,10 +7,15 @@ import { VitePWA } from 'vite-plugin-pwa';
// eslint-disable-next-line import/no-unresolved
import { configDefaults } from 'vitest/config';
import { getColorThemes } from './src/layout/theme/color-themes';
const prebuildPlugin = () => ({
name: 'prebuild',
config(config, { command }) {
if (command === 'build') {
console.log('Generate PWA Assets -> public/');
child_process.execSync('bun generate:pwa:assets');
console.log('Generated Sitemap -> public/sitemap.xml');
child_process.execSync('bun generate:sitemap');
console.log('Generated robots.txt -> public/robots.txt');
child_process.execSync('bun run generate:robotstxt');
@@ -21,6 +26,13 @@ const prebuildPlugin = () => ({
},
});
// eslint-disable-next-line @typescript-eslint/no-require-imports
// const PROJECT_SOURCES_HASH = sha512Dir('./src');
//
// console.log({ PROJECT_SOURCES_HASH });
const buildId = crypto.randomUUID();
export default defineConfig(({ command }) => {
return {
mode: 'production',
@@ -31,6 +43,62 @@ export default defineConfig(({ command }) => {
prerender: true,
disableAutoFullBuild: false,
}),
VitePWA({
registerType: 'autoUpdate',
injectRegister: null,
minify: true,
disable: false,
filename: 'service-worker.js',
devOptions: {
enabled: false,
navigateFallback: 'index.html',
suppressWarnings: true,
type: 'module',
},
manifest: {
name: `open-gsio`,
short_name: 'open-gsio',
display: 'standalone',
description: `open-gsio client`,
theme_color: getColorThemes().at(0)?.colors.text.accent,
background_color: getColorThemes().at(0)?.colors.background.primary,
scope: '/',
start_url: '/',
icons: [
{
src: 'pwa-64x64.png',
sizes: '64x64',
type: 'image/png',
},
{
src: 'pwa-192x192.png',
sizes: '192x192',
type: 'image/png',
},
{
src: 'pwa-512x512.png',
sizes: '512x512',
type: 'image/png',
purpose: 'any',
},
{
src: 'maskable-icon-512x512.png',
sizes: '512x512',
type: 'image/png',
purpose: 'maskable',
},
],
},
workbox: {
globPatterns: ['**/*.{js,css,html,ico,png,svg,wasm}'],
navigateFallbackDenylist: [/^\/api\//],
maximumFileSizeToCacheInBytes: 25000000,
cacheId: buildId,
cleanupOutdatedCaches: true,
clientsClaim: true,
},
}),
// PWA plugin saves money on data transfer by caching assets on the client
/*
For safari, use this script in the console to unregister the service worker.
@@ -41,22 +109,15 @@ export default defineConfig(({ command }) => {
})
})
*/
// VitePWA({
// registerType: 'autoUpdate',
// devOptions: {
// enabled: false,
// },
// manifest: {
// name: "open-gsio",
// short_name: "open-gsio",
// description: "Assistant"
// },
// workbox: {
// globPatterns: ['**/*.{js,css,html,ico,png,svg}'],
// navigateFallbackDenylist: [/^\/api\//],
// }
// })
],
workbox: {
globPatterns: ['**/*.{js,css,html,ico,png,svg,wasm}'],
navigateFallbackDenylist: [/^\/api\//],
maximumFileSizeToCacheInBytes: 25000000,
cacheId: buildId,
cleanupOutdatedCaches: true,
clientsClaim: true,
},
server: {
port: 3000,
proxy: {

View File

@@ -0,0 +1,3 @@
{
"name": "@open-gsio/analytics-worker"
}

View File

@@ -1,6 +1,7 @@
import { ServerCoordinator } from '@open-gsio/coordinators';
import Router from '@open-gsio/router';
import { error } from 'itty-router';
export { ServerCoordinator };
export default Router.Router();
export default Router.Router().catch(error);

View File

@@ -1,5 +1,5 @@
{
"$schema": "https://workers.cloudflare.com/sites/config-schema.json",
"$schema": "../../../node_modules/wrangler/config-schema.json",
"name": "open-gsio",
"assets": {
"binding": "ASSETS",
@@ -20,9 +20,9 @@
{
"binding": "KV_STORAGE",
// $ npx wrangler kv namespace create open-gsio
"id": "placeholderId",
"id": "",
// $ npx wrangler kv namespace create open-gsio --preview
"preview_id": "placeholderIdPreview"
"preview_id": ""
}
],
"migrations": [

View File

@@ -52,13 +52,15 @@ export function createRouter() {
// })
.get('/api/metrics*', async (r, e, c) => {
const { metricsService } = createRequestContext(e, c);
return metricsService.handleMetricsRequest(r);
return new Response('ok');
// const { metricsService } = createRequestContext(e, c);
// return metricsService.handleMetricsRequest(r);
})
.post('/api/metrics*', async (r, e, c) => {
const { metricsService } = createRequestContext(e, c);
return metricsService.handleMetricsRequest(r);
return new Response('ok');
// const { metricsService } = createRequestContext(e, c);
// return metricsService.handleMetricsRequest(r);
})
// renders the app

View File

@@ -15,7 +15,12 @@ find . -name ".wrangler" -type d -prune -exec rm -rf {} \;
# Remove build directories
find . -name "dist" -type d -prune -exec rm -rf {} \;
find . -name "build" -type d -prune -exec rm -rf {} \;
#-----
# crates/yachtpit uses a directory called build for staging assets so it can't be removed
#find . -name "build" -type d -prune -exec rm -rf {} \;
#-----
find . -name "fonts" -type d -prune -exec rm -rf {} \;

View File

@@ -2,8 +2,8 @@
"name": "@open-gsio/server",
"type": "module",
"scripts": {
"dev": "bun src/server/server.ts",
"build": "bun run src/server/build.ts"
"dev": "bun --watch src/server/server.ts",
"build": "bun ./src/server/build.ts"
},
"devDependencies": {
"@open-gsio/env": "workspace:*",
@@ -24,8 +24,11 @@
"mobx-state-tree": "^6.0.1",
"moo": "^0.5.2",
"typescript": "^5.7.2",
"vike": "0.4.193",
"vike": "0.4.235",
"vite": "^7.0.0",
"zod": "^3.23.8",
"dotenv": "^16.5.0"
"dotenv": "^17.0.0",
"bun": "^1.2.17",
"@types/bun": "^1.2.17"
}
}

View File

@@ -0,0 +1,53 @@
import { readdir } from 'node:fs/promises';
export const assetHandler = {
ASSETS: {
/**
* Fetches the requested static asset from local dist
*
* @param {Request} request - The incoming Fetch API Request object.
* @returns {Promise<Response>} A Promise that resolves with the Response for the requested asset,
* or a 404 Response if the asset is not found or an error occurs.
*/
async fetch(request: Request): Promise<Response> {
// Parse the incoming request URL
const originalUrl = new URL(request.url);
const url = new URL(request.url);
// Fixed path: go up to packages level, then to client/public
const PUBLIC_DIR = new URL('../../../client/public/', import.meta.url).pathname;
let publicFiles: string[] = [];
try {
publicFiles = await readdir(PUBLIC_DIR, { recursive: true });
} catch (error) {
console.warn(`Could not read public directory ${PUBLIC_DIR}:`, error);
// Continue without public files list
}
// Get the filename from pathname and remove any path traversal attempts
const filename = url.pathname.split('/').pop()?.replace(/\.\./g, '') || '';
const isStatic = publicFiles.some(file => file === filename);
if (url.pathname === '/') {
url.pathname = '/index.html';
} else if (isStatic && !url.pathname.startsWith('/static')) {
// leave it alone
} else if (isStatic) {
url.pathname = `/static${url.pathname}`;
}
// Fixed path: go up to packages level, then to client/dist/client
const dist = new URL('../../../client/dist/client', import.meta.url).pathname;
try {
return new Response(Bun.file(`${dist}${url.pathname}`));
} catch (error) {
// Log the error with the original requested path
console.error(`Error reading asset from path ${originalUrl.pathname}:`, error);
return new Response(null, { status: 404 });
}
},
},
};

View File

@@ -1,9 +1,10 @@
// builds the server bundle into JavaScript
await Bun.build({
entrypoints: ['./server.ts'],
outdir: '../dist',
entrypoints: [import.meta.dir + '/server.ts'],
outdir: './dist', // Changed from '../dist' to './dist'
minify: true,
target: 'node',
splitting: true,
format: 'esm', // Explicitly set ESM format
throw: true,
external: ['@open-gsio/client'], // Mark client as external to avoid bundling issues
});

View File

@@ -0,0 +1,61 @@
import ServerCoordinator from '@open-gsio/coordinators/src/ServerCoordinatorBun.ts';
import Router from '@open-gsio/router';
import { config } from 'dotenv';
import type { RequestLike } from 'itty-router';
import { error } from 'itty-router';
import { BunSqliteKVNamespace } from '../storage/BunSqliteKVNamespace.ts';
import { assetHandler } from './asset-handler.ts';
export function createServer() {
const router = Router.Router();
config({
path: '.env',
debug: true,
// defaults: {
// EVENTSOURCE_HOST: "https://eventsource.seemueller.io",
// }
});
// bootstrap the root path of the existing router to the asset handler defined here
router.get('/', async (request: RequestLike, env: any) => {
return await assetHandler.ASSETS.fetch(request as Request);
});
const server = {
port: 3003,
fetch: async (request: RequestLike, env: { [key: string]: any }, ctx: any) => {
// console.log("[trace] request: ", request.method, request.url, "headers: ", request.headers.get("referer"), "body: ", request.body, "env: ", env, "ctx: ", ctx, "")
env['SERVER_COORDINATOR'] = ServerCoordinator;
env['ASSETS'] = assetHandler.ASSETS;
env['EVENTSOURCE_HOST'] = process.env.EVENTSOURCE_HOST;
env['GROQ_API_KEY'] = process.env.GROQ_API_KEY;
env['ANTHROPIC_API_KEY'] = process.env.ANTHROPIC_API_KEY;
env['FIREWORKS_API_KEY'] = process.env.FIREWORKS_API_KEY;
env['XAI_API_KEY'] = process.env.XAI_API_KEY;
env['CEREBRAS_API_KEY'] = process.env.CEREBRAS_API_KEY;
env['CLOUDFLARE_API_KEY'] = process.env.CLOUDFLARE_API_KEY;
env['CLOUDFLARE_ACCOUNT_ID'] = process.env.CLOUDFLARE_ACCOUNT_ID;
env['MLX_API_KEY'] = process.env.MLX_API_KEY;
env['OLLAMA_API_KEY'] = process.env.OLLAMA_API_KEY;
env['KV_STORAGE'] = new BunSqliteKVNamespace({ namespace: 'open-gsio' });
try {
const controller = new AbortController();
const timeout = new Promise((_, reject) =>
setTimeout(() => {
controller.abort();
reject(new Error('Request timeout after 5s'));
}, 5000),
);
return await Promise.race([router.fetch(request, env, ctx).catch(error), timeout]);
} catch (e) {
console.error('Error handling request:', e);
return new Response('Server Error', { status: 500 });
}
},
};
return { server, router, assetHandler };
}

View File

@@ -1,111 +1,6 @@
import { readdir } from 'node:fs/promises';
import { createServer } from './create-server.ts';
import ServerCoordinator from '@open-gsio/coordinators/src/ServerCoordinatorBun.ts';
import Router from '@open-gsio/router';
import { config } from 'dotenv';
import type { RequestLike } from 'itty-router';
// creates a bun server with the itty router
const { server } = createServer();
import { BunSqliteKVNamespace } from '../storage/BunSqliteKVNamespace.ts';
const router = Router.Router();
config({
path: '.env',
debug: true,
// defaults: {
// EVENTSOURCE_HOST: "https://eventsource.seemueller.io",
// }
});
// bootstrap the root path of the existing router to the asset handler defined here
router.get('/', async (request: RequestLike, env: any) => {
return await assetHandler.ASSETS.fetch(request as Request);
});
export default {
port: 3003,
fetch: async (request: RequestLike, env: { [key: string]: any }, ctx: any) => {
// console.log("[trace] request: ", request.method, request.url, "headers: ", request.headers.get("referer"), "body: ", request.body, "env: ", env, "ctx: ", ctx, "")
env['SERVER_COORDINATOR'] = ServerCoordinator;
env['ASSETS'] = assetHandler.ASSETS;
env['EVENTSOURCE_HOST'] = process.env.EVENTSOURCE_HOST;
env['GROQ_API_KEY'] = process.env.GROQ_API_KEY;
env['ANTHROPIC_API_KEY'] = process.env.ANTHROPIC_API_KEY;
env['FIREWORKS_API_KEY'] = process.env.FIREWORKS_API_KEY;
env['XAI_API_KEY'] = process.env.XAI_API_KEY;
env['CEREBRAS_API_KEY'] = process.env.CEREBRAS_API_KEY;
env['CLOUDFLARE_API_KEY'] = process.env.CLOUDFLARE_API_KEY;
env['CLOUDFLARE_ACCOUNT_ID'] = process.env.CLOUDFLARE_ACCOUNT_ID;
env['MLX_API_KEY'] = process.env.MLX_API_KEY;
env['OLLAMA_API_KEY'] = process.env.OLLAMA_API_KEY;
env['KV_STORAGE'] = new BunSqliteKVNamespace({ namespace: 'open-gsio' });
try {
const controller = new AbortController();
const timeout = new Promise((_, reject) =>
setTimeout(() => {
controller.abort();
reject(new Error('Request timeout after 5s'));
}, 5000),
);
return await Promise.race([router.fetch(request, env, ctx), timeout]);
} catch (e) {
console.error('Error handling request:', e);
return new Response('Server Error', { status: 500 });
}
},
};
export const assetHandler = {
ASSETS: {
/**
* Fetches the requested static asset from local dist
*
* @param {Request} request - The incoming Fetch API Request object.
* @returns {Promise<Response>} A Promise that resolves with the Response for the requested asset,
* or a 404 Response if the asset is not found or an error occurs.
*/
async fetch(request: Request): Promise<Response> {
// Serialize incoming request URL
const originalUrl = new URL(request.url);
const url = new URL(request.url);
// Fixed path: go up to packages level, then to client/public
const PUBLIC_DIR = new URL('../../../client/public/', import.meta.url).pathname;
let publicFiles: string[] = [];
try {
publicFiles = await readdir(PUBLIC_DIR, { recursive: true });
} catch (error) {
console.warn(`Could not read public directory ${PUBLIC_DIR}:`, error);
// Continue without public files list
}
// Get the filename from pathname and remove any path traversal attempts
const filename = url.pathname.split('/').pop()?.replace(/\.\./g, '') || '';
const isStatic = publicFiles.some(file => file === filename);
if (url.pathname === '/') {
url.pathname = '/index.html';
} else if (isStatic && !url.pathname.startsWith('/static')) {
// leave it alone
} else if (isStatic) {
url.pathname = `/static${url.pathname}`;
}
// Fixed path: go up to packages level, then to client/dist/client
const dist = new URL('../../../client/dist/client', import.meta.url).pathname;
try {
  const file = Bun.file(`${dist}${url.pathname}`);
  // Bun.file() is lazy and does not throw for a missing file, so check explicitly
  // instead of relying on the catch block below.
  if (!(await file.exists())) {
    return new Response(null, { status: 404 });
  }
  return new Response(file);
} catch (error) {
  // Log the error with the original requested path
  console.error(`Error reading asset from path ${originalUrl.pathname}:`, error);
  return new Response(null, { status: 404 });
}
},
},
};
// Re-export the underlying server as a named export; the module's default export
// remains the Bun fetch handler object defined above (a second default export
// would be a syntax error).
export { server };
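The handler above hands a BunSqliteKVNamespace to the rest of the stack as KV_STORAGE; the ChatService diff further below only ever calls get and put (with an optional expirationTtl) on it. A minimal sketch of that assumed Cloudflare-KV-style surface, illustrative only:

// Hypothetical shape inferred from how KV_STORAGE is used further below; the real
// BunSqliteKVNamespace may expose more methods.
interface KVNamespaceLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
}

async function kvRoundTrip(kv: KVNamespaceLike) {
  // Cache a value for 24 hours, then read it back.
  await kv.put('supportedModels', JSON.stringify([]), { expirationTtl: 60 * 60 * 24 });
  const cached = await kv.get('supportedModels');
  return cached ? JSON.parse(cached) : [];
}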

View File

@@ -4,9 +4,12 @@ import { configDefaults } from 'vitest/config';
export default defineConfig(({ command }) => {
return {
build: {
cssMinify: 'esbuild',
},
test: {
globals: true,
environment: 'jsdom',
environment: 'node',
registerNodeLoader: false,
// setupFiles: ['./src/test/setup.ts'],
exclude: [...configDefaults.exclude, 'dist/**', '.open-gsio/**'],
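Switching the default test environment from jsdom to node does not lock DOM-dependent suites out; Vitest supports a per-file override via a docblock at the top of the test file. A small illustrative example (not a file from this repo):

/** @vitest-environment jsdom */
import { describe, expect, it } from 'vitest';

describe('dom-dependent suite', () => {
  it('still sees a document under the per-file jsdom override', () => {
    expect(typeof document).toBe('object');
  });
});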

View File

@@ -39,6 +39,6 @@
"vitest": "^3.1.4",
"wrangler": "^4.18.0",
"zod": "^3.23.8",
"dotenv": "^16.5.0"
"dotenv": "^17.0.0"
}
}

View File

@@ -37,6 +37,18 @@ vi.mock('../../lib/handleStreamData', () => ({
default: vi.fn().mockReturnValue(() => {}),
}));
// Mock ProviderRepository
vi.mock('@open-gsio/ai/providers/_ProviderRepository.ts', () => {
return {
ProviderRepository: class {
constructor() {}
getProviders() {
return [{ name: 'openai', key: 'test-key', endpoint: 'https://api.openai.com/v1' }];
}
},
};
});
describe('ChatService', () => {
let chatService: any;
let mockEnv: any;
@@ -221,6 +233,105 @@ describe('ChatService', () => {
Response.json = originalResponseJson;
localService.getSupportedModels = originalGetSupportedModels;
});
it('should test the cache refresh mechanism when providers change', async () => {
// This test verifies that the cache is refreshed when providers change
// and that the cache is used when providers haven't changed.
// Mock data for the first scenario (cache hit)
const cachedModels = [
{ id: 'model-1', provider: 'openai' },
{ id: 'model-2', provider: 'openai' },
];
const providersSignature = JSON.stringify(['openai']);
// Mock KV_STORAGE for the first scenario (cache hit)
const mockKVStorage = {
get: vi.fn().mockImplementation(key => {
if (key === 'supportedModels') return Promise.resolve(JSON.stringify(cachedModels));
if (key === 'providersSignature') return Promise.resolve(providersSignature);
return Promise.resolve(null);
}),
put: vi.fn().mockResolvedValue(undefined),
};
// The ProviderRepository is already mocked at the top of the file
// Create a service instance with the mocked environment
const service = ChatService.create({
maxTokens: 2000,
systemPrompt: 'You are a helpful assistant.',
});
// Set up the environment with the mocked KV_STORAGE
service.setEnv({
...mockEnv,
KV_STORAGE: mockKVStorage,
});
// Scenario 1: Cache hit - providers haven't changed
const response1 = await service.getSupportedModels();
const data1 = await response1.json();
// Verify the cache was used
expect(mockKVStorage.get).toHaveBeenCalledWith('supportedModels');
expect(mockKVStorage.get).toHaveBeenCalledWith('providersSignature');
expect(data1).toEqual(cachedModels);
expect(mockKVStorage.put).not.toHaveBeenCalled();
// Reset the mock calls for the next scenario
vi.clearAllMocks();
// Scenario 2: Cache miss - providers have changed
// Update the mock to return a different providers signature
mockKVStorage.get.mockImplementation(key => {
if (key === 'supportedModels') {
return Promise.resolve(JSON.stringify(cachedModels));
}
if (key === 'providersSignature') {
// Different signature
return Promise.resolve(JSON.stringify(['openai', 'anthropic']));
}
return Promise.resolve(null);
});
// Mock the provider models fetching to avoid actual API calls
const mockModels = [
{ id: 'new-model-1', provider: 'openai' },
{ id: 'new-model-2', provider: 'openai' },
];
// Mock OpenAI instance for the second scenario
const mockOpenAIInstance = {
models: {
list: vi.fn().mockResolvedValue({
data: mockModels,
}),
retrieve: vi.fn().mockImplementation(id => {
return Promise.resolve({ id, provider: 'openai' });
}),
},
};
// Update the OpenAI mock
vi.mocked(OpenAI).mockImplementation(() => mockOpenAIInstance as any);
// Call getSupportedModels again
const response2 = await service.getSupportedModels();
// Verify the cache was refreshed
expect(mockKVStorage.get).toHaveBeenCalledWith('supportedModels');
expect(mockKVStorage.get).toHaveBeenCalledWith('providersSignature');
expect(mockKVStorage.put).toHaveBeenCalledTimes(2); // Called twice: once for models, once for signature
expect(mockKVStorage.put).toHaveBeenCalledWith('supportedModels', expect.any(String), {
expirationTtl: 60 * 60 * 24,
});
expect(mockKVStorage.put).toHaveBeenCalledWith('providersSignature', expect.any(String), {
expirationTtl: 60 * 60 * 24,
});
// No need to restore mocks as we're using vi.mock at the module level
});
});
// TODO: Fix this test suite

View File

@@ -118,11 +118,19 @@ const ChatService = types
const useCache = true;
// Create a signature of the current providers
const providerRepo = new ProviderRepository(self.env);
const providers = providerRepo.getProviders();
const currentProvidersSignature = JSON.stringify(providers.map(p => p.name).sort());
if (useCache) {
// ----- 1. Try cached value ---------------------------------------------
try {
const cached = yield self.env.KV_STORAGE.get('supportedModels');
if (cached) {
const cachedSignature = yield self.env.KV_STORAGE.get('providersSignature');
// Check if cache exists and providers haven't changed
if (cached && cachedSignature && cachedSignature === currentProvidersSignature) {
const parsed = JSON.parse(cached as string);
if (Array.isArray(parsed) && parsed.length > 0) {
logger.info('Cache hit returning supportedModels from KV');
@@ -130,6 +138,11 @@ const ChatService = types
}
logger.warn('Cache entry malformed refreshing');
throw new Error('Malformed cache entry');
} else if (
cached &&
(!cachedSignature || cachedSignature !== currentProvidersSignature)
) {
logger.info('Providers changed refreshing cache');
}
} catch (err) {
logger.warn('Error reading/parsing supportedModels cache', err);
@@ -137,8 +150,6 @@ const ChatService = types
}
// ----- 2. Build fresh list ---------------------------------------------
const providerRepo = new ProviderRepository(self.env);
const providers = providerRepo.getProviders();
const providerModels = new Map<string, any[]>();
const modelMeta = new Map<string, any>();
@@ -150,19 +161,29 @@ const ChatService = types
const openai = new OpenAI({ apiKey: provider.key, baseURL: provider.endpoint });
// 2a. List models
const basicFilters = (model: any) => {
return (
!model.id.includes('whisper') &&
!model.id.includes('flux') &&
!model.id.includes('ocr') &&
!model.id.includes('tts') &&
!model.id.includes('guard')
);
}; // 2a. List models
try {
const listResp: any = yield openai.models.list(); // < async
const models = 'data' in listResp ? listResp.data : listResp;
providerModels.set(
provider.name,
models.filter(
(mdl: any) =>
!mdl.id.includes('whisper') &&
!mdl.id.includes('tts') &&
!mdl.id.includes('guard'),
),
models.filter((mdl: any) => {
if ('supports_chat' in mdl && mdl.supports_chat) {
return basicFilters(mdl);
} else if ('supports_chat' in mdl && !mdl.supports_chat) {
return false;
}
return basicFilters(mdl);
}),
);
// 2b. Retrieve metadata
@@ -195,11 +216,20 @@ const ChatService = types
// ----- 4. Cache fresh list ---------------------------------------------
try {
// Store the models
yield self.env.KV_STORAGE.put(
'supportedModels',
JSON.stringify(resultArr),
{ expirationTtl: 60 * 60 * 24 }, // 24
{ expirationTtl: 60 * 60 * 24 }, // 24 hours
);
// Store the providers signature
yield self.env.KV_STORAGE.put(
'providersSignature',
currentProvidersSignature,
{ expirationTtl: 60 * 60 * 24 }, // 24 hours
);
logger.info('supportedModels cache refreshed');
} catch (err) {
logger.error('KV put failed for supportedModels', err);
@@ -298,7 +328,8 @@ const ChatService = types
);
}
if (message.includes('404')) {
throw new ClientError(`Something went wrong, try again.`, 413, {});
console.log(message);
throw new ClientError(`Something went wrong, try again.`, 404, {});
}
throw error;
}
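Taken together, these hunks implement a signature-guarded cache: the sorted provider names are serialized into a signature, and the cached model list is only trusted while the stored signature matches the current one. A condensed sketch of the pattern, stripped of the mobx-state-tree generator syntax the real service uses (names here are illustrative):

type KVLike = {
  get(key: string): Promise<string | null>;
  put(key: string, value: string, opts?: { expirationTtl?: number }): Promise<void>;
};

async function getSupportedModelsCached(
  kv: KVLike,
  providers: { name: string }[],
  buildFreshList: () => Promise<unknown[]>,
): Promise<unknown[]> {
  // Deterministic signature of the configured providers.
  const signature = JSON.stringify(providers.map(p => p.name).sort());

  const [cached, cachedSignature] = await Promise.all([
    kv.get('supportedModels'),
    kv.get('providersSignature'),
  ]);

  // Serve the cache only when it exists and the provider set is unchanged.
  if (cached && cachedSignature === signature) {
    const parsed = JSON.parse(cached);
    if (Array.isArray(parsed) && parsed.length > 0) return parsed;
  }

  // Otherwise rebuild, then persist both the list and the signature for 24 hours.
  const fresh = await buildFreshList();
  await kv.put('supportedModels', JSON.stringify(fresh), { expirationTtl: 60 * 60 * 24 });
  await kv.put('providersSignature', signature, { expirationTtl: 60 * 60 * 24 });
  return fresh;
}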