run format

This commit is contained in:
geoffsee
2025-06-24 17:31:15 -04:00
committed by Geoff Seemueller
parent 02c3253343
commit f76301d620
17 changed files with 180 additions and 199 deletions

View File

@@ -1,60 +1,60 @@
Legacy Development History
---
## Legacy Development History
The source code of open-gsio was drawn from the source code of my personal website. That commit history was contaminated early on with secrets. `open-gsio` is a refinement of those sources. A total of 367 commits were submitted to the main branch of the upstream source repository between August 2024 and May 2025.
#### **May 2025**
* Added **seemueller.ai** link to UI sidebar.
* Global config/markdown guide cleanup; patched a critical forgotten bug.
- Added **seemueller.ai** link to UI sidebar.
- Global config/markdown guide cleanup; patched a critical forgotten bug.
#### **Apr 2025**
* **CI/CD overhaul**: auto-deploy to dev & staging, Bun adoption as package manager, streamlined blocklist workflow (now auto-updates via VPN blocker).
* New 404 error page; multiple robots.txt and editor-resize fixes; removed dead/duplicate code.
- **CI/CD overhaul**: auto-deploy to dev & staging, Bun adoption as package manager, streamlined blocklist workflow (now auto-updates via VPN blocker).
- New 404 error page; multiple robots.txt and editor-resize fixes; removed dead/duplicate code.
#### **Mar 2025**
* Introduced **model-specific `max_tokens`** handling and plugged in **Cloudflare AI models** for testing.
* Bundle size minimised (re-enabled minifier, smaller vendor set).
- Introduced **model-specific `max_tokens`** handling and plugged in **Cloudflare AI models** for testing.
- Bundle size minimised (re-enabled minifier, smaller vendor set).
#### **Feb 2025**
* **Full theme system** (runtime switching, Centauri theme, server-saved prefs).
* Tightened MobX typing for messages; responsive breakpoints & input scaling repaired.
* Dropped legacy document API; general folder restructure.
- **Full theme system** (runtime switching, Centauri theme, server-saved prefs).
- Tightened MobX typing for messages; responsive breakpoints & input scaling repaired.
- Dropped legacy document API; general folder restructure.
#### **Jan 2025**
* **Rate-limit middleware**, larger KV/R2 storage quota.
* Switched default model → *llama-v3p1-70b-instruct*; pluggable model handlers.
* Added **KaTeX fonts** & **Marked.js** for rich math/markdown.
* Fireworks key rotation; deprecated Google models removed.
- **Rate-limit middleware**, larger KV/R2 storage quota.
- Switched default model → _llama-v3p1-70b-instruct_; pluggable model handlers.
- Added **KaTeX fonts** & **Marked.js** for rich math/markdown.
- Fireworks key rotation; deprecated Google models removed.
#### **Dec 2024**
* Major package upgrades; **CodeHighlighter** now supports HTML/JSX/TS(X)/Zig.
* Refactored streaming + markdown renderer; Android-specific padding fixes.
* Reset default chat model to **gpt-4o**; welcome message & richer search-intent logic.
- Major package upgrades; **CodeHighlighter** now supports HTML/JSX/TS(X)/Zig.
- Refactored streaming + markdown renderer; Android-specific padding fixes.
- Reset default chat model to **gpt-4o**; welcome message & richer search-intent logic.
#### **Nov 2024**
* **Fireworks API** + agent server; first-class support for **Anthropic** & **GROQ** models (incl. attachments).
* **VPN blocker** shipped with CIDR validation and dedicated GitHub Action.
* Live search buffering, feedback modal, smarter context preprocessing.
- **Fireworks API** + agent server; first-class support for **Anthropic** & **GROQ** models (incl. attachments).
- **VPN blocker** shipped with CIDR validation and dedicated GitHub Action.
- Live search buffering, feedback modal, smarter context preprocessing.
#### **Oct 2024**
* Rolled out **image generation** + picker for image models.
* Deployed **ETH payment processor** & deposit-address flow.
* Introduced few-shot prompting library; analytics worker refactor; Halloween prompt.
* Extensive mobile-UX polish and bundling/worker config updates.
- Rolled out **image generation** + picker for image models.
- Deployed **ETH payment processor** & deposit-address flow.
- Introduced few-shot prompting library; analytics worker refactor; Halloween prompt.
- Extensive mobile-UX polish and bundling/worker config updates.
#### **Sep 2024**
* End-to-end **math rendering** (KaTeX) and **GitHub-flavoured markdown**.
* Migrated chat state to **MobX**; launched analytics service & metrics worker.
* Switched build minifier to **esbuild**; tokenizer limits enforced; gradient sidebar & cookie-consent manager added.
- End-to-end **math rendering** (KaTeX) and **GitHub-flavoured markdown**.
- Migrated chat state to **MobX**; launched analytics service & metrics worker.
- Switched build minifier to **esbuild**; tokenizer limits enforced; gradient sidebar & cookie-consent manager added.
#### **Aug 2024**
* **Initial MVP**: iMessage-style chat UI, websocket prototype, Google Analytics, Cloudflare bindings, base worker-site scaffold.
- **Initial MVP**: iMessage-style chat UI, websocket prototype, Google Analytics, Cloudflare bindings, base worker-site scaffold.

View File

@@ -1,29 +1,29 @@
# open-gsio
[![Tests](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml/badge.svg)](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
<br/>
<p align="center">
<img src="https://github.com/user-attachments/assets/620d2517-e7be-4bb0-b2b7-3aa0cba37ef0" width="250" />
</p>
This is a full-stack Conversational AI.
This is a full-stack Conversational AI.
## Table of Contents
- [Installation](#installation)
- [Deployment](#deployment)
- [Local Inference](#local-inference)
- [mlx-omni-server (default)](#mlx-omni-server)
- [Adding models](#adding-models-for-local-inference-apple-silicon)
- [Ollama](#ollama)
- [Adding models](#adding-models-for-local-inference-ollama)
- [mlx-omni-server (default)](#mlx-omni-server)
- [Adding models](#adding-models-for-local-inference-apple-silicon)
- [Ollama](#ollama)
- [Adding models](#adding-models-for-local-inference-ollama)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
- [Acknowledgments](#acknowledgments)
- [License](#license)
## Installation
1. `bun i && bun test:all`
@@ -33,29 +33,34 @@ This is a full-stack Conversational AI.
> Note: it should be possible to use pnpm in place of bun.
## Deployment
1. Set up the KV_STORAGE binding in `packages/server/wrangler.jsonc`
1. [Add keys in secrets.json](https://console.groq.com/keys)
1. [Add keys in secrets.json](https://console.groq.com/keys)
1. Run `bun run deploy && bun run deploy:secrets && bun run deploy`
> Note: Subsequent deployments should omit `bun run deploy:secrets`
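For step 1, a minimal sketch of what the `KV_STORAGE` binding in `packages/server/wrangler.jsonc` could look like is shown below; the namespace IDs are placeholders you would create in your own Cloudflare account, not values taken from this repository:

```jsonc
{
  "kv_namespaces": [
    {
      "binding": "KV_STORAGE",              // name the Worker reads from its env
      "id": "<your-kv-namespace-id>",       // placeholder: create via the dashboard or Wrangler
      "preview_id": "<your-preview-namespace-id>"
    }
  ]
}
```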
## Local Inference
> Local inference is supported for Ollama and mlx-omni-server. OpenAI-compatible servers can be used by overriding OPENAI_API_KEY and OPENAI_API_ENDPOINT.
> Local inference is supported for Ollama and mlx-omni-server. OpenAI-compatible servers can be used by overriding OPENAI_API_KEY and OPENAI_API_ENDPOINT.
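As an illustration of that override (the values below are placeholders, not taken from this repository), pointing the stack at any OpenAI-compatible server is just a matter of setting the two variables before restarting the dev server, for example in a `.dev.vars` file or the shell:

```bash
# Placeholder values: substitute the key and base URL your local server expects.
export OPENAI_API_KEY="sk-local-placeholder"
export OPENAI_API_ENDPOINT="http://localhost:10240/v1"
# then restart the server
bun run server:dev
```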
### mlx-omni-server
(default) (Apple Silicon Only)
~~~bash
```bash
# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
bun run openai:local mlx-omni-server # Start mlx-omni-server
bun run openai:local:configure # Configure connection
bun run server:dev # Restart server
~~~
```
#### Adding models for local inference (Apple Silicon)
~~~bash
```bash
# ensure mlx-omni-server is running
# See https://huggingface.co/mlx-community for available models
@@ -67,21 +72,22 @@ curl http://localhost:10240/v1/chat/completions \
\"model\": \"$MODEL_TO_ADD\",
\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
}"
~~~
```
### Ollama
~~~bash
```bash
bun run openai:local ollama # Start ollama server
bun run openai:local:configure # Configure connection
bun run server:dev # Restart server
~~~
```
#### Adding models for local inference (ollama)
~~~bash
```bash
# See https://ollama.com/library for available models
# Use the Ollama web UI at http://localhost:8080
~~~
```
## Testing
@@ -89,44 +95,44 @@ Tests are located in `__tests__` directories next to the code they test. Testing
> `bun test:all` will run all tests
## Troubleshooting
1. `bun clean`
1. `bun i`
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue
1. Submit an issue
## History
History
---
A high-level overview for the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](./LEGACY.md).
## Acknowledgments
I would like to express gratitude to the following projects, libraries, and individuals that have contributed to making open-gsio possible:
- [TypeScript](https://www.typescriptlang.org/) - Primary programming language
- [React](https://react.dev/) - UI library for building the frontend
- [Vike](https://vike.dev/) - Framework for server-side rendering and routing
- [Cloudflare Workers](https://developers.cloudflare.com/workers/) - Serverless execution environment
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- [itty-router](https://github.com/kwhitley/itty-router) - Lightweight router for serverless environments
- [MobX-State-Tree](https://mobx-state-tree.js.org/) - State management solution
- [OpenAI SDK](https://github.com/openai/openai-node) - Client for AI model integration
- [Vitest](https://vitest.dev/) - Testing framework
- [OpenAI](https://github.com/openai)
- [Groq](https://console.groq.com/) - Fast inference API
- [Anthropic](https://www.anthropic.com/) - Creator of Claude models
- [Fireworks](https://fireworks.ai/) - AI inference platform
- [XAI](https://x.ai/) - Creator of Grok models
- [Cerebras](https://www.cerebras.net/) - AI compute and models
- [(madroidmaq) MLX Omni Server](https://github.com/madroidmaq/mlx-omni-server) - Open-source high-performance inference for Apple Silicon
- [MLX](https://github.com/ml-explore/mlx) - An array framework for Apple silicon
- [Ollama](https://github.com/ollama/ollama) - Versatile solution for self-hosting models
- [TypeScript](https://www.typescriptlang.org/) - Primary programming language
- [React](https://react.dev/) - UI library for building the frontend
- [Vike](https://vike.dev/) - Framework for server-side rendering and routing
- [Cloudflare Workers](https://developers.cloudflare.com/workers/) - Serverless execution environment
- [Bun](https://bun.sh/) - JavaScript runtime and toolkit
- [itty-router](https://github.com/kwhitley/itty-router) - Lightweight router for serverless environments
- [MobX-State-Tree](https://mobx-state-tree.js.org/) - State management solution
- [OpenAI SDK](https://github.com/openai/openai-node) - Client for AI model integration
- [Vitest](https://vitest.dev/) - Testing framework
- [OpenAI](https://github.com/openai)
- [Groq](https://console.groq.com/) - Fast inference API
- [Anthropic](https://www.anthropic.com/) - Creator of Claude models
- [Fireworks](https://fireworks.ai/) - AI inference platform
- [XAI](https://x.ai/) - Creator of Grok models
- [Cerebras](https://www.cerebras.net/) - AI compute and models
- [(madroidmaq) MLX Omni Server](https://github.com/madroidmaq/mlx-omni-server) - Open-source high-performance inference for Apple Silicon
- [MLX](https://github.com/ml-explore/mlx) - An array framework for Apple silicon
- [Ollama](https://github.com/ollama/ollama) - Versatile solution for self-hosting models
## License
~~~text
```text
MIT License
Copyright (c) 2025 Geoff Seemueller
@@ -148,4 +154,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
~~~
```

View File

@@ -38,5 +38,6 @@
},
"peerDependencies": {
"typescript": "^5"
}
},
"packageManager": "pnpm@10.10.0+sha512.d615db246fe70f25dcfea6d8d73dee782ce23e2245e3c4f6f888249fb568149318637dca73c2c5c8ef2a4ca0d5657fb9567188bfab47f566d1ee6ce987815c39"
}

View File

@@ -1,4 +1,4 @@
{
"name": "@open-gsio/ai",
"module": "index.ts"
}
}

View File

@@ -4,10 +4,6 @@
"outDir": "dist",
"rootDir": "."
},
"include": [
"*.ts"
],
"exclude": [
"node_modules"
]
}
"include": ["*.ts"],
"exclude": ["node_modules"]
}

View File

@@ -15,30 +15,29 @@
};
function s() {
var i = [
g(m(4)) + "=" + g(m(6)),
"ga=" + t.ga_tid,
"dt=" + r(e.title),
"de=" + r(e.characterSet || e.charset),
"dr=" + r(e.referrer),
"ul=" + (n.language || n.browserLanguage || n.userLanguage),
"sd=" + a.colorDepth + "-bit",
"sr=" + a.width + "x" + a.height,
"vp=" +
g(m(4)) + '=' + g(m(6)),
'ga=' + t.ga_tid,
'dt=' + r(e.title),
'de=' + r(e.characterSet || e.charset),
'dr=' + r(e.referrer),
'ul=' + (n.language || n.browserLanguage || n.userLanguage),
'sd=' + a.colorDepth + '-bit',
'sr=' + a.width + 'x' + a.height,
'vp=' +
o(e.documentElement.clientWidth, t.innerWidth || 0) +
"x" +
'x' +
o(e.documentElement.clientHeight, t.innerHeight || 0),
"plt=" + c(d.loadEventStart - d.navigationStart || 0),
"dns=" + c(d.domainLookupEnd - d.domainLookupStart || 0),
"pdt=" + c(d.responseEnd - d.responseStart || 0),
"rrt=" + c(d.redirectEnd - d.redirectStart || 0),
"tcp=" + c(d.connectEnd - d.connectStart || 0),
"srt=" + c(d.responseStart - d.requestStart || 0),
"dit=" + c(d.domInteractive - d.domLoading || 0),
"clt=" + c(d.domContentLoadedEventStart - d.navigationStart || 0),
"z=" + Date.now(),
'plt=' + c(d.loadEventStart - d.navigationStart || 0),
'dns=' + c(d.domainLookupEnd - d.domainLookupStart || 0),
'pdt=' + c(d.responseEnd - d.responseStart || 0),
'rrt=' + c(d.redirectEnd - d.redirectStart || 0),
'tcp=' + c(d.connectEnd - d.connectStart || 0),
'srt=' + c(d.responseStart - d.requestStart || 0),
'dit=' + c(d.domInteractive - d.domLoading || 0),
'clt=' + c(d.domContentLoadedEventStart - d.navigationStart || 0),
'z=' + Date.now(),
];
(t.__ga_img = new Image()), (t.__ga_img.src = t.ga_api + "?" + i.join("&"));
((t.__ga_img = new Image()), (t.__ga_img.src = t.ga_api + '?' + i.join('&')));
}
(t.cfga = s),
"complete" === e.readyState ? s() : t.addEventListener("load", s);
((t.cfga = s), 'complete' === e.readyState ? s() : t.addEventListener('load', s));
})(window, document, navigator);

View File

@@ -8,12 +8,6 @@
"baseUrl": "src",
"noEmit": true
},
"include": [
"src/**/*.ts",
"src/**/*.tsx"
],
"exclude": [
"node_modules",
"dist"
]
"include": ["src/**/*.ts", "src/**/*.tsx"],
"exclude": ["node_modules", "dist"]
}

View File

@@ -1,7 +1,9 @@
# open-gsio
[![Tests](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml/badge.svg)](https://github.com/geoffsee/open-gsio/actions/workflows/test.yml)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
<br/>
<p align="center">
<img src="https://github.com/user-attachments/assets/620d2517-e7be-4bb0-b2b7-3aa0cba37ef0" width="250" />
</p>
@@ -15,59 +17,63 @@
- [Installation](#installation)
- [Deployment](#deployment)
- [Local Inference](#local-inference)
- [mlx-omni-server (default)](#mlx-omni-server)
- [Adding models](#adding-models-for-local-inference-apple-silicon)
- [Ollama](#ollama)
- [Adding models](#adding-models-for-local-inference-ollama)
- [mlx-omni-server (default)](#mlx-omni-server)
- [Adding models](#adding-models-for-local-inference-apple-silicon)
- [Ollama](#ollama)
- [Adding models](#adding-models-for-local-inference-ollama)
- [Testing](#testing)
- [Troubleshooting](#troubleshooting)
- [History](#history)
- [License](#license)
## Stack
* [TypeScript](https://www.typescriptlang.org/)
* [Vike](https://vike.dev/)
* [React](https://react.dev/)
* [Cloudflare Workers](https://developers.cloudflare.com/workers/)
* [itty-router](https://github.com/kwhitley/itty-router)
* [MobX-State-Tree](https://mobx-state-tree.js.org/)
* [OpenAI SDK](https://github.com/openai/openai-node)
* [Vitest](https://vitest.dev/)
- [TypeScript](https://www.typescriptlang.org/)
- [Vike](https://vike.dev/)
- [React](https://react.dev/)
- [Cloudflare Workers](https://developers.cloudflare.com/workers/)
- [itty-router](https://github.com/kwhitley/itty-router)
- [MobX-State-Tree](https://mobx-state-tree.js.org/)
- [OpenAI SDK](https://github.com/openai/openai-node)
- [Vitest](https://vitest.dev/)
## Installation
1. `bun i && bun test`
1. [Add your own `GROQ_API_KEY` in .dev.vars](https://console.groq.com/keys)
1. [Add your own `GROQ_API_KEY` in .dev.vars](https://console.groq.com/keys)
1. In isolated shells, run `bun run server:dev` and `bun run client:dev`
> Note: it should be possible to use pnpm in place of bun.
> Note: it should be possible to use pnpm in place of bun.
## Deployment
1. Set up the KV_STORAGE bindings in `wrangler.jsonc`
1. [Add another `GROQ_API_KEY` in secrets.json](https://console.groq.com/keys)
1. Set up the KV_STORAGE bindings in `wrangler.jsonc`
1. [Add another `GROQ_API_KEY` in secrets.json](https://console.groq.com/keys)
1. Run `bun run deploy && bun run deploy:secrets && bun run deploy`
> Note: Subsequent deployments should omit `bun run deploy:secrets`
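Assuming `deploy:secrets` uploads a flat key/value JSON file (an assumption; check the package scripts for the exact mechanism), `secrets.json` would look something like this, with the placeholder replaced by your own key:

```json
{
  "GROQ_API_KEY": "<your-groq-api-key>"
}
```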
## Local Inference
> Local inference is achieved by overriding the `OPENAI_API_KEY` and `OPENAI_API_ENDPOINT` environment variables. See below.
### mlx-omni-server
(default) (Apple Silicon Only) - Use Ollama for other platforms.
~~~bash
```bash
# (prereq) install mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
brew tap seemueller-io/tap
brew install seemueller-io/tap/mlx-omni-server
bun run openai:local mlx-omni-server # Start mlx-omni-server
bun run openai:local:enable # Configure connection
bun run server:dev # Restart server
~~~
```
#### Adding models for local inference (Apple Silicon)
~~~bash
```bash
# ensure mlx-omni-server is running
# See https://huggingface.co/mlx-community for available models
@@ -79,22 +85,23 @@ curl http://localhost:10240/v1/chat/completions \
\"model\": \"$MODEL_TO_ADD\",
\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]
}"
~~~
```
### Ollama
~~~bash
```bash
bun run openai:local ollama # Start ollama server
bun run openai:local:enable # Configure connection
bun run server:dev # Restart server
~~~
```
#### Adding models for local inference (ollama)
~~~bash
```bash
# See https://ollama.com/library for available models
MODEL_TO_ADD=gemma3
MODEL_TO_ADD=gemma3
docker exec -it ollama ollama run ${MODEL_TO_ADD}
~~~
```
## Testing
@@ -102,20 +109,21 @@ Tests are located in `__tests__` directories next to the code they test. Testing
> `bun run test` will run all tests
## Troubleshooting
1. `bun run clean`
1. `bun i`
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue
1. `bun server:dev`
1. `bun client:dev`
1. Submit an issue
History
---
A high-level overview for the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](../../LEGACY.md).
## History
A high-level overview for the development history of the parent repository, [geoff-seemueller-io](https://geoff.seemueller.io), is provided in [LEGACY.md](../../LEGACY.md).
## License
~~~text
```text
MIT License
Copyright (c) 2025 Geoff Seemueller
@@ -137,5 +145,4 @@ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
~~~
```

View File

@@ -8,11 +8,6 @@
"outDir": "dist",
"rootDir": "."
},
"include": [
"*.ts"
],
"exclude": [
"node_modules",
"*.test.ts"
]
"include": ["*.ts"],
"exclude": ["node_modules", "*.test.ts"]
}

View File

@@ -4,7 +4,7 @@ interface Env {
EMAIL_SERVICE: any;
// Durable Objects
SERVER_COORDINATOR: import("packages/server/durable-objects/ServerCoordinator.ts");
SERVER_COORDINATOR: import('packages/server/durable-objects/ServerCoordinator.ts');
// Handles serving static assets
ASSETS: Fetcher;
@@ -12,7 +12,6 @@ interface Env {
// KV Bindings
KV_STORAGE: KVNamespace;
// Text/Secrets
METRICS_HOST: string;
OPENAI_API_ENDPOINT: string;
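As a hedged illustration of how these bindings are consumed (the route, KV key, and response shape below are invented for the example, not taken from the real server code), a Worker-style fetch handler receives this `Env` and reads from it directly:

```ts
// Illustrative sketch only.
export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // KV binding: returns the stored string, or null if the key is absent.
    const cached = await env.KV_STORAGE.get('example-key');

    // Plain-text binding: resolve a path against the configured endpoint.
    const upstream = new URL('/v1/models', env.OPENAI_API_ENDPOINT);

    return new Response(JSON.stringify({ cached, upstream: upstream.toString() }), {
      headers: { 'content-type': 'application/json' },
    });
  },
};
```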

View File

@@ -1,4 +1,4 @@
{
"name": "@open-gsio/env",
"module": "env.d.ts"
}
}

View File

@@ -4,11 +4,6 @@
"outDir": "dist",
"rootDir": "."
},
"include": [
"*.ts",
"*.d.ts"
],
"exclude": [
"node_modules"
]
}
"include": ["*.ts", "*.d.ts"],
"exclude": ["node_modules"]
}

View File

@@ -9,7 +9,7 @@ bun install
To run:
```bash
bun run
bun run
```
This project was created using `bun init` in bun v1.2.8. [Bun](https://bun.sh) is a fast all-in-one JavaScript runtime.

View File

@@ -6,11 +6,6 @@
"allowJs": true,
"noEmit": false
},
"include": [
"*.js",
"*.ts"
],
"exclude": [
"node_modules"
]
"include": ["*.js", "*.ts"],
"exclude": ["node_modules"]
}

View File

@@ -6,15 +6,15 @@ This directory contains the server component of open-gsio, a full-stack Conversa
- `__tests__/`: Contains test files for the server components
- `services/`: Contains service modules for different functionalities
- `AssetService.ts`: Handles static assets and SSR
- `ChatService.ts`: Manages chat interactions with AI models
- `ContactService.ts`: Processes contact form submissions
- `FeedbackService.ts`: Handles user feedback
- `MetricsService.ts`: Collects and processes metrics
- `TransactionService.ts`: Manages transactions
- `AssetService.ts`: Handles static assets and SSR
- `ChatService.ts`: Manages chat interactions with AI models
- `ContactService.ts`: Processes contact form submissions
- `FeedbackService.ts`: Handles user feedback
- `MetricsService.ts`: Collects and processes metrics
- `TransactionService.ts`: Manages transactions
- `durable_objects/`: Contains durable object implementations
- `ServerCoordinator.ts`: Cloudflare Implementation
- `ServerCoordinatorBun.ts`: Bun Implementation
- `ServerCoordinator.ts`: Cloudflare Implementation
- `ServerCoordinatorBun.ts`: Bun Implementation
- `api-router.ts`: API Router
- `RequestContext.ts`: Application Context
- `server.ts`: Main server entry point
- `server.ts`: Main server entry point
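A hedged sketch of how `api-router.ts` might wire one of these services into itty-router follows; the route path, handler signature, and `ChatService` method are assumptions for illustration, not the project's actual API:

```ts
import { Router } from 'itty-router';

// Hypothetical import; the real ChatService interface may differ.
import { ChatService } from './services/ChatService';

const router = Router();

// Assumed route: POST /api/chat delegates the parsed body to the chat service,
// which is expected to return a Response.
router.post('/api/chat', async (request, env) => {
  const body = await request.json();
  return ChatService.handleChat(body, env); // hypothetical method name
});

// Depending on the itty-router version, the matcher is exposed as `router.fetch` (v5) or `router.handle` (v4).
export default router;
```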

View File

@@ -10,12 +10,6 @@
"allowJs": true,
"jsx": "react-jsx"
},
"include": [
"**/*.ts",
"**/*.tsx"
],
"exclude": [
"node_modules",
"dist"
]
"include": ["**/*.ts", "**/*.tsx"],
"exclude": ["node_modules", "dist"]
}

View File

@@ -1,5 +1,5 @@
declare global {
type ExecutionContext = any
type Env = import("@open-gsio/env")
type ExecutionContext = any;
type Env = import('@open-gsio/env');
}
export type ExecutionContext = any
export type ExecutionContext = any;