- Replaced the single Docker command for Ollama with a `docker-compose` setup (see the sketch after this list).
- Updated `start_inference_server.sh` to use `ollama-compose.yml`.
- Updated README with new usage instructions for Ollama web UI access.
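A minimal sketch of what the `ollama-compose.yml` might contain, assuming the stock `ollama/ollama` image and the Open WebUI image for browser access; the service names, ports, and volume here are illustrative, not necessarily the repo's actual file:

```yaml
# Hypothetical ollama-compose.yml; all values are illustrative
services:
  ollama:
    image: ollama/ollama            # official Ollama image
    ports:
      - "11434:11434"               # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models across restarts
  webui:
    image: ghcr.io/open-webui/open-webui:main  # assumed web UI; the repo may use another
    ports:
      - "3000:8080"                 # UI reachable at http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama-data:
```

Brought up with `docker compose -f ollama-compose.yml up -d`.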
Update README deployment steps and add a `deploy:secrets` script to `package.json`
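A sketch of what the `deploy:secrets` script could look like, assuming it pushes the `GROQ_API_KEY` referenced later in this log via Wrangler; the exact command body is a guess:

```json
{
  "scripts": {
    "deploy:secrets": "wrangler secret put GROQ_API_KEY"
  }
}
```

`wrangler secret put` prompts for the value interactively, which keeps the key out of the repo and out of shell history.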
update local inference script and README
update lockfile
reconfigure package scripts for development
update test execution
make server tests pass
Update README with revised Bun commands and workspace details
remove the pnpm `packageManager` designation
create bun server
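For context, a Bun HTTP server boils down to a single `Bun.serve()` call; this sketch is illustrative, and the route and port are placeholders, not the project's actual server:

```ts
// Minimal Bun server sketch (hypothetical route and port)
const server = Bun.serve({
  port: 3000, // placeholder; the real server may read this from the environment
  fetch(req: Request): Response {
    const url = new URL(req.url);
    if (url.pathname === "/health") {
      return new Response("ok");
    }
    return new Response("not found", { status: 404 });
  },
});

console.log(`listening on http://localhost:${server.port}`);
```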
- Introduced `configure_local_inference.sh` to automatically set `.dev.vars` based on the active local inference services (see the sketch after this list).
- Updated `start_inference_server.sh` to handle both the Ollama and mlx-omni-server backends.
- Enhanced `package.json` to include new commands for starting and configuring inference servers.
- Refined README to include updated instructions for running and adding models for local inference.
- Minor cleanup in `MessageBubble.tsx`.
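A sketch of the detection logic `configure_local_inference.sh` might use, probing each service's default port and writing `.dev.vars` accordingly; the endpoint paths, the mlx-omni-server port, and the variable name are all assumptions:

```sh
#!/usr/bin/env bash
# Hypothetical detection logic; ports, paths, and variable names are assumptions
set -euo pipefail

if curl -sf http://localhost:11434/api/tags > /dev/null; then
  # Ollama answers on its default port 11434
  echo 'OPENAI_API_ENDPOINT=http://localhost:11434/v1' > .dev.vars
elif curl -sf http://localhost:10240/v1/models > /dev/null; then
  # assumed mlx-omni-server port
  echo 'OPENAI_API_ENDPOINT=http://localhost:10240/v1' > .dev.vars
else
  echo "no local inference server detected" >&2
  exit 1
fi
```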
- Introduce `supportedModels` in `ClientChatStore` and update the model validation logic (see the sketch after this list)
- Enhance OpenAI inference with local-setup adaptations and improved streaming options
- Modify ChatService to handle local and remote model fetching
- Update input menu to dynamically fetch and display supported models
- Add start_inference_server.sh for initiating local inference server
- Upgrade OpenAI SDK to v5.0.1 and adjust dependencies accordingly
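A sketch of how `supportedModels` might gate model selection in `ClientChatStore`; the store shape and the fallback rule are hypothetical:

```ts
// Hypothetical store shape; only the names supportedModels/model come from the log
interface ClientChatStore {
  supportedModels: string[]; // fetched from the local or remote model list
  model: string;             // the user's current selection
}

// Fall back to the first supported model when the saved selection is stale,
// e.g. after switching between local and remote inference.
function validateModel(store: ClientChatStore): string {
  return store.supportedModels.includes(store.model)
    ? store.model
    : store.supportedModels[0] ?? "";
}
```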
- Revise deployment steps and docs for `GROQ_API_KEY`
- Enable `workers_dev` in `wrangler.jsonc` (see the excerpt after this list)
- Adjust hero label to `open-gsio` in routes
- Update `.gitignore` to include sensitive config files
- Add `deploy:secrets` script in `package.json`
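The `workers_dev` flag is a one-line change; an excerpt of what the `wrangler.jsonc` entry looks like (the flag itself is real Wrangler config, the rest of the file is not shown):

```jsonc
// wrangler.jsonc (excerpt): serve the worker on its *.workers.dev preview URL
{
  "workers_dev": true
}
```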
Introduce a new `session-proxy` worker with its configuration file. Update deployment scripts to include `deploy:session-proxy` and add a `deploy:all` script for streamlined deployment of all workers. Expand README with deployment instructions and usage of `pnpm` as an alternative to `bun`.
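A sketch of how the deploy scripts might chain in `package.json`; the config path and script bodies are assumptions:

```json
{
  "scripts": {
    "deploy": "wrangler deploy",
    "deploy:session-proxy": "wrangler deploy --config workers/session-proxy/wrangler.jsonc",
    "deploy:all": "bun run deploy && bun run deploy:session-proxy"
  }
}
```

Chaining with `&&` keeps `deploy:all` fail-fast: if the main worker's deploy fails, the session proxy is not deployed.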