Commit Graph

31 Commits

Author SHA1 Message Date
geoffsee
ff55d882c7 reorg + update docs with new paths 2025-09-04 12:40:59 -04:00
geoffsee
400c70f17d streaming implementation re-added to UI 2025-09-02 14:45:16 -04:00
geoffsee
2deecb5e51 chat client only displays available models 2025-09-01 22:29:54 -04:00
geoffsee
d1a7d5b28e fix format error 2025-08-31 19:59:09 -04:00
geoffsee
8d2b85b0b9 update docs 2025-08-31 19:27:15 -04:00
geoffsee
64daa77c6b leptos chat ui renders 2025-08-31 18:50:25 -04:00
geoffsee
2b4a8a9df8 chat-ui not functional yet but builds 2025-08-31 18:18:56 -04:00
geoffsee
38d51722f2 Update configuration loading with Cargo.toml path and clean up .gitignore

- Build an absolute path to the `Cargo.toml` file for clearer configuration loading.
- Use `PathBuf` for improved type safety.
- Remove unnecessary entries from `.gitignore` to keep the project structure clean.
2025-08-31 14:06:44 -04:00
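As a hedged illustration of the path construction described in the commit above, the sketch below builds an absolute `Cargo.toml` path with `PathBuf`. The use of `CARGO_MANIFEST_DIR` and the resulting layout are assumptions for illustration, not confirmed details of the crate.

```rust
use std::path::PathBuf;

// Minimal sketch, assuming the path is anchored on CARGO_MANIFEST_DIR
// (set by Cargo at compile time); the real configuration loader may differ.
fn cargo_toml_path() -> PathBuf {
    PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("Cargo.toml")
}

fn main() {
    // Print where configuration would be read from.
    println!("loading configuration from {}", cargo_toml_path().display());
}
```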
geoffsee
7bc9479a11 fix format issues, needs precommit hook 2025-08-31 13:24:51 -04:00
geoffsee
0580dc8c5e move cli into crates and stage for release 2025-08-31 13:23:50 -04:00
geoffsee
9e9aa69769 bump version in Cargo.toml 2025-08-31 11:04:31 -04:00
geoffsee
eb1591aa5d fix fmt error 2025-08-31 10:52:48 -04:00
geoffsee
e6c417bd83 align dependencies across inference features 2025-08-31 10:49:04 -04:00
geoffsee
f5d2a85f2e cleanup, add ci 2025-08-31 10:31:20 -04:00
geoffsee
315ef17605 supports small Llama and Gemma models

Refactor inference: dedicated crates for Llama and Gemma inference, not yet integrated.
2025-08-29 20:00:41 -04:00
geoffsee
d06b16bb12 remove confusing comments 2025-08-28 16:09:29 -04:00
geoffsee
d04340d9ac update docs 2025-08-28 12:54:09 -04:00
geoffsee
e38a2d4512 predict-otron-9000 serves a leptos SSR frontend 2025-08-28 12:06:22 -04:00
geoffsee
45d7cd8819 - Introduced ServerConfig for handling deployment modes and services.
- Added HighAvailability mode for proxying requests to external services.
- Maintained Local mode for embedded services.
- Updated `README.md` and included `SERVER_CONFIG.md` for detailed documentation.
2025-08-28 09:55:39 -04:00
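The commit above names two deployment modes. A minimal sketch of how such a configuration could be modeled is shown below; the type shapes, field names, and default values are illustrative assumptions, not the actual contents of `SERVER_CONFIG.md`.

```rust
// Illustrative sketch only; the repository's real ServerConfig may differ.
#[derive(Debug, Clone)]
enum DeploymentMode {
    // Run the embeddings and inference services in-process.
    Local,
    // Proxy requests to external service endpoints.
    HighAvailability {
        embeddings_url: String,
        inference_url: String,
    },
}

#[derive(Debug, Clone)]
struct ServerConfig {
    host: String,
    port: u16,
    mode: DeploymentMode,
}

impl ServerConfig {
    // Hypothetical default: embedded services bound to the loopback interface.
    fn default_local() -> Self {
        Self {
            host: "127.0.0.1".to_string(),
            port: 8080,
            mode: DeploymentMode::Local,
        }
    }
}

fn main() {
    println!("{:?}", ServerConfig::default_local());
}
```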
geoffsee
c8b3561e36 Remove ROOT_CAUSE_ANALYSIS.md and outdated server logs 2025-08-28 08:26:18 -04:00
geoffsee
b606adbe5d Add Docker Compose and Kubernetes metadata to Cargo.toml files 2025-08-28 07:56:34 -04:00
geoffsee
9d6cb62b10 Add Dockerfile for Leptos Chat deployment 2025-08-28 07:54:57 -04:00
geoffsee
956d00f596 Add CLEANUP.md with identified documentation and code issues. Update README files to fix repository URL, unify descriptions, and clarify Gemma model usage. 2025-08-28 07:24:14 -04:00
geoffsee
719beb3791 - Change default server host to localhost for improved security.
- Increase default maximum tokens in CLI configuration to 256.
- Refactor and reorganize CLI
2025-08-27 21:47:31 -04:00
geoffsee
766d41af78 - Refactored build_pipeline usage to ensure pipeline arguments are cloned.
- Introduced `reset_state` for clearing cached state between requests.
- Enhanced chat UI with model selector and dynamic model fetching.
- Improved error logging and detailed debug messages for chat request flows.
- Added fresh instantiation of `TextGeneration` to prevent tensor shape mismatches.
2025-08-27 17:53:50 -04:00
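The `reset_state` and fresh-instantiation points above can be pictured with the hedged sketch below; the `TextGeneration` stand-in and its fields are placeholders for illustration and do not reflect the crate's real type.

```rust
// Placeholder stand-in for the crate's text-generation state; the real
// TextGeneration type carries a model, tokenizer, sampler, and so on.
struct TextGeneration {
    // Cached tokens or key/value state accumulated during a request.
    cached_tokens: Vec<u32>,
}

impl TextGeneration {
    fn new() -> Self {
        Self { cached_tokens: Vec::new() }
    }

    // Clear state carried over from a previous request so a new prompt starts
    // clean (the role the commit assigns to reset_state).
    fn reset_state(&mut self) {
        self.cached_tokens.clear();
    }
}

fn main() {
    // The commit also mentions constructing a fresh instance per request to
    // avoid tensor shape mismatches; that is the stricter form of this idea.
    let mut generator = TextGeneration::new();
    generator.reset_state();
    println!("cached tokens after reset: {}", generator.cached_tokens.len());
}
```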
geoffsee
9e28e259ad Add support for listing available models via CLI and HTTP endpoint 2025-08-27 16:35:08 -04:00
geoffsee
432c04d9df Removed legacy inference engine assets. 2025-08-27 16:19:31 -04:00
geoffsee
8338750beb Refactor apply_cached_repeat_penalty for optimized caching and reuse, add extensive unit tests, and integrate special handling for Gemma-specific models.
Remove `test_request.sh`, deprecated functionality, and unused imports; introduce a new CLI tool (`cli.ts`) for testing the inference engine and adjust handling of non-streaming/streaming chat completions.

- Add CPU fallback support for text generation when primary device is unsupported
- Introduce `execute_with_fallback` method to handle device compatibility and shape mismatch errors
- Extend unit tests to reproduce tensor shape mismatch errors specific to model configurations
- Increase HTTP timeout limits in `curl_chat_stream.sh` script for reliable API testing

chat completion endpoint functions with gemma3 (no streaming)

Add benchmarking guide with HTML reporting, Leptos chat crate, and middleware for metrics tracking
2025-08-27 16:15:01 -04:00
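The device-fallback behavior described in this commit can be sketched as follows; the `Device` enum, the error-text matching, and the `execute_with_fallback` signature here are simplifications assumed for illustration, not the inference engine's actual API.

```rust
// Placeholder device handle; the real crate presumably distinguishes its ML
// framework's accelerator device (CUDA/Metal) from the CPU.
#[derive(Clone, Copy, Debug)]
enum Device {
    Primary,
    Cpu,
}

// Run `op` on the primary device and retry on CPU when the failure looks like
// an unsupported-device or tensor-shape problem. Matching on error text is a
// simplification for this sketch.
fn execute_with_fallback<T>(
    op: impl Fn(Device) -> Result<T, String>,
) -> Result<T, String> {
    match op(Device::Primary) {
        Ok(out) => Ok(out),
        Err(e) if e.contains("unsupported") || e.contains("shape mismatch") => {
            eprintln!("primary device failed ({e}); retrying on CPU");
            op(Device::Cpu)
        }
        Err(e) => Err(e),
    }
}

fn main() {
    // Hypothetical usage: the closure would normally run text generation.
    let result = execute_with_fallback(|device| match device {
        Device::Primary => Err("unsupported op on primary device".to_string()),
        Device::Cpu => Ok("generated text".to_string()),
    });
    println!("{result:?}");
}
```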
geoffsee
b8ba994783 Integrate create_inference_router from inference-engine into predict-otron-9000, simplify server routing, and update dependencies to unify versions. 2025-08-16 19:53:33 -04:00
Geoff Seemueller
411ad78026 Remove stale reference in documentation. 2025-08-16 19:29:11 -04:00
geoffsee
2aa6d4cdf8 Introduce predict-otron-9000: Unified server combining embeddings and inference engines. Includes OpenAI-compatible APIs, full documentation, and example scripts. 2025-08-16 19:11:35 -04:00
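Since the initial commit advertises OpenAI-compatible APIs, a hedged client-side sketch is included below. It assumes the standard `/v1/chat/completions` path, a local port of 8080, and a placeholder model id; none of these specifics are confirmed by the log, and the `reqwest`/`serde_json` dependencies are assumptions of this example rather than project dependencies.

```rust
// Assumed dependencies for this sketch:
//   reqwest = { version = "0.12", features = ["blocking", "json"] }
//   serde_json = "1"
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Base URL and model id are placeholders; real values come from the
    // server's configuration and its /v1/models listing.
    let base = "http://localhost:8080";
    let body = json!({
        "model": "placeholder-model-id",
        "messages": [{ "role": "user", "content": "Hello!" }]
    });

    let resp = reqwest::blocking::Client::new()
        .post(format!("{base}/v1/chat/completions"))
        .json(&body)
        .send()?;

    println!("status: {}", resp.status());
    println!("{}", resp.text()?);
    Ok(())
}
```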