From 9a84675ffa5054b7b9dbe3996b3fbf2d38db627b Mon Sep 17 00:00:00 2001
From: Geoff Seemueller <28698553+geoffsee@users.noreply.github.com>
Date: Thu, 5 Jun 2025 21:46:00 -0400
Subject: [PATCH] Update README.md

Signed-off-by: Geoff Seemueller <28698553+geoffsee@users.noreply.github.com>
---
 README.md | 27 ++++++---------------------
 1 file changed, 6 insertions(+), 21 deletions(-)

diff --git a/README.md b/README.md
index 836ce8e..4f03d79 100644
--- a/README.md
+++ b/README.md
@@ -1,27 +1,12 @@
 # open-web-agent-rs
 
-A Rust-based web agent with local inference capabilities.
+A Rust-based web agent with an embedded OpenAI-compatible inference server (supports Gemma models only).
 
-## Components
-
-### Local Inference Engine
-
-The [Local Inference Engine](./local_inference_engine/README.md) provides a way to run large language models locally. It supports both CLI mode for direct text generation and server mode with an OpenAI-compatible API.
-
-Features:
-- Run Gemma models locally (1B, 2B, 7B, 9B variants)
-- CLI mode for direct text generation
-- Server mode with OpenAI-compatible API
-- Support for various model configurations (base, instruction-tuned)
-- Metal acceleration on macOS
-
-See the [Local Inference Engine README](./local_inference_engine/README.md) for detailed usage instructions.
-
-### Web Server
-
-Server is being converted to MCP. Things are probably broken.
-
-```text
+## Quickstart
+```bash
+cp .env.example .env
 bun i
+(cd local_inference_server && cargo run --release -- --server)
+docker compose up -d searxng
 bun dev
 ```
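
Once the Quickstart above is running, a quick way to confirm the embedded server is actually exposing its OpenAI-compatible API is a curl smoke test. This is a sketch only: the port (8080), the `/v1/chat/completions` path, and the model id (`gemma-2b-it`) are assumptions not stated in this patch — check `.env` and the `local_inference_server` directory for the values the repo actually uses.

```shell
# Smoke-test the embedded OpenAI-compatible inference server.
# NOTE: port 8080, the /v1/chat/completions path, and the model id below
# are illustrative assumptions -- check .env / local_inference_server
# for the values this repo actually uses.
PAYLOAD='{"model": "gemma-2b-it", "messages": [{"role": "user", "content": "Say hello"}]}'
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" || echo "server not reachable (is it running?)"
```

A successful response should be a JSON chat-completion object; a connection error usually means the `cargo run --release -- --server` step has not finished starting up.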