<p align="center">
<img src="logo.png" alt="Atmosphere" width="120"/>
</p>
# Atmosphere
The missing transport layer between your LLM and your browser. Spring AI gives you `Flux<ChatResponse>`. LangChain4j gives you `StreamingChatResponseHandler`. Neither delivers tokens to the user. Atmosphere does — over WebSocket with SSE/Long-Polling fallback, reconnection, rooms, presence, and Kafka/Redis clustering. Add one dependency to your Spring Boot or Quarkus app.
## AI/LLM Token Streaming
Frameworks like Spring AI, LangChain4j, and Embabel handle **LLM ↔ server** communication. Atmosphere handles the other half: **server ↔ browser**. Built on 18 years of WebSocket experience, rewritten for JDK 21 virtual threads. It streams tokens to the client in real time over WebSocket (with SSE/Long-Polling fallback), manages reconnection and backpressure, and provides React/Vue/Svelte hooks — so you don't have to build all of that yourself.
### What you get
- **`@AiEndpoint` + `@Prompt`** — annotate a class, receive prompts, stream tokens. Runs on virtual threads.
- **Built-in LLM client** — a zero-dependency `OpenAiCompatibleClient` that talks to OpenAI, Gemini, Ollama, or any OpenAI-compatible API. No Spring AI or LangChain4j required.
- **Adapter SPI** — plug in Spring AI (`Flux<ChatResponse>`), LangChain4j (`StreamingChatResponseHandler`), or Embabel (`OutputChannel`). Your framework generates tokens; Atmosphere delivers them.
- **Standardized wire protocol** — every token is a JSON frame with `type`, `data`, `sessionId`, and `seq` for ordering. Progress events, metadata (model, token usage), and error frames are built in.
- **AI as a room participant** — `LlmRoomMember` joins a `Room` like any user. When someone sends a message, the LLM receives it, streams a response, and broadcasts it back. Humans and AI in the same room.
- **Client hooks** — `useStreaming()` for React/Vue/Svelte gives you `fullText`, `isStreaming`, `progress`, `metadata`, and `error` out of the box. No custom WebSocket code.
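To make the wire protocol concrete, here is a self-contained sketch of what a client does with it: the frame fields (`type`, `data`, `sessionId`, `seq`) come from the protocol description above, while the `assemble` helper is purely illustrative (the real client hooks do this for you).

```typescript
// Frame shape per the protocol description: type, data, sessionId, seq.
type TokenFrame = { type: string; data: string; sessionId: string; seq: number };

// Illustrative helper: keep only token frames and restore ordering by seq,
// which matters after a reconnect when frames may arrive out of order.
function assemble(frames: TokenFrame[]): string {
  return frames
    .filter((f) => f.type === "token")
    .sort((a, b) => a.seq - b.seq)
    .map((f) => f.data)
    .join("");
}

const frames: TokenFrame[] = [
  { type: "token", data: "Hel", sessionId: "s1", seq: 0 },
  { type: "token", data: "lo", sessionId: "s1", seq: 1 },
  { type: "metadata", data: "{\"model\":\"gemini-2.5-flash\"}", sessionId: "s1", seq: 2 },
];

console.log(assemble(frames)); // "Hello"
```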
### Server — 5 lines with the built-in client
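A sketch of what the annotated endpoint might look like. The names `@AiEndpoint`, `@Prompt`, and `OpenAiCompatibleClient` come from the feature list above; the endpoint path, the `TokenStream` parameter, and the `fromEnvironment()` factory are illustrative assumptions, not confirmed API.

```java
// Hypothetical sketch — method signatures are assumptions.
@AiEndpoint("/ai/chat")
public class ChatEndpoint {

    // The built-in client reads LLM_MODE / LLM_MODEL / LLM_API_KEY
    // from the environment (see the table below).
    private final OpenAiCompatibleClient client = OpenAiCompatibleClient.fromEnvironment();

    @Prompt
    public void onPrompt(String prompt, TokenStream out) {
        client.stream(prompt, out::send); // each token is pushed to the browser as it arrives
    }
}
```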
Configure with environment variables — no code changes to switch providers:
| Variable | Description | Default |
|----------|-------------|---------|
| LLM_MODE | remote (cloud) or local (Ollama) | remote |
| LLM_MODEL | gemini-2.5-flash, gpt-5, o3-mini, llama3.2, … | gemini-2.5-flash |
| LLM_API_KEY | API key (or GEMINI_API_KEY for Gemini) | — |
| LLM_BASE_URL | Override endpoint (auto-detected from model name) | auto |
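For example, to point the built-in client at a local Ollama model (the values are illustrative; `http://localhost:11434/v1` is Ollama's default OpenAI-compatible endpoint):

```shell
export LLM_MODE=local
export LLM_MODEL=llama3.2
# LLM_BASE_URL is auto-detected from the model name, but can be overridden:
export LLM_BASE_URL=http://localhost:11434/v1
```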
### Server — with Spring AI, LangChain4j, or Embabel
Atmosphere doesn't replace your AI framework. It gives it a transport:
<details>
<summary>Spring AI adapter</summary>
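
A sketch of the adapter idea. The Spring AI side (`ChatClient` producing `Flux<ChatResponse>`) is standard Spring AI API; the `stream.send(...)` target is an illustrative stand-in for whatever handle the Atmosphere adapter exposes.

```java
// Spring AI generates the tokens; Atmosphere delivers them.
Flux<ChatResponse> responses = chatClient.prompt()
        .user(prompt)
        .stream()
        .chatResponse();

responses
        .map(r -> r.getResult().getOutput().getText())
        .subscribe(stream::send); // push each token over WebSocket/SSE
```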
</details>
<details>
<summary>LangChain4j adapter</summary>
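
A sketch of the same idea for LangChain4j. `StreamingChatResponseHandler` and its callbacks are LangChain4j API; the `stream` handle and its `send`/`complete`/`error` methods are illustrative stand-ins for the Atmosphere side.

```java
// LangChain4j generates the tokens; Atmosphere delivers them.
model.chat(prompt, new StreamingChatResponseHandler() {
    @Override public void onPartialResponse(String token)          { stream.send(token); }
    @Override public void onCompleteResponse(ChatResponse response) { stream.complete(); }
    @Override public void onError(Throwable t)                      { stream.error(t); }
});
```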
</details>
<details>
<summary>Embabel adapter</summary>
</details>
### Browser — React
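A sketch of the React hook in use. The hook name `useStreaming()` and the fields it returns (`fullText`, `isStreaming`, `progress`, `error`) come from the feature list above; the import path, the option shape, and the `send` function are assumptions.

```tsx
import { useStreaming } from "atmosphere.js/react"; // import path is an assumption

function Chat() {
  const { fullText, isStreaming, progress, error, send } =
    useStreaming({ url: "/atmosphere/ai/chat" }); // illustrative option shape

  return (
    <div>
      <pre>{fullText}</pre>
      {isStreaming && <span>… {progress}%</span>}
      {error && <span role="alert">{String(error)}</span>}
      <button onClick={() => send("Explain WebSockets")}>Ask</button>
    </div>
  );
}
```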
### AI in rooms — virtual members
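A sketch of the idea. `LlmRoomMember` and `Room` are named in the feature list above; the `rooms.getOrCreate(...)` and `join(...)` calls are illustrative assumptions about the API shape.

```java
// Hypothetical sketch — method names are assumptions.
Room room = rooms.getOrCreate("support");
room.join(new LlmRoomMember("assistant", client)); // the AI joins like any user

// From here on, every message posted to the room reaches the LLM,
// and its streamed reply is broadcast back to all members.
```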
See the AI / LLM Streaming wiki for the full guide.
## Installation
### Maven
For Spring Boot:
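A likely dependency block. The artifactId comes from the Modules table below; the groupId (`org.atmosphere`) and the version placeholder are assumptions, so check Maven Central for the actual coordinates.

```xml
<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-spring-boot-starter</artifactId>
    <version>${atmosphere.version}</version>
</dependency>
```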
For Quarkus:
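Likewise for Quarkus (artifactId from the Modules table below; groupId and version placeholder are assumptions):

```xml
<dependency>
    <groupId>org.atmosphere</groupId>
    <artifactId>atmosphere-quarkus-extension</artifactId>
    <version>${atmosphere.version}</version>
</dependency>
```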
### Gradle
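The Gradle equivalent, under the same assumed coordinates:

```gradle
implementation("org.atmosphere:atmosphere-spring-boot-starter:${atmosphereVersion}")
```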
### npm (TypeScript/JavaScript client)
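The package name below is taken from the Modules table; verify it on npm before installing:

```shell
npm install atmosphere.js
```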
## Modules
| Module | Artifact | Description |
|--------|----------|-------------|
| Core runtime | atmosphere-runtime | WebSocket, SSE, Long-Polling transport layer (Servlet 6.0+) |
| Spring Boot starter | atmosphere-spring-boot-starter | Auto-configuration for Spring Boot 4.0.2+ |
| Quarkus extension | atmosphere-quarkus-extension | Build-time processing for Quarkus 3.21+ |
| AI streaming | atmosphere-ai | Token-by-token LLM response streaming |
| Spring AI adapter | atmosphere-spring-ai | Spring AI ChatClient integration |
| LangChain4j adapter | atmosphere-langchain4j | LangChain4j streaming integration |
| MCP server | atmosphere-mcp | Model Context Protocol server over WebSocket |
| Rooms | built into atmosphere-runtime | Room management with join/leave and presence |
| Redis clustering | atmosphere-redis | Cross-node broadcasting via Redis pub/sub |
| Kafka clustering | atmosphere-kafka | Cross-node broadcasting via Kafka |
| Durable sessions | atmosphere-durable-sessions | Session persistence across restarts (SQLite / Redis) |
| Kotlin DSL | atmosphere-kotlin | Builder API and coroutine extensions |
| TypeScript client | atmosphere.js (npm) | Browser client with React, Vue, and Svelte bindings |
## Rooms & Presence
Server-side room management with presence tracking:
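A sketch of what that might look like. The join/leave and presence concepts come from the Modules table above; every method name here is an illustrative assumption.

```java
// Hypothetical sketch — method names are assumptions.
Room room = rooms.getOrCreate("lobby");

room.onJoin(member -> room.broadcast(member.id() + " joined"));
room.onLeave(member -> room.broadcast(member.id() + " left"));

// Presence: who is currently in the room?
Set<String> online = room.presence();
```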
## Framework Integration
### Spring Boot
The starter provides auto-configuration for Spring Boot 4.0.2+.
<details>
<summary>Configuration properties</summary>
| Property | Default | Description |
|----------|---------|-------------|
| servlet-path | /atmosphere/* | Servlet URL mapping |
| packages | | Annotation scanning packages |
| order | 0 | Servlet load-on-startup order |
| session-support | false | Enable HttpSession support |
| websocket-support | | Enable/disable WebSocket |
| heartbeat-interval-in-seconds | | Server heartbeat frequency |
| broadcaster-class | | Custom Broadcaster FQCN |
| broadcaster-cache-class | | Custom BroadcasterCache FQCN |
| init-params | | Map of any ApplicationConfig key/value |
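
For example, in `application.properties`, assuming the starter binds these under an `atmosphere.` prefix (the prefix itself is an assumption; the property names are from the table above):

```properties
atmosphere.servlet-path=/chat/*
atmosphere.packages=com.example.endpoints
atmosphere.session-support=true
atmosphere.heartbeat-interval-in-seconds=30
```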
</details>
<details>
<summary>GraalVM native image</summary>
The starter includes AOT runtime hints. Activate the native Maven profile:
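
Typically, assuming the standard Spring Boot native build setup:

```shell
./mvnw -Pnative native:compile
```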
Requires GraalVM JDK 25+ (Spring Boot 4.0 / Spring Framework 7 baseline).
</details>
### Quarkus
The extension provides build-time annotation scanning for Quarkus 3.21+.
<details>
<summary>Configuration properties</summary>
| Property | Default | Description |
|----------|---------|-------------|
| quarkus.atmosphere.servlet-path | /atmosphere/* | Servlet URL mapping |
| quarkus.atmosphere.packages | | Annotation scanning packages |
| quarkus.atmosphere.load-on-startup | 1 | Servlet load-on-startup order |
| quarkus.atmosphere.session-support | false | Enable HttpSession support |
| quarkus.atmosphere.broadcaster-class | | Custom Broadcaster FQCN |
| quarkus.atmosphere.broadcaster-cache-class | | Custom BroadcasterCache FQCN |
| quarkus.atmosphere.heartbeat-interval-in-seconds | | Server heartbeat frequency |
| quarkus.atmosphere.init-params | | Map of any Applicatio…