Quick Start¶
Environment Requirements¶
- Go Version: Go 1.18 or higher is recommended. If you need the latest pipeline generic workflow features, make sure your environment has Go 1.24+ installed.
- Network Environment: If you use overseas models (such as OpenAI or Anthropic), make sure your network can successfully reach the relevant API endpoints.
Installation Steps¶
In your Go project root directory, run the following command to get GoChat:
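The command itself was not included here; assuming the module root matches the import paths used in the examples below, it would be:

```shell
go get github.com/DotNetAge/gochat
```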
If necessary, clean up and verify dependencies:
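For example, with the standard Go toolchain:

```shell
go mod tidy
```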
Basic Usage Examples¶
The following is a runnable example that demonstrates a basic chat using an OpenAI-compatible client.
1. Basic Chat¶
Create a main.go file with the following code. Before running, make sure you have set the relevant environment variable (such as DASHSCOPE_API_KEY, or the API key for your vendor):
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/DotNetAge/gochat/pkg/client/base"
	"github.com/DotNetAge/gochat/pkg/client/openai"
	"github.com/DotNetAge/gochat/pkg/core"
)

func main() {
	// 1. Get your API key
	apiKey := os.Getenv("DASHSCOPE_API_KEY")
	if apiKey == "" {
		log.Fatal("Please set the DASHSCOPE_API_KEY environment variable first")
	}

	// 2. Initialize the config and create a client
	config := openai.Config{
		Config: base.Config{
			APIKey: apiKey,
			Model:  "qwen-plus", // Replace with the specific model you want to use
			// BaseURL: "https://dashscope.aliyuncs.com/compatible-mode/v1", // Built-in default; can also be customized
		},
	}
	client, err := openai.New(config)
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	// 3. Build the message body
	messages := []core.Message{
		core.NewSystemMessage("You are a professional senior Go language development engineer."),
		core.NewUserMessage("Please explain the core role of Context in Go language."),
	}

	// 4. Make the request, configuring extra parameters with the Functional Options pattern
	ctx := context.Background()
	resp, err := client.Chat(ctx, messages,
		core.WithTemperature(0.7),
		core.WithMaxTokens(1000),
	)
	if err != nil {
		log.Fatalf("Chat request failed: %v", err)
	}

	// 5. Print the result and token consumption
	fmt.Println("=== AI Response ===")
	fmt.Println(resp.Content)
	fmt.Println("\n=== Token Statistics ===")
	fmt.Printf("Total consumed: %d tokens\n", resp.Usage.TotalTokens)
}
```
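The `core.WithTemperature` and `core.WithMaxTokens` calls above use Go's Functional Options pattern: variadic functions that each mutate a request-options struct. A minimal, library-independent sketch of the pattern (the `Options` struct, defaults, and constructors here are illustrative, not GoChat's actual types):

```go
package main

import "fmt"

// Options holds request parameters; the fields here are illustrative.
type Options struct {
	Temperature float64
	MaxTokens   int
}

// Option mutates an Options value.
type Option func(*Options)

// WithTemperature sets the sampling temperature.
func WithTemperature(t float64) Option {
	return func(o *Options) { o.Temperature = t }
}

// WithMaxTokens caps the response length.
func WithMaxTokens(n int) Option {
	return func(o *Options) { o.MaxTokens = n }
}

// newOptions applies defaults first, then each caller-supplied option in order.
func newOptions(opts ...Option) Options {
	o := Options{Temperature: 1.0, MaxTokens: 512} // assumed defaults
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	o := newOptions(WithTemperature(0.7), WithMaxTokens(1000))
	fmt.Printf("temperature=%.1f maxTokens=%d\n", o.Temperature, o.MaxTokens)
}
```

The pattern keeps the call site readable and lets the library add new parameters later without breaking existing callers.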
2. Streaming Chat¶
For long text generation, streaming output can significantly improve user experience:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/DotNetAge/gochat/pkg/client/base"
	"github.com/DotNetAge/gochat/pkg/client/openai"
	"github.com/DotNetAge/gochat/pkg/core"
)

func main() {
	client, err := openai.New(openai.Config{
		Config: base.Config{
			APIKey: os.Getenv("API_KEY"),
			Model:  "qwen-plus",
		},
	})
	if err != nil {
		log.Fatalf("Failed to create client: %v", err)
	}

	messages := []core.Message{
		core.NewUserMessage("Write me a modern poem about autumn."),
	}

	stream, err := client.ChatStream(context.Background(), messages)
	if err != nil {
		log.Fatalf("Failed to create stream: %v", err)
	}
	defer stream.Close()

	fmt.Print("AI: ")
	// Iterate over stream events
	for stream.Next() {
		event := stream.Event()
		if event.Err != nil {
			log.Printf("\nStream error: %v\n", event.Err)
			break
		}
		// Dispatch on the event type
		switch event.Type {
		case core.EventThinking:
			// Reasoning models emit their thinking process here
			fmt.Printf("\033[90m%s\033[0m", event.Content) // Print thinking content in gray
		case core.EventContent:
			// Standard content output
			fmt.Print(event.Content)
		}
	}
	fmt.Println()
}
```
Running the examples above gives you a working integration with GoChat's core features. For information on other modules (such as Embedding and Pipeline), see the module documentation.