GLM-4-9B-Chat-1M + Vue3 Integration in Practice: Building a Modern AI Application Front End
2026/4/2 13:19:43


1. Introduction

Imagine you are building an AI chat application where users may paste in very long documents or hold multi-turn conversations. A conventional model cannot handle that much context, so answers degrade or important details get lost. This is where GLM-4-9B-Chat-1M shines: it supports a context length of up to 1 million tokens, roughly 2 million Chinese characters.

A powerful backend model is not enough on its own, though; users need a smooth, polished front end to interact with it. That is where Vue3 comes in. As a modern front-end framework, Vue3 offers reactive data binding and component-based development, which makes building a complex AI application UI straightforward.

This article walks through integrating GLM-4-9B-Chat-1M with Vue3 step by step, focusing on three key technical problems: rendering long text, handling streaming responses, and adapting the UI for multiple languages. Whether you are a front-end developer curious about AI integration or an AI engineer picking up front-end skills, you should find something practical here.

2. Environment Setup and Project Scaffolding

2.1 Creating the Vue3 Project

First, create a new Vue3 project. Vite is the recommended build tool for its fast startup and hot module replacement.

Open a terminal and run:

```bash
# note the extra "--" so npm forwards --template to create-vite
npm create vite@latest glm4-vue3-app -- --template vue
cd glm4-vue3-app
npm install
```

This creates a basic Vue3 project. Next, install the dependencies we will need:

```bash
npm install axios         # for API calls
npm install highlight.js  # for code highlighting
npm install marked        # for Markdown rendering
```

2.2 Configuring the GLM-4 API Connection

Create a .env file in the project root with the API base address:

```
VITE_API_BASE_URL=http://localhost:8000
VITE_MODEL_NAME=glm-4-9b-chat-1m
```

Then create src/utils/api.js with the API client:

```javascript
import axios from 'axios'

const API_BASE_URL = import.meta.env.VITE_API_BASE_URL

const apiClient = axios.create({
  baseURL: API_BASE_URL,
  timeout: 300000, // 5-minute timeout to accommodate long-text processing
  headers: {
    'Content-Type': 'application/json'
  }
})

// Request interceptor
apiClient.interceptors.request.use(
  (config) => {
    // Auth tokens etc. can be attached here
    return config
  },
  (error) => Promise.reject(error)
)

// Response interceptor
apiClient.interceptors.response.use(
  (response) => response,
  (error) => {
    console.error('API request error:', error)
    return Promise.reject(error)
  }
)

export const chatAPI = {
  // Plain (non-streaming) chat endpoint
  async sendMessage(messages) {
    const response = await apiClient.post('/chat', {
      model: import.meta.env.VITE_MODEL_NAME,
      messages,
      stream: false
    })
    return response.data
  },

  // Streaming chat endpoint (server-sent events)
  async sendMessageStream(messages, onMessage, onError, onComplete) {
    try {
      const response = await fetch(`${API_BASE_URL}/chat/stream`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
          model: import.meta.env.VITE_MODEL_NAME,
          messages,
          stream: true
        })
      })

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`)
      }

      const reader = response.body.getReader()
      const decoder = new TextDecoder()
      let buffer = '' // holds a partial line that spans two chunks

      while (true) {
        const { done, value } = await reader.read()
        if (done) {
          onComplete()
          break
        }

        buffer += decoder.decode(value, { stream: true })
        const lines = buffer.split('\n')
        buffer = lines.pop() // keep the last (possibly incomplete) line

        for (const line of lines) {
          if (!line.startsWith('data: ')) continue
          const data = line.slice(6)
          if (data === '[DONE]') {
            onComplete()
            return
          }
          try {
            onMessage(JSON.parse(data))
          } catch (e) {
            console.error('Parse error:', e)
          }
        }
      }
    } catch (error) {
      onError(error)
    }
  }
}
```
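As a side note, the `data: ` line framing that `sendMessageStream` handles can be factored into a small pure function and exercised without a server. A minimal sketch — the payload shapes below are illustrative of an OpenAI-style stream, not a guaranteed format from any particular GLM-4 server:

```javascript
// Parse the lines of one SSE chunk: each event line looks like
// "data: {json}" and the stream ends with "data: [DONE]".
function parseSSELines(lines) {
  const events = []
  let done = false
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue
    const payload = line.slice(6)
    if (payload === '[DONE]') {
      done = true
      break
    }
    try {
      events.push(JSON.parse(payload))
    } catch {
      // ignore malformed fragments
    }
  }
  return { events, done }
}

const sample = [
  'data: {"choices":[{"delta":{"content":"Hel"}}]}',
  'data: {"choices":[{"delta":{"content":"lo"}}]}',
  'data: [DONE]'
]
const { events, done } = parseSSELines(sample)
console.log(events.map(e => e.choices[0].delta.content).join('')) // "Hello"
console.log(done) // true
```

Keeping the framing logic pure like this makes it easy to unit-test the trickiest part of the streaming path.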

3. Core Features

3.1 Optimized Long-Text Rendering

GLM-4-9B-Chat-1M can consume and produce extremely long text, which means the front end needs matching optimizations to display long content smoothly. The standard technique is virtual scrolling: only the segments inside (or near) the viewport are actually rendered.
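The window arithmetic at the heart of virtual scrolling is small enough to sketch and test on its own before looking at the full component. The function name and numbers here are illustrative:

```javascript
// Hypothetical helper mirroring a virtual scroller's core: given the
// scroll position, viewport height, a fixed per-item height, and a
// buffer of extra rows, return the index range that should be rendered.
function visibleRange(scrollTop, viewportHeight, itemHeight, itemCount, buffer) {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - buffer)
  const end = Math.min(
    itemCount - 1,
    Math.floor((scrollTop + viewportHeight) / itemHeight) + buffer
  )
  return { start, end }
}

// With 10 000 segments of 100px each in a 600px viewport scrolled to
// 5 000px, only a dozen or so rows near index 50 get rendered.
console.log(visibleRange(5000, 600, 100, 10000, 3)) // { start: 47, end: 59 }
```

Everything outside that range stays out of the DOM, which is what keeps million-token outputs scrollable.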

Create src/components/LongTextRenderer.vue:

```vue
<template>
  <div class="long-text-container">
    <!-- Virtual scrolling container -->
    <div
      ref="scrollContainer"
      class="scroll-container"
      @scroll="handleScroll"
    >
      <div class="content-wrapper" :style="{ height: totalHeight + 'px' }">
        <div
          v-for="visibleItem in visibleItems"
          :key="visibleItem.index"
          :style="{ transform: `translateY(${visibleItem.top}px)` }"
          class="text-item"
        >
          <div v-html="formatText(visibleItem.content)"></div>
        </div>
      </div>
    </div>

    <!-- Loading state -->
    <div v-if="loading" class="loading-indicator">
      加载中...
    </div>

    <!-- Back-to-top button -->
    <button
      v-if="showScrollToTop"
      @click="scrollToTop"
      class="scroll-to-top-btn"
    >
      ↑
    </button>
  </div>
</template>

<script>
import { ref, computed, onMounted, onUnmounted } from 'vue'
import { marked } from 'marked'

export default {
  name: 'LongTextRenderer',
  props: {
    content: { type: String, default: '' },
    itemHeight: { type: Number, default: 100 },
    buffer: { type: Number, default: 3 }
  },
  setup(props) {
    const scrollContainer = ref(null)
    const scrollTop = ref(0)
    const containerHeight = ref(0)
    const loading = ref(false)

    // Split the long text into ~500-character segments. Splitting on
    // whitespace alone fails for Chinese text (no spaces), so fall back
    // to a fixed-length cut when no break point is found.
    const textSegments = computed(() => {
      if (!props.content) return []
      const MAX_LEN = 500
      const segments = []
      let remaining = props.content
      while (remaining.length > MAX_LEN) {
        // prefer to break at the last whitespace before the limit
        let cut = remaining.lastIndexOf(' ', MAX_LEN)
        if (cut <= 0) cut = MAX_LEN
        segments.push(remaining.slice(0, cut))
        remaining = remaining.slice(cut).trimStart()
      }
      if (remaining) segments.push(remaining)
      return segments
    })

    const totalHeight = computed(() => textSegments.value.length * props.itemHeight)

    // Compute only the items inside (or near) the viewport
    const visibleItems = computed(() => {
      if (!scrollContainer.value) return []
      const startIndex = Math.max(0, Math.floor(scrollTop.value / props.itemHeight) - props.buffer)
      const endIndex = Math.min(
        textSegments.value.length - 1,
        Math.floor((scrollTop.value + containerHeight.value) / props.itemHeight) + props.buffer
      )
      const items = []
      for (let i = startIndex; i <= endIndex; i++) {
        items.push({
          index: i,
          content: textSegments.value[i],
          top: i * props.itemHeight
        })
      }
      return items
    })

    const showScrollToTop = computed(() => scrollTop.value > 1000)

    const handleScroll = () => {
      if (scrollContainer.value) {
        scrollTop.value = scrollContainer.value.scrollTop
      }
    }

    const scrollToTop = () => {
      if (scrollContainer.value) {
        scrollContainer.value.scrollTo({ top: 0, behavior: 'smooth' })
      }
    }

    // Note: marked does not sanitize its output. If the content can be
    // untrusted, run the HTML through a sanitizer (e.g. DOMPurify)
    // before handing it to v-html.
    const formatText = (text) => marked.parse(text)

    // Track viewport size changes
    const updateContainerHeight = () => {
      if (scrollContainer.value) {
        containerHeight.value = scrollContainer.value.clientHeight
      }
    }

    onMounted(() => {
      updateContainerHeight()
      window.addEventListener('resize', updateContainerHeight)
    })

    onUnmounted(() => {
      window.removeEventListener('resize', updateContainerHeight)
    })

    return {
      scrollContainer,
      scrollTop,
      containerHeight,
      loading,
      textSegments,
      totalHeight,
      visibleItems,
      showScrollToTop,
      handleScroll,
      scrollToTop,
      formatText
    }
  }
}
</script>

<style scoped>
.long-text-container {
  position: relative;
  height: 100%;
}
.scroll-container {
  height: 100%;
  overflow-y: auto;
}
.content-wrapper {
  position: relative;
}
.text-item {
  position: absolute;
  width: 100%;
  height: v-bind('itemHeight + "px"');
  padding: 8px;
  border-bottom: 1px solid #eee;
  box-sizing: border-box;
}
.loading-indicator {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  padding: 10px 20px;
  background: rgba(0, 0, 0, 0.7);
  color: white;
  border-radius: 4px;
}
.scroll-to-top-btn {
  position: fixed;
  bottom: 20px;
  right: 20px;
  width: 40px;
  height: 40px;
  border-radius: 50%;
  background: #007bff;
  color: white;
  border: none;
  cursor: pointer;
  font-size: 18px;
}
.scroll-to-top-btn:hover {
  background: #0056b3;
}
</style>
```

3.2 Handling Streaming Responses

Streaming lets users watch the model's output appear in real time, which greatly improves perceived responsiveness. Create src/components/StreamingChat.vue:

```vue
<template>
  <div class="chat-container">
    <div class="messages-container">
      <div
        v-for="(message, index) in messages"
        :key="index"
        :class="['message', message.role]"
      >
        <div class="message-content">
          <div v-if="message.role === 'user'" class="user-message">
            {{ message.content }}
          </div>
          <div v-else class="assistant-message">
            <LongTextRenderer v-if="message.isLongText" :content="message.content" />
            <div v-else v-html="formatMessage(message.content)"></div>
          </div>
        </div>
      </div>

      <!-- Live streaming output -->
      <div v-if="isStreaming" class="streaming-message">
        <div class="streaming-content">
          <div v-html="formatMessage(streamingContent)"></div>
          <span class="cursor">|</span>
        </div>
      </div>
    </div>

    <div class="input-container">
      <!-- .exact: plain Enter sends, Shift+Enter still inserts a newline -->
      <textarea
        v-model="inputText"
        @keydown.enter.exact.prevent="sendMessage"
        placeholder="输入您的问题..."
        rows="3"
        class="message-input"
      ></textarea>
      <div class="controls">
        <label class="stream-toggle">
          <input type="checkbox" v-model="useStreaming" />
          启用流式响应
        </label>
        <button
          @click="sendMessage"
          :disabled="isLoading"
          class="send-btn"
        >
          {{ isLoading ? '发送中...' : '发送' }}
        </button>
      </div>
    </div>
  </div>
</template>

<script>
import { ref, nextTick } from 'vue'
import { marked } from 'marked'
import { chatAPI } from '../utils/api'
import LongTextRenderer from './LongTextRenderer.vue'

export default {
  name: 'StreamingChat',
  components: { LongTextRenderer },
  setup() {
    const messages = ref([])
    const inputText = ref('')
    const isLoading = ref(false)
    const useStreaming = ref(true)
    const isStreaming = ref(false)
    const streamingContent = ref('')

    // Heuristic: route very long replies through the virtual-scrolling renderer
    const isLongText = (text) => text.length > 1000 || text.split(' ').length > 200

    const formatMessage = (content) => marked.parse(content)

    const scrollToBottom = () => {
      nextTick(() => {
        const container = document.querySelector('.messages-container')
        if (container) {
          container.scrollTop = container.scrollHeight
        }
      })
    }

    const sendMessage = async () => {
      if (!inputText.value.trim() || isLoading.value) return

      messages.value.push({
        role: 'user',
        content: inputText.value.trim(),
        timestamp: new Date()
      })
      inputText.value = ''
      isLoading.value = true

      // messages.value already contains the new user message, so this
      // history is complete — do not append the input a second time
      const history = messages.value.map(m => ({ role: m.role, content: m.content }))

      try {
        if (useStreaming.value) {
          // Streaming response
          isStreaming.value = true
          streamingContent.value = ''

          await chatAPI.sendMessageStream(
            history,
            (chunk) => {
              if (chunk.choices && chunk.choices[0].delta.content) {
                streamingContent.value += chunk.choices[0].delta.content
                scrollToBottom()
              }
            },
            (error) => {
              console.error('Streaming request error:', error)
              messages.value.push({
                role: 'assistant',
                content: '抱歉,发生了一些错误。请重试。',
                timestamp: new Date()
              })
              isStreaming.value = false
              isLoading.value = false
            },
            () => {
              // Stream finished: promote the buffer to a real message
              messages.value.push({
                role: 'assistant',
                content: streamingContent.value,
                isLongText: isLongText(streamingContent.value),
                timestamp: new Date()
              })
              isStreaming.value = false
              streamingContent.value = ''
              isLoading.value = false
              scrollToBottom()
            }
          )
        } else {
          // Non-streaming response
          const response = await chatAPI.sendMessage(history)
          const reply = response.choices[0].message.content
          messages.value.push({
            role: 'assistant',
            content: reply,
            isLongText: isLongText(reply),
            timestamp: new Date()
          })
          isLoading.value = false
          scrollToBottom()
        }
      } catch (error) {
        console.error('Send message error:', error)
        messages.value.push({
          role: 'assistant',
          content: '抱歉,发生了一些错误。请重试。',
          timestamp: new Date()
        })
        isLoading.value = false
        isStreaming.value = false
      }
    }

    return {
      messages,
      inputText,
      isLoading,
      useStreaming,
      isStreaming,
      streamingContent,
      sendMessage,
      formatMessage
    }
  }
}
</script>

<style scoped>
.chat-container {
  height: 100vh;
  display: flex;
  flex-direction: column;
}
.messages-container {
  flex: 1;
  overflow-y: auto;
  padding: 20px;
}
.message {
  margin-bottom: 16px;
}
.message-content {
  max-width: 80%;
}
.user-message {
  background: #007bff;
  color: white;
  padding: 12px;
  border-radius: 12px;
  margin-left: auto;
}
.assistant-message {
  background: #f1f3f5;
  padding: 12px;
  border-radius: 12px;
  margin-right: auto;
}
.streaming-message {
  background: #f1f3f5;
  padding: 12px;
  border-radius: 12px;
  margin-right: auto;
}
.streaming-content {
  display: inline;
}
.cursor {
  animation: blink 1s infinite;
}
@keyframes blink {
  0%, 50% { opacity: 1; }
  51%, 100% { opacity: 0; }
}
.input-container {
  padding: 20px;
  border-top: 1px solid #ddd;
  background: white;
}
.message-input {
  width: 100%;
  padding: 12px;
  border: 1px solid #ddd;
  border-radius: 8px;
  resize: vertical;
  font-family: inherit;
}
.controls {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-top: 12px;
}
.stream-toggle {
  display: flex;
  align-items: center;
  gap: 8px;
}
.send-btn {
  padding: 8px 20px;
  background: #007bff;
  color: white;
  border: none;
  border-radius: 4px;
  cursor: pointer;
}
.send-btn:disabled {
  background: #ccc;
  cursor: not-allowed;
}
.send-btn:hover:not(:disabled) {
  background: #0056b3;
}
</style>
```

3.3 Multilingual UI Support

GLM-4-9B-Chat-1M supports 26 languages, so the front-end UI should offer matching language options. Create src/utils/i18n.js:

```javascript
// Translation resources
const translations = {
  en: {
    chat: {
      placeholder: 'Type your message...',
      send: 'Send',
      sending: 'Sending...',
      clear: 'Clear',
      streaming: 'Streaming response'
    },
    menu: { home: 'Home', chat: 'Chat', settings: 'Settings' }
  },
  zh: {
    chat: {
      placeholder: '输入您的问题...',
      send: '发送',
      sending: '发送中...',
      clear: '清空',
      streaming: '流式响应'
    },
    menu: { home: '首页', chat: '聊天', settings: '设置' }
  },
  ja: {
    chat: {
      placeholder: 'メッセージを入力...',
      send: '送信',
      sending: '送信中...',
      clear: 'クリア',
      streaming: 'ストリーミング応答'
    },
    menu: { home: 'ホーム', chat: 'チャット', settings: '設定' }
  }
  // Add further languages here...
}

// Currently active language
let currentLanguage = 'zh'

export const i18n = {
  // Switch the active language
  setLanguage(lang) {
    if (translations[lang]) {
      currentLanguage = lang
      document.documentElement.lang = lang
      // Notify listeners of the change
      window.dispatchEvent(new CustomEvent('languageChanged', { detail: lang }))
    }
  },

  // Current language code
  getLanguage() {
    return currentLanguage
  },

  // Translation lookup: t('chat.send') walks the nested resource object
  t(key, defaultValue = '') {
    const keys = key.split('.')
    let value = translations[currentLanguage]
    for (const k of keys) {
      if (value && value[k] !== undefined) {
        value = value[k]
      } else {
        return defaultValue
      }
    }
    return value
  },

  // All languages with bundled resources
  getSupportedLanguages() {
    return Object.keys(translations).map(code => ({
      code,
      name: this.getLanguageName(code)
    }))
  },

  // Human-readable language names
  getLanguageName(code) {
    const names = {
      en: 'English',
      zh: '中文',
      ja: '日本語',
      ko: '한국어',
      de: 'Deutsch',
      fr: 'Français',
      es: 'Español'
      // Add further languages here...
    }
    return names[code] || code
  }
}

// Default to the browser language when we have resources for it
const browserLanguage = navigator.language.split('-')[0]
if (translations[browserLanguage]) {
  i18n.setLanguage(browserLanguage)
}

export default i18n
```
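The heart of this module is the dotted key-path lookup in `t()`. A standalone sketch of that lookup, with a toy resource table standing in for the real one:

```javascript
// Toy resource table (illustrative, not the real translations object)
const resources = {
  chat: { send: 'Send', sending: 'Sending...' }
}

// Walk a dotted key path ("chat.send") through a nested object,
// falling back to defaultValue on any missing segment
function t(table, key, defaultValue = '') {
  let value = table
  for (const k of key.split('.')) {
    if (value && value[k] !== undefined) {
      value = value[k]
    } else {
      return defaultValue // unknown keys never throw
    }
  }
  return value
}

console.log(t(resources, 'chat.send'))         // "Send"
console.log(t(resources, 'chat.missing', '?')) // "?"
```

The fallback behavior matters in practice: a missing translation renders a default string instead of crashing the component tree.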

Next, create the language switcher component src/components/LanguageSwitcher.vue:

```vue
<template>
  <div class="language-switcher">
    <select v-model="selectedLanguage" @change="changeLanguage" class="language-select">
      <option
        v-for="lang in supportedLanguages"
        :key="lang.code"
        :value="lang.code"
      >
        {{ lang.name }}
      </option>
    </select>
  </div>
</template>

<script>
import { ref, onMounted, onUnmounted } from 'vue'
import { i18n } from '../utils/i18n'

export default {
  name: 'LanguageSwitcher',
  setup() {
    const selectedLanguage = ref(i18n.getLanguage())
    const supportedLanguages = ref(i18n.getSupportedLanguages())

    const changeLanguage = () => {
      i18n.setLanguage(selectedLanguage.value)
    }

    // Keep the dropdown in sync when the language changes elsewhere;
    // remove the listener on unmount to avoid leaking it
    const onLanguageChanged = (event) => {
      selectedLanguage.value = event.detail
    }

    onMounted(() => {
      window.addEventListener('languageChanged', onLanguageChanged)
    })

    onUnmounted(() => {
      window.removeEventListener('languageChanged', onLanguageChanged)
    })

    return {
      selectedLanguage,
      supportedLanguages,
      changeLanguage
    }
  }
}
</script>

<style scoped>
.language-switcher {
  display: inline-block;
}
.language-select {
  padding: 8px 12px;
  border: 1px solid #ddd;
  border-radius: 4px;
  background: white;
  font-size: 14px;
}
.language-select:focus {
  outline: none;
  border-color: #007bff;
}
</style>
```

4. Putting the Application Together

Now let's wire all the components into a complete application. Create src/App.vue:

```vue
<template>
  <div class="app">
    <header class="app-header">
      <h1>GLM-4 AI聊天应用</h1>
      <LanguageSwitcher />
    </header>
    <main class="app-main">
      <StreamingChat />
    </main>
    <footer class="app-footer">
      <p>Powered by GLM-4-9B-Chat-1M & Vue3</p>
    </footer>
  </div>
</template>

<script>
import StreamingChat from './components/StreamingChat.vue'
import LanguageSwitcher from './components/LanguageSwitcher.vue'

export default {
  name: 'App',
  components: {
    StreamingChat,
    LanguageSwitcher
  }
}
</script>

<style>
* {
  margin: 0;
  padding: 0;
  box-sizing: border-box;
}
body {
  font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
  line-height: 1.6;
  color: #333;
}
.app {
  display: flex;
  flex-direction: column;
  height: 100vh;
}
.app-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  padding: 1rem 2rem;
  background: #007bff;
  color: white;
  box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}
.app-header h1 {
  font-size: 1.5rem;
  font-weight: 600;
}
.app-main {
  flex: 1;
  overflow: hidden;
}
.app-footer {
  padding: 1rem 2rem;
  text-align: center;
  background: #f8f9fa;
  border-top: 1px solid #dee2e6;
}

/* Responsive design */
@media (max-width: 768px) {
  .app-header {
    padding: 0.75rem 1rem;
    flex-direction: column;
    gap: 0.5rem;
  }
  .app-header h1 {
    font-size: 1.25rem;
  }
  .app-footer {
    padding: 0.75rem 1rem;
  }
}
</style>
```

5. Deployment and Optimization

5.1 Production Deployment

Create vite.config.js with production settings:

```javascript
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'

export default defineConfig({
  plugins: [vue()],
  build: {
    outDir: 'dist',
    sourcemap: false,
    chunkSizeWarningLimit: 1000,
    rollupOptions: {
      output: {
        manualChunks: {
          vendor: ['vue', 'axios', 'marked'],
          highlight: ['highlight.js']
        }
      }
    }
  },
  server: {
    proxy: {
      '/api': {
        target: 'http://localhost:8000',
        changeOrigin: true
      }
    }
  }
})
```

5.2 Performance Tips

  1. Code splitting: use Vite's automatic code splitting to cut initial load time
  2. Image optimization: compress static assets and lazy-load images
  3. Caching: configure sensible HTTP cache headers
  4. CDN: serve static assets from a CDN
  5. Monitoring: integrate a front-end monitoring tool to watch application performance in real time

6. Summary

In this walkthrough we integrated the GLM-4-9B-Chat-1M large model with a Vue3 front end. The approach tackles the long-text rendering problem, delivers smooth streaming responses, and adds flexible multilingual support.

In practice this kind of integration noticeably improves the user experience: virtual scrolling keeps long content smooth to browse, streaming lets users watch generation as it happens, and the i18n layer readies the app for an international audience.

Every project has its own quirks, of course, so you may need to adapt details such as conversation-history management, error handling, or specific UI interactions. The overall architecture and techniques carry over, though, and hopefully this article serves as a useful reference.

If you run into problems along the way, or have better optimization ideas, feel free to share them. Technology keeps evolving, and continuous learning and practice are the only way to keep up.

