Following up on the previous installment, 玩转AI新声态 | 玩转TTS/ASR/YuanQI — building your own AI assistant, this is the page data rendering part.
In the last article we scaffolded the frontend shell of the 童话匠 agent speed edition; now it is time to implement all of the features.
I sketched the frontend prototype earlier, and from it you can clearly see the architecture, features, content, and call flow: when an answer comes back, the audio plays while the text streams out.
I have packaged an initial scaffold project for you. It is just a shell, so we only need to write the feature modules into it and can ignore the rest; the video below is a quick walkthrough of the scaffold.
Scaffold repository: https://gitee.com/yangbuyi/agent-ui
This frontend project assumes some frontend experience, so I will not spend much time on HTML and CSS basics; the focus is on TTS, ASR, and YuanQI (元器).
I have already built the empty-shell layout, so we only need to implement the features. Let's start writing the code for the 童话匠 agent speed edition.
Copy the code below into the wechatAgentSpeedEdition
page:
<script setup>
</script>
<template>
<div class="audio-wrapper">
<div class="agent">
<div class="agent-body">
<div class="agent-head">
<div class="img" id="agent-head-logo"></div>
<span>童话匠</span>
</div>
<div class="agent-content" id="agent-content">
<div class="msg-list" name="fade" ref="scrollContainerRef">
<div class="msg">
<div class="avatar2"></div>
<div class="audio2">
欢迎来到玩转新声 - 童话匠 By腾讯云社区领袖杨不易呀
</div>
</div>
<ul>
<!-- user -->
<li class="msg">
<!-- avatar -->
<div :class="{ avatar:true }"></div>
<!-- content -->
<div v-cloak :class="{ audio:true }">
你是谁呀?
</div>
</li>
<!-- bot -->
<li class="msg">
<!-- avatar -->
<div :class="{ avatar2: true }"></div>
<!-- content -->
<div v-cloak :class="{ audio2: true, 'duration-item':true, wink: false }">
<div class="markdown">
我是您专属的智能体童话匠呀
</div>
<!-- voice bubble -->
<div class="voice-content">
<div class="bg voicePlay"></div>
<div class="duration2">60"</div>
</div>
</div>
</li>
</ul>
</div>
</div>
<div class="height-bg">
<div id="agent-operate"
:class="{'agent-operate': true}">
按 住 说 话
</div>
</div>
</div>
</div>
<audio ref="audio"></audio>
</div>
</template>
<style scoped lang="scss">
.audio-wrapper {
margin-top: 10px;
padding: 20px;
width: 100%;
}
.agent {
margin: 168px auto;
padding: 55px 11px 53px;
width: 221px;
height: 448px;
font-size: 12px;
border-radius: 35px;
background-image: url("../assets/img/iphone-bg.png");
box-sizing: border-box;
user-select: none;
transform: scale(1.7);
}
.agent-body {
height: 100%;
background-color: #fff;
}
.agent-head {
height: 30px;
line-height: 30px;
color: #000;
background-color: transparent;
text-align: center;
position: relative;
}
.agent-head .img {
width: 20px;
height: 20px;
background: url("https://pinia.vuejs.org/logo.svg") 1px 28px;
background-size: 100%;
border-radius: 50%;
position: absolute;
left: 5px;
top: 5px;
animation: shake_logo_img 1s infinite;
@keyframes shake_logo_img {
0% {
transform: rotate(0deg);
}
25% {
transform: rotate(5deg);
}
50% {
transform: rotate(0deg);
}
75% {
transform: rotate(-5deg);
}
100% {
transform: rotate(0deg);
}
}
}
.agent-head span {
display: inline-block;
}
.agent-head span:nth-child(2) {
//width: 100px;
text-align: center;
}
.agent-head span:nth-child(3) {
float: right;
margin-right: 10px;
}
.agent-content {
height: 282px;
background-color: #f1eded;
}
.agent-container {
line-height: 28px;
width: 100%;
}
.height-bg {
height: 100px;
background: transparent;
}
.agent-operate {
color: #222222;
position: relative;
line-height: 28px;
text-align: center;
cursor: pointer;
font-weight: bold;
box-shadow: 0 -1px 1px rgba(0, 0, 0, .1);
}
.agent-operate:active {
background-color: rgba(255, 255, 255, 0.65);
}
.agent-operate-red {
background-color: rgba(215, 53, 53, 0.91) !important;
}
.agent-operate:active:before {
position: absolute;
left: 50%;
transform: translate(-50%, 0);
top: -2px;
content: '';
width: 0%;
height: 4px;
background-color: #7bed9f;
animation: loading 1s ease-in-out infinite backwards;
}
.msg-list {
margin: 0;
padding: 0;
height: 100%;
overflow-y: auto;
-webkit-overflow-scrolling: touch;
}
.msg-list::-webkit-scrollbar {
display: none;
}
.msg-list .msg {
list-style: none;
padding: 0 8px;
margin: 10px 0;
overflow: hidden;
//cursor: pointer;
}
.msg-list .msg .avatar,
.msg-list .msg .audio,
.msg-list .msg .duration {
float: right;
}
.msg-list .msg .avatar2,
.msg-list .msg .audio2 {
float: left;
}
.msg-list .msg .avatar, .msg-list .msg .avatar2 {
width: 24px;
height: 24px;
border-radius: 50%;
line-height: 24px;
text-align: center;
background-color: #000;
background: url("../assets/img/yby.png") 0 0;
background-size: 100%;
}
.msg-list .msg .avatar2 {
background: url('../assets/img/yq.png') -25px 74px !important;
background-size: 100% !important;
transform: scale(1.0);
}
.msg-list .msg .audio, .msg-list .msg .audio2 {
position: relative;
margin-right: 6px;
max-width: 125px;
min-width: 30px;
height: 24px;
line-height: 24px;
padding: 0 4px 0 10px;
border-radius: 2px;
color: #000;
text-align: left;
background-color: rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio2 {
margin-left: 6px;
text-align: left;
}
.msg-list .msg.eg {
cursor: default;
}
.msg-list .msg.eg .audio {
text-align: left;
}
.msg-list .msg .audio:before {
position: absolute;
right: -8px;
top: 8px;
content: '';
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 4px;
border-color: transparent transparent transparent rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio2:before {
position: absolute;
left: -8px;
top: 8px;
content: '';
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 4px;
border-color: transparent rgba(107, 197, 107, 0.85) transparent transparent;
}
.msg-list .msg .audio span, .msg-list .msg .audio2 span {
color: rgba(255, 255, 255, .8);
display: inline-block;
transform-origin: center;
}
.msg-list .msg .audio span:nth-child(1) {
font-weight: 400;
}
.msg-list .msg .audio span:nth-child(2) {
transform: scale(0.8);
font-weight: 500;
}
.msg-list .msg .audio span:nth-child(3) {
transform: scale(0.5);
font-weight: 700
}
.msg-list .msg .audio2 span:nth-child(1) {
transform: scale(0.5);
font-weight: 300;
}
.msg-list .msg .audio2 span:nth-child(2) {
transform: scale(0.8);
font-weight: 400;
}
.msg-list .msg .audio2 span:nth-child(3) {
font-weight: 500;
}
.msg-list .msg .audio.wink .voicePlay,
.msg-list .msg .audio2.wink .voicePlay {
animation-name: voicePlay;
animation-duration: 1s;
animation-direction: normal;
animation-iteration-count: infinite;
animation-timing-function: steps(3);
top: 0 !important;
}
.duration-item {
position: relative;
}
//.msg-list .msg .duration, .msg-list .msg .duration2 {
// //margin: 3px 2px;
// color: rgba(255, 255, 255, 0.73);
// position: absolute;
// right: 18px;
// top: 0;
//}
.msg-list .msg .duration2 {
color: rgba(255, 255, 255, 0.73);
margin-left: 1px;
font-size: 10px;
}
.fade-enter-active, .fade-leave-active {
transition: opacity .5s;
}
.fade-enter, .fade-leave-to {
opacity: 0;
}
//@keyframes wink {
// from {
// color: rgba(255, 255, 255, .8);
// }
// to {
// color: rgba(255, 255, 255, .1);
// }
//}
@keyframes loading {
from {
width: 0%;
}
to {
width: 100%;
}
}
.msg-list .msg .audio, .msg-list .msg .audio2 {
font-size: 9px !important;
line-height: 14px !important;
padding: 5px !important;
box-sizing: border-box !important;
height: auto !important;
}
.bg, .bg2 {
background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGAAAAAYCAYAAAAF6fiUAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNi1jMDY3IDc5LjE1Nzc0NywgMjAxNS8wMy8zMC0yMzo0MDo0MiAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6NzlFRDZDRDNENzlFMTFFNkJDN0NFMjA2QTFFRTRDQkIiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6NzlFRDZDRDJENzlFMTFFNkJDN0NFMjA2QTFFRTRDQkIiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIDIwMTcgKFdpbmRvd3MpIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6MTAxQkEzQ0RENzM2MTFFNjgyMEI5MTNDRkQ0OTM5QUEiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6MTAxQkEzQ0VENzM2MTFFNjgyMEI5MTNDRkQ0OTM5QUEiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz4K4iKVAAACUUlEQVR42uSazytEURTHvTHjR4kaU8xsSDZSdmbjx4oSK8XGQrJlpSwYTSmxEWWhUIpsZK3kD7DRNBuSBZFCNjZ+JPKcV6ecXu/d3sy7595bc+vbfXPue5/749z77o83lm3bZYYFC8RZqAbQAigP2tXNj5aZF7gdkAZNk9+7WvnOCCgxRUCb9n/o1sk3pUH6QDHF/GNsoM+QeYfiy6qkFeLZDBb0GlTB4AAR/xXT9nXxZVa0WCekQd9Y0HOJjg3CHySviiZmfjO3AyIhnu0gBc0wjAIR/wLtW8z87aAOWAI9gqaYRoAff4ZUoi7EKCiUP462j4CdSCrfK4N1Ahpi6I0i/hPa50M4oFB+Dbm/SzXfL5MD4rUogxP8+Itozynm59E+q5ovyuQdHxphWh568XvR5kxq1SEn40L4e0XMA1L4EcEe7RTjLqYdqRf/gezQUwr5LxjXq+aLHPCFcTmTA7z4tutIQhXfLiJPKXyRA/oxzgW8v9DgxU+S62eF/ATGr6r5fg26Corj9RHD4Z0fvwfjS9CbQn4bxrfK+R6TyzxZNk260solTL4i/g3al10TsMXIryA72T7VfK8MnJO8X9CKy14lafXjxx8jFUsSeyUzfxhtPwHPoqTy/TJJMJzJiPgNpJdsuNJizPwztB/q4JtwHN2KW3sn3HuMOouR30l6bbsOvgkOyGIBnaPbRldalJl/h2knuvgmOKAWNAFKMUz4Iv4O6Z1xXXxTPxtazHy6khnVyS/Fb8IDpHGyuvmWgX9L4Q
4toDnQFWhNN/9PgAEAR4w1ULjdCbEAAAAASUVORK5CYII=) right 0 no-repeat;
width: 14px;
height: 24px;
background-size: 400%;
position: relative;
top: 5px;
}
.bg2 {
transform: rotateY(200deg);
}
.voice-content {
display: flex;
align-items: center;
cursor: pointer;
}
@keyframes voicePlay {
0% {
background-position: 0;
}
100% {
background-position: 100%;
}
}
.markdown :deep(.markdown-it) {
img {
width: 100%;
}
a {
color: #0052d9;
font-weight: bold;
}
}
.loader {
width: 15px;
height: 15px;
border: 2px solid #ffffff;
border-bottom-color: #9cecb0;
border-radius: 50%;
box-sizing: border-box;
animation: rotation-1 1s linear infinite;
}
@keyframes rotation-1 {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
</style>
With the empty shell in place, all that is left is to implement the features: hold to talk to send a message, receive the reply, push it into the current message list, and render it on the page.
The hold-to-talk feature needs to capture the recorded audio data, fire the API call,
and be able to cancel the call.
Hold-to-talk details: pressing down starts the recording device; releasing stops it and produces the recorded audio data. To implement this we need the browser audio API MediaRecorder.
MediaRecorder is a Web API interface that lets you easily record media from a MediaStream (such as a camera or microphone). It provides a convenient way to capture audio and video and save it as a file or send it to a server.
Usage:
<script setup>
// explicit imports added here; the scaffold may also auto-import these
import { ref, onMounted, getCurrentInstance } from 'vue';

const { proxy } = getCurrentInstance();
// recorded audio chunks
const recordedAudioData = ref([]);
// the MediaRecorder instance
const recorder = ref(null);
// talk-button label
const btnText = ref('按住说话');
// request microphone access from the browser
const requestAudioAccess = () => {
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
// create the media recorder
recorder.value = new MediaRecorder(stream);
bindEvent();
}).catch(error => {
console.log(error);
proxy.$modal.notifyError("出错,请确保已允许浏览器获取录音权限")
});
};
// wire up data capture and the request on stop
const bindEvent = () => {
// recording data becomes available
recorder.value.ondataavailable = getRecordingData;
// when recording stops, fire the request
recorder.value.onstop = saveRecordingData;
}
// collect audio chunks as they arrive
const getRecordingData = (event) => {
console.log(event);
recordedAudioData.value.push(event.data);
};
/**
 * Speaking finished - call the aggregate API
 */
const saveRecordingData = () => {
// ...the aggregate API call goes here
console.log("发起调用: ",recordedAudioData.value);
}
// lifecycle hook - mounted
onMounted(() => {
// initialize media access
requestAudioAccess()
})
</script>
Press down to start the media-stream recording and release to stop it, so we need to listen for the mousedown
(press) and mouseup
(release) events, plus handle sliding down to cancel the Q&A request. The events are as follows.
Modify the page code to add these events:
<div class="height-bg"
@mousemove="onMouseMove"
@touchmove.prevent="onTouchmove"
@mousedown="onMousedown"
@touchstart.prevent="onMousedown"
@mouseup="onMouseup"
@touchend.prevent="onMouseup"
>
<div id="agent-operate" :class="{'agent-operate': true, 'agent-operate-red': shouldCancel}">
{{ btnText }}
</div>
</div>
Add the handlers for these events; I have commented them for clarity.
// movement threshold for cancelling - tune as needed
const cancelThreshold = ref(20);
// position where the press started
const startY = ref(0)
// whether the Q&A should be cancelled
let shouldCancel = ref(false);
// press down
const onMousedown = (event) => {
startY.value = event.clientY; // or event.touches[0].clientY for touch events
btnText.value = '松开结束, 下滑取消问答';
onStart();
};
// release
const onMouseup = (event) => {
// stop recording
onStop();
if (shouldCancel.value) {
proxy.$modal.msgSuccess("取消问答")
setTimeout(() => {
// clear the audio chunks
recordedAudioData.value = []
shouldCancel.value = false;
btnText.value = '按住说话';
// reset the press position
startY.value = 0
}, 20)
return; // if cancelled, do nothing further
}
console.log(shouldCancel.value);
setTimeout(() => {
// reset state after a successful send as well
shouldCancel.value = false;
btnText.value = '按住说话';
// reset the press position
startY.value = 0
// send the request
// sendRemote()
}, 200)
};
// while the pointer moves
const onMouseMove = (event) => {
if (startY.value <= 10) {
return
}
const currentY = event.clientY;
const distance = Math.sqrt(Math.pow(currentY - startY.value, 2));
// console.log(distance, cancelThreshold.value);
// moved past the threshold
if (distance > cancelThreshold.value) {
btnText.value = '松开取消';
// set true to block the request
shouldCancel.value = true;
} else {
shouldCancel.value = false;
btnText.value = '松开结束, 下滑取消问答';
}
};
// touch move - forward to the mouse handler
const onTouchmove = (event) => {
event.preventDefault();
onMouseMove(event.touches[0]);
};
// start recording
const onStart = () => {
recorder.value.start();
};
// stop recording
const onStop = () => {
recorder.value.stop();
};
Once that is written, test both paths: normal operation,
and sliding down to cancel the Q&A.
You can see the logged data in the console. That is the audio data, and you could feed it straight to an audio tag for playback; I will not demo that here.
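If you do want a quick check, the recorded chunks can be stitched into a single Blob and played back. A minimal sketch (illustrative only; `recordedChunks` stands in for the `recordedAudioData.value` array collected in `ondataavailable`):

```javascript
// Illustrative sketch - recordedChunks stands in for recordedAudioData.value.
const recordedChunks = [];
// Stitch all chunks into one Blob with the same MIME type used when recording.
const clip = new Blob(recordedChunks, { type: 'audio/ogg; codecs=opus' });
console.log(clip.size, clip.type);
// In the browser you could then play it straight away:
//   new Audio(URL.createObjectURL(clip)).play();
```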
Next, implement the API call and render the result on the page.
Modify the saveRecordingData
function: it converts the user's recorded audio chunks into a Blob (something the frontend can work with) and checks for speech activity; if speech is present it fires the call, otherwise it raises an error.
⚠️ Core message code 1: a Blob object can be converted to Base64.
The business flow is as follows:
/**
 * Speaking finished - call the aggregate API
 */
const saveRecordingData = () => {
// if the Q&A was cancelled, bail out without doing anything
if (shouldCancel.value) {
return
}
// build a Blob from the recorded chunks, typed as 'audio/ogg; codecs=opus'
let blob = new Blob(recordedAudioData.value, { type: 'audio/ogg; codecs=opus' });
// create an object URL that represents the Blob
let audioStream = URL.createObjectURL(blob);
// create a FileReader to read the Blob data
const reader = new FileReader();
// read the Blob as an ArrayBuffer
reader.readAsArrayBuffer(blob);
// fired when the read completes
reader.onloadend = () => {
// create an audio context
const audioContext = new (window.AudioContext || window.webkitAudioContext)();
// decode the audio data we just read
audioContext.decodeAudioData(reader.result, (audioBuffer) => {
// grab the samples of the first channel
const rawData = audioBuffer.getChannelData(0);
// run detectSpeech to check for voice activity in the samples
const isSpeaking = detectSpeech(rawData);
// log the result
console.log("Is speaking: ", isSpeaking);
if (isSpeaking) {
// clear the previous audio chunks
recordedAudioData.value = []
// finish the recording flow and send the data
sendRemote(audioStream, blob)
} else {
// show an error toast
proxy.$modal.notifyError("请确保您正在说话!")
console.log("结束");
recordedAudioData.value = []
onStop()
}
});
};
}
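The detectSpeech helper used above is not shown in this article. A minimal energy-based sketch could look like the following; the 0.02 RMS threshold is my own assumption and should be tuned for your microphone:

```javascript
// Naive voice activity detection: treat the clip as speech when the
// root-mean-square amplitude of the PCM samples exceeds a threshold.
// (Sketch only - the threshold value is an assumption, tune per device.)
function detectSpeech(rawData, threshold = 0.02) {
  let sumSquares = 0;
  for (let i = 0; i < rawData.length; i++) {
    sumSquares += rawData[i] * rawData[i];
  }
  const rms = Math.sqrt(sumSquares / rawData.length);
  return rms > threshold;
}
```

rawData is the Float32Array returned by audioBuffer.getChannelData(0) above.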
⚠️ Core message code 2: convert the Blob to Base64 as above, then call the aggregate API to fetch the data.
This code ties the audio handling and the chat feature together. Concretely, the sendRemote function takes the audio data and calls a chain of backend services: speech recognition, text processing (the agent), and speech synthesis.
/**
 * Call the aggregate API
 */
const sendRemote = (audioStream, blob) => {
if (shouldCancel.value) {
return;
}
let duration = parseInt(blob.size / 6600);
if (duration <= 2) {
proxy.$modal.notifyError("说话时间太短,至少3秒!")
recordedAudioData.value = [];
return;
}
if (duration > 60) {
duration = 60;
}
// call Tencent Cloud speech recognition to get the text
// then pass the text to the YuanQI agent for a reply
// then synthesize the reply into the custom voice style
// and return it to the frontend for playback
// fire the request
blobToBase64(blob).then(base64Audio => {
loading.value = true
chatCompletionV1({
"audioFile": base64Audio
}).then(res => {
console.log(res);
// the data shape returned by the aggregate API
const msgText = res.data.userMessage
const botMessage = res.data.botMessage
const voice = res.data.botVoice
if (msgText.length <= 0) {
proxy.$modal.notifyError("您没说话,请重新说话!")
loading.value = false
recordedAudioData.value = [];
return
}
// TODO the user's message
chatStore.pushMessage({
duration: duration,
stream: audioStream,
isUser: true,
text: msgText,
sort: chatStore.addIndex(), // message sort order +1
})
// TODO the bot's message
chatStore.pushMessage({
duration: duration,
stream: voice,
isUser: false,
text: botMessage,
sort: chatStore.addIndex(), // message sort order +1
});
// always auto-scroll to the bottom when a request is made
scrollToBottom()
setTimeout(() => {
loading.value = false
// mark as not playing
isPlay.value = false
// play the latest voice message
onPlay(chatStore.messageList.length - 1)
// clear the recorded chunks
recordedAudioData.value = [];
}, 50)
}).catch(err => {
console.log(err);
proxy.$modal.notifyError("出错,请稍后再试!")
loading.value = false
recordedAudioData.value = [];
})
})
}
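Note that sendRemote relies on a blobToBase64 helper that is not shown here. A minimal sketch could look like this; in the actual scaffold the helper may instead use FileReader.readAsDataURL, which returns a data: URL with a header prefix:

```javascript
// Convert a Blob to a plain base64 string (sketch - the real helper in the
// scaffold may differ, e.g. by returning a full data: URL instead).
async function blobToBase64(blob) {
  const bytes = new Uint8Array(await blob.arrayBuffer());
  let binary = '';
  for (let i = 0; i < bytes.length; i++) {
    binary += String.fromCharCode(bytes[i]);
  }
  // btoa encodes a binary string as base64 (available in browsers and Node 16+)
  return btoa(binary);
}
```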
Test hold-to-talk against the aggregate API, and remember to keep the request URL consistent with your backend service.
On a successful request, two entries are persisted to the store - one for the user and one for the bot. See the useChatStore.js
file for the related code.
You can see them cached in the browser; next we just render this list on the page.
Rendering needs no introduction - a straightforward v-for
does the job. Five steps in total; don't forget to fill in the user side.
Where the variables are defined, add the ref that carries audio playback:
// playback element for the bot's audio
const audio = ref(null);
Where the functions are defined, add a method that binds the audio events and toggles the animation, then switch the page markup to the dynamic wink
class to drive the animation:
/**
 * Bind audio events and toggle the animation
 */
const bindAudioEvent = (index) => {
let item = chatStore.messageList[index];
audio.value.onplaying = () => {
item.wink = true;
};
audio.value.onended = () => {
item.wink = false;
};
// apply the animation change
chatStore.messageList[index] = item
};
Overall it looks like the figure below.
Now you can test the page.
Test points:
The page-rendering code from the video is as follows:
/**
 * Call the aggregate API
 */
const sendRemote = (audioStream, blob) => {
if (shouldCancel.value) {
return;
}
let duration = parseInt(blob.size / 6600);
if (duration <= 2) {
proxy.$modal.notifyError("说话时间太短,至少3秒!")
recordedAudioData.value = [];
return;
}
if (duration > 60) {
duration = 60;
}
// call Tencent Cloud speech recognition to get the text
// then pass the text to the YuanQI agent for a reply
// then synthesize the reply into the custom voice style
// and return it to the frontend for playback
// fire the request
blobToBase64(blob).then(base64Audio => {
loading.value = true
chatCompletionV1({
"audioFile": base64Audio
}).then(res => {
console.log(res);
// the data shape returned by the aggregate API
const msgText = res.data.userMessage
const botMessage = res.data.botMessage
const voice = res.data.botVoice
if (msgText.length <= 0) {
proxy.$modal.notifyError("您没说话,请重新说话!")
loading.value = false
recordedAudioData.value = [];
return
}
// TODO the user's message
chatStore.pushMessage({
duration: duration,
stream: audioStream,
isUser: true,
text: msgText,
sort: chatStore.addIndex(), // message sort order +1
})
// compute the audio duration
calculateDuration(voice, botMessage)
setTimeout(() => {
loading.value = false
// mark as not playing
isPlay.value = false
// (pseudo) streaming output
// chatStore.updateMessage(chatStore.messageList, botMessage, true)
// play the latest voice message
onPlay(chatStore.messageList.length - 1)
// clear the recorded chunks
recordedAudioData.value = [];
}, 50)
}).catch(err => {
console.log(err);
proxy.$modal.notifyError("出错,请稍后再试!")
loading.value = false
recordedAudioData.value = [];
})
})
}
// scroll to the bottom
const scrollToBottom = () => {
setTimeout(() => {
// make sure scrollContainerRef.value is defined
if (proxy.$refs.scrollContainerRef) {
proxy.$refs.scrollContainerRef.scrollTop = proxy.$refs.scrollContainerRef.scrollHeight;
}
}, 10)
};
/**
 * Compute the audio duration
 * @param voice the bot's audio
 * @param botMessage the bot's message
 */
const calculateDuration = (voice, botMessage) => {
const binaryString = window.atob(voice.split(',')[1]);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
const blob = new Blob([ bytes ], { type: 'audio/mp3' });
const audio = new Audio();
audio.src = URL.createObjectURL(blob);
audio.addEventListener('loadedmetadata', () => {
// TODO the bot's message
chatStore.pushMessage({
duration: parseFloat(audio.duration.toFixed(2)),
stream: voice,
isUser: false,
text: botMessage,
sort: chatStore.addIndex(), // message sort order +1
});
});
}
// watch for message changes and keep the list scrolled down
watch(chatStore.messageList, (newValue, oldValue) => {
scrollToBottom()
});
With that, the full feature set of the 童话匠 recording speed edition is in place. Does it feel like something is still missing? Most chat UIs stream the reply in - so why haven't I done that here?
The store defines a function named updateMessage
whose job is to update the text of the last message in a message list. It gives us a way to display or update text dynamically, which can be used for a typewriter-style streaming output.
updateMessage takes four parameters:
messageList: an array of message objects, each with at least a text property.
reader: the text string to display or update.
stream: a boolean, default false. When true, the text is revealed character by character for a streaming effect.
speed: the interval in milliseconds between characters during streaming, default 50.
When stream is true, the function initializes i to 0, clears the text of the last message in messageList, then starts a timer, printInterval, that fires every speed milliseconds and appends the characters of reader one by one to the last message's text, incrementing i until it reaches reader's length (all characters added). It then clears the timer with clearInterval and resets i to 0 so the text can be streamed again.
When stream is false, it simply assigns the reader string to the last message's text.
/**
 * Update the given message's text for a streaming effect
 * @param messageList the current message list
 * @param reader the text
 * @param stream whether streaming is enabled
 * @param speed the streaming interval
 */
const updateMessage = (messageList, reader, stream = false, speed = 50) => {
if (stream) {
let i = 0
messageList[messageList.length - 1].text = ""
let printInterval = setInterval(() => {
messageList[messageList.length - 1].text += reader[i];
i++
if (i >= reader.length) {
clearInterval(printInterval)
i = 0
}
}, speed)
} else {
messageList[messageList.length - 1].text = reader
}
}
Modify the sendRemote
function to add the streaming behavior.
Now you can test the streaming output effect.
That completes the recording-file recognition edition! All that is left is the real-time recognition edition - let's keep going!
Remember the real-time JS sample we played with earlier? Copy its implementation and configuration over - 6 files in total, straight into the project. In config.js
set your keys as before.
Copy the recording edition's code into the wechatAgent
page - either copy everything over directly, or use the version I have already cleaned up.
To hook 童话匠 up to real-time speech recognition, I have written the real-time recognition component useWebAudioSpeechRecognizer
; just import it into the project.
<!--
- 您可以更改此项目但请不要删除作者署名谢谢,否则根据中华人民共和国版权法进行处理.
- You may change this item but please do not remove the author's signature,
- otherwise it will be dealt with according to the Copyright Law of the People's Republic of China.
-
- yangbuyi Copyright (c) https://yby6.com 2024.
-->
<script setup>
import {onMounted, ref} from 'vue';
import {signCallback} from '@/aigc/vioce/asrauthentication';
import '@/aigc/vioce/config';
import '@/aigc/vioce/speechrecognizer.js';
import '@/aigc/vioce/webaudiospeechrecognizer.js';
import '@/aigc/vioce/webrecorder.js';
import WebAudioSpeechRecognizer from "@/aigc/vioce/webaudiospeechrecognizer.js";
const {proxy} = getCurrentInstance()
const webAudioSpeechRecognizer = ref(null); // web audio speech recognizer
const voiceBool = ref(false); // recording flag
const stagingText = ref(''); // staged partial result
const resultData = ref(''); // final result
const countdown = ref(0); // elapsed-seconds counter
const time = ref('30'); // max time
let timer = ref(null);
const isOpen = ref(false)
onMounted(() => {
});
const timerInterval = () => {
clearTimer();
timer.value = setInterval(() => {
countdown.value++;
if (countdown.value === 30) {
clearTimer();
close();
countdown.value = 0;
}
}, 1000);
};
const clearTimer = () => {
clearInterval(timer.value);
timer.value = null;
};
// start real-time audio capture
const openVoice = () => {
proxy.$modal.notifySuccess("开启实时音频录入")
isOpen.value = true;
countdown.value = 0;
// initialize the real-time speech parameters
const params = {
signCallback: signCallback, // signature/auth callback
secretid: config.secretId,
appid: config.appId,
engine_model_type: '16k_zh', // engine model type
voice_format: 1, // audio encoding; the default is fine
needvad: 0, // if a speech slice exceeds 60s, enable vad (voice activity detection); we cap at 30s anyway
filter_dirty: 0, // filter profanity (Mandarin engines). Default 0. 0: keep; 1: filter; 2: replace with "*"
filter_modal: 0, // filter filler words (Mandarin engines). Default 0. 0: keep; 1: partial filter; 2: strict filter
filter_punc: 1, // strip the sentence-final full stop (Mandarin engines). Default 0. 0: keep; 1: strip
// smart conversion to Arabic numerals (Mandarin engines)
// 0: no conversion, output Chinese numerals; 1: convert by context; 3: enable math-related conversion. Default 1
convert_num_mode: 1,
// word-level timestamps. 0: off; 1: on, without punctuation timestamps; 2: on, with punctuation timestamps.
// Supported engines: 8k_en, 8k_zh, 8k_zh_finance, 16k_zh, 16k_en, 16k_ca, 16k_zh-TW, 16k_ja, 16k_wuu-SH. Default 0
word_info: 2,
isLog: true
};
// create the web audio speech recognizer
webAudioSpeechRecognizer.value = new WebAudioSpeechRecognizer(params);
// start the timer
timerInterval();
voiceBool.value = true;
webAudioSpeechRecognizer.value.OnRecognitionStart = (res) => {
};
webAudioSpeechRecognizer.value.OnSentenceBegin = (res) => {
};
webAudioSpeechRecognizer.value.OnRecognitionResultChange = (res) => {
//console.log('recognition update: ', res);
resultData.value = `${stagingText.value}${res.result.voice_text_str}`;
proxy.$emit('resultSpeechRecognizer', resultData.value, countdown.value);
};
webAudioSpeechRecognizer.value.OnSentenceEnd = (res) => {
//console.log('sentence end: ', res);
stagingText.value += res.result.voice_text_str;
resultData.value = stagingText.value;
proxy.$emit('resultSpeechRecognizer', resultData.value, countdown.value);
isOpen.value = false;
};
webAudioSpeechRecognizer.value.OnRecognitionComplete = (res) => {
};
webAudioSpeechRecognizer.value.OnError = (res) => {
};
// start recognition
webAudioSpeechRecognizer.value.start();
};
// stop real-time audio capture
const close = () => {
console.log("组件取消");
webAudioSpeechRecognizer.value.stop();
voiceBool.value = false;
resultData.value = ''
stagingText.value = ''
// force-stop after 30 seconds of recording
if (countdown.value === 30) {
proxy.$modal.msgWarning("录音结束")
countdown.value = 0;
}
};
defineExpose({
close, openVoice
})
</script>
<template>
<div style="margin-top: 40px;margin-bottom: 20px;" v-if="isOpen">
00:
<span v-if="countdown < 10">0{{ countdown }}</span>
<span v-if="countdown >= 10">{{ countdown }}</span> /
00:{{ time }}
</div>
</template>
<style scoped lang="scss">
</style>
I have already done this refactor for you, and I don't think a detailed walkthrough is needed: if you worked through the recording edition, the real-time edition is well within reach, because the logic is shared - just strip out all of the recording-specific code. The real-time component hands you the text of what the user said directly, and you call the aggregate backend API with it. Here is the flow I followed:
Requirement: clicking the logo opens a drawer where you can pick a different agent; the choice is passed to the backend so a different agent API is called.
Implementation:
<script setup>
const list = ref([
{
"assistantId": "",
"userId": "",
"token": "",
"name": "小朋友的故事屋",
"description": "小朋友的故事屋可以根据需要的主题生成出图文故事",
"logo": "https://hunyuan-base-prod-1258344703.cos.ap-guangzhou.myqcloud.com/hunyuan_open/agentlogo/091b2322d19771cda94eccee62f03fa9.png",
"to": "https://yuanqi.tencent.com/agent/pCwCLmfl48p7?from=share&shareId=TK5rN7MMKrQy"
},
{
"assistantId": "",
"userId": "",
"token": "",
"name": "隔壁大妈",
"description": "隔壁大妈是一个脾气不好的编程巨星,她掌握Java、python、人工智能AI等各项领域的技能,但是脾气不好说话很冲.",
"logo": "https://hunyuan-base-prod-1258344703.cos.ap-guangzhou.myqcloud.com/hunyuan_open/agentlogo/63b5448a11e415d3604baef0b5a33944.png",
"to": "https://yuanqi.tencent.com/agent/ogKZMi24zGJk?from=share&shareId=F9Q1sqbFBvtb"
},
{
"assistantId": "",
"userId": "",
"token": "",
"name": "隔壁老王",
"description": "扮演隔壁老王,完全精通编程知识点,会图片修改,图片生成,图片解析。",
"logo": "https://hunyuan-base-prod-1258344703.cos.ap-guangzhou.myqcloud.com/hunyuan_open/agentlogo/20240522100010_6af83d277ea14861d4ac43e41904700a.png",
"to": "https://yuanqi.tencent.com/agent/mmbnqMnLdYz0?from=share&shareId=PBGhVPTaj1Pw"
}
])
const { proxy } = getCurrentInstance()
const toUrl = (item) => {
// open in a new tab
window.open(item.to)
}
const handleUseAgent = (item) => {
proxy.$emit('handleUseAgent', item)
}
defineExpose({
handleUseAgent
})
</script>
<template>
<div class="card" v-for="(item, index) in list" :key="index">
<div class="card-header">
<img
:src="item.logo"
:alt="item.description" class="card-image"/>
<div class="card-content">
<h3>{{ item.name }}</h3>
<p>{{ item.description }}</p>
</div>
</div>
<div class="card-footer">
<span class="status">已发布</span>
<div class="buttons">
<button class="btn edit" @click="toUrl(item)">点击在线体验</button>
<button class="btn experience" @click="handleUseAgent(item)">使用智能体</button>
</div>
</div>
</div>
</template>
<style scoped lang="scss">
@keyframes shake {
0% { transform: rotate(0deg); }
25% { transform: rotate(5deg); }
50% { transform: rotate(0deg); }
75% { transform: rotate(-5deg); }
100% { transform: rotate(0deg); }
}
.card {
border: 1px solid #eaeaea;
border-radius: 10px;
padding: 20px;
max-width: 400px;
background-color: #fff;
margin: 10px auto;
}
.card-header {
display: flex;
align-items: center;
}
.card-image {
width: 60px;
height: 60px;
border-radius: 50%;
margin-right: 15px;
animation: shake 1s infinite;
}
.card-content {
flex: 1;
}
.card-content h3 {
margin: 0;
font-size: 18px;
font-weight: bold;
}
.card-content p {
margin: 5px 0;
font-size: 14px;
color: #666;
}
.timestamp {
font-size: 12px;
color: #999;
}
.card-footer {
display: flex;
justify-content: space-between;
align-items: center;
margin-top: 15px;
}
.status {
font-size: 14px;
color: #28a745;
}
.buttons {
display: flex;
gap: 10px;
}
.btn {
padding: 5px 10px;
border: 1px solid #ddd;
border-radius: 5px;
background-color: #fff;
cursor: pointer;
font-size: 14px;
}
.btn.edit {
color: #28a745;
}
.btn.experience {
color: #007bff;
}
.btn.more {
color: #333;
}
</style>
The wechatAgent
page code:
<!--
- 您可以更改此项目但请不要删除作者署名谢谢,否则根据中华人民共和国版权法进行处理.
- You may change this item but please do not remove the author's signature,
- otherwise it will be dealt with according to the Copyright Law of the People's Republic of China.
-
- yangbuyi Copyright (c) https://yby6.com 2024.
-->
<script setup>
import { chatCompletionV1 } from "@/api/audio.js";
import { VueMarkdownIt } from "@f3ve/vue-markdown-it";
import UseChatStore from "@/stores/useChatStore.js";
import UseWebAudioSpeechRecognizer from "@/components/useWebAudioSpeechRecognizer.vue";
import UseCard from "@/components/useCard.vue";
const { proxy } = getCurrentInstance();
const chatStore = UseChatStore();
const durationTime = ref(0)
// real-time recognition result
const resultData = ref('');
// talk-button label
const btnText = ref('按住说话');
// whether audio is playing
const isPlay = ref(false);
// audio playback element
const audio = ref(null);
const loading = ref(false);
// movement threshold for cancelling
const cancelThreshold = ref(50); // tune as needed
// position where the press started
const startY = ref(0)
// whether the Q&A should be cancelled
let shouldCancel = ref(false);
// navigation
const open = ref(true)
// drawer
const drawer = ref(false)
const direction = ref("rtl")
// hook up the real-time recognition results
const resultSpeechRecognizer = (areaDom, countdown) => {
console.log('resultSpeechRecognizer', areaDom);
resultData.value = areaDom.replace(/\n/g, '');
}
const onMousedown = (event) => {
startY.value = event.clientY; // or event.touches[0].clientY for touch events
btnText.value = '松开结束, 下滑取消问答';
proxy.$refs.useWebAudioSpeechRecognizer.openVoice()
};
const onMouseup = (event) => {
proxy.$refs.useWebAudioSpeechRecognizer.close()
if (shouldCancel.value) {
proxy.$modal.msgSuccess("取消问答")
setTimeout(() => {
shouldCancel.value = false;
btnText.value = '按住说话';
// reset the press position
startY.value = 0
}, 20)
return; // if cancelled, do nothing further
}
setTimeout(() => {
// reset state after a successful send as well
shouldCancel.value = false;
btnText.value = '按住说话';
// reset the press position
startY.value = 0
console.log('发送请求: ', resultData.value);
// send the request
sendRemote()
}, 200)
};
// while the pointer moves
const onMouseMove = (event) => {
if (startY.value <= 10) {
return
}
const currentY = event.clientY;
const distance = Math.sqrt(Math.pow(currentY - startY.value, 2));
// console.log(distance, cancelThreshold.value);
// moved past the threshold
if (distance > cancelThreshold.value) {
btnText.value = '松开取消';
// set true to block the request
shouldCancel.value = true;
} else {
shouldCancel.value = false;
btnText.value = '松开结束, 下滑取消问答';
}
};
const onTouchmove = (event) => {
event.preventDefault();
onMouseMove(event.touches[0]);
};
/**
 * Play the bot audio at the given index
 */
const onPlay = (index) => {
let item = chatStore.messageList[index];
console.log(item, isPlay.value);
item.wink = false;
audio.value.src = item.stream;
// currently playing
if (isPlay.value) {
// so this click pauses
audio.value.pause();
isPlay.value = false
} else {
audio.value.play();
isPlay.value = true
}
chatStore.messageList[index] = item
bindAudioEvent(index);
};
/**
 * Compute the audio duration
 */
const calculateDuration = (base64String) => {
if (base64String === '') {
durationTime.value = 6
return
}
const binaryString = window.atob(base64String);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
const blob = new Blob([ bytes ], { type: 'audio/mp3' });
const audio = new Audio();
audio.src = URL.createObjectURL(blob);
audio.addEventListener('loadedmetadata', () => {
durationTime.value = audio.duration;
});
}
/**
 * Bind audio events and toggle the animation
 */
const bindAudioEvent = (index) => {
let item = chatStore.messageList[index];
audio.value.onplaying = () => {
item.wink = true;
};
audio.value.onended = () => {
item.wink = false;
};
// apply the animation change
chatStore.messageList[index] = item
};
/**
 * Send the request
 */
const sendRemote = async () => {
console.log('sendRemote', resultData.value);
if ([ undefined, null, '' ].includes(resultData.value)) {
console.log("说点话吧");
return
}
// TODO the user's message
chatStore.pushMessage({
duration: 999,
stream: undefined,
isUser: true,
text: resultData.value,
sort: chatStore.addIndex(), // message sort order +1
})
// placeholder message
chatStore.pushMessage({
duration: 0,
stream: null,
isUser: false,
text: "#",
sort: chatStore.addIndex(), // message sort order +1
});
// with real-time recognition we already have the text, so call the agent and TTS directly
// loading.value = true
chatCompletionV1({
"audioFile": resultData.value
}).then(res => {
// the bot's audio
const botVoice = res.data.botVoice
const base64String = botVoice.split(',')[1];
calculateDuration(base64String)
// the bot's text
const botMessage = res.data.botMessage
// the last message index
const lastIndex = chatStore.getLastIndex();
// TODO the bot's message
setTimeout(() => {
// replace the placeholder message
chatStore.messageList.splice(lastIndex - 1, 1, {
duration: Math.round(durationTime.value),
stream: botVoice,
isUser: false,
text: "",
sort: chatStore.addIndex(), // message sort order +1
})
// scroll
scrollToBottom()
setTimeout(() => {
// (pseudo) streaming output
chatStore.updateMessage(chatStore.messageList, botMessage, true)
loading.value = false
console.log(chatStore.messageList.length - 1, chatStore.messageList[chatStore.getLastIndex() - 1]);
// mark as not playing
isPlay.value = false
// play the latest voice message
onPlay(chatStore.messageList.length - 1)
// clear the recognition result
resultData.value = '';
}, 50)
}, 50)
})
}
const handleOpen = () => {
drawer.value = true
}
const handleClose = (done) => {
done();
}
// 使用智能体
const handleUseAgent = (item) => {
chatStore.saveAgentData(item)
drawer.value = false
}
/**
* 滚动到底部
*/
const scrollToBottom = () => {
setTimeout(() => {
// 确保 scrollContainerRef.value 是定义的
if (proxy.$refs.scrollContainerRef) {
proxy.$refs.scrollContainerRef.scrollTop = proxy.$refs.scrollContainerRef.scrollHeight;
}
}, 10)
};
// 监听文本变化
watch(chatStore.messageList, (newValue, oldValue) => {
scrollToBottom()
});
/**
* Lifecycle
*/
onMounted(() => {
if (!navigator.mediaDevices) {
proxy.$modal.notifyError("Your browser does not support accessing media devices")
return;
}
if (!window.MediaRecorder) {
proxy.$modal.notifyError("Your browser does not support audio recording")
return;
}
// Grab the <audio> element on the page
audio.value = proxy.$refs.audio;
scrollToBottom()
});
</script>
<template>
<div class="audio-wrapper">
<div class="phone">
<div class="phone-body">
<div class="phone-head">
<div class="img" id="phone-head-logo" @click="handleOpen"></div>
<span>童话匠</span>
</div>
<div class="phone-content" id="phone-content">
<div class="msg-list" name="fade" ref="scrollContainerRef">
<div class="msg">
<div class="avatar2"></div>
<div class="audio2">
Welcome to 玩转新声 - 童话匠, by Tencent Cloud community leader 杨不易呀
</div>
</div>
<ul v-for="(item, index) in chatStore.messageList" :key="index">
<!-- User message -->
<li v-if="item.isUser" class="msg">
<div :class="{avatar:item.isUser,avatar2:!item.isUser}"></div>
<div v-cloak
:class="{
audio:item.isUser,
audio2:!item.isUser
}">
{{ item.text }}
</div>
</li>
<!-- Bot message -->
<li v-else class="msg">
<div :class="{avatar:item.isUser,avatar2:!item.isUser}"></div>
<div v-cloak
:class="{
audio2: true,
'duration-item':true,
wink: item.wink
}">
<div class="markdown">
<vue-markdown-it v-if="item.duration > 0" class="markdown-it" :source="item.text"/>
<div class="loader" v-else></div>
</div>
<div class="voice-content" @click="onPlay(index)" v-if="item.duration > 0"
@touchend.prevent="onPlay(index)">
<div class="bg voicePlay"></div>
<div class="duration2">{{ item.duration }}"</div>
</div>
</div>
</li>
</ul>
</div>
</div>
<div id="phone-operate" :class="{'phone-operate': true, 'phone-operate-red': shouldCancel}"
@mousedown="onMousedown"
@touchstart.prevent="onMousedown"
@mouseup="onMouseup"
@touchend.prevent="onMouseup"
@mousemove="onMouseMove"
@touchmove.prevent="onTouchmove"
>
{{ btnText }}
<UseWebAudioSpeechRecognizer ref="useWebAudioSpeechRecognizer"
@resultSpeechRecognizer="resultSpeechRecognizer"></UseWebAudioSpeechRecognizer>
</div>
</div>
</div>
<audio ref="audio"></audio>
</div>
<el-drawer
v-model="drawer"
title="Pick an agent you like to chat with"
:direction="direction"
:before-close="handleClose"
size="30%"
>
<div style="width: 100%;">
<use-card @handleUseAgent="handleUseAgent"></use-card>
</div>
</el-drawer>
<el-tour v-model="open">
<el-tour-step target="#phone-head-logo" title="童话匠 Logo"
description="This is the logo. Clicking it opens a drawer where you can pick a different agent to chat with!"
/>
<el-tour-step
target="#phone-content"
title="Chat area"
description="The conversation between the user and the bot"
/>
<el-tour-step
target="#phone-operate"
title="Hold to talk"
description="Hold to run speech recognition and send your question"
/>
</el-tour>
</template>
<style scoped lang="scss">
.audio-wrapper {
margin-top: 10px;
padding: 20px;
width: 100%;
}
.phone {
margin: 168px auto;
padding: 55px 11px 53px;
width: 221px;
height: 448px;
font-size: 12px;
border-radius: 35px;
background-image: url("../assets/img/iphone-bg.png");
box-sizing: border-box;
user-select: none;
transform: scale(1.7);
}
.phone-body {
height: 100%;
background-color: #fff;
}
.phone-head {
height: 30px;
line-height: 30px;
color: #000;
background-color: transparent;
text-align: center;
position: relative;
}
.phone-head .img {
width: 20px;
height: 20px;
background: url("https://pinia.vuejs.org/logo.svg") 1px 28px;
background-size: 100%;
border-radius: 50%;
position: absolute;
left: 5px;
top: 5px;
animation: shake_logo_img 1s infinite;
@keyframes shake_logo_img {
0% {
transform: rotate(0deg);
}
25% {
transform: rotate(5deg);
}
50% {
transform: rotate(0deg);
}
75% {
transform: rotate(-5deg);
}
100% {
transform: rotate(0deg);
}
}
}
.phone-head span {
display: inline-block;
}
.phone-head span:nth-child(2) {
//width: 100px;
text-align: center;
}
.phone-head span:nth-child(3) {
float: right;
margin-right: 10px;
}
.phone-content {
height: 282px;
background-color: #f1eded;
}
.phone-container {
line-height: 28px;
width: 100%;
}
.phone-operate {
color: #222222;
position: relative;
line-height: 28px;
text-align: center;
cursor: pointer;
font-weight: bold;
box-shadow: 0 -1px 1px rgba(0, 0, 0, .1);
}
.phone-operate:active {
background-color: rgba(255, 255, 255, 0.65);
}
.phone-operate-red {
background-color: rgba(215, 53, 53, 0.91) !important;
}
.phone-operate:active:before {
position: absolute;
left: 50%;
transform: translate(-50%, 0);
top: -2px;
content: '';
width: 0%;
height: 4px;
background-color: #7bed9f;
animation: loading 1s ease-in-out infinite backwards;
}
.msg-list {
margin: 0;
padding: 0;
height: 100%;
overflow-y: auto;
-webkit-overflow-scrolling: touch;
}
.msg-list::-webkit-scrollbar {
display: none;
}
.msg-list .msg {
list-style: none;
padding: 0 8px;
margin: 10px 0;
overflow: hidden;
//cursor: pointer;
}
.msg-list .msg .avatar,
.msg-list .msg .audio,
.msg-list .msg .duration {
float: right;
}
.msg-list .msg .avatar2,
.msg-list .msg .audio2 {
float: left;
}
.msg-list .msg .avatar, .msg-list .msg .avatar2 {
width: 24px;
height: 24px;
border-radius: 50%;
line-height: 24px;
text-align: center;
background-color: #000;
background: url("../assets/img/yby.png") 0 0;
background-size: 100%;
}
.msg-list .msg .avatar2 {
background: url('../assets/img/yq.png') -25px 74px !important;
background-size: 100% !important;
transform: scale(1.0);
}
.msg-list .msg .audio, .msg-list .msg .audio2 {
position: relative;
margin-right: 6px;
max-width: 125px;
min-width: 30px;
height: 24px;
line-height: 24px;
padding: 0 4px 0 10px;
border-radius: 2px;
color: #000;
text-align: left;
background-color: rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio2 {
margin-left: 6px;
text-align: left;
}
.msg-list .msg.eg {
cursor: default;
}
.msg-list .msg.eg .audio {
text-align: left;
}
.msg-list .msg .audio:before {
position: absolute;
right: -8px;
top: 8px;
content: '';
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 4px;
border-color: transparent transparent transparent rgba(107, 197, 107, 0.85);
}
.msg-list .msg .audio2:before {
position: absolute;
left: -8px;
top: 8px;
content: '';
display: inline-block;
width: 0;
height: 0;
border-style: solid;
border-width: 4px;
border-color: transparent rgba(107, 197, 107, 0.85) transparent transparent;
}
.msg-list .msg .audio span, .msg-list .msg .audio2 span {
color: rgba(255, 255, 255, .8);
display: inline-block;
transform-origin: center;
}
.msg-list .msg .audio span:nth-child(1) {
font-weight: 400;
}
.msg-list .msg .audio span:nth-child(2) {
transform: scale(0.8);
font-weight: 500;
}
.msg-list .msg .audio span:nth-child(3) {
transform: scale(0.5);
font-weight: 700;
}
.msg-list .msg .audio2 span:nth-child(1) {
transform: scale(0.5);
font-weight: 300;
}
.msg-list .msg .audio2 span:nth-child(2) {
transform: scale(0.8);
font-weight: 400;
}
.msg-list .msg .audio2 span:nth-child(3) {
font-weight: 500;
}
.msg-list .msg .audio.wink .voicePlay,
.msg-list .msg .audio2.wink .voicePlay {
animation-name: voicePlay;
animation-duration: 1s;
animation-direction: normal;
animation-iteration-count: infinite;
animation-timing-function: steps(3);
top: 0 !important;
}
.duration-item {
position: relative;
}
.msg-list .msg .duration2 {
color: rgba(255, 255, 255, 0.73);
margin-left: 1px;
font-size: 10px;
}
.fade-enter-active, .fade-leave-active {
transition: opacity .5s;
}
.fade-enter, .fade-leave-to {
opacity: 0;
}
@keyframes loading {
from {
width: 0%;
}
to {
width: 100%;
}
}
.msg-list .msg .audio, .msg-list .msg .audio2 {
font-size: 9px !important;
line-height: 14px !important;
padding: 5px !important;
box-sizing: border-box !important;
height: auto !important;
}
.bg, .bg2 {
background: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAGAAAAAYCAYAAAAF6fiUAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyZpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuNi1jMDY3IDc5LjE1Nzc0NywgMjAxNS8wMy8zMC0yMzo0MDo0MiAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wTU09Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9tbS8iIHhtbG5zOnN0UmVmPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvc1R5cGUvUmVzb3VyY2VSZWYjIiB4bWxuczp4bXA9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC8iIHhtcE1NOkRvY3VtZW50SUQ9InhtcC5kaWQ6NzlFRDZDRDNENzlFMTFFNkJDN0NFMjA2QTFFRTRDQkIiIHhtcE1NOkluc3RhbmNlSUQ9InhtcC5paWQ6NzlFRDZDRDJENzlFMTFFNkJDN0NFMjA2QTFFRTRDQkIiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENDIDIwMTcgKFdpbmRvd3MpIj4gPHhtcE1NOkRlcml2ZWRGcm9tIHN0UmVmOmluc3RhbmNlSUQ9InhtcC5paWQ6MTAxQkEzQ0RENzM2MTFFNjgyMEI5MTNDRkQ0OTM5QUEiIHN0UmVmOmRvY3VtZW50SUQ9InhtcC5kaWQ6MTAxQkEzQ0VENzM2MTFFNjgyMEI5MTNDRkQ0OTM5QUEiLz4gPC9yZGY6RGVzY3JpcHRpb24+IDwvcmRmOlJERj4gPC94OnhtcG1ldGE+IDw/eHBhY2tldCBlbmQ9InIiPz4K4iKVAAACUUlEQVR42uSazytEURTHvTHjR4kaU8xsSDZSdmbjx4oSK8XGQrJlpSwYTSmxEWWhUIpsZK3kD7DRNBuSBZFCNjZ+JPKcV6ecXu/d3sy7595bc+vbfXPue5/749z77o83lm3bZYYFC8RZqAbQAigP2tXNj5aZF7gdkAZNk9+7WvnOCCgxRUCb9n/o1sk3pUH6QDHF/GNsoM+QeYfiy6qkFeLZDBb0GlTB4AAR/xXT9nXxZVa0WCekQd9Y0HOJjg3CHySviiZmfjO3AyIhnu0gBc0wjAIR/wLtW8z87aAOWAI9gqaYRoAff4ZUoi7EKCiUP462j4CdSCrfK4N1Ahpi6I0i/hPa50M4oFB+Dbm/SzXfL5MD4rUogxP8+Itozynm59E+q5ovyuQdHxphWh568XvR5kxq1SEn40L4e0XMA1L4EcEe7RTjLqYdqRf/gezQUwr5LxjXq+aLHPCFcTmTA7z4tutIQhXfLiJPKXyRA/oxzgW8v9DgxU+S62eF/ATGr6r5fg26Corj9RHD4Z0fvwfjS9CbQn4bxrfK+R6TyzxZNk260solTL4i/g3al10TsMXIryA72T7VfK8MnJO8X9CKy14lafXjxx8jFUsSeyUzfxhtPwHPoqTy/TJJMJzJiPgNpJdsuNJizPwztB/q4JtwHN2KW3sn3HuMOouR30l6bbsOvgkOyGIBnaPbRldalJl/h2knuvgmOKAWNAFKMUz4Iv4O6Z1xXXxTPxtazHy6khnVyS/Fb8IDpHGyuvmWgX9L4Q
4toDnQFWhNN/9PgAEAR4w1ULjdCbEAAAAASUVORK5CYII=) right 0 no-repeat;
width: 14px;
height: 24px;
background-size: 400%;
position: relative;
top: 5px;
}
.bg2 {
transform: rotateY(200deg);
}
.voice-content {
display: flex;
align-items: center;
cursor: pointer;
}
@keyframes voicePlay {
0% {
background-position: 0;
}
100% {
background-position: 100%;
}
}
.markdown :deep(.markdown-it) {
img {
width: 100%;
}
a {
color: #0052d9;
font-weight: bold;
}
}
.loader {
width: 15px;
height: 15px;
border: 2px solid #ffffff;
border-bottom-color: #9cecb0;
border-radius: 50%;
box-sizing: border-box;
animation: rotation-1 1s linear infinite;
}
@keyframes rotation-1 {
0% {
transform: rotate(0deg);
}
100% {
transform: rotate(360deg);
}
}
</style>
That wraps up our hands-on project. Looking forward to seeing you next time, bye!
That's all for this installment, see you next time 👋~
Follow me so you don't get lost. If this article helped you, or if you have any questions, feel free to leave a comment; I usually reply when I see them. Please give it a like to show your support! 💗
Original-work statement: this article is published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.
For takedown requests due to infringement, please contact cloudcommunity@tencent.com.