The previous post covered how to implement playback with the Web Audio API; this one covers how to draw the audio track (the waveform).
A feature I built recently involves selecting a segment of audio, so displaying the audio's waveform was a natural requirement.

Without further ado, let's get straight into the code and draw a waveform like the one shown in the figure.
The HTML structure is roughly as follows and can be adjusted to fit your needs:
<div class="audio-track">
  <div class="audio-track-scroll">
    <div class="audio-track-canvas" ref="audioTrack"></div>
  </div>
</div>
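The styles for these containers aren't part of the post, but since the background layer and the canvas created below are absolutely positioned inside the ref element, it needs its own positioning context and a size. A minimal sketch, with placeholder dimensions:

.audio-track {
  width: 100%;
}
.audio-track-scroll {
  overflow-x: auto; /* let a long waveform scroll horizontally */
}
.audio-track-canvas {
  position: relative; /* positioning context for the absolutely positioned background and canvas */
  height: 80px; /* placeholder; use whatever height your design needs */
}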
First, add a gradient background layer behind the waveform:

addAudioTrackBg() {
  const bgColor = 'linear-gradient(270deg, #A282FF 0.15%, #4AC9FF 37.07%, #479CFF 66.72%, #C377FF 100%)'
  const bgDom = document.createElement('div')
  this.setElementStyle(bgDom.style, {
    width: '100%',
    height: '100%',
    position: 'absolute',
    left: '0',
    top: '0',
    backgroundImage: bgColor,
    zIndex: '1'
  });
  (this.$refs.audioTrack as HTMLDivElement).appendChild(bgDom)
}

Next, create the canvas that the waveform will be drawn on:
addAudioTrack() {
  this.audioTrackDom = document.createElement('canvas')
  this.canvasContext = this.audioTrackDom.getContext('2d') as CanvasRenderingContext2D
  const { width: containerWidth, height: containerHeight } = (this.$refs.audioTrack as HTMLElement).getBoundingClientRect()
  this.audioTrackDom.setAttribute('width', `${containerWidth}`)
  this.audioTrackDom.setAttribute('height', `${containerHeight}`)
  // Set these according to your own scenario; the real calculation is more involved,
  // but it is simplified here for clarity
  this.audioTrackWidth = containerWidth
  this.audioTrackHeight = containerHeight
  this.setElementStyle(this.audioTrackDom.style, {
    transform: `translateX(${this.marginLeft}px)`,
    position: 'absolute',
    top: '0',
    left: '0',
    zIndex: '4',
    cursor: 'pointer',
  });
  (this.$refs.audioTrack as HTMLDivElement).appendChild(this.audioTrackDom)
}
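Both methods above call a setElementStyle helper that isn't shown in this post. Assuming it simply copies a map of style values onto an element's inline style, a minimal sketch could look like this:

setElementStyle(style: CSSStyleDeclaration, props: { [key: string]: string }) {
  // Copy each provided property (width, zIndex, transform, ...) onto the inline style
  Object.keys(props).forEach(key => {
    (style as any)[key] = props[key]
  })
}

With the background and the canvas in place, initAudio wires everything together: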
async initAudio() {
  this.addAudioTrackBg()
  this.addAudioTrack()
  // Getting audioUrl is outside the scope of this post
  // Convert the audio file to an ArrayBuffer, then decode it into an AudioBuffer
  const arrayBuffer = await getAudioArrayBuffer(audioUrl)
  const audioBuffer = await getAudioBuffer(arrayBuffer, new AudioContext())
  // getChannelData() returns a Float32Array containing the PCM data of one channel; 0 is the first channel
  this.channelData = audioBuffer.getChannelData(0)
  if (this.channelData) this.drawTrack()
}
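In a Vue class component like this one, initAudio would typically be kicked off once the component has mounted and the refs exist, for example (assuming the standard mounted hook):

mounted() {
  // $refs.audioTrack is only available after mounting
  this.initAudio()
}

drawTrack() then renders the waveform bars onto the canvas: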
drawTrack() {
  // Re-assigning the width attribute resizes and clears the canvas; clearRect keeps the intent explicit
  this.audioTrackDom.setAttribute('width', `${this.audioTrackWidth}`)
  this.canvasContext.clearRect(0, 0, this.audioTrackWidth, this.audioTrackHeight)
  this.canvasContext.fillStyle = 'rgb(255, 255, 255)'
  // Compute one amplitude value per pixel column, then draw each column as a bar
  const audioTrackList = this.getAudioTrackList(this.channelData, this.audioTrackWidth)
  audioTrackList.forEach((item, index) => {
    // Skip every other column so there is a 1px gap between bars
    if (index % 2) return
    const barHeight = Math.max(1, item * this.audioTrackHeight * 0.4)
    const barWidth = 1
    // Center each bar vertically within the track
    this.canvasContext.fillRect(index, (this.audioTrackHeight - barHeight) / 2, barWidth, barHeight)
  })
}
getAudioTrackList(channelData: Float32Array, trackTotalWidth: number) {
  const list: number[] = []
  let i = 0, stepIndex = 0
  // Number of samples that map onto one pixel column
  const unitWidth = Math.floor(channelData.length / trackTotalWidth)
  while (stepIndex++ < trackTotalWidth) {
    let min = 1
    let max = -1
    const end = Math.min(stepIndex * unitWidth, channelData.length)
    // Find the peak-to-peak range of the samples in this column's window
    while (i < end) {
      const current = channelData[i++]
      if (current > max) max = current
      if (current < min) min = current
    }
    list.push(max - min)
  }
  return list
}
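To make the windowing concrete, here is a tiny worked example with made-up numbers: eight samples rendered into a track four columns wide gives unitWidth = 2, so each column stores the peak-to-peak range of two samples.

// 8 PCM samples, trackTotalWidth = 4  =>  unitWidth = 2
const samples = new Float32Array([0.1, -0.2, 0.5, -0.5, 0.05, -0.05, 0.6, -0.2])
// Window 1: [0.1, -0.2]   -> 0.3
// Window 2: [0.5, -0.5]   -> 1.0
// Window 3: [0.05, -0.05] -> 0.1
// Window 4: [0.6, -0.2]   -> 0.8
// getAudioTrackList(samples, 4)  =>  [0.3, 1.0, 0.1, 0.8] (values approximate due to floating point)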
// These two helpers are the same shared utilities used in the previous post; repeated here for convenience
async function getAudioArrayBuffer(audioUrl: string): Promise<ArrayBuffer> {
  const res = await fetch(audioUrl)
  return res.arrayBuffer()
}
async function getAudioBuffer(arrayBuffer: ArrayBuffer, audioContext: AudioContext): Promise<AudioBuffer> {
  // Wrap the callback-based decodeAudioData signature in a promise
  let resolveFn
  const promise = new Promise(resolve => resolveFn = resolve)
  audioContext.decodeAudioData(arrayBuffer, resolveFn)
  return promise as Promise<AudioBuffer>
}
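As a side note, in browsers that support the promise-based form of decodeAudioData, the second helper can be reduced to a one-liner with the same behavior:

function getAudioBuffer(arrayBuffer: ArrayBuffer, audioContext: AudioContext): Promise<AudioBuffer> {
  // Modern decodeAudioData already returns a Promise<AudioBuffer>
  return audioContext.decodeAudioData(arrayBuffer)
}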
With that, the audio waveform is drawn. If you need further interactions on top of it, you can extend this to fit your own scenario; I'll update this post as new extensions come along~