
Concatenating multiple mp4 audio files using Android's MediaMuxer

Stack Overflow user
Asked on 2014-04-29 01:40:43
3 answers · 3.5K views · 0 followers · 3 votes

I am trying to concatenate multiple mp4 audio files (each containing only a single audio track, all recorded with the same MediaRecorder and the same parameters) into one, using the following function:

Code language: Java
@TargetApi(Build.VERSION_CODES.JELLY_BEAN_MR2)
public static boolean concatenateFiles(File dst, File... sources) {
    if ((sources == null) || (sources.length == 0)) {
        return false;
    }

    boolean result;
    MediaExtractor extractor = null;
    MediaMuxer muxer = null;
    try {
        // Set up MediaMuxer for the destination.
        muxer = new MediaMuxer(dst.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

        // Copy the samples from MediaExtractor to MediaMuxer.
        boolean sawEOS = false;
        int bufferSize = MAX_SAMPLE_SIZE;
        int frameCount = 0;
        int offset = 100;

        ByteBuffer dstBuf = ByteBuffer.allocate(bufferSize);
        BufferInfo bufferInfo = new BufferInfo();

        long timeOffsetUs = 0;
        int dstTrackIndex = -1;

        for (int fileIndex = 0; fileIndex < sources.length; fileIndex++) {
            int numberOfSamplesInSource = getNumberOfSamples(sources[fileIndex]);
            if (VERBOSE) {
                Log.d(TAG, String.format("Source file: %s", sources[fileIndex].getPath()));
            }

            // Set up MediaExtractor to read from the source.
            extractor = new MediaExtractor();
            extractor.setDataSource(sources[fileIndex].getPath());

            // Set up the tracks.
            SparseIntArray indexMap = new SparseIntArray(extractor.getTrackCount());
            for (int i = 0; i < extractor.getTrackCount(); i++) {
                extractor.selectTrack(i);
                MediaFormat format = extractor.getTrackFormat(i);
                if (dstTrackIndex < 0) {
                    dstTrackIndex = muxer.addTrack(format);
                    muxer.start();
                }
                indexMap.put(i, dstTrackIndex);
            }

            long lastPresentationTimeUs = 0;
            int currentSample = 0;

            while (!sawEOS) {
                bufferInfo.offset = offset;
                bufferInfo.size = extractor.readSampleData(dstBuf, offset);

                if (bufferInfo.size < 0) {
                    sawEOS = true;
                    bufferInfo.size = 0;
                    timeOffsetUs += (lastPresentationTimeUs + APPEND_DELAY);
                }
                else {
                    lastPresentationTimeUs = extractor.getSampleTime();
                    bufferInfo.presentationTimeUs = extractor.getSampleTime() + timeOffsetUs;
                    bufferInfo.flags = extractor.getSampleFlags();
                    int trackIndex = extractor.getSampleTrackIndex();

                    if ((currentSample < numberOfSamplesInSource) || (fileIndex == sources.length - 1)) {
                        muxer.writeSampleData(indexMap.get(trackIndex), dstBuf, bufferInfo);
                    }
                    extractor.advance();

                    frameCount++;
                    currentSample++;
                    if (VERBOSE) {
                        Log.d(TAG, "Frame (" + frameCount + ") " +
                                "PresentationTimeUs:" + bufferInfo.presentationTimeUs +
                                " Flags:" + bufferInfo.flags +
                                " TrackIndex:" + trackIndex +
                                " Size(KB) " + bufferInfo.size / 1024);
                    }
                }
            }
            extractor.release();
            extractor = null;
        }

        result = true;
    }
    catch (IOException e) {
        result = false;
    }
    finally {
        if (extractor != null) {
            extractor.release();
        }
        if (muxer != null) {
            muxer.stop();
            muxer.release();
        }
    }
    return result;
}

@TargetApi(Build.VERSION_CODES.JELLY_BEAN)
public static int getNumberOfSamples(File src) {
    MediaExtractor extractor = new MediaExtractor();
    int result;
    try {
        extractor.setDataSource(src.getPath());
        extractor.selectTrack(0);

        result = 0;
        while (extractor.advance()) {
            result ++;
        }
    }
    catch(IOException e) {
        result = -1;
    }
    finally {
        extractor.release();
    }
    return result;
}

The code compiles and runs, but when I play back the resulting file I only hear the content of the first file. I can't see what I'm doing wrong.

However, after Marlon pointed me in that direction, I noticed that the messages I get from MediaMuxer are somewhat strange. Here they are:

Log output:
05-04 15:30:01.869: D/MediaMuxerTest(5455): Source file: /storage/emulated/0/Android/data/de.absprojects.catalogizer/files/copy.mp4
05-04 15:30:01.889: D/QCUtils(5455): extended extractor not needed, return default
05-04 15:30:01.889: I/MPEG4Writer(5455): limits: 2147483647/0 bytes/us, bit rate: -1 bps and the estimated moov size 3072 bytes
05-04 15:30:01.889: I/MPEG4Writer(5455): setStartTimestampUs: 0
05-04 15:30:01.889: I/MPEG4Writer(5455): Earliest track starting time: 0
05-04 15:30:01.889: D/MediaMuxerTest(5455): Frame (1) PresentationTimeUs:0 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.889: D/MediaMuxerTest(5455): Frame (2) PresentationTimeUs:23219 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.889: D/MediaMuxerTest(5455): Frame (3) PresentationTimeUs:46439 Flags:1 TrackIndex:0 Size(KB) 0
[...]
05-04 15:30:01.959: D/MediaMuxerTest(5455): Frame (117) PresentationTimeUs:2693401 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.959: D/MediaMuxerTest(5455): Frame (118) PresentationTimeUs:2716621 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.959: D/MediaMuxerTest(5455): Frame (119) PresentationTimeUs:2739841 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.959: D/MediaMuxerTest(5455): Frame (120) PresentationTimeUs:2763061 Flags:1 TrackIndex:0 Size(KB) 0
05-04 15:30:01.979: D/QCUtils(5455): extended extractor not needed, return default
05-04 15:30:01.979: D/MediaMuxerTest(5455): Source file: /storage/emulated/0/Android/data/de.absprojects.catalogizer/files/temp.mp4
05-04 15:30:01.979: I/MPEG4Writer(5455): Received total/0-length (120/0) buffers and encoded 120 frames. - audio
05-04 15:30:01.979: I/MPEG4Writer(5455): Audio track drift time: 0 us
05-04 15:30:01.979: D/MPEG4Writer(5455): Setting Audio track to done
05-04 15:30:01.979: D/MPEG4Writer(5455): Stopping Audio track
05-04 15:30:01.979: D/MPEG4Writer(5455): Stopping Audio track source
05-04 15:30:01.979: D/MPEG4Writer(5455): Audio track stopped
05-04 15:30:01.979: D/MPEG4Writer(5455): Stopping writer thread
05-04 15:30:01.979: D/MPEG4Writer(5455): 0 chunks are written in the last batch
05-04 15:30:01.979: D/MPEG4Writer(5455): Writer thread stopped
05-04 15:30:01.979: D/MPEG4Writer(5455): Stopping Audio track
05-04 15:30:01.979: E/MPEG4Writer(5455): Stop() called but track is not started
05-04 15:30:01.999: D/QCUtils(5455): extended extractor not needed, return default
05-04 15:30:01.999: D/copyOriginalFile()(5455): 120 samples in original file
05-04 15:30:02.009: D/QCUtils(5455): extended extractor not needed, return default
05-04 15:30:02.019: D/copyOriginalFile()(5455): 120 samples in copied file
05-04 15:30:02.019: W/MediaRecorder(5455): mediarecorder went away with unhandled events
05-04 15:30:02.099: I/dalvikvm(5455): Jit: resizing JitTable from 4096 to 8192

It looks as if, after copying the data from the first file, MPEG4Writer (why not MediaMuxer?) stops the track and no further data gets written. How can I prevent this? Do I have to manipulate the headers directly, and if so, how?

Any help would be greatly appreciated.

Best regards,

Christian


3 Answers

Stack Overflow user

Accepted answer

Posted on 2014-04-30 11:07:13

Formally, you cannot join two encoded audio tracks: each track may be encoded with different parameters, and those parameters are stored in the header. To be fair, if both files were created by the same encoder/muxer with the same encoding parameters, so that the two headers are identical, it can work, but that is a rather strict restriction. As far as I can see, you set the audio format (which contains the header) for the muxer's audio track from the first file only. So if the second file's audio format differs, you get various kinds of errors and incorrect audio for the second file.
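One quick way to see whether the sources really share the same parameters is to dump the audio MediaFormat of each file with a MediaExtractor and compare the fields by eye. This is only an illustrative sketch; the file paths passed in are placeholders:

import android.media.MediaExtractor
import android.media.MediaFormat

// Print mime type, sample rate and channel count of every audio track in each source,
// so the formats can be compared (illustrative only; paths are whatever you recorded).
fun dumpAudioFormats(paths: List<String>) {
    for (path in paths) {
        val extractor = MediaExtractor()
        extractor.setDataSource(path)
        for (i in 0 until extractor.trackCount) {
            val format = extractor.getTrackFormat(i)
            val mime = format.getString(MediaFormat.KEY_MIME) ?: continue
            if (mime.startsWith("audio/")) {
                println("$path -> mime=$mime" +
                        " sampleRate=" + format.getInteger(MediaFormat.KEY_SAMPLE_RATE) +
                        " channels=" + format.getInteger(MediaFormat.KEY_CHANNEL_COUNT))
            }
        }
        extractor.release()
    }
}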

Try putting one and the same source file into the dst file twice, as both the first and the second source. If that works, the problem is in the headers. If not, look elsewhere, I suppose.
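With the question's concatenateFiles() kept as-is, that test is just a matter of passing the same file twice. A hypothetical call site (the file names are placeholders, and the static Java method from the question is assumed to be in scope):

// Hypothetical test: feed the same source twice; if the output plays both copies,
// differing headers are the culprit when two different sources are used.
val src = File(getExternalFilesDir(null), "copy.mp4")      // inside an Activity/Context
val dst = File(getExternalFilesDir(null), "concat_test.mp4")
val ok = concatenateFiles(dst, src, src)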

Votes: 3

Stack Overflow user

Posted on 2017-09-27 12:34:35

I was looking to do the same thing, but thinking about it some more, this can't work; I wish it did, because I need it too. It's like trying to squeeze two bottles together and expecting them to become one bigger bottle. You have to take the... beer? out of each one (decode the audio from each file) and pour it into a new bottle (encode the audio again, the second file after the first has finished). Once the bottle is capped, you can't add any more beer.
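In code terms, "pouring it into a new bottle" means an extractor → decoder → encoder → muxer chain. Below is a rough, audio-only sketch of that idea using the synchronous MediaCodec API (API 21+). It is not the poster's implementation: the AAC output parameters, the assumption that track 0 is the audio track, and the assumption that each decoded PCM buffer fits into one encoder input buffer are simplifications, and error handling is omitted.

import android.media.*
import java.io.File

// Decode each source to PCM, feed all PCM into one AAC encoder, and mux the encoder's
// output into a single MP4. Audio only, synchronous MediaCodec API (API 21+).
fun reencodeAndJoin(dst: File, sources: List<File>) {
    val sampleRate = 44100      // assumed; in real code read these from the first source
    val channels = 1
    val encoder = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC).apply {
        val fmt = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, channels)
        fmt.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC)
        fmt.setInteger(MediaFormat.KEY_BIT_RATE, 64_000)
        configure(fmt, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)
        start()
    }
    val muxer = MediaMuxer(dst.path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)
    var muxerTrack = -1
    var pcmTimeUs = 0L          // running presentation time across all source files
    val encInfo = MediaCodec.BufferInfo()

    // Write whatever the encoder has ready to the muxer; if drainToEnd is set, keep
    // going until the encoder signals end of stream.
    fun drainEncoder(drainToEnd: Boolean) {
        while (true) {
            val index = encoder.dequeueOutputBuffer(encInfo, 10_000)
            if (index == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                muxerTrack = muxer.addTrack(encoder.outputFormat)
                muxer.start()
            } else if (index >= 0) {
                if (encInfo.size > 0 && (encInfo.flags and MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                    muxer.writeSampleData(muxerTrack, encoder.getOutputBuffer(index)!!, encInfo)
                }
                encoder.releaseOutputBuffer(index, false)
                if ((encInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) return
            } else if (!drainToEnd) {
                return          // nothing ready right now; go back to feeding input
            }
        }
    }

    // Wait until the encoder accepts an input buffer, draining its output meanwhile.
    fun dequeueEncoderInput(): Int {
        var index = encoder.dequeueInputBuffer(10_000)
        while (index < 0) { drainEncoder(false); index = encoder.dequeueInputBuffer(10_000) }
        return index
    }

    for ((fileIndex, src) in sources.withIndex()) {
        val extractor = MediaExtractor().apply { setDataSource(src.path) }
        extractor.selectTrack(0)                     // assumes track 0 is the audio track
        val srcFormat = extractor.getTrackFormat(0)
        val decoder = MediaCodec.createDecoderByType(srcFormat.getString(MediaFormat.KEY_MIME)!!)
        decoder.configure(srcFormat, null, null, 0)
        decoder.start()
        var inputDone = false
        var outputDone = false
        val decInfo = MediaCodec.BufferInfo()
        while (!outputDone) {
            if (!inputDone) {
                val inIndex = decoder.dequeueInputBuffer(10_000)
                if (inIndex >= 0) {
                    val size = extractor.readSampleData(decoder.getInputBuffer(inIndex)!!, 0)
                    if (size < 0) {
                        decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
                        inputDone = true
                    } else {
                        decoder.queueInputBuffer(inIndex, 0, size, extractor.sampleTime, 0)
                        extractor.advance()
                    }
                }
            }
            val outIndex = decoder.dequeueOutputBuffer(decInfo, 10_000)
            if (outIndex >= 0) {
                if (decInfo.size > 0) {
                    // Hand the decoded PCM to the encoder with a timestamp that keeps
                    // increasing across files (assumes the PCM fits one encoder buffer).
                    val encIndex = dequeueEncoderInput()
                    encoder.getInputBuffer(encIndex)!!.put(decoder.getOutputBuffer(outIndex)!!)
                    encoder.queueInputBuffer(encIndex, 0, decInfo.size, pcmTimeUs, 0)
                    pcmTimeUs += 1_000_000L * decInfo.size / (2L * channels * sampleRate)
                }
                decoder.releaseOutputBuffer(outIndex, false)
                if ((decInfo.flags and MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) outputDone = true
            }
            drainEncoder(false)
        }
        decoder.stop(); decoder.release(); extractor.release()
        if (fileIndex == sources.size - 1) {
            // Only after the very last source does the encoder get an end-of-stream marker.
            encoder.queueInputBuffer(dequeueEncoderInput(), 0, 0, pcmTimeUs, MediaCodec.BUFFER_FLAG_END_OF_STREAM)
            drainEncoder(true)
        }
    }
    encoder.stop(); encoder.release()
    muxer.stop(); muxer.release()
}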

Votes: 0

Stack Overflow user

Posted on 2021-12-01 12:27:55

This code works if both video files have the same video resolution, video codec, fps, audio sample rate and audio codec.

Code language: Kotlin
private const val MAX_SAMPLE_SIZE = 256 * 1024

fun concatenateFiles(dst: File, sources: ArrayList<File>): Boolean {
    println("---------------------")
    println("concatenateFiles")
    println("---------------------")

    if (sources.isEmpty()) {
        return false
    }

    var result : Boolean
    var muxer : MediaMuxer? = null

    try {
        // Set up MediaMuxer for the destination.
        muxer = MediaMuxer(dst.path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4)

        // Copy the samples from MediaExtractor to MediaMuxer.
        var videoFormat : MediaFormat? = null
        var audioFormat : MediaFormat? = null
        var idx = 0
        var muxerStarted : Boolean = false
        var videoTrackIndex = -1
        var audioTrackIndex = -1
        var totalDuration = 0

        for (file in sources) {
            println("-------------------")
            println("file: $idx")
            println("-------------------")

            // MediaMetadataRetriever
            val m = MediaMetadataRetriever()
            m.setDataSource(file.absolutePath)

            var trackDuration : Int = 0
            try {
                trackDuration = m.extractMetadata(MediaMetadataRetriever.METADATA_KEY_DURATION)!!.toInt()
            } catch (e: java.lang.Exception) {
                // error
            }

            // extractorVideo
            var extractorVideo = MediaExtractor()
            extractorVideo.setDataSource(file.path)
            val tracks = extractorVideo.trackCount
            for (i in 0 until tracks) {
                val mf = extractorVideo.getTrackFormat(i)
                val mime = mf.getString(MediaFormat.KEY_MIME)
                println("mime: $mime")
                if (mime!!.startsWith("video/")) {
                    extractorVideo.selectTrack(i)
                    videoFormat = extractorVideo.getTrackFormat(i)
                    break
                }
            }

            // extractorAudio
            var extractorAudio = MediaExtractor()
            extractorAudio.setDataSource(file.path)
            for (i in 0 until tracks) {
                val mf = extractorAudio.getTrackFormat(i)
                val mime = mf.getString(MediaFormat.KEY_MIME)
                if (mime!!.startsWith("audio/")) {
                    extractorAudio.selectTrack(i)
                    audioFormat = extractorAudio.getTrackFormat(i)
                    break
                }
            }

            // audioTracks
            val audioTracks = extractorAudio.trackCount
            println("audioTracks: $audioTracks")

            // videoTrackIndex
            if (videoTrackIndex == -1) {
                videoTrackIndex = muxer.addTrack(videoFormat!!)
            }

            // audioTrackIndex
            if (audioTrackIndex == -1) {
                audioTrackIndex = muxer.addTrack(audioFormat!!)
            }

            var sawEOS = false
            var sawAudioEOS = false
            val bufferSize = MAX_SAMPLE_SIZE
            val dstBuf = ByteBuffer.allocate(bufferSize)
            val offset = 0
            val bufferInfo = BufferInfo()

            // start muxer
            println("muxer.start()")
            if (!muxerStarted) {
                muxer.start()
                muxerStarted = true
            }

            // write video
            println("write video")
            while (!sawEOS) {
                bufferInfo.offset = offset
                bufferInfo.size = extractorVideo.readSampleData(dstBuf, offset)
                if (bufferInfo.size < 0) {
                    //println("videoBufferInfo.size < 0")
                    sawEOS = true
                    bufferInfo.size = 0
                } else {
                    bufferInfo.presentationTimeUs = extractorVideo.sampleTime + totalDuration
                    bufferInfo.flags = MediaCodec.BUFFER_FLAG_KEY_FRAME
                    muxer.writeSampleData(videoTrackIndex, dstBuf, bufferInfo)
                    extractorVideo.advance()
                }
            }

            // write audio
            println("write audio")
            val audioBuf = ByteBuffer.allocate(bufferSize)
            while (!sawAudioEOS) {
                bufferInfo.offset = offset
                bufferInfo.size = extractorAudio.readSampleData(audioBuf, offset)
                if (bufferInfo.size < 0) {
                    //println("audioBufferInfo.size < 0")
                    sawAudioEOS = true
                    bufferInfo.size = 0
                } else {
                    bufferInfo.presentationTimeUs = extractorAudio.sampleTime + totalDuration
                    bufferInfo.flags = MediaCodec.BUFFER_FLAG_KEY_FRAME
                    muxer.writeSampleData(audioTrackIndex, audioBuf, bufferInfo)
                    extractorAudio.advance()
                }
            }

            extractorVideo.release()
            extractorAudio.release()

            // should match
            totalDuration += (trackDuration * 1_000)
            if (VERBOSE) {
                println("PresentationTimeUs:" + bufferInfo.presentationTimeUs)
                println("totalDuration: $totalDuration")
            }

            // increment file index
            idx += 1
        }

        result = true
    } catch (e: IOException) {
        result = false
    } finally {
        if (muxer != null) {
            muxer.stop()
            muxer.release()
        }
    }

    return result
}
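As a hypothetical call site for the function above (the directory and file names are placeholders, and the code assumes it runs inside an Activity or other Context so filesDir is available):

// Hypothetical usage, assuming two compatible recordings already exist in app storage.
val parts = arrayListOf(File(filesDir, "clip1.mp4"), File(filesDir, "clip2.mp4"))
val ok = concatenateFiles(File(filesDir, "joined.mp4"), parts)
println("concatenation succeeded: $ok")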
Votes: 0
The original content of this page is provided by Stack Overflow.
Original link: https://stackoverflow.com/questions/23361005
