MTK Multi-Frame Algorithm Integration Workflow



By reading this article, you will learn about the following topics:

1. Selecting a feature and configuring the feature table
2. Mounting the algorithm
3. Custom metadata
4. Invoking the algorithm from the app
5. Conclusion

1. Selecting a feature and configuring the feature table

1.1 Selecting a feature

Multi-frame noise reduction (MFNR) is a very common multi-frame algorithm, and MTK's predefined features already include MTK_FEATURE_MFNR and TP_FEATURE_MFNR, so there is no need to add a new feature. Since we are integrating a third-party algorithm, we choose TP_FEATURE_MFNR.

1.2 Configuring the feature table

Having settled on TP_FEATURE_MFNR, we still need to add it to the feature table:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
index f14ff8a6e2..38365e0602 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mtk_scenario_mgr.cpp
@@ -106,6 +106,7 @@ using namespace NSCam::v3::pipeline::policy::scenariomgr;
 #define MTK_FEATURE_COMBINATION_TP_VSDOF_MFNR     (MTK_FEATURE_MFNR    | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_VSDOF| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_FUSION         (NO_FEATURE_NORMAL   | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_FUSION| TP_FEATURE_WATERMARK)
 #define MTK_FEATURE_COMBINATION_TP_PUREBOKEH      (NO_FEATURE_NORMAL   | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| TP_FEATURE_PUREBOKEH| TP_FEATURE_WATERMARK)
+#define MTK_FEATURE_COMBINATION_TP_MFNR           (TP_FEATURE_MFNR     | MTK_FEATURE_NR| MTK_FEATURE_ABF| MTK_FEATURE_CZ| MTK_FEATURE_DRE| MTK_FEATURE_HFG| MTK_FEATURE_DCE | MTK_FEATURE_FB| MTK_FEATURE_MFNR)

 // streaming feature combination (TODO: it should be refined by streaming scenario feature)
 #define MTK_FEATURE_COMBINATION_VIDEO_NORMAL     (MTK_FEATURE_FB|TP_FEATURE_FB|TP_FEATURE_WATERMARK)
@@ -136,6 +137,7 @@ const std::vector<std::unordered_map<int32_t, ScenarioFeatures>>  gMtkScenarioFe
         ADD_CAMERA_FEATURE_SET(TP_FEATURE_HDR,       MTK_FEATURE_COMBINATION_HDR)
         ADD_CAMERA_FEATURE_SET(MTK_FEATURE_AINR,     MTK_FEATURE_COMBINATION_AINR)
         ADD_CAMERA_FEATURE_SET(MTK_FEATURE_MFNR,     MTK_FEATURE_COMBINATION_MFNR)
+        ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR,      MTK_FEATURE_COMBINATION_TP_MFNR)
         ADD_CAMERA_FEATURE_SET(MTK_FEATURE_REMOSAIC, MTK_FEATURE_COMBINATION_REMOSAIC)
         ADD_CAMERA_FEATURE_SET(NO_FEATURE_NORMAL,    MTK_FEATURE_COMBINATION_SINGLE)
         CAMERA_SCENARIO_END

Note:

MTK reworked the scenario-table customization on Android Q (10.0) and later. On Android Q and above, the feature must instead be configured in
vendor/mediatek/proprietary/custom/[platform]/hal/camera/camera_custom_feature_table.cpp, where [platform] is a platform name such as mt6580 or mt6763.
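On those platforms the entry is expected to mirror the diff above. A hypothetical sketch follows; the ADD_CAMERA_FEATURE_SET macro and combination name are taken from mtk_scenario_mgr.cpp, and the exact table layout varies per platform, so check it in your own codebase:

```cpp
// Hypothetical excerpt of camera_custom_feature_table.cpp on Android Q+:
// register the new feature in the capture scenario, as in the diff above.
ADD_CAMERA_FEATURE_SET(TP_FEATURE_MFNR, MTK_FEATURE_COMBINATION_TP_MFNR)
```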

2. Mounting the algorithm

2.1 Choosing a plugin for the algorithm

In vendor/mediatek/proprietary/hardware/mtkcam3/include/mtkcam3/3rdparty/plugin/PipelinePluginType.h, MTK HAL3 divides the mount points for third-party algorithms into roughly the following categories:

  • BokehPlugin: mount point for bokeh algorithms, i.e. the blurring half of a dual-camera depth-of-field solution.

  • DepthPlugin: mount point for depth algorithms, i.e. the depth-computation half of a dual-camera depth-of-field solution.

  • FusionPlugin: mount point for a combined dual-camera depth-of-field algorithm, with depth and bokeh merged into a single algorithm.

  • JoinPlugin: mount point for streaming-related algorithms; preview algorithms all mount here.

  • MultiFramePlugin: mount point for multi-frame algorithms, both YUV and RAW, e.g. MFNR/HDR.

  • RawPlugin: mount point for RAW algorithms, e.g. remosaic.

  • YuvPlugin: mount point for single-frame YUV algorithms, e.g. beautification or wide-angle lens distortion correction.

Match the algorithm being integrated to the corresponding plugin. Ours is a multi-frame algorithm, so MultiFramePlugin is the only option. Also note that, as a rule, multi-frame algorithms are used for capture only, not for preview.

2.2 Adding a global build switch

To control whether a given project integrates this algorithm, add a switch to device/mediateksample/[platform]/ProjectConfig.mk that gates compilation of the newly added algorithm:

QXT_MFNR_SUPPORT = yes

When a project does not need the algorithm, simply set QXT_MFNR_SUPPORT to no in device/mediateksample/[platform]/ProjectConfig.mk.

2.3 Writing the integration files

Use the MFNR capture implementation in vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/mfnr/MFNRImpl.cpp as a reference. The directory layout is as follows:
vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/
├── Android.mk
├── include
│   └── mf_processor.h
├── lib
│   ├── arm64-v8a
│   │   └── libmultiframe.so
│   └── armeabi-v7a
│       └── libmultiframe.so
└── MFNRImpl.cpp

File descriptions:

  • Android.mk configures the algorithm library, the header files, and the integration source file MFNRImpl.cpp, building them into the library libmtkcam.plugin.tp_mfnr for libmtkcam_3rdparty.customer to depend on and call.

  • libmultiframe.so downscales four consecutive frames and tiles them into a single image; it stands in for the third-party multi-frame algorithm library to be integrated. mf_processor.h is its header file.

  • MFNRImpl.cpp is the integration source file.

2.3.1 mtkcam3/3rdparty/customer/cp_tp_mfnr/Android.mk
ifeq ($(QXT_MFNR_SUPPORT),yes)
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := libmultiframe
LOCAL_SRC_FILES_32 := lib/armeabi-v7a/libmultiframe.so
LOCAL_SRC_FILES_64 := lib/arm64-v8a/libmultiframe.so
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := SHARED_LIBRARIES
LOCAL_MODULE_SUFFIX := .so
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MULTILIB := both
include $(BUILD_PREBUILT)

################################################################################
#
################################################################################
include $(CLEAR_VARS)

#-----------------------------------------------------------
-include $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam/mtkcam.mk

#-----------------------------------------------------------
LOCAL_SRC_FILES += MFNRImpl.cpp

#-----------------------------------------------------------
LOCAL_C_INCLUDES += $(MTKCAM_C_INCLUDES)
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/include $(MTK_PATH_SOURCE)/hardware/mtkcam/include
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_COMMON)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_CUSTOM_PLATFORM)/hal/inc
LOCAL_C_INCLUDES += $(TOP)/external/libyuv/files/include/
LOCAL_C_INCLUDES += $(TOP)/$(MTK_PATH_SOURCE)/hardware/mtkcam3/3rdparty/customer/cp_tp_mfnr/include
#
LOCAL_C_INCLUDES += system/media/camera/include

#-----------------------------------------------------------
LOCAL_CFLAGS += $(MTKCAM_CFLAGS)
#

#-----------------------------------------------------------
LOCAL_STATIC_LIBRARIES +=
#
LOCAL_WHOLE_STATIC_LIBRARIES +=

#-----------------------------------------------------------
LOCAL_SHARED_LIBRARIES += liblog
LOCAL_SHARED_LIBRARIES += libutils
LOCAL_SHARED_LIBRARIES += libcutils
LOCAL_SHARED_LIBRARIES += libmtkcam_modulehelper
LOCAL_SHARED_LIBRARIES += libmtkcam_stdutils
LOCAL_SHARED_LIBRARIES += libmtkcam_pipeline
LOCAL_SHARED_LIBRARIES += libmtkcam_metadata
LOCAL_SHARED_LIBRARIES += libmtkcam_metastore
LOCAL_SHARED_LIBRARIES += libmtkcam_streamutils
LOCAL_SHARED_LIBRARIES += libmtkcam_imgbuf
LOCAL_SHARED_LIBRARIES += libmtkcam_exif
#LOCAL_SHARED_LIBRARIES += libmtkcam_3rdparty

#-----------------------------------------------------------
LOCAL_HEADER_LIBRARIES := libutils_headers liblog_headers libhardware_headers

#-----------------------------------------------------------
LOCAL_MODULE := libmtkcam.plugin.tp_mfnr
LOCAL_PROPRIETARY_MODULE := true
LOCAL_MODULE_OWNER := mtk
LOCAL_MODULE_TAGS := optional
include $(MTK_STATIC_LIBRARY)

################################################################################
#
################################################################################
include $(call all-makefiles-under,$(LOCAL_PATH))
endif
2.3.2 mtkcam3/3rdparty/customer/cp_tp_mfnr/include/mf_processor.h
#ifndef QXT_MULTI_FRAME_H
#define QXT_MULTI_FRAME_H

class MFProcessor {

public:
    virtual ~MFProcessor() {}

    virtual void setFrameCount(int num) = 0;

    virtual void setParams() = 0;

    virtual void addFrame(unsigned char *src, int srcWidth, int srcHeight) = 0;

    virtual void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
            int srcWidth, int srcHeight) = 0;

    virtual void scale(unsigned char *src, int srcWidth, int srcHeight,
                       unsigned char *dst, int dstWidth, int dstHeight) = 0;

    virtual void process(unsigned char *output, int outputWidth, int outputHeight) = 0;

    virtual void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
            int outputWidth, int outputHeight) = 0;

    static MFProcessor* createInstance(int width, int height);
};

#endif //QXT_MULTI_FRAME_H

The interface functions declared in the header:

  • setFrameCount: does nothing real; it simulates configuring the frame count of a third-party multi-frame algorithm, since some algorithms need different frame counts in different scenes.

  • setParams: likewise a stub; it simulates passing the parameters a third-party multi-frame algorithm would need.

  • addFrame: adds one frame of image data, simulating how a third-party multi-frame algorithm receives its input frames.

  • process: downscales the four frames added earlier and tiles them into one image at the original size.

  • createInstance: creates an instance of the interface class.
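The call sequence these functions imply — set the frame count, add each captured frame, then call process once — can be sketched with a trivial stand-in. FakeMFProcessor below is a hypothetical mock that only tracks bookkeeping; it is not the real libmultiframe implementation:

```cpp
// Hypothetical stand-in that tracks only the MFProcessor call sequence;
// the real library would do the actual scaling and tiling work.
class FakeMFProcessor {
    int frameCount_ = 4;  // default burst length, as in the sample code
    int added_ = 0;
public:
    void setFrameCount(int num) { frameCount_ = num; }
    void addFrame(const unsigned char* /*src*/, int /*w*/, int /*h*/) { ++added_; }
    // Returns true only when exactly frameCount_ frames have been added,
    // then resets the counter for the next burst.
    bool process(unsigned char* /*dst*/, int /*w*/, int /*h*/) {
        const bool complete = (added_ == frameCount_);
        added_ = 0;
        return complete;
    }
};
```

The plugin's process() callback shown later follows exactly this shape: create the object once, call addFrame on every request, and call process on the last one.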

For readers who want to dig in, the implementation file mf_processor_impl.cpp is included as well:

#include <libyuv/scale.h>
#include <cstring>
#include "mf_processor.h"

using namespace std;
using namespace libyuv;

class MFProcessorImpl : public MFProcessor {
private:
    int frameCount = 4;
    int currentIndex = 0;
    unsigned char *dstBuf = nullptr;
    unsigned char *tmpBuf = nullptr;

public:
    MFProcessorImpl();

    MFProcessorImpl(int width, int height);

    ~MFProcessorImpl() override;

    void setFrameCount(int num) override;

    void setParams() override;

    void addFrame(unsigned char *src, int srcWidth, int srcHeight) override;

    void addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                  int srcWidth, int srcHeight) override;

    void scale(unsigned char *src, int srcWidth, int srcHeight,
               unsigned char *dst, int dstWidth, int dstHeight) override;

    void process(unsigned char *output, int outputWidth, int outputHeight) override;

    void process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                 int outputWidth, int outputHeight) override;

    static MFProcessor *createInstance(int width, int height);
};

MFProcessorImpl::MFProcessorImpl() = default;

MFProcessorImpl::MFProcessorImpl(int width, int height) {
    if (dstBuf == nullptr) {
        dstBuf = new unsigned char[width * height * 3 / 2];
    }
    if (tmpBuf == nullptr) {
        tmpBuf = new unsigned char[width / 2 * height / 2 * 3 / 2];
    }
}

MFProcessorImpl::~MFProcessorImpl() {
    if (dstBuf != nullptr) {
        delete[] dstBuf;
    }

    if (tmpBuf != nullptr) {
        delete[] tmpBuf;
    }
}

void MFProcessorImpl::setFrameCount(int num) {
    frameCount = num;
}

void MFProcessorImpl::setParams() {

}

void MFProcessorImpl::addFrame(unsigned char *src, int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(src, srcWidth,
              src + srcYCount, srcWidth >> 1,
              src + srcYCount + srcUVCount, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);

    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }

    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::addFrame(unsigned char *srcY, unsigned char *srcU, unsigned char *srcV,
                               int srcWidth, int srcHeight) {
    int srcYCount = srcWidth * srcHeight;
    int srcUVCount = srcWidth * srcHeight / 4;
    int tmpWidth = srcWidth >> 1;
    int tmpHeight = srcHeight >> 1;
    int tmpYCount = tmpWidth * tmpHeight;
    int tmpUVCount = tmpWidth * tmpHeight / 4;
    //scale
    I420Scale(srcY, srcWidth,
              srcU, srcWidth >> 1,
              srcV, srcWidth >> 1,
              srcWidth, srcHeight,
              tmpBuf, tmpWidth,
              tmpBuf + tmpYCount, tmpWidth >> 1,
              tmpBuf + tmpYCount + tmpUVCount, tmpWidth >> 1,
              tmpWidth, tmpHeight,
              kFilterNone);

    //merge
    unsigned char *pDstY;
    unsigned char *pTmpY;
    for (int i = 0; i < tmpHeight; i++) {
        pTmpY = tmpBuf + i * tmpWidth;
        if (currentIndex == 0) {
            pDstY = dstBuf + i * srcWidth;
        } else if (currentIndex == 1) {
            pDstY = dstBuf + i * srcWidth + tmpWidth;
        } else if (currentIndex == 2) {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth;
        } else {
            pDstY = dstBuf + (i + tmpHeight) * srcWidth + tmpWidth;
        }
        memcpy(pDstY, pTmpY, tmpWidth);
    }

    int uvHeight = tmpHeight / 2;
    int uvWidth = tmpWidth / 2;
    unsigned char *pDstU;
    unsigned char *pDstV;
    unsigned char *pTmpU;
    unsigned char *pTmpV;
    for (int i = 0; i < uvHeight; i++) {
        pTmpU = tmpBuf + tmpYCount + uvWidth * i;
        pTmpV = tmpBuf + tmpYCount + tmpUVCount + uvWidth * i;
        if (currentIndex == 0) {
            pDstU = dstBuf + srcYCount + i * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth;
        } else if (currentIndex == 1) {
            pDstU = dstBuf + srcYCount + i * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + i * tmpWidth + uvWidth;
        } else if (currentIndex == 2) {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth;
        } else {
            pDstU = dstBuf + srcYCount + (i + uvHeight) * tmpWidth + uvWidth;
            pDstV = dstBuf + srcYCount + srcUVCount + (i + uvHeight) * tmpWidth + uvWidth;
        }
        memcpy(pDstU, pTmpU, uvWidth);
        memcpy(pDstV, pTmpV, uvWidth);
    }
    if (currentIndex < frameCount) currentIndex++;
}

void MFProcessorImpl::scale(unsigned char *src, int srcWidth, int srcHeight,
                            unsigned char *dst, int dstWidth, int dstHeight) {
    I420Scale(src, srcWidth,//Y
              src + srcWidth * srcHeight, srcWidth >> 1,//U
              src + srcWidth * srcHeight * 5 / 4, srcWidth >> 1,//V
              srcWidth, srcHeight,
              dst, dstWidth,//Y
              dst + dstWidth * dstHeight, dstWidth >> 1,//U
              dst + dstWidth * dstHeight * 5 / 4, dstWidth >> 1,//V
              dstWidth, dstHeight,
              kFilterNone);
}

void MFProcessorImpl::process(unsigned char *output, int outputWidth, int outputHeight) {
    memcpy(output, dstBuf, outputWidth * outputHeight * 3 / 2);
    currentIndex = 0;
}

void MFProcessorImpl::process(unsigned char *outputY, unsigned char *outputU, unsigned char *outputV,
                              int outputWidth, int outputHeight) {
    int yCount = outputWidth * outputHeight;
    int uvCount = yCount / 4;
    memcpy(outputY, dstBuf, yCount);
    memcpy(outputU, dstBuf + yCount, uvCount);
    memcpy(outputV, dstBuf + yCount + uvCount, uvCount);
    currentIndex = 0;
}

MFProcessor* MFProcessor::createInstance(int width, int height) {
    return new MFProcessorImpl(width, height);
}
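The tiling arithmetic inside addFrame is easiest to sanity-check in isolation. The helper below is a hypothetical extraction of the four if/else branches above: it returns the byte offset of row i of the Y plane when frame k (0-3) is pasted into its quadrant of a W×H canvas. With a 640×480 source, row 0 of frame 0 lands at offset 0, frame 1 at 320, frame 2 at 240·640 = 153600, and frame 3 at 153920:

```cpp
// Destination offset (in bytes) of row i of the Y plane when pasting
// downscaled frame k (0..3) into its quadrant of a W x H canvas.
int quadrantRowOffsetY(int k, int i, int W, int H) {
    const int tw = W / 2;  // tile width  (downscaled frame width)
    const int th = H / 2;  // tile height (downscaled frame height)
    switch (k) {
        case 0:  return i * W;              // top-left
        case 1:  return i * W + tw;         // top-right
        case 2:  return (i + th) * W;       // bottom-left
        default: return (i + th) * W + tw;  // bottom-right
    }
}
```

The U and V loops apply the same pattern at half the width and height, which is why their strides are tmpWidth rather than srcWidth.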
2.3.3 mtkcam3/3rdparty/customer/cp_tp_mfnr/MFNRImpl.cpp
#ifdef LOG_TAG
#undef LOG_TAG
#endif // LOG_TAG
#define LOG_TAG "MFNRProvider"
static const char *__CALLERNAME__ = LOG_TAG;

//
#include <mtkcam/utils/std/Log.h>
//
#include <stdlib.h>
#include <utils/Errors.h>
#include <utils/List.h>
#include <utils/RefBase.h>
#include <sstream>
#include <unordered_map> // std::unordered_map
//
#include <mtkcam/utils/metadata/client/mtk_metadata_tag.h>
#include <mtkcam/utils/metadata/hal/mtk_platform_metadata_tag.h>
//zHDR
#include <mtkcam/utils/hw/HwInfoHelper.h> // NSCamHw::HwInfoHelper
#include <mtkcam3/feature/utils/FeatureProfileHelper.h> //ProfileParam
#include <mtkcam/drv/IHalSensor.h>
//
#include <mtkcam/utils/imgbuf/IIonImageBufferHeap.h>
//
#include <mtkcam/utils/std/Format.h>
#include <mtkcam/utils/std/Time.h>
//
#include <mtkcam3/pipeline/hwnode/NodeId.h>
//
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam/utils/metastore/ITemplateRequest.h>
#include <mtkcam/utils/metastore/IMetadataProvider.h>
#include <mtkcam3/3rdparty/plugin/PipelinePlugin.h>
#include <mtkcam3/3rdparty/plugin/PipelinePluginType.h>
//
#include <isp_tuning/isp_tuning.h>  //EIspProfile_T, EOperMode_*

//
#include <custom_metadata/custom_metadata_tag.h>

//
#include <libyuv.h>
#include <mf_processor.h>

using namespace NSCam;
using namespace android;
using namespace std;
using namespace NSCam::NSPipelinePlugin;
using namespace NSIspTuning;
/******************************************************************************
 *
 ******************************************************************************/
#define MY_LOGV(fmt, arg...)        CAM_LOGV("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGD(fmt, arg...)        CAM_LOGD("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGI(fmt, arg...)        CAM_LOGI("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGW(fmt, arg...)        CAM_LOGW("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
#define MY_LOGE(fmt, arg...)        CAM_LOGE("(%d)[%s] " fmt, ::gettid(), __FUNCTION__, ##arg)
//
#define MY_LOGV_IF(cond, ...)       do { if ( (cond) ) { MY_LOGV(__VA_ARGS__); } }while(0)
#define MY_LOGD_IF(cond, ...)       do { if ( (cond) ) { MY_LOGD(__VA_ARGS__); } }while(0)
#define MY_LOGI_IF(cond, ...)       do { if ( (cond) ) { MY_LOGI(__VA_ARGS__); } }while(0)
#define MY_LOGW_IF(cond, ...)       do { if ( (cond) ) { MY_LOGW(__VA_ARGS__); } }while(0)
#define MY_LOGE_IF(cond, ...)       do { if ( (cond) ) { MY_LOGE(__VA_ARGS__); } }while(0)
//
#define ASSERT(cond, msg)           do { if (!(cond)) { printf("Failed: %s\n", msg); return; } }while(0)

#define __DEBUG // enable debug

#ifdef __DEBUG
#include <memory>
#define FUNCTION_SCOPE \
auto __scope_logger__ = [](char const* f)->std::shared_ptr<const char>{ \
    CAM_LOGD("(%d)[%s] + ", ::gettid(), f); \
    return std::shared_ptr<const char>(f, [](char const* p){CAM_LOGD("(%d)[%s] -", ::gettid(), p);}); \
}(__FUNCTION__)
#else
#define FUNCTION_SCOPE
#endif

template <typename T>
inline MBOOL
tryGetMetadata(
    IMetadata* pMetadata,
    MUINT32 const tag,
    T & rVal
)
{
    if (pMetadata == NULL) {
        MY_LOGW("pMetadata == NULL");
        return MFALSE;
    }

    IMetadata::IEntry entry = pMetadata->entryFor(tag);
    if (!entry.isEmpty()) {
        rVal = entry.itemAt(0, Type2Type<T>());
        return MTRUE;
    }
    return MFALSE;
}

#define MFNR_FRAME_COUNT 4
/******************************************************************************
*
******************************************************************************/
class MFNRProviderImpl : public MultiFramePlugin::IProvider {
    typedef MultiFramePlugin::Property Property;
    typedef MultiFramePlugin::Selection Selection;
    typedef MultiFramePlugin::Request::Ptr RequestPtr;
    typedef MultiFramePlugin::RequestCallback::Ptr RequestCallbackPtr;

public:

    virtual void set(MINT32 iOpenId, MINT32 iOpenId2) {
        MY_LOGD("set openId:%d openId2:%d", iOpenId, iOpenId2);
        mOpenId = iOpenId;
    }

    virtual const Property& property() {
        FUNCTION_SCOPE;
        static Property prop;
        static bool inited;

        if (!inited) {
            prop.mName              = "TP_MFNR";
            prop.mFeatures          = TP_FEATURE_MFNR;
            prop.mThumbnailTiming   = eTiming_P2;
            prop.mPriority          = ePriority_Highest;
            prop.mZsdBufferMaxNum   = 8; // maximum frames requirement
            prop.mNeedRrzoBuffer    = MTRUE; // rrzo requirement for BSS
            inited                  = MTRUE;
        }
        return prop;
    };

    virtual MERROR negotiate(Selection& sel) {
        FUNCTION_SCOPE;

        IMetadata* appInMeta = sel.mIMetadataApp.getControl().get();
        tryGetMetadata<MINT32>(appInMeta, QXT_FEATURE_MFNR, mEnable);
        MY_LOGD("mEnable: %d", mEnable);
        if (!mEnable) {
            MY_LOGD("Force off TP_MFNR shot");
            return BAD_VALUE;
        }

        sel.mRequestCount = MFNR_FRAME_COUNT;

        MY_LOGD("mRequestCount=%d", sel.mRequestCount);
        sel.mIBufferFull
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_I420) // I420 first
                .addAcceptedFormat(eImgFmt_YV12)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedFormat(eImgFmt_NV12)
                .addAcceptedSize(eImgSize_Full);
        //sel.mIBufferSpecified.setRequired(MTRUE).setAlignment(16, 16);
        sel.mIMetadataDynamic.setRequired(MTRUE);
        sel.mIMetadataApp.setRequired(MTRUE);
        sel.mIMetadataHal.setRequired(MTRUE);
        if (sel.mRequestIndex == 0) {
            sel.mOBufferFull
                .setRequired(MTRUE)
                .addAcceptedFormat(eImgFmt_I420) // I420 first
                .addAcceptedFormat(eImgFmt_YV12)
                .addAcceptedFormat(eImgFmt_NV21)
                .addAcceptedFormat(eImgFmt_NV12)
                .addAcceptedSize(eImgSize_Full);
            sel.mOMetadataApp.setRequired(MTRUE);
            sel.mOMetadataHal.setRequired(MTRUE);
        } else {
            sel.mOBufferFull.setRequired(MFALSE);
            sel.mOMetadataApp.setRequired(MFALSE);
            sel.mOMetadataHal.setRequired(MFALSE);
        }

        return OK;
    };

    virtual void init() {
        FUNCTION_SCOPE;
        mDump = property_get_bool("vendor.debug.camera.mfnr.dump", 0);
        //nothing to do for MFNR
    };

    virtual MERROR process(RequestPtr pRequest, RequestCallbackPtr pCallback) {
        FUNCTION_SCOPE;
        MERROR ret = 0;
        // restore callback function for abort API
        if (pCallback != nullptr) {
            m_callbackprt = pCallback;
        }
        //maybe need to keep a copy in member<sp>
        IMetadata* pAppMeta = pRequest->mIMetadataApp->acquire();
        IMetadata* pHalMeta = pRequest->mIMetadataHal->acquire();
        IMetadata* pHalMetaDynamic = pRequest->mIMetadataDynamic->acquire();
        MINT32 processUniqueKey = 0;
        IImageBuffer* pInImgBuffer = NULL;
        uint32_t width = 0;
        uint32_t height = 0;
        if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
            MY_LOGE("cannot get unique about MFNR capture");
            return BAD_VALUE;
        }

        if (pRequest->mIBufferFull != nullptr) {
            pInImgBuffer = pRequest->mIBufferFull->acquire();
            width = pInImgBuffer->getImgSize().w;
            height = pInImgBuffer->getImgSize().h;
            MY_LOGD("[IN] Full image VA: 0x%p, Size(%dx%d), Format: %s",
                pInImgBuffer->getBufVA(0), width, height, format2String(pInImgBuffer->getImgFormat()));
            if (mDump) {
                char path[256];
                snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_in_%d_%dx%d.%s",
                        pRequest->mRequestIndex, width, height, format2String(pInImgBuffer->getImgFormat()));
                pInImgBuffer->saveToFile(path);
            }
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            IImageBuffer* pImgBuffer = pRequest->mIBufferSpecified->acquire();
            MY_LOGD("[IN] Specified image VA: 0x%p, Size(%dx%d)", pImgBuffer->getBufVA(0), pImgBuffer->getImgSize().w, pImgBuffer->getImgSize().h);
        }
        if (pRequest->mOBufferFull != nullptr) {
            mOutImgBuffer = pRequest->mOBufferFull->acquire();
            MY_LOGD("[OUT] Full image VA: 0x%p, Size(%dx%d)", mOutImgBuffer->getBufVA(0), mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h);
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            IMetadata *meta = pRequest->mIMetadataDynamic->acquire();
            if (meta != NULL)
                MY_LOGD("[IN] Dynamic metadata count: %d", meta->count());
            else
                MY_LOGD("[IN] Dynamic metadata Empty");
        }

        MY_LOGD("frame:%d/%d, width:%d, height:%d", pRequest->mRequestIndex, pRequest->mRequestCount, width, height);

        if (pInImgBuffer != NULL && mOutImgBuffer != NULL) {
            uint32_t yLength = pInImgBuffer->getBufSizeInBytes(0);
            uint32_t uLength = pInImgBuffer->getBufSizeInBytes(1);
            uint32_t vLength = pInImgBuffer->getBufSizeInBytes(2);
            uint32_t yuvLength = yLength + uLength + vLength;

            if (pRequest->mRequestIndex == 0) {//First frame
                //When width or height changed, recreate multiFrame
                if (mLatestWidth != width || mLatestHeight != height) {
                    if (mMFProcessor != NULL) {
                        delete mMFProcessor;
                        mMFProcessor = NULL;
                    }
                    mLatestWidth = width;
                    mLatestHeight = height;
                }
                if (mMFProcessor == NULL) {
                    MY_LOGD("create mMFProcessor %dx%d", mLatestWidth, mLatestHeight);
                    mMFProcessor = MFProcessor::createInstance(mLatestWidth, mLatestHeight);
                    mMFProcessor->setFrameCount(pRequest->mRequestCount);
                }
            }

            mMFProcessor->addFrame((uint8_t *)pInImgBuffer->getBufVA(0),
                                  (uint8_t *)pInImgBuffer->getBufVA(1),
                                  (uint8_t *)pInImgBuffer->getBufVA(2),
                                  mLatestWidth, mLatestHeight);

            if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {//Last frame
                if (mMFProcessor != NULL) {
                    mMFProcessor->process((uint8_t *)mOutImgBuffer->getBufVA(0),
                                         (uint8_t *)mOutImgBuffer->getBufVA(1),
                                         (uint8_t *)mOutImgBuffer->getBufVA(2),
                                         mLatestWidth, mLatestHeight);
                    if (mDump) {
                        char path[256];
                        snprintf(path, sizeof(path), "/data/vendor/camera_dump/mfnr_capture_out_%d_%dx%d.%s",
                            pRequest->mRequestIndex, mOutImgBuffer->getImgSize().w, mOutImgBuffer->getImgSize().h,
                            format2String(mOutImgBuffer->getImgFormat()));
                        mOutImgBuffer->saveToFile(path);
                    }
                } else {
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(0),
                           (uint8_t *)pInImgBuffer->getBufVA(0),
                           pInImgBuffer->getBufSizeInBytes(0));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(1),
                           (uint8_t *)pInImgBuffer->getBufVA(1),
                           pInImgBuffer->getBufSizeInBytes(1));
                    memcpy((uint8_t *)mOutImgBuffer->getBufVA(2),
                           (uint8_t *)pInImgBuffer->getBufVA(2),
                           pInImgBuffer->getBufSizeInBytes(2));
                }
                mOutImgBuffer = NULL;
            }
        }

        if (pRequest->mIBufferFull != nullptr) {
            pRequest->mIBufferFull->release();
        }
        if (pRequest->mIBufferSpecified != nullptr) {
            pRequest->mIBufferSpecified->release();
        }
        if (pRequest->mOBufferFull != nullptr) {
            pRequest->mOBufferFull->release();
        }
        if (pRequest->mIMetadataDynamic != nullptr) {
            pRequest->mIMetadataDynamic->release();
        }

        mvRequests.push_back(pRequest);
        MY_LOGD("collected request(%d/%d)", pRequest->mRequestIndex, pRequest->mRequestCount);
        if (pRequest->mRequestIndex == pRequest->mRequestCount - 1) {
            for (auto req : mvRequests) {
                MY_LOGD("callback request(%d/%d) %p", req->mRequestIndex, req->mRequestCount, pCallback.get());
                if (pCallback != nullptr) {
                    pCallback->onCompleted(req, 0);
                }
            }
            mvRequests.clear();
        }
        return ret;
    };

    virtual void abort(vector<RequestPtr>& pRequests) {
        FUNCTION_SCOPE;

        bool bAbort = false;
        IMetadata *pHalMeta;
        MINT32 processUniqueKey = 0;

        for (auto req:pRequests) {
            bAbort = false;
            pHalMeta = req->mIMetadataHal->acquire();
            if (!IMetadata::getEntry<MINT32>(pHalMeta, MTK_PIPELINE_UNIQUE_KEY, processUniqueKey)) {
                MY_LOGW("cannot get unique about MFNR capture");
            }

            if (m_callbackprt != nullptr) {
                MY_LOGD("m_callbackprt is %p", m_callbackprt.get());
               /*MFNR plugin callback request to MultiFrameNode */
               for (Vector<RequestPtr>::iterator it = mvRequests.begin() ; it != mvRequests.end(); it++) {
                    if ((*it) == req) {
                        mvRequests.erase(it);
                        m_callbackprt->onAborted(req);
                        bAbort = true;
                        break;
                    }
               }
            } else {
               MY_LOGW("callbackptr is null");
            }

            if (!bAbort) {
               MY_LOGW("Desire abort request[%d] is not found", req->mRequestIndex);
            }

        }
    };

    virtual void uninit() {
        FUNCTION_SCOPE;
        if (mMFProcessor != NULL) {
            delete mMFProcessor;
            mMFProcessor = NULL;
        }
        mLatestWidth = 0;
        mLatestHeight = 0;
    };

    virtual ~MFNRProviderImpl() {
        FUNCTION_SCOPE;
    };

    const char * format2String(MINT format) {
        switch(format) {
           case NSCam::eImgFmt_RGBA8888:          return "rgba";
           case NSCam::eImgFmt_RGB888:            return "rgb";
           case NSCam::eImgFmt_RGB565:            return "rgb565";
           case NSCam::eImgFmt_STA_BYTE:          return "byte";
           case NSCam::eImgFmt_YVYU:              return "yvyu";
           case NSCam::eImgFmt_UYVY:              return "uyvy";
           case NSCam::eImgFmt_VYUY:              return "vyuy";
           case NSCam::eImgFmt_YUY2:              return "yuy2";
           case NSCam::eImgFmt_YV12:              return "yv12";
           case NSCam::eImgFmt_YV16:              return "yv16";
           case NSCam::eImgFmt_NV16:              return "nv16";
           case NSCam::eImgFmt_NV61:              return "nv61";
           case NSCam::eImgFmt_NV12:              return "nv12";
           case NSCam::eImgFmt_NV21:              return "nv21";
           case NSCam::eImgFmt_I420:              return "i420";
           case NSCam::eImgFmt_I422:              return "i422";
           case NSCam::eImgFmt_Y800:              return "y800";
           case NSCam::eImgFmt_BAYER8:            return "bayer8";
           case NSCam::eImgFmt_BAYER10:           return "bayer10";
           case NSCam::eImgFmt_BAYER12:           return "bayer12";
           case NSCam::eImgFmt_BAYER14:           return "bayer14";
           case NSCam::eImgFmt_FG_BAYER8:         return "fg_bayer8";
           case NSCam::eImgFmt_FG_BAYER10:        return "fg_bayer10";
           case NSCam::eImgFmt_FG_BAYER12:        return "fg_bayer12";
           case NSCam::eImgFmt_FG_BAYER14:        return "fg_bayer14";
           default:                               return "unknown";
        };
    };

private:

    MINT32                          mUniqueKey;
    MINT32                          mOpenId;
    MINT32                          mRealIso;
    MINT32                          mShutterTime;
    MBOOL                           mZSDMode;
    MBOOL                           mFlashOn;

    Vector<RequestPtr>              mvRequests;

    RequestCallbackPtr              m_callbackprt;
    MFProcessor*                    mMFProcessor = NULL;
    IImageBuffer*                   mOutImgBuffer = NULL;
    uint32_t                        mLatestWidth = 0;
    uint32_t                        mLatestHeight = 0;
    MINT32                          mEnable = 0;
    MINT32                          mDump = 0;
    // add end
};

REGISTER_PLUGIN_PROVIDER(MultiFrame, MFNRProviderImpl);

Key functions:

  • In the property function, the feature type is set to TP_FEATURE_MFNR, along with attributes such as the name, priority, and maximum frame count. Pay particular attention to mNeedRrzoBuffer: for a multi-frame algorithm it must normally be set to MTRUE.

  • In the negotiate function, configure the formats and sizes of the input and output images the algorithm needs. Note that a multi-frame algorithm takes several input frames but produces only one output frame, which is why mOBufferFull is requested only when mRequestIndex == 0: the first frame has both input and output, while the remaining frames have input only.
    negotiate is also where the metadata passed down from the app layer is read, to decide whether the algorithm should run.

  • The process function is where the algorithm is hooked in. On the first frame, create the algorithm interface object; on every frame, feed the image in through the interface function addFrame; on the last frame, call the interface function process to run the algorithm and fetch the output.

2.3.4 mtkcam3/3rdparty/customer/Android.mk

The shared library that ultimately goes into vendor.img is libmtkcam_3rdparty.customer.so, so we also need to modify Android.mk to make the libmtkcam_3rdparty.customer module depend on libmtkcam.plugin.tp_mfnr.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
index ff5763d3c2..5e5dd6524f 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/customer/Android.mk
@@ -77,6 +77,12 @@ LOCAL_SHARED_LIBRARIES += libyuv.vendor
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_watermark
 endif

+ifeq ($(QXT_MFNR_SUPPORT), yes)
+LOCAL_SHARED_LIBRARIES += libmultiframe
+LOCAL_SHARED_LIBRARIES += libyuv.vendor
+LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.tp_mfnr
+endif
+
 # for app super night ev decision (experimental for customer only)
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.control.customersupernightevdecision
 ################################################################################
2.3.5 Remove the MTK sample MFNR algorithm

Normally only one MFNR algorithm may run at a time, so the MTK sample MFNR algorithm has to be removed. This could be done cleanly with a build-time macro; here we take the quick-and-dirty route and simply comment it out.

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
index 4e2bc68dff..da98ebd0ad 100644
--- a/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
+++ b/vendor/mediatek/proprietary/hardware/mtkcam3/3rdparty/mtk/Android.mk
@@ -118,7 +118,7 @@ LOCAL_SHARED_LIBRARIES += libfeature.stereo.provider

 #-----------------------------------------------------------
 ifneq ($(strip $(MTKCAM_HAVE_MFB_SUPPORT)),0)
-LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
+#LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.mfnr
 endif
 #4 Cell
 LOCAL_WHOLE_STATIC_LIBRARIES += libmtkcam.plugin.remosaic

三、Custom metadata

Adding metadata lets the app layer pass parameters down to the HAL layer, controlling whether the algorithm is enabled at runtime. The app sets such parameters through CaptureRequest.Builder.set(@NonNull Key<T> key, T value). Since the stock MTK camera app has no multi-frame noise reduction mode, we define our own metadata tag to verify the integration.

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
index b020352092..714d05f350 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag.h
@@ -602,6 +602,7 @@ typedef enum mtk_camera_metadata_tag {
     MTK_FLASH_FEATURE_END,

     QXT_FEATURE_WATERMARK = QXT_FEATURE_START,
+    QXT_FEATURE_MFNR,
     QXT_FEATURE_END,
 } mtk_camera_metadata_tag_t;

vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl:

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
index 1b4fc75a0e..cba4511511 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/include/mtkcam/utils/metadata/client/mtk_metadata_tag_info.inl
@@ -95,6 +95,8 @@ _IMP_SECTION_INFO_(QXT_FEATURE,      "com.qxt.camera")

 _IMP_TAG_INFO_( QXT_FEATURE_WATERMARK,
                 MINT32,     "watermark")
+_IMP_TAG_INFO_( QXT_FEATURE_MFNR,
+                MINT32,     "mfnr")

 /******************************************************************************
  *

vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h :

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
index 33e581adfd..4f4772424d 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metadata/vendortag/VendorTagTable.h
@@ -383,6 +383,8 @@ static auto& _QxtFeature_()
     sInst = {
         _TAG_(QXT_FEATURE_WATERMARK,
             "watermark",   TYPE_INT32),
+        _TAG_(QXT_FEATURE_MFNR,
+            "mfnr",   TYPE_INT32),
      };
      //
      return sInst;

vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp :

diff --git a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
index 591b25b162..9c3db8b1d1 100755
--- a/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
+++ b/vendor/mediatek/proprietary/hardware/mtkcam/utils/metastore/metadataprovider/constructStaticMetadata.cpp
@@ -583,10 +583,12 @@ updateData(IMetadata &rMetadata)
     {
         IMetadata::IEntry qxtAvailRequestEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_REQUEST_KEYS);
         qxtAvailRequestEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+        qxtAvailRequestEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
         rMetadata.update(qxtAvailRequestEntry.tag(), qxtAvailRequestEntry);

         IMetadata::IEntry qxtAvailSessionEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_SESSION_KEYS);
         qxtAvailSessionEntry.push_back(QXT_FEATURE_WATERMARK , Type2Type< MINT32 >());
+        qxtAvailSessionEntry.push_back(QXT_FEATURE_MFNR , Type2Type< MINT32 >());
         rMetadata.update(qxtAvailSessionEntry.tag(), qxtAvailSessionEntry);
     }
 #endif
@@ -605,7 +607,7 @@ updateData(IMetadata &rMetadata)
             // to store manual update metadata for sensor driver.
             IMetadata::IEntry availCharactsEntry = rMetadata.entryFor(MTK_REQUEST_AVAILABLE_CHARACTERISTICS_KEYS);
             availCharactsEntry.push_back(MTK_MULTI_CAM_FEATURE_SENSOR_MANUAL_UPDATED , Type2Type< MINT32 >());
-            rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
+            rMetadata.update(availCharactsEntry.tag(), availCharactsEntry);
         }
         if(physicIdsList.size() > 1)
         {

With the steps above completed, the integration work is essentially done. Rebuild the system source; to save time, you can also rebuild only vendor.img.

四、Calling the algorithm from the app

No new app is needed for verification: reuse the app code from 《MTK HAL算法集成之单帧算法》 and simply change the value of KEY_WATERMARK to "com.qxt.camera.mfnr". Flash the full system image or just vendor.img onto the device, boot it, install the demo, and take a picture to check the result:

2b9029dd8f0ea40faa4a87099e0698e6.jpeg

As the screenshot shows, after integration this simulated MFNR multi-frame algorithm has downscaled four consecutive frames and stitched them into a single image.

五、Conclusion

Real multi-frame algorithms are more complex. An MFNR algorithm, for example, may decide whether to run based on the exposure value: disabled in good light, enabled in low light. An HDR algorithm may require several consecutive frames at different exposures, and there may also be intelligent scene detection on top. Whatever the variation, the overall integration steps for multi-frame algorithms are much the same; for different requirements you may just need to adapt the code accordingly.

Original link: https://www.jianshu.com/p/f0

