news

Motiff launches UI multimodal model, reaching "design expert" level

2024-08-19


On August 19, at the IXDC2024 International Experience Design Conference, Motiff (Miaoduo) launched its self-developed UI multimodal large model, the Motiff large model. The model is reported to have strong UI understanding ability and the ability to execute open-ended instructions.

On five industry-recognized UI-capability benchmark test sets, Motiff's model surpassed GPT-4o and Apple's Ferret UI on all metrics, and also surpassed Google's ScreenAI on Screen2Words (interface description and inference) and Widget Captioning (component description). Its Widget Captioning score reached 161.77, setting a new SoTA. Compared with existing solutions such as Ferret UI and ScreenAI, Motiff's model can flexibly understand interface elements in context, reaching the level of a "design expert" and coming closest to human understanding and expression of UI interfaces. (Dingxi)