Paper Title


Content-based Analysis of the Cultural Differences between TikTok and Douyin

Authors

Li Sun, Haoqi Zhang, Songyang Zhang, Jiebo Luo

Abstract


Short-form video social media shifts away from the traditional media paradigm by telling the audience a dynamic story to attract their attention. In particular, different combinations of everyday objects can be employed to represent a unique scene that is both interesting and understandable. Offered by the same company, TikTok and Douyin are popular examples of this new medium that has come to prominence in recent years, while being tailored for different markets (e.g., the United States and China, respectively). The hypothesis that they express cultural differences, together with media fashion and social idiosyncrasies, is the primary target of our research. To that end, we first employ the Faster Region-based Convolutional Neural Network (Faster R-CNN), pre-trained on the Microsoft Common Objects in COntext (MS-COCO) dataset, to perform object detection. Based on the suite of objects detected from the videos, we perform statistical analyses including label statistics, label similarity, and label-person distribution. We further use the Two-Stream Inflated 3D ConvNet (I3D), pre-trained on the Kinetics dataset, to categorize and analyze human actions. By comparing the distributional results of TikTok and Douyin, we uncover a wealth of similarities and contrasts between the two closely related video social media platforms along the content dimensions of object quantity, object categories, and human action categories.
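The label-statistics step of the pipeline can be illustrated with a minimal sketch. The helper names and the toy label lists below are hypothetical (the paper runs Faster R-CNN over full video corpora to obtain such lists); Jaccard overlap is used here as one simple stand-in for the paper's label-similarity measure:

```python
from collections import Counter

def label_statistics(detections):
    """Count how often each object label appears across a set of videos.

    `detections` is a list of per-video label lists, e.g. the labels an
    object detector such as Faster R-CNN reports for each video.
    """
    counts = Counter()
    for labels in detections:
        counts.update(labels)
    return counts

def jaccard_similarity(counts_a, counts_b):
    """One simple notion of label similarity: overlap of the two label sets."""
    a, b = set(counts_a), set(counts_b)
    return len(a & b) / len(a | b)

# Hypothetical detector output for a handful of videos on each platform.
tiktok_detections = [["person", "dog"], ["person", "car"], ["person"]]
douyin_detections = [["person", "cup"], ["person", "cell phone"], ["person", "dog"]]

tiktok_stats = label_statistics(tiktok_detections)
douyin_stats = label_statistics(douyin_detections)
print(tiktok_stats.most_common(2))
print(jaccard_similarity(tiktok_stats, douyin_stats))  # shared labels / all labels
```

Comparing such per-platform label distributions (and, analogously, I3D action-class distributions) is what surfaces the content-level similarities and contrasts the abstract describes.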
