Detecting Visual Triggers in Cannabis Imagery: A CLIP-Based Multi-Labeling Framework with Local-Global Aggregation
Authors: Linqi Lu, Xianshi Yu, Akhil Perumal Reddy
Abstract: This study investigates the interplay of visual and textual features in online discussions about cannabis edibles and how these features relate to user engagement. Leveraging the CLIP model, we analyzed 42,743 images from Facebook (March 1 to August 31, 2021), focusing on detecting food-related visuals and examining how image attributes such as colorfulness and brightness influence user interaction. For textual analysis, we used BART, a denoising-autoencoder language model, to classify text into ten topics derived from structural topic modeling and explored their relationship with user engagement. Linear regression analysis identified significant positive associations between food-related visuals (e.g., fruit, candy, and bakery) and user engagement scores, as well as between engagement and text topics such as cannabis legalization. In contrast, negative associations were observed for image colorfulness and certain textual themes. These findings offer actionable insights for policymakers and regulatory bodies in designing warning labels and marketing regulations that address the potential risks of recreational cannabis edibles.
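The abstract describes zero-shot, CLIP-based multi-label detection of food-related content alongside low-level image attributes (colorfulness, brightness). The sketch below illustrates one way such a pipeline can be assembled with an off-the-shelf CLIP checkpoint; the checkpoint name, label list, prompt template, and similarity threshold are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (not the paper's released code): zero-shot multi-label detection
# of food-related content with CLIP, plus colorfulness and brightness scores.
# Label list, prompt template, and the 0.25 threshold are assumptions.
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

FOOD_LABELS = ["fruit", "candy", "bakery", "chocolate", "beverage"]  # illustrative label set

def detect_food_labels(image: Image.Image, threshold: float = 0.25) -> dict:
    """Return labels whose image-text cosine similarity exceeds the threshold."""
    prompts = [f"a photo of {label}" for label in FOOD_LABELS]
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize projected embeddings and take cosine similarity per label;
    # thresholding (rather than argmax) allows multiple labels per image.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    sims = (img @ txt.T).squeeze(0)
    return {label: float(s) for label, s in zip(FOOD_LABELS, sims) if s >= threshold}

def colorfulness(image: Image.Image) -> float:
    """Hasler & Suesstrunk (2003) colorfulness metric on the RGB channels."""
    arr = np.asarray(image.convert("RGB"), dtype=np.float32)
    rg = arr[..., 0] - arr[..., 1]
    yb = 0.5 * (arr[..., 0] + arr[..., 1]) - arr[..., 2]
    return float(np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean()))

def brightness(image: Image.Image) -> float:
    """Mean grayscale pixel intensity in [0, 255]."""
    return float(np.asarray(image.convert("L"), dtype=np.float32).mean())
```

In such a setup, the per-image label indicators and the colorfulness/brightness scores would then serve as predictors in a linear regression against an engagement score, in the spirit of the analysis summarized above.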