Publications

The objective of this paper is to explore the development of the Blackmail-as-a-Service business model within the Digital Thrift Shop. This service model involves the sale of doxing files to customers, as well as the removal of the content from the shop and its dissemination to victims or individuals closely connected to them.

To access the Digital Thrift shop, this study relies on the Tor browser and a darknet indexing site. The authors then use an application to collect and store the web pages as PDFs. Finally, content analysis is performed on these PDFs to understand the Blackmail-as-a-Service business model developed by the Digital Thrift.
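
To illustrate the kind of content analysis described above, the sketch below shows one possible first pass over a corpus of shop pages saved as PDFs: extracting the text of each file and tallying keyword hits for a handful of themes. The library (pypdf), the theme names, the keyword lists, and the directory name are illustrative assumptions, not the coding scheme or tooling used in the published study.

```python
# Minimal sketch of a keyword-tally pass over archived shop pages saved as PDFs.
# Theme names, keywords, and file locations are illustrative placeholders, not
# the coding scheme used in the published study.
import re
from collections import Counter
from pathlib import Path

from pypdf import PdfReader  # pip install pypdf

THEMES = {  # hypothetical coding categories
    "sale": ["price", "buy", "bitcoin", "listing"],
    "removal": ["remove", "takedown", "delete"],
    "victim_contact": ["notify", "contact", "email"],
}

def extract_text(pdf_path: Path) -> str:
    """Concatenate the text of every page in one archived PDF."""
    reader = PdfReader(str(pdf_path))
    return " ".join(page.extract_text() or "" for page in reader.pages)

def tally_themes(corpus_dir: Path) -> Counter:
    """Count keyword hits per theme across all PDFs in a directory."""
    counts: Counter = Counter()
    for pdf_path in corpus_dir.glob("*.pdf"):
        text = extract_text(pdf_path).lower()
        for theme, keywords in THEMES.items():
            counts[theme] += sum(
                len(re.findall(rf"\b{re.escape(k)}\b", text)) for k in keywords
            )
    return counts

if __name__ == "__main__":
    print(tally_themes(Path("digital_thrift_pdfs")))  # hypothetical directory
```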

Full Abstract

The doxing data available on the Digital Thrift is primarily targeted toward women. The Digital Thrift has also established a way of valuing the purchase, sale, and removal of blackmail data. The presence of Blackmail-as-a-Service on darknet sites poses unique challenges for criminal justice agencies in terms of jurisdiction, due to the lack of harmonized laws and the obstacles involved in taking down content from darknet sites. Finally, the use of a service model for blackmail allows criminals without technical skills to engage in cyber-victimization using blackmail.

Research into Blackmail-as-a-Service offered by boutique providers has not previously been conducted. To the best of the authors’ knowledge, this is one of the first studies seeking to understand the Blackmail-as-a-Service business model on the darknet when used to target adults. This study presents evidence of a lack of connection between the buyers of the compromising material and the potential victim, challenging preconceived notions about image-based sexual abuse and its connection to individuals involved in interpersonal relationships.

Logie, K. and Maras, M.-H. (2024), “Doxing: how a darknet shop created the blackmail as a service business model”, Journal of Aggression, Conflict and Peace Research, Vol. 16 No. 4, pp. 348-362. https://doi.org/10.1108/JACPR-04-2024-0894

Funding Source

Center for Cybercrime Studies, John Jay College of Criminal Justice

FaceApp is a smartphone photo-editing application that enables users to modify their images, which are either created using the app or uploaded to the app from users’ phones (by giving FaceApp permission to access users’ cameras).

The app uses artificial intelligence to modify users’ images, such as altering the “age” of users, adding facial hair to images, and/or enhancing images. These transformed images can be shared on social media platforms or stored on users’ personal devices for private use.

FaceApp was made popular by social media challenges involving its most popular filters and celebrities’ use of the app. FaceApp has faced significant public backlash for some of these filters. First, FaceApp’s “hot” filter, when applied to users’ images, lightened their skin color. Following criticism, the founder of FaceApp and CEO of Wireless Lab (the Russia-based company responsible for FaceApp), Yaroslav Goncharov, apologized for the filter, claiming that the output of the filter was “an unfortunate side-effect of the underlying neural network caused by the training set bias.” Shortly after being renamed “spark,” the filter was removed from FaceApp. Second, FaceApp released “ethnicity” filters (“Asian,” “Black,” “Caucasian,” and “Indian”), whereby users’ facial features and hair were changed based on their filter selection. These filters were widely criticized for being racist and offensive, leading FaceApp to remove these features from its app. FaceApp is not the only app that has launched racist and offensive filters. Snapchat has also been on the receiving end of backlash for its “Bob Marley” filter, which darkened users’ skin and placed a knit cap and dreadlocks on the heads of users’ images, and its “anime-inspired” filter that created racist and offensive caricatures of Asians.

Full Abstract

FaceApp has been criticized worldwide not only for its racist and offensive filters, but also for its lack of transparency in its data processing practices, which include “any operation or set of operations…performed on personal data or on sets of personal data, whether or not by automated means, such as [the] collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction” of data. In particular, the breadth of FaceApp’s privacy policy and terms of use agreement with regard to data processing was criticized. Even though FaceApp’s privacy policy and terms of use agreement are similar to those of U.S.-based technology, app, and social media companies, users’ privacy and data protection concerns are warranted. This article critically examines FaceApp’s original and updated terms of use agreement and privacy policy, looking in particular at their data processing practices. The article concludes by providing policy recommendations for apps, like FaceApp, which operate with minimal transparency, opaque terms of service agreements, and vague privacy policies.

Maras, M.-H., & Logie, K. (2021). Understanding What it Really Takes to Control Your Data: A Critical Evaluation of FaceApp. Journal of Internet Law, 24(8), 1, 11–18. https://www.researchgate.net/publication/354552230_Understanding_What_it_Really_Takes_to_Control_Your_Data_A_Critical_Evaluation_of_FaceApp

Darknet marketplaces (DNMs) are global digital marketplaces used primarily to buy and sell illicit drugs online. High rates of adulterated substances have contributed to the creation of harm reduction policies by DNM administrators to address growing rates of overdose worldwide. This paper explores the extent to which harm reduction occurs in buyer feedback of Adderall and Oxycodone purchased on AlphaBay and how these comments are impacted by AlphaBay’s administrator-led “harm reduction” policy.

Full Abstract

This study finds that harm reduction strategies are present in buyer feedback of Oxycodone and Adderall pills, but AlphaBay’s policy has very little impact on the preexisting harm reduction communication within buyer feedback.

Logie, K., Pugliese, K., & Acevedo, A. (2023). An examination of harm reduction strategies in Oxycodone and Adderall buyer feedback on AlphaBay. Criminology & Public Policy, 22(4), 695–733. https://doi.org/10.1111/1745-9133.12652

Manipulated media – images, video, and audio – have proliferated online. Widely available modification software, rudimentary practices, and advanced techniques involving machine learning and artificial intelligence have been used to manipulate media. This article critically explores illegal and nonconsensual nude and sexually explicit (NSE) deepfakes and the measures implemented to counter them.

Full Abstract

The motivating questions for this analysis are: What measures are in place to counter illegal and nonconsensual NSE deepfakes? Are these measures sufficient? The objectives of this article are three-fold: (1) to identify illegal and nonconsensual NSE deepfakes and illegal uses of them; (2) to critically evaluate the current legal and technological countermeasures available in various jurisdictions to combat illicit and nonconsensual NSE deepfakes; and (3) to make recommendations based on challenges and deficits in existing legal and technological mechanisms employed to tackle them. Ultimately, our findings indicate that there are technologies and legal measures that could effectively reduce the harm experienced by victims if: there is a collective will from the corporate, legislative, and political spheres to effectively execute these changes; and legal liability to remove NSE deepfakes is placed on the online platforms and websites that host and distribute this content.

Maras, M.-H., & Logie, K. (2024). Countering the complex, multifaceted nature of nude and sexually explicit deepfakes: an Augean task? Crime Science, 13(1). https://doi.org/10.1186/s40163-024-00226-6

Funding Source

Center for Cybercrime Studies, John Jay College of Criminal Justice

Cyber-enabled fraud has transformed, becoming more complex and making it harder for targets and law enforcement to detect its occurrence. This study aims to recontextualize a major manifestation of this transformation, a crime called hybrid investment fraud, colloquially known as pig butchering. Hybrid investment fraud describes a cyber-enabled fraud whereby criminals gain the trust of victims by forming connections and relationships, and then exploit this trust by using a series of confidence-building and coercive measures designed to encourage victims to continuously invest in securities or commodities until they become unable or unwilling to continue making payments or the offenders become unreachable. This study further aims to address the existing knowledge gap by focusing on understudied elements of this fraud, such as victim and offender characteristics and the ways hybrid investment fraud is perpetrated.

Full Abstract

To achieve this, we conducted an in-depth analysis of more than 1,300 news articles and court documents between January 1, 2018, and November 1, 2023, to identify 59 cases of hybrid investment fraud targeting victims in the United States. This article both situates hybrid investment fraud within the broader fraud literature and conducts a comprehensive analysis of hybrid investment fraud cases to identify the types of hybrid investment fraud committed, their impact, victim and offender demographics, and offenders’ tactics, tools, and methods of operation. The findings from this study can inform criminal justice practices and future research on this fraud.

Maras, M.-H., & Ives, E. R. (2024). Deconstructing a Form of Hybrid Investment Fraud: Examining “Pig Butchering” in the United States. Journal of Economic Criminology, 5, 100066. https://doi.org/10.1016/j.jeconc.2024.100066

We examined darknet user discussions on the Dread forum to identify key themes and emerging topics in fraud planning, focusing additionally on elderly victimization. Using the conceptual framework of criminogenic learning to study the process of fraud planning in darknet communities of practice, we analyzed the content of original user posts (n = 818) and comments (n = 1365) collected from the Dread forum.

Full Abstract

We identified nine unique fraud categories, of which payments-related fraud was the most-discussed fraud category and accounted for 25% of original user posts. We further found our sampled forum content to be dominated by the theme of learning or knowledge sharing in eight of the nine fraud categories, which appeared in more than 44% of original user posts. Our content analysis revealed specific insights into why and how darknet forum users may target people, including the elderly subpopulation, for different types of fraud. Overall, our research demonstrates the diversity of opinion and knowledge sharing among darknet forum users in relation to planning and committing fraud against the elderly, views on who is a suitable target and why, and what veteran and aspiring fraudsters consider to be important information for success in fraud.
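
As a rough illustration of how forum posts might be coded into fraud categories and their shares computed, the sketch below applies simple keyword matching. The category names, keyword lists, and sample posts are hypothetical placeholders, not the codebook or data used in the study.

```python
# Minimal sketch of keyword-based coding of forum posts into fraud categories,
# followed by the share of posts receiving each code. Categories and keyword
# lists are illustrative placeholders, not the study's codebook.
from collections import Counter
from typing import Dict, List

CATEGORIES: Dict[str, List[str]] = {  # hypothetical codebook
    "payments_fraud": ["carding", "chargeback", "gift card"],
    "identity_fraud": ["fullz", "ssn", "dob"],
    "learning": ["guide", "tutorial", "how to"],
}

def code_post(text: str) -> List[str]:
    """Return every category whose keywords appear in a post (a post may get multiple codes)."""
    text = text.lower()
    return [cat for cat, kws in CATEGORIES.items() if any(k in text for k in kws)]

def category_shares(posts: List[str]) -> Dict[str, float]:
    """Compute the proportion of posts that received each code."""
    counts = Counter(code for post in posts for code in set(code_post(post)))
    return {cat: counts[cat] / len(posts) for cat in CATEGORIES} if posts else {}

if __name__ == "__main__":
    sample = [  # hypothetical example posts
        "Looking for a carding guide",
        "Selling fullz with dob",
        "New tutorial on gift card fraud",
    ]
    print(category_shares(sample))
```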

Research on hate speech and research on the darknet have both grown significantly over the previous decade. Nonetheless, there is a dearth of empirical research exploring how hate speech manifests within the darknet and which groups it targets. This study seeks to fill this gap in the literature by investigating the different targets of hate speech within the darknet forum Dread and how posts within this forum are affected by hate-motivated events.

Full Abstract

Through analysis of posts (n = 1,047) from the 3 months before and after major hate-motivated events, this study finds that approximately 13% (n = 135) of posts in our sample contain hate speech targeting several groups. In addition, we examined the variations in targets between forum-specific subjects (internal) and targets outside of the forum (external). Our findings suggest that there is limited conversation on Dread surrounding hate-motivated events discussed in mainstream media. However, instances of hate speech, predominantly targeting religious, racial, and gender-related groups, are present, albeit at a lower rate than reported in research on hate speech on social media platforms.

Logie, K., Cohen, N. D., Taylor, E., & Perry, K. (2025). Hidden Hate: Analysis of Hate Speech on a Darknet Forum. Justice Quarterly, 1–24. https://doi.org/10.1080/07418825.2025.2501544

Illicit darknet markets (DNMs) are highly uncertain and in a perpetual state of flux. These markets thrive in a zero-trust, high-risk environment. However, the trustworthiness of vendors plays a critical role in illicit transactions and the sustainability of the illegal trade of goods and services on DNMs. Focusing on the illicit fentanyl trade and applying signaling theory and an embedded mixed methods design, we examined different ways that trustworthiness is signaled by vendors on darknet sites. In recent years, fentanyl, a synthetic opioid, has been declared a public health emergency in the United States due to its high potency and the unprecedented number of deaths associated with its use; however, the topic remains understudied and requires urgent attention.

Full Abstract

There are few studies that have focused on fentanyl trafficking on DNMs and no mixed methods studies that have focused specifically on trust signals in DNM fentanyl networks. In our research, first, we conducted a focus group and in-depth interviews with criminal justice professionals to understand the inner workings of darknet sites, fentanyl networks, and how trust is assessed. Second, we scraped select darknet sites to collect and curate data for later examination of vendor trustworthiness on DNMs. Third, using signaling theory to understand how vendors signal trustworthiness on select darknet sites selling drugs, including fentanyl, we applied both qualitative and quantitative content analysis to DNM features and the language used in vendor profiles, listings, and product/vendor reviews to inform the development of a trustworthiness index. In this research, we used software such as Atlas.ti and Python to analyze our data. The main purpose of this article is to provide an in-depth description of the mixed methods approach we used to inform the development of a vendor trustworthiness index, which we used to examine trust between illicit fentanyl vendors and buyers. Our research can serve as a guide for the development of a DNM vendor trustworthiness index for future research on other illegal markets.
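
To make the idea of a composite trustworthiness index concrete, the sketch below combines a few normalized vendor signals into a single weighted score. The signal names, weights, and caps are assumptions chosen for illustration; the study’s own index is derived from its qualitative and quantitative content analysis rather than from these values.

```python
# Minimal sketch of a weighted vendor trustworthiness index combining a few
# normalized signals. Signal names, weights, and caps are assumptions made for
# illustration; they are not the published index.
from dataclasses import dataclass

@dataclass
class VendorSignals:
    rating: float          # mean review score on a 0-5 scale
    completed_sales: int   # number of finalized orders
    months_active: int     # tenure on the market
    disputes: int          # buyer-initiated disputes

WEIGHTS = {"rating": 0.4, "sales": 0.3, "tenure": 0.2, "disputes": 0.1}  # hypothetical

def trustworthiness_index(v: VendorSignals) -> float:
    """Combine signals into a 0-1 score; higher means stronger trust signals."""
    rating_score = v.rating / 5.0
    sales_score = min(v.completed_sales / 500.0, 1.0)   # cap at 500 sales
    tenure_score = min(v.months_active / 24.0, 1.0)     # cap at 2 years
    dispute_penalty = min(v.disputes / 20.0, 1.0)       # more disputes, less trust
    return (WEIGHTS["rating"] * rating_score
            + WEIGHTS["sales"] * sales_score
            + WEIGHTS["tenure"] * tenure_score
            + WEIGHTS["disputes"] * (1.0 - dispute_penalty))

if __name__ == "__main__":
    # Hypothetical vendor: 4.7 rating, 320 sales, 18 months active, 2 disputes.
    print(round(trustworthiness_index(VendorSignals(4.7, 320, 18, 2)), 3))
```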

Criminals have long leveraged information and communications technology to commit crimes that pose significant threats to public safety, economic security, and national security. Illegal goods and services are marketed on websites accessible through traditional search engines (i.e., the clearnet) and on non-indexed websites that cannot be identified and accessed through traditional clearnet search engines such as Google or Bing (i.e., the Deep Web). The Deep Web includes intranets, password-protected websites, and websites accessible only using specialized browsers (e.g., Tor, “The Onion Router”); the latter make up what is known as the Dark Web. Within the Dark Web, the term darknet has been used to describe spaces used to facilitate criminal activities, such as the trade of illicit goods and services.

Full Abstract

Academic research and media coverage of darknet marketplaces (DNMs) have predominantly focused on cryptocurrencies, the sale of illegal drugs, firearms, stolen data (e.g., personal, financial, and medical information), counterfeit money and goods, child sexual exploitation material, and malware.

A crime not commonly associated with the darknet is securities fraud—a criminal offense under 18 USC § 1348, which involves the use of deceptive practices to influence or manipulate financial markets and/or others’ financial investment decisions. Nevertheless, two recent criminal cases drew attention to the use of the darknet to commit securities fraud—Apostolos Trovias (“The Bull”) and James Roland Jones (“Millionaire Mike”).

This study examines darknet markets through the lens of a business theory on knowledge management. Taking epistemological and ontological dimensions into consideration, this study uses Nonaka’s (1991) SECI model as a theoretical framework to identify and describe how tacit and explicit knowledge is created and shared on Silk Road, Pandora and Agora darknet markets, and how people affect this process. By studying this process, insights can be obtained into darknet market criminal organizations and communities of practice and their impact on the continuity and resilience of illicit darknet markets.

Full Abstract

This project used data from the Internet Archive collection of publicly available darknet market scrapes between 2011 and 2015 from Branwen et al. (2015). We observed instances of the SECI model (socialization, externalization, combination, and internalization) on darknet markets in both criminal organizations and communities of practice. Darknet market leaders and groups facilitated both knowledge creation and sharing. This study is the first to test the SECI model on darknet markets. The study provides an understanding of the complexity and resilience of darknet markets, as well as valuable information to help guide law enforcement agencies’ efforts to stop the illicit trade of goods and services.

Maras, M.-H., Arsovska, J., Wandt, A. S., Knieps, M., & Logie, K. (2024). The SECI model and darknet markets: Knowledge creation in criminal organizations and communities of practice. European Journal of Criminology, 21(2), 165-190. https://doi.org/10.1177/14773708221115167

The article can be found here: https://journals.sagepub.com/doi/10.1177/14773708221115167