QTCinderella - DEEPFAKE Victim

April 4th, 2023, 10:53 am: QTCinderella crying over being exploited.

QTCinderella is one of the hardest-working, most influential creators in gaming right now. She is a creative powerhouse and pioneer, yet she has become widely known as the victim of deepfake abuse.

Deepfakes: The Impact on Identity Theft, Fraud, and Privacy

With the rapid advancements in artificial intelligence (AI) and machine learning, deepfakes have become a growing concern in our digital world. A deepfake is a type of manipulated media that uses AI to create realistic videos or images of people doing or saying things they never actually did. The implications of this technology are far-reaching, and it poses a serious threat to our privacy, security, and identity.

Impact on Identity Theft and Fraud
Deepfakes can be used to steal identities, financial information, and sensitive personal data. Cybercriminals can create fake videos or images of individuals, making it appear as though they are committing a crime or engaging in illicit activities. This can lead to false accusations, reputational damage, and financial loss. Deepfakes can also be used to create fake bank statements, documents, and other records for committing fraud or identity theft.

Nonconsensual Explicit Material
Another concerning aspect of deepfakes is the creation and dissemination of nonconsensual explicit material. It is possible to create fake videos or images of individuals engaging in sexual activities, making it appear as though they participated willingly. This type of content can be used for blackmail, revenge, and humiliation.

Impact on Children
The impact of deepfakes on children is also a growing concern. Children are more vulnerable to manipulation and may not be able to distinguish between what is real and what is fake. They may be exposed to inappropriate content or become victims of cyberbullying, which can have long-lasting psychological effects.

Risk to Public Figures
Public figures, such as politicians, celebrities, and journalists, are at a higher risk of deepfake attacks. Deepfakes can be used to spread false information, discredit individuals, and create chaos. The widespread use of deepfakes can undermine public trust in institutions and leaders.

What Are Our Rights with Deepfakes?
As deepfakes become more prevalent, it is essential to understand our rights and how we can protect ourselves. The legal framework around deepfakes is still evolving, but there are some steps that individuals can take to protect themselves. For instance, individuals can use watermarks or other methods to indicate that their images or videos are authentic. Social media companies and tech firms can also play a critical role in detecting and removing deepfakes.

It is also important to educate ourselves on the risks and implications of deepfakes. By being aware of the potential threats, we can better protect ourselves and our privacy.

In conclusion, deepfakes pose a serious threat to our privacy, security, and identity. Their impact on identity theft, fraud, nonconsensual explicit material, children, and public figures is deeply concerning. As a society, we must take steps to protect ourselves and our privacy rights.

The legal framework around deepfakes needs to be strengthened, and tech firms and social media companies must take responsibility for detecting and removing deepfakes. By working together, we can ensure that the benefits of technology are not outweighed by its risks.

Only California and Virginia have explicit laws surrounding deepfake pornography: AB 602 and HB 2678, respectively. AB 602 allows California residents to sue if their image is used for sexually explicit content, and “seek injunctive relief and recover reasonable attorney’s fees and costs.” But according to Andy I. Chen, a California-based lawyer I spoke with over the phone, it could be difficult for the victim to sue any defendant outside of the state.

Back in 2019, Virginia added an amendment to its revenge porn law that covers “falsely created videographic or still images.” Violation of that law is a Class 1 misdemeanor, which can result in “confinement in jail for not more than 12 months and/or a possible fine of not more than $2,500.”

Deepfake pornography is not a new problem, but it’s incredibly disheartening, saddening, and infuriating that it has now bled into the streaming world.

Deepfake porn economy
March 27, 2023, 11:56 AM EDT

By Kat Tenbarge, a tech and culture reporter for NBC News Digital.

Digitally edited pornographic videos featuring the faces of hundreds of unconsenting women are attracting tens of millions of visitors on websites, one of which can be found at the top of Google search results.

The people who create the videos charge as little as $5 to download thousands of clips featuring the faces of celebrities, and they accept payment via Visa, Mastercard and cryptocurrency.


While such videos, often called deepfakes, have existed online for years, advances in artificial intelligence and the growing availability of the technology have made it easier — and more lucrative — to make nonconsensual sexually explicit material.

An NBC News review of two of the largest websites that host sexually explicit deepfake videos found that they were easily accessible through Google and that creators on the websites also used the online chat platform Discord to advertise videos for sale and the creation of custom videos.

The deepfakes are created using AI software that can take an existing video and seamlessly replace one person’s face with another’s, even mirroring facial expressions. Some lighthearted deepfake videos of celebrities have gone viral, but the most common use is for sexually explicit videos. According to Sensity, an Amsterdam-based company that detects and monitors AI-developed synthetic media for industries like banking and fintech, 96% of deepfakes are sexually explicit and feature women who didn’t consent to the creation of the content.

Most deepfake videos are of female celebrities, but creators now also offer to make videos of anyone. A creator offered on Discord to make a 5-minute deepfake of a “personal girl,” meaning anyone with fewer than 2 million Instagram followers, for $65.

The nonconsensual deepfake economy has remained largely out of sight, but it recently had a surge of interest after a popular livestreamer admitted this year to having looked at sexually explicit deepfake videos of other livestreamers. Right around that time, Google search traffic spiked for “deepfake porn.”

The spike also coincided with an uptick in the number of videos uploaded to MrDeepFakes, one of the most prominent websites in the world of deepfake porn. The website hosts thousands of sexually explicit deepfake videos that are free to view. It gets 17 million visitors a month, according to the web analytics firm SimilarWeb. A Google search for “deepfake porn” returned MrDeepFakes as the first result.

In a statement to NBC News, a Google spokesperson said people who are the subjects of deepfakes can request removal of pages from Google Search that include “involuntary fake pornography.”

“In addition, we fundamentally design our ranking systems to surface high quality information, and to avoid shocking people with unexpected harmful or explicit content when they aren’t looking for it,” the statement went on to say.

Genevieve Oh, an independent internet researcher who has tracked the rise of MrDeepFakes, said video uploads to the website have steadily increased. In February, the website had its most uploads yet — more than 1,400.

Noelle Martin, a lawyer and legal advocate from Western Australia who works to raise awareness of technology-facilitated sexual abuse, said that, based on her conversations with other survivors of sexual abuse, it is becoming more common for noncelebrities to be victims of such nonconsensual videos.

“More and more people are targeted,” said Martin, who was targeted with deepfake sexual abuse herself. “We’ll actually hear a lot more victims of this who are ordinary people, everyday people, who are being targeted.”

The videos on MrDeepFakes are usually only a few minutes long, acting like teaser trailers for much longer deepfake videos, which are usually available for purchase on another website: Fan-Topia. The website bills itself on Instagram as “the highest paying adult content creator platform.”

When deepfake consumers find videos they like on MrDeepFakes, clicking creators’ profiles often takes them to Fan-Topia links, where they can pay for access to libraries of deepfake videos with their credit cards. On the Fan-Topia payment page, the logos for Visa and Mastercard appear alongside the fields where users can enter credit card information. The purchases are made through an internet payment service provider called Verotel, which is based in the Netherlands and advertises to what it calls “high-risk” webmasters running adult services.

Verotel didn’t respond to a request for comment.

Some deepfake creators take requests through Discord, a chatroom platform. The creator of MrDeepFakes’ most-watched video, according to the website’s view counter, had a profile and a chatroom on Discord where subscribers could message directly to make custom requests featuring a “personal girl.” Discord removed the server for violating its rules around “content or behavior that sexualizes or sexually degrades others without their apparent consent” after NBC News asked for comment.

The creator didn’t respond to a message sent over Discord.

Discord’s community guidelines prohibit “the coordination, participation, or encouragement of sexual harassment,” including “unwanted sexualization.” NBC News has reviewed other Discord communities devoted to creating sexually explicit deepfake images through an AI development method known as Stable Diffusion, one of which featured nonconsensual imagery of celebrities and was shut down after NBC News asked for comment.


In a statement, Discord said it expressly prohibits “the promotion or sharing of non-consensual deepfakes.” “Our Safety Team takes action when we become aware of this content, including removing content, banning users, and shutting down servers,” the statement said.

In addition to making individual videos, deepfake creators sell access to libraries of thousands of videos for subscription fees as low as $5 a month. Others are free.

“Subscribe today and fill up your hard drive tomorrow!” a deepfake creator’s Fan-Topia description reads.

While Fan-Topia doesn’t explicitly market itself as a space for deepfake creators, it has become one of the most popular homes for them and their content. Searching “deepfakes” and terms associated with the genre on Fan-Topia returned over 100 accounts of deepfake creators.


Screenshots from the Fan-Topia website


Some of those creators are hiring. On the MrDeepFakes Forums, a message board where creators and consumers can make requests, ask technical questions and talk about the AI technology, two popular deepfake creators are advertising for paid positions to help them create content. Both listings were posted in the past week and offer cryptocurrency as payment.

People ranging from YouTube and Twitch creators to women who star in big-budget franchises are commonly featured in deepfake videos on Fan-Topia and MrDeepFakes. The two women featured in the most content on MrDeepFakes, according to the website’s rankings, are actors Emma Watson and Scarlett Johansson. They were also featured in a sexually suggestive Facebook ad campaign for a deepfake face-swap app that ran for two days before NBC News reported on it (after the article was published, Meta took down the ad campaigns, and the app featured in them was removed from Apple’s App Store and Google Play).

“It’s not a porn site. It’s a predatory website that doesn’t rely on the consent of the people on the actual website,” Martin said about MrDeepFakes. “The fact that it’s even allowed to operate and is known is a complete indictment of every regulator in the space, of all law enforcement, of the entire system, that this is even allowed to exist.”

Visa and Mastercard have previously cracked down on their use as payment processors for sexually exploitative videos, but they remain available to use on Fan-Topia. In December 2020, after a New York Times op-ed said child sexual abuse material was hosted on Pornhub, the credit card companies stopped allowing transactions on the website. Pornhub said the assertion it allowed such material was “irresponsible and flagrantly untrue.” In August, the companies suspended payments for advertisements on Pornhub, too. Pornhub prohibits deepfakes of all kinds.

After that decision, Visa CEO and Chairman Al Kelly said in a statement that Visa’s rules “explicitly and unequivocally prohibit the use of our products to pay for content that depicts nonconsensual sexual behavior.”

Visa and Mastercard did not respond to requests for comment.

Other deepfake websites have found different profit models.

Unlike Fan-Topia and its paywalled model, MrDeepFakes appears to generate revenue through advertisements and relies on the large audience that has been boosted by its positioning in Google search results.

Created in 2018, MrDeepFakes has faced some efforts to shutter its operation. A petition to take the site down, created by the nonprofit #MyImageMyChoice campaign, has over 52,000 signatures, making it one of the petition platform’s most popular, and it has been shared by influencers targeted on MrDeepFakes.

Since 2018, when consumer face-swap technology entered the market, the apps and programs used to make sexually explicit deepfakes have become more refined and widespread. Dozens of apps and programs are free or offer free trials.

“In the past, even a couple years ago, the predominant way people were being affected by this kind of abuse was the nonconsensual sharing of intimate images,” Martin said. “It wasn’t even doctored images.”

Now, Martin said, survivors of sexual abuse, both online and off, have been targeted with deepfakes. In Western Australia, Martin successfully campaigned to outlaw nonconsensual deepfakes and image-based sexual abuse, but, she said, law enforcement and regulators are limited by jurisdiction, because the deepfakes can be made and published online from anywhere in the world.

In the U.S., only four states have passed legislation specifically about deepfakes. Victims are similarly disadvantaged because of jurisdiction and because some of the laws pertain only to elections or child sexual abuse material.