Sol Luckman Uncensored Updates & Uploads

👨‍👩‍👧‍👦 Deepfakes: Faces Created by AI Now Look More Real Than Genuine Photos

Yet Many Still Deny the Possibility That We Live in a Simulation ...

Sol Luckman
Jan 24
These faces may look realistic, but they were generated by a computer. NVIDIA, via thispersondoesnotexist.com

Manos Tsakiris, Ph.D., The Conversation

Even if you think you are good at analysing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.

Recently, for example, a fake LinkedIn profile with a computer-generated profile picture made the news because it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.

These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they’re being used in marketing, advertising and social media. The images are also being used for malicious purposes, such as political propaganda, espionage and information warfare.

Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is “trained” by exposing it to increasingly large data sets of real faces.

In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for Generative Adversarial Networks. The process generates novel images that are statistically indistinguishable from the training images.
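To make the adversarial idea concrete, here is a minimal, illustrative sketch of a GAN training loop in Python with PyTorch. It is not the code behind systems like NVIDIA’s StyleGAN (the model behind thispersondoesnotexist.com), which use far larger convolutional networks; the tiny fully connected generator and discriminator, the 64×64 image size and the `train_step` helper are all assumptions chosen to keep the example short.

```python
# Illustrative only: a toy GAN loop, not the production code behind modern face generators.
import torch
import torch.nn as nn

latent_dim = 100          # size of the random "noise" vector the generator starts from
image_size = 64 * 64 * 3  # a flattened 64x64 RGB image

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_size), nn.Tanh(),
)

# Discriminator: outputs a probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(image_size, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor):
    """One adversarial round on a batch of real face images (shape [batch, image_size])."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to label real images 1 and generated images 0.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fakes), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

The key point is the competition: the discriminator is rewarded for telling real faces from generated ones, the generator is rewarded for fooling it, and over many rounds the generated images become statistically harder and harder to tell apart from the training photos.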

In our study published in iScience, we showed that a failure to distinguish these artificial faces from the real thing has implications for our online behaviour. Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.

My colleagues and I found that people perceived GAN faces to be even more real-looking than genuine photos of actual people’s faces. While it’s not yet clear why this is, this finding does highlight recent advances in the technology used to generate artificial images.

All faces apart from one have been created by a generative adversarial network (GAN). Read to the end of the article to find out which one is real. NVIDIA via thispersondoesnotexist.com, Author provided (no reuse)

We also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical, and the typical face may be used as a reference against which all faces are evaluated. Therefore, these GAN faces would look more real because they are more similar to the mental templates that people have built from everyday life.

But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people—a concept known as “social trust”.

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated.

It is not surprising that people put more trust in faces they believe to be real. But we found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust overall—independently of whether the faces were real or not.

Changing Our Defaults

This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.

In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this “truth default” state, eventually eroding social trust.

This article, discovered here, is offered under a Creative Commons license.

Manos Tsakiris, a Professor of Psychology, is Director of the Centre for the Politics of Feelings at Royal Holloway, University of London.

5 Comments
Sol Luckman (author, pinned) · Jan 24

Will the Real Joe Biden Please Fall Down? (& Other Freakish Mysteries on the World Stage) https://solluckman.substack.com/p/will-the-real-joe-biden-please-fall I’m presenting the following images without initial comment to facilitate a thought experiment for those of us still capable of thinking for ourselves in the face of so much media gaslighting, fearmongering and outright propaganda.

Christine Massey FOIs · Jan 24 · Liked by Sol Luckman
So creepy! Over a decade ago there was a TV show in Japan, similar to American Idol but without a live audience. One of the contestants was CGI. Eventually people caught on, but they got away with it for a while. Yikes.
