An eerily lifelike digital avatar of self-help guru Deepak Chopra holds forth about the “endless possibilities of meta-human” from a video screen. He’s an artificially intelligent clone from Silicon Valley-based AI Foundation.

The startup wants to eventually digitize everyone in this manner, uploading the sum total of the subject’s recorded words and text output to train a machine learning system cast in his or her likeness with deepfake-like tech. The head of the foundation said he wanted to start with people who have a big fan base, mission and purpose.

“What we want to do is to basically give them their own AI, digital version of themselves, trained in a clearly defined topic range to speak on their behalf,” says AI Foundation CEO Lars Butler.

The idea is that such a future would allow people to have one-on-one conversations with others’ digital likenesses at scale, an especially useful tool for celebrities and other influential personalities. That possibility was what first enticed Chopra, who agreed to be an early test case and who speaks of AI and other technology in dystopian terms.

“This is a perilous time for the world where you can hack into anybody anywhere. … Technology could literally lead to our extinction,” Chopra says. “There needs to be a deeper system of intelligence that can actually call out all the lies and guard against unethical behavior, and this is what really convinced me to participate.”

The apocalyptic backdrop to which Chopra alludes ties into the other half of the AI Foundation’s unconventional dual mission. The organization’s banner also encompasses a nonprofit arm that’s primarily focused on combatting deepfakes (AI-powered fabricated footage that can be used for fake news purposes).

To that end, Butler says the company has been in contact with various 2020 presidential campaigns about using its latest software, Reality Defender, to protect against doctored videos of the candidates and vet opposition research. “We perceive it as a really big risk at the moment,” he says.

While the connection between these nonprofit and for-profit imperatives isn’t necessarily intuitive, they do somewhat complement one another; the for-profit persona business puts deepfake tech towards an ostensibly positive purpose while the nonprofit side guards against the technology being corrupted.

The for-profit side of the venture is still in the earliest stages. The company has produced a few AI likenesses, but Chopra is the only one it can publicly reveal for now. For Chopra, that development involved various spoken questionnaires and the uploading of books, speeches and other work that he has produced throughout the years.

Chopra said he was amazed at the finished product’s ability not only to remember things it was told in conversation but also to recognize the people with whom it interacts and draw on information from previous exchanges: “It grows in the way a baby grows, you know, through social interaction. And that to me was both unnerving but fascinating that this guy was, in many places, getting smarter than I am. And you know, ultimately it will be.”

Butler declined to say how much the company charges for the service. The plan is to release a tool by next year that will allow anyone to undergo the training process and build their own AI avatar.

The AI Foundation also recently posted job listings for freelance writers with entertainment backgrounds who can “develop interesting and engaging user experiences for our celebrity AI personas.” Butler claims those scripts are only used to fill in gaps that the AI can’t cover or when working to recreate someone who’s no longer alive to work with directly.

Patrick Kulp is an emerging tech reporter for Adweek. He covers creative innovation, product development, artificial intelligence and the future of 5G.