Use Code: OOHInsider70 for 70% OFF Registration to Placer.AI Discovery NYC (Dec. 10, 2024)
Nov. 12, 2024

Emmy Award-Winning Media Researcher Bill Harvey Teaches Us About Attention vs. Resonance

Summary

In this conversation, Tim and Emmy Award-winning media researcher Bill Harvey, Founder/Chairman at Research Measurement Technologies, Inc., unpack the evolution of addressable TV advertising technology, focusing on the development of driver tags and their impact on recommendation engines. Bill shares insights from his early work in the industry: the creation of a system that distilled a million words into 265 driver tags, how that work became the concept of 'Resonance' (vs. Attention), and how these tags have proven more effective than traditional attention metrics in predicting sales outcomes.

The discussion covers the integration of resonance in digital advertising, the relationship between resonance and physical context, and the future of addressability and AI in advertising.

Takeaways

  • Driver tags were developed to empirically deduce behavioral drivers.
  • Driver tags have been shown to be more effective than attention metrics in predicting sales.
  • Privacy can be maintained while optimizing marketing strategies through data clean rooms (DCRs).
  • Resonance drives Attention and can maintain interest in ads themselves.
  • Bill's literary pursuits reflect his lifelong interest in consciousness, improving the world, and using advertising as a vehicle for positive impact.

Chapters

  • 00:00 Introduction and Upcoming Events
  • 01:25 The Genesis of Driver Tags
  • 12:47 Evolution of Recommendation Engines
  • 17:41 Effectiveness of Driver Tags vs. Attention
  • 30:30 Integrating Resonance in Digital Advertising
  • 33:26 Resonance and Physical Context in Advertising
  • 38:43 The Future of Addressability and AI in Advertising
  • 43:22 Creative Challenges and AI Solutions
  • 47:17 Bill's Literary Pursuits and Motivations


Connect with Bill and learn more about his work here:

On LinkedIn - https://www.linkedin.com/in/bill-harvey-9581801/
On his website - https://billharveyconsulting.com/we-just-won-an-emmy/
And learn more about Semasio at https://www.semasio.com/

Join OOH Insider and Placer.ai at The Premier Leadership Conference for those Building the Future with Location Analytics, December 10th, 2024 at Pier Sixty. Use discount code OOHInsider70 to save 70% at registration.




Try our custom-built GPT for FREE!

Built on more than 300 pages of curated OOH Insider transcripts to build The Ultimate Insider.

The Ultimate Insider Programmatic DOOH and OOH Attribution GPT AI

Transcript

Tim Rowe:
Welcome back to OOH Insider. In case you missed last week's conversation with Jeromy Sonne of Daypart AI, go back and check out that episode to learn about the evolution of a DSP, through AI media buying to ABM, and solving some pretty unique B2B use cases today. Looking ahead to next week, we will be meeting with Fat Tail. Fat Tail solves long-tail challenges for DOOH publishers: end-to-end ad ops, including a self-service ad manager. So looking forward to next week; that'll be a great conversation with Al from Fat Tail. And there are only four weeks left until the Placer.AI event at Chelsea Piers in New York City on December 10th.

And if you would like to be there and save 70% on registration, go to theOOHinsider.com. Right across the top, you can't miss it, there's a big old banner with a promo code. OOHInsider70 is gonna save you 70% on registration to the Placer.AI Discovery event, December 10th in New York City. Tickets are wildly reasonable, they're like $399, but the promo code is gonna bring you down to like 120 bucks. So go to theOOHinsider.com and I will see you at the event on December 10th. Today's conversation: Mr. Bill Harvey. I am incredibly excited and honored to have you here. Our friend Kym Frank introduced us a few weeks back, and we had a great conversation about how you and a team took a million words in the unabridged Oxford dictionary and distilled it down to 265 of what you call driver tags, the early incarnations of the first TV discovery recommendation engines. Your technology and your research have powered an incredible part of the media landscape.

And I'm excited to have a conversation about some of that here today. Thank you, Bill. Thank you for joining us. My pleasure. Thank you, Tim. Absolutely. Maybe start us at the beginning. Where did you come up with this idea of taking a dictionary and figuring out which words are associated with the behaviors that drive outcomes for brands?

Bill Harvey: It started when I was a newcomer to the advertising business, fresh out of school. I became obsessed with the idea of psychographics, but then disappointed, because there was no mutually agreed-upon set of psychological variables that everyone wanted to use. Every practitioner came up with their own, and that gave me a queasy feeling: it was pretty loosey-goosey and unsatisfying. I wanted some empirical means of actually deducing the drivers of behavior. What are the psychological elements that could be in an ad? In a television show, or on a website? What are the psychological elements that cause people to do what they do, the motivations within people? How could that all be made manageable? Because it's obviously a lot of different things all munged together: motivations, values, mindsets, beliefs, you name it. No two sources even agree on the definition of the same thing; you have to go from one dictionary to another. So that was the drive behind the effort. I actually owe it to the dearly departed Dr. Timothy Joyce, who was my partner in this in the early going. He was a Brit, and he came up with the idea of using the Oxford Unabridged Dictionary. And he split with me the cost of hiring 22 coders: 22 because there were 22 volumes, each about a thousand pages long, in the Oxford Unabridged Dictionary.

Tim Rowe: Circa when? Just for a timeline, when was this? Today we might just give it to AI, but you were doing this far in advance of AI being what it is today. When were you working on this initially?

Bill Harvey: That was the late 1970s. Wow.

Tim Rowe: Okay. So predating any semblance of modern AI, machine learning, we're not even there yet.

Bill Harvey: Well, early days of machine learning, but that's about it. Very early days. Then what we did was we gave an instruction sheet to the coders, and it said: these are the kinds of things we consider to be psychological, and if you're not sure, over-include, because our process is going to be distilling, so it'd be better, if you're not sure, to put it in. So the coders gave us 13,000 words. Of course, that took months, but eventually we had 13,000 words. Then we discovered that many of them were words only used by technical people, psychologists, clinicians, and so on, or they were old words that aren't used anymore, and many of them were synonyms of other words that were more commonly used. We managed to get it down to about 10,000 words. What we had to do next was somehow find a way of getting those 10,000 words down to a manageable number. We set up a system for asking people: how well does this word describe you? We bought a national probability sample and provided a six-point scale, and each person was given a thousand of the 10,000 words to go through. Since the scale was fixed, you could go through it pretty quickly and say, oh, I'm not conceited at all, and so on; here's where I fall on this word. Then we did a factor analysis, which enabled us to cluster those scores. We were able to cluster the 10,000 words into 1,562 clusters. Within each cluster, we had a correlation of each word with the cluster, so we could pick the one word in the cluster that best represented the whole cluster. So now we had 1,562 words.

This brings us into the 1990s. In 1997, I had a company called Next Century Media, and the work I'm about to describe won an Emmy Award long after we did it; actually, 25 years after the work. Delayed gratification, very delayed, yes. This company, NCM, had built the first system for collecting the first set-top box data, delivering the first addressable TV commercials, optimizing those commercials both for the buyer and the seller, and enabling the programmatic buying and selling of those addressable commercials. And most relevant to our story, it had a proprietary AI; we actually called it that in those days. We designed it. It wasn't a large language model, but it was an artificial intelligence of sorts; a deep learning model is what they would call it nowadays. We didn't have that terminology back then, so we called it an AI, and it provided recommendations for what people would probably like to watch among shows they had never watched before. The way that worked was, if you were one of the nearly million households that had digital set-top boxes, which were being used by the largest cable operator in the world at the time, which was TCI, owned by Liberty Media, John Malone, et al., you had suddenly gone from 70 channels to 500 channels of programming to choose from. So the concept was, postcards were sent out by Comcast to these subscribers saying: you're probably going to miss some shows that you'll regret having missed, because we're giving you so much choice now. So here's something we're going to give you for free: if you don't know what to watch in the next time period, press the A button on your remote control channel changer.
Within a second, our AI will recommend one show coming up in the next time period that you're most likely to watch, based on the shows you've watched in the past. The way that worked was, since we had set-top box data, we knew all the shows they watched; we even knew how many seconds they watched of each episode of each show. And we knew which of the 1,562 codes we had at that point characterized the shows they were watching. So here's a set-top box that maybe was watching a lot of shows about cruelty, sadism, and so on; here's another household whose set-top box shows a record of watching shows about compassion and kindness and so on. We had those profiles, so that the system, when it saw a request for a recommendation, knew, first of all, which shows not to recommend, because they had been watched before by that set-top box. Again, our promise was to help them discover shows they hadn't tried, so we disqualified those from the recommendation. And then the system picked, among the shows coming up, the one whose characteristics best matched the profile of that set-top box. The main way we judged whether that was a success or a failure was the degree to which the recommended show became loyally viewed by the viewer we recommended it to. We defined loyalty as watching three out of the next four episodes after the recommendation landed. If people did watch three out of four, we called that a conversion, or an adoption. Initially, we found that our baseline was 3%: 3% of the recommendations were succeeding, according to our definition. That's when the fun began, because we were using machine learning to make that better. The way that worked was, we looked at which of the 1,562 predictors were showing up at an above-average rate in the adoptions, and the machine learning algorithm gave them more weight in making the recommendations. It did the opposite for predictors showing up at a below-average rate in the adoptions. Then the 3% started to go up and up and up, 4%, 5%, 6%, and eventually topped out at 18%. At 18%, when we looked at what we had, what we had was 265 remaining words with non-zero weights. All of the other words had been de-weighted to zero in favor of these 265. So we gave them the name driver tags. And that's how we got from a million-plus words in the English language down to 265 words that were known to have some kind of driving effect on program choice.
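
For readers who want the mechanics, here is a minimal Python sketch of the reweighting loop Bill describes. Every name, data shape, and learning rate below is an illustrative assumption, not Next Century Media's actual system; the point is only the pattern: tags overrepresented in successful adoptions gain weight, the rest decay toward zero, and the survivors play the role of the driver tags.

```python
import numpy as np

# Hypothetical sketch, not NCM's code: shapes, densities, and the learning
# rate are invented for illustration.
rng = np.random.default_rng(0)
n_tags, n_shows = 1562, 500
show_tags = rng.random((n_shows, n_tags)) < 0.02  # which tags characterize each show
weights = np.ones(n_tags)                         # all 1,562 predictors start equal

def recommend(profile, watched):
    """Pick the unwatched upcoming show whose tags best match the household profile."""
    scores = (show_tags * (weights * profile)).sum(axis=1)
    scores[list(watched)] = -np.inf               # never re-recommend watched shows
    return int(np.argmax(scores))

def reweight(adopted, recommended, lr=0.05):
    """Tags overrepresented in adoptions gain weight; the rest decay toward zero."""
    adoption_rate = show_tags[adopted].mean(axis=0)
    base_rate = show_tags[recommended].mean(axis=0)
    np.maximum(weights + lr * (adoption_rate - base_rate), 0.0, out=weights)

# After many rounds, most entries of `weights` hit zero; the surviving indices
# correspond to the 265 driver tags in Bill's account.
```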

Tim Rowe: That's incredible. That's an incredible process to walk through. And there's technology you mentioned that today we take for granted but was really transformative at the time: the idea of your TV first going to 500 channels, the set-top box, but then the recommendation engine. The recommendation engine is something we enjoy on every platform we consume content on today. How do today's recommendation engines pay homage to the work that you did? How are they similar to or different from those initial variations?

Bill Harvey: Well, they're all different. Generally speaking, they're based on collaborative filtering, which is, to use the most familiar example from Amazon book buying: people who enjoyed this book also enjoyed these books. What collaborative filtering leaves out is people who never became aware of a program and so never had the chance to try it or not try it. Screen Engine today shows that the average awareness level of a new show is between 20 and 30 percent, so you're leaving out 70 or 80 percent of the picture by using collaborative filtering. Also, a more practical problem with collaborative filtering is that you need something like a month to build up the duplication patterns between this program and other programs, and the first month is exactly when new shows start to get canceled, so it's kind of too late to help people discover a new show. That's a problem with most of today's recommenders. Now, the best of today's recommenders is Netflix, and we have a good basis for comparison, even though their approach is quite different. Their approach uses collaborative filtering plus a number of other algorithms, and the one they've talked about openly, so we as outsiders know something about it, is really a basis for clustering programs together into program types. But at last report, they have 78,000 program types, which means there's only a handful of shows within each program type. A given program type might be James Cagney gangster movies of the 1930s; well, there are only about a dozen of those, so after they've exhausted recommending other Cagney shows, they've got to make some assumptions about what the adjacent cell is, like Humphrey Bogart gangster movies of the 1940s and so on, and there's some human judgment involved in that. Anyway, Netflix, although it doesn't work for me in terms of my personal program viewing, has shown, according to what they put on their website, a 70% satisfaction score. Which is interesting, because what I didn't mention before about our 1997 test was that we also used the B button and the C button on the remote control channel changer, to give the viewer a chance to train the system faster. We said, you don't have to do this, because the system is going to learn anyway, but you can train it faster by telling it that it's a good robot or a bad robot. You look at the recommendation, and if you know right off the bat, say you've heard of the show before from a friend, that this is not the kind of show you want to watch, then press the C button. That means bad robot. If you like the recommendation on the face of it, press the B button. Based on that, the B button rate was over 90%. So over 90% satisfaction for our method versus 70% for Netflix, which I think is still quite a respectable score. Most of the program recommenders I've tested are not as good as Netflix's.
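
To make the contrast concrete, here is a toy item-based collaborative filter of the "viewers of X also watched Y" kind Bill describes; a sketch with made-up data, not any vendor's actual algorithm. Note the cold-start gap he points out: a brand-new show with no viewing history can never surface.

```python
from collections import Counter
from itertools import combinations

def co_viewing(histories):
    """Count how often pairs of shows are watched by the same household."""
    pairs = Counter()
    for shows in histories:
        for a, b in combinations(sorted(set(shows)), 2):
            pairs[(a, b)] += 1
    return pairs

def also_watched(show, pairs, k=3):
    """Top-k shows most co-viewed with `show`; empty for a brand-new show."""
    scores = Counter()
    for (a, b), n in pairs.items():
        if a == show:
            scores[b] = n
        if b == show:
            scores[a] = n
    return [s for s, _ in scores.most_common(k)]

histories = [["Mad Men", "The Wire"], ["Mad Men", "The Wire", "Lost"]]
print(also_watched("Mad Men", co_viewing(histories)))        # ['The Wire', 'Lost']
print(also_watched("Brand New Show", co_viewing(histories))) # [] -- the cold-start gap
```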

Tim Rowe: I think that's important for us to know, as brands and advertisers approach these platforms really enthusiastically. I'd love to come back and talk about addressability and how programmatic buying and selling changed with this technology, but to get there, walk us through the effectiveness. You shared some really powerful case studies when we first talked about the effectiveness of using driver tag technology over attention, even; that the driver tags were more indicative of an outcome than premium attention-grabbing formats. Can you walk us through maybe some of those? Absolutely.

Bill Harvey: Let's start with the one you chose, the comparison to attention. The Advertising Research Foundation did a study, released the results in May of this year, and what it showed was that attention has only a 0.12 ability to predict sales effects, compared to 0.48 for the RMT driver tag resonance. Now, what does that actually mean, and why has it come out that way? If you dig into it more deeply, you learn a couple of things. Attention is capable of predicting ad recall: if you get more attention, it's quite likely you're going to get a higher score on ad recall, and on any other memory-based, top-of-the-funnel measure, aided awareness, unaided awareness; attention is good for all of those things. But it's a gating factor. It's necessary: if you don't have it, it hurts you. But if you do have it, it doesn't guarantee any real success, because the advertising still has to maintain your interest, get you to connect it to a specific brand, and persuade you to a higher valuation of that brand based on new perceptions of attributes of that brand that you didn't have before. You have to do all of those things before you get a sales effect. Resonance predicts all of that, and the reason resonance predicts it is that these psychological motivators we call driver tags probably correspond to what neuroscientists call value signals in the brain. That's a postulate of Michael Platt, the neuroscientist at Wharton Neuroscience who has studied the effects of resonance and attention on the effects of advertising. So the reason the driver tags work to predict the sales effects of advertising is that the way advertising works is not just to get your attention, but to get you to last through the process of storytelling to the point where it changes your perception along the motivational things you care about. Let's say it changed your perception of a brand, but about things you don't care about; well, then you wouldn't go buy the brand. The ad must resonate with you in order to have a sales effect; it's got to make a connection with your motivations. Now, the interesting thing, when you go even further: the thing about attention is that we're all assuming conscious attention is what's needed. But neuroscience teaches us that 95% of all consumer decision-making is subconscious. We don't have to be conscious of an ad to be influenced by it, and we especially don't have to have our eyes pointed at the screen. The attention measures are ignoring the audio element; you could be hearing stuff that could affect your purchase intent and your actual purchase. And you might like this, given your interest in audio, which I know you have: your eyes can be brought to the TV screen by what's in the audio. Turner was the first to discover this, many years ago, in their advanced TV laboratory. So, in other words, let's say you're motivated by wanting to be a leader. Now, people aren't necessarily conscious of their motivations; they may have some conscious motivations, but those might not be the motivations that really drive their behavior. At any rate, let's say somebody is strongly motivated by leadership, by being a leader.
If they're sitting playing with their smartphone during a pod of commercials, and the third commercial in the pod starts with something about leading or leadership, or even heroism, which is a related concept, or even innovation, which is sometimes related to being a leader, any of those things might trigger that motivation, resonate with that person, and bring their eyes to the screen. So what we say is that the driver tags drive attention. They also drive continuing interest, the maintaining of attention. They drive sticking around for the storytelling, the communication, the overcoming of disbelief, the gaining of comprehension, trust, and learning about a new attribute of a brand that increases your perception of its value. It works throughout the whole cognitive processing of a commercial. So that's why it works.

Now, some of the data you alluded to. Turner did a study using Nielsen Catalina, NCSolutions as it's now called, and it showed a 36% increase in sales effect when the resonance scores were above average. 605 did a study for a major CPG company which showed that purchase intent was increased 37% and first brand mention was increased 62%. There are other studies too: the ARF Cognition Council did a study which showed that the RMT data account for 48% of the variance in IRI sales data over a six-year period, for 19 brands in three product categories that the ARF chose for this analysis. There are two types of resonance, and you asked about programmatic, which gets into that subject a bit. We wanted to be able to do this not just for television. We were getting a lot of validation that this works for television: when you put an ad into a program, if you include this as part of your process, you'll get more sales effects and more attention. And it can be combined with attention; you don't have to make it an either-or choice. But we wanted to get into digital, and in order to do that, we felt we had to, to some extent, start with a simpler model. At the same time, clients were asking us. They said, you know, you've got 265 variables. First of all, you don't tell us what they are, because you say you won't have a business if you do that. So we would like you to cluster them into something where you could give us a name for the cluster, and that way we could make our ads better. Maybe we need more compassionate ads; maybe we need more bold ads. We don't have any idea. So could you do some clustering? So we clustered the 265 signals, first into 86 need states. The clients were very happy with that for about a second, and then they said: you fool, you didn't get what we were saying. It's still too much; cluster it more. Clients always want more. So we did; we clustered it, with our apologies, into 15 super clusters, which we call motivations. They said, ah, this is what we want. So we teamed up with an AI company called Semasio. What Semasio does is track anonymized browser behavior for 700 million people in the world, 300 million of them in the US. It tracks their browser behavior, and every time one of these anonymized but persistent IDs lands on a URL, it does a full-text grab of every word on the URL. So as, let's say, I'm moving through cyberspace, landing on one domain after another, it's creating a word cloud around me.
And it's noticing that I'm landing on a lot of sites that are about consciousness, quantum theory, how the universe began, media research, a whole bunch of different subjects. That creates a word cloud around me, the ID. At the same time, of course, they create word clouds around each piece of content, each website, each page on each website. And so we use those for targeting purposes. A client gives us an ad and says: here's my ad. I want you to determine what the driver tags in the ad are. And because I'm mostly using programmatic, I also want to know the motivations in my ad, because I know that with Semasio I can target a motivational group either by IDs or by contexts or both. So a retail chain, I won't name names, one of the major retail chains in the U.S., brick-and-mortar and online, came to us and gave us an ad. We coded it, and we saw that love, competency, and power were the three motivations that were strongest in that ad. We gave that to Semasio. Semasio came up with about 20 million IDs in the US that were good for love, about 22 million that were good for power, and about 21 million that were good for competency. And those were the ones that were targeted by the DSP, I think it was DV360, for this advertiser for this campaign. As the control group, they used a different set of targets they had been using with good results, so they could make a comparison: Google Affinity IDs, lookalikes of pet owners. That had been working pretty well for them. Then they gave all of the results, all of the data, to Neustar: all of the online sales, all of the offline sales, all of the media exposure data across all the media, TV, digital, all types, social, mobile, whatever. And Neustar came back and said that our targets got 95% higher return on ad spend, in terms of incremental sales, than the Google Affinity IDs. If you were looking at new-to-brand, people who had never used the retail chain before, we were 115% higher. So whether you're looking at resonance with the context or resonance with the ultimate audience, you get more sales effect, and more full-funnel effect at every level, by using resonance. And there's no harm in using resonance with attention; it can only help.
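
Here is a hedged sketch of that word-cloud-to-motivation pipeline: each anonymized ID accumulates the words from pages it visits, and IDs whose clouds lean toward a motivation's vocabulary become the target segment. The motivation keyword lists, thresholds, and data shapes are invented for illustration; Semasio's actual models are proprietary and certainly richer than keyword matching.

```python
from collections import Counter

# Invented vocabularies standing in for the 15 motivations Bill names.
MOTIVATION_WORDS = {
    "love":       {"family", "care", "together", "devotion"},
    "power":      {"control", "command", "dominate", "influence"},
    "competency": {"skill", "mastery", "expert", "proficient"},
}

def word_cloud(page_texts):
    """Aggregate a bag-of-words profile across every page an ID landed on."""
    cloud = Counter()
    for text in page_texts:
        cloud.update(text.lower().split())
    return cloud

def motivation_scores(cloud):
    """Fraction of the ID's cloud that falls in each motivation's vocabulary."""
    total = sum(cloud.values()) or 1
    return {m: sum(cloud[w] for w in words) / total
            for m, words in MOTIVATION_WORDS.items()}

def segment(ids_to_pages, motivation, threshold=0.05):
    """IDs whose clouds resonate with the chosen motivation (cf. the ~20M 'love' IDs)."""
    return [i for i, pages in ids_to_pages.items()
            if motivation_scores(word_cloud(pages))[motivation] >= threshold]
```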

Tim Rowe: It's incredible. It's incredible to hear just how impactful it is, but also to hear how it's actually being implemented and incorporated into the buy today. Is that generally still the workflow? Do companies work with you to understand what this looks like for their customer, and then use that to traffic the campaign? Is that how it works?

Bill Harvey: That's one way it works for digital programmatic, and we're always seeking to make it easier to use. For example, if you're a user of LiveRamp, all you have to do is go to LiveRamp and click on one of these motivational targets, and it's as easy as pie. We're hoping to make it easier to use in television as well. Now, in television, the workflow is very well worn in, very established: the role of Mediaocean and of all the standard platforms and tools people use. The buyers and the sellers would prefer that whatever you're doing, no matter what the benefit, it would be darn nice to have it fit easily into the workflow. So one of the things we're doing now is partnering with a company called DataFuelX. It's ironic, in a nice way, that Howard Shimmel, who is the guy at Turner who kind of gave us our big break in getting into the advertising business with driver tags, is one of the founders of DataFuelX, so we're now kind of returning the favor. There are a number of different networks using DataFuelX; one of them has already got a contract to use this, and at least two others are strongly interested and look like they'll soon be incorporating it. So wherever we can, we become part of a well-used platform. Semasio, for example, is integrated with The Trade Desk, with DV360, with some 22 of the major DSPs, SSPs, and DMPs. That's all by way of making this as frictionless as possible for people.

Tim Rowe: So this isn't just something that lives in the textbook. It's something that we can use and that brands are activating against today, every day. I love the progression; thank you for taking us fully through the story. A large part of this audience, as you know, are OOH and DOOH folks. How do you think about the relationship between resonance and physical context? We don't necessarily always have the content story to tell, but we do have a really strong thesis around context and the places people consume media. How do you think about that?

Bill Harvey: Well, you know, I came very close to cracking the code on that a year or two ago, when Kym Frank was the CEO of Geopath. It may have been three years ago. At any rate, whenever it was, what we were talking about was: we've got these 300 million people in the Semasio database, and we know their motivations. If they can be GPS-tracked past points of DOOH, or any OOH, then by time of day we're able to say what motivations would be best presented on those digital signs or digital video displays at that moment in time. Are there patterns at lunchtime for this particular group of outdoor points, in terms of the motivations of people passing those points? Same thing with automotive, you know, reaching people in their cars through radio and now even TV. The ability to use the data we've got compiled within the Semasio database would only require, let's say, clean-room or spine-to-spine integration with ID graphs. And there you have the ability to connect this to pretty much any media type, even to store location planning, even to time of day, changing what you put on special at quick-serve restaurants; all kinds of implications.

Now, the fact that we have clean rooms means we can do all of this without imposing on anyone's privacy. Back in 1995, I guess I volunteered for the assignment of being the spearhead for what was the first industry privacy principle. It was called the CASIE Privacy Principles, CASIE for the Coalition for Advertiser Supported Information and Entertainment, which was a joint project of the ANA, the 4As, and the ARF. Over a period of a year, working with 80 other people, I came up with these privacy principles, which still stand the test of time: consumer choice, full transparency, anonymization, and so on. So I've always been in favor of maximum privacy. And we're finding, as an industry, two different paths for achieving privacy, one of which degrades our marketing capacity to maximize return on investment: that's purposely using virtual IDs. When we use virtual IDs, everything regresses to the mean, everything becomes dictated by averages, and we wash out all the individuality. So it gives up a lot to gain privacy control. That was like the initial XUN solution. But I think we can go back to a more surgical solution with clean rooms. With clean rooms, we don't have to screw up ROI; we can continue to spend marketing dollars more effectively and cost-effectively, which is going to have a very big effect on world economies. Something like 20% of the spend of most major corporations is on marketing, and the degree to which that's efficient versus not efficient has social implications as big as the privacy-protection social implications. So we have to go past the simplistic solution that we jumped on initially and look down the road to clean rooms as the saner long-term solution. Then we're going to have our cake and eat it too.
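
For the OOH crowd, here is an illustrative sketch of that idea: aggregate the motivation profiles of IDs observed passing a screen, by daypart, and serve the creative keyed to the dominant motivation. The data shapes and daypart cut-offs are assumptions for illustration, not any actual Geopath or Semasio integration.

```python
from collections import Counter, defaultdict

def daypart(hour):
    """Coarse dayparts; the cut-offs here are arbitrary assumptions."""
    return "lunch" if 11 <= hour < 14 else "evening" if 17 <= hour < 21 else "other"

def motivations_by_daypart(pings, id_motivation):
    """pings: (id, hour) observations near the screen; id_motivation: id -> top motivation."""
    tally = defaultdict(Counter)
    for visitor_id, hour in pings:
        if visitor_id in id_motivation:      # only IDs with a known profile count
            tally[daypart(hour)][id_motivation[visitor_id]] += 1
    return tally

def creative_for(tally, part, creatives_by_motivation):
    """Serve the creative keyed to the most common motivation in that daypart."""
    if not tally[part]:
        return None
    top_motivation, _ = tally[part].most_common(1)[0]
    return creatives_by_motivation.get(top_motivation)
```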

Tim Rowe: And brands being able to own the whole story. I think there's unspoken value in that: all right, hey, for the first time ever, or for the first time in a couple of decades since so much money shifted to the big three DSPs, we may actually have this fully in-house. Thinking more about addressability: we're at 40% now, and it's going to become 70% of the audience that will not be addressable by 2030. First-party data plus resonance. How are you thinking about the convergence of all of these things?

Bill Harvey: Well, the great challenge is to optimize, and to optimize requires a prediction engine. You have to basically start from: what has the brand been doing? What are the campaigns? What media types are being used? What kind of creative is being used? What is the targeting, what are the different advanced audiences being used by the brand? And then: how well is it doing in achieving its KPIs at each funnel level? And then how would you predict that would change if you changed these various things? If the media-type mix changed, if the creative messaging changed along different motivational lines, if it was aligned better with the targets that have been chosen, how should the targeting be revised based on the motivational insights? You've got several things going in different directions at once, so it's a multivariate solution that, by a brute-force method, is best approached by trial and error across predictions. You make 10,000 predictions and you see: have there been increases in a certain direction? And you see, oh, the directions that were tested in these shifts in media and these shifts in creative seem to be leaning uphill toward higher ROI, higher brand growth, and so on. Of course, a layer of complexity on top of that is getting it right in terms of how you balance long- and short-term outcomes. If you're too focused on short-term outcomes, you're going to get the most increase in short-term sales, but year-two sales won't be increased; brand growth might be reduced in year two if you concentrate entirely on short-term sales performance. There's got to be some balance with the increase in willingness to consider the brand and other indicators of the brand having become more salient in the minds, or you might say the minds and hearts, of consumers. If you get onto the consideration list this year, you're within much better striking distance of getting the sale next year and for years to come. So the optimization itself has to look at how you test different mixes of short- and long-term effects while you're testing all these other things. Now, the mistake that's been made over and over again is to take the word of these various methods, including what I'm espousing here, and immediately go into the field with them. You say, okay, I'm going to make these changes. The best way isn't to just make those changes, but to do an experiment in the field. Use in-field, in-market experimentation. Don't go whole hog into switching to what any model says, whether I designed it or anybody else designed it. You don't have to do that. Try in-market experimentation, feed the results back into the model, let it learn from the experiment, and see how much the recommendations change as to what you should be doing, buying, and activating in the marketplace. You might do a bit of that before you totally go whole hog. You'll be able to say: okay, we've done three layers of randomized controlled trials, we've done it using programmatic, so the feedback is fast; only three months have passed, we've learned a ton, and we think we're ready now. It's month four; it's still pretty fast. Now we go into it whole hog, whatever the model says. That would be the kind of recommendation I would make.
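
A toy rendering of that optimize-predict-experiment loop: perturb the current plan, keep candidates the prediction engine scores higher on a blend of short- and long-term KPIs, then validate in market before committing. The `predict_kpis` and `perturb` callables are invented stand-ins for whatever prediction engine and plan-mutation logic a team actually has, not RMT's method.

```python
import random

def blended_score(kpis, short_weight=0.6):
    """Balance short-term sales lift against longer-term brand growth."""
    return short_weight * kpis["sales_lift"] + (1 - short_weight) * kpis["brand_growth"]

def optimize(plan, predict_kpis, perturb, n_candidates=10_000):
    """Hill-climb over candidate plans: keep any perturbation the model scores higher."""
    best, best_score = plan, blended_score(predict_kpis(plan))
    for _ in range(n_candidates):
        candidate = perturb(best)
        score = blended_score(predict_kpis(candidate))
        if score > best_score:
            best, best_score = candidate, score
    return best

# Made-up demo: one "spend mix" knob with a quadratic response per KPI.
predict = lambda p: {"sales_lift": 1 - (p - 0.3) ** 2, "brand_growth": 1 - (p - 0.7) ** 2}
best_mix = optimize(0.5, predict, lambda p: min(1.0, max(0.0, p + random.uniform(-0.1, 0.1))))

# Per Bill's caveat: don't go whole hog on the model's answer. Run the winning
# plan as an in-market experiment, feed measured results back into whatever
# powers predict_kpis, and re-run optimize() before committing fully.
```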

Tim Rowe: A masterful recommendation. How does AI solve the creative challenge? It seems like creative remains this blocker: I don't have the creative. Does AI solve that? Can it?

Bill Harvey: It will. You know, we tend to give it too much credit now, because we're all astounded at how fantastically good it looks. Sometimes it's lying, hallucinating, as they say; there's no truth value to it, but it sounds great. Being a writer myself, I'm particularly excited about generative AI and the ability to take my sci-fi novels and convert them into sci-fi movies. I can't wait. I keep testing Runway and Midjourney and everything; I can't wait for the moment when I can do that. But what I've found is that even with the best of the current generative AI tools, you can get the character to look like you want, but getting them to appear consistently in the background that you want, doing the actions you want, ain't so easy. All my AI collaborators are saying, Bill, we're a year away, we're a year away, we're going to be able to make that movie. And I believe it, because I see the progress, and we are getting there. One of the things that's going to be a stumbling block is that whether you use computer vision or text analysis, it's very hard to generate the driver tags, which, again, we're now calling value signals, because Wharton Neuroscience postulates that that's what they really are: the corresponding value signals in the brain. It's very hard to generate automated, AI-driven driver tags or value signals based on a piece of content. An AI using computer vision can look at the content, and it can even look at the faces that appear in the content and to some extent code the faces by emotion; except that, as NielsenIQ has reported, the number of faces that show no emotion at all tends to be around 85% for most video content. Still, it's some signal. Nevertheless, when you use that technique, well, we've tested this for years. We started with IBM Watson, at their request. We've tested this, and it's not there yet. What you tend to get are literal objects in the scene: truck, woman wearing red hat, and so forth. That isn't anything like the kind of psychological information we use in the value signals, like 'conceited.' It is not easy to pick up a conceited face, or so far it isn't. Now, we have plans for how to do that, very detailed plans. It will require more than just computer vision. We will need all of the audio signals; the audio will be just as important, not just for the text analysis, but for the sound effects and the tone of voice. All of that's got to be calibrated to the value signals. There's years of work ahead. We've already embarked on it. There are others we may partner with, and there may be others who get there ahead of us, but all of that's got to be done in order to fully scale the driver tags, the value signals, to fulfill their entire potential globally, for all content, for all ads, for all people.

Tim Rowe: Bill, when Kym Frank sent me your Amazon author profile, I almost didn't think it was the same Bill Harvey. I'm like, there's no way someone who has done as much for the media and advertising landscape also has time to write books. But you do. You're a published author. What do you write about?

Bill Harvey: Well, if you had to boil it down to one word, I write about consciousness, both fiction and nonfiction. My book Mind Magic is all about how to optimize the use of your consciousness. My book A Theory of Everything Including Consciousness and quote-unquote God is a nonfiction book about why the universe shows all the potential to be a single consciousness, and matter to be something consciousness has created as a virtual reality within consciousness. And of course my sci-fi novels are about consciousness, but they're about a fictional group of agents of cosmic intelligence, sent by the universal cosmic intelligence to Earth to protect it against a rebel uprising against universal consciousness. Even though the rebels are part of it, they refuse to believe they're part of it; therefore, they're attacking it. That's the overall theme of the four novels I've written so far.

To give away the why, what motivates me to write about all this stuff and what motivated me to get into the advertising business: I decided as a kid that I wanted to do something to make the world a better place. I wanted to leave it better than I found it. Then I discovered that there was some leverage to do that by using the media to influence consciousness, so I had to get into the media somehow. What I wanted to do was be a writer, just live in Hollywood and be a writer. My parents, being in show business, dissuaded me from that, saying: look, here are the actual odds; do that, but get a day job. So I said, okay, I'll get a day job in the advertising business. That'd be my hedge. I'll continue to try to be a writer who can get movies made and TV series made, but I won't count on it, because my parents are pretty smart; I'll trust them on that. But even if I fail at that, maybe I can make the tools better in advertising in a way that leads to some kind of improvement. And in fact, when you look at the 15 motivations, most of them are positive things: love, creativity, aspiration, competency, belonging. Then there are self-transcendence, meaning altruism, and self-knowledge; I would say my entire childhood was based on a search for self-knowledge, and my book Mind Magic was all about self-knowledge. So all of those are good things. Are there any bad things in the motivations? Well, maybe power; power may be dangerous, depending. Yeah, depending entirely. And then wealth and success, which could get us into trouble. And then status and prestige. Of the 15 motivations, those are probably the three to look out for most. But, you know, leadership, heroism, altruism: 12 good, three bad. And even those three aren't necessarily bad; it's a matter of how it comes out in the wash, how you use them, how you manifest them. Mostly good stuff. So if motivations get used more in planning advertising, creating advertising communication strategies, and writing creative briefs; if AIs and humans, working together or separately, can work on bringing out those motivations in their ads, the odds are it's going to improve behavior a little bit, just probabilistically.

Tim Rowe: Bill, I couldn't think of a better place to bring this conversation home. I think this is part one of a multi-part series; I would love to extend the conversation into some of the things that drive you outside of the research, the relationship all of it has with advertising, and how each of us listening to or watching this show has the opportunity to make a positive impact through the work we do today. So Bill, I can't thank you enough for being here. Give us the lat/long: where do we connect with you? Where do we learn more about what you're working on today?

Bill Harvey: Well, the Human Effectiveness Institute is probably the best place: humaneffectivenessinstitute.org. For somebody who wants to know more about the driver tags per se, it would be rmt.solutions. The only other website of mine that might contain stuff relevant to this is billharveyconsulting.com. Those are the three sites I'd recommend to learn more about my work.

Tim Rowe: Excellent. We will link to all of those in the show notes and accompanying addendums to this show. Bill, thank you again for giving as generously as you have and sharing a bit of history with us. Thank you.

Bill Harvey: Thank you, Tim. It was my pleasure. This is a tremendous amount of fun for me. So thank you so much.

Tim Rowe: Good. We'll have you back for part two. Looking forward to it. Thank you. Absolutely. Maybe in person. We'll do it in person. Love it. I think that'd be a lot of fun. If you found this episode to be helpful, please share it with someone who could benefit. As always, make sure to smash that subscribe button and we'll see y'all next time.