AI Deployment at Colgate-Palmolive
Today, we’re diving deep into the real-world implementation of AI at Colgate Palmolive with Klee Pappas, the Senior Director of Global Predictive Analytics and AI. Klee’s got some serious street cred when it comes to deploying AI effectively across a massive organization, and he’s here to share how he did it. He breaks down the strategies that led to a successful rollout to over 30,000 employees, balancing the needs of both office workers and manufacturing staff. We’ll explore how Klee tackled common barriers, like legal concerns, by fostering a collaborative culture with the legal team that encourages innovation rather than stifling it. Tune in to learn actionable insights that could help you navigate AI deployment in your own organization.
Takeaways:
- Deploying AI effectively requires collaboration with legal teams to navigate risks and approvals.
- Colgate Palmolive has successfully implemented AI across various departments, including manufacturing and logistics.
- The democratization of AI tools allows employees to build their own solutions tailored to specific needs.
- Engaging employees through training and hands-on projects significantly boosts adoption and innovation in AI initiatives.
Transcript
Hi, this is Adam Davidson, one of the co-founders of FeedForward, and today's podcast is a must-listen.
Speaker A:It's one I've been excited to bring you for months.
Speaker A:This is an interview with Klee Pappas, a very active member of the feedforward community.
Speaker A:He runs AI deployment at Colgate Palmolive and you may well have heard me talk about him.
Speaker A:In my view, he is the member who has most successfully deployed AI across the entire global enterprise.
Speaker A:The 30,000 people who work at Colgate Palmolive, about half of whom are knowledge workers in offices, and about half are in manufacturing plants, shipping and logistics.
Speaker A:And he's deployed it to both groups really well.
Speaker A:So on today's show you're going to learn how he did it and I'm sure get some very valuable lessons about how you might deploy AI at your firm.
Speaker B:Hi, Klee.
Speaker C:Hi, Adam.
Speaker B:What is your proper title at Colgate Palmolive?
Speaker C:I am the Senior Director of Global Predictive analytics and head of AI.
Speaker C:Great.
Speaker C:That's two jobs.
Speaker B:Yeah.
Speaker B:So probably most members of FeedForward have heard me talk about you and Colgate Palmolive as the model of deploying AI.
Speaker B:I see you as having the most comprehensive approach, the most engaged approach throughout a massive organization.
Speaker B:What I want to start with is really just walking through how you did it and how it all happened.
Speaker B:You and I have been talking for almost a year and I think when it started you had no idea that you were going to be way ahead of folks.
Speaker B:Maybe we just start at the beginning.
Speaker B:How did you have the idea to start working with AI?
Speaker C:Yep, it's a story.
Speaker C:And like most stories, I guess it had many twists and turns that led us to where we are now.
Speaker C:So, you know, before ChatGPT (GPT-3.5) was released, I had an existing team working with models, predecessors to the GPT models, that we thought were interesting.
Speaker C:So we were trying to use models to write product descriptions, which we canceled because they weren't any good at it.
Speaker C:This was a year before ChatGPT came out, and we were starting to see what the potential could be, but, you know, it was nowhere near good enough.
Speaker C:When we tested it with marketers, they said, this is terrible.
Speaker C:I thought AI was supposed to be good.
Speaker C:It's the class, right?
Speaker C:That's the future.
Speaker C:This is not the future.
Speaker B:And I should have just set some groundwork.
Speaker B:You're actually a chemist, right?
Speaker B:Your background is figuring out the chemistry that goes into, say, toothpaste or other products.
Speaker C:Right.
Speaker C:I came to Colgate as a bench scientist, someone who mixes together 20 ingredients; that's pretty much how you might imagine it.
Speaker C:And then it's a toothpaste or it's an antiperspirant.
Speaker C:I worked on antiperspirants and deodorants for a long time.
Speaker C:So if you have any questions, I am the guy.
Speaker C:I've got seven patents in the antiperspirant deodorant space and did a lot of fundamental research there.
Speaker C:I'm a chemist; I did my PhD in chemistry.
Speaker C:And that was about the time when Python was really coming into its own, and data science as we understand it now, cloud native and open source, was starting to move outside of technology companies.
Speaker C:And in my day job making formulas, I made an observation that in our databases we had something like 50,000 experimental toothpastes that scientists had made through the years.
Speaker B:Wow.
Speaker C:And they test it and it goes into a database, and then they get another marketing request for a new product and they just, you know, kind of start anew, because all the data is locked in the database.
Speaker C:Someone has always done the thing before.
Speaker C:There's a joke in chemistry that if you think you've discovered something new, actually you just don't read Russian, because there are all these Russian journals that no Western scientists read.
Speaker C:There was a lot of work in chemistry over a very long period that never entered the Western, you know, what is now mainstream.
Speaker C:So I started doing some, you know, basic machine learning modeling, building machine learning models.
Speaker C:You can give it a list of ingredients and it will predict the properties of that formula, whether it's the mouthfeel or viscosity or flavor profile.
Speaker C:And that's sort of how I got my start in data science.
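The kind of formula-property model described here, ingredient fractions in, predicted property out, can be sketched as follows. This is a toy illustration on synthetic data, not Colgate's actual system; the ingredient count, the noise level, and the choice of ridge regression are all assumptions.

```python
import numpy as np

# Toy sketch: learn to predict a formula property (say, viscosity) from
# its ingredient fractions. The data is synthetic; a real version would
# train on the historical database of experimental formulas.
rng = np.random.default_rng(0)

n_formulas, n_ingredients = 500, 20
X = rng.dirichlet(np.ones(n_ingredients), size=n_formulas)  # fractions sum to 1
true_w = rng.normal(size=n_ingredients)                     # hidden "chemistry"
y = X @ true_w + rng.normal(scale=0.01, size=n_formulas)    # measured property

# Ridge regression fit via the closed-form normal equations.
lam = 1e-6
w = np.linalg.solve(X.T @ X + lam * np.eye(n_ingredients), X.T @ y)

# Predict the property of a new candidate formula before mixing it.
candidate = rng.dirichlet(np.ones(n_ingredients))
predicted = float(candidate @ w)
```

The payoff is exactly the one described in the interview: a scientist can score a candidate formula against 50,000 past experiments before ever going to the bench.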
Speaker C:I built out a data science team at Colgate, and it evolved into more spaces.
Speaker C:And then when ChatGPT emerged, there's obviously a lot more attention on AI, and I was poised as a person to take it over.
Speaker B:Great.
Speaker B:So you early on have this idea like, hey, we should figure out how to use ChatGPT and Colgate Palmolive.
Speaker B:I guess one interesting thing that has come up in our conversations, I remember telling you that one place I see people really tripped up in large organizations is the general counsel often just shuts down AI deployments.
Speaker B:Oh, we got to look into this.
Speaker B:There's a lot of risks.
Speaker B:Colgate Palmolive obviously has a lot of IP.
Speaker B:And you said, oh, that wasn't a problem.
Speaker B:Can you explain why you were able to work so well with the general counsel?
Speaker C:Yeah.
Speaker C:One of the key pillars of what we've done has been the engagement of our legal team.
Speaker C:Our legal organization, for a couple of years now, has had a mantra called path to yes, which is: don't just blanket say no to most things.
Speaker C:There needs to be a conversation around what the path to yes is.
Speaker B:What is the path to yes?
Speaker B:I've dealt with legal teams that definitely blanket no to most things.
Speaker B:Or, yes, so you can do it, but.
Speaker C:So that is, yes, you can do it, but you shouldn't.
Speaker C:Right.
Speaker C:And internally, the company identified AI as an enterprise risk, which put some formalization around a process for what the AI risks to the company are.
Speaker C:And so that risk team itself was myself and a person in our legal organization.
Speaker C:One of our chief counsels.
Speaker C:And she, who is actually a member of the FeedForward collective, was joined at the hip with me from the very start.
Speaker C:At the very beginning of ChatGPT coming out.
Speaker C:Even the most basic question of, ChatGPT is here, can we use it, is confidentiality maintained?
Speaker C:I think a lot of firms at that earliest stage just said, no, you can't use it; the news tells me that it's going to steal your information.
Speaker C:Our team went through the terms of service and read them.
Speaker C:We reached out to OpenAI, which took some time to hear back because everyone was reaching out, but we just engaged.
Speaker C:And of course, when you engage with them, you find out that there are paths forward to all these things.
Speaker C:There are paths forward on terms of service at the time.
Speaker C:There's paths forward to engage with them on a contract.
Speaker C:Hard nos have never been something that I've faced.
Speaker C:And I think it's because at the top of all the decision making are myself and a senior person from the legal organization who has this mantra of, what is the path to yes across all of these different areas, and we bring people in early.
Speaker B:So it wasn't like you were coming up with a comprehensive plan and then legal reviewed it.
Speaker B:It sounds like they were kind of holding your hand; you were together throughout.
Speaker B:Which probably also built trust.
Speaker C:Yeah.
Speaker C:And legal had accountability for the outcomes.
Speaker C:I think there's some ways that it's set up where you reach out to legal for approval and it's transactional.
Speaker C:It's in their interest to be very conservative.
Speaker C:You know, you ask a question, I give an answer.
Speaker C:And I'd prefer not to have to eat my words on the answer later or hear that something went wrong.
Speaker C:Making the legal team fundamentally part of, what are we doing with AI, from the very beginning meant they were part of the project team, and she and the others who were involved from her end had a vested interest in making it successful.
Speaker C:Not just in de-risking, because you can de-risk a thing entirely.
Speaker C:But that doesn't get you anywhere in the end.
Speaker B:All right.
Speaker B:So fabulous.
Speaker B:So you're.
Speaker B:You're allowed to launch.
Speaker B:I basically want to walk through what happened next.
Speaker C:You bet. ChatGPT comes out and we immediately realize, well, we can't buy licenses for the whole company.
Speaker C:So this was before they even had an enterprise plan.
Speaker C:It was just everyone gets the same plan and you could have a paid plan or not.
Speaker C:I mean, we were not going to buy a small number of licenses for, you know, an illuminati group of people while everyone else gets nothing.
Speaker C:That just seemed to me like entirely the wrong thing to do.
Speaker C:Licenses for all of the desk workers at Colgate, which is about 15,000 or 16,000 people, were economically not feasible; you know, midway through a budget season, there was no way to really justify the spend.
Speaker C:And so the question is, well, what are we going to do?
Speaker C:Because I adamantly believed we couldn't just have a select group of people who have the capabilities and a group of people who don't.
Speaker C:But on the other hand, we can't buy it for everybody.
Speaker C:So that led us to saying, well, we just need to use the API and it has to be consumption based, and that means I have to build a platform that connects people to the OpenAI API.
Speaker B:And I think pretty much everyone listening understands, but I can't help but be the former Planet Money guy and explain the technical terms.
Speaker B:So that means instead of buying accounts with ChatGPT, you create your own web interface; using an API means you are able to call on that same core model, potentially with a different system prompt.
Speaker B:You're just paying for however many tokens, however much people use it, as opposed to paying per user.
Speaker C:Yeah, that's right.
Speaker C:We just built a site that looked kind of like ChatGPT. The basic chat model was, you pay per month for unlimited use per person, and we knew, especially at that time, we're not going to get our money's worth out of unlimited use for everybody in the company.
Speaker C:People are still learning what this thing is and where to use it.
Speaker C:So what mattered more was getting as many people as possible to do something small, with usage-based pricing where every time someone asks a question, we pay per token, you know, per cent.
Speaker C:It meant the whole company could have access while we paid for the actual usage.
Speaker C:At that time it made a ton of sense: usage was basically zero because no one was using anything yet, and it needed to ramp across the organization.
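The back-of-the-envelope trade described here can be sketched in a few lines. All the rates below are made-up round numbers for illustration, not Colgate's or OpenAI's actual prices.

```python
# Illustrative comparison of per-seat vs consumption (per-token) pricing.
SEAT_PRICE_PER_MONTH = 20.00     # flat fee per licensed user (assumed)
PRICE_PER_1K_TOKENS = 0.002      # API rate, paid only on actual usage (assumed)

def seat_cost(users: int) -> float:
    """Per-seat: pay for every licensed user, active or not."""
    return users * SEAT_PRICE_PER_MONTH

def api_cost(users: int, avg_tokens_per_user: float) -> float:
    """Consumption: pay only for tokens actually consumed."""
    return users * avg_tokens_per_user / 1000 * PRICE_PER_1K_TOKENS

# With roughly 15,000 employees and early usage near zero, the gap is huge:
everyone_seats = seat_cost(15_000)        # flat cost regardless of usage
everyone_api = api_cost(15_000, 50_000)   # far smaller at light usage
```

Under these assumed rates, seats for everyone run to six figures a month while light API usage costs a few thousand, which is the whole argument for consumption pricing during the ramp-up phase.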
Speaker B:And so the secondary benefit is you're building a team that knows how to spin this up; that's also its own value.
Speaker C:A custom, dedicated group of people who are in the know, or whose roles are now on a path to, you know, turn into a product team.
Speaker C:At that time we decided to call the platform AI Hub. The idea was there were lots of language models, we were going to be agnostic, and so it's a hub, and maybe we'll also want to have link-outs to other AI tools that have been emerging across the company.
Speaker C:That's what we initially called AI Hub.
Speaker C:So we launched in honestly like three months, I think, so in January or February.
Speaker C:ChatGPT came out in November.
Speaker B:So we're talking about early '23, or?
Speaker C:Is that 2023?
Speaker B:So we're talking about January 2023.
Speaker C:2023, yeah. That was the first incarnation of AI Hub.
Speaker B:And when I saw it, it was integrated into the intranet.
Speaker C:Yeah.
Speaker C:So for Colgate, if you're on our network, you go to the AI Hub URL and you can get in and start chatting. There's almost no difference between the back end and chatgpt.com, except the user interface, which was simple to write, I'd say.
Speaker B:You launched this thing in January 2023.
Speaker C:Yep.
Speaker B:Do people use it?
Speaker C:We captured maybe 5 or 10% of people, the ones who were following what was happening with gen AI around the world, had heard about ChatGPT, and were proactively reaching out.
Speaker C:There was fanfare and global communication, and we put out training, which we made mandatory for the whole company, and which was really more policy oriented.
Speaker C:A little bit about the internal platform.
Speaker C:The history of the platform in number of users is a monotonically, slightly increasing straight line, like every week adding another 20 people, another 20 unique users, as people would do a thing and then their colleagues would find out what they were doing.
Speaker B:Yeah there's some data that I mean literally just physical proximity to someone who's using it in a company makes you like if the person one or two desks over is using it, you're significantly more likely to use it than if the person's five desks over or something like that.
Speaker B:So there is this word of mouth phenomenon.
Speaker C:Nothing makes me more happy than when I'm walking through our offices and I see AI Hub open on a person's screen, or hear people talking about using it.
Speaker C:It's like, oh wow, this is the real thing.
Speaker C:In early '23, one of the biggest use cases companies were talking about was enterprise search.
Speaker C:This was naturally people thinking, what does gen AI mean for my company? Oh, well, finally we can solve this problem of people wanting to find company information.
Speaker C:For me that was always a tough first use case to start with, because I have to show value.
Speaker C:So how are you going to show the value of someone finding their vacation days faster?
Speaker C:You don't get a big impact value long term from it; it is a hard problem.
Speaker C:Secondly, the problem in especially large organizations is that data doesn't really live anywhere.
Speaker C:It's not structured.
Speaker C:And you have lots of files that are essentially the same thing, but actually one is outdated and doesn't have the right information.
Speaker C:Enterprise search.
Speaker C:How are we going to do enterprise search?
Speaker C:And we sort of decided, as the team at the time, that we're never going to be able to find these documents and get them cleaned up and organized enough to build a centralized chatbot.
Speaker C:So we said, what if we let people build their own chatbots, and then they can curate their own data into that chatbot and share it with other people?
Speaker C:So instead of me in the center building out a grand master enterprise search system, the HR team builds their HR search bot.
Speaker C:They know what the right data is for HR and what's current, and they own it.
Speaker C:The long game was, eventually, if teams, you know, build localized AI search systems, then we can connect them all together, and they've already established who owns the documents and who's in charge of keeping them clean.
Speaker C:So this was how we were going to do enterprise search.
Speaker C:We're going to let teams build chatbots, let them curate their own documents, and that's what became what we call AI Assistants.
Speaker B:So the democratization, your job is not to solve the end problem.
Speaker B:Your job is to give the tools so that people can solve their own.
Speaker B:Which to me today feels like a fairly obvious move.
Speaker B:But in the conversations I was having back then, it wasn't an obvious move, and there are still a lot of firms where locking it down, controlling it, preventing mistakes is the norm.
Speaker C:At that time, most people were zigging, and we zagged on that one, right?
Speaker C:Like, you know, companies were doing the thing where only the IT team or a small set of people got access.
Speaker C:And what we said was, everyone's gonna get access.
Speaker C:Oh, and by the way, all of you can build your own bots.
Speaker C:And the reason for that was we had no central way to curate all the data.
Speaker C:And so the only way to do that was to say, well, you need to build your own bot, and then you own what data goes into it, and you're the data steward; very clear.
Speaker C:What we realized from that was people weren't just starting to build, you know, an HR bot where they, you know, upload vacation calendars.
Speaker C:They were building AI bots where someone in a manufacturing plant is putting instrument manuals in, or someone who works in marketing is putting in SEO guidelines or brand voice guidelines.
Speaker C:What we saw is people were starting to build these things that were not just making it easier to search through enterprise data; they were giving the model context and an instruction.
Speaker C:Built for a particular purpose.
Speaker C:And we said, boy, isn't that great?
Speaker C:That's a great idea.
Speaker C:You can build it, and, you know, it looks like the App Store, and you can share it with other people.
Speaker C:We built up the platform and built out the capabilities for that, so that people could share with each other in a low-barrier way.
Speaker C:One of the key elements to making the whole flywheel turn was that if you built an AI assistant, you could share it with people without having to ask permission.
Speaker C:You could just do it, because you're in the platform, and the platform is secure already.
Speaker C:We've already got that figured out.
Speaker C:And if you're not sharing with a large group of people, it's unlikely that you're going to cause a lot of harm or create a lot of risk.
Speaker C:At the same time, there was a study published by LinkedIn called the Work Trend Index that year, and there was this stat in there that a majority of employees say that they have used AI at work, and of those, 70% say that they use their own AI tools.
Speaker C:So we really looked at this from a risk standpoint, which was if we don't give people these capabilities, the data is telling us that they're going to do it anyway and they're going to do it through an unsecure platform.
Speaker C:And that was a big part of making the risk equation work because not everyone's on the same page that we should be opening things up in that way.
Speaker C:Sure, if you let them create, some people won't do a good job, or people will make decisions off of hallucinated answers; but given the alternative, that they're going to do it anyway on an unsecured platform, this was actually the least risky thing to do.
Speaker C:So even from a risk standpoint, we could say that what we're building is really a risk mitigation platform first, and second it's about capabilities.
Speaker C:I think that brought us some runway on, you know, the value of the whole thing, because it just became about, how do we keep people from buying a public account or accessing something like ChatGPT?
Speaker B:Right.
Speaker B:And I love all of this. First off, there's just a steady drumbeat of more access.
Speaker B:Put it in people's hands.
Speaker B:Thoughtful risk.
Speaker B:Nothing you're saying speaks of a cavalier attitude towards risk.
Speaker B:It's actually very thoughtful about the risk.
Speaker B:I love the design.
Speaker B:I want to get into that.
Speaker B:Anyone can create anything, but once it hits a certain critical mass of users, that's when we'll take a peek at it.
Speaker B:It's really great.
Speaker B:I make that point all the time.
Speaker B:A corporation doesn't get to decide if their employees use AI for their work because they use AI for their work.
Speaker B:They get to decide if they're going to do it within company guidelines or not.
Speaker B:That's the decision you get to make.
Speaker B:And also, will they tell you about it?
Speaker B:Will they come up with some really cool tool that allows them to do their job better and then feel like they better not tell anyone because you might fire them?
Speaker B:So they're going to use that tool and not let anyone else use it so you don't get any of that benefit.
Speaker B:So, okay, people are allowed to create a tool.
Speaker B:They can share it with friends, with colleagues.
Speaker B:But once it hits 10 or 20, what happens then?
Speaker B:First 10, and now 20, other people at Colgate Palmolive use that same tool, right?
Speaker C:Yep.
Speaker C:The important part about having the platform, AI Hub, built internally is that it's tied in with our employee management system, so we know what country they're from and what function they're from.
Speaker C:This means if you build an AI assistant and want to share it, you can pair it with all of marketing, or all of marketing in Germany.
Speaker C:In the actual platform you can press a button that says deploy and pick the groups that you want to share with based on their organizational, you know, function or where in the world they are.
Speaker C:This also starts to set up domains of assistants: if a person in marketing comes onto the platform, we can show them the marketing assistants.
Speaker C:But on top of that, the way it works is that deploying to a larger group or to a whole function pushes you over to a governance process that we have internally.
Speaker C:For the risk review, the builder writes question-and-answer pairs for their assistant.
Speaker C:Here's a question that I asked, here's the answer I expected, and here's the answer that I got.
Speaker C:Does it pass or fail?
Speaker C:And what was the quality of the response?
Speaker C:There's a form for writing some evals in different categories. Someone from the technical team reviews the evals, and someone from the local team, like a designated individual from marketing, would review it from the marketing side to make sure it is strategically aligned.
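The eval record being described, a question, an expected answer, the assistant's actual answer, a pass/fail judgment, and a quality rating, could be captured in a structure as simple as this. It is a hypothetical sketch; the field names and example cases are invented, and the real governance form surely differs.

```python
from dataclasses import dataclass

# Hypothetical sketch of one eval record in the review process above.
@dataclass
class EvalCase:
    question: str       # the question the builder asked
    expected: str       # the answer the builder expected
    actual: str         # the answer the assistant actually gave
    passed: bool        # reviewer's pass/fail judgment
    quality: int        # reviewer's quality rating, e.g. 1-5

def summarize(cases: list[EvalCase]) -> tuple[float, float]:
    """Pass rate across all cases, and mean quality over the set."""
    if not cases:
        return 0.0, 0.0
    pass_rate = sum(c.passed for c in cases) / len(cases)
    mean_quality = sum(c.quality for c in cases) / len(cases)
    return pass_rate, mean_quality

cases = [
    EvalCase("What does alarm code E12 mean?", "Low hydraulic pressure",
             "Low hydraulic pressure on line 2", passed=True, quality=5),
    EvalCase("What is the max line speed?", "120 units per minute",
             "I couldn't find that in the documents", passed=False, quality=1),
]
rate, quality = summarize(cases)   # 0.5 pass rate, mean quality 3.0
```

Keeping the record this lightweight matters: the whole point of the process is that a non-technical builder can fill it in without friction.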
Speaker C:This is not the fifth version of the same thing that's already been created; it's just visibility into what people are doing.
Speaker C:Designated individuals from different functions are separately being asked, what are we doing on AI?
Speaker C:As the drumbeat increased, this gave them a mechanism to go into what people have built and say, this is what we're doing.
Speaker C:Someone made an assistant for the manufacturing plant, or built something that helps write copy; it codifies what people are doing.
Speaker C:Because we could say who is it shared with?
Speaker C:What function is it for?
Speaker C:You know the description of the assistant and have a designated person from the business who has some ownership and accountability but also visibility into it.
Speaker C:Because otherwise, you know, they wouldn't know what was happening.
Speaker B:And just to be clear, that is all triggered once the 10th user or the 20th user uses it.
Speaker C:The person who builds it, they'll say, oh, I'm sharing with these 10 people.
Speaker C:If they want to go further, they say who the audience is that they want to share with.
Speaker C:Based on the audience they select, the right approver from the business side would get tagged onto the request as a business approver, and someone from the technical team would get tagged on as the technical approver.
Speaker C:Part of it is that you generally don't have to talk to anybody; it's as low barrier as possible.
Speaker C:You don't need to send an email to someone.
Speaker C:You just need to, in the experience, click the drop-down and say who you want to share it with.
Speaker C:It sends you to a form.
Speaker C:You answer the questions on the form, and then we follow up only if there are problems; keeping the barrier low for people is an important part of the whole thing.
Speaker B:And what percentage would you say are approved?
Speaker B:Do you have a sense are most approved?
Speaker C:Almost all are approved.
Speaker C:There might be a back and forth where we say your evals aren't good, or the business team will look at it and say something like, reach out to this person and ask if you can combine.
Speaker C:I can't think of any cases where we just said, no, you can't do this.
Speaker C:Usually it's back and forth on iterating or combining efforts.
Speaker C:Few if any are flat out rejected.
Speaker B:And then how does any one person know what tools are available to them?
Speaker B:Like what colleague created tools?
Speaker C:Yeah, so we have this AI registry process, which covers anything, not just AI bots or assistants you've built, but any AI tool around the company.
Speaker C:Purchased software with AI in it also goes into the registry, and the registry is publicly searchable by the whole company.
Speaker C:There's a link to that registry from our intranet main page.
Speaker C:If you go there, very prominently there's a button that you can click on, and you can see all the AI tools, systems, platforms, agents, and use cases across the company.
Speaker C:So that's the central place to search all things AI.
Speaker C:In AI Hub itself, depending on your persona when you authenticate in, if you're in sales, we show you all the assistants that have been built for sales.
Speaker C:They are arranged in cards with a nice little picture.
Speaker C:You authenticate your way in with employee credentials, so we already know a lot about you and can show you the things that are relevant for you.
Speaker C:So there's also in-app discovery.
Speaker C:Now that would be called, you know, an agent platform, if we're talking in the agent paradigm.
Speaker C:But that's essentially what the platform is.
Speaker B:That's awesome.
Speaker B:All right.
Speaker B:Do you know how people are using that?
Speaker B:Can you read their discussions with the AI?
Speaker C:No.
Speaker C:To get at the how: internally we have mechanisms where we can see anything that's going on on the platform.
Speaker C:If you send emails from a company email, we could get at those too; but we are not in the contents of people's chats, not pointedly looking into your actual chat.
Speaker C:What we're looking at is the types of assistants that get built and the choice to share an assistant.
Speaker C:We're looking at the metadata, which tells us a lot about use cases beyond the general productivity stuff, like help me write this email, where we can guess what those things are.
Speaker C:Most of the interesting high value business use cases require underlying documents and some sort of system prompt.
Speaker C:That will mean that you have built an assistant, and we have done a lot of meta-analysis around the types of assistants that people were building.
Speaker C:And when you build an assistant and share it, we know how many people talk to it, how often they talk to it, and, each time they talk to it, how many chats they have back and forth.
Speaker C:Being a marketplace, in the same way Apple gauges the success of apps on the App Store based on engagement, we also have engagement metrics for the assistants that people have built and shared, which help us surface the ones that are turning out to be valuable.
Speaker C:We've done topic modeling, looking at the different types of assistants.
Speaker C:That then creates this loop, because we can feed that information back to the business people, the accountable people in each of their divisions and functions, tell them these are the types of things that different teams are doing, and they can, you know, feed that to their respective teams.
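An App Store-style engagement ranking like the one described could look roughly like this. The record fields and the scoring formula are invented for illustration; the actual metrics used on the platform are not specified in the interview.

```python
import math

# Illustrative engagement ranking for shared assistants.
assistants = [
    # (name, unique users, total conversations) -- made-up figures
    ("HR policy bot", 400, 1200),
    ("Plant manuals bot", 60, 900),
    ("Brand voice bot", 150, 300),
]

def engagement(users: int, conversations: int) -> float:
    """Assumed metric: depth (conversations per user) damped by log of reach."""
    if users == 0:
        return 0.0
    depth = conversations / users
    return depth * math.log1p(users)

# Surface the assistants that are turning out to be valuable.
ranked = sorted(assistants, key=lambda a: engagement(a[1], a[2]), reverse=True)
```

Note how a depth-weighted score can surface a niche but heavily used tool (a plant-floor bot with few users but many conversations each) ahead of a broad but shallow one, which matches the kind of discovery described here.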
Speaker B:I want to get to that, because this is just one of the things, and we'll get to some other things you're doing, but do you have a sense of roughly how many agents or tools have been built?
Speaker B:What are the metrics around how they're being used?
Speaker C:Yeah, so on the order of 10,000 at this point.
Speaker B:And you told me about the number in manufacturing.
Speaker B:I do love that you told me once that one of the best ones was from a guy at a factory in Saudi Arabia.
Speaker C:There was the guy in Saudi Arabia, and a guy at a plant in Greece.
Speaker C:The thing that's amazing is that in the first six months, 3,000 of these assistants got built, and not everyone, if you look, has a computer in front of them.
Speaker C:Lots of manufacturing plant workers don't have a physical computer; they work from shared workstations.
Speaker C:But even so, we have plant workers who are consistent users.
Speaker C:There are stories, like the person from a plant in Greece who reaches out sending me screenshots, and I see that he has assistants built, one for each line, on manuals written in German, so that a guy who only speaks Greek can ask about an alarm code from a manual in German.
Speaker C:In other parts of the world, within supply chain, we saw a lot of immediate liftoff just because of the nature of what they do.
Speaker C:They're dealing with documentation.
Speaker C:They have standard operating procedures that govern how they do the work that they do.
Speaker C:Complex training, where the manuals are hard to interpret and written in a different language.
Speaker C:They really took to it.
Speaker C:So we're about 15,000 desk workers.
Speaker C:About 5,000 use the AI every week, so about a third.
Speaker C:Almost everyone has logged in and used it, and a few thousand have built assistants for their own purposes.
Speaker C:The surprising metric is the percentage of people who took that next step.
Speaker C:I think the people who are weekly users were likely already the self selected group who are sort of out front.
Speaker C:And so it's not surprising that some of them decided to build a thing over a six-month period.
Speaker C:We get these questions in surveys sometimes, like, how many AI use cases are productionized?
Speaker C:Is it one, is it two or is it three?
Speaker C:And I'm like well I don't know how to answer this because I have like 3,000 chat bots that people built and they're technically productionized.
Speaker C:Like it's a production system.
Speaker C:They've shared them with other people. So is the answer more than three, or do they mean something different than what we've done?
Speaker C:I write that in the comments.
Speaker C:So it made me question what productionized even means in a world where you have thousands of people who can build their own thing and share it with a couple people or share it with other people.
Speaker B:Yeah, that's awesome.
Speaker B:And I want to kind of go through the whole story.
Speaker B:I have a bunch of questions around it.
Speaker B:I feel like it'll be helpful to get the full picture.
Speaker B:So when I came into your offices, which was at least six months ago, you were in the early stages of planning some additional AI deployments, some sprints.
Speaker B:Can you walk through and let me know what happened?
Speaker B:Because all I know is you were planning it.
Speaker C:Yep.
Speaker C:There was a key thing we implemented in AI Hub that made this program possible, and that was around the assistants.
Speaker C:And it's a topic we had to think about early on: how do we communicate the value that people get from this?
Speaker C:It's one thing to say people have sent a million messages into the platform.
Speaker C:Therefore it must be valuable because people generally don't like wasting their time.
Speaker B:That has not been my experience.
Speaker C:It also seems obvious now; this has been replicated in tools like Gemini.
Speaker C:When you talk to an assistant, after a certain number of conversations, you're forced to answer a short survey, a couple of questions.
Speaker C:They're actually based on the Microsoft LinkedIn questions.
Speaker C:On a 5-point Likert scale, how much time has this saved you?
Speaker C:On a 5-point Likert scale, how has this impacted the creativity of your work, from significantly decreased to significantly increased?
Speaker C:How has it impacted the quality of your work?
Speaker C:Has it replaced a vendor?
Speaker C:If so, which vendor did it replace?
Speaker C:Has it offset spending?
Speaker C:So it's simple.
Speaker C:Six or seven questions that we ask, and you're locked out of AI Hub until you answer.
Speaker C:So you can't talk to the assistant anymore until you answer our questions.
Speaker C:For every single assistant that gets built, not only do we know how many people talk to it and how often they talk to it and where are they in the world.
Speaker C:We have a per-person metric: when this person has a conversation, it saves them 10 minutes.
Speaker C:And when that person in a different function or a different part of the world has a conversation, it saves them 20 minutes.
Speaker C:So then if I want to add up total savings, I don't have to do this crazy statistical exercise.
Speaker C:Adam has had 10 conversations and he's told me over time that his conversations save him 10 minutes.
Speaker C:And so that's 100 minutes against Adam.
Speaker C:And I can add that up for every person.
Speaker C:It gives us a way to capture the value.
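(A minimal sketch of the per-person rollup Klee describes. The names, fields, and numbers here are illustrative assumptions, not Colgate's actual AI Hub implementation.)

```python
from dataclasses import dataclass

@dataclass
class UserStats:
    conversations: int        # total conversations with an assistant
    avg_minutes_saved: float  # self-reported via the periodic survey

def total_minutes_generated(stats: dict) -> float:
    """Sum each user's (conversations x self-reported minutes saved).

    Because every user reports their own per-conversation figure,
    the rollup is a straight multiply-and-add, with no statistical
    modelling required.
    """
    return sum(u.conversations * u.avg_minutes_saved for u in stats.values())

# Example: Adam reports 10 minutes saved per conversation over 10 chats,
# a colleague reports 20 minutes per conversation over 5 chats.
stats = {
    "adam": UserStats(conversations=10, avg_minutes_saved=10.0),
    "maria": UserStats(conversations=5, avg_minutes_saved=20.0),
}
print(total_minutes_generated(stats))  # prints 200.0
```

The same per-user records can then be sliced by assistant, function, or region to produce the owner-facing impact reports mentioned below.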
Speaker C:And also what that means is that for the people who build the assistants, they can see the impact, right?
Speaker C:Because they're spending time curating materials and building this thing.
Speaker C:And if you've spent the time to build it, boy, isn't it great that you can get a report that says this thing that I built has saved the company 10,000 hours.
Speaker C:And 500 people have said that 80% of the chats increase the quality of their work.
Speaker C:To be able to bring that back to your manager and say, manager, I'm spending time on this.
Speaker C:Spend more time.
Speaker B:I spent 10 hours creating this, which probably is long.
Speaker B:It saved the company 10,000 hours.
Speaker B:In some cases I see time as a really good metric.
Speaker B:I did something over a weekend that would have taken me a month.
Speaker B:But I would say in many cases I would measure AI not in time saved.
Speaker B:It might even add time, but in kind of level of ambition of the projects I'm doing.
Speaker B:It allows me to be more creative.
Speaker B:It allows me to think of bigger things to do.
Speaker B:So I'm not always sold that that's the full picture.
Speaker B:And I know you're not saying that's the only metric that matters, but that time saved is one dimension but there's so many others that I think matter too.
Speaker B:Those are much harder to capture.
Speaker B:But you do ask at least, right?
Speaker B:I remember.
Speaker C:Yeah.
Speaker B:There's something about strategic thinking and how does it help?
Speaker C:Right.
Speaker C:I mean, it's self-reported on the survey.
Speaker C:And that's really about task quality. Time is a funny one, because if you ask people the impact, they will answer with some data tied to time.
Speaker C:But when you're telling someone what the impact is, they don't want to hear time, because the numbers don't even make sense.
Speaker C:It's like 200,000 hours, and it's hard to compute what that means.
Speaker C:Even if that breaks down into three hours per person per week for a couple thousand people, that ends up being some number that's too large to understand.
Speaker C:It's not really saved, either. The way that I talk about it, I don't say saved.
Speaker C:I use the word generated.
Speaker C:You're still spending that time doing work.
Speaker C:You're just doing a different type of work.
Speaker B:What are you doing instead?
Speaker C:Right.
Speaker B:Ten hours.
Speaker B:Do I just go hang out with my friends for 10 hours or do I spend those 10 hours doing something really high value?
Speaker C:Yeah, yeah.
Speaker C:If you were to make a presentation, you will make that presentation in an hour.
Speaker C:But if in that hour you use gen AI, you will end up with a presentation that would have taken you three hours.
Speaker C:Is that time generated or is that quality of work?
Speaker C:When you read the question, you might say it saved me two hours.
Speaker C:Or you might say it saved me no hours, because I was going to spend an hour anyway.
Speaker C:But what I produced was much higher quality, or would have been.
Speaker C:Time is a weird one.
Speaker C:It's a convenient metric because it's a number that you can apply and add and people have some sense of what it means.
Speaker C:But it is imperfect.
Speaker C:Oh yeah, let me get back to the original genesis of the question, which was around the sprints.
Speaker C:Many people were building these assistants, and we had a mechanism to track the value.
Speaker C:We established what the company's priorities were going to be.
Speaker C:What's the company strategy around AI?
Speaker C:Where are we going to focus? And okay, on top of all the assistants that people are building against the things they already have to do, making those things easier or improving the quality of that time, we wanted people to start working on these high-level strategic goals.
Speaker C:No central team is large enough to build all the things in AI that the company needs, so the idea was, well, let's just put together these sprint teams across different functions and different geographies around the world and pick, you know, two dozen people.
Speaker C:You're going to get quality hands-on training for how to prioritize use cases, build assistants, and test them.
Speaker C:And your job is to take the strategy, do this prioritization of what's possible with the current technology, and decide what will deliver the most value.
Speaker C:And at the end of this program you're going to actually build this thing and deploy it.
Speaker C:So most training programs you do the training but articulating the value from it is very difficult because it's like well, a bunch of people learned a bunch of things and we all feel like that's good but what did you get from it?
Speaker C:And so that's the beauty of the sprints: you had this group of people who learned a thing, but then at the end they built and deployed an assistant which was aligned with the company strategy, and the thing they built had the value-tracking mechanism built in.
Speaker C:And so one month later they can say how many people use it and what the value was.
Speaker C:And two months later, and so on. The program was extraordinarily successful.
Speaker C:Everyone involved felt good about it because they were building a thing, and it feels good to have a tangible thing that you've put together and can be proud of.
Speaker C:I was very happy because it was a mechanism to get these strategic things built that we were trying to build, but have it be done by the users.
Speaker C:The people who not only understand it the most, but who are also best poised to sell it in and change the other processes, as opposed.
Speaker B:To just hiring some consultant or outside.
Speaker C:Developer in New York to build it. They wouldn't do a good job, because they don't understand the underlying problems, and they can't talk the talk to be able to sell it to the local teams or divisional teams.
Speaker B:If you flew to a factory in Saudi Arabia to tell them how you had just solved all their problems.
Speaker B:Yeah, you wouldn't know.
Speaker B:So I think our members are going to want to really know like precisely how you did this.
Speaker B:So walk us through how you set up the teams.
Speaker B:Maybe can we start with just who is your team?
Speaker B:How many people are on your team?
Speaker C:My team, I work on a global team out of New Jersey, and it has two different jobs.
Speaker C:So I have a team, but part of the people are allocated to one thing and part of the people do the AI thing.
Speaker C:Across the technical teams it's less than a dozen people, plus some governance experts and so on.
Speaker C:We cover the entire world.
Speaker C:And AI, not just gen AI, across pretty wide functions, right.
Speaker C:And so a really wide remit for a small team.
Speaker B:Are you in IT, or who do you report into?
Speaker C:Our chief analytics officer. We sit within an analytics group.
Speaker C:It's interesting, because AI started in IT. I'm slightly closer to the business side of the company, working in analytics with business stakeholders.
Speaker C:But we're more tied into what the business needs.
Speaker C:So that's my small team. We have to get creative with how we scale things, because there's only so many people.
Speaker B:Right.
Speaker B:Walk me through the Sprint design.
Speaker B:How many teams were there?
Speaker C:There were five sprint teams, each with a handful of people.
Speaker B:And how did you pick who got to be in that?
Speaker C:How did we pick who was on the teams?
Speaker C:That was based on company strategy.
Speaker C:Priority areas for us is marketing, innovation and operations.
Speaker C:We selected teams that laddered up to those, you know, like a marketing team, a supply chain team.
Speaker C:And so on.
Speaker C:Our governance structure is that every line of business has one designated person who is part of our AI leadership team.
Speaker C:Supply chain has one person who represents AI for all of supply chain and marketing has one person.
Speaker C:And each unit like our Europe unit has a person.
Speaker C:They were the ones who picked the people, because they know the people in their organizations who are the go-getters and who are best poised to do it.
Speaker B:So it was like and this is something we talk about a lot.
Speaker B:Ethan talks about it a lot.
Speaker B:It's sort of like a lab, a short term lab.
Speaker B:And then the other thing I was curious about is how prescriptive versus open were the strategic goals?
Speaker B:Was it, we want to make more stuff, figure out what that means, or was it more narrow than that?
Speaker B:Like we want more pediatric healthcare products in Q4.
Speaker B:How did you define the strategy in the problem space?
Speaker C:In marketing we want to build engagement.
Speaker C:The marketing Sprint team was tasked with that.
Speaker C:Part of the program for them was they had to understand what language models could do at the time we ran it, then take the topic of engagement, list all the things that we need to do, rank whether or not each was feasible, and so decide what they were actually going to build and go after, with the broad-level goal of engagement.
Speaker C:In operations it might be reducing unplanned downtime; in marketing you're trying to build engagement.
Speaker C:But I think it was important to not be prescriptive, because from this enterprise standpoint we don't have visibility that far down into how work gets done.
Speaker C:It's not like briefing out a concept, where you need something very specific and you can tell them to build an assistant for it.
Speaker C:It has to be the people literally on the ground doing the doing.
Speaker C:And so they have to be the ones to define what is actually going to, you know, move the needle on engagement.
Speaker C:So there was a lot of structure around the program in terms of okay, week one you're going to meet twice.
Speaker C:First you're going to get an overview of what these models are, and then you're going to learn about assistants.
Speaker C:And week two you're going to get a framework that will help you write out the use cases and help you score how complex they would be to build with an assistant.
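(A rough illustration of a scoring framework like the one described. The rubric, fields, and example use cases are hypothetical, not Colgate's actual scorecard.)

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: int       # expected business value, 1 (low) to 5 (high)
    complexity: int  # effort to build as an assistant, 1 (easy) to 5 (hard)

def prioritize(cases: list) -> list:
    """Rank use cases so that high-value, low-complexity builds come first."""
    return sorted(cases, key=lambda c: c.value - c.complexity, reverse=True)

backlog = [
    UseCase("summarize consumer feedback", value=4, complexity=1),
    UseCase("auto-generate campaign briefs", value=5, complexity=3),
    UseCase("forecast unplanned downtime", value=5, complexity=5),
]
for c in prioritize(backlog):
    print(c.name, c.value - c.complexity)
```

A two-axis score like this gives the sprint teams a shared, defensible way to decide which assistant to build first with the current technology.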
Speaker B:So the training: these were people who were super interested and proactive, but they weren't necessarily coders or super experienced people.
Speaker C:At one level, nothing really complicated, just, like, YouTube videos and things.
Speaker C:And prompting, which connects back to this 10 hours thing.
Speaker C:But by the time the first, you know, core of the program was done, they were already starting to build and each team ended up with three or four assistants.
Speaker C:They built 12 or more during the course.
Speaker C:Some don't work, and you leave them and build a different one. The follow-up on it is that they were engaged early on in building these things and testing them.
Speaker C:We put a structure around it: go find 10 people who do this thing, ask them to use it, and get their feedback.
Speaker C:Part of the structure was just structuring the program as a design-thinking program that happens to be directed at building an AI assistant.
Speaker C:But it wasn't very technical, other than the minimum things you need to know to be able to write a system prompt, which is just telling someone to explain it like they're talking to an intern.
Speaker B:Yeah, that's awesome.
Speaker B:So I think there's a million questions, and I bet a lot of the members will want to know more and how they can copy it.
Speaker B:Are you okay with folks reaching out to you?
Speaker B:You're active on the Discord, right?
Speaker C:Yeah, I'm on there. Some have reached out to me already.
Speaker B:Yeah, you're pretty busy.
Speaker B:Okay, so then: are there intangible or tangible ways to make clear that this is a good thing for the company to do?
Speaker B:Whether it's participating in a sprint or creating an assistant.
Speaker B:You know, are people rewarded, either through actual rewards or just recognized in some way?
Speaker C:It's an interesting one.
Speaker C:And maybe it's because of the genesis of how we set this up. We started with it open and free for the company, and we're not gonna spy on your chats and tell your manager if you do or don't use it.
Speaker C:I haven't had a lot of pushback from people who were involved in any of these things.
Speaker C:The sprints themselves became very high profile, and so everyone who was involved was presenting this to very senior people at the company. They got to feel good about it in that way.
Speaker C:And then as the owner of the assistant, you get to see the impact as you see how many people use it.
Speaker C:And, you know, the self-reported value.
Speaker C:The third piece is we didn't pick people at random.
Speaker C:We had the AI leads from the line of business pick the people who they thought would be the best ones to participate in the sprint.
Speaker C:Right.
Speaker C:So people who were probably already interested. In a company large enough, even if just 5% love this stuff, that ends up being a decent number of people who can be a street team to get things done.
Speaker C:And on the other side, there's just a lot of excitement I found from the people who are involved.
Speaker C:Our CEO talks about AI all the time.
Speaker C:Every single meeting he goes to.
Speaker C:Whenever he flies to a local team, on our investor calls, on our quarterly calls, he's talking about AI.
Speaker C:His direct reports are talking about AI.
Speaker C:You know, maybe I take it for granted sometimes, because I read it in consulting articles: your senior staff should be driving this.
Speaker C:But it's an important part of it, which is that from the top of the house they're talking AI and saying how important it is.
Speaker C:And that really has a huge impact.
Speaker B:Yeah, that's really important.
Speaker B:Absolutely.
Speaker B:All right, what is next with AI at Colgate Palmolive?
Speaker B:What can you share about the next few months?
Speaker B:Next year?
Speaker C:Yeah, more of the same thing, but with better technology.
Speaker C:We're looking at agents to have a big impact.
Speaker C:But the thing with agents is the same challenge that initially led us down the path that we went down, which is for agents to be useful, they need to have the right underlying information.
Speaker C:In the case of agents, they need to have the right underlying business logic, like how a process unfolds and what the decision points are. That implies that they really need to be built by the people who own or live the processes and data.
Speaker C:The future is going to look very much like what we've done so far.
Speaker C:But instead of the approach being, you know, we're going to democratize the building of assistants and set up sprint teams to build assistants.
Speaker C:It will be: democratize the building of agents and set up sprint teams to let people build agents.
Speaker C:And even the exercise that we did with the initial value sprints, which was what are all the use cases and rate them by what you can do with an assistant, that same rating is still there.
Speaker C:It's just that more of those use cases are now possible, because agent logic can do what, you know, a basic chat system couldn't.
Speaker C:So we're going down the agent path.
Speaker C:But I think the change management side of it looks the same, and sort of has to look the same.
Speaker C:If anything, it's more important with agents because agents do processes.
Speaker C:At a big company, processes are not really written down; they live in between people.
Speaker C:As people change, processes change.
Speaker C:Joe used to do this for me, but now Bill took Joe's job and Bill doesn't like to do that thing.
Speaker C:So now it's a different process.
Speaker C:You can't codify a process without having the people who live the process define the business logic.
Speaker C:That's where we're going.
Speaker C:Much of it is going to be the same from the change management standpoint, just with much better technology.
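(To make "codify the business logic" concrete: here is a hypothetical sketch of a process with explicit decision points, expressed as a tiny state machine an agent could follow. The process, states, and thresholds are invented for illustration and do not reflect Colgate's tooling.)

```python
# A business process captured as explicit states and decision points,
# instead of tribal knowledge that "lives in between people".
# Terminal states map to None; decision states map to a function that
# inspects the order and returns the next state.
PROCESS = {
    "intake":  lambda order: "approve" if order["amount"] < 1000 else "review",
    "review":  lambda order: "approve" if order["has_manager_ok"] else "reject",
    "approve": None,
    "reject":  None,
}

def run(order: dict, state: str = "intake") -> str:
    """Walk the process graph until a terminal state is reached."""
    while PROCESS[state] is not None:
        state = PROCESS[state](order)
    return state

print(run({"amount": 250, "has_manager_ok": False}))  # prints approve
print(run({"amount": 5000, "has_manager_ok": True}))  # prints approve
```

Writing the decision points down like this is exactly the work that, as Klee argues, only the people who live the process can do; once captured, an agent can execute it reliably even as the people change.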
Speaker B:That's awesome.
Speaker B:All right, Klee, this was phenomenal.
Speaker B:Thank you very much.
Speaker B:I'm obviously going to be carefully monitoring the future and I do know there's so much more we could talk about, but that'll be part two.
Speaker B:Thank you so much, Klee.
Speaker C:Thanks, Adam.
Speaker C:So happy to be here.
Speaker A:If you want to talk to Klee, he said he'd be happy to chat. Just reach out to him on the Discord.
Speaker A:Or you can reach me.
Speaker A:Adam at FeedForward AI or Jessica at FeedForward AI, and we'll be sure to create the connection.
Speaker A:Please let us know what you would like to hear more of on this podcast.
Speaker A:Are there people you'd love us to talk with?
Speaker A:Do you have examples from your company you think members would want to hear about, or do you have some big questions that you'd love us to dive into?
Speaker A:Please let us know at those same addresses.
Speaker A:Adam at FeedForward AI or Jessica at FeedForward AI.
Speaker A:Thank you for listening.