AI 2024: Enhancing Human Potential
2024 marked a significant turning point for AI, shifting the narrative from skepticism to enthusiasm as individuals and organizations grappled with its transformative potential. Experts in the FeedForward community reflect on how AI has evolved, emphasizing its role as a human enabler rather than a replacer. From the insights of Lilach Mollick on the increasing acceptance of AI in education to Matt Beane's warnings about the hidden costs of productivity gains, the conversation highlights the need for a thoughtful integration of AI into various sectors. Rekefet Rusak Aminoach shares a compelling example of how AI can democratize expertise in banking, while Christa Stout discusses the overwhelming pace of change in the sports industry. As we head into 2025, the consensus is clear: AI's best use lies in empowering people to achieve deeper, more meaningful work, paving the way for a future where collaboration between humans and machines thrives.
Transcript
Adam Davidson:I mean, just think where we are right now compared to where we were a year ago.
Adam Davidson:And then you start to imagine, well, where are we going to be a year from now?
Adam Davidson:We're lucky.
Adam Davidson:Here at FeedForward we have some really smart people in this community, and I asked a bunch of them, and some other people outside of the community, what are you thinking about at this moment?
Adam Davidson:What are you looking back at?
Adam Davidson:What are you looking forward to?
Adam Davidson:Let's start with one of the people I turn to all the time to understand what's going on in these AI models.
Adam Davidson:How do we talk to them more effectively?
Adam Davidson:How do we learn from them?
Adam Davidson:That's one of our co-founders, Lilach Mollick.
Adam Davidson:As you probably know, Lilach is not just a co-founder of FeedForward.
Adam Davidson:She also runs the Generative AI Lab with Ethan at Wharton, the University of Pennsylvania's business school.
Adam Davidson:I think we all know that. She has become one of the leading voices on how to prompt AI models.
Adam Davidson:More than that, how to talk to them, how to think about them, how to listen to them.
Adam Davidson:She's somebody that OpenAI and Anthropic and Google turn to all the time to figure out what can our models do and how do we interact with them better.
Lilach Mollick:So I wear many hats both inside and outside of FeedForward, including teaching and training how to use AI.
Lilach Mollick:I'm a researcher who studies prompt engineering and I'm building with AI and I'm on the forefront of how AI is integrated in education.
Lilach Mollick:So I want to talk about AI using these three hats.
Lilach Mollick:I've talked to a lot of people this year and I've seen some massive changes in attitude towards AI.
Lilach Mollick:So, putting on my training hat: earlier in the year, across educational institutions and across organizations, people were very skeptical about whether they should use it, having mostly not touched it.
Lilach Mollick:But later in the year there was a shift. While there are still quite a few people who haven't touched it, the organizational imperatives are there, in education and elsewhere, and people have realized this is something they should do.
Lilach Mollick:They don't know what to do with it yet, but they know they have to do something.
Lilach Mollick:So there's generally been a shift this year towards a greater willingness to adapt.
Lilach Mollick:Putting on my prompting hat: we still haven't cracked how to distribute prompts, well-developed use cases, or AI bots with specific useful behaviors to anyone who wants them.
Lilach Mollick:GPTs were a good start, but not a lot of progress has been made there, and the result is that there's still a need you see coming up this year: distributing use cases to non-technical people remains a challenge.
Lilach Mollick:But that's a race against the idea that the AIs are just getting smarter at doing everything and that prompting is getting less useful for many cases.
Lilach Mollick:It used to be that you'd want everyone to learn prompting and prompting techniques, and now it's just more important to get your hands on these things.
Lilach Mollick:Develop an intuition for what works in your case and how to work with these systems as the AIs are getting smarter.
Lilach Mollick:I've also been watching the improvement in intelligence without an increase in the complexity of using these systems.
Lilach Mollick:So o1 is a much smarter system.
Lilach Mollick:Claude Sonnet is a much smarter system, but they're easier to use rather than harder because there are fewer prompting tricks needed to make them do stuff.
Lilach Mollick:They're simply smarter out of the box.
Lilach Mollick:Putting on my education hat: there is a growing recognition that this could be a breakthrough in training, but also a recognition inside organizations that they're in trouble because they don't understand what to do with these systems to let them solve the problem of the breakdown of the professional mentorship pipeline.
Lilach Mollick:What I'm looking forward to next year is that I think agentic systems will create opportunities for figuring out how humans fit in the loop and how we monitor these systems.
Lilach Mollick:So I think there'll be lots of opportunities for people to figure that out, and how you prompt, work with, and watch agentic systems will really be very interesting.
Lilach Mollick:I'm also interested in how these multimodal voice and vision systems change training and education.
Adam Davidson:Moving forward, Lilach has identified one of the core questions about AI.
Adam Davidson:I think of it as an almost civilizational question.
Adam Davidson:Will AI on balance, enhance human capabilities?
Adam Davidson:Will it allow us to learn more, do more, dream more?
Adam Davidson:Or will it replace us?
Adam Davidson:Will it replace human capabilities?
Adam Davidson:One of our FeedForward expert voices, Matt Beane, thinks about little else.
Adam Davidson:This is his main question.
Adam Davidson:This is his obsession and field of study at University of California, Santa Barbara.
Adam Davidson:You should read his book, if you haven't already.
Adam Davidson:The Skill Code: How to Save Human Ability in an Age of Intelligent Machines.
Adam Davidson:He and Lilach have helped me understand how crucial it is to think about AI's impact from several perspectives.
Adam Davidson:The individual using it right now, typing on it right now, as well as how individuals use AI over the course of their lives, over the course of their careers, to increase their skills, to have it do things they don't want to do anymore.
Adam Davidson:And also, and this is very important, to look at AI not just through the lens of the individual, but through the lens of the overall organization and all the formal and informal systems that different organizations have to get things done.
Matt Beane:I think individuals were where the early productivity gains from AI showed up. Whether they were working in an organization or on their own, that's where the primary gains were being had.
Matt Beane:2024 was the year that organizations footed the bill for that individual productivity gain.
Matt Beane:And I think, at the very least, they will see that the age-old problem that comes with new forms of automation is going to rear its head here, which is that we drop a new form of automation, a new kind of intelligent tech in this case, into preexisting processes to try to amp them up and get more juice out of them.
Matt Beane:And that's not bad necessarily, but it is in the end not aligned with the best, highest use case for the tech.
Matt Beane:Refactoring processes in more wholesale ways to take advantage of the tech is really where we all need to go.
Matt Beane:So I think this sort of short-run, small gain will come out of the tech, and then eventually organizations are going to figure it out.
Matt Beane:The second unintended consequence of all these productivity gains is one that you would definitely expect from me, Matt Beane, the author of The Skill Code: How to Save Human Ability in an Age of Intelligent Machines.
Matt Beane:And that is that we are typically changing workflows as we introduce intelligent technologies in ways that disenfranchise novices in the work, separate them from experts who are doing the work, and thereby break the primary means that humans have for building reliable skill.
Matt Beane:We have been paying this price in 2024, and it's not as if this is an either-or situation.
Matt Beane:I think the savvy organizations and individuals will see that there are some ways of getting both productivity and skill enhancement by the way we use these tools, but that it's a higher-energy state, it's a more unstable equilibrium, it's harder to achieve than just grabbing for the productivity gain.
Matt Beane:But savvy, well-run firms and individuals who are focused on their longer-term success, and not just short-run productivity, are going to start to see data that shows definitively that this is a problem, a serious one, and will start to correct for it.
Matt Beane:Of course, all of that presumes an incredible explosion of technological advancement.
Matt Beane:I just think the people betting against that are going to get laughed out of court.
Matt Beane:Especially given the quantum computing announcement that just came out of Google a couple of days ago.
Matt Beane:I just think we haven't seen anything yet.
Adam Davidson:This idea that AI can transform entire organizations, entire ways of working, made me think of my friend Rekefet Rusak Aminoach.
Adam Davidson:She was CEO of Israel's largest bank, Bank Leumi.
Adam Davidson:Well, she became CEO when it was not Israel's largest bank.
Adam Davidson:In fact, it was kind of famously a very old fashioned, bureaucratic, slow moving company that had lost its first place position.
Adam Davidson:But Rekefet transformed it into a fast-moving, digital-first operation.
Adam Davidson:It's a big story in Israel.
Adam Davidson:Rekefet is, you know, she goes into a coffee shop and people are taking pictures, asking for selfies.
Adam Davidson:She's sort of a rock star there.
Adam Davidson:She told me that she could have done so much more with Bank Leumi if she had had today's AI.
Adam Davidson:In fact, that's what Rekefet does now.
Adam Davidson:She's managing director of Team8, a major VC firm and venture capital studio where she works in fintech and other fields to help use AI to create really impactful digital transformation.
Rekefet Rusak Aminoach:What we can imagine today is far more than what we could have imagined just a year ago.
Rekefet Rusak Aminoach:And I want to give you an example from the world of credit.
Rekefet Rusak Aminoach:And I don't know if you remember, but before being the CEO of Bank Leumi, which is the largest financial institution in Israel, I was the chief credit officer of the bank at the time.
Rekefet Rusak Aminoach:In order to build the best credit portfolio for the bank, risk-wise, we needed consultations and credit committees and discussions, and we needed people with expertise in regulatory requirements, the credit policy of the bank, and macroeconomics.
Rekefet Rusak Aminoach:Imagine if we had a new customer coming to the bank and asking for $100 million, not a huge amount.
Rekefet Rusak Aminoach:Sometimes it took us hours, with 20 people in one room, in order to reach a decision.
Rekefet Rusak Aminoach:It was really blood, sweat, tears and toil, as Churchill said.
Rekefet Rusak Aminoach:A year ago, when we thought about AI copilots, we could have a copilot for regulation and other requirements, and a copilot regarding Greece, and the ability to assist us with the credit policy of the bank and with compliance.
Rekefet Rusak Aminoach:This in itself was quite amazing.
Rekefet Rusak Aminoach:But now when we talk about copilots, this is old news.
Rekefet Rusak Aminoach:Now we are talking about a copilot that can mimic the thinking of top credit experts.
Rekefet Rusak Aminoach:Can you believe this?
Rekefet Rusak Aminoach:We will build a copilot that is mimicking the thinking, the expert level thinking.
Rekefet Rusak Aminoach:It's huge.
Rekefet Rusak Aminoach:What does this mean?
Rekefet Rusak Aminoach:This is the collective wisdom of an organization.
Rekefet Rusak Aminoach:When we come back to credit, it means that every junior credit analyst at the bank who doesn't know much can actually access all the wisdom of the credit committee.
Rekefet Rusak Aminoach:Like having the credit committee in his cell phone.
Rekefet Rusak Aminoach:This is the collective wisdom of the organization in his cell phone.
Rekefet Rusak Aminoach:And for me it means that we reached a democratization or we are going to reach a democratization of credit underwriting.
Rekefet Rusak Aminoach:And now you may ask me, okay, why does this matter?
Rekefet Rusak Aminoach:Why should we care?
Rekefet Rusak Aminoach:So it matters a lot and for two reasons.
Rekefet Rusak Aminoach:First of all, because credit is the engine of economies and it fuels growth, innovation, investment, consumption, everything.
Rekefet Rusak Aminoach:And with everything that we discussed now, credit will be more accurate, we will have fewer human mistakes, and the price of credit will go down.
Rekefet Rusak Aminoach:And this is huge.
Rekefet Rusak Aminoach:But besides, and more importantly, this is just one example, think about what else AI will democratize.
Rekefet Rusak Aminoach:Everything I said about credit, you can say about other domains, and other companies can say about their own domain and their expertise.
Rekefet Rusak Aminoach:This is huge.
Adam Davidson:Why is the end of 2024 so different from where we started the year? Ethan Mollick, do I need to introduce him?
Adam Davidson:You know who he is, right?
Adam Davidson:He's the author of Co-Intelligence, he's a professor at Wharton, and, most importantly, he's a co-founder of FeedForward.
Adam Davidson:He's probably the reason you're listening to this podcast right now.
Adam Davidson:He reminds us that AI today is like an entirely different industry than it was a year ago.
Ethan Mollick:There was a huge amount of change in the technology and business of LLMs over the past year. We started the year with one GPT-4 class model, and we ended the year with somewhere between six and nine GPT-4 class models, and we're beginning to see the new models come out.
Ethan Mollick:So it was an expansion.
Ethan Mollick:I think December was a very busy month, and something worth thinking more about, with the release of o1 and all these sort of new kinds of approaches to AI.
Ethan Mollick:But before that it really was a catch-up game.
Ethan Mollick:At the same time, I think organizationally we saw a catch-up game, which was people realizing AI is real, it's here to stay, it's very capable.
Ethan Mollick:Surveys showed a large share of Americans using AI by August, and we saw use expand and access expand over the year. But people still haven't figured out what to do with these systems.
Ethan Mollick:I don't think they're quite ready for what is coming. So I think, not in everything, we're not seeing AGI, but in specific areas that used to be human-only, the AI will be better than people, in specific forms of statistics, maybe even math.
Ethan Mollick:We'll see where things go with that, with the AI better at programming than most people, better at writing than most people, not everybody.
Ethan Mollick:And I think we're going to see this capability game start to become clear to people, and that's going to create some real changes.
Ethan Mollick:And I mentioned agency.
Ethan Mollick:The idea of AI being able to do things in the world is going to matter a lot.
Ethan Mollick:And all of the AI companies are racing out models that can do things in the world.
Ethan Mollick:And I think companionship and connection.
Ethan Mollick:The idea that all of these systems do real-time voice interaction, that you can share your screen and share a camera with them, is going to create new kinds of closeness with AI systems.
Ethan Mollick:So I think AI is going to spread out its tendrils both up the chain, where we have increasingly powerful systems doing managerial work and high-end scientific work, and also down the chain, in terms of being a companion and a connection and something most people turn to to get help with things.
Ethan Mollick:We'll see what this all means.
Ethan Mollick:I think that the capabilities are going to continue to have an overhang.
Ethan Mollick:They're going to do more than we think they can do.
Ethan Mollick:And in large part, the next year is going to be about unlocking those capabilities for organizations in a way that allows us to start taking advantage of these things and getting a sense of where the future is heading.
Ethan Mollick:The main thing I don't see is a slowdown.
Ethan Mollick:I think we are at the early days still of this experience and we still have a ways left to go.
Adam Davidson:That point Ethan made about AI companions reminded me of something James Cham told me once.
Adam Davidson:James is one of our experts here at FeedForward.
Adam Davidson:He runs Bloomberg Beta, an early stage venture fund that focuses on making work more fulfilling for people.
Adam Davidson:It's part of the Bloomberg universe, and its returns fund Bloomberg Philanthropies.
Adam Davidson:He thinks a lot about this interplay between AI, technology and human flourishing.
James Cham:There are probably three things that feel like they're obvious now but go under acknowledged around AI.
James Cham:The first is that the LLMs are now probably more emotionally intelligent than the median guy, maybe even more emotionally intelligent than the 90th percentile guy.
James Cham:There's a way that, if you copied and pasted a chat between multiple people into Claude or ChatGPT and asked it for observations about the subtext, the LLMs consistently do better than most people, or at least they do consistently better than I do.
James Cham:That is also coupled with the fact that if you look at the model card for some of the newer models, when they show you their results around persuasiveness, they're at the 70th or 75th percentile.
James Cham:And what that means is that the LLMs are not like Spock or Data.
James Cham:They're actually much more fluid along the dimension that we think of as human.
James Cham:And for some reason that is obviously true.
James Cham:And we haven't quite figured out how to look at this clearly. You get that uneasy sense, that uneasy feeling, when you look at, say, the terrible thing that happened with the kid who killed himself because of Character.AI.
James Cham:And so to me, the interesting opportunity there perhaps is to look clearly at how the LLMs are more emotionally intelligent and persuasive and find unambiguously good use cases for them and find ways to apply that in something that will help rather than hurt humanity.
James Cham:So that's the first thing.
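To make that first point concrete, here is a minimal sketch of the kind of subtext check James describes: paste a short multi-person chat into a model and ask what is going on beneath the surface. This is an illustration only, not James's or FeedForward's actual workflow; it assumes the OpenAI Python SDK with an API key in the environment, and the model name, prompt wording, and sample chat are all made up.

```python
# Minimal sketch: ask a chat model for the subtext of a pasted conversation.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up snippet of a multi-person chat to analyze.
chat_log = """
Dana: I guess we could go with Alex's plan, if everyone else is on board.
Sam: Sure. Whatever gets us to the deadline.
Alex: Great, glad we're all aligned!
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever capable model you have access to
    messages=[
        {
            "role": "system",
            "content": "You observe group conversations and surface the subtext: "
                       "hesitation, unspoken disagreement, and emotional tone.",
        },
        {"role": "user", "content": f"What is the subtext in this exchange?\n\n{chat_log}"},
    ],
)

print(response.choices[0].message.content)
```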
James Cham:The second thing that feels obviously true is that there's a natural desire right now for us to deploy these AIs towards low risk, low reward at scale tasks.
James Cham:And I think that's important.
James Cham:The little bits of glue or the customer service thing here and there, I think those are all really valuable.
James Cham:But at the same time, and of course I'm speaking now as a venture capitalist, so I'm always looking for the big opportunity or the arbitrage that's going to be around because something's become possible right now.
James Cham:It does feel like those sort of high-risk, high-reward, or those really expensive processes that might yield big, big rewards.
James Cham:It does feel like that's the part that remains underexplored.
James Cham:And I think that's because we're reluctant to admit that the models will be better at another thing that we think of as uniquely human.
James Cham:But they are quite good at it.
James Cham:And there's a way in which just by doing a slight bit of twisting and turning, we can make them surprisingly effective in a few new use cases.
James Cham:And then the third thing that I'd say is that the information ecosystem around AI continues to shift in interesting ways.
James Cham:If you were to ask me at the beginning of this year what I thought would happen, I would have told you that all of the interesting developments would have happened on the fringe.
James Cham:They would happen three blocks away from Berkeley, as some, you know, 26-year-old grad student builds a system that uses AI in an original way.
James Cham:To be honest, that hasn't happened as much as I thought it would.
James Cham:Instead, what's happened is that the development of use cases and the development of finding ways to take advantage of these models, that's actually happened in a much more diverse set of circumstances and all over the place.
James Cham:And what that means is that the diffusion of knowledge problem ends up being a little bit harder.
James Cham:Rather than being able to rely on San Francisco or Silicon Valley as the origin point of so much interesting stuff, I think we're in this weird place where we are looking instead for signal everywhere.
James Cham:And then the hard part there is that so much of the signal is unreliable, which is to say that there's so much bad information and so much slightly askew things that come up as more and more of this gets commercialized.
James Cham:And that's of course part of the reason why this entire service is so useful in general, but also so useful to me.
James Cham:It's really inspiring to see folks actually building and testing and running and redesigning organizations based on all the stuff that's happening here.
James Cham:So I'm glad to be part of it.
James Cham:Thanks.
Adam Davidson:I absolutely love this last point, that developments in AI are not just happening in a handful of labs somewhere within driving distance of Palo Alto, California.
Adam Davidson:No, developments are happening all over the globe in a pretty chaotic way.
Adam Davidson:There are tens, hundreds of millions of people playing around, trying stuff, stretching AI in ways nobody else had thought of.
Adam Davidson:This is a big part of why I love FeedForward.
Adam Davidson:We have such a unique vantage point here.
Adam Davidson:We have all these companies in different industries trying different things with different approaches, different goals.
Adam Davidson:Take the Portland Trail Blazers, the NBA team that has won awards for being on the cutting edge of technology and sport.
Adam Davidson:That's in part because of Christa Stout, who leads innovation for the team and is a FeedForward member.
Christa Stout:When I think about this past year, there is still a real gap between the people who use AI regularly and the people who don't. And people who don't use it seem to be overwhelmed by just AI in general and hearing about it, and either still being nervous about it, or not knowing how to use it effectively, or trying it a long time ago and not realizing how they could be using it more effectively.
Christa Stout:So my hope is that that changes going into next year. I'm still not clear on exactly what the future of the sports industry looks like because of and thanks to Gen AI, but I am quite confident that it will look very different in the coming years.
Christa Stout:And again, like, I always go back to how Gen AI can be personally helpful for me, even if it's like taking a picture of my lunch and knowing how much protein is in it and whatever. I think that's still a really important way for people to connect to the technology in a more relatable way that will hopefully unlock ideas for them so that they can apply it to their jobs more usefully.
Christa Stout:This past year, it was very clear that when you give people tools and confidence to experiment, they create solutions that are practical and impactful, whether that's saving time, improving experiences for our guests and fans, or solving new challenges, or even just creating more valuable additions to, or more valuable work in, whatever they're doing.
Christa Stout:And next year I want to keep building on the momentum, scaling what works, getting more people to adopt AI responsibly.
Christa Stout:I need to figure out the tool that we're really going to roll out, and access to it, in a way that is obviously safe and all of that.
Christa Stout:So that's definitely a big focus of ours, so that we can see the joy that AI can bring to individuals when they free up time to focus on deep, meaningful work and things that matter.
Christa Stout:And also, again, as we try to start to figure out what this means for our business and our industry more broadly in terms of transformative impact.
Adam Davidson:There is a theme developing on this podcast.
Adam Davidson:Have you noticed that?
Adam Davidson:And it's a great theme.
Adam Davidson:It's that AI is at its best when it's aiding people, helping people do their work better, do their work in more satisfying ways.
Adam Davidson:That, for me, is one of the big lessons of right this moment.
Adam Davidson:A year ago, there was a lot of talk about a future in which human beings are almost unnecessary.
Adam Davidson:The idea of a world where AI does everything we do better than we do it.
Adam Davidson:Now I hear that talk less.
Adam Davidson:I mean, it's still there, but it's not the dominant narrative anymore.
Adam Davidson:And you can hear in all of the reflections on this podcast a deeply informed view that AI is best seen as a human enabler, not a human replacer.
Adam Davidson:Alex Komorowsky says this particularly well.
Adam Davidson:Alex was a legendary product manager at Google, one of the best.
Adam Davidson:Then he became head of strategy at Stripe.
Adam Davidson:He stepped down not long ago to launch a startup, Common Tools, that has, I think, one of the biggest ideas in AI that I've heard this year.
Adam Davidson:And it's, well, you should reach out to him and let him explain it to you.
Adam Davidson:But there's a decent chance that he is building the core backbone, the fundamental layer for the future of AI.
Alex Komorowsky:I think that LLMs are at their best when you are using them not as an oracle, but as a thought partner where you can bounce ideas off of them like any knowledgeable person, someone who knows a lot about basically any topic, almost certainly more than you on almost any topic.
Alex Komorowsky:Not as much as an expert would in that domain, but someone that you can have a great conversation with to bounce ideas back and forth and to come up with a much stronger version of that idea.
Alex Komorowsky:I also think of LLMs as filling in the gaps of whatever you give them with what I would call a frog DNA kind of approach.
Alex Komorowsky:Like in Jurassic Park, they have the dinosaur DNA and it's got gaps in it, and they fill in those gaps in the DNA with frog DNA, just generic reptile, amphibian DNA, to make the thing viable.
Alex Komorowsky:I think LLMs will give you the most mushy frog DNA answer that is possible to give based on your question.
Alex Komorowsky:So the challenge becomes asking it good questions.
Alex Komorowsky:How do you encourage it to give you multiple different options, to find cross-disciplinary insights, to tackle problems that you otherwise wouldn't have bothered tackling?
Alex Komorowsky:One of the LLM superpowers is their patience.
Alex Komorowsky:They don't get tired of doing tasks that real humans would get bored doing.
Alex Komorowsky:So if it requires sifting through a whole bunch of information or making connections across material, LLMs are great at finding those insights that a human could find, but that would have taken too much cost and effort and attention to do in the first place.
Alex Komorowsky:I think that LLMs are at their best in cross-pollination and emergence, helping find the ideas between ideas, between different areas of expertise, where if you're an expert in any given domain, then you're going to be better at coming up with answers than the LLM can, for sure.
Alex Komorowsky:But the vast majority of us are not experts in the vast majority of domains and so having that thought partner along with you I think can be really useful.
Alex Komorowsky:The way that organizations work I think will change as more and more people use LLMs to help find ideas and figure out how to sift through them to figure out more efficient ways of communicating inside of an organization.
Alex Komorowsky:I think we're just getting started with LLM applications, and if anything it's going to get weirder and weirder.
Alex Komorowsky:And we're just in the early innings today, so I'm extremely excited to see what happens as we get increasingly good performance across an increasingly large number of multimodal models.
Alex Komorowsky:Again, we're just seeing the beginning, and it's pretty exciting from here, I think.
Adam Davidson:Have you noticed how excited everyone on this episode is?
Adam Davidson:How positive they are, even recognizing some limitations and some worries?
Adam Davidson:That wasn't some plan I had.
Adam Davidson:I didn't go out and say, okay, I only want to talk to people who are happily excited.
Adam Davidson:I reached out to the smartest minds I know and asked them, well, what's going on?
Adam Davidson:What are you thinking about these days?
Adam Davidson:A year ago, if I had done this exercise, I think I would have heard a lot more fear and uncertainty.
Adam Davidson:There was a sense that maybe AI had exhausted its capabilities, or that we were close to it, at the end of 2023. And there was even some deep uncertainty about whether or not there's even a real business use case for AI.
Adam Davidson:Could AI ever actually move the needle?
Adam Davidson:Or is this just another hype cycle? I don't hear that anymore.
Adam Davidson:I don't hear the idea that it's exhausted its capabilities, and I don't hear the idea that there's no business use case. Well, I do hear it every once in a while, I guess, but pretty rarely.
Adam Davidson:Peter Goldstein is chief product and AI strategist at Hearst, the media company.
Adam Davidson:He's also one of the most active and thoughtful members of FeedForward.
Adam Davidson:The guy has a PhD in physics, so I don't always understand everything he writes, but I always get something out of it.
Adam Davidson:I asked him what got settled this year about real business use cases for AI.
Peter Goldstein:It's become very, very clear that generative AI can be applied to a variety of use cases that are not just kind of the make my text better, edit what I'm doing, give me better copy, et cetera.
Peter Goldstein:Really, we're talking about generative AI that can be used to solve real business problems, and can be used to solve potentially real academic problems, research and the like.
Peter Goldstein:That question has been answered pretty affirmatively over the course of the year. The second thing that I think has been answered pretty affirmatively is that there's no obvious limitation at this point to the capabilities.
Peter Goldstein:We continue to see real growth from training time scaling.
Peter Goldstein:We see real growth from parameter scaling.
Peter Goldstein:I think recent models have made it pretty clear that we have not hit limits on parameter scaling in the sense that we're not using the parameters we have efficiently.
Peter Goldstein:And so some of the recent models that have come out with much smaller numbers of parameters, down to single-digit billions, are competitive with those with an order of magnitude or more parameters.
Peter Goldstein:That strongly indicates that ultimately we'll be able to get much smaller models that are much easier to distribute to a variety of computing platforms.
Peter Goldstein:That leads into what I think next year will look like. I think it'll build on all of those trends.
Peter Goldstein:First, I think we will see a really extensive rollout of generative AI applications to real-world environments, in a way that has only been hinted at in the past year.
Peter Goldstein:Second, I think we will see it really effectively move out to the edge this year.
Peter Goldstein:Last year, realistically, if you wanted to use cutting edge generative AI, you had to use an API from one of the labs that was producing it.
Peter Goldstein:And I don't think that'll be the case next year, at least by the end of the year.
Peter Goldstein:Right now, on my MacBook Pro, I can download a model and run it locally, a model that is, I think it's safe to say, sort of a GPT-4 level model.
Peter Goldstein:That's astounding.
Peter Goldstein:So I expect that we will see increased push in that direction to get it onto smaller devices and to be able to do things like frankly run a state of the art model on your phone.
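As a concrete illustration of that move to the edge, here is a minimal sketch of querying a chat model that runs entirely on a laptop rather than through a hosted API. Peter doesn't say which model or tooling he uses; this assumes the Ollama runtime and its Python client are installed (`pip install ollama`, then `ollama pull llama3.3` or any other locally available model), and the model name and prompt are illustrative.

```python
# Minimal sketch: query a model that runs locally on your own machine.
# Assumes the Ollama runtime is installed and a model has been pulled,
# e.g. `ollama pull llama3.3`; no hosted API is involved.
import ollama

response = ollama.chat(
    model="llama3.3",  # illustrative; any locally pulled model will do
    messages=[
        {
            "role": "user",
            "content": "In three bullet points, explain what running an LLM "
                       "locally means for data privacy.",
        },
    ],
)

print(response["message"]["content"])
```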
Peter Goldstein:I think we'll also see an increased commoditization of the models.
Peter Goldstein:At the end of the year, we have several additional players, many of which really developed their cutting-edge capabilities over the course of less than a year.
Peter Goldstein:xAI and Amazon both, by the end of the year, released state-of-the-art models that are competitive with those produced by, say, OpenAI or Anthropic or Google, for that matter.
Peter Goldstein:AI moves everywhere.
Adam Davidson:The future we're rolling towards.
Adam Davidson:This is a great place to end.
Adam Davidson:We're going to end with one last comment from Jessica Johnston.
Adam Davidson:You all surely know Jessica.
Adam Davidson:She is one of the co founders of FeedForward and is our primary interactor with our membership.
Adam Davidson:She thinks a lot about how AI is changing the work and lives of our members in the year ahead.
Jessica Johnston:A few things have stood out to me just in the last few months.
Jessica Johnston:One is that the need to be consistently searching at the edges is going to be crucial for business leaders who are on the hook to lead their companies' AI initiatives and AI strategies, not just the thinking in their organizations but ultimately the decisions and the actions and the investments that they take.
Jessica Johnston:A couple of other notes are on the slide on my computer.
Jessica Johnston:The other is that constant sensing for early warning signs is key.
Jessica Johnston:There have been times when a new model drops, like Sora, and I'm going, well, how important is that really?
Jessica Johnston:If you are an executive at JPMorgan, how do we think through what model updates matter, what to pay attention to, and so on?
Jessica Johnston:But if I zoom out, I think it is about understanding the bigger picture: that updates continue to happen, the capabilities continue to grow, and it matters when a pelican increasingly can ride a bicycle, easily my favorite benchmark for testing the capabilities of new models.
Adam Davidson:I'm just going to jump in here quickly to explain.
Adam Davidson:Our fabulous feedforward expert Simon Willison has this simple, lovely benchmark he does when any new model comes out.
Adam Davidson:He asks it to create an image of a pelican riding a bicycle.
Adam Davidson:He does that for the image generators, but he also does it for the text generators.
Adam Davidson:He uses text-based image generation, asking them to write code, like SVG, that draws the picture. A year ago they were pretty lousy at this.
Adam Davidson:No model could do a good job of creating an image of a pelican riding a bicycle.
Adam Davidson:Now they almost all do at least an okay job and some will really blow you away.
Adam Davidson:I love this benchmark as Jessica does, as everyone does.
Adam Davidson:It's just a quick, funny way to instantly get an image that tells you is this model doing something fundamentally more advanced than previous models?
Adam Davidson:I think all of us should have our equivalent of a pelican on a bicycle.
Adam Davidson:What is our quick and easy way, in a couple minutes, to find out if a new model is actually going to be able to do the things we need it to do better than previous models?
Adam Davidson:So come up with your own.
Adam Davidson:Or you know, I think Simon's fine if you just steal his idea.
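For anyone who wants to try it, here is a minimal sketch of Simon's pelican check: ask a text model to write SVG markup of a pelican riding a bicycle and save it so you can eyeball the result in a browser. It assumes the OpenAI Python SDK and an API key in the environment; the model name is illustrative, and Simon's own tooling may differ.

```python
# Minimal sketch of the pelican-on-a-bicycle benchmark: have a text model
# write SVG markup, save it, and judge the drawing by eye in a browser.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; swap in whichever new model you want to test
    messages=[
        {
            "role": "user",
            "content": "Generate an SVG of a pelican riding a bicycle. "
                       "Return only the SVG markup, with no commentary.",
        },
    ],
)

svg = response.choices[0].message.content

# Write the markup to disk; open pelican.svg in a browser to see how it did.
with open("pelican.svg", "w") as f:
    f.write(svg)
```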
Jessica Johnston:And then my last note that I keep is that the questions that leaders ask will shape what organizations pay attention to.
Jessica Johnston:I'm keenly aware of the information that we are exchanging, the insights that we are sharing.
Jessica Johnston:How is that informing the questions that we can ask inside of our organizations to help drive the right decisions?
Jessica Johnston:Again, regardless of whether we are a financial services firm, a media organization, manufacturing, consumer goods, all the things.
Jessica Johnston:Those are some of my concluding thoughts heading into the new year.
Adam Davidson:Thank you for listening to the last episode of the year.
Adam Davidson:We are so thrilled by this community.
Adam Davidson:One big thing, maybe the biggest thing that happened to us this year, is the FeedForward community.
Adam Davidson:Earlier in the year we were dreaming of it, hoping for it, and now it is flourishing and just barely getting started.
Adam Davidson:We're going to have a whole lot of new members, whole lot of new events, new things to give to you to help you do your job more effectively and we hope, a whole lot more discussion with you, a whole lot of conversations and interactions that help us all better understand AI, better understand how it can contribute to business success and to human flourishing.
Adam Davidson:So I hope you have very happy holidays.
Adam Davidson:Merry Christmas, happy Hanukkah, happy New Year, happy everything.
Adam Davidson:I'm Adam Davidson.