Thank you! View the recording below.

HUMAN-CENTERED DESIGN IN AN AI WORLD

Enterprise-level innovation with design thinking


Jeff Eyet and Clark Kellogg,
Co-Founders,
The Berkeley Innovation Group

ENJOY THE WEBINAR RECORDING



ABOUT THE WEBINAR

As Picasso famously said, "Every child is an artist. The problem is how to remain an artist once we grow up." How, then, can we expect innovation to emerge from a mindset that has stopped creating?

We teach a design thinking framework that requires participants to suspend their belief in "the way we've always done things" and adopt a "beginner's mindset" of curiosity and creativity.

Through shared tools, practices, and mindsets, you can infuse these skills into your team or across your organization. The webinar introduces the tools and practices of innovation, deep customer insights, and design thinking in a variety of business contexts.

YOU'LL LEARN

  • The value of gathering new information before making decisions
  • How to apply design thinking principles at the team level (with a case study)
  • How to adopt a "design is strategy" mindset across the organization (or the "Monday Morning" problem)

ABOUT OUR GUESTS


Jeff Eyet and Clark Kellogg

Jeff Eyet is a radical diverger who prototypes through stories, and Clark Kellogg is a curious soul who brings an unending passion for creativity to their Design Thinking practice at the UC Berkeley Haas School of Business and the Berkeley Innovation Group.

Clark and Jeff have led strategy work with Recology, the Oakland A's, UC Health, and Mills College. They also offer an online design thinking certification to complement their in-person facilitation.

INTERVIEW TRANSCRIPT

Rahim Rahemtulla:
Welcome everybody to this Silicon Valley Innovation Center webinar. We are delighted that you could join us today, because we have a wonderful speaker for you: Jeff Eyet, one of the co-founders of the Berkeley Innovation Group. His partner in crime, Clark Kellogg, unfortunately couldn't be with us today, but Jeff is here and he's ready to go; he's got a wonderful presentation lined up for us. Jeff, welcome.


Jeff Eyet:
Thank you.

Rahim Rahemtulla:
Very pleased to have you on the program. And I think, before we get started, I just want to let the audience know that you’re going to present for us for about 35 minutes, wonderful presentation lined up, and then we’re gonna go to questions and discussion with everyone who’s here with us today. And we also want to do a poll as well, so we do encourage you, the audience members, to look out for that poll. Jeff’s going to let us know when we’re going to launch that during his presentation.

Jeff Eyet:
Sure.

Rahim Rahemtulla:
We really would love you to participate, so please do take part in the poll. Please do send in your questions as well using the Q&A function on the webinar panel. And, as I said, after Jeff does his presentation, we’re going to take those questions and he’s going to answer as many of those as he can. So I think that’s all, really, I need to say, at this stage. I think, Jeff, it’s over to you, please take us away. I think you’ve got the screen-sharing function there lined up.

Jeff Eyet:
Yeah.

Rahim Rahemtulla:
And I'm going to step back and give you the floor.

Jeff Eyet:
Thank you. Can you see that okay?

Rahim Rahemtulla:
Yes, I think so. Yes, there it is. Yeah, perfect. Perfect.

Jeff Eyet:
Alright, perfect. Well, welcome everyone. Thank you for joining us today. I know at 10 am you're probably being pulled in a few other directions, so if you sign off now, here's the key takeaway: design thinking is a human-centered approach to innovation. What we'd like to do today is frame design thinking in a more nuanced context, and in the context of artificial intelligence and machine learning, because our experience shows that this is a big disconnect within organizations, let alone among organizations.

So just by way of background, if I may introduce myself: I got my MBA from the Haas School of Business and, while I was there, I built a strong relationship with the professors and we launched the Berkeley Innovation Group, with the goal of taking our classroom curriculum and delivering it to corporate clients. That was our original mandate. Of course, that's evolved as we've learned to understand, again, these disconnects between what we teach in the classroom and what reality is in the real world. One of the most sobering moments I've had in terms of understanding that disconnect: we were doing a two-day workshop at GE at their digital center in San Ramon and, at the end of the session, one of the executives came up to us and, while we were expecting a pat on the back, he very bluntly said, "Son, if this isn't a $100 million idea, we're not interested." My initial reaction was, "Oh, to have such problems." And then we see where GE is today, and maybe they should have taken some smaller bets. Nonetheless, it really pushed me forward on this path to understand how we connect what we teach every MBA student at Haas with what the students face when they get into the real world. And not just students, because the students are coming in at manager and director levels, and many people have earned those manager and director roles by growing organically through an organization. So, whichever path you've taken to get where you are today, we're here to speak directly to you. And for those of you in the C-suite, or who are advising the C-suite, we can certainly tell you a bit about how to take what is a very abstract concept and make it literal.

So, seeding innovation. This is sort of the very basic chart that is the foundation of most of modern innovation thinking, this is by Clayton Christensen. And if we go back to the GE example that I shared, GE was on that red line, that trajectory that’s progressing through the middle of the screen. However, as they got to a certain size, there were these opportunities – in their case, under $100 million – where they were leaving crumbs. And more than crumbs, they were leaving sizable meals for startups to begin to engage.

So although they're not a client, I use Cisco as a business case. Cisco isn't making million-dollar bets internally. Nor is Google, or name the tech company. They allow their engineers to go out into the world and start these startups, which de-risks the innovation process for the corporation, and they're quite happy to acquire these companies, often for what the market says is a premium. But in reality, it fits the risk-return model of these corporate organizations. So when we're talking about innovation, we're framing it in two phases. We're framing it on that main trajectory of the incumbent, and then we're also going to talk about the trajectory that exists for the startup or the entrepreneur. Or, even more importantly, the manager or director who is the intrapreneur within an organization trying to launch a new endeavor.

So once we've decided that we want to be innovative, how do we frame that into the type of innovation, or the tactics we want to use to pursue it? This is a very detailed chart but, very briefly, what I take from it is that design thinking is best suited for situations where the problem is well-defined and the domain is well-defined. So in the upper right-hand corner you see design thinking, where we're looking at sustaining innovation. Coming from the university, I see many professors who have these great ideas that are locked in a laboratory – that's the lower left-hand corner. Once they define the problem, they seek grant funding or some additional seed funding to create a kind of skunkworks, or break off from the organization as a maverick to move the idea forward. Once it has traction, it moves to the lower right and venture capitalists get involved. And then, once that company or venture has launched, it's ideas like internal R&D and design thinking that help move us forward. So I just wanted to say that to help frame our conversation.

And, as we like to say, "Today's science is tomorrow's technology. Every once in a while, a new technology and an old problem come together with a big idea and turn into an innovation." In the business school we say, "The business models needed to be successful already exist. What are we looking for? We're looking for new applications. And not just new applications – we're looking for human-centered applications." And this is the big difference between machine learning and artificial intelligence. So if I can just take a moment here to pause and share how I feel those are different – and as we go through, we'll talk more about it – but very briefly: oftentimes, when artificial intelligence comes down from above, I discount it as a buzzword. Because by my definition – the Turing test of "Can you have a conversation with the computer as if it were a human?" – that doesn't exist today at any type of scale. So I hesitate to use AI as the benchmark. What we often have are big data efforts that are highly successful and, from those big data efforts, we begin to layer on neural networks and machine learning. However, we haven't gotten to that level of artificial intelligence yet, which is exactly why this topic is so important: before we turn control over to the machines, we need to ensure that the machines are designed for those who grant them power – the humans. It's a key, key difference.

So let’s think of design thinking. I’m sure you’re at your desk, this exercise won’t, by any means, embarrass you in front of your colleagues. But simply read the paragraph on the screen and close your eyes and imagine what you see. For those of you with your eyes closed, I’ll read. “It was one of the last warm days of summer. He removed his shirt, put it around his neck and stood at the water’s edge squinting at a sailboat on the horizon. ‘I’ve been so many incredible places,’ he said. ‘But none of them are any more beautiful than this.’” That grounding and that presence is where we need to begin our design thinking journey. But I challenge you. Many of you have pictured something similar to this: a man standing at a beach with rolling ocean waves in front of him. But if that’s our world, what are we missing? That’s why in design thinking, our first step is to make sure that we’re asking the right question, because the question may look something like this. We need the broader perspective to understand the entirety of the opportunity before we converge on a solution. So, again, the second big takeaway today is that design thinking is merely the process of diverging to ensure that we’re asking the right question before converging to seek answers. And we’ll make that more complex as we go, but for those of you who are new to the process, it’s a very simple definition: first diverging to understand the question before converging on the answer.

Design thinking is defined by three elements. First, desirability. Let's think of an innovation – perhaps the smartphone, because it's known to all. Desirability: do people desire to have their entire life and the entire knowledge of the world at their fingertips? At the time, some felt it was an intrusion, but as we have come to understand these machines, we understand that it is quite desirable. In fact, almost addictive, but that's another story. Second, is it feasible? As we know, Apple had made many forays into the handheld computer market, as had other competitors such as 3Com and Blackberry. Why hadn't they been successful? Partly because the technology wasn't there to make the device fully capable and differentiated from the existing telephone and existing computers. And finally, is it viable? Apple had the scale to be able to say, "While the first iPhone may not be profitable, our vision is for an ecosystem." And when we have the ecosystem where the iPhone is a component, we get to that level of viability. So: desirability, feasibility and viability.

Let's take that same framework and apply it to artificial intelligence. In a business setting, is artificial intelligence desirable? I think we can debate that. It's certainly desirable for very targeted outcomes, but if we're only designing for the technology, we will only get those targeted outcomes and we will lose that broader perspective, which is the foundation of design thinking. Is it feasible? Yes, artificial intelligence, or perhaps the lighter version, machine learning, is quite feasible. And is it viable? Absolutely. So I would argue that, by the definition of design thinking, as we look at advanced machine learning and artificial intelligence, my instinct asks: is it really something we want? And if it is, let's be sure that we're designing with the user in mind. I think that's the very, very key point. And as I said at the start, that's the difference. Executives don't give a crap. They care about efficiency, they care about cutting costs, so to them it's highly desirable. But to the line worker, or the call center representative, or the fast food employee, artificial intelligence is an abject threat. How do we balance something that's threatening to 99% of a company's workforce, when it's desirable to only the 1% of the workforce focused on profitability? That's the tension that we hope a human-centered approach can begin to address.

At this point, I just want to pause very quickly to see if any questions have come in, if there are any comments, or if any of the poll results speak to what we just shared.

Rahim Rahemtulla:
Jeff, we’re okay with the questions, there’s nothing coming in at the moment, so I think we’ll get to carry on. We have a poll, we have a poll. Are you seeing that too, Jeff?

Jeff Eyet:
I do. Have you noticed my squint?

[Laughter]

Rahim Rahemtulla:
Very good. So let me read this out, because I think everyone can read it, but, like you said, it is a little small on the screen. The question we have is, "In your company, is there a large divide between leaders' expectations and managers' realistic expectations for the use of AI?" And Jeff, I think this is very much what you were just speaking to, about the 99% and the 1%. So ladies and gentlemen, we do encourage you, please do take part. We, as the panelists, can't vote, unfortunately – Jeff, you and I can't have a say here – so we're relying on you, our audience, to tell us. Do you strongly agree that there is a large divide? Do you agree? Are you neutral? Do you disagree? Or do you strongly disagree?

So that’s our first poll question. And then a second poll question that we have, which relates a little more widely. In fact, we have three questions for you today, so do have a look at all three of those and give us your thoughts. I’ll just run you through them quickly, the questions here. Number two: “What obstacles to innovation does your company face?” So have a think on that one. And third and finally: “What is the role of artificial intelligence in your company?” So do cast your votes, we’re going to have that open for a little while there while Jeff carries on with his presentation. We’d love to hear your answers, because it’s really, really interesting for us to understand, for you guys out there in the world, how these technologies and how this sort of thinking is being applied. And then we’ll come back to it in the discussion.

Jeff Eyet:
Thank you. And it's vitally important, because these are the kinds of questions we're asking ourselves at the university level as we begin to understand this connection between human-centered design and artificial intelligence. So, if nothing else, you're getting a sneak peek into some of the more advanced university-level thinking.

This is our design thinking process and I just want to highlight it very briefly. We'll walk through it and share how it applies in the current setting and then how it applies as we go forward. In the first phase, we think along a vertical axis from concrete to abstract, and we start at the bottom; if this were a clock, we would be starting at 6 am. What is the concrete opportunity before us? Again, I'm applying the artificial intelligence example, and we'll apply it as we saw it with one of our clients.

One of our clients was a large municipal waste company. In English, that's a garbage company. They started with a concrete challenge: they needed to increase the percentage of the waste stream that is either recycled or composted. Very real, very grounded in the present. So from our process – and I think this is a rhetorical question – as consultants, what was our first instinct? Was it to solve that challenge, or was it to reach out into the world and understand more than what the company was telling us, to understand the users' experience? It was clearly to understand the users' experience. How did we do this? Very simply, we went to an underprivileged community and we talked to them about their recycling habits. Very frankly, they were in overcrowded situations or they didn't have enough money to afford larger garbage cans, so when their existing garbage can was full, what did they do? They simply threw garbage into the recycling. That's a problem. Translate that to the financial district of a major city in the company's area and we found that people were throwing latte cups into the recycling. Why? Not because the cups are recyclable – they aren't – but because these were aspirational consumers. They said, "I paid $6 for this latte, I drive a Prius or a Tesla, of course this is recyclable." But in reality, from the company's perspective, it was the exact same level of contamination, whether it was coming from a low-income community or an affluent business district. So that was the first phase: understanding and observing.

Those observations were coupled with observations from the company, which included the fact that they couldn't control consumption and disposal habits until the garbage can actually reached the curb. So think of your house. You live in an apartment building or you live in a home, and you wheel that garbage can out to the curb once a week. Or you're in a high-rise building and you simply throw the garbage down the chute. The garbage company believed it couldn't control what you do until it's aggregated at that community level. That was our a-ha moment. Our research in the field showed that you could in fact change this but, on the other hand, the company had the orthodoxy that they couldn't. That's an insight. It's the point where – it sounds very cheesy, but it's true – one plus one does in fact equal three.

At this point, we take that insight and we frame it as a “how might we” question. A “how might we” question is meant to be aspirational. Not “can we?” Not “should we?” Not “will we?” It’s “how might we?” Because that employs what I like to call “Yes, and…” thinking. My least favorite people at a meeting, and I’m sure you’ve met these folks, are the “Yeah, but”s. How many times have you been in a meeting or talking to your boss, come in with an innovative idea, only to be told, “Yeah, but…”? We like to use a “Yes, and…” process where we build upon the ideas of others. That’s a key, key point.

So I pause here because I want to overlay our topic of artificial intelligence and machine learning. In what we did, does there exist today the ability for computers to go out and run those experiments that we ran with those users? Clearly not. What might a computer do? Well, the first challenge that a computer faces is how to take all of this data and clean it up into a format that can even be analyzed. That's the key point. As we know, the biggest obstacle to any machine learning or artificial intelligence application is the cleanliness of the training data. And then, just as important, the cleanliness of the data coming in that we're running through that algorithm or through that network. So with that said, computers are already at a disadvantage, if only because human beings create that data.
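
To make the data-cleanliness point concrete, here is a minimal sketch of what that clean-up step could look like in pandas, run on a hypothetical route-level disposal dataset; the file name, column names, value ranges, and stream labels are illustrative assumptions, not the company's actual schema.

```python
# Minimal sketch: cleaning a hypothetical route-level disposal dataset before
# any model sees it. Column names and thresholds are illustrative assumptions.
import pandas as pd

def clean_disposal_data(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Normalize inconsistent column names coming from different sensor platforms.
    df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")

    # Parse dates and coerce weights to numbers; unparseable entries become NaT/NaN.
    df["pickup_date"] = pd.to_datetime(df["pickup_date"], errors="coerce")
    df["weight_kg"] = pd.to_numeric(df["weight_kg"], errors="coerce")

    # Drop exact duplicates, e.g. the same pickup reported by two platforms.
    df = df.drop_duplicates()

    # Remove rows that can never be valid training examples.
    df = df.dropna(subset=["pickup_date", "weight_kg", "stream"])
    df = df[df["weight_kg"].between(0, 2000)]            # a full cart weighs far less than 2 t
    df["stream"] = df["stream"].str.strip().str.lower()  # "Recycle " and "recycle" become one label
    df = df[df["stream"].isin(["garbage", "recycle", "compost"])]

    return df.reset_index(drop=True)

# Usage (hypothetical file): cleaned = clean_disposal_data("pickups.csv")
```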

But let's assume that we've cleaned that data and it's now able to be presented. On one hand, we have a human's observations of the activities of other humans and, on the other hand, we may have data at a community level about rates of disposal, rates of recycling, rates of composting, trends over time. But without that human interaction, we can't understand what the user's needs are. That's the value of divergent thinking, because when we come to extract the insights, algorithms are fantastic at taking the past and extrapolating it into the future in a linear fashion. Algorithms are meant to exclude the extreme cases. Say I'm an accountant: I do a great job at looking at the past three to five years of financials and projecting, or budgeting, as we say, what the next year will be. How does that differ for a finance person? Someone in finance is going to look at the past and say, "That's great, but looking forward, I see these qualitative opportunities that may impact our financial performance." That's the difference between an accountant and someone in the finance profession. Similarly, algorithms and machine learning are great at extrapolating from the past, but it takes a human being to understand the qualitative factors that will influence the future. In short, when we're looking at insights, an algorithm will do a fantastic job at one plus one is two. A human being is required to take one plus one and make it three.
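
As a toy illustration of that accountant-style extrapolation, the snippet below fits a straight line to a few invented yearly recycling rates and projects the next year; the numbers are made up, and the qualitative adjustment Jeff describes is exactly what this kind of model cannot supply.

```python
# Toy illustration: the "one plus one is two" projection an algorithm gives you.
# The recycling rates below are invented for illustration only.
import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019])
recycle_rate = np.array([0.42, 0.44, 0.45, 0.47, 0.48])  # fraction of the stream diverted

# Ordinary least-squares line through the historical points.
slope, intercept = np.polyfit(years, recycle_rate, deg=1)
projection_2020 = slope * 2020 + intercept
print(f"Trend projection for 2020: {projection_2020:.3f}")

# The projection only restates the past. Whether a new compost program or a
# change in export markets bends the curve is the qualitative judgment a
# human still has to layer on top.
```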

Now, can a machine come up with that insight cheaper? Can it come up with that insight faster? Absolutely. But it will be a very narrow application and will be solving very narrow problems. For example, we can train a car, an autonomous vehicle, to recognize a stop sign as an eight-sided figure with a red background and white letters. But how much more work will it require to get that machine to recognize the difference between that stop sign and an elderly woman in a red jacket with white hair standing next to it? It takes that human perspective, it takes our qualitative knowledge of past experiences to extrapolate not just from the data, but to look at those extreme cases and understand the difference.

So I pause here, because this is very, very important: the value of any innovation effort lies in making sure that we're asking the right question. And at this point, most public-company leadership has already lost, because their financial results are driven on, at best, a quarterly basis and, if that's the case, then using machine learning and artificial intelligence to improve quarterly results becomes the goal. But private institutions, private organizations with a time horizon longer than three months, can begin to benefit from this human-centered approach, bringing in these qualitative factors and thinking about what might exist 6 to 12 months from now.

Wrapping up here, we'll go through the second half of our design thinking process, where we begin to look at how we generate ideas. Generating ideas is, again, the product of the algorithm looking at past data and designing for what may come in the future. But I would argue that algorithms don't understand the entire scope of a user's needs, because they're only given the needs that the programmer thinks are vital. And this is a key point. We talk often about bias in machine learning and bias in algorithms, but I have yet to see a completely unbiased data set to train these algorithms. So we have to understand that, at the ideation phase of the process, machine learning will develop biased outcomes, if only because humans are giving it biased data for training. That's very important to understand. Garbage in, garbage out. It takes human beings to understand the entire breadth of the process to come up with truly human-centered and holistic solutions.

And then, finally, we move on to experimentation and prototyping. This is where artificial intelligence would come in, because the role of artificial intelligence is to build upon the insights extracted from big data and the ideas generated from machine learning, and then begin to create these human-like interfaces. The most famous examples of artificial intelligence in action: beating Kasparov in chess, winning at Go. But again, what can we extract from that that addresses a human-centered need? Or were the scientists merely trying to win at a targeted exercise?

So the future of AI, or the future, I should say, of machine learning, is artificial intelligence that helps us experiment and prototype. But my argument is that, with a biased data set developing a limited algorithm, we're going to get results or ideas that are very close to what we have today. Thus, artificial intelligence is going to take a long time to really come up with prototypes and experiments that are on par with what humans can create with just a little more time and a little more money. If you're on a three-month financial cycle, the outcomes of that machine learning algorithm may be valuable. On the other hand, if we put in a little more time, maybe six months, human beings can create a world of ideas that address the full spectrum of our customer base.

I’m happy to go into greater detail on any of these points. But I wanted to just pause here and allow everyone to check in. Do we have any questions? Or does anyone have a story about how they’ve used machine learning or what their bosses think is artificial intelligence? And were there any shortcomings from a human-centered perspective?

Rahim Rahemtulla:
Well, look, one question has come in and maybe I can read this one to you now. It's coming from Mario Thompson, so Mario, thank you very much for sending this one in. He asks, "What will happen to the people employed that are threatened by artificial intelligence after they get laid off? Will they lie back for the rest of their life?" It's interesting what you said there about how machine learning is really only good for narrow use cases; the message there, I feel, is that the assumption in this question won't really come to pass.

Jeff Eyet:
Right. That's a great question, Mario, and thanks for bringing that up, because this is where I believe a university setting is the right place to begin to explore these questions: it brings in a diversity of disciplines to help us come up with a holistic picture. In short, I like to say to folks that Microsoft Excel didn't put accountants out of business. Excel may simply have put a few bookkeepers out of business, or forced those bookkeepers to improve their existing skill set, to advance in the profession or to seek other applications where humans are still highly valuable. I mean, Microsoft Excel certainly couldn't do any extrapolation, it couldn't do any prediction. Microsoft Excel couldn't walk into a meeting and present a financial statement to a board of directors. And so that's a classic, classic use case.

Again, I think, if your job – not yours, Mario, but the individual's job – is viewed as a cost center, it's going to be easy for management to look at that person or that department and put it in the crosshairs, if you will. But in reality, once you start cutting into the muscle of an organization – and I believe most people are muscle – you're beginning the decline, in my opinion. And at least for this generation of workers, machine learning is clearly overblown in what it can do. Artificial intelligence doesn't exist in a form that will displace individual workers. And I think this is the breather, that pocket of air, for individual workers who are maybe 20, 25 years removed from their last formal education, their bachelor's degree. It's time to maybe double down and train up into what that next role might be if machine learning were to displace part or all of your current role.

Rahim Rahemtulla:
That's very interesting, Jeff. Thank you, yes. And so, do you feel that as individuals, as workers – we're not all executives, obviously – this does have implications for the way we should maybe be thinking about our careers?

Jeff Eyet:
You're absolutely right. I think the gig economy is overblown. There's certainly a role as individual contributors find their voices outside of organizations. That absolutely is a trend, but is it a complete disruption? I would argue it's not. And I view machine learning as the same type of trend. It can help us in our analysis, it can do it faster, it can do it cheaper but, again, executives are relying on one key assumption, and that is that the data we are entering into those algorithms a) is going to continue to be available as privacy becomes a bigger issue and b) is of a broad enough caliber that it can deliver the outcomes or the insights that we seek.

Rahim Rahemtulla:
Yeah, and Jeff, I’m really glad you said that, because Mario has sent a follow-up question and it concerns exactly that topic. So let’s make this our last one before we move on. And so he says, “Without that unbiased data, are we going to be stuck in a loop? We’re not going to actually move forward until we’re getting clean unbiased data?”

Jeff Eyet:
Yes.

Rahim Rahemtulla:
Yes.

Jeff Eyet:
Yes.

Rahim Rahemtulla:
How do we get out of that? Is there a way out of that?

Jeff Eyet:
Of course there's a way out. The problem is… Let's go back to this disposal company that I spoke about. They don't have a centralized database for all of their customers in their biggest markets. That's challenging. I mean, they are data dinosaurs, if you will. And how do we leverage that into a machine-learning application? We can't. They have trucks driving around with seven different sensor platforms. Do you think those seven different platforms are communicating data that's aggregated in a common, searchable database which can be analyzed? They aren't. And as long as organizations continue to view the IT department as a cost center, their ability to leverage IT for profit will be minimized. Now, if your organization begins to view IT as a profit center – which, if you're a data-focused company, it should be – then, yes, I think as an individual, that's when you need to have upskilled yourself to participate in that change.

Rahim Rahemtulla:
Fantastic. Well, thank you, Jeff. That's very interesting. We could go on, I'm sure.

Jeff Eyet:
I’ve thought a little bit about this.

Rahim Rahemtulla:
[Laughter] Yes, I can see that, I can see that. Very much so.

Jeff Eyet:
Just for the sake of time – I'm happy to talk about any of these things – let me show you an example of design thinking in action. We can dive deep into any of these other realms later, but I want to let everybody go with about 15 minutes to spare. So this is an example of how we apply the three-phase experimentation process. We've gone through the entire design thinking cycle, we have our insight, we have come up with some ideas, and the idea, if you will, is, "Okay, how do we come up with a smart garbage can?" That's a very broad question, but valid. So rather than immediately putting on our engineer hats and diving into what that solution might look like, we simply put a garbage can – a clean garbage can – in the company's headquarters. And, as you can see from the picture, there are 3×5 cards and a pen on top, and the sign simply says, "I am a smart cart, what do I do?"

Well, folks from the organization would walk by and, as it was socialized, they would write down anonymous suggestions and put them into the can. After two weeks, we took that stack of cards out and began to cluster them around common themes. "Why isn't it solar powered?" "Why doesn't the garbage can have motorized tracks to move the can to the curb, to make it easier on the user and the company?" But also, and I'll always remember this, we got a suggestion that said, "Why the hell would you make a smart can? These things are always catching on fire," which is really fascinating. And the person that insight, that observation, came from was actually the receptionist at the front desk of this company, someone people walk by every day. Why was she able to come up with an observation that no one else did? Because she handles the phone calls that come into the company with people complaining, "You missed my service," "My garbage can got knocked over," "My garbage can caught on fire, I need a new one." So she had a direct link to the customer, but how many times a day had executives walked by this person without seeking her input?

So when you start with this kind of market research, you start at a very basic level. It's the least expensive and you get the fastest feedback. And please, please, please, do not consider focus groups market research; they're absolutely biased. Please do not use surveys; they're biased, most people can't write good survey questions, and you're going to get a biased response. I mean, just think of our poll today. Is there anybody here who is manning a phone in a call center? Is there anyone who's a line cook at McDonald's? Jobs that are meant to be displaced by machine learning and artificial intelligence – are we gathering their opinions in this informal survey? We're clearly not. So again, that's why I say surveys are good for giving us some general direction and allowing us to ask "Yes, and…," but they're very bad at drawing firm conclusions. I had to get that in. That was my little soapbox.

All right. The next thing we do is move up to experiments. Building on the insight that we shared earlier – that we can control people's sorting decisions: is this garbage? is this recycling? is this compost? – we believed we could address it in the home, and so we developed this experiment. The key with experiments is that each has a $100 budget, no more, excluding the cost of our time. Why? Because it prevents scope creep, it forces us to focus on our hypothesis, and it forces us to get creative and really drive into the key thesis that we're trying to explore. Because we know that the outcome of this experiment is not final – it merely points to the next step.

So what do we have here? We have a picture of a Raspberry Pi. For those of you who aren't familiar, it's a small circuit board. We put it in a protective housing, attached a camera, and programmed it with Google's TensorFlow, a free machine learning library we used for image classification. Then we just started throwing bottles and cans at the camera. When the camera sensed motion, it would take a picture and, over time, we built up a training database of images of recyclable objects. What you can't see in this picture is that there's a green light and a red light, and we can tune the sensitivity of the camera; as we threw things into the can, it would say yes, green light, you threw it in the correct can, or no, red light, you threw it in the wrong can. Now, the algorithm was not always correct, but this is a case where we ran a control against the camera. The control was what we like to call a "Wizard of Oz" experiment, where we created what the user thought was an automated solution, but around the corner somebody had a button and would push red light or green light based on what they saw go into the can. So we had one experiment relying solely on the algorithm and another that actually had a human behind the curtain, and the difference between those two experiments deeply informed the next iteration of the process.
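
For readers who want to picture that prototype, here is a minimal sketch of what a motion-triggered classify-and-signal loop could look like, using OpenCV for the camera feed and a stock ImageNet MobileNetV2 from TensorFlow standing in for the model the team trained on its own images; the label set, motion threshold, and green/red decision are illustrative assumptions, not the actual prototype code.

```python
# Minimal sketch of a motion-triggered "smart cart" loop. A stock ImageNet
# MobileNetV2 stands in for the prototype's own TensorFlow model; the label
# list and thresholds are illustrative assumptions.
import cv2
import numpy as np
import tensorflow as tf

RECYCLABLE_LABELS = {"water_bottle", "pop_bottle", "beer_bottle"}  # assumed mapping
MOTION_THRESHOLD = 25.0  # mean pixel difference that counts as "something was thrown"

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify(frame_bgr: np.ndarray) -> str:
    """Return the top ImageNet label for one camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    img = cv2.resize(rgb, (224, 224))[np.newaxis].astype("float32")
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img)
    preds = model.predict(img, verbose=0)
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0][1]

cap = cv2.VideoCapture(0)  # camera attached to the Pi (or any webcam)
ok, previous = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    # Crude motion detection: mean absolute difference between consecutive frames.
    diff = cv2.absdiff(cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if diff.mean() > MOTION_THRESHOLD:
        label = classify(frame)
        light = "GREEN" if label in RECYCLABLE_LABELS else "RED"
        print(f"saw {label!r} -> {light} light")  # on the real cart this would drive an LED
    previous = frame
cap.release()
```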

What was the next iteration of that process? Well, as the company wanted to put more money into this pilot, it's very clear that they had a lot more confidence in doing so because they had what we call a "trail of breadcrumbs": small evidence from small experiments that gave them the confidence to move forward. In this case, we came up with a pilot. You have probably read that in a major city here in the Bay Area, they've installed cameras underneath the garbage cans. Now, these cameras merely sense whether the garbage cans are full and then dispatch workers to collect that garbage, to reduce trash on the street. But once that sensor is embedded and the company begins to adjust its processes – particularly the routes they drive and how many drivers they need – then they can begin to add features onto these existing cameras based on some of the work we've done in terms of object identification. For example, wouldn't the police department be interested if there's a large parade going through a major street and there's a camera that could detect whether any nefarious items were being placed in that garbage can? Highly, highly valuable.

So again, it's not about arriving at the solution in one fell swoop. We started with the human-centered insight that we can influence people's decisions, augmented it with technology and, once we had a defined outcome, applied machine learning to help us get there. But, at the end of the day, it was still a human-centered process. And now that these sensors are deployed, they are a gateway to an almost unlimited number of possible applications. Like I said, they could classify what's in the garbage can. Is it safe? Is it not safe? They can tell you when it's full. And they can also provide other data – perhaps capturing the amount of CO2 emitted from that garbage, helping the city get a sense of its greenhouse gas emissions, or helping it identify particularly polluted neighborhoods and add resources there. So think of these sensors as Trojan horses; there's a lot that can be done. And a lot of what can be done is technology-based, but it required that initial human-centered approach.

So that is just an example of this work in action and how design thinking with a human-centered approach overlayed the technology of machine learning and how a company utilized that to improve upon their products and services for a community.

And so I will wrap with our five truisms of design thinking. But before we do that, have any other questions come in? Or do we have any feedback on the poll results that we’d like to talk about?

Rahim Rahemtulla:
I do have some poll results for you, Jeff. So this is very exciting, let me tell you. And you can see this on your screen as well. Would you like me to read them out or have you got them there?

Jeff Eyet:
Give us the highlight. I see them as well, but I’m interested to see that the divide between leaders’ and managers’ expectations is actually neutral.

Rahim Rahemtulla:
Yes.

Jeff Eyet:
That strikes me.

Rahim Rahemtulla:
Not what you were expecting?

Jeff Eyet:
I was expecting it to be biased toward one extreme or the other, so that's my first reaction when I see neutral. But if I can just turn it over to the chat: if you voted neutral, do you mind sharing with us why?

Rahim Rahemtulla:
Could it be, Jeff, perhaps – just a hypothesis here that I’m throwing out utterly without evidence – that because AI, machine learning is still a relatively new technology, no one actually knows, neither managers nor executives necessarily feel comfortable having an opinion about it?

Jeff Eyet:
That would be my hypothesis as well. Is that it? That response tells me that the domain is still highly undefined.

Rahim Rahemtulla:
And, Jeff, if we look at our second question.

Jeff Eyet:
Yes.

Rahim Rahemtulla:
“What obstacles to innovation does your company face?” And we have a bit of a spread here.

Jeff Eyet:
Yeah.

Rahim Rahemtulla:
It's roughly two thirds to one third. We have four categories, and it's not quite 25% each, but we've got a couple at 33% here. So "Lack of resources" – 33%. "Organizational culture" – 33%. Then "No clear strategy" in third place at 20%. And finally, "A leadership skills gap," the smallest by far. So maybe the leadership there is up to speed with this, but materially it's perhaps not there yet. Is that consistent with what you see in your practice and with your team?

Jeff Eyet:
I'm looking at question number three first: "What is the role of artificial intelligence?" and the answer is "We don't use it but we'd like to." That reinforces the neutral response to the first question. Looking back at the second question, it's interesting that if we tell the story in the order 1-3-2, it reads: "We have a neutral understanding in our company because we'd like to use it, but we're not sure." Why? "Well, on one hand, it's a lack of resources and, on the other hand, it's the organizational culture." And that is a narrative. I may be cherry-picking my data points, but that's a narrative that's consistent with what we see. It's a "nice to have" at this point; it's not a "need to have."

So when Mario asks about the future of workers, I'm just going to say this, and I have to be frank: the institutional inertia of the status quo is what's going to slow the adoption of AI. And if you feel threatened by machine learning or AI perhaps disrupting your role, understand that institutional inertia is a level of job security. Unless adoption is mandated from the top with clear and defined goals, the bullets may fly over your foxhole, so to speak.

Rahim Rahemtulla:
Well, that’s sort of a comforting thought.

Jeff Eyet:
Yeah, sure.

Rahim Rahemtulla:
My generation, maybe we are in the firing line, perhaps a little bit more.

Jeff Eyet:
Yeah, absolutely. Cool. Were there any other any other questions on the chat?

Rahim Rahemtulla:
Well, on the chat, I was just going to say, yes, from Ritva Laine. So thank you very much, Ritva, for writing in to follow up on this discussion of the poll. To give some more context, I believe she's speaking to the second question, about obstacles to innovation, of which, of course, machine learning and artificial intelligence are one. Perhaps you can see this too now. She says, "A lack of competence, not just lack of resources." And perhaps that speaks to the fact that the people who are good at this, and who really know how to use these technologies in the corporate context, are at the moment perhaps not so plentiful.

Jeff Eyet:
Yes, and. So, I totally agree with everything you've said. I would add this concept: if I have this knowledge, is a large organization really where I want to embed myself? It's a younger generation that has this skill set, they're obviously more inclined to take risks, and the domain is still in its infancy, so those factors lead to a strong startup ecosystem. But even those who are successful in that ecosystem – do they really have enough of an audience among potential acquirers to truly value the technology for what it's worth? Salesforce has obviously done a tremendous job of bringing artificial intelligence in-house. Set aside IBM, Facebook, Google – those compute-focused companies – and think of a company that's software-focused or even customer-focused: in that space, Salesforce is the leader. Other than Salesforce, I really haven't seen it in that many places, which leads me to believe – from everything I see at school, where students are, what organizations students are joining as they leave, and where entrepreneurial efforts at the university level are headed – that it's still very much in its infancy. And it's still very much going into a rich startup ecosystem, but that ecosystem remains fragmented because there hasn't been the acquisition activity.

And I would just add, I know it’s outside of the scope of this call, but that’s very similar in the blockchain world as well. Blockchain has very targeted applications, but even if you went to McKinsey and said, “I want a blockchain, an end-to-end blockchain strategy,” McKinsey couldn’t deliver. If you went to McKinsey or the like and said, “I want an end-to-end artificial intelligence solution,” even if on a department level, that would be difficult to uncover.

Rahim Rahemtulla:
Jeff, there are so many interesting questions I would ask you just based on what you've said – I want to go into so much more detail – but we're only really scheduled to have about 10 minutes left. And so that's about two minutes per design thinking truism.

Jeff Eyet:
And I will make it much faster than that so we can let folks go. So if there aren't any other questions in the chat, feel free to chime in. Otherwise, I'll go through these very quickly and then we can send folks on their way. At least you'll have something to talk about at lunch today.

Rahim Rahemtulla:
[Laughter] Absolutely, if they get nothing else from this, they get that.

Jeff Eyet:
Nothing else. So the first principle of design thinking is to learn by doing. And doing is not writing Post-it notes. Doing is physically going out and talking to customers and non-customers, physically going out and talking to people who are outside of your domain or outside of your perspective. And doing isn't simply putting together a PowerPoint to share your ideas. Doing is building a prototype and taking that prototype not only to leadership, but to customers. The best early prototype is a story told at Starbucks. If you were pitching a movie, the first thing I would tell you to do is go to Starbucks and pitch your idea to 20 people, because I guarantee that after telling that story 20 times, you'll come back with a different movie. It's the exact same approach with design thinking as a process of innovation.

Second, curiosity is better than judgment. This is just "Yes, and…" versus "Yeah, but…" in fancier language. So I would challenge those of you who are on the call, as you go back into your day, or even when you're at home tonight. I have a five-year-old daughter and I practice this all the time: "Dad, I want this." "Yeah, but…" – and I need to keep my "Yes, and…" mentality fresh. Build upon what your colleagues say.

Third, make your teammates successful. Design thinking as a process of innovation is not an individual effort – it is a team sport. No single individual has the right idea. Every time this process has been successful, it's been a combination of two, three or four ideas, and those ideas come from two or three different people. So it's only by working efficiently and cohesively with your colleagues that you can be successful.

Fourth, simplicity lives on the far side of complexity. This is why most executives are uncomfortable with innovation. Think about it. They’ve worked 15, 20, 25 years to get into a leadership position. They know they’re only going to be in that leadership position 2, 3, 5 years before they retire. Are they willing to risk all of that time and effort that they’ve put in to make real lasting change by muddying the waters? It’s highly unlikely. It takes a very special person to have that strength. But design thinking requires that we first diverge, become more complex, make the waters muddier before we can converge on that solution. And find that simplicity that exists on the far side of complexity.

And finally, when all else fails, trust the process and do the work. Get out, talk to your customers, meet with your team, combine those observations into an insight, and reframe that insight as a "how might we?" question in a way that encourages people to come up with "Yes, and…" responses. Combine those "Yes, and…" responses into ideas. And once you have those ideas, prototype them to gain initial feedback before launching a series of tests: market research, experiments, and then pilots. With those tests, we gather the feedback and start the process all over – with one caution: design thinking is not an endless loop. It is a way to keep cycling until we have something that's strong enough to stand on its own. Like an atom, it flings an electron off into the world. Your ideas are the electrons being flung into the world. And it's only through this constant motion and the electrical charge of our own enthusiasm that our ideas take flight and can become their own organisms.

So again, whether you’re an entrepreneur or an intrapreneur or just someone who’s in a company that believes there’s a better way, we’d love to talk to you. We’ve worked with SVIC on many occasions, this is a great process, particularly when you come to Silicon Valley and SVIC has a week-long program set up for you. Our program is a great kickoff, but it’s also a great conclusion when you’ve received all this new information and then how do you begin to synthesize it. So, as you think about coming out to visit, I’d encourage you to include design thinking in your itinerary. And for those of you on the call, we look forward to seeing you the next time you’re here.

Rahim Rahemtulla:
Jeff, thank you so much. That is wonderful. I think that’s inspiring, insightful, so many interesting thoughts. One of them right here on our screen. Tell us about this.

Jeff Eyet:
So Alan Kay, the father of modern computing, said, “The best way to predict the future is to design it.” And so again, back to Mario’s original question, we can all sit back and worry about artificial intelligence and machine learning displacing us or we can take an assertive role and insert the human and their needs into these conversations so that when we do turn over tasks to the machines, they’re working with our best interest in mind.

Rahim Rahemtulla:
Jeff, we have a couple of couple of minutes left, I think, and we can just discuss a little.

Jeff Eyet:
Sure. Yeah, anything that comes in or questions you have, I’m happy to share.

Rahim Rahemtulla:
Absolutely. And we do have one and I’m going to ask you that too because it goes back to one of the steps in the design thinking process.

Jeff Eyet:
Sure.

Rahim Rahemtulla:
And just, before we go there, you ended there and you say, “Go back to the process. When all else fails, go back to the process.”

Jeff Eyet:
Yes, yes.

Rahim Rahemtulla:
The process is just like any other piece of work. It's a discipline, right? As much as it may sometimes seem like fun – you come up with these crazy, out-of-the-box things – it can be hard, and it might seem difficult to take it seriously, but it is a disciplined process, just like anything else. But how do you know, is there a point where you can say, "We're done here"?

Jeff Eyet:
[Laughter]

Rahim Rahemtulla:
“We’ve taken the feedback, we’ve prototyped, we’ve got it and refined it and refined it and it’s really running well now. We love it, it’s great, it’s taken on a life of its own. And now we move on and we say goodbye”? Well, how does that process work?

Jeff Eyet:
Great question, and I'll try to keep this answer concise. The mistake that large companies make about innovation is having too few ideas. Just for order of magnitude, a large company with 10 ideas is much less insightful or innovative than a small company with 100 ideas, because the company with 10 ideas is going to hold on to bad ideas just so that it looks innovative. AI, in a way, is something that's not going to be solved, but as long as you have a strategy around AI, you'll appear innovative. In contrast, what we want to do is take those 100 ideas and narrow them down to only a few. How do you know when they're ready to leave the nest, if you will? Trust me, if you have a good idea that stands out, people are going to be knocking on your door to become a part of your team. And you have to make a decision: "Am I going to remain focused on innovation and come up with the next idea? Or am I going to shift to an intrapreneur role and join the team of executives that's taking this idea from the drawing board to launch?"

Rahim Rahemtulla:
Fantastic. Jeff, thank you so much. That’s a wonderful thought, I think, to end on. And we’ve even finished on time. So all that’s left for me to say is just a big thank you to you, Jeff, for taking part today in our webinar.

Jeff Eyet:
Thanks, Rahim.

Rahim Rahemtulla:
I think it’s been all the things that a webinar should be, which is educational and insightful and inspirational as well. So many, many thanks for being with us.

Jeff Eyet:
Thank you. Yeah, and please follow us on Twitter or follow us on LinkedIn, you can find the links in SVIC’s posts. And, again, allow us to continue the conversation there. I’ll definitely be writing a blog about this, so I’d appreciate any love that the online community has to boost this up.

Rahim Rahemtulla:
Fantastic. And, exactly, we also would like to thank you, the community, for taking part, for being with us today. We really welcome your feedback and we're so glad that you could participate with your questions and with the poll, making this a lively and interesting discussion. As Jeff mentioned, he has taken part in the executive immersion programs that we run, and we hope that you will think about joining one and coming to Silicon Valley, because it really is all about education. It is about inspiration on one level, and then it's also about how you apply that. You've got to turn it into long-term value for your company, and that is exactly the sort of thing that Jeff and Clark can help you with when you learn for yourself how to apply all these methodologies and this way of thinking in your daily practice. So I think that's the next step, and we hope that you will take that journey. All that's left to say is a big thank you. Thank you for taking part, Jeff; again, a pleasure to have you with us. And we do hope that you'll all join us again for our next interviews and webinars. Check our website siliconvalley.center for the full program. We'll see you all again very soon. Have a good day and goodbye.

info@svicenter.com 1-650-274-0214